What is JavaScript Bitwise XOR (^) Operator?
If the two bits are different, the Bitwise XOR (^) operator returns 1; if they are the same, it returns 0. You can run the following code to see how the JavaScript Bitwise XOR operator works.

<!DOCTYPE html>
<html>
   <body>
      <script>
         document.write("Bitwise XOR Operator<br>");

         // 7 = 00000000000000000000000000000111
         // 1 = 00000000000000000000000000000001
         document.write(7 ^ 1);
      </script>
   </body>
</html>
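Worked out by hand, only the lowest bit differs between 7 (...0111) and 1 (...0001), so 7 ^ 1 = ...0110 = 6, and the script above writes 6 below the heading.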
[ { "code": null, "e": 1151, "s": 1062, "text": "If both the bits are different, then 1 is returned when Bitwise OR (|) operator is used." }, { "code": null, "e": 1248, "s": 1151, "text": "You can try to run the following code to learn how to work with JavaScript Bitwise XOR Operator." }, { "code": null, "e": 1515, "s": 1248, "text": "<!DOCTYPE html>\n<html>\n <body>\n <script>\n document.write(\"Bitwise XOR Operator<br>\");\n\n // 7 = 00000000000000000000000000000111\n // 1 = 00000000000000000000000000000001\n document.write(7 ^ 1);\n </script>\n </body>\n</html>" } ]
What is the difference between method overloading and method hiding in Java?
Method hiding − When a superclass and its subclass define static methods with the same name and parameter list, the subclass method hides the superclass method; which one runs is decided by the class (or reference type) used in the call. This is known as method hiding.

class Demo{
   public static void demoMethod() {
      System.out.println("method of super class");
   }
}
public class Sample extends Demo{
   public static void demoMethod() {
      System.out.println("method of sub class");
   }
   public static void main(String args[] ){
      Sample.demoMethod();
   }
}

method of sub class

Method overloading − When a class contains two methods with the same name but different parameter lists, the call is resolved to one of them based on the number and types of the arguments. This is known as method overloading.

public class Sample{
   public static void add(int a, int b){
      System.out.println(a+b);
   }
   public static void add(int a, int b, int c){
      System.out.println(a+b+c);
   }
   public static void main(String args[]){
      Sample obj = new Sample();
      obj.add(20, 40);
      obj.add(40, 50, 60);
   }
}

60
150
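The practical difference shows up when the call goes through a superclass reference. Below is a minimal sketch of my own (the class names are hypothetical, not from the examples above): with static methods the reference type decides which method runs (hiding), while making the same methods non-static turns hiding into overriding, and the object's actual class decides.

class Base {
   public static void whoAmI() { System.out.println("Base (static)"); }
   public void describe() { System.out.println("Base (instance)"); }
}
class Child extends Base {
   public static void whoAmI() { System.out.println("Child (static)"); }
   @Override
   public void describe() { System.out.println("Child (instance)"); }
}
public class HidingVsOverriding {
   public static void main(String[] args) {
      Base b = new Child();
      b.whoAmI();    // prints "Base (static)"    - hiding: resolved by the reference type
      b.describe();  // prints "Child (instance)" - overriding: resolved by the actual object
   }
}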
[ { "code": null, "e": 1294, "s": 1062, "text": "method hiding − When super class and the sub class contains same methods including parameters, and if they are static and, when called, the super class method is hidden by the method of the sub class this is known as method hiding." }, { "code": null, "e": 1305, "s": 1294, "text": " Live Demo" }, { "code": null, "e": 1615, "s": 1305, "text": "class Demo{\n public static void demoMethod() {\n System.out.println(\"method of super class\");\n }\n}\npublic class Sample extends Demo{\n public static void demoMethod() {\n System.out.println(\"method of sub class\");\n }\n public static void main(String args[] ){\n Sample.demoMethod();\n }\n}" }, { "code": null, "e": 1636, "s": 1615, "text": "method of sub class\n" }, { "code": null, "e": 1838, "s": 1636, "text": "method overloading − When a class contains two methods with same name and different parameters, when called, JVM executes this method based on the method parameters this is known as method overloading." }, { "code": null, "e": 1849, "s": 1838, "text": " Live Demo" }, { "code": null, "e": 2166, "s": 1849, "text": "public class Sample{\n public static void add(int a, int b){\n System.out.println(a+b);\n }\n public static void add(int a, int b, int c){\n System.out.println(a+b+c);\n }\n public static void main(String args[]){\n Sample obj = new Sample();\n obj.add(20, 40);\n obj.add(40, 50, 60);\n }\n}" }, { "code": null, "e": 2175, "s": 2166, "text": "60 \n150\n" } ]
Fetching rows added in last hour with MySQL?
You can use the DATE_SUB() and NOW() functions in MySQL to fetch the rows added in the last hour. The syntax is as follows −

select * from yourTableName
where yourDateTimeColumnName >= date_sub(now(),interval 1 hour);

The above query returns only the rows whose datetime value falls within the last hour. To understand the above concept, let us first create a table. The query to create a table is as follows −

mysql> create table LastHourRecords
   -> (
   -> Id int,
   -> Name varchar(100),
   -> Login datetime
   -> );
Query OK, 0 rows affected (0.67 sec)

Insert records in the form of datetime values using the insert command. The queries to insert records are as follows −

mysql> insert into LastHourRecords values(1,'John','2018-12-19 10:00:00');
Query OK, 1 row affected (0.17 sec)

mysql> insert into LastHourRecords values(2,'Carol','2018-12-19 10:10:00');
Query OK, 1 row affected (0.15 sec)

mysql> insert into LastHourRecords values(3,'Sam','2018-12-19 10:05:00');
Query OK, 1 row affected (0.13 sec)

mysql> insert into LastHourRecords values(4,'Mike','2018-12-18 12:10:00');
Query OK, 1 row affected (0.10 sec)

Display all records from the table using the select statement. The query is as follows −

mysql> select * from LastHourRecords;
+------+-------+---------------------+
| Id   | Name  | Login               |
+------+-------+---------------------+
| 1    | John  | 2018-12-19 10:00:00 |
| 2    | Carol | 2018-12-19 10:10:00 |
| 3    | Sam   | 2018-12-19 10:05:00 |
| 4    | Mike  | 2018-12-18 12:10:00 |
+------+-------+---------------------+
4 rows in set (0.00 sec)

Let us see the query to fetch rows added in the last hour. Note the >= comparison: we want Login values that are no older than one hour −

mysql> select * from LastHourRecords
   -> where Login >= date_sub(now(),interval 1 hour);
+------+-------+---------------------+
| Id   | Name  | Login               |
+------+-------+---------------------+
| 1    | John  | 2018-12-19 10:00:00 |
| 2    | Carol | 2018-12-19 10:10:00 |
| 3    | Sam   | 2018-12-19 10:05:00 |
+------+-------+---------------------+
3 rows in set (0.00 sec)
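Equivalently, the cutoff can be written with plain interval arithmetic, which some find easier to read (a minor stylistic alternative, not from the original article) −

select * from LastHourRecords where Login >= now() - interval 1 hour;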
[ { "code": null, "e": 1153, "s": 1062, "text": "You can use date-sub() and now() function from MySQL to fetch the rows added in last hour." }, { "code": null, "e": 1180, "s": 1153, "text": "The syntax is as follows −" }, { "code": null, "e": 1271, "s": 1180, "text": "select *from yourTableName\nwhere yourDateTimeColumnName <=date_sub(now(),interval 1 hour);" }, { "code": null, "e": 1427, "s": 1271, "text": "The above query gives the result added last hour. To understand the above concept, let us first create a table. The query to create a table is as follows −" }, { "code": null, "e": 1562, "s": 1427, "text": "mysql> create table LastHourRecords\n-> (\n-> Id int,\n-> Name varchar(100),\n-> Login datetime\n-> );\nQuery OK, 0 rows affected (0.67 sec)" }, { "code": null, "e": 1666, "s": 1562, "text": "Insert records in the form of datetime using insert command. The query to insert record is as follows −" }, { "code": null, "e": 2114, "s": 1666, "text": "mysql> insert into LastHourRecords values(1,'John',' 2018-12-19 10:00:00');\nQuery OK, 1 row affected (0.17 sec)\n\nmysql> insert into LastHourRecords values(2,'Carol','2018-12-19 10:10:00');\nQuery OK, 1 row affected (0.15 sec)\n\nmysql> insert into LastHourRecords values(3,'Sam','2018-12-19 10:05:00');\nQuery OK, 1 row affected (0.13 sec)\n\nmysql> insert into LastHourRecords values(4,'Mike','2018-12-18 12:10:00');\nQuery OK, 1 row affected (0.10 sec)" }, { "code": null, "e": 2199, "s": 2114, "text": "Display all records from the table using select statement. The query is as follows −" }, { "code": null, "e": 2236, "s": 2199, "text": "mysql> select *from LastHourRecords;" }, { "code": null, "e": 2573, "s": 2236, "text": "+------+-------+---------------------+\n| Id | Name | Login |\n+------+-------+---------------------+\n| 1 | John | 2018-12-19 10:00:00 |\n| 2 | Carol | 2018-12-19 10:10:00 |\n| 3 | Sam | 2018-12-19 10:05:00 |\n| 4 | Mike | 2018-12-18 12:10:00 |\n+------+-------+---------------------+\n4 rows in set (0.00 sec)" }, { "code": null, "e": 2633, "s": 2573, "text": "Let us see the query to fetch rows added in the last hour −" }, { "code": null, "e": 2719, "s": 2633, "text": "mysql> select *from LastHourRecords\n-> where Login <=Date_sub(now(),interval 1 hour);" }, { "code": null, "e": 3017, "s": 2719, "text": "+------+-------+---------------------+\n| Id | Name | Login |\n+------+-------+---------------------+\n| 1 | John | 2018-12-19 10:00:00 |\n| 2 | Carol | 2018-12-19 10:10:00 |\n| 3 | Sam | 2018-12-19 10:05:00 |\n+------+-------+---------------------+\n3 rows in set (0.00 sec)" } ]
Java program to print Fibonacci series of a given number.
Recursion is the process of repeating items in a self-similar way. In programming languages, if a program allows you to call a function inside the same function, then it is called a recursive call of the function.

Following is an example to find the Fibonacci series of a given number using a recursive function.

public class FibonacciSeriesUsingRecursion {
   public static long fibonacci(long number) {
      if ((number == 0) || (number == 1)) return number;
      else return fibonacci(number - 1) + fibonacci(number - 2);
   }
   public static void main(String[] args) {
      for (int counter = 0; counter <= 10; counter++){
         System.out.print(" "+fibonacci(counter));
      }
   }
}

0 1 1 2 3 5 8 13 21 34 55
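One caveat worth adding: the plain recursive version recomputes the same values over and over, so its running time grows exponentially with the input. A small memoized variant (a sketch of my own, not part of the original program) keeps the recursive structure while computing each value only once:

import java.util.HashMap;
import java.util.Map;

public class FibonacciMemoized {
   private static final Map<Long, Long> cache = new HashMap<>();

   public static long fibonacci(long number) {
      if (number == 0 || number == 1) return number;
      Long cached = cache.get(number);   // reuse a result we already computed
      if (cached != null) return cached;
      long result = fibonacci(number - 1) + fibonacci(number - 2);
      cache.put(number, result);
      return result;
   }

   public static void main(String[] args) {
      for (int counter = 0; counter <= 10; counter++) {
         System.out.print(" " + fibonacci(counter));
      }
   }
}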
[ { "code": null, "e": 1276, "s": 1062, "text": "Recursion is the process of repeating items in a self-similar way. In programming languages, if a program allows you to call a function inside the same function, then it is called a recursive call of the function." }, { "code": null, "e": 1370, "s": 1276, "text": "Following is an example to find Fibonacci series of a given number using a recursive function" }, { "code": null, "e": 1778, "s": 1370, "text": "public class FibonacciSeriesUsingRecursion {\n public static long fibonacci(long number) {\n if ((number == 0) || (number == 1)) return number;\n else return fibonacci(number - 1) + fibonacci(number - 2);\n }\n public static void main(String[] args) {\n for (int counter = 0; counter <= 10; counter++){\n System.out.print(\" \"+fibonacci(counter));\n }\n }\n }" }, { "code": null, "e": 1804, "s": 1778, "text": "0 1 1 2 3 5 8 13 21 34 55" } ]
How to select subsets of data In SQL Query Style in Pandas?
In this post, I will show you how to perform data analysis with SQL-style filtering in Pandas. Most corporate data is stored in databases that require SQL to retrieve and manipulate it. For instance, companies like Oracle, IBM, and Microsoft each have their own databases with their own SQL implementations.

Data scientists have to deal with SQL at some stage of their career, as the data is not always stored in CSV files. I personally prefer to use Oracle, as the majority of my company's data is stored in Oracle.

Scenario 1 − Suppose we are given a task to find all the movies from our movies dataset that satisfy the conditions below.

The language of the movies should be either English (en) or Spanish (es).
The popularity of the movies must be between 500 and 1000.
The movie's status must be Released.
The vote count must be greater than 5000.

For the above scenario, the SQL statement would look something like below.

SELECT title AS movie_title,
       original_language AS movie_language,
       popularity AS movie_popularity,
       status AS movie_status,
       vote_count AS movie_vote_count
FROM movies_data
WHERE original_language IN ('en', 'es')
  AND status = 'Released'
  AND popularity BETWEEN 500 AND 1000
  AND vote_count > 5000;

Now that you have seen the SQL for the requirement, let's do this step by step using pandas. I will show you two methods.

1. Load the movies_data dataset into a DataFrame.

import pandas as pd
movies = pd.read_csv("https://raw.githubusercontent.com/sasankac/TestDataSet/master/movies_data.csv")

2. Assign a variable to each condition.

languages = ["en", "es"]
condition_on_languages = movies.original_language.isin(languages)
condition_on_status = movies.status == "Released"
condition_on_popularity = movies.popularity.between(500, 1000)
condition_on_votecount = movies.vote_count > 5000

3. Combine all the conditions (boolean arrays) together.

final_conditions = (
    condition_on_languages
    & condition_on_status
    & condition_on_popularity
    & condition_on_votecount
)
columns = ["title", "original_language", "status", "popularity", "vote_count"]
# clubbing all together
movies.loc[final_conditions, columns]

The .query() method is a SQL WHERE-clause style way of filtering the data. The conditions can be passed as a string to this method; however, the column names must not contain any spaces. If you have spaces in your column names, replace them with underscores using the Python replace function.

From my experience, the query() method, when applied to a larger DataFrame, is faster than the previous method.

import pandas as pd
movies = pd.read_csv("https://raw.githubusercontent.com/sasankac/TestDataSet/master/movies_data.csv")

4. Build the query string and execute the method. Note that the .query method does not work with triple-quoted strings spanning multiple lines.

final_conditions = (
    "original_language in ['en','es'] "
    "and status == 'Released' "
    "and popularity > 500 "
    "and popularity < 1000 "
    "and vote_count > 5000"
)
final_result = movies.query(final_conditions)
final_result

There is more: often in my coding, I have multiple values to check in my "in" clause, so the above syntax is not ideal to work with. It is possible to reference Python variables using the at symbol (@). You can also programmatically create the values as a Python list and use them with (@).

movie_languages = ['en', 'es']
final_conditions = (
    "original_language in @movie_languages "
    "and status == 'Released' "
    "and popularity > 500 "
    "and popularity < 1000 "
    "and vote_count > 5000"
)
final_result = movies.query(final_conditions)
final_result
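A related tip, not covered in the original post: if renaming columns is not an option, recent pandas releases (0.25 and later) let .query() reference column names that contain spaces by wrapping them in backticks. A brief hedged sketch with a hypothetical column name:

# Hypothetical: a column literally named "vote count" (with a space); requires pandas >= 0.25
noisy = movies.rename(columns={"vote_count": "vote count"})
popular = noisy.query("`vote count` > 5000")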
[ { "code": null, "e": 1393, "s": 1062, "text": "In this post, I will show you how to perform Data Analysis with SQL style filtering with Pandas. Most of the corporate company’s data are stored in databases that require SQL to retrieve and manipulate it. For instance, there are companies like Oracle, IBM, Microsoft having their own databases with their own SQL implementations." }, { "code": null, "e": 1601, "s": 1393, "text": "Data scientists have to deal with SQL at some stage of their career as the data is not always stored in CSV files. I personally prefer to use Oracle, as the majority of my company’s data is stored in Oracle." }, { "code": null, "e": 1712, "s": 1601, "text": "Scenario – 1 Suppose we are given a task to find all the movies from our movies dataset with below conditions." }, { "code": null, "e": 1784, "s": 1712, "text": "The language of the movies should be either English(en) or Spanish(es)." }, { "code": null, "e": 1843, "s": 1784, "text": "The popularity of the movies must be between 500 and 1000." }, { "code": null, "e": 1880, "s": 1843, "text": "The movie’s status must be released." }, { "code": null, "e": 1998, "s": 1880, "text": "The vote count must be greater than 5000. For the above scenario, the SQL statement would look some thing like below." }, { "code": null, "e": 2288, "s": 1998, "text": "SELECT\nFROM WHERE\ntitle AS movie_title\n,original_language AS movie_language\n,popularityAS movie_popularity\n,statusAS movie_status\n,vote_count AS movie_vote_count movies_data\noriginal_languageIN ('en', 'es')\n\nAND status=('Released')\nAND popularitybetween 500 AND 1000\nAND vote_count > 5000;" }, { "code": null, "e": 2410, "s": 2288, "text": "Now that you have seen the SQL for the requirement, let’s do this step by step using pandas. I will show you two methods." }, { "code": null, "e": 2456, "s": 2410, "text": "1. Load the movies_data dataset to DataFrame." }, { "code": null, "e": 2578, "s": 2456, "text": "import pandas as pd movies = pd.read_csv(\"https://raw.githubusercontent.com/sasankac/TestDataSet/master/movies_data.csv\")" }, { "code": null, "e": 2616, "s": 2578, "text": "Assign a variable for each condition." }, { "code": null, "e": 2892, "s": 2616, "text": "languages = [ \"en\" , \"es\" ] condition_on_languages = movies . original_language . isin ( languages )\ncondition_on_status = movies . status == \"Released\"\ncondition_on_popularity = movies . popularity . between ( 500 , 1000 )\ncondition_on_votecount = movies . vote_count > 5000" }, { "code": null, "e": 2948, "s": 2892, "text": "3. Combine all the conditions(boolean arrays) together." }, { "code": null, "e": 3220, "s": 2948, "text": "final_conditions = ( condition_on_languages & condition_on_status & condition_on_popularity & condition_on_votecount )\ncolumns = [ \"title\" , \"original_language\" , \"status\" , \"popularity\" , \"vote_count\" ]\n# clubbing all together movies . loc [ final_conditions , columns ]" }, { "code": null, "e": 3407, "s": 3220, "text": "The .query() method is a SQL where clause style way of filtering the data. The conditions can be passed as a string to this method, however, the column names must not contain any spaces." }, { "code": null, "e": 3513, "s": 3407, "text": "If you have spaces in your column names, replace them with underscores using the python replace function." }, { "code": null, "e": 3630, "s": 3513, "text": "From my experience I have seen query() method when applied on a larger DataFrame is faster than the previous method." 
}, { "code": null, "e": 3757, "s": 3630, "text": "import pandas as pd movies = pd . read_csv ( \"https://raw.githubusercontent.com/sasankac/TestDataSet/master/movies_data.csv\" )" }, { "code": null, "e": 3806, "s": 3757, "text": "4.Build the query string and execute the method." }, { "code": null, "e": 3896, "s": 3806, "text": "Note the .query method does not work with triple quoted strings spanning multiple lines. " }, { "code": null, "e": 4118, "s": 3896, "text": "final_conditions = (\n\"original_language in ['en','es']\"\n\"and status == 'Released' \"\n\"and popularity > 500 \"\n\"and popularity < 1000\"\n\"and vote_count > 5000\"\n) final_result = movies . query ( final_conditions )\nfinal_result" }, { "code": null, "e": 4321, "s": 4118, "text": "There is more, often in my coding, I have multiple values to check in my “in” clause. So the above syntax is not ideal to work with. It is possible to reference Python variables using the at symbol (@)." }, { "code": null, "e": 4409, "s": 4321, "text": "You can also programmatically create the values as a python List and use them with (@)." }, { "code": null, "e": 4671, "s": 4409, "text": "movie_languages = [ 'en' , 'es' ]\nfinal_conditions = (\n\"original_language in @movie_languages \"\n\"and status == 'Released' \"\n\"and popularity > 500 \"\n\"and popularity < 1000\"\n\"and vote_count > 5000\" )\nfinal_result = movies . query ( final_conditions )\nfinal_result" } ]
Go - The Select Statement
The syntax for a select statement in Go programming language is as follows −

select {
   case communication clause  :
      statement(s);
   case communication clause  :
      statement(s);
   /* you can have any number of case statements */
   default : /* Optional */
      statement(s);
}

The following rules apply to a select statement −

You can have any number of case statements within a select. Each case is followed by the value to be compared to and a colon.

The type for a case must be a communication channel operation.

When the channel operation occurs, the statements following that case will execute. No break is needed in the case statement.

A select statement can have an optional default case, which must appear at the end of the select. The default case can be used for performing a task when none of the cases is ready. No break is needed in the default case.

package main

import "fmt"

func main() {
   var c1, c2, c3 chan int
   var i1, i2 int
   select {
      case i1 = <-c1:
         fmt.Printf("received %d from c1\n", i1)
      case c2 <- i2:
         fmt.Printf("sent %d to c2\n", i2)
      case i3, ok := (<-c3):  // same as: i3, ok := <-c3
         if ok {
            fmt.Printf("received %d from c3\n", i3)
         } else {
            fmt.Printf("c3 is closed\n")
         }
      default:
         fmt.Printf("no communication\n")
   }
}

When the above code is compiled and executed, it produces the following result (all three channels are nil, so no case is ready and the default runs) −

no communication
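To see a non-default case actually fire, here is a minimal variation of my own (not from the original tutorial): a buffered channel that already holds a value makes its receive case ready, so select picks it instead of the default.

package main

import "fmt"

func main() {
   ch := make(chan int, 1) // buffered, so this send does not block
   ch <- 42

   select {
   case v := <-ch:
      fmt.Printf("received %d from ch\n", v) // this case is ready, so it runs instead of default
   default:
      fmt.Println("no communication")
   }
}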
[ { "code": null, "e": 2014, "s": 1937, "text": "The syntax for a select statement in Go programming language is as follows −" }, { "code": null, "e": 2237, "s": 2014, "text": "select {\n case communication clause :\n statement(s); \n case communication clause :\n statement(s); \n /* you can have any number of case statements */\n default : /* Optional */\n statement(s);\n}\n" }, { "code": null, "e": 2287, "s": 2237, "text": "The following rules apply to a select statement −" }, { "code": null, "e": 2413, "s": 2287, "text": "You can have any number of case statements within a select. Each case is followed by the value to be compared to and a colon." }, { "code": null, "e": 2539, "s": 2413, "text": "You can have any number of case statements within a select. Each case is followed by the value to be compared to and a colon." }, { "code": null, "e": 2606, "s": 2539, "text": "The type for a case must be the a communication channel operation." }, { "code": null, "e": 2673, "s": 2606, "text": "The type for a case must be the a communication channel operation." }, { "code": null, "e": 2799, "s": 2673, "text": "When the channel operation occured the statements following that case will execute. No break is needed in the case statement." }, { "code": null, "e": 2925, "s": 2799, "text": "When the channel operation occured the statements following that case will execute. No break is needed in the case statement." }, { "code": null, "e": 3146, "s": 2925, "text": "A select statement can have an optional default case, which must appear at the end of the select. The default case can be used for performing a task when none of the cases is true. No break is needed in the default case." }, { "code": null, "e": 3367, "s": 3146, "text": "A select statement can have an optional default case, which must appear at the end of the select. The default case can be used for performing a task when none of the cases is true. No break is needed in the default case." 
}, { "code": null, "e": 3874, "s": 3367, "text": "package main\n\nimport \"fmt\"\n\nfunc main() {\n var c1, c2, c3 chan int\n var i1, i2 int\n select {\n case i1 = <-c1:\n fmt.Printf(\"received \", i1, \" from c1\\n\")\n case c2 <- i2:\n fmt.Printf(\"sent \", i2, \" to c2\\n\")\n case i3, ok := (<-c3): // same as: i3, ok := <-c3\n if ok {\n fmt.Printf(\"received \", i3, \" from c3\\n\")\n } else {\n fmt.Printf(\"c3 is closed\\n\")\n }\n default:\n fmt.Printf(\"no communication\\n\")\n } \n} " }, { "code": null, "e": 3955, "s": 3874, "text": "When the above code is compiled and executed, it produces the following result −" }, { "code": null, "e": 3973, "s": 3955, "text": "no communication\n" }, { "code": null, "e": 4008, "s": 3973, "text": "\n 64 Lectures \n 6.5 hours \n" }, { "code": null, "e": 4021, "s": 4008, "text": " Ridhi Arora" }, { "code": null, "e": 4056, "s": 4021, "text": "\n 20 Lectures \n 2.5 hours \n" }, { "code": null, "e": 4070, "s": 4056, "text": " Asif Hussain" }, { "code": null, "e": 4103, "s": 4070, "text": "\n 22 Lectures \n 4 hours \n" }, { "code": null, "e": 4122, "s": 4103, "text": " Dilip Padmanabhan" }, { "code": null, "e": 4155, "s": 4122, "text": "\n 48 Lectures \n 6 hours \n" }, { "code": null, "e": 4174, "s": 4155, "text": " Arnab Chakraborty" }, { "code": null, "e": 4206, "s": 4174, "text": "\n 7 Lectures \n 1 hours \n" }, { "code": null, "e": 4223, "s": 4206, "text": " Aditya Kulkarni" }, { "code": null, "e": 4256, "s": 4223, "text": "\n 44 Lectures \n 3 hours \n" }, { "code": null, "e": 4275, "s": 4256, "text": " Arnab Chakraborty" }, { "code": null, "e": 4282, "s": 4275, "text": " Print" }, { "code": null, "e": 4293, "s": 4282, "text": " Add Notes" } ]
Building a Face Recognizer in Python | by Behic Guven | Towards Data Science
In this post, I will show you how to build your own face recognizer using Python. Building a program that detects and recognizes faces is a very interesting and fun project to get started with computer vision. In previous posts, I showed how to recognize text and also how to detect faces in an image; these are great projects to practice Python in computer vision. Today, we will do something a little more advanced, and that is face recognition.

As can be understood from the name, we will write a program that will recognize faces in an image. When I say "program", you can understand this as teaching a machine what to do and how to do it. I like to use teaching instead of programming because that's actually what we will be doing. The best way of learning is teaching, so while teaching a machine how to detect faces, we are learning too. Before we start working on the project, I want to share the difference between face detection and face recognition. This is something good to know.

Face Detection vs Face Recognition
Getting Started
Libraries
Training the Images
Face Recognition
Testing the Recognizer

These two things might sound very similar, but actually, they are not the same. Let's understand the difference so that we don't miss the point.

Face detection is the process of detecting faces, whether from an image or a video; the program doesn't do anything more than finding the faces. Face recognition, on the other hand, finds the faces and can also tell which face belongs to whom. So it is more informational than just detecting them.

Writing code that recognizes faces needs some training data; we should train our machine so that it knows the faces and who they are. In this project, we are the ones teaching our program. In machine learning, there are two types of learning: supervised and unsupervised. I will not go into details; in this project we are going to use supervised learning. Here is a nice post about machine learning methods.

We will use two main modules for this project, and they are called Face Recognition and OpenCV. OpenCV is a highly optimized library with a focus on real-time applications.

OpenCV (Open Source Computer Vision Library) is an open-source computer vision and machine learning software library. OpenCV was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in the commercial products. Being a BSD-licensed product, OpenCV makes it easy for businesses to utilize and modify the code.

Source: https://opencv.org

We have to install some libraries so that our program works. Here is a list of the libraries we will install: cmake, face_recognition, numpy, opencv-python. Cmake is a prerequisite library so that the face recognition library installation doesn't give us an error.

We can install them in one line using the PIP package manager:

pip install cmake face_recognition numpy opencv-python

After the installation is completed, let's import them into our code editor. Some of these libraries are included in Python, which is why we can import them without installing them.

import face_recognition
import cv2
import numpy as np
import os
import glob

Great! Now we move to the next step, where we will import images and use them to train our program.

First things first, let's find our images. I've downloaded images of some famous people and added them to a new folder called "faces".
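Since the file name doubles as the label for each person, a small helper along these lines can turn a path into a clean display name. This is a hedged sketch of my own; the function name and the exact cleanup are assumptions, not part of the original code:

import os

def name_from_path(path):
    # e.g. "data/faces/bill-gates.jpg" -> "bill-gates"
    return os.path.splitext(os.path.basename(path))[0]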
To get the current directory, in other words, the location of your program, we can use an os method called "getcwd()".

faces_encodings = []
faces_names = []
cur_direc = os.getcwd()
path = os.path.join(cur_direc, 'data/faces/')
list_of_files = [f for f in glob.glob(path+'*.jpg')]
number_files = len(list_of_files)
names = list_of_files.copy()

Understanding the lines above:

All the images are in one folder named "faces".
Image file names have to be the name of the person in the image (such as bill-gates.jpg).
File names are listed and assigned to the "names" variable.
File types have to be the same. In this exercise, I used the "jpg" format.

Let's move to the next step.

for i in range(number_files):
    globals()['image_{}'.format(i)] = face_recognition.load_image_file(list_of_files[i])
    globals()['image_encoding_{}'.format(i)] = face_recognition.face_encodings(globals()['image_{}'.format(i)])[0]
    faces_encodings.append(globals()['image_encoding_{}'.format(i)])
    # Create array of known names
    names[i] = names[i].replace(cur_direc, "")
    faces_names.append(names[i])

To give you some idea, here is how my 'names' list looks.

Great! The images are trained. In the following step, we will use the device's webcam to see how our code performs.

We have long lines of code in this step. If you go through it, you can easily understand what is happening in each line. Let's define the variables that will be needed.

face_locations = []
face_encodings = []
face_names = []
process_this_frame = True

Here comes the face recognition code. (You may need to fix the indentation if you copy the following code; I recommend writing it from scratch while looking at the code, and trying to understand it as you go.)

video_capture = cv2.VideoCapture(0)
while True:
    ret, frame = video_capture.read()
    small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)
    rgb_small_frame = small_frame[:, :, ::-1]
    if process_this_frame:
        face_locations = face_recognition.face_locations(rgb_small_frame)
        face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations)
        face_names = []
        for face_encoding in face_encodings:
            matches = face_recognition.compare_faces(faces_encodings, face_encoding)
            name = "Unknown"
            face_distances = face_recognition.face_distance(faces_encodings, face_encoding)
            best_match_index = np.argmin(face_distances)
            if matches[best_match_index]:
                name = faces_names[best_match_index]
            face_names.append(name)
    process_this_frame = not process_this_frame
    # Display the results
    for (top, right, bottom, left), name in zip(face_locations, face_names):
        top *= 4
        right *= 4
        bottom *= 4
        left *= 4
        # Draw a rectangle around the face
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)
        # Input text label with a name below the face
        cv2.rectangle(frame, (left, bottom - 35), (right, bottom), (0, 0, 255), cv2.FILLED)
        font = cv2.FONT_HERSHEY_DUPLEX
        cv2.putText(frame, name, (left + 6, bottom - 6), font, 1.0, (255, 255, 255), 1)
    # Display the resulting image
    cv2.imshow('Video', frame)
    # Hit 'q' on the keyboard to quit!
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

In the first picture, I am using the same exact image that was used in the training data.

Now, I will try it with a different image of Taylor Swift. It works perfectly!

Congrats!! You have created a program that detects and also recognizes faces in an image. Now, you have an idea of how to use computer vision in a real project. Hoping that you enjoyed reading this step-by-step guide. I would be glad if you learned something new today.
Working on hands-on programming projects like this one is the best way to sharpen your coding skills. Feel free to contact me if you have any questions while implementing the code.

I am Behic Guven, and I love sharing stories on programming, education, and life. Subscribe to my content and to Towards Data Science to stay inspired. Thank you,
[ { "code": null, "e": 617, "s": 171, "text": "In this post, I will show you how to build your own face recognizer using Python. Building a program that detects and recognizes faces is a very interesting and fun project to get started with computer vision. In previous posts, I showed how to recognize text and also how to detect faces in an image, these are great projects to practice python in computer vision. Today, we will do something a little more advance and that is face recognition." }, { "code": null, "e": 1161, "s": 617, "text": "As can be understood from the name, we will write a program that will recognize faces in an image. When I say “program”, you can understand this as teaching a machine what to do and how to do it. I like to use teaching instead of programming because that’s actually what we will be doing. The best way of learning is teaching, so while teaching a machine how to detect faces, we are learning too. Before we start working on the project, I want to share the difference between face detection and face recognizer. This is something good to know." }, { "code": null, "e": 1196, "s": 1161, "text": "Face Detection vs Face Recognition" }, { "code": null, "e": 1212, "s": 1196, "text": "Getting Started" }, { "code": null, "e": 1222, "s": 1212, "text": "Libraries" }, { "code": null, "e": 1242, "s": 1222, "text": "Training the Images" }, { "code": null, "e": 1259, "s": 1242, "text": "Face Recognition" }, { "code": null, "e": 1282, "s": 1259, "text": "Testing the Recognizer" }, { "code": null, "e": 1426, "s": 1282, "text": "These two things might sound very similar but actually, they are not the same. Let’s understand the difference so that we don’t miss the point." }, { "code": null, "e": 1759, "s": 1426, "text": "Face Detection is the process of detecting faces, from an image or a video that doesn’t matter. The program doesn’t do anything more than finding the faces. But on the other hand, face recognition, the program that finds the faces and also it can tell which face belongs to who. So it is more informational than just detecting them." }, { "code": null, "e": 2171, "s": 1759, "text": "To write a code that recognizes faces needs some training data, we should train our machine so that it knows the faces and who are they. In this project, we are the ones teaching our program. In machine learning, there are two types of learning; supervised and unsupervised. I will not go into details, in this project we are going to use supervised learning. Here is a nice post about machine learning methods." }, { "code": null, "e": 2344, "s": 2171, "text": "We will use two main modules for this project, and they are called Face Recognition and OpenCV. OpenCV is a highly optimized library with a focus on real-time applications." }, { "code": null, "e": 2721, "s": 2344, "text": "OpenCV (Open Source Computer Vision Library) is an open-source computer vision and machine learning software library. OpenCV was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in the commercial products. Being a BSD-licensed product, OpenCV makes it easy for businesses to utilize and modify the code." }, { "code": null, "e": 2748, "s": 2721, "text": "Source: https://opencv.org" }, { "code": null, "e": 2771, "s": 2748, "text": "towardsdatascience.com" }, { "code": null, "e": 3033, "s": 2771, "text": "We have to install some libraries so that our program works. 
Here is a list of the libraries we will install: cmake, face_recognition, numpy, opencv-python. Cmake is a prerequisite library so that face recognition library installation doesn’t give us an errors." }, { "code": null, "e": 3092, "s": 3033, "text": "We can install them in one line using PIP library manager:" }, { "code": null, "e": 3147, "s": 3092, "text": "pip install cmake face_recognition numpy opencv-python" }, { "code": null, "e": 3326, "s": 3147, "text": "After the installation is completed, let’s import them into our code editor. Some of these libraries are included in Python that’s why we can import them without installing them." }, { "code": null, "e": 3398, "s": 3326, "text": "import face_recognitionimport cv2import numpy as npimport osimport glob" }, { "code": null, "e": 3498, "s": 3398, "text": "Great! Now we move to the next step, where we will import images and use them to train our program." }, { "code": null, "e": 3541, "s": 3498, "text": "First things first, let’s find our images." }, { "code": null, "e": 3757, "s": 3541, "text": "I’ve downloaded images of some famous people and added them to a new folder called “faces”. Also to get the current directory, in other words, the location of your program, we can use an os method called “getcwd()”." }, { "code": null, "e": 3975, "s": 3757, "text": "faces_encodings = []faces_names = []cur_direc = os.getcwd()path = os.path.join(cur_direc, 'data/faces/')list_of_files = [f for f in glob.glob(path+'*.jpg')]number_files = len(list_of_files)names = list_of_files.copy()" }, { "code": null, "e": 4006, "s": 3975, "text": "Understanding the lines above:" }, { "code": null, "e": 4054, "s": 4006, "text": "All the images are in one folder named “faces”." }, { "code": null, "e": 4146, "s": 4054, "text": "Image file names have to be the name of the person in the image. (Such as: bill-gates.jpg)." }, { "code": null, "e": 4202, "s": 4146, "text": "File names are listed and assigned to “names” variable." }, { "code": null, "e": 4273, "s": 4202, "text": "File types have to be the same. In this exercise, I used “jpg” format." }, { "code": null, "e": 4302, "s": 4273, "text": "Let’s move to the next step." }, { "code": null, "e": 4711, "s": 4302, "text": "for i in range(number_files): globals()['image_{}'.format(i)] = face_recognition.load_image_file(list_of_files[i]) globals()['image_encoding_{}'.format(i)] = face_recognition.face_encodings(globals()['image_{}'.format(i)])[0] faces_encodings.append(globals()['image_encoding_{}'.format(i)])# Create array of known names names[i] = names[i].replace(cur_direc, \"\") faces_names.append(names[i])" }, { "code": null, "e": 4774, "s": 4711, "text": "To give you some idea, here is how my ‘names’ list looks like." }, { "code": null, "e": 4890, "s": 4774, "text": "Great! The images are trained. In the following step, we will use the device’s webcam to see how our code performs." }, { "code": null, "e": 5058, "s": 4890, "text": "We have long lines of code in this step. If you go through it you can easily understand what is happening in each line. Let’s define the variables that will be needed." }, { "code": null, "e": 5137, "s": 5058, "text": "face_locations = []face_encodings = []face_names = []process_this_frame = True" }, { "code": null, "e": 5333, "s": 5137, "text": "Here comes the face recognition code. 
(You may need to reformat the spacing if you copy the following code, I recommend writing it from scratch by looking at the code, and also try to understand)" }, { "code": null, "e": 6895, "s": 5333, "text": "video_capture = cv2.VideoCapture(0)while True: ret, frame = video_capture.read() small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25) rgb_small_frame = small_frame[:, :, ::-1] if process_this_frame: face_locations = face_recognition.face_locations( rgb_small_frame) face_encodings = face_recognition.face_encodings( rgb_small_frame, face_locations) face_names = [] for face_encoding in face_encodings: matches = face_recognition.compare_faces (faces_encodings, face_encoding) name = \"Unknown\" face_distances = face_recognition.face_distance( faces_encodings, face_encoding) best_match_index = np.argmin(face_distances) if matches[best_match_index]: name = faces_names[best_match_index] face_names.append(name)process_this_frame = not process_this_frame# Display the results for (top, right, bottom, left), name in zip(face_locations, face_names): top *= 4 right *= 4 bottom *= 4 left *= 4# Draw a rectangle around the face cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)# Input text label with a name below the face cv2.rectangle(frame, (left, bottom - 35), (right, bottom), (0, 0, 255), cv2.FILLED) font = cv2.FONT_HERSHEY_DUPLEX cv2.putText(frame, name, (left + 6, bottom - 6), font, 1.0, (255, 255, 255), 1)# Display the resulting image cv2.imshow('Video', frame)# Hit 'q' on the keyboard to quit! if cv2.waitKey(1) & 0xFF == ord('q'): break" }, { "code": null, "e": 6985, "s": 6895, "text": "In the first picture, I am using the same exact image that was used in the training data." }, { "code": null, "e": 7064, "s": 6985, "text": "Now, I will try it with a different image of Taylor Swift. It works perfectly!" }, { "code": null, "e": 7436, "s": 7064, "text": "Congrats!! You have created a program that detects and also recognizes faces in an image. Now, you have an idea of how to use computer vision in a real project. Hoping that you enjoyed reading this step-by-step guide. I would be glad if you learned something new today. Working on hands-on programming projects like this one is the best way to sharpen your coding skills." }, { "code": null, "e": 7515, "s": 7436, "text": "Feel free to contact me if you have any questions while implementing the code." } ]
How to create PowerShell alias permanently?
A PowerShell alias can be created permanently by the two methods below.

To export all the aliases, you need to use the Export-Alias cmdlet. When you use this command, you supply the path of the file to export to.

To export the newly created alias, you give the alias a name and a name for the export file, so later you can import it with the same name.

In the below example, we have created the alias name Edit for WordPad, and we will export all the aliases to a file named Alias1. The newly created alias will be stored along with the rest, and when you want to bring your aliases back you run the Import-Alias command.

Now, we will export all the aliases.

Export-Alias -Path D:\Temp\Alias1

You can check the exported aliases and manipulate them as well, since the file has a plain, readable format.

Notepad D:\Temp\Alias1

Next, whenever you run a new PowerShell console, you won't find the new aliases, so you need to import the exported aliases.

Import-Alias -Path D:\Temp\Alias1

But when you run the above command, you will get an error that the built-in aliases already exist. There is a simple remediation for it: we can use the –Force parameter to forcefully overwrite those aliases.

Import-Alias -Path D:\Temp\Alias1 -Force

Now you see the newly created aliases as well in the PowerShell console.

Another option, easier than the Import/Export option, is to create a profile script. Every time PowerShell opens, it loads this startup profile, so all the commands and scripts inside that profile file are executed.

Here, we will create a profile file profile.ps1 using the below command on the $PROFILE path of your PowerShell.

notepad $((Split-Path $profile -Parent) + "\profile.ps1")

The above command will prompt the user to create profile.ps1 on the $PROFILE path if it doesn't exist, and if it already exists it will open the file so the user can edit it.

Once the file is opened, edit it to set your aliases. Here we will set two aliases in the file. Type the following two commands in Notepad and save it.

Set-Alias edit notepad.exe
Set-Alias edit1 "C:\Program Files\Windows NT\Accessories\wordpad.exe"

Launch the PowerShell console again; when you type edit, it will open Notepad, and when you type edit1, it will open WordPad.

With this simple method, you can add as many aliases in the profile script as you like and launch them through the PowerShell console.
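As a small convenience not covered above (a hedged sketch, using only standard cmdlets), you can also create and append to the profile entirely from the console; $PROFILE expands to the per-user profile path:

# Create the profile file if it does not exist yet
if (!(Test-Path -Path $PROFILE)) {
    New-Item -ItemType File -Path $PROFILE -Force
}
# Append an alias definition so it loads in every new session
Add-Content -Path $PROFILE -Value 'Set-Alias edit notepad.exe'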
[ { "code": null, "e": 1126, "s": 1062, "text": "PowerShell alias can be created permanently by 2 methods below." }, { "code": null, "e": 1265, "s": 1126, "text": "To export all the aliases, you need to use Export-Alias cmdlet. When you use this command it will ask you the path for the file to import." }, { "code": null, "e": 1404, "s": 1265, "text": "To export the newly created alias, you need to give alias name and the name for the export, so later you can import it with the same name." }, { "code": null, "e": 1673, "s": 1404, "text": "In the below example, we have created alias name Edit for the Wordpad and we will export all the aliases with the name Alias1, so the newly created alias will also be\nstored and when you want to import your newly created aliases you need to write Import-Alias command." }, { "code": null, "e": 1710, "s": 1673, "text": "Now, we will export all the aliases." }, { "code": null, "e": 1744, "s": 1710, "text": "Export-Alias -Path D:\\Temp\\Alias1" }, { "code": null, "e": 1831, "s": 1744, "text": "You can check the exported aliases and manipulate them as well with the proper format." }, { "code": null, "e": 1854, "s": 1831, "text": "Notepad D:\\Temp\\Alias1" }, { "code": null, "e": 1980, "s": 1854, "text": "Next, whenever you run the new PowerShell console, you won’t find the new aliases so you need to import the exported aliases." }, { "code": null, "e": 2014, "s": 1980, "text": "Import-Alias -Path D:\\Temp\\Alias1" }, { "code": null, "e": 2212, "s": 2014, "text": "But when you run the above command, you will get an error that inbuilt aliases are already existed but we have remediation for it, we can use –Force parameter to forcefully overwrite those aliases." }, { "code": null, "e": 2253, "s": 2212, "text": "Import-Alias -Path D:\\Temp\\Alias1 -Force" }, { "code": null, "e": 2326, "s": 2253, "text": "Now you see the newly created aliases as well in the PowerShell console." }, { "code": null, "e": 2552, "s": 2326, "text": "Another option but easier than Import/Export option is to create a profile script so every time PowerShell opens it loads a startup profile and all the commands and\nscripts are loaded which resides inside that profile folder." }, { "code": null, "e": 2661, "s": 2552, "text": "Here, we will create a profile file Profile.ps1 using below command on the $PROFILE path of your PowerShell." }, { "code": null, "e": 2719, "s": 2661, "text": "notepad $((Split-Path $profile -Parent) + \"\\profile.ps1\")" }, { "code": null, "e": 2914, "s": 2719, "text": "The above command will prompt to the user to create Profile1.ps1 on the $Profile path if it doesn’t exist and if it is already created then it will open the file to allow the user to\nmanipulate." }, { "code": null, "e": 3070, "s": 2914, "text": "Once a file is opened, edit the file to set your aliases. Here we will set two aliases in the file. Type the following two commands in notepad and save it." }, { "code": null, "e": 3167, "s": 3070, "text": "Set-Alias edit notepad.exe\nSet-Alias edit1 \"C:\\Program Files\\Windows NT\\Accessories\\wordpad.exe\"" }, { "code": null, "e": 3295, "s": 3167, "text": "Launch the PowerShell console again and when you type edit, it will open Notepad and when you type edit1, it will open Wordpad." }, { "code": null, "e": 3418, "s": 3295, "text": "With this simple method, you can add as many aliases in the profile script and launch them through the PowerShell console." } ]
Object initializer in JavaScript
An object initializer is an expression that allows us to initialize a newly created object. It is a comma-separated list of zero or more pairs of property names and associated values of an object, enclosed in a pair of curly braces {}.

Following is the code for an object initializer in JavaScript.

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Document</title>
<style>
   body {
      font-family: "Segoe UI", Tahoma, Geneva, Verdana, sans-serif;
   }
   .result {
      font-size: 20px;
      font-weight: 500;
      color: blueviolet;
   }
</style>
</head>
<body>
<h1>Object initializer in JavaScript</h1>
<div class="result"></div>
<br />
<button class="Btn">CLICK HERE</button>
<h3>Click on the above button to initialize an object with an object initializer and display it</h3>
<script>
   let resEle = document.querySelector(".result");
   let BtnEle = document.querySelector(".Btn");
   let name = "Rohan";
   let age = 22;
   let place = "Delhi";
   const person = { name, age, place };
   BtnEle.addEventListener("click", () => {
      resEle.innerHTML = "person.name = " + person.name + "<br>";
      resEle.innerHTML += "person.age = " + person.age + "<br>";
      resEle.innerHTML += "person.place = " + person.place + "<br>";
   });
</script>
</body>
</html>

On clicking the ‘CLICK HERE’ button, the page displays person.name = Rohan, person.age = 22 and person.place = Delhi.
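Note that { name, age, place } uses ES2015 shorthand property names; it is equivalent to writing out each property-value pair explicitly:

// Equivalent longhand form of the initializer used above
const person = { name: name, age: age, place: place };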
[ { "code": null, "e": 1296, "s": 1062, "text": "An object initializer is an expression that allow us to initialize a newly created object. It is a comma-separated list of zero or more pairs of property names and associated values of an\nobject enclosed in a pair of curly braces {}." }, { "code": null, "e": 1356, "s": 1296, "text": "Following is the code for object initializer in JavaScript." }, { "code": null, "e": 1367, "s": 1356, "text": " Live Demo" }, { "code": null, "e": 2437, "s": 1367, "text": "<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n<meta charset=\"UTF-8\" />\n<meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\" />\n<title>Document</title>\n<style>\n body {\n font-family: \"Segoe UI\", Tahoma, Geneva, Verdana, sans-serif;\n }\n .result {\n font-size: 20px;\n font-weight: 500;\n color: blueviolet;\n }\n</style>\n</head>\n<body>\n<h1>Object initializer in JavaScript</h1>\n<div class=\"result\"></div>\n<br />\n<button class=\"Btn\">CLICK HERE</button>\n<h3>Click on the above button to initialize an object with object initializer and display it</h3>\n<script>\n let resEle = document.querySelector(\".result\");\n let BtnEle = document.querySelector(\".Btn\");\n let name = \"Rohan\";\n let age = 22;\n let place = \"Delhi\";\n const person = { name, age, place };\n BtnEle.addEventListener(\"click\", () => {\n resEle.innerHTML = \"person.name = \" + person.name + \"<br>\";\n resEle.innerHTML += \"person.age = \" + person.age + \"<br>\";\n resEle.innerHTML += \"person.place = \" + person.place + \"<br>\";\n });\n</script>\n</body>\n</html>" }, { "code": null, "e": 2475, "s": 2437, "text": "On clicking the ‘CLICK HERE’ button −" } ]
How to work with array variable in MySQL?
MySQL does not support array variables. To get the same result, use the table DUAL. Following is the syntax −

SELECT yourValue1 AS ArrayValue FROM DUAL
UNION ALL
SELECT yourValue2 FROM DUAL
UNION ALL
SELECT yourValue3 FROM DUAL
UNION ALL
SELECT yourValue4 FROM DUAL
UNION ALL
.
.
.
SELECT yourValueN FROM DUAL;

Let us build such a list of values (no table needs to be created) −

mysql> SELECT 1 AS ArrayValue FROM DUAL
   UNION ALL
   SELECT 2 FROM DUAL
   UNION ALL
   SELECT 3 FROM DUAL
   UNION ALL
   SELECT 4 FROM DUAL
   UNION ALL
   SELECT 5 FROM DUAL
   UNION ALL
   SELECT 6 FROM DUAL
   UNION ALL
   SELECT 7 FROM DUAL
   UNION ALL
   SELECT 8 FROM DUAL
   UNION ALL
   SELECT 9 FROM DUAL
   UNION ALL
   SELECT 10 FROM DUAL;

This will produce the following output −

+------------+
| ArrayValue |
+------------+
| 1          |
| 2          |
| 3          |
| 4          |
| 5          |
| 6          |
| 7          |
| 8          |
| 9          |
| 10         |
+------------+
10 rows in set (0.00 sec)
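When the goal is simply to check membership against a list of values, a comma-separated string together with FIND_IN_SET() is another common workaround (a brief sketch, not from the original answer; FIND_IN_SET returns the 1-based position of the value in the list, or 0 if it is absent) −

SET @list = '10,20,30';
SELECT FIND_IN_SET(20, @list);   -- returns 2
SELECT FIND_IN_SET(25, @list);   -- returns 0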
[ { "code": null, "e": 1171, "s": 1062, "text": "MySQL does not support array variables. To get the same result, use the table DUAL. Following is the syntax:" }, { "code": null, "e": 1378, "s": 1171, "text": "SELECT yourValue1 AS ArrayValue FROM DUAL\nUNION ALL\nSELECT yourValue2 FROM DUAL\nUNION ALL\nSELECT yourValue3 FROM DUAL\nUNION ALL\nSELECT yourValue4 FROM DUAL\nUNION ALL\n.\n.\n.\n.\n.\n.\nSELECT yourValueN FROM DUAL;" }, { "code": null, "e": 1408, "s": 1378, "text": "Let us create a sample table:" }, { "code": null, "e": 1820, "s": 1408, "text": "mysql> SELECT 1 AS ArrayValue FROM DUAL\n UNION ALL\n SELECT 2 FROM DUAL\n UNION ALL\n SELECT 3 FROM DUAL\n UNION ALL\n SELECT 4 FROM DUAL\n UNION ALL\n SELECT 5 FROM DUAL\n UNION ALL\n SELECT 6 FROM DUAL\n UNION ALL\n SELECT 7 FROM DUAL\n UNION ALL\n SELECT 8 FROM DUAL\n UNION ALL\n SELECT 9 FROM DUAL\n UNION ALL\n SELECT 10 FROM DUAL ;" }, { "code": null, "e": 1859, "s": 1820, "text": "This will produce the following output" }, { "code": null, "e": 2095, "s": 1859, "text": "+------------+\n| ArrayValue |\n+------------+\n| 1 |\n| 2 |\n| 3 |\n| 4 |\n| 5 |\n| 6 |\n| 7 |\n| 8 |\n| 9 |\n| 10 |\n+------------+\n10 rows in set (0.00 sec)" } ]
Neural Networks for Real-Time Audio: Raspberry-Pi Guitar Pedal | by Keith Bloemer | Towards Data Science
This is the last of a five-part series on using neural networks for real-time audio. For the previous article on Stateful LSTMs, click here.

In this article we will go step-by-step to build a functional guitar pedal running neural nets in real-time on the Raspberry Pi.

We have now covered three different neural network models and implemented them in real-time guitar plugins with the JUCE framework. As a guitarist and engineer, the next logical step for me is to build a guitar pedal using what we have learned from the previous articles. Amazingly, all the software tools are available for free, and the hardware costs around $150 (less if you already have the required power and audio adapters handy).

The secret sauce for making this possible is Elk Audio OS. This is a Linux-based open source operating system made specifically for low-latency audio processing on embedded devices. It allows you to run existing VST3 plugins on devices such as the Raspberry Pi.

The software I wrote is called NeuralPi, and is open source on Github. I included a model of the Ibanez TS9 Tubescreamer pedal, as well as a Fender Blues Jr. amplifier. Here is a video demonstrating the completed project, using the NeuralPi VST3 plugin on the Raspberry Pi hardware.

This project consists of four main components:

1. Raspberry Pi 4b (mini computer running the plugin)
2. HiFiBerry ADC + DAC (analog-to-digital, digital-to-analog board that sits on top of the Raspberry Pi)
3. Elk Audio OS for Raspberry Pi 4 (low-latency operating system specifically for audio)
4. Real-time VST3 plugin (NeuralPi) cross-compiled for Elk Audio OS (the plugin running the neural net engine with guitar amp/pedal models)

Here is a list of everything I purchased for the project, all ordered from Amazon except for the HiFiBerry ADC + DAC, which I ordered from Chicago Electronics.
1. Raspberry Pi 4b (Amazon), $50 (I've seen the Rpi4 on sale for $35)
2. Rpi4 power adapter (USB-C connector, 5.1V, 3.5A), $10
3. Micro SD card (minimum 8GB, I bought 32GB), $12
4. Rpi4 + HiFiBerry compatible enclosure (you can spend as much or as little as you want here; I bought a customizable enclosure suitable for prototyping, but a different enclosure would be better for a finished product), $20
5. HDMI micro to standard HDMI adapter cable (the Rpi4 has 2 micro HDMI outputs), $8
6. HiFiBerry ADC + DAC (there is a "pro" version for $65, but I stuck with the standard version), $50
7. Dual 1/4" female to 1/8" male stereo audio adapter (for plugging the guitar into the HiFiBerry), $8
8. Stereo male RCA to 1/4" female audio adapter (I bought 1/8" to use with my headphones, but 1/4" would be the typical guitar pedal output), $8

Total parts cost: $166 + taxes and shipping

Additional items I already had:

Separate computer monitor with HDMI input (can use any screen with HDMI input), $50 and up
Wired USB keyboard, $10
Laptop running Ubuntu Linux, for building the VST3 plugin and communicating with the Rpi4 over WiFi (refurbished, $250); you can also use a virtual machine running Linux
1/8" male to 1/4" female adapter, $5 (for using an amp instead of headphones)

Note: The separate Linux computer is not needed unless you are cross-compiling the plugin yourself. You will, however, need a separate computer to connect and upload the VST3 to the Raspberry Pi (either through WiFi or Ethernet).

The HiFiBerry plugs into the top of the Raspberry Pi via the two rows of pin connectors. It came with spacers that secure the HiFiBerry card on top of the Rpi4. I chose an enclosure for a generic prototyping setup, but the HiFiBerry website offers a nice looking enclosure for a simple guitar pedal project with no additional footswitches or knobs. I'll probably end up building my own enclosure for a finalized product.

After the hardware is assembled you can connect all the necessary adapters. If you are familiar with using the Raspberry Pi 2, you'll want to note a few key differences with the Raspberry Pi 4:

The HDMI out is micro (instead of standard size)
The power cable is USB-C (instead of micro USB)
WiFi and Bluetooth are built into the Rpi4

The first step is to flash the micro SD card with the Elk Audio OS for Raspberry Pi 4. To do this, we will use a tool called balenaEtcher, but there are other options for imaging SD cards.

Image the microSD with Elk Audio OS

1. Download the Elk Audio OS image for Rpi4 from Github (.bz2).
Unzip the .wic file.Download and install balenaEtcher.Plug in your microSD card to computer (minimum 8GB) (USB adapter may be needed to connect the microSD to your computer)Run balenaEtcher and follow the prompts to image the microSD with the download .wic file.When imaging is complete, remove the microSD and plug into the microSD slot on the Raspberry Pi 4. Download the Elk Audio OS image for Rpi4 from Github (.bz2). Unzip the .wic file. Download and install balenaEtcher. Plug in your microSD card to computer (minimum 8GB) (USB adapter may be needed to connect the microSD to your computer) Run balenaEtcher and follow the prompts to image the microSD with the download .wic file. When imaging is complete, remove the microSD and plug into the microSD slot on the Raspberry Pi 4. Boot up the Rpi4 with Elk Audio OS and perform initial setup Connect the Rpi4 to an external monitor with HDMI, connect a USB keyboard, and connect power.When the login prompt shows up, type “root” and hit enter to login as root (by default there is no password for root, but you can set one up if desired).Configure the Elk Audio OS for use with HiFiBerry ADC + DAC. Type this into the command prompt:sudo elk_system_utils --set-audio-hat hifiberry-dac-plus-adc Connect the Rpi4 to an external monitor with HDMI, connect a USB keyboard, and connect power. When the login prompt shows up, type “root” and hit enter to login as root (by default there is no password for root, but you can set one up if desired). Configure the Elk Audio OS for use with HiFiBerry ADC + DAC. Type this into the command prompt:sudo elk_system_utils --set-audio-hat hifiberry-dac-plus-adc And reboot the Rpi4 for the change to take effect:sudo reboot 4. Configure for communicating over Wifi or Ethernet as detailed in the official Elk documentation. 5. You can now ssh (secure shell) to your Rpi4 from another computer over WiFi. From my Ubuntu Linux computer this is as simple as running the following from the terminal: ssh mind@<ip_address_here># And enter the default password for "mind" user: "elk"# Files can also be transferred to the Rpi4scp -r YourFile root@<rpi-ip-address>:/target_path/ Note: Once you connect the Rpi4 to WiFi, you can obtain the IP Address assigned to it by typing “ip a” in the Elk OS terminal. In the past three articles, we covered three neural net models and their real-time implementations. Each model has its own strengths and weaknesses. To select the most appropriate model for our pedal, a simple trade study was conducted. Each neural net model was given a point ranking from 1 to 3, with 3 being the best. Three categories were chosen: training speed, sound quality, and real-time performance. Note: Each category was given an equal weighting, but realistically for using the Raspberry Pi, the Real-Time Performance would be the most important factor, followed by Sound Quality. The Stateful LSTM model received the highest score for it’s real-time performance and sound quality. The Stateless LSTM has superior training speed, but for our application of creating a custom guitar pedal this is secondary to having a great sound. The WavNet has a great sound but its higher CPU usage may cause problems on the Raspberry Pi (although this is worth going back and testing). Also, WavNet’s difficulty in handling high gain guitar sounds makes it less appealing for use in a guitar pedal. If you are using NeuralPi.vst3 from Github, you can skip this section. 
These steps are for if you want to compile your own VST3 using JUCE, or modify an existing VST3 plugin to run on Elk OS. The Elk Audio OS runs “headless” plugins, which means there is no graphical interface. JUCE 6 has the headless build feature included, but you may need to further modify your plugin to meet the requirements detailed in the Build Plugins for Elk documentation. On a Linux computer or Linux virtual machine (I used Ubuntu 18.04): Download the Elk Audio SDK for Rpi4Run the downloaded SDK (.sh file) to install. I used the default location: /opt/elk/1.0Open your .jucer project using the Projucer application (JUCE 6, I used JUCE 6.08), and modify the project according to the Elk documentation (see “Inside Projucer:” section).Create a Linux Makefile build target if it doesn’t already exist and save the Projucer project to generate the makefile directory and makefile.Follow the steps as detailed in the Cross-Compiling JUCE Plugin documentation. Here are the exact steps I used: Download the Elk Audio SDK for Rpi4 Run the downloaded SDK (.sh file) to install. I used the default location: /opt/elk/1.0 Open your .jucer project using the Projucer application (JUCE 6, I used JUCE 6.08), and modify the project according to the Elk documentation (see “Inside Projucer:” section). Create a Linux Makefile build target if it doesn’t already exist and save the Projucer project to generate the makefile directory and makefile. Follow the steps as detailed in the Cross-Compiling JUCE Plugin documentation. Here are the exact steps I used: After entering the make command, the plugin will begin to compile. If all goes smoothly, you will now have a .vst3 plugin compatible with Elk OS / Raspberry Pi 4. Important: Ensure the architecture folder name in the compiled VST3 is “aarch64-linux”, for example, “PluginName.vst3/Contents/aarch64-linux”, otherwise the plugin will not run on Elk OS. Rename the folder to “aarch64-linux” if necessary. I immediately ran into problems when I first ran my plugin on the Raspberry Pi. It was running without any errors, but it made this horrible sound: With some help from the Elk Audio forums, it was determined that my plugin was running too slowly. If I looked at the process diagnostics, it was using 99% of the CPU’s resources, and probably overrunning that amount. I had to find a way to optimize my code, or it simply would not be usable. In a very serendipitous turn of events, the day after I came to the conclusion that my current plugin wouldn’t work on the Pi, the creator of RTNeural reached out to me, interested in implementing his code in one of my plugins. It turns out that he had done quite a bit of research on optimizing neural networks for audio processing, and created RTNeural as an inferencing engine for real-time audio. It turned out to be fairly straight-forward to swap out my LSTM inference code for RTNeural. I loaded up the newly compiled plugin with RTNeural on the Pi, crossed my fingers, and looked at the process diagnostics. At first I couldn’t believe what I was seeing: it had dropped from 99% to 16%! I plugged in my guitar and headphones and it sounded just like the plugin on the laptop. I added some delay and reverb, and made this sample recording directly from the Raspberry Pi: I won’t go into the details of why RTNeural was able to supercharge my plugin, but it has to do with utilizing SIMD (Single Instruction, Multiple Data) instructions. 
These are low-level operations that use process vectorization to run computations in parallel (similar to what graphics cards do). Modern processors have this capability for high performance computing on the CPU. Elk uses a plugin host called “Sushi”. Sushi uses configuration files that set up audio routing, plugin settings, and midi settings. You can view the example configuration file for NeuralPi on Github. In order to move your plugin to the Raspberry Pi, I recommend using “ssh” (secure shell) over WiFi. I used a dedicated Ubuntu Linux laptop, but there are several ways you can accomplish this. I’ll go over the steps I used here. Download the NeuralPi-elk.zip from Github and extract the .vst3 and configuration files. Download the NeuralPi-elk.zip from Github and extract the .vst3 and configuration files. Note: This article references the original release of NeuralPi. For the latest release with updated features, see the NeuralPi Releases page. 2. Move the .vst3 to the Rpi4. Connect to the Pi using one of the methods described here. To connect through ssh over Wifi, I used the following commands on a separate Linux computer connected to the same Wifi network as the Pi. # Secure copy the .vst3 to the Raspberry Piscp -r YourPlugin.vst3 root@<rpi-ip-address>:/home/mind/plugins/# After you copy the vst3 to the RPi, you can ssh to verify it # copied correctlyssh root@<rpi_ip_address> 3. Create a .json config file for running the plugin. If you are running NeuralPi.vst3, you can use the one included in the Github release. # Secure copy the .config to the Raspberry Piscp -r YourConfig.config root@<rpi-ip-address>:/home/mind/config_files/ 4. Connect the audio adapters, guitar input, and audio output device (speakers, headphones, amp). 5. Run the plugin using Sushi. # Login to the "mind" user and run the plugin with sushi# (the default password is "elk")ssh mind@<rpi_ip_address>sushi -r --multicore-processing=2 -c ~/config_files/config_neuralpi.json & This will run the headless plugin in the background. You can view the process usage of the real-time plugin by typing the following: watch -n 0.5 cat /proc/xenomai/sched/stat Note: Using the standard Linux process tools like “top” will only show you the non-real time tasks. To quit the plugin, type the following in the terminal:pkill sushi You can set up the Raspberry Pi to automatically run a configuration when it boots up. This is useful for operating the Pi like a typical audio effect device. At 16% process usage on the Pi, there’s plenty of room for adding other effects such as cab simulation (impulse response), reverb, delay, flange, or any number of less CPU intensive effects. The sky is the limit when it comes to the possibilities on the Raspberry Pi. One could create a phone app that controls the plugin over WiFi, add knobs and controls, or even dual boot another OS running a media center, web browser, or video game emulator. Elk Audio OS is designed for low latency audio, so you don’t need to worry about other processes interfering with the plugin performance as you would on a normal laptop. The Raspberry Pi 4 processor has four cores, with two running real-time operations, and two for non-real time operations. This ensures that the audio performance stays constant. Right now this Raspberry Pi guitar pedal is very bare-bones, but I plan on adding physical knobs and a nice looking enclosure to turn it into an actual guitar pedal. I’ll post a new article on the build process once that is completed. 
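For convenience, the individual deployment commands covered above can be collected into one small helper script run from the development machine. This is only a sketch that mirrors the interactive steps in this article; the IP address and file names are placeholders, and the default Elk users ("root" and "mind", password "elk") are assumed.

#!/bin/bash
# Deploy the NeuralPi plugin and a Sushi config to the Raspberry Pi, then start it headless.
RPI_IP=192.168.1.50   # placeholder - use the address reported by "ip a" on the Pi

# Copy the plugin and the Sushi configuration file to the Pi
scp -r NeuralPi.vst3 root@$RPI_IP:/home/mind/plugins/
scp config_neuralpi.json root@$RPI_IP:/home/mind/config_files/

# Start the plugin in the background, exactly as in the manual steps above
ssh mind@$RPI_IP "sushi -r --multicore-processing=2 -c ~/config_files/config_neuralpi.json &"

# Check the real-time CPU usage of the running plugin
ssh mind@$RPI_IP "cat /proc/xenomai/sched/stat"

# To stop the plugin later:
# ssh mind@$RPI_IP "pkill sushi"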
UPDATE 1: Since the time of writing this article I’ve added the ability to control model selection, EQ/Gain/Volume over WiFi from a Windows or Mac computer using the NeuralPi plugin/app. Multiple amp and pedal models are available here. (Release v1.1) See the NeuralPi GitHub page for more details: github.com UPDATE 2: A 3-D printed case is available for NeuralPi. The STL files are available here. Update 3: Version 1.2 of the NeuralPi software now includes the ability to load Impulse Response files, as well as simple Delay and Reverb effects. Impulse response is a commonly used effect to model guitar cabinets or the reverb characteristics of a particular space. When combined with the LSTM model it provides a more realistic representation of a guitar amplifier. github.com Update 4: Version 1.3 of NeuralPi adds the ability to load conditioned models (full range of a gain/drive knob) and swaps the default snapshot models for conditioned models of the TS-9, Blues Jr., and HT40 amplifier. github.com This concludes my five part series on using Neural Networks for real-time audio. I hope it sparked your curiosity on the possibilities of neural networks for music and maybe even convinced you to try it out for yourself. Thank you for reading! If you would like to continue reading about machine learning for guitar effects, check out my next article here:
[ { "code": null, "e": 312, "s": 172, "text": "This is the last of a five-part series on using neural networks for real-time audio.For the previous article on Stateful LSTMs, click here." }, { "code": null, "e": 441, "s": 312, "text": "In this article we will go step-by-step to build a functional guitar pedal running neural nets in real-time on the Raspberry Pi." }, { "code": null, "e": 878, "s": 441, "text": "We have now covered three different neural network models and implemented them in real-time guitar plugins with the JUCE framework. As a guitarist and engineer, the next logical step for me is to build a guitar pedal using what we have learned from the previous articles. Amazingly, all the software tools are available for free, and the hardware costs around $150 (less if you already have the required power and audio adapters handy)." }, { "code": null, "e": 1140, "s": 878, "text": "The secret sauce for making this possible is Elk Audio OS. This is a Linux based open source operating system made specifically for low-latency audio processing on embedded devices. It allows you to run existing VST3 plugins on devices such as the Raspberry Pi." }, { "code": null, "e": 1309, "s": 1140, "text": "The software I wrote is called NeuralPi, and is open source on Github. I included a model of the Ibanez TS9 Tubescreamer pedal, as well as a Fender Blues Jr. amplifier." }, { "code": null, "e": 1423, "s": 1309, "text": "Here is a video demonstrating the completed project, using the NeuralPi VST3 plugin on the Raspberry Pi hardware:" }, { "code": null, "e": 1470, "s": 1423, "text": "This project consists of four main components:" }, { "code": null, "e": 1843, "s": 1470, "text": "Raspberry Pi 4b (Mini computer running the plugin)HiFi Berry ADC + DAC (Analog to Digital, Digital to Analog board that sits on top of the Raspberry Pi.Elk Audio OS for Raspberry Pi4 (Low latency operating system specifically for audio)Real-time VST3 plugin (NeuralPi) cross-compiled for Elk Audio OS (the plugin running the neural net engine with guitar amp/pedal models)" }, { "code": null, "e": 1894, "s": 1843, "text": "Raspberry Pi 4b (Mini computer running the plugin)" }, { "code": null, "e": 1997, "s": 1894, "text": "HiFi Berry ADC + DAC (Analog to Digital, Digital to Analog board that sits on top of the Raspberry Pi." }, { "code": null, "e": 2082, "s": 1997, "text": "Elk Audio OS for Raspberry Pi4 (Low latency operating system specifically for audio)" }, { "code": null, "e": 2219, "s": 2082, "text": "Real-time VST3 plugin (NeuralPi) cross-compiled for Elk Audio OS (the plugin running the neural net engine with guitar amp/pedal models)" }, { "code": null, "e": 2379, "s": 2219, "text": "Here is a list of everything I purchased for the project, all ordered from Amazon except for the HiFiBerry ADC +DAC , which I ordered from Chicago Electronics." 
}, { "code": null, "e": 3183, "s": 2379, "text": "Raspberry Pi 4b (Amazon), $50 (I’ve seen the Rpi4 on sale for $35)Rpi4 Power Adapter (USB-C connector, 5.1V — 3.5A) $10Micro SD card (minimum 8GB, I bought 32GB), $12Rpi4 + HiFiBerry compatible enclosure, (you can spend as much or as little as you want here; I bought a customizable enclosure suitable for prototyping, but a different enclosure would be better for a finished product) $20HDMI micro to Standard HDMI Adapter Cable (Rpi4 has 2 micro HDMI outputs) $8HiFi Berry ADC + DAC, (There is a “pro” version for $65, but I stuck with the standard version) $50Dual 1/4\" Female to 1/8\" Male Stereo Audio Adapter (for plugging the guitar into HifiBerry), $8Stereo Male RCA to 1/4\" Female Audio Adapter (I bought 1/8\" for to use with my headphones, but 1/4\" would be the typical guitar pedal output), $8" }, { "code": null, "e": 3250, "s": 3183, "text": "Raspberry Pi 4b (Amazon), $50 (I’ve seen the Rpi4 on sale for $35)" }, { "code": null, "e": 3304, "s": 3250, "text": "Rpi4 Power Adapter (USB-C connector, 5.1V — 3.5A) $10" }, { "code": null, "e": 3352, "s": 3304, "text": "Micro SD card (minimum 8GB, I bought 32GB), $12" }, { "code": null, "e": 3575, "s": 3352, "text": "Rpi4 + HiFiBerry compatible enclosure, (you can spend as much or as little as you want here; I bought a customizable enclosure suitable for prototyping, but a different enclosure would be better for a finished product) $20" }, { "code": null, "e": 3652, "s": 3575, "text": "HDMI micro to Standard HDMI Adapter Cable (Rpi4 has 2 micro HDMI outputs) $8" }, { "code": null, "e": 3752, "s": 3652, "text": "HiFi Berry ADC + DAC, (There is a “pro” version for $65, but I stuck with the standard version) $50" }, { "code": null, "e": 3848, "s": 3752, "text": "Dual 1/4\" Female to 1/8\" Male Stereo Audio Adapter (for plugging the guitar into HifiBerry), $8" }, { "code": null, "e": 3994, "s": 3848, "text": "Stereo Male RCA to 1/4\" Female Audio Adapter (I bought 1/8\" for to use with my headphones, but 1/4\" would be the typical guitar pedal output), $8" }, { "code": null, "e": 4038, "s": 3994, "text": "Total Parts Cost: $166 + Taxes and Shipping" }, { "code": null, "e": 4070, "s": 4038, "text": "Additional items I already had:" }, { "code": null, "e": 4162, "s": 4070, "text": "Separate Computer Monitor with HDMI input (can use any screen with HDMI input) ($50 and up)" }, { "code": null, "e": 4187, "s": 4162, "text": "Wired USB Keyboard ($10)" }, { "code": null, "e": 4349, "s": 4187, "text": "Laptop running Ubuntu Linux (for building VST3 plugin and communicating with Rpi4 over WiFi (Refurbished, $250), You can also use a virtual machine running Linux" }, { "code": null, "e": 4426, "s": 4349, "text": "1/8\" male to 1/4\" female adapter $5 (for using an amp instead of headphones)" }, { "code": null, "e": 4655, "s": 4426, "text": "Note: The separate Linux Computer is not needed unless you are cross-compiling the plugin yourself. You will, however, need a separate computer to connect and upload the VST3 to the Raspberry Pi (either through WiFi or Ethernet)" }, { "code": null, "e": 5076, "s": 4655, "text": "The HifiBerry plugs into the top of the Raspberry Pi via the two rows of pin connectors. It came with spacers that secure the HifiBerry card on top of the Rpi4. I chose an enclosure for a generic prototyping setup, but the HiFiBerry website offers a nice looking enclosure for a simple guitar pedal project with no additional footswitches or knobs. 
I’ll probably end up building my own enclosure for a finalized product." }, { "code": null, "e": 5270, "s": 5076, "text": "After the hardware is assembled you can connect all the necessary adapters. If you are familiar with using the Raspberry Pi 2, you’ll want to note a few key differences with the Raspberry Pi 4:" }, { "code": null, "e": 5408, "s": 5270, "text": "The HDMI out is Micro (instead of standard size)The power cable is USB-C (instead of micro USB)Wifi and Bluetooth are built into the Rpi4" }, { "code": null, "e": 5457, "s": 5408, "text": "The HDMI out is Micro (instead of standard size)" }, { "code": null, "e": 5505, "s": 5457, "text": "The power cable is USB-C (instead of micro USB)" }, { "code": null, "e": 5548, "s": 5505, "text": "Wifi and Bluetooth are built into the Rpi4" }, { "code": null, "e": 5737, "s": 5548, "text": "The first step is to flash the micro SD card with the Elk Audio OS for Raspberry Pi 4. To do this, we will use a tool called balenaEtcher, but there are other options for imaging SD cards." }, { "code": null, "e": 5773, "s": 5737, "text": "Image the microSD with Elk Audio OS" }, { "code": null, "e": 6195, "s": 5773, "text": "Download the Elk Audio OS image for Rpi4 from Github (.bz2). Unzip the .wic file.Download and install balenaEtcher.Plug in your microSD card to computer (minimum 8GB) (USB adapter may be needed to connect the microSD to your computer)Run balenaEtcher and follow the prompts to image the microSD with the download .wic file.When imaging is complete, remove the microSD and plug into the microSD slot on the Raspberry Pi 4." }, { "code": null, "e": 6277, "s": 6195, "text": "Download the Elk Audio OS image for Rpi4 from Github (.bz2). Unzip the .wic file." }, { "code": null, "e": 6312, "s": 6277, "text": "Download and install balenaEtcher." }, { "code": null, "e": 6432, "s": 6312, "text": "Plug in your microSD card to computer (minimum 8GB) (USB adapter may be needed to connect the microSD to your computer)" }, { "code": null, "e": 6522, "s": 6432, "text": "Run balenaEtcher and follow the prompts to image the microSD with the download .wic file." }, { "code": null, "e": 6621, "s": 6522, "text": "When imaging is complete, remove the microSD and plug into the microSD slot on the Raspberry Pi 4." }, { "code": null, "e": 6682, "s": 6621, "text": "Boot up the Rpi4 with Elk Audio OS and perform initial setup" }, { "code": null, "e": 7084, "s": 6682, "text": "Connect the Rpi4 to an external monitor with HDMI, connect a USB keyboard, and connect power.When the login prompt shows up, type “root” and hit enter to login as root (by default there is no password for root, but you can set one up if desired).Configure the Elk Audio OS for use with HiFiBerry ADC + DAC. Type this into the command prompt:sudo elk_system_utils --set-audio-hat hifiberry-dac-plus-adc" }, { "code": null, "e": 7178, "s": 7084, "text": "Connect the Rpi4 to an external monitor with HDMI, connect a USB keyboard, and connect power." }, { "code": null, "e": 7332, "s": 7178, "text": "When the login prompt shows up, type “root” and hit enter to login as root (by default there is no password for root, but you can set one up if desired)." }, { "code": null, "e": 7488, "s": 7332, "text": "Configure the Elk Audio OS for use with HiFiBerry ADC + DAC. 
Type this into the command prompt:sudo elk_system_utils --set-audio-hat hifiberry-dac-plus-adc" }, { "code": null, "e": 7550, "s": 7488, "text": "And reboot the Rpi4 for the change to take effect:sudo reboot" }, { "code": null, "e": 7650, "s": 7550, "text": "4. Configure for communicating over Wifi or Ethernet as detailed in the official Elk documentation." }, { "code": null, "e": 7822, "s": 7650, "text": "5. You can now ssh (secure shell) to your Rpi4 from another computer over WiFi. From my Ubuntu Linux computer this is as simple as running the following from the terminal:" }, { "code": null, "e": 7998, "s": 7822, "text": "ssh mind@<ip_address_here># And enter the default password for \"mind\" user: \"elk\"# Files can also be transferred to the Rpi4scp -r YourFile root@<rpi-ip-address>:/target_path/" }, { "code": null, "e": 8125, "s": 7998, "text": "Note: Once you connect the Rpi4 to WiFi, you can obtain the IP Address assigned to it by typing “ip a” in the Elk OS terminal." }, { "code": null, "e": 8534, "s": 8125, "text": "In the past three articles, we covered three neural net models and their real-time implementations. Each model has its own strengths and weaknesses. To select the most appropriate model for our pedal, a simple trade study was conducted. Each neural net model was given a point ranking from 1 to 3, with 3 being the best. Three categories were chosen: training speed, sound quality, and real-time performance." }, { "code": null, "e": 8719, "s": 8534, "text": "Note: Each category was given an equal weighting, but realistically for using the Raspberry Pi, the Real-Time Performance would be the most important factor, followed by Sound Quality." }, { "code": null, "e": 9224, "s": 8719, "text": "The Stateful LSTM model received the highest score for it’s real-time performance and sound quality. The Stateless LSTM has superior training speed, but for our application of creating a custom guitar pedal this is secondary to having a great sound. The WavNet has a great sound but its higher CPU usage may cause problems on the Raspberry Pi (although this is worth going back and testing). Also, WavNet’s difficulty in handling high gain guitar sounds makes it less appealing for use in a guitar pedal." }, { "code": null, "e": 9676, "s": 9224, "text": "If you are using NeuralPi.vst3 from Github, you can skip this section. These steps are for if you want to compile your own VST3 using JUCE, or modify an existing VST3 plugin to run on Elk OS. The Elk Audio OS runs “headless” plugins, which means there is no graphical interface. JUCE 6 has the headless build feature included, but you may need to further modify your plugin to meet the requirements detailed in the Build Plugins for Elk documentation." }, { "code": null, "e": 9744, "s": 9676, "text": "On a Linux computer or Linux virtual machine (I used Ubuntu 18.04):" }, { "code": null, "e": 10296, "s": 9744, "text": "Download the Elk Audio SDK for Rpi4Run the downloaded SDK (.sh file) to install. I used the default location: /opt/elk/1.0Open your .jucer project using the Projucer application (JUCE 6, I used JUCE 6.08), and modify the project according to the Elk documentation (see “Inside Projucer:” section).Create a Linux Makefile build target if it doesn’t already exist and save the Projucer project to generate the makefile directory and makefile.Follow the steps as detailed in the Cross-Compiling JUCE Plugin documentation. 
Here are the exact steps I used:" }, { "code": null, "e": 10332, "s": 10296, "text": "Download the Elk Audio SDK for Rpi4" }, { "code": null, "e": 10420, "s": 10332, "text": "Run the downloaded SDK (.sh file) to install. I used the default location: /opt/elk/1.0" }, { "code": null, "e": 10596, "s": 10420, "text": "Open your .jucer project using the Projucer application (JUCE 6, I used JUCE 6.08), and modify the project according to the Elk documentation (see “Inside Projucer:” section)." }, { "code": null, "e": 10740, "s": 10596, "text": "Create a Linux Makefile build target if it doesn’t already exist and save the Projucer project to generate the makefile directory and makefile." }, { "code": null, "e": 10852, "s": 10740, "text": "Follow the steps as detailed in the Cross-Compiling JUCE Plugin documentation. Here are the exact steps I used:" }, { "code": null, "e": 11015, "s": 10852, "text": "After entering the make command, the plugin will begin to compile. If all goes smoothly, you will now have a .vst3 plugin compatible with Elk OS / Raspberry Pi 4." }, { "code": null, "e": 11254, "s": 11015, "text": "Important: Ensure the architecture folder name in the compiled VST3 is “aarch64-linux”, for example, “PluginName.vst3/Contents/aarch64-linux”, otherwise the plugin will not run on Elk OS. Rename the folder to “aarch64-linux” if necessary." }, { "code": null, "e": 11402, "s": 11254, "text": "I immediately ran into problems when I first ran my plugin on the Raspberry Pi. It was running without any errors, but it made this horrible sound:" }, { "code": null, "e": 11695, "s": 11402, "text": "With some help from the Elk Audio forums, it was determined that my plugin was running too slowly. If I looked at the process diagnostics, it was using 99% of the CPU’s resources, and probably overrunning that amount. I had to find a way to optimize my code, or it simply would not be usable." }, { "code": null, "e": 12096, "s": 11695, "text": "In a very serendipitous turn of events, the day after I came to the conclusion that my current plugin wouldn’t work on the Pi, the creator of RTNeural reached out to me, interested in implementing his code in one of my plugins. It turns out that he had done quite a bit of research on optimizing neural networks for audio processing, and created RTNeural as an inferencing engine for real-time audio." }, { "code": null, "e": 12573, "s": 12096, "text": "It turned out to be fairly straight-forward to swap out my LSTM inference code for RTNeural. I loaded up the newly compiled plugin with RTNeural on the Pi, crossed my fingers, and looked at the process diagnostics. At first I couldn’t believe what I was seeing: it had dropped from 99% to 16%! I plugged in my guitar and headphones and it sounded just like the plugin on the laptop. I added some delay and reverb, and made this sample recording directly from the Raspberry Pi:" }, { "code": null, "e": 12952, "s": 12573, "text": "I won’t go into the details of why RTNeural was able to supercharge my plugin, but it has to do with utilizing SIMD (Single Instruction, Multiple Data) instructions. These are low-level operations that use process vectorization to run computations in parallel (similar to what graphics cards do). Modern processors have this capability for high performance computing on the CPU." }, { "code": null, "e": 13381, "s": 12952, "text": "Elk uses a plugin host called “Sushi”. Sushi uses configuration files that set up audio routing, plugin settings, and midi settings. 
You can view the example configuration file for NeuralPi on Github. In order to move your plugin to the Raspberry Pi, I recommend using “ssh” (secure shell) over WiFi. I used a dedicated Ubuntu Linux laptop, but there are several ways you can accomplish this. I’ll go over the steps I used here." }, { "code": null, "e": 13470, "s": 13381, "text": "Download the NeuralPi-elk.zip from Github and extract the .vst3 and configuration files." }, { "code": null, "e": 13559, "s": 13470, "text": "Download the NeuralPi-elk.zip from Github and extract the .vst3 and configuration files." }, { "code": null, "e": 13701, "s": 13559, "text": "Note: This article references the original release of NeuralPi. For the latest release with updated features, see the NeuralPi Releases page." }, { "code": null, "e": 13732, "s": 13701, "text": "2. Move the .vst3 to the Rpi4." }, { "code": null, "e": 13930, "s": 13732, "text": "Connect to the Pi using one of the methods described here. To connect through ssh over Wifi, I used the following commands on a separate Linux computer connected to the same Wifi network as the Pi." }, { "code": null, "e": 14145, "s": 13930, "text": "# Secure copy the .vst3 to the Raspberry Piscp -r YourPlugin.vst3 root@<rpi-ip-address>:/home/mind/plugins/# After you copy the vst3 to the RPi, you can ssh to verify it # copied correctlyssh root@<rpi_ip_address>" }, { "code": null, "e": 14285, "s": 14145, "text": "3. Create a .json config file for running the plugin. If you are running NeuralPi.vst3, you can use the one included in the Github release." }, { "code": null, "e": 14412, "s": 14285, "text": "# Secure copy the .config to the Raspberry Piscp -r YourConfig.config root@<rpi-ip-address>:/home/mind/config_files/" }, { "code": null, "e": 14510, "s": 14412, "text": "4. Connect the audio adapters, guitar input, and audio output device (speakers, headphones, amp)." }, { "code": null, "e": 14541, "s": 14510, "text": "5. Run the plugin using Sushi." }, { "code": null, "e": 14732, "s": 14541, "text": "# Login to the \"mind\" user and run the plugin with sushi# (the default password is \"elk\")ssh mind@<rpi_ip_address>sushi -r --multicore-processing=2 -c ~/config_files/config_neuralpi.json &" }, { "code": null, "e": 14907, "s": 14732, "text": "This will run the headless plugin in the background. You can view the process usage of the real-time plugin by typing the following: watch -n 0.5 cat /proc/xenomai/sched/stat" }, { "code": null, "e": 15007, "s": 14907, "text": "Note: Using the standard Linux process tools like “top” will only show you the non-real time tasks." }, { "code": null, "e": 15074, "s": 15007, "text": "To quit the plugin, type the following in the terminal:pkill sushi" }, { "code": null, "e": 15233, "s": 15074, "text": "You can set up the Raspberry Pi to automatically run a configuration when it boots up. This is useful for operating the Pi like a typical audio effect device." }, { "code": null, "e": 15680, "s": 15233, "text": "At 16% process usage on the Pi, there’s plenty of room for adding other effects such as cab simulation (impulse response), reverb, delay, flange, or any number of less CPU intensive effects. The sky is the limit when it comes to the possibilities on the Raspberry Pi. One could create a phone app that controls the plugin over WiFi, add knobs and controls, or even dual boot another OS running a media center, web browser, or video game emulator." 
}, { "code": null, "e": 16028, "s": 15680, "text": "Elk Audio OS is designed for low latency audio, so you don’t need to worry about other processes interfering with the plugin performance as you would on a normal laptop. The Raspberry Pi 4 processor has four cores, with two running real-time operations, and two for non-real time operations. This ensures that the audio performance stays constant." }, { "code": null, "e": 16263, "s": 16028, "text": "Right now this Raspberry Pi guitar pedal is very bare-bones, but I plan on adding physical knobs and a nice looking enclosure to turn it into an actual guitar pedal. I’ll post a new article on the build process once that is completed." }, { "code": null, "e": 16562, "s": 16263, "text": "UPDATE 1: Since the time of writing this article I’ve added the ability to control model selection, EQ/Gain/Volume over WiFi from a Windows or Mac computer using the NeuralPi plugin/app. Multiple amp and pedal models are available here. (Release v1.1) See the NeuralPi GitHub page for more details:" }, { "code": null, "e": 16573, "s": 16562, "text": "github.com" }, { "code": null, "e": 16663, "s": 16573, "text": "UPDATE 2: A 3-D printed case is available for NeuralPi. The STL files are available here." }, { "code": null, "e": 17033, "s": 16663, "text": "Update 3: Version 1.2 of the NeuralPi software now includes the ability to load Impulse Response files, as well as simple Delay and Reverb effects. Impulse response is a commonly used effect to model guitar cabinets or the reverb characteristics of a particular space. When combined with the LSTM model it provides a more realistic representation of a guitar amplifier." }, { "code": null, "e": 17044, "s": 17033, "text": "github.com" }, { "code": null, "e": 17261, "s": 17044, "text": "Update 4: Version 1.3 of NeuralPi adds the ability to load conditioned models (full range of a gain/drive knob) and swaps the default snapshot models for conditioned models of the TS-9, Blues Jr., and HT40 amplifier." }, { "code": null, "e": 17272, "s": 17261, "text": "github.com" }, { "code": null, "e": 17493, "s": 17272, "text": "This concludes my five part series on using Neural Networks for real-time audio. I hope it sparked your curiosity on the possibilities of neural networks for music and maybe even convinced you to try it out for yourself." }, { "code": null, "e": 17516, "s": 17493, "text": "Thank you for reading!" } ]
Object.entries() In JavaScript - GeeksforGeeks
22 Dec, 2021
Object and Object Constructors in JavaScript?
In the living world of object-oriented programming we already know the importance of classes and objects, but unlike other programming languages, JavaScript does not have the traditional classes seen in other languages. JavaScript does, however, have objects and constructors, which work mostly in the same way to perform the same kinds of operations.
Constructors are general JavaScript functions which are used with the "new" keyword. Constructors are of two types in JavaScript, i.e. built-in constructors (Array and Object) and custom constructors (which define properties and methods for specific objects).
Constructors are useful when we need a way to create an object "type" that can be used multiple times without having to redefine the object every time, and this can be achieved using the Object Constructor function.
It's a convention to capitalize the name of constructors to distinguish them from regular functions. For instance, consider the following code:

function Automobile(color) {
   this.color=color;
}

var vehicle1 = new Automobile ("red");

The function "Automobile()" is an object constructor, and its property "color" is declared inside it by prefixing it with the keyword "this". Objects defined using an object constructor are then instantiated using the keyword "new".
When new Automobile() is called, JavaScript does two things:
It creates a fresh new object (instance) of Automobile() and assigns it to a variable.
It sets the constructor property, i.e. "color", of the object to Automobile.
Object.entries() Method
The Object.entries() method is used to return an array consisting of the enumerable property [key, value] pairs of the object which is passed as the parameter. The ordering of the properties is the same as that given by looping over the property values of the object manually.
Difference between Object.entries() and Object.values() method
The Object.entries() method in JavaScript returns an array consisting of the enumerable property [key, value] pairs of the object which is passed as the parameter, whereas the Object.values() method in JavaScript returns an array whose elements are the enumerable property values found on the object. Follow the example below for better understanding of the differences between these two functions.

Input: var object = { 0: '23', 1: 'geeksforgeeks', 2: 'true' };
       console.log(Object.values(object));
       console.log(Object.entries(object));

Output: Array ["23", "geeksforgeeks", "true"]
        Array [["0", "23"], ["1", "geeksforgeeks"], ["2", "true"]]

Explanation: In the above example an object has been created with three [key, value] pairs; the Object.entries() method returns the [key, value] pairs of the object and the Object.values() method returns the values found on the object.
Applications:
Object.entries() is used for listing properties related to an object.
Object.entries() is used for listing all the [key, value] pairs of an object.
Syntax:
Object.entries(obj)
Parameters Used:
obj : It is the object whose enumerable own property [key, value] pairs are to be returned.
Return Value:
Object.entries() returns an array consisting of the enumerable property [key, value] pairs of the object passed.
Examples of the above function are provided below.
Examples:

Input : const obj = { 0: 'adam', 1: 'billy', 2: 'chris' };
        console.log(Object.entries(obj)[1]);

Output : Array ["1", "billy"]

Explanation: In this example, an object "obj" has been created with three property [key, value] pairs, and the Object.entries() method is used to return the first property [key, value] pair of the object.

Input : const obj = { 10: 'adam', 200: 'billy', 35: 'chris' };
        console.log(Object.entries(obj));

Output : Array [ ["10", "adam"], ["35", "chris"], ["200", "billy"]]

Explanation: In this example, an object "obj" has been created with three property [key, value] pairs, and the Object.entries() method is used to return all the property [key, value] pairs of the object.
Codes for the above function are provided below.
Code 1:

<script>
// creating an object constructor
// and assigning values to it
const obj = { 0: 'adam', 1: 'billy', 2: 'chris' };

// Displaying the enumerable property [key, value]
// pairs of the object using the Object.entries() method
console.log(Object.entries(obj)[1]);
</script>

OUTPUT :
Array ["1", "billy"]

Code 2:

<script>
// creating an object constructor and
// assigning values to it
const obj = { 10: 'adam', 200: 'billy', 35: 'chris' };

// Displaying the enumerable property [key, value]
// pairs of the object using the Object.entries() method
console.log(Object.entries(obj));
</script>

OUTPUT :
Array [["10", "adam"], ["35", "chris"], ["200", "billy"]]

Exceptions :
It causes a TypeError if the argument passed is null or undefined; other primitive values are coerced to objects rather than raising an error.
Supported Browsers:
Chrome 54 and above
Edge 14 and above
Firefox 47 and above
Opera 41 and above
Safari 10.1 and above
Reference :https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/entries
[ { "code": null, "e": 24972, "s": 24944, "text": "\n22 Dec, 2021" }, { "code": null, "e": 25018, "s": 24972, "text": "Object and Object Constructors in JavaScript?" }, { "code": null, "e": 25358, "s": 25018, "text": "In the living world of object-oriented programming we already know the importance of classes and objects but unlike other programming languages, JavaScript does not have the traditional classes as seen in other languages. But JavaScript has objects and constructors which work mostly in the same way to perform the same kind of operations." }, { "code": null, "e": 25609, "s": 25358, "text": "Constructors are general JavaScript functions which are used with the “new” keyword. Constructors are of two types in JavaScript i.e. built-in constructors(array and object) and custom constructors(define properties and methods for specific objects)." }, { "code": null, "e": 25829, "s": 25609, "text": "Constructors can be useful when we need a way to create an object “type” that can be used multiple times without having to redefine the object every time and this could be achieved using the Object Constructor function." }, { "code": null, "e": 25972, "s": 25829, "text": "It’s a convention to capitalize the name of constructors to distinguish them from regular functions.For instance, consider the following code:" }, { "code": null, "e": 26064, "s": 25972, "text": "function Automobile(color) {\n this.color=color;\n}\n\nvar vehicle1 = new Automobile (\"red\");\n" }, { "code": null, "e": 26316, "s": 26064, "text": "The function “Automobile()” is an object constructor, and its properties and methods i.e “color” is declared inside it by prefixing it with the keyword “this”. Objects defined using an object constructor are then made instants using the keyword “new”." }, { "code": null, "e": 26377, "s": 26316, "text": "When new Automobile() is called, JavaScript does two things:" }, { "code": null, "e": 26533, "s": 26377, "text": "It creates a fresh new object(instance) Automobile() and assigns it to a variable.It sets the constructor property i.e “color” of the object to Automobile." }, { "code": null, "e": 26616, "s": 26533, "text": "It creates a fresh new object(instance) Automobile() and assigns it to a variable." }, { "code": null, "e": 26690, "s": 26616, "text": "It sets the constructor property i.e “color” of the object to Automobile." }, { "code": null, "e": 27045, "s": 26690, "text": "Object.entries() MethodObject.entries() method is used to return an array consisting of enumerable property [key, value] pairs of the object which are passed as the parameter. The ordering of the properties is the same as that given by looping over the property values of the object manually.Difference between Object.entries() and Object.values() method" }, { "code": null, "e": 27432, "s": 27045, "text": "Object.entries() method in JavaScript returns an array consisting of enumerable property [key, value] pairs of the object which are passed as the parameter whereas Object.values() method in JavaScript returns an array whose elements are the enumerable property values found on the object. Follow the example below for better understanding of the differences between these two functions." 
}, { "code": null, "e": 27698, "s": 27432, "text": "Input: var object = { 0: '23', 1: 'geeksforgeeks', 2: 'true' };\n console.log(Object.values(object));\n console.log(Object.entries(object));\n\nOutput: Array [\"23\", \"geeksforgeeks\", \"true\"]\n Array [[\"0\", \"23\"], [\"1\", \"geeksforgeeks\"],[\"2\", \"true\"]]\n" }, { "code": null, "e": 27933, "s": 27698, "text": "Explanation: In the above example an object has been created with three [key, value] pairs and the object.entries() method returns the [key, value] pairs of the object and object.values() method returns the values found on the object." }, { "code": null, "e": 27947, "s": 27933, "text": "Applications:" }, { "code": null, "e": 28017, "s": 27947, "text": "Object.entries() is used for listing properties related to an object." }, { "code": null, "e": 28094, "s": 28017, "text": "Object.entries() is used for listing all the [key,value] pairs of an object." }, { "code": null, "e": 28102, "s": 28094, "text": "Syntax:" }, { "code": null, "e": 28122, "s": 28102, "text": "Object.entries(obj)" }, { "code": null, "e": 28139, "s": 28122, "text": "Parameters Used:" }, { "code": null, "e": 28244, "s": 28139, "text": "obj : It is the object whose enumerable own property [key, value] pairs are to be returned.Return Value:" }, { "code": null, "e": 28258, "s": 28244, "text": "Return Value:" }, { "code": null, "e": 28367, "s": 28258, "text": "Object.entries() returns an array consisting of enumerable property [key, value] pairs of the object passed." }, { "code": null, "e": 28418, "s": 28367, "text": "Examples of the above function are provided below." }, { "code": null, "e": 28428, "s": 28418, "text": "Examples:" }, { "code": null, "e": 28564, "s": 28428, "text": "Input : const obj = { 0: 'adam', 1: 'billy', 2: 'chris' };\n console.log(Object.entries(obj)[1]);\n\nOutput : Array [\"1\", \"billy\"]\n" }, { "code": null, "e": 28767, "s": 28564, "text": "Explanation: In this example, an object “obj” has been created with three property[key, value] pairs and the Object.entries() method is used to return the first property [key, value] pair of the object." }, { "code": null, "e": 28943, "s": 28767, "text": "Input : const obj = { 10: 'adam', 200: 'billy', 35: 'chris' };\n console.log(Object.entries(obj)); \n\nOutput : Array [ [\"10\", \"adam\"], [\"35\", \"chris\"], [\"200\", \"billy\"]]\n" }, { "code": null, "e": 29145, "s": 28943, "text": "Explanation: In this example, an object “obj” has been created with three property[key, value] pairs and the Object.entries() method is used to return all the property [key, value] pairs of the object." }, { "code": null, "e": 29194, "s": 29145, "text": "Codes for the above function are provided below." 
}, { "code": null, "e": 29202, "s": 29194, "text": "Code 1:" }, { "code": "<script>// creating an object constructor// and assigning values to it const obj = { 0: 'adam', 1: 'billy', 2: 'chris' }; // Displaying the enumerable property [key, value] // pairs of the object using object.entries() method console.log(Object.entries(obj)[1]);</script>", "e": 29475, "s": 29202, "text": null }, { "code": null, "e": 29484, "s": 29475, "text": "OUTPUT :" }, { "code": null, "e": 29505, "s": 29484, "text": "Array [\"1\", \"billy\"]" }, { "code": null, "e": 29513, "s": 29505, "text": "Code 2:" }, { "code": "<script>// creating an object constructor and // assigning values to it const obj = { 10: 'adam', 200: 'billy', 35: 'chris' }; // Displaying the enumerable property [key, value] // pairs of the object using object.entries() method console.log(Object.entries(obj)); </script>", "e": 29789, "s": 29513, "text": null }, { "code": null, "e": 29798, "s": 29789, "text": "OUTPUT :" }, { "code": null, "e": 29855, "s": 29798, "text": "Array [[\"10\", \"adam\"], [\"35\", \"chris\"],[\"200\", \"billy\"]]" }, { "code": null, "e": 29868, "s": 29855, "text": "Exceptions :" }, { "code": null, "e": 29932, "s": 29868, "text": "It causes a TypeError if the argument passed is not an object ." }, { "code": null, "e": 30044, "s": 29932, "text": "It causes a RangeError if the key passed in the argument is not in the range of the property[key, value] pair ." }, { "code": null, "e": 30064, "s": 30044, "text": "Supported Browsers:" }, { "code": null, "e": 30084, "s": 30064, "text": "Chrome 54 and above" }, { "code": null, "e": 30102, "s": 30084, "text": "Edge 14 and above" }, { "code": null, "e": 30123, "s": 30102, "text": "Firefox 47 and above" }, { "code": null, "e": 30142, "s": 30123, "text": "Opera 41 and above" }, { "code": null, "e": 30164, "s": 30142, "text": "Safari 10.1 and above" }, { "code": null, "e": 30271, "s": 30164, "text": "Reference :https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/entries" }, { "code": null, "e": 30285, "s": 30271, "text": "Pallavi Yadav" }, { "code": null, "e": 30297, "s": 30285, "text": "ysachin2314" }, { "code": null, "e": 30318, "s": 30297, "text": "javascript-functions" }, { "code": null, "e": 30336, "s": 30318, "text": "javascript-object" }, { "code": null, "e": 30347, "s": 30336, "text": "JavaScript" }, { "code": null, "e": 30445, "s": 30347, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 30485, "s": 30445, "text": "Remove elements from a JavaScript Array" }, { "code": null, "e": 30530, "s": 30485, "text": "Convert a string to an integer in JavaScript" }, { "code": null, "e": 30591, "s": 30530, "text": "Difference between var, let and const keywords in JavaScript" }, { "code": null, "e": 30663, "s": 30591, "text": "Differences between Functional Components and Class Components in React" }, { "code": null, "e": 30715, "s": 30663, "text": "How to append HTML code to a div using JavaScript ?" }, { "code": null, "e": 30761, "s": 30715, "text": "How to Open URL in New Tab using JavaScript ?" }, { "code": null, "e": 30802, "s": 30761, "text": "Difference Between PUT and PATCH Request" }, { "code": null, "e": 30843, "s": 30802, "text": "JavaScript | console.log() with Examples" }, { "code": null, "e": 30891, "s": 30843, "text": "How to read a local text file using JavaScript?" } ]
Python Program for Heap Sort
In this article, we will learn about the solution to the problem statement given below.
Problem statement − We are given an array; we need to sort it using the concept of heapsort. Here we place the maximum element at the end. This is repeated until the array is sorted.
Now let's observe the solution in the implementation below −

# heapify
def heapify(arr, n, i):
   largest = i # initialize largest as the root
   l = 2 * i + 1 # left child
   r = 2 * i + 2 # right child
   # if the left child exists and is greater than the root
   if l < n and arr[i] < arr[l]:
      largest = l
   # if the right child exists and is greater than the current largest
   if r < n and arr[largest] < arr[r]:
      largest = r
   # if the root is not the largest, swap and keep heapifying
   if largest != i:
      arr[i],arr[largest] = arr[largest],arr[i] # swap
      # heapify the affected subtree
      heapify(arr, n, largest)

# sort
def heapSort(arr):
   n = len(arr)
   # build a max heap
   for i in range(n, -1, -1):
      heapify(arr, n, i)
   # extract elements one by one
   for i in range(n-1, 0, -1):
      arr[i], arr[0] = arr[0], arr[i] # swap
      heapify(arr, i, 0)

# main
arr = [2,5,3,8,6,5,4,7]
heapSort(arr)
n = len(arr)
print ("Sorted array is")
for i in range(n):
   print (arr[i],end=" ")

Output:
Sorted array is
2 3 4 5 5 6 7 8

All the variables are declared in the local scope and their references are seen in the program above.
In this article, we have learned how to write a Python program for heap sort.
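As a quick sanity check, the heapSort function defined above can be compared against Python's built-in sorted() on random inputs. The helper below is only illustrative and assumes heapSort is already defined in the same script.

import random

def test_heap_sort(trials=100):
   # verify that heapSort agrees with the built-in sorted() on random lists
   for _ in range(trials):
      data = [random.randint(0, 50) for _ in range(random.randint(0, 20))]
      expected = sorted(data)
      heapSort(data) # sorts the list in place
      assert data == expected
   print("heapSort matched sorted() on", trials, "random lists")

test_heap_sort()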
[ { "code": null, "e": 1150, "s": 1062, "text": "In this article, we will learn about the solution to the problem statement given below." }, { "code": null, "e": 1243, "s": 1150, "text": "Problem statement − We are given an array, we need to sort it using the concept of heapsort." }, { "code": null, "e": 1333, "s": 1243, "text": "Here we place the maximum element at the end. This is repeated until the array is sorted." }, { "code": null, "e": 1393, "s": 1333, "text": "Now let’s observe the solution in the implementation below−" }, { "code": null, "e": 1404, "s": 1393, "text": " Live Demo" }, { "code": null, "e": 2172, "s": 1404, "text": "# heapify\ndef heapify(arr, n, i):\n largest = i # largest value\n l = 2 * i + 1 # left\n r = 2 * i + 2 # right\n # if left child exists\n if l < n and arr[i] < arr[l]:\n largest = l\n # if right child exits\n if r < n and arr[largest] < arr[r]:\n largest = r\n # root\n if largest != i:\n arr[i],arr[largest] = arr[largest],arr[i] # swap\n # root.\n heapify(arr, n, largest)\n# sort\ndef heapSort(arr):\n n = len(arr)\n # maxheap\n for i in range(n, -1, -1):\n heapify(arr, n, i)\n # element extraction\n for i in range(n-1, 0, -1):\n arr[i], arr[0] = arr[0], arr[i] # swap\n heapify(arr, i, 0)\n# main\narr = [2,5,3,8,6,5,4,7]\nheapSort(arr)\nn = len(arr)\nprint (\"Sorted array is\")\nfor i in range(n):\n print (arr[i],end=\" \")" }, { "code": null, "e": 2204, "s": 2172, "text": "Sorted array is\n2 3 4 5 5 6 7 8" }, { "code": null, "e": 2305, "s": 2204, "text": "All the variables are declared in the local scope and their references are seen in the figure above." }, { "code": null, "e": 2391, "s": 2305, "text": "In this article, we have learned about how we can make a Python Program for Heap Sort" } ]
SQLAlchemy Core - Multiple Table Deletes
In this chapter, we will look into the Multiple Table Deletes expression, which is similar to the Multiple Table Updates function.
More than one table can be referred to in the WHERE clause of a DELETE statement in many DBMS dialects. For PostgreSQL and MySQL, the "DELETE USING" syntax is used; for SQL Server, the "DELETE FROM" expression refers to more than one table. The SQLAlchemy delete() construct supports both of these modes implicitly, by specifying multiple tables in the WHERE clause as follows −

stmt = users.delete().\
   where(users.c.id == addresses.c.id).\
   where(addresses.c.email_address.startswith('xyz%'))

conn.execute(stmt)

On a PostgreSQL backend, the resulting SQL from the above statement would render as −

DELETE FROM users USING addresses
WHERE users.id = addresses.id
AND (addresses.email_address LIKE %(email_address_1)s || '%%')

If this method is used with a database that doesn't support this behaviour, the compiler will raise NotImplementedError.
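The snippet above assumes that the users and addresses Table objects and a connection conn already exist; the chapter does not show those definitions. The following is only an illustrative sketch of Core table metadata consistent with the columns used (id and email_address); the column types, the extra name column, and the connection URL are assumptions.

from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String, ForeignKey

engine = create_engine('postgresql://user:password@localhost/test')   # placeholder URL
meta = MetaData()

users = Table('users', meta,
   Column('id', Integer, primary_key = True),
   Column('name', String),
)

addresses = Table('addresses', meta,
   Column('id', Integer, ForeignKey('users.id')),
   Column('email_address', String),
)

meta.create_all(engine)
conn = engine.connect()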
[ { "code": null, "e": 2472, "s": 2340, "text": "In this chapter, we will look into the Multiple Table Deletes expression which is similar to using Multiple Table Updates function." }, { "code": null, "e": 2836, "s": 2472, "text": "More than one table can be referred in WHERE clause of DELETE statement in many DBMS dialects. For PG and MySQL, “DELETE USING” syntax is used; and for SQL Server, using “DELETE FROM” expression refers to more than one table. The SQLAlchemy delete() construct supports both of these modes implicitly, by specifying multiple tables in the WHERE clause as follows −" }, { "code": null, "e": 2975, "s": 2836, "text": "stmt = users.delete().\\\n where(users.c.id == addresses.c.id).\\\n where(addresses.c.email_address.startswith('xyz%'))\nconn.execute(stmt)" }, { "code": null, "e": 3061, "s": 2975, "text": "On a PostgreSQL backend, the resulting SQL from the above statement would render as −" }, { "code": null, "e": 3188, "s": 3061, "text": "DELETE FROM users USING addresses\nWHERE users.id = addresses.id\nAND (addresses.email_address LIKE %(email_address_1)s || '%%')" }, { "code": null, "e": 3309, "s": 3188, "text": "If this method is used with a database that doesn’t support this behaviour, the compiler will raise NotImplementedError." }, { "code": null, "e": 3344, "s": 3309, "text": "\n 21 Lectures \n 1.5 hours \n" }, { "code": null, "e": 3355, "s": 3344, "text": " Jack Chan" }, { "code": null, "e": 3362, "s": 3355, "text": " Print" }, { "code": null, "e": 3373, "s": 3362, "text": " Add Notes" } ]
Tk - Basic Widgets
Basic widgets are common widgets available in almost all Tk applications. The list of available basic widgets is given below −
Label − Widget for displaying a single line of text.
Button − Widget that is clickable and triggers an action.
Entry − Widget used to accept a single line of text as input.
Message − Widget for displaying multiple lines of text.
Text − Widget for displaying and optionally editing multiple lines of text.
Toplevel − Widget used to create a frame that is a new top-level window.
A simple Tk example is shown below using basic widgets −

#!/usr/bin/wish

grid [label .myLabel -text "Label Widget" -textvariable labelText]
grid [text .myText -width 20 -height 5]
.myText insert 1.0 "Text\nWidget\n"
grid [entry .myEntry -text "Entry Widget"]
grid [message .myMessage -background red -foreground white -text "Message\nWidget"]
grid [button .myButton1 -text "Button" -command "set labelText clicked"]

When we run the above program, we will get a window showing the label, text, entry, message, and button widgets.
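The example above exercises the label, text, entry, message, and button widgets; the toplevel widget from the list is the only one not demonstrated. A minimal, illustrative addition (the widget names are arbitrary) −

#!/usr/bin/wish

# create a new top level window with its own title and place a label inside it
toplevel .myToplevel
wm title .myToplevel "Toplevel Widget"
grid [label .myToplevel.myLabel -text "This label lives in a separate window"]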
[ { "code": null, "e": 2328, "s": 2201, "text": "Basic widgets are common widgets available in almost all Tk applications. The list of available basic widgets is given below −" }, { "code": null, "e": 2371, "s": 2328, "text": "Widget for displaying single line of text." }, { "code": null, "e": 2420, "s": 2371, "text": "Widget that is clickable and triggers an action." }, { "code": null, "e": 2474, "s": 2420, "text": "Widget used to accept a single line of text as input." }, { "code": null, "e": 2520, "s": 2474, "text": "Widget for displaying multiple lines of text." }, { "code": null, "e": 2586, "s": 2520, "text": "Widget for displaying and optionally edit multiple lines of text." }, { "code": null, "e": 2648, "s": 2586, "text": "Widget used to create a frame that is a new top level window." }, { "code": null, "e": 2705, "s": 2648, "text": "A simple Tk example is shown below using basic widgets −" }, { "code": null, "e": 3067, "s": 2705, "text": "#!/usr/bin/wish\n\ngrid [label .myLabel -text \"Label Widget\" -textvariable labelText] \ngrid [text .myText -width 20 -height 5]\n.myText insert 1.0 \"Text\\nWidget\\n\"\ngrid [entry .myEntry -text \"Entry Widget\"]\ngrid [message .myMessage -background red -foreground white -text \"Message\\nWidget\"]\ngrid [button .myButton1 -text \"Button\" -command \"set labelText clicked\"]" }, { "code": null, "e": 3133, "s": 3067, "text": "When we run the above program, we will get the following output −" }, { "code": null, "e": 3140, "s": 3133, "text": " Print" }, { "code": null, "e": 3151, "s": 3140, "text": " Add Notes" } ]
How do you get selenium to recognize that a page loaded?
We can get Selenium to recognize that a page is loaded. We can set the implicit wait for this purpose. It shall make the driver to wait for a specific amount of time for an element to be available after page loaded. driver.manage().timeouts().implicitlyWait(); After the page is loaded, we can also invoke Javascript method document.readyState and wait till complete is returned. JavascriptExecutor js = (JavascriptExecutor)driver; js.executeScript("return document.readyState").toString().equals("complete"); After this, verify if the URL matches the one we are looking for. Code Implementation with implicit wait. import org.openqa.selenium.By; import org.openqa.selenium.WebDriver; import org.openqa.selenium.WebElement; import org.openqa.selenium.chrome.ChromeDriver; import java.util.concurrent.TimeUnit; public class Pageload{ public static void main(String[] args) { System.setProperty("webdriver.chrome.driver", "C:\\Users\\ghs6kor\\Desktop\\Java\\chromedriver.exe"); WebDriver driver = new ChromeDriver(); String url = "https://www.tutorialspoint.com/index.htm"; driver.get(url); // wait of 12 seconds driver.manage().timeouts().implicitlyWait(12, TimeUnit.SECONDS); // identify element, enter text WebElement m=driver.findElement(By.id("gsc-i-id1")); m.sendKeys("Selenium"); } } Code Implementation with Javascript Executor. import org.openqa.selenium.By; import org.openqa.selenium.WebDriver; import org.openqa.selenium.WebElement; import org.openqa.selenium.chrome.ChromeDriver; import java.util.concurrent.TimeUnit; import org.openqa.selenium.JavascriptExecutor; public class PagaLoadJS{ public static void main(String[] args) { System.setProperty("webdriver.chrome.driver", "C:\\Users\\ghs6kor\\Desktop\\Java\\chromedriver.exe"); WebDriver driver = new ChromeDriver(); String url = "https://www.tutorialspoint.com/index.htm"; driver.get(url); // Javascript executor to return value JavascriptExecutor j = (JavascriptExecutor) driver; j.executeScript("return document.readyState") .toString().equals("complete"); // get the current URL String s = driver.getCurrentUrl(); // checking condition if the URL is loaded if (s.equals(url)) { System.out.println("Page Loaded"); System.out.println("Current Url: " + s); } else { System.out.println("Page did not load"); } driver.quit(); } }
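The readyState check in the second program above runs only once. To actually make the driver wait until "complete" is returned, the same check can be wrapped in an explicit wait. A minimal, hypothetical sketch is shown below (the class and method names are our own, and note that the WebDriverWait constructor takes a Duration instead of a plain number of seconds in Selenium 4) −
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.WebDriverWait;
public class WaitForReadyState{
   // Polls document.readyState until the browser reports "complete",
   // or until the timeout (in seconds) expires.
   public static void waitForPageLoad(WebDriver driver, long timeoutInSeconds) {
      new WebDriverWait(driver, timeoutInSeconds).until(
         d -> ((JavascriptExecutor) d)
            .executeScript("return document.readyState")
            .toString().equals("complete"));
   }
}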
[ { "code": null, "e": 1278, "s": 1062, "text": "We can get Selenium to recognize that a page is loaded. We can set the implicit wait for this purpose. It shall make the driver to wait for a specific amount of time for an element to be available after page loaded." }, { "code": null, "e": 1323, "s": 1278, "text": "driver.manage().timeouts().implicitlyWait();" }, { "code": null, "e": 1442, "s": 1323, "text": "After the page is loaded, we can also invoke Javascript method document.readyState and wait till complete is returned." }, { "code": null, "e": 1572, "s": 1442, "text": "JavascriptExecutor js = (JavascriptExecutor)driver;\njs.executeScript(\"return document.readyState\").toString().equals(\"complete\");" }, { "code": null, "e": 1638, "s": 1572, "text": "After this, verify if the URL matches the one we are looking for." }, { "code": null, "e": 1678, "s": 1638, "text": "Code Implementation with implicit wait." }, { "code": null, "e": 2417, "s": 1678, "text": "import org.openqa.selenium.By;\nimport org.openqa.selenium.WebDriver;\nimport org.openqa.selenium.WebElement;\nimport org.openqa.selenium.chrome.ChromeDriver;\nimport java.util.concurrent.TimeUnit;\npublic class Pageload{\n public static void main(String[] args) {\n System.setProperty(\"webdriver.chrome.driver\",\n \"C:\\\\Users\\\\ghs6kor\\\\Desktop\\\\Java\\\\chromedriver.exe\");\n WebDriver driver = new ChromeDriver();\n String url = \"https://www.tutorialspoint.com/index.htm\";\n driver.get(url);\n // wait of 12 seconds\n driver.manage().timeouts().implicitlyWait(12, TimeUnit.SECONDS);\n // identify element, enter text\n WebElement m=driver.findElement(By.id(\"gsc-i-id1\"));\n m.sendKeys(\"Selenium\");\n }\n}" }, { "code": null, "e": 2463, "s": 2417, "text": "Code Implementation with Javascript Executor." }, { "code": null, "e": 3558, "s": 2463, "text": "import org.openqa.selenium.By;\nimport org.openqa.selenium.WebDriver;\nimport org.openqa.selenium.WebElement;\nimport org.openqa.selenium.chrome.ChromeDriver;\nimport java.util.concurrent.TimeUnit;\nimport org.openqa.selenium.JavascriptExecutor;\npublic class PagaLoadJS{\n public static void main(String[] args) {\n System.setProperty(\"webdriver.chrome.driver\",\n \"C:\\\\Users\\\\ghs6kor\\\\Desktop\\\\Java\\\\chromedriver.exe\");\n WebDriver driver = new ChromeDriver();\n String url = \"https://www.tutorialspoint.com/index.htm\";\n driver.get(url);\n // Javascript executor to return value\n JavascriptExecutor j = (JavascriptExecutor) driver;\n j.executeScript(\"return document.readyState\")\n .toString().equals(\"complete\");\n // get the current URL\n String s = driver.getCurrentUrl();\n // checking condition if the URL is loaded\n if (s.equals(url)) {\n System.out.println(\"Page Loaded\");\n System.out.println(\"Current Url: \" + s);\n }\n else {\n System.out.println(\"Page did not load\");\n }\n driver.quit();\n }\n}" } ]
Java Generics - Methods
You can write a single generic method declaration that can be called with arguments of different types. Based on the types of the arguments passed to the generic method, the compiler handles each method call appropriately. Following are the rules to define Generic Methods −
All generic method declarations have a type parameter section delimited by angle brackets (< and >) that precedes the method's return type ( < E > in the next example).
Each type parameter section contains one or more type parameters separated by commas. A type parameter, also known as a type variable, is an identifier that specifies a generic type name.
The type parameters can be used to declare the return type and act as placeholders for the types of the arguments passed to the generic method, which are known as actual type arguments.
A generic method's body is declared like that of any other method. Note that type parameters can represent only reference types, not primitive types (like int, double and char).
The following example illustrates how we can print arrays of different types using a single generic method −
public class GenericMethodTest {
   // generic method printArray
   public static < E > void printArray( E[] inputArray ) {
      // Display array elements
      for(E element : inputArray) {
         System.out.printf("%s ", element);
      }
      System.out.println();
   }

   public static void main(String args[]) {
      // Create arrays of Integer, Double and Character
      Integer[] intArray = { 1, 2, 3, 4, 5 };
      Double[] doubleArray = { 1.1, 2.2, 3.3, 4.4 };
      Character[] charArray = { 'H', 'E', 'L', 'L', 'O' };

      System.out.println("Array integerArray contains:");
      printArray(intArray);   // pass an Integer array

      System.out.println("\nArray doubleArray contains:");
      printArray(doubleArray);   // pass a Double array

      System.out.println("\nArray characterArray contains:");
      printArray(charArray);   // pass a Character array
   }
}
This will produce the following result −
Array integerArray contains:
1 2 3 4 5 

Array doubleArray contains:
1.1 2.2 3.3 4.4 

Array characterArray contains:
H E L L O
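The printArray method above returns void. As a small additional sketch of the rule that a type parameter can also appear in the return type (the class and method names below are ours, not part of the original tutorial), the same idea can be used to return the first element of whatever array is passed in −
public class GenericReturnTest {
   // The type parameter E is used both for the argument and the return type.
   public static < E > E firstElement( E[] inputArray ) {
      return inputArray[0];
   }

   public static void main(String args[]) {
      Integer[] intArray = { 1, 2, 3 };
      String[] strArray = { "alpha", "beta" };

      Integer i = firstElement(intArray);   // E is inferred as Integer
      String s = firstElement(strArray);    // E is inferred as String

      System.out.println(i);
      System.out.println(s);
   }
}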
[ { "code": null, "e": 2915, "s": 2640, "text": "You can write a single generic method declaration that can be called with arguments of different types. Based on the types of the arguments passed to the generic method, the compiler handles each method call appropriately. Following are the rules to define Generic Methods −" }, { "code": null, "e": 3084, "s": 2915, "text": "All generic method declarations have a type parameter section delimited by angle brackets (< and >) that precedes the method's return type ( < E > in the next example)." }, { "code": null, "e": 3253, "s": 3084, "text": "All generic method declarations have a type parameter section delimited by angle brackets (< and >) that precedes the method's return type ( < E > in the next example)." }, { "code": null, "e": 3441, "s": 3253, "text": "Each type parameter section contains one or more type parameters separated by commas. A type parameter, also known as a type variable, is an identifier that specifies a generic type name." }, { "code": null, "e": 3629, "s": 3441, "text": "Each type parameter section contains one or more type parameters separated by commas. A type parameter, also known as a type variable, is an identifier that specifies a generic type name." }, { "code": null, "e": 3815, "s": 3629, "text": "The type parameters can be used to declare the return type and act as placeholders for the types of the arguments passed to the generic method, which are known as actual type arguments." }, { "code": null, "e": 4001, "s": 3815, "text": "The type parameters can be used to declare the return type and act as placeholders for the types of the arguments passed to the generic method, which are known as actual type arguments." }, { "code": null, "e": 4179, "s": 4001, "text": "A generic method's body is declared like that of any other method. Note that type parameters can represent only reference types, not primitive types (like int, double and char)." }, { "code": null, "e": 4357, "s": 4179, "text": "A generic method's body is declared like that of any other method. Note that type parameters can represent only reference types, not primitive types (like int, double and char)." 
}, { "code": null, "e": 4463, "s": 4357, "text": "Following example illustrates how we can print an array of different type using a single Generic method −" }, { "code": null, "e": 5356, "s": 4463, "text": "public class GenericMethodTest {\n // generic method printArray\n public static < E > void printArray( E[] inputArray ) {\n // Display array elements\n for(E element : inputArray) {\n System.out.printf(\"%s \", element);\n }\n System.out.println();\n }\n\n public static void main(String args[]) {\n // Create arrays of Integer, Double and Character\n Integer[] intArray = { 1, 2, 3, 4, 5 };\n Double[] doubleArray = { 1.1, 2.2, 3.3, 4.4 };\n Character[] charArray = { 'H', 'E', 'L', 'L', 'O' };\n\n System.out.println(\"Array integerArray contains:\");\n printArray(intArray); // pass an Integer array\n\n System.out.println(\"\\nArray doubleArray contains:\");\n printArray(doubleArray); // pass a Double array\n\n System.out.println(\"\\nArray characterArray contains:\");\n printArray(charArray); // pass a Character array\n }\n}" }, { "code": null, "e": 5397, "s": 5356, "text": "This will produce the following result −" }, { "code": null, "e": 5526, "s": 5397, "text": "Array integerArray contains:\n1 2 3 4 5 \n\nArray doubleArray contains:\n1.1 2.2 3.3 4.4 \n\nArray characterArray contains:\nH E L L O\n" }, { "code": null, "e": 5559, "s": 5526, "text": "\n 16 Lectures \n 2 hours \n" }, { "code": null, "e": 5575, "s": 5559, "text": " Malhar Lathkar" }, { "code": null, "e": 5608, "s": 5575, "text": "\n 19 Lectures \n 5 hours \n" }, { "code": null, "e": 5624, "s": 5608, "text": " Malhar Lathkar" }, { "code": null, "e": 5659, "s": 5624, "text": "\n 25 Lectures \n 2.5 hours \n" }, { "code": null, "e": 5673, "s": 5659, "text": " Anadi Sharma" }, { "code": null, "e": 5707, "s": 5673, "text": "\n 126 Lectures \n 7 hours \n" }, { "code": null, "e": 5721, "s": 5707, "text": " Tushar Kale" }, { "code": null, "e": 5758, "s": 5721, "text": "\n 119 Lectures \n 17.5 hours \n" }, { "code": null, "e": 5773, "s": 5758, "text": " Monica Mittal" }, { "code": null, "e": 5806, "s": 5773, "text": "\n 76 Lectures \n 7 hours \n" }, { "code": null, "e": 5825, "s": 5806, "text": " Arnab Chakraborty" }, { "code": null, "e": 5832, "s": 5825, "text": " Print" }, { "code": null, "e": 5843, "s": 5832, "text": " Add Notes" } ]
How to Code Memory Efficient Functions with Python Generators | by Erdem Isbilen | Towards Data Science
Generators are special functions that return a lazy iterator which we can iterate over to handle one unit of data at a time. As lazy iterators do not store the whole content of data in the memory, they are commonly used to work with data streams and large datasets.
Generators in Python are very similar to normal functions, with some characteristic differences listed below:
Generator functions have a yield expression, instead of the return used in normal functions.
Both yield and return statements return a value from a function. While the return statement ends the function completely, the yield statement suspends the function by keeping all its state in memory for later use.
When the generator function yields, the function is not terminated. Instead, the yield expression pauses the function and gives control over to the caller.
Once fully iterated, generators terminate and raise a StopIteration exception.
This post will introduce you to the basics of generators in Python.
Let's write some Python3 code that contains simple examples of implementing generators:
# Simple Generator
def my_simple_generator():
    k = 0
    yield k
    k = 1
    yield k
    k = 2
    yield k

my_numbers = my_simple_generator()
print(next(my_numbers))
print(next(my_numbers))
print(next(my_numbers))
# Below call raises StopIteration as generator is fully iterated
# print(next(my_numbers))

# Defining Generators with Loop
def my_generator_with_loop(my_str):
    length = len(my_str)
    for k in range(length):
        yield my_str[k]

my_text = my_generator_with_loop("Coding")
for char in my_text:
    print(char)

# Defining Generators with Expressions
my_generator_expression = (number**2 for number in range(4))
print (sum(my_generator_expression))

# Defining Generator Pipeline
my_generator_01 = (number**2 for number in range(40))
my_generator_02 = (number-5 for number in my_generator_01)
print(sum(my_generator_02))
# Simple Generator
def my_simple_generator():
    k = 0
    yield k
    k = 1
    yield k
    k = 2
    yield k

my_numbers = my_simple_generator()
print(next(my_numbers))
print(next(my_numbers))
print(next(my_numbers))
# Below call raises StopIteration as generator is fully iterated
# print(next(my_numbers))

#Output
0
1
2
Generator functions are defined as normal functions. The only difference in the code structure is the yield expression used instead of return.
We have three yield expressions in our example above. This means that we can iterate the generator a maximum of 3 times. If we iterate it more than 3 times, the generator will raise a StopIteration exception.
Once we define the generator function, we can assign it to a variable. With the assignment, the my_numbers variable points to the generator object. Note that we are not iterating the generator with the assignment. To manually iterate the generator, we can use the next(my_numbers) notation.
In every iteration step, the value of k and the inner state of my_simple_generator are remembered by Python.
To simplify the generator definition, we can define and iterate the generators with the help of for loops.
# Defining Generators with Loop
def my_generator_with_loop(my_str):
    length = len(my_str)
    for k in range(length):
        yield my_str[k]

my_text = my_generator_with_loop("Coding")
for char in my_text:
    print(char)

#Output
C
o
d
i
n
g
We can further simplify the definition of generators with generator expressions. The syntax for creating a generator expression is very similar to the one used for list comprehension. Round brackets are used to create generator expressions, instead of the square brackets used in list comprehension.
While a list comprehension returns and stores the whole content in memory, a generator expression returns and stores only the portion of data that is demanded.
# Defining Generators with Expressions
my_generator_expression = (number**2 for number in range(4))
print (sum(my_generator_expression))

#Output
14
To create complex pipelines, multiple generators can be stacked easily as shown in the below example.
# Defining Generator Pipeline
my_generator_01 = (number**2 for number in range(40))
my_generator_02 = (number-5 for number in my_generator_01)
print(sum(my_generator_02))

#Output
20340
Generators are memory-friendly as they return and store the portion of data only when it is demanded.
Generators simplify the iterator definition process.
We can define generators with generator expressions or generator functions.
We can develop memory-efficient data pipelines by using multiple generators.
In this post, I explained the basics of generators in Python. The code in this post is available in my GitHub repository.
I hope you found this post useful. Thank you for reading!
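As a rough, hedged illustration of the memory difference described above (the exact byte counts vary across Python versions and platforms), we can compare the size of a fully built list with the size of an equivalent generator expression:
import sys

# The list comprehension materialises every element up front,
# while the generator expression stays a small, fixed-size object.
numbers_list = [number**2 for number in range(100000)]
numbers_gen = (number**2 for number in range(100000))

print(sys.getsizeof(numbers_list))  # hundreds of kilobytes
print(sys.getsizeof(numbers_gen))   # only a couple of hundred bytes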
[ { "code": null, "e": 438, "s": 172, "text": "Generators are special functions that return a lazy iterator which we can iterate over to handle one unit of data at a time. As lazy iterators do not store the whole content of data in the memory, they are commonly used to work with data streams and large datasets." }, { "code": null, "e": 547, "s": 438, "text": "Generators in Python are very similar to normal functions with some characteristic differences listed below;" }, { "code": null, "e": 634, "s": 547, "text": "Generator functions have yield expression, instead of return used in normal functions." }, { "code": null, "e": 848, "s": 634, "text": "Both yield and return statements return a value from a function. While the return statement ends the function completely, yield statement suspends the function by keeping all its state in the memory for later use." }, { "code": null, "e": 1000, "s": 848, "text": "When the generator function yields, the function is not terminated. Instead, yield expression pauses the function and gives control over to the caller." }, { "code": null, "e": 1078, "s": 1000, "text": "After fully iterated, generators terminate and raise stopIteration exception." }, { "code": null, "e": 1146, "s": 1078, "text": "This post will introduce you to the basics of generators in Python." }, { "code": null, "e": 1231, "s": 1146, "text": "Let’s write a Python3 code that contains simple examples of implementing generators:" }, { "code": null, "e": 2011, "s": 1231, "text": "# Simple Generatordef my_simple_generator(): k = 0 yield kk = 1 yield kk = 2 yield kmy_numbers = my_simple_generator()print(next(my_numbers))print(next(my_numbers))print(next(my_numbers))# Below call raises StopIteration as generator is fully iterated# print(next(my_numbers))# Defining Generators with Loopdef my_generator_with_loop(my_str): length = len(my_str) for k in range(length): yield my_str[k]my_text = my_generator_with_loop(\"Coding\")for char in my_text: print(char)# Defining Generators with Expressionsmy_generator_expression = (number**2 for number in range(4))print (sum(my_generator_expression))# Defining Generator Pipelinemy_generator_01 = (number**2 for number in range(40))my_generator_02 = (number-5 for number in my_generator_01)print(sum(my_generator_02))" }, { "code": null, "e": 2298, "s": 2011, "text": "# Simple Generatordef my_simple_generator(): k = 0 yield kk = 1 yield kk = 2 yield kmy_numbers = my_simple_generator()print(next(my_numbers))print(next(my_numbers))print(next(my_numbers))# Below call raises StopIteration as generator is fully iterated# print(next(my_numbers))#Output012" }, { "code": null, "e": 2441, "s": 2298, "text": "Generator functions are defined as normal functions. The only difference in the code structure is the yield expression used instead of return." }, { "code": null, "e": 2647, "s": 2441, "text": "We have three yield expression in our example above. This means that we can iterate the generator maximum of 3 times. If we iterate it more than 3 times, the generator will raise a StopIteration exception." }, { "code": null, "e": 2929, "s": 2647, "text": "Once we define the generator function, we can assign it to a variable. With the assignment, my_numbers variable points to the generator object. Note that we are not iterating the generator with the assignment. To manually iterate the generator, we can use next(my_number) notation." 
}, { "code": null, "e": 3043, "s": 2929, "text": "In every iteration step, the value of k, and the inner state of the my_simple_generator are remembered by Python." }, { "code": null, "e": 3149, "s": 3043, "text": "To simplify the generator definition we can define and iterate the generators with the help of for loops." }, { "code": null, "e": 3365, "s": 3149, "text": "# Defining Generators with Loopdef my_generator_with_loop(my_str): length = len(my_str) for k in range(length): yield my_str[k]my_text = my_generator_with_loop(\"Coding\")for char in my_text: print(char)#OutputCoding" }, { "code": null, "e": 3665, "s": 3365, "text": "We can further simplify the definition of generators with generator expressions. The syntax for creating a generator expression is very similar to the one used for list comprehension. Round brackets are used to create generator expressions, instead of square brackets used in the list comprehension." }, { "code": null, "e": 3826, "s": 3665, "text": "As list comprehension returns and stores the whole content in the memory, generator expressions just returns and stores the portion of data when it is demanded." }, { "code": null, "e": 3970, "s": 3826, "text": "# Defining Generators with Expressionsmy_generator_expression = (number**2 for number in range(4))print (sum(my_generator_expression))#Output14" }, { "code": null, "e": 4072, "s": 3970, "text": "To create complex pipelines, multiple generators can be stacked easily as shown in the below example." }, { "code": null, "e": 4252, "s": 4072, "text": "# Defining Generator Pipelinemy_generator_01 = (number**2 for number in range(40))my_generator_02 = (number-5 for number in my_generator_01)print(sum(my_generator_02))#Output20340" }, { "code": null, "e": 4354, "s": 4252, "text": "Generators are memory-friendly as they return and store the portion of data only when it is demanded." }, { "code": null, "e": 4406, "s": 4354, "text": "Generators simplify the iterator definition process" }, { "code": null, "e": 4483, "s": 4406, "text": "We can define generators with generators expressions or generator functions." }, { "code": null, "e": 4560, "s": 4483, "text": "We can develop memory-efficient data pipelines by using multiple generators." }, { "code": null, "e": 4622, "s": 4560, "text": "In this post, I explained the basics of generators in Python." }, { "code": null, "e": 4682, "s": 4622, "text": "The code in this post is available in my GitHub repository." }, { "code": null, "e": 4717, "s": 4682, "text": "I hope you found this post useful." } ]
DAX Statistical - PERCENTILEX.EXC function
Returns the percentile number of an expression evaluated for each row in a table.
DAX PERCENTILEX.EXC function is new in Excel 2016.
PERCENTILEX.EXC (<table>, <expression>, <k>) 
table − The table containing the rows for which the expression will be evaluated.
expression − The expression to be evaluated for each row of the table.
k − A number, 0 < k < 1.
Return value − The percentile number of an expression evaluated for each row in a table.
If k <= 0 or k >= 1, then it is out of range and an error is returned.
If k is non-numeric, an error is returned.
If k is blank, the percentile rank 1 / (n + 1) is used, which returns the smallest value.
If k is not a multiple of 1 / (n + 1), PERCENTILEX.EXC will interpolate to determine the value at the kth percentile.
PERCENTILEX.EXC will interpolate when the value for the specified percentile is between two values in the array. If it cannot interpolate for the specified k percentile, an error is returned.
= PERCENTILEX.EXC (Sales,Sales[Sales Amount],0.25) 
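As a hedged illustration of the interpolation remark above (assuming the usual exclusive-percentile rank formula, rank = k × (n + 1)), take five evaluated values 10, 20, 30, 40 and 50 with k = 0.25. The rank works out to 1.5, so the result is interpolated halfway between the first and second smallest values −
0.25 × (5 + 1) = 1.5 ⇒ 10 + 0.5 × (20 − 10) = 15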
[ { "code": null, "e": 2083, "s": 2001, "text": "Returns the percentile number of an expression evaluated for each row in a table." }, { "code": null, "e": 2134, "s": 2083, "text": "DAX PERCENTILEX.EXC function is new in Excel 2016." }, { "code": null, "e": 2181, "s": 2134, "text": "PERCENTILEX.EXC (<table>, <expression>, <k>) \n" }, { "code": null, "e": 2187, "s": 2181, "text": "table" }, { "code": null, "e": 2261, "s": 2187, "text": "The table containing the rows for which the expression will be evaluated." }, { "code": null, "e": 2272, "s": 2261, "text": "expression" }, { "code": null, "e": 2330, "s": 2272, "text": "The expression to be evaluated for each row of the table." }, { "code": null, "e": 2332, "s": 2330, "text": "k" }, { "code": null, "e": 2353, "s": 2332, "text": "A number, 0 < k < 1." }, { "code": null, "e": 2427, "s": 2353, "text": "The percentile number of an expression evaluated for each row in a table." }, { "code": null, "e": 2502, "s": 2427, "text": "If k <= zero, or k >= 1, then it is out of range and an error is returned." }, { "code": null, "e": 2577, "s": 2502, "text": "If k <= zero, or k >= 1, then it is out of range and an error is returned." }, { "code": null, "e": 2620, "s": 2577, "text": "If k is non-numeric, an error is returned." }, { "code": null, "e": 2663, "s": 2620, "text": "If k is non-numeric, an error is returned." }, { "code": null, "e": 2735, "s": 2663, "text": "If k is blank, percentile rank of 1 / (n+1) returns the smallest value." }, { "code": null, "e": 2807, "s": 2735, "text": "If k is blank, percentile rank of 1 / (n+1) returns the smallest value." }, { "code": null, "e": 2925, "s": 2807, "text": "If k is not a multiple of 1 / (n + 1), PERCENTILEX.EXC will interpolate to determine the value at the kth percentile." }, { "code": null, "e": 3043, "s": 2925, "text": "If k is not a multiple of 1 / (n + 1), PERCENTILEX.EXC will interpolate to determine the value at the kth percentile." }, { "code": null, "e": 3235, "s": 3043, "text": "PERCENTILEX.EXC will interpolate when the value for the specified percentile is between two values in the array. If it cannot interpolate for the k percentile specified, an error is returned." }, { "code": null, "e": 3287, "s": 3235, "text": "= PERCENTILEX.EXC (Sales,Sales[Sales Amount],0.25) " }, { "code": null, "e": 3322, "s": 3287, "text": "\n 53 Lectures \n 5.5 hours \n" }, { "code": null, "e": 3336, "s": 3322, "text": " Abhay Gadiya" }, { "code": null, "e": 3369, "s": 3336, "text": "\n 24 Lectures \n 2 hours \n" }, { "code": null, "e": 3383, "s": 3369, "text": " Randy Minder" }, { "code": null, "e": 3418, "s": 3383, "text": "\n 26 Lectures \n 4.5 hours \n" }, { "code": null, "e": 3432, "s": 3418, "text": " Randy Minder" }, { "code": null, "e": 3439, "s": 3432, "text": " Print" }, { "code": null, "e": 3450, "s": 3439, "text": " Add Notes" } ]
How to use Python classes effectively | by Ari Joury | Towards Data Science
“There should only be one — and preferably only one — obvious way to do it”, says the Zen of Python. Yet there are areas where even seasoned programmers debate what the right or wrong way to do things is. One of these areas are Python classes. Borrowed from Object-Oriented Programming, they’re quite beautiful constructs which you can expand and modify as you code. The big problem is that classes can make your code more complicated than necessary, and make it harder to read and maintain. So when should you use classes, and when should you use standard functions instead? This story is a deeper dive into the matter. So if you’re in a hurry, you can skip the following two sections and scroll right down to the sections When to use classes and When classes are a bad idea. Classes are objects that allow you to group data structures and procedures in one place. For example, imagine you’re writing a piece of code to organize the inventory of a clothes shop. You could create a class that takes each item of clothing in the shop, and stores key quantities such as the type of clothing, and its color and size. We’ll add an option to add a price, too. class Clothing(object): def __init__(self, type, color, size, price=None): self.type = type self.color = color self.size = size self.price = price Now, we can define various instances of the class and keep them organized: bluejeans = Clothing("jeans", "blue", 12)redtshirt = Clothing("t-shirt", "red", 10, 10) We would add these two lines without indent, after the definition of the class. This code will run, but it’s not doing very much. We can add a method to set the price directly underneath the __init__ function, within the class definition: def set_price(self, price): """Set the price of an item of clothing.""" self.price = price print(f"Setting the price of the {self.color} {self.type} to ${price}.") We could also add some routines to tell us the price, or to promote an item by reducing the price: def get_price(self): """Get the price of an item of clothing, if price is set.""" try: print(f"The {self.color} {self.type} costs ${self.price}.") except: print(f"The price of the {self.color} {self.type} hasn't been set yet!") def promote(self, percentage): """Lower the price, if initial price is set.""" try: self.price = self.price * (1-percentage/100) print(f"The price of the {self.color} {self.type} has been reduced by {percentage} percent! It now only costs ${self.price:.0f}.") except: print(f"Oops. Set an initial price first!") Now, we can add some calls of our methods after the lines where we’ve initialized the instances of the class: print("blue jeans -------------------")bluejeans.promote(20)bluejeans.set_price(30)bluejeans.get_price()print("red t-shirt ------------------")redtshirt.get_price()redtshirt.promote(20) If you run the script, the output will be the following: blue jeans -------------------Oops. Set an initial price first!Setting the price of the blue jeans to $30.The blue jeans costs $30.red t-shirt ------------------The red t-shirt costs $10.The price of the red t-shirt has been reduced by 20 percent! It now only costs $8. If you need to add more routines, you can just put them in the class definition. The nicest part of all of this is that you can add and delete as many objects as you like. Deleting an attribute goes like so: del redtshirt.price And if you want to delete an entire object, you do like so: del redtshirt All of this is neat, simple, and expandable. 
Try doing this implementation with standard functions, and you’ll probably have a lot more trouble dealing with it. From a theoretical point of view, there are more reasons why Python classes are a beautiful concept in many situations. If you’ve attended lectures in computer science, it’s pretty likely that you’ve stumbled across the principle of “separation of concerns”. It basically means that you split up your program into different sections that deal with different pieces of information. Classes, by their nature, allow you to keep to that principle. In other words, when you set out writing a program and you’re thinking in terms of classes, you might be building a good architecture because you’re ensuring that each problem has its own place. Thinking in classes not only helps you keep features separate, but also independent of one another. Not only does this keep things neat and tidy; it is also a lot easier for maintenance. Say you found a bug in one class: you could fix that bug without worrying about the other classes because there is no connection between them. Likewise, you could add new features without fearing that you’ll get tangled up with other pieces of the software. By using classes, you’re ensuring that methods are only used on one set of data. This adds to the security of the code because you’re less likely to use functions where they don’t belong. Storing data structures and methods together is also called encapsulation. Since all of this is hidden from the end user, this allows you to modify data structures and methods without compromising the user experience. For example, you might have to build a method that is quite complex. The advantage of encapsulation is that a user doesn’t need to understand any of that complexity because they can use it like a black box. It is completely possible to build black-box-functions without using classes. With classes, however, this type of functioning is practically ensured. With classes, you only have to define a data structure once. When you define an instance of a class, that instance automatically inherits the given structure. In addition, inheritance makes it quite easy to delete or modify pieces of an instance or the whole class. This makes the whole construct more flexible. towardsdatascience.com With so many advantages, it might be tempting to use a class for everything and anything. In practice, however, there are situations where using classes makes perfect sense, and others where it doesn’t. As a rule of thumb, when you have a set of data with a specific structure and you want to perform specific methods on it, use a class. That is only valid, however, if you use multiple data structures in your code. If your whole code won’t ever deal with more than one structure. If you only have one data structure, it really depends on the problem at hand. You can get a rough idea by sketching out your program with or without a class; usually you’ll see pretty soon which solution is simpler. Another rule of thumb is this: If you’re tempted to use global variables to access data, it might be easier to define a class and build a method to access each piece of data. A heap, unlike a stack, is a way of storing data in a more flexible way because it has unlimited memory size and allows you to resize variables. On the other hand, accessing variables is slower with a heap and you must manage the memory yourself. If a heap suits your purposes better, you don’t need to define a class. 
Python’s inbuilt heapq, or heap queue algorithm, does the job for you. You might be tempted to use a class because you’re constantly calling a function with the same arguments. In most cases, it’s a better idea to use functools.partial() instead. It’s quite simple to implement. Say you have a function that multiplies two values, but you keep using it to double values. To avoid duplicate code, you could write this: from functools import partialdef multiply(x,y): return x * ydoubling = partial(multiply,2)print(doubling(4)) Way easier than defining a new class! Some programmers get obsessed with classes because they’re so flexible and expandable. That’s why, even at reputable companies and seasoned developers, you might encounter code like this: class newclass: """defining a new class to do something awesome""" pass The idea behind it is that, as the code grows, this class might be needed for whichever new data structure and the methods that go with it. But this is no good habit! Guess what these three lines of code do? Exactly nothing. And those lines are not exactly difficult to code. If you think you’ll need another class later on, and you really think that you could forget about that in the future, you could always leave a comment like this: # initiate a new class here if needed for purpose XY Although you’ll want to make your code expandable and idiot-proof, initializing a class that does nothing is a usually bad idea. towardsdatascience.com Classes are without doubt a powerful concept. Used correctly, they can make your code tidier, more readable and maintainable. But they get overused a lot. And when used wrongly, they can pollute your code until you understand nothing. Sometimes, especially in simpler programs, you could use a class or a bunch of generic functions, and the code would be very similar in its length and complexity. As programs get more complex, the differences get more prominent. In this sense, the Zen of Python has upheld its verdict: most of the time, there is indeed only one good way of doing things, whether that is with classes or without. It is, however, not always completely obvious. The difficult part is recognizing which way is the good one. If you have other questions regarding Python or other programming languages, let me know in the comments! Thanks to Lucas Soares for asking for tips on when to use classes and when not.
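As a brief appendix − the heapq alternative mentioned above never got a snippet of its own, so here is a minimal, hedged sketch (the variable names are ours):
import heapq

# A plain list managed by heapq functions -- no class definition involved.
prices = [30, 10, 50, 20]
heapq.heapify(prices)          # rearrange the list into heap order, in place
heapq.heappush(prices, 5)      # add a new value
print(heapq.heappop(prices))   # 5  -- always removes the smallest value
print(heapq.heappop(prices))   # 10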
[ { "code": null, "e": 377, "s": 172, "text": "“There should only be one — and preferably only one — obvious way to do it”, says the Zen of Python. Yet there are areas where even seasoned programmers debate what the right or wrong way to do things is." }, { "code": null, "e": 539, "s": 377, "text": "One of these areas are Python classes. Borrowed from Object-Oriented Programming, they’re quite beautiful constructs which you can expand and modify as you code." }, { "code": null, "e": 748, "s": 539, "text": "The big problem is that classes can make your code more complicated than necessary, and make it harder to read and maintain. So when should you use classes, and when should you use standard functions instead?" }, { "code": null, "e": 949, "s": 748, "text": "This story is a deeper dive into the matter. So if you’re in a hurry, you can skip the following two sections and scroll right down to the sections When to use classes and When classes are a bad idea." }, { "code": null, "e": 1135, "s": 949, "text": "Classes are objects that allow you to group data structures and procedures in one place. For example, imagine you’re writing a piece of code to organize the inventory of a clothes shop." }, { "code": null, "e": 1327, "s": 1135, "text": "You could create a class that takes each item of clothing in the shop, and stores key quantities such as the type of clothing, and its color and size. We’ll add an option to add a price, too." }, { "code": null, "e": 1505, "s": 1327, "text": "class Clothing(object): def __init__(self, type, color, size, price=None): self.type = type self.color = color self.size = size self.price = price" }, { "code": null, "e": 1580, "s": 1505, "text": "Now, we can define various instances of the class and keep them organized:" }, { "code": null, "e": 1668, "s": 1580, "text": "bluejeans = Clothing(\"jeans\", \"blue\", 12)redtshirt = Clothing(\"t-shirt\", \"red\", 10, 10)" }, { "code": null, "e": 1907, "s": 1668, "text": "We would add these two lines without indent, after the definition of the class. This code will run, but it’s not doing very much. We can add a method to set the price directly underneath the __init__ function, within the class definition:" }, { "code": null, "e": 2096, "s": 1907, "text": " def set_price(self, price): \"\"\"Set the price of an item of clothing.\"\"\" self.price = price print(f\"Setting the price of the {self.color} {self.type} to ${price}.\")" }, { "code": null, "e": 2195, "s": 2096, "text": "We could also add some routines to tell us the price, or to promote an item by reducing the price:" }, { "code": null, "e": 2839, "s": 2195, "text": " def get_price(self): \"\"\"Get the price of an item of clothing, if price is set.\"\"\" try: print(f\"The {self.color} {self.type} costs ${self.price}.\") except: print(f\"The price of the {self.color} {self.type} hasn't been set yet!\") def promote(self, percentage): \"\"\"Lower the price, if initial price is set.\"\"\" try: self.price = self.price * (1-percentage/100) print(f\"The price of the {self.color} {self.type} has been reduced by {percentage} percent! It now only costs ${self.price:.0f}.\") except: print(f\"Oops. 
Set an initial price first!\")" }, { "code": null, "e": 2949, "s": 2839, "text": "Now, we can add some calls of our methods after the lines where we’ve initialized the instances of the class:" }, { "code": null, "e": 3135, "s": 2949, "text": "print(\"blue jeans -------------------\")bluejeans.promote(20)bluejeans.set_price(30)bluejeans.get_price()print(\"red t-shirt ------------------\")redtshirt.get_price()redtshirt.promote(20)" }, { "code": null, "e": 3192, "s": 3135, "text": "If you run the script, the output will be the following:" }, { "code": null, "e": 3462, "s": 3192, "text": "blue jeans -------------------Oops. Set an initial price first!Setting the price of the blue jeans to $30.The blue jeans costs $30.red t-shirt ------------------The red t-shirt costs $10.The price of the red t-shirt has been reduced by 20 percent! It now only costs $8." }, { "code": null, "e": 3543, "s": 3462, "text": "If you need to add more routines, you can just put them in the class definition." }, { "code": null, "e": 3670, "s": 3543, "text": "The nicest part of all of this is that you can add and delete as many objects as you like. Deleting an attribute goes like so:" }, { "code": null, "e": 3691, "s": 3670, "text": "del redtshirt.price " }, { "code": null, "e": 3751, "s": 3691, "text": "And if you want to delete an entire object, you do like so:" }, { "code": null, "e": 3765, "s": 3751, "text": "del redtshirt" }, { "code": null, "e": 3926, "s": 3765, "text": "All of this is neat, simple, and expandable. Try doing this implementation with standard functions, and you’ll probably have a lot more trouble dealing with it." }, { "code": null, "e": 4046, "s": 3926, "text": "From a theoretical point of view, there are more reasons why Python classes are a beautiful concept in many situations." }, { "code": null, "e": 4307, "s": 4046, "text": "If you’ve attended lectures in computer science, it’s pretty likely that you’ve stumbled across the principle of “separation of concerns”. It basically means that you split up your program into different sections that deal with different pieces of information." }, { "code": null, "e": 4565, "s": 4307, "text": "Classes, by their nature, allow you to keep to that principle. In other words, when you set out writing a program and you’re thinking in terms of classes, you might be building a good architecture because you’re ensuring that each problem has its own place." }, { "code": null, "e": 4752, "s": 4565, "text": "Thinking in classes not only helps you keep features separate, but also independent of one another. Not only does this keep things neat and tidy; it is also a lot easier for maintenance." }, { "code": null, "e": 5010, "s": 4752, "text": "Say you found a bug in one class: you could fix that bug without worrying about the other classes because there is no connection between them. Likewise, you could add new features without fearing that you’ll get tangled up with other pieces of the software." }, { "code": null, "e": 5198, "s": 5010, "text": "By using classes, you’re ensuring that methods are only used on one set of data. This adds to the security of the code because you’re less likely to use functions where they don’t belong." }, { "code": null, "e": 5416, "s": 5198, "text": "Storing data structures and methods together is also called encapsulation. Since all of this is hidden from the end user, this allows you to modify data structures and methods without compromising the user experience." 
}, { "code": null, "e": 5623, "s": 5416, "text": "For example, you might have to build a method that is quite complex. The advantage of encapsulation is that a user doesn’t need to understand any of that complexity because they can use it like a black box." }, { "code": null, "e": 5773, "s": 5623, "text": "It is completely possible to build black-box-functions without using classes. With classes, however, this type of functioning is practically ensured." }, { "code": null, "e": 5932, "s": 5773, "text": "With classes, you only have to define a data structure once. When you define an instance of a class, that instance automatically inherits the given structure." }, { "code": null, "e": 6085, "s": 5932, "text": "In addition, inheritance makes it quite easy to delete or modify pieces of an instance or the whole class. This makes the whole construct more flexible." }, { "code": null, "e": 6108, "s": 6085, "text": "towardsdatascience.com" }, { "code": null, "e": 6311, "s": 6108, "text": "With so many advantages, it might be tempting to use a class for everything and anything. In practice, however, there are situations where using classes makes perfect sense, and others where it doesn’t." }, { "code": null, "e": 6525, "s": 6311, "text": "As a rule of thumb, when you have a set of data with a specific structure and you want to perform specific methods on it, use a class. That is only valid, however, if you use multiple data structures in your code." }, { "code": null, "e": 6807, "s": 6525, "text": "If your whole code won’t ever deal with more than one structure. If you only have one data structure, it really depends on the problem at hand. You can get a rough idea by sketching out your program with or without a class; usually you’ll see pretty soon which solution is simpler." }, { "code": null, "e": 6982, "s": 6807, "text": "Another rule of thumb is this: If you’re tempted to use global variables to access data, it might be easier to define a class and build a method to access each piece of data." }, { "code": null, "e": 7229, "s": 6982, "text": "A heap, unlike a stack, is a way of storing data in a more flexible way because it has unlimited memory size and allows you to resize variables. On the other hand, accessing variables is slower with a heap and you must manage the memory yourself." }, { "code": null, "e": 7372, "s": 7229, "text": "If a heap suits your purposes better, you don’t need to define a class. Python’s inbuilt heapq, or heap queue algorithm, does the job for you." }, { "code": null, "e": 7548, "s": 7372, "text": "You might be tempted to use a class because you’re constantly calling a function with the same arguments. In most cases, it’s a better idea to use functools.partial() instead." }, { "code": null, "e": 7719, "s": 7548, "text": "It’s quite simple to implement. Say you have a function that multiplies two values, but you keep using it to double values. To avoid duplicate code, you could write this:" }, { "code": null, "e": 7831, "s": 7719, "text": "from functools import partialdef multiply(x,y): return x * ydoubling = partial(multiply,2)print(doubling(4))" }, { "code": null, "e": 7869, "s": 7831, "text": "Way easier than defining a new class!" }, { "code": null, "e": 8057, "s": 7869, "text": "Some programmers get obsessed with classes because they’re so flexible and expandable. 
That’s why, even at reputable companies and seasoned developers, you might encounter code like this:" }, { "code": null, "e": 8135, "s": 8057, "text": "class newclass: \"\"\"defining a new class to do something awesome\"\"\" pass" }, { "code": null, "e": 8302, "s": 8135, "text": "The idea behind it is that, as the code grows, this class might be needed for whichever new data structure and the methods that go with it. But this is no good habit!" }, { "code": null, "e": 8573, "s": 8302, "text": "Guess what these three lines of code do? Exactly nothing. And those lines are not exactly difficult to code. If you think you’ll need another class later on, and you really think that you could forget about that in the future, you could always leave a comment like this:" }, { "code": null, "e": 8626, "s": 8573, "text": "# initiate a new class here if needed for purpose XY" }, { "code": null, "e": 8755, "s": 8626, "text": "Although you’ll want to make your code expandable and idiot-proof, initializing a class that does nothing is a usually bad idea." }, { "code": null, "e": 8778, "s": 8755, "text": "towardsdatascience.com" }, { "code": null, "e": 8904, "s": 8778, "text": "Classes are without doubt a powerful concept. Used correctly, they can make your code tidier, more readable and maintainable." }, { "code": null, "e": 9013, "s": 8904, "text": "But they get overused a lot. And when used wrongly, they can pollute your code until you understand nothing." }, { "code": null, "e": 9242, "s": 9013, "text": "Sometimes, especially in simpler programs, you could use a class or a bunch of generic functions, and the code would be very similar in its length and complexity. As programs get more complex, the differences get more prominent." }, { "code": null, "e": 9517, "s": 9242, "text": "In this sense, the Zen of Python has upheld its verdict: most of the time, there is indeed only one good way of doing things, whether that is with classes or without. It is, however, not always completely obvious. The difficult part is recognizing which way is the good one." } ]
Bootstrap alert-success class
The .alert-success class in Bootstrap indicates a positive action. You can try to run the following code to implement the alert-success class in Bootstrap − Live Demo <!DOCTYPE html> <html> <head> <title>Bootstrap Example</title> <link href = "/bootstrap/css/bootstrap.min.css" rel = "stylesheet"> <script src = "/scripts/jquery.min.js"></script> <script src = "/bootstrap/js/bootstrap.min.js"></script> </head> <body> <div class = "alert alert-success alert-dismissable"> <button type = "button" class = "close" data-dismiss = "alert" aria-hidden = "true"> × </button> Success! </div> </body> </html>
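If the dismiss button is not needed, the same class can be used on a plain div as well. A minimal sketch (the message text here is our own) −
<div class = "alert alert-success">
   <strong>Well done!</strong> Your changes have been saved.
</div>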
[ { "code": null, "e": 1129, "s": 1062, "text": "The .alert-success class in Bootstrap indicates a positive action." }, { "code": null, "e": 1219, "s": 1129, "text": "You can try to run the following code to implement the alert-success class in Bootstrap −" }, { "code": null, "e": 1229, "s": 1219, "text": "Live Demo" }, { "code": null, "e": 1751, "s": 1229, "text": "<!DOCTYPE html>\n<html>\n <head>\n <title>Bootstrap Example</title>\n <link href = \"/bootstrap/css/bootstrap.min.css\" rel = \"stylesheet\">\n <script src = \"/scripts/jquery.min.js\"></script>\n <script src = \"/bootstrap/js/bootstrap.min.js\"></script>\n </head>\n <body>\n <div class = \"alert alert-success alert-dismissable\">\n <button type = \"button\" class = \"close\" data-dismiss = \"alert\" aria-hidden = \"true\">\n ×\n </button>\n Success!\n </div>\n </body>\n</html>" } ]
Sorting a vector of custom objects using C++ STL
You can sort a vector of custom objects using the C++ STL function std::sort. The sort function has an overloaded form that takes as arguments first, last, comparator. The first and last are iterators to first and last elements of the container. The comparator is a predicate function that can be used to tell how to sort the container. #include<iostream> #include<algorithm> #include<vector> using namespace std; struct MyStruct { int key; string data; MyStruct(int key, string data) { this -> key = key; this -> data = data; } }; int main() { std::vector<MyStruct> vec; vec.push_back(MyStruct(4, "test")); vec.push_back(MyStruct(2, "is")); vec.push_back(MyStruct(3, "a")); vec.push_back(MyStruct(1, "this")); // Using lambda expressions in C++11 sort(vec.begin(), vec.end(), [](const MyStruct& lhs, const MyStruct& rhs) { return lhs.key < rhs.key; }); for(auto it = vec.begin(); it != vec.end(); it++) { cout << it -> data << endl; } } This will give the output − this is a test If you're working on older C++ versions, you can pass a function reference as well − //define the function: bool comparator(const MyStruct& lhs, const MyStruct& rhs) { return lhs.key < rhs.key; } // pass it to sort: sort(vec.begin(), vec.end(), &comparator); You can also overload the < operator in the class/struct and use the sort(first, last) form directly. So when sorting, it will take this function to compare the items.
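A minimal sketch of that operator< approach, reusing the MyStruct example from above (C++11 or later) −
#include<iostream>
#include<algorithm>
#include<vector>
#include<string>

using namespace std;
struct MyStruct {
   int key;
   string data;
   MyStruct(int key, string data) {
      this -> key = key;
      this -> data = data;
   }
   // The comparison lives inside the struct, so sort needs no third argument.
   bool operator<(const MyStruct& other) const {
      return key < other.key;
   }
};
int main() {
   std::vector<MyStruct> vec;
   vec.push_back(MyStruct(4, "test"));
   vec.push_back(MyStruct(2, "is"));
   vec.push_back(MyStruct(3, "a"));
   vec.push_back(MyStruct(1, "this"));

   // No comparator passed -- std::sort uses MyStruct::operator< directly.
   sort(vec.begin(), vec.end());
   for(auto it = vec.begin(); it != vec.end(); it++) {
      cout << it -> data << endl;
   }
}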
[ { "code": null, "e": 1400, "s": 1062, "text": "You can sort a vector of custom objects using the C++ STL function std::sort. The sort function has an overloaded form that takes as arguments first, last, comparator. The first and last are iterators to first and last elements of the container. The comparator is a predicate function that can be used to tell how to sort the container. " }, { "code": null, "e": 2072, "s": 1400, "text": "#include<iostream>\n#include<algorithm>\n#include<vector>\n\nusing namespace std;\nstruct MyStruct {\n int key;\n string data;\n MyStruct(int key, string data) {\n this -> key = key;\n this -> data = data;\n }\n};\nint main() {\n std::vector<MyStruct> vec;\n vec.push_back(MyStruct(4, \"test\"));\n vec.push_back(MyStruct(2, \"is\"));\n vec.push_back(MyStruct(3, \"a\"));\n vec.push_back(MyStruct(1, \"this\"));\n \n // Using lambda expressions in C++11\n sort(vec.begin(), vec.end(), [](const MyStruct& lhs, const MyStruct& rhs) {\n return lhs.key < rhs.key;\n });\n for(auto it = vec.begin(); it != vec.end(); it++) {\n cout << it -> data << endl;\n }\n}" }, { "code": null, "e": 2100, "s": 2072, "text": "This will give the output −" }, { "code": null, "e": 2115, "s": 2100, "text": "this is a test" }, { "code": null, "e": 2200, "s": 2115, "text": "If you're working on older C++ versions, you can pass a function reference as well −" }, { "code": null, "e": 2377, "s": 2200, "text": "//define the function:\nbool comparator(const MyStruct& lhs, const MyStruct& rhs) {\n return lhs.key < rhs.key;\n}\n// pass it to sort:\nsort(vec.begin(), vec.end(), &comparator);" }, { "code": null, "e": 2545, "s": 2377, "text": "You can also overload the < operator in the class/struct and use the sort(first, last) form directly. So when sorting, it will take this function to compare the items." } ]
How to Alter Multiple Columns at Once in SQL Server? - GeeksforGeeks
16 Nov, 2021 In SQL, sometimes we need to write a single query to update the values of all columns in a table. We will use the UPDATE keyword to achieve this. For this, we use a specific kind of query shown in the below demonstration. For this article, we will be using the Microsoft SQL Server as our database and Select keyword. Step 1: Create a Database. For this use the below command to create a database named GeeksForGeeks. Query: CREATE DATABASE GeeksForGeeks Output: Step 2: Use the GeeksForGeeks database. For this use the below command. Query: USE GeeksForGeeks Output: Step 3: Create a table of FIRM inside the database GeeksForGeeks. This table has 4 columns namely FIRST_NAME, LAST_NAME, SALARY, and BONUS containing the first names, last names, salaries, and bonuses of the members in a firm. Query: CREATE TABLE FIRM( FIRST_NAME VARCHAR(20), LAST_NAME VARCHAR(20), SALARY INT, BONUS INT ); Output: Step 4: Describe the structure of the table FIRM. Query: EXEC SP_COLUMNS FIRM; Output: Step 5: Insert 5 rows into the FIRM table. Query: INSERT INTO FIRM VALUES('ALEX','STONE',10000,1000); INSERT INTO FIRM VALUES('MATT','JONES',20000,2000); INSERT INTO FIRM VALUES('JOHN','STARK',30000,3000); INSERT INTO FIRM VALUES('GARY','SCOTT',40000,4000); INSERT INTO FIRM VALUES('RICHARD','WALT',50000,5000); Output: Step 6: Display all the rows of the FIRM table. Query: SELECT * FROM FIRM; Output: Step 7: Alter multiple(2) columns of the table FIRM by adding 2 columns to the table simultaneously. The 2 columns are JOINING_DATE and LEAVING_DATE containing the date of joining of the member and the date of leaving of the member. Use the keyword ALTER and ADD to achieve this. Syntax: ALTER TABLE TABLE_NAME ADD COLUMN1 DATA_TYPE, COLUMN2 DATA_TYPE........; Query: ALTER TABLE FIRM ADD JOINING_DATE DATE, LEAVING_DATE DATE; Output: Step 8: Describe the structure of the altered table FIRM. Query: EXEC SP_COLUMNS FIRM; Note: The table description now has 2 extra columns. Output: Step 9: Update the table by inserting data into the 2 newly added columns of the FIRM table. Use keyword UPDATE. Syntax: UPDATE TABLE_NAME SET COLUMN1=VALUE, COLUMN2=VALUE WHERE CONDITION; Query: UPDATE FIRM SET JOINING_DATE='01-JAN-2001', LEAVING_DATE='01-JAN-2002' WHERE FIRST_NAME='ALEX'; UPDATE FIRM SET JOINING_DATE='02-FEB-2001', LEAVING_DATE='02-FEB-2002' WHERE FIRST_NAME='MATT'; UPDATE FIRM SET JOINING_DATE='03-MAR-2001', LEAVING_DATE='03-MAR-2002' WHERE FIRST_NAME='JOHN'; UPDATE FIRM SET JOINING_DATE='04-APR-2001', LEAVING_DATE='04-APR-2002' WHERE FIRST_NAME='GARY'; UPDATE FIRM SET JOINING_DATE='05-MAY-2001', LEAVING_DATE='05-MAY-2002' WHERE FIRST_NAME='RICHARD'; Output: Step 10: Display all the rows of the altered FIRM table. Query: SELECT * FROM FIRM; Note: The displayed table now has 2 extra columns. Output: Step 11: Alter multiple(2) columns of the table FIRM by dropping 2 columns from the table simultaneously. The 2 columns are JOINING_DATE and LEAVING_DATE containing the date of joining of the member and the date of leaving of the member. Use the keyword ALTER and DROP to achieve this. Syntax: ALTER TABLE TABLE_NAME DROP COLUMN COLUMN1, COLUMN2........; Query: ALTER TABLE FIRM DROP COLUMN JOINING_DATE,LEAVING_DATE; Output: Step 12: Describe the structure of the altered table FIRM. Query: EXEC SP_COLUMNS FIRM; Note: The table description now has 2 fewer columns. Output: Step 13: Display all the rows of the altered FIRM table. Query: SELECT * FROM FIRM; Note: The displayed table now has 2 fewer columns. 
Output:
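A small addition (not part of the original article): the five separate UPDATE statements in Step 9 can also be collapsed into a single UPDATE that sets both new columns at once, by switching on FIRST_NAME with CASE expressions. This is only a sketch against the FIRM table defined above, using the same dates as Step 9.
-- One UPDATE that fills both newly added columns for every row.
UPDATE FIRM
SET JOINING_DATE = CASE FIRST_NAME
                     WHEN 'ALEX'    THEN '01-JAN-2001'
                     WHEN 'MATT'    THEN '02-FEB-2001'
                     WHEN 'JOHN'    THEN '03-MAR-2001'
                     WHEN 'GARY'    THEN '04-APR-2001'
                     WHEN 'RICHARD' THEN '05-MAY-2001'
                   END,
    LEAVING_DATE = CASE FIRST_NAME
                     WHEN 'ALEX'    THEN '01-JAN-2002'
                     WHEN 'MATT'    THEN '02-FEB-2002'
                     WHEN 'JOHN'    THEN '03-MAR-2002'
                     WHEN 'GARY'    THEN '04-APR-2002'
                     WHEN 'RICHARD' THEN '05-MAY-2002'
                   END;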
[ { "code": null, "e": 24214, "s": 24186, "text": "\n16 Nov, 2021" }, { "code": null, "e": 24532, "s": 24214, "text": "In SQL, sometimes we need to write a single query to update the values of all columns in a table. We will use the UPDATE keyword to achieve this. For this, we use a specific kind of query shown in the below demonstration. For this article, we will be using the Microsoft SQL Server as our database and Select keyword." }, { "code": null, "e": 24632, "s": 24532, "text": "Step 1: Create a Database. For this use the below command to create a database named GeeksForGeeks." }, { "code": null, "e": 24639, "s": 24632, "text": "Query:" }, { "code": null, "e": 24669, "s": 24639, "text": "CREATE DATABASE GeeksForGeeks" }, { "code": null, "e": 24677, "s": 24669, "text": "Output:" }, { "code": null, "e": 24749, "s": 24677, "text": "Step 2: Use the GeeksForGeeks database. For this use the below command." }, { "code": null, "e": 24756, "s": 24749, "text": "Query:" }, { "code": null, "e": 24774, "s": 24756, "text": "USE GeeksForGeeks" }, { "code": null, "e": 24782, "s": 24774, "text": "Output:" }, { "code": null, "e": 25009, "s": 24782, "text": "Step 3: Create a table of FIRM inside the database GeeksForGeeks. This table has 4 columns namely FIRST_NAME, LAST_NAME, SALARY, and BONUS containing the first names, last names, salaries, and bonuses of the members in a firm." }, { "code": null, "e": 25016, "s": 25009, "text": "Query:" }, { "code": null, "e": 25107, "s": 25016, "text": "CREATE TABLE FIRM(\nFIRST_NAME VARCHAR(20),\nLAST_NAME VARCHAR(20),\nSALARY INT,\nBONUS INT\n);" }, { "code": null, "e": 25115, "s": 25107, "text": "Output:" }, { "code": null, "e": 25165, "s": 25115, "text": "Step 4: Describe the structure of the table FIRM." }, { "code": null, "e": 25172, "s": 25165, "text": "Query:" }, { "code": null, "e": 25194, "s": 25172, "text": "EXEC SP_COLUMNS FIRM;" }, { "code": null, "e": 25202, "s": 25194, "text": "Output:" }, { "code": null, "e": 25245, "s": 25202, "text": "Step 5: Insert 5 rows into the FIRM table." }, { "code": null, "e": 25252, "s": 25245, "text": "Query:" }, { "code": null, "e": 25514, "s": 25252, "text": "INSERT INTO FIRM VALUES('ALEX','STONE',10000,1000);\nINSERT INTO FIRM VALUES('MATT','JONES',20000,2000);\nINSERT INTO FIRM VALUES('JOHN','STARK',30000,3000);\nINSERT INTO FIRM VALUES('GARY','SCOTT',40000,4000);\nINSERT INTO FIRM VALUES('RICHARD','WALT',50000,5000);" }, { "code": null, "e": 25522, "s": 25514, "text": "Output:" }, { "code": null, "e": 25570, "s": 25522, "text": "Step 6: Display all the rows of the FIRM table." }, { "code": null, "e": 25577, "s": 25570, "text": "Query:" }, { "code": null, "e": 25597, "s": 25577, "text": "SELECT * FROM FIRM;" }, { "code": null, "e": 25605, "s": 25597, "text": "Output:" }, { "code": null, "e": 25885, "s": 25605, "text": "Step 7: Alter multiple(2) columns of the table FIRM by adding 2 columns to the table simultaneously. The 2 columns are JOINING_DATE and LEAVING_DATE containing the date of joining of the member and the date of leaving of the member. Use the keyword ALTER and ADD to achieve this." 
}, { "code": null, "e": 25893, "s": 25885, "text": "Syntax:" }, { "code": null, "e": 25967, "s": 25893, "text": "ALTER TABLE TABLE_NAME ADD COLUMN1 \nDATA_TYPE, COLUMN2 DATA_TYPE........;" }, { "code": null, "e": 25974, "s": 25967, "text": "Query:" }, { "code": null, "e": 26034, "s": 25974, "text": "ALTER TABLE FIRM ADD JOINING_DATE DATE,\n LEAVING_DATE DATE;" }, { "code": null, "e": 26042, "s": 26034, "text": "Output:" }, { "code": null, "e": 26100, "s": 26042, "text": "Step 8: Describe the structure of the altered table FIRM." }, { "code": null, "e": 26107, "s": 26100, "text": "Query:" }, { "code": null, "e": 26129, "s": 26107, "text": "EXEC SP_COLUMNS FIRM;" }, { "code": null, "e": 26182, "s": 26129, "text": "Note: The table description now has 2 extra columns." }, { "code": null, "e": 26190, "s": 26182, "text": "Output:" }, { "code": null, "e": 26303, "s": 26190, "text": "Step 9: Update the table by inserting data into the 2 newly added columns of the FIRM table. Use keyword UPDATE." }, { "code": null, "e": 26311, "s": 26303, "text": "Syntax:" }, { "code": null, "e": 26379, "s": 26311, "text": "UPDATE TABLE_NAME SET COLUMN1=VALUE,\nCOLUMN2=VALUE WHERE CONDITION;" }, { "code": null, "e": 26386, "s": 26379, "text": "Query:" }, { "code": null, "e": 26869, "s": 26386, "text": "UPDATE FIRM SET JOINING_DATE='01-JAN-2001',\nLEAVING_DATE='01-JAN-2002' WHERE FIRST_NAME='ALEX';\nUPDATE FIRM SET JOINING_DATE='02-FEB-2001',\nLEAVING_DATE='02-FEB-2002' WHERE FIRST_NAME='MATT';\nUPDATE FIRM SET JOINING_DATE='03-MAR-2001',\nLEAVING_DATE='03-MAR-2002' WHERE FIRST_NAME='JOHN';\nUPDATE FIRM SET JOINING_DATE='04-APR-2001',\nLEAVING_DATE='04-APR-2002' WHERE FIRST_NAME='GARY';\nUPDATE FIRM SET JOINING_DATE='05-MAY-2001',\nLEAVING_DATE='05-MAY-2002' WHERE FIRST_NAME='RICHARD';" }, { "code": null, "e": 26877, "s": 26869, "text": "Output:" }, { "code": null, "e": 26934, "s": 26877, "text": "Step 10: Display all the rows of the altered FIRM table." }, { "code": null, "e": 26941, "s": 26934, "text": "Query:" }, { "code": null, "e": 26961, "s": 26941, "text": "SELECT * FROM FIRM;" }, { "code": null, "e": 27012, "s": 26961, "text": "Note: The displayed table now has 2 extra columns." }, { "code": null, "e": 27020, "s": 27012, "text": "Output:" }, { "code": null, "e": 27306, "s": 27020, "text": "Step 11: Alter multiple(2) columns of the table FIRM by dropping 2 columns from the table simultaneously. The 2 columns are JOINING_DATE and LEAVING_DATE containing the date of joining of the member and the date of leaving of the member. Use the keyword ALTER and DROP to achieve this." }, { "code": null, "e": 27314, "s": 27306, "text": "Syntax:" }, { "code": null, "e": 27376, "s": 27314, "text": "ALTER TABLE TABLE_NAME DROP \nCOLUMN COLUMN1, COLUMN2........;" }, { "code": null, "e": 27383, "s": 27376, "text": "Query:" }, { "code": null, "e": 27439, "s": 27383, "text": "ALTER TABLE FIRM DROP COLUMN\nJOINING_DATE,LEAVING_DATE;" }, { "code": null, "e": 27447, "s": 27439, "text": "Output:" }, { "code": null, "e": 27506, "s": 27447, "text": "Step 12: Describe the structure of the altered table FIRM." }, { "code": null, "e": 27513, "s": 27506, "text": "Query:" }, { "code": null, "e": 27535, "s": 27513, "text": "EXEC SP_COLUMNS FIRM;" }, { "code": null, "e": 27588, "s": 27535, "text": "Note: The table description now has 2 fewer columns." }, { "code": null, "e": 27596, "s": 27588, "text": "Output:" }, { "code": null, "e": 27653, "s": 27596, "text": "Step 13: Display all the rows of the altered FIRM table." 
}, { "code": null, "e": 27660, "s": 27653, "text": "Query:" }, { "code": null, "e": 27680, "s": 27660, "text": "SELECT * FROM FIRM;" }, { "code": null, "e": 27731, "s": 27680, "text": "Note: The displayed table now has 2 fewer columns." }, { "code": null, "e": 27739, "s": 27731, "text": "Output:" }, { "code": null, "e": 27746, "s": 27739, "text": "Picked" }, { "code": null, "e": 27757, "s": 27746, "text": "SQL-Server" }, { "code": null, "e": 27761, "s": 27757, "text": "SQL" }, { "code": null, "e": 27765, "s": 27761, "text": "SQL" }, { "code": null, "e": 27863, "s": 27765, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 27872, "s": 27863, "text": "Comments" }, { "code": null, "e": 27885, "s": 27872, "text": "Old Comments" }, { "code": null, "e": 27951, "s": 27885, "text": "How to Update Multiple Columns in Single Update Statement in SQL?" }, { "code": null, "e": 27983, "s": 27951, "text": "What is Temporary Table in SQL?" }, { "code": null, "e": 28041, "s": 27983, "text": "SQL Query for Matching Multiple Values in the Same Column" }, { "code": null, "e": 28058, "s": 28041, "text": "SQL using Python" }, { "code": null, "e": 28092, "s": 28058, "text": "SQL Query to Insert Multiple Rows" }, { "code": null, "e": 28170, "s": 28092, "text": "SQL Query to Find the Name of a Person Whose Name Starts with Specific Letter" }, { "code": null, "e": 28185, "s": 28170, "text": "SQL | Subquery" }, { "code": null, "e": 28201, "s": 28185, "text": "SQL | SEQUENCES" }, { "code": null, "e": 28222, "s": 28201, "text": "SQL | DROP, TRUNCATE" } ]
Setup/Install Redis Server on Windows 10 - onlinetutorialspoint
In this tutorial, I am going to show how to install the Redis server on the Windows 10 operating system.
Redis stands for Remote Dictionary Server, and it is an open-source, in-memory key-value data structure store. It supports data structures such as strings, hashes, lists, sets and more. Redis comes in different flavours such as caching, session management, producer/consumer topic messaging and database. Redis is very fast because everything is kept in memory, so requests are normally served without any disk access. Redis was written in the C language, which is another reason it is extremely fast.
Step 1: Download the latest Redis zip file from the official GitHub location. For me it is redis-2.4.5-win32-win64.zip.
Step 2: Extract the redis-2.4.5-win32-win64.zip file in your preferred location.
Step 3: It comes with two different folders, one for 32bit and another one for 64bit, based on your operating system.
Step 4: Go to the 64bit folder; there you can find the below files.
Step 5: Double click on the redis-server.exe file; you can see the redis-server start up and wait for clients to connect, like below.
Step 6: Now open the redis-cli.exe file to get the redis command-line interface.
As this acts as a redis client, as soon as we open this cli, we can see the client connected message in the redis server like below.
Now we can say that the redis server and client connected successfully. Now let's try to pass some messages from the client to the redis server.
As we discussed, redis is an in-memory key-value data structure store, so the data in redis is represented as key-value pairs.
Inserting data in redis:
redis 127.0.0.1:6379> set "name" "chandra shekhar"
OK
Reading data from redis:
redis 127.0.0.1:6379> get "name"
"chandra shekhar"
Open two individual redis-cli windows; make one cli the producer (publisher) and the other one the consumer (subscriber).
The syntax for Subscribe:
subscribe "java-books"
subscribe is the keyword used to listen on a channel, where the channel here is java-books.
The syntax for Publish:
publish "java-books" "java8 in action"
Like subscribe, publish is also a keyword, used to post a message on a specific channel. In the above example, I publish the message "java8 in action" to the "java-books" subscribers.
Redis Documentation
Happy Learning 🙂
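A small, hedged addition to the publish/subscribe section above: here is roughly what the two redis-cli windows look like when the subscriber is started first and the publisher then sends a single message. This is only a sketch; the exact reply format can differ slightly between Redis versions.
Window 1 (subscriber):
redis 127.0.0.1:6379> subscribe "java-books"
1) "subscribe"
2) "java-books"
3) (integer) 1
1) "message"
2) "java-books"
3) "java8 in action"
Window 2 (publisher):
redis 127.0.0.1:6379> publish "java-books" "java8 in action"
(integer) 1
The first three lines in the subscriber window confirm the subscription; the next three arrive when the message is published. The (integer) 1 returned by publish is the number of subscribers that received the message.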
[ { "code": null, "e": 158, "s": 123, "text": "PROGRAMMINGJava ExamplesC Examples" }, { "code": null, "e": 172, "s": 158, "text": "Java Examples" }, { "code": null, "e": 183, "s": 172, "text": "C Examples" }, { "code": null, "e": 195, "s": 183, "text": "C Tutorials" }, { "code": null, "e": 199, "s": 195, "text": "aws" }, { "code": null, "e": 234, "s": 199, "text": "JAVAEXCEPTIONSCOLLECTIONSSWINGJDBC" }, { "code": null, "e": 245, "s": 234, "text": "EXCEPTIONS" }, { "code": null, "e": 257, "s": 245, "text": "COLLECTIONS" }, { "code": null, "e": 263, "s": 257, "text": "SWING" }, { "code": null, "e": 268, "s": 263, "text": "JDBC" }, { "code": null, "e": 275, "s": 268, "text": "JAVA 8" }, { "code": null, "e": 282, "s": 275, "text": "SPRING" }, { "code": null, "e": 294, "s": 282, "text": "SPRING BOOT" }, { "code": null, "e": 304, "s": 294, "text": "HIBERNATE" }, { "code": null, "e": 311, "s": 304, "text": "PYTHON" }, { "code": null, "e": 315, "s": 311, "text": "PHP" }, { "code": null, "e": 322, "s": 315, "text": "JQUERY" }, { "code": null, "e": 357, "s": 322, "text": "PROGRAMMINGJava ExamplesC Examples" }, { "code": null, "e": 371, "s": 357, "text": "Java Examples" }, { "code": null, "e": 382, "s": 371, "text": "C Examples" }, { "code": null, "e": 394, "s": 382, "text": "C Tutorials" }, { "code": null, "e": 398, "s": 394, "text": "aws" }, { "code": null, "e": 499, "s": 398, "text": "In this tutorial, I am going to show how to install the Redis server in windows 10 operating system." }, { "code": null, "e": 800, "s": 499, "text": "Redis stands for Remote Dictionary Server, and it is an open-source in-memory key-value data structure store. It supports data structures such as strings, hashes, list, set and more. Redis comes with different flavours like caching, session management, producer/consumer topic messaging and database." }, { "code": null, "e": 968, "s": 800, "text": "Redis is speedy because everything is stored in an in-memory, so there is no hardware involved init. Redis was written in C language, that is why it is extremely fast." }, { "code": null, "e": 1089, "s": 968, "text": "Step 1: Download the latest Redis zip file from the official git hub location. For me it is redis-2.4.5-win32-win64.zip." }, { "code": null, "e": 1166, "s": 1089, "text": "Step 2: Extract redis-2.4.5-win32-win64.zip file in your preferred location." }, { "code": null, "e": 1294, "s": 1166, "text": "Step 3: It will come with two different folders, one is for 32bit, and another one is for 64bit based on your operating system." }, { "code": null, "e": 1349, "s": 1294, "text": "Step 4: Goto 64bit there you can find the below files." }, { "code": null, "e": 1490, "s": 1349, "text": "Step 4: Double click on the redis-server.exe file, there you can see the redis-server startup and wait for connecting to clients like below." }, { "code": null, "e": 1567, "s": 1490, "text": "Step 5: Now open the redis-cli.exe file to the redis command-line interface." }, { "code": null, "e": 1696, "s": 1567, "text": "As this acts as a redis client, as soon as we open this cli, we can see the client connected message in redis server like below." }, { "code": null, "e": 1841, "s": 1696, "text": "Now we can say that the redis server and client connected successfully. Now let’s try to pass some messages from the client to the redis server." }, { "code": null, "e": 1967, "s": 1841, "text": "As we discussed redis is an in-memory key-value data structure store so that the data in redis represents as key-value pairs." 
}, { "code": null, "e": 1992, "s": 1967, "text": "Inserting data in redis:" }, { "code": null, "e": 2046, "s": 1992, "text": "redis 127.0.0.1:6379> set \"name\" \"chandra shekhar\"\nOK" }, { "code": null, "e": 2071, "s": 2046, "text": "Reading data from redis:" }, { "code": null, "e": 2122, "s": 2071, "text": "redis 127.0.0.1:6379> get \"name\"\n\"chandra shekhar\"" }, { "code": null, "e": 2211, "s": 2122, "text": "Open two individual redis-cli, make one cli as a producer and another one as a consumer." }, { "code": null, "e": 2237, "s": 2211, "text": "The syntax for Subscribe:" }, { "code": null, "e": 2260, "s": 2237, "text": "subscribe \"java-books\"" }, { "code": null, "e": 2345, "s": 2260, "text": "subscribe is a keyword is used to accept a channel, where the channel is java_books." }, { "code": null, "e": 2369, "s": 2345, "text": "The syntax for Publish:" }, { "code": null, "e": 2408, "s": 2369, "text": "publish \"java-books\" \"java8 in action\"" }, { "code": null, "e": 2584, "s": 2408, "text": "Like subscribe, publish is also a keyword to post a message on a specific topic. On the above example, I publish my message like “java8 in action” on “java-books” subscribers." }, { "code": null, "e": 2599, "s": 2584, "text": "Redis Document" }, { "code": null, "e": 2616, "s": 2599, "text": "Happy Learning 🙂" }, { "code": null, "e": 3234, "s": 2616, "text": "\nSpring Boot Redis Cache Example – Redis Server\nHow to setup or install MongoDB on Windows 10\nHow to install AWS CLI on Windows 10\nSpring Boot Redis Data Example CRUD Operations\nInstall Apache Kafka on Windows 10\nInstall Mysql on Windows 10 Step by Step\nHow to install RabbitMQ on Windows 10\nInstall Docker Desktop on Windows 10\nHow to install PuTTY on windows 10\nHow to install SOAPUI on Windows 10\nHow to install Docker Toolbox on Windows 10\nHow to Install Git windows 10 Operating System\nInstall Apache Solr on Windows 10\nHow to install Android Studio on Windows 10\nFlask – How to install/setup Flask SQLAlchemy ?\n" }, { "code": null, "e": 3281, "s": 3234, "text": "Spring Boot Redis Cache Example – Redis Server" }, { "code": null, "e": 3327, "s": 3281, "text": "How to setup or install MongoDB on Windows 10" }, { "code": null, "e": 3364, "s": 3327, "text": "How to install AWS CLI on Windows 10" }, { "code": null, "e": 3411, "s": 3364, "text": "Spring Boot Redis Data Example CRUD Operations" }, { "code": null, "e": 3446, "s": 3411, "text": "Install Apache Kafka on Windows 10" }, { "code": null, "e": 3487, "s": 3446, "text": "Install Mysql on Windows 10 Step by Step" }, { "code": null, "e": 3525, "s": 3487, "text": "How to install RabbitMQ on Windows 10" }, { "code": null, "e": 3562, "s": 3525, "text": "Install Docker Desktop on Windows 10" }, { "code": null, "e": 3597, "s": 3562, "text": "How to install PuTTY on windows 10" }, { "code": null, "e": 3633, "s": 3597, "text": "How to install SOAPUI on Windows 10" }, { "code": null, "e": 3677, "s": 3633, "text": "How to install Docker Toolbox on Windows 10" }, { "code": null, "e": 3724, "s": 3677, "text": "How to Install Git windows 10 Operating System" }, { "code": null, "e": 3758, "s": 3724, "text": "Install Apache Solr on Windows 10" }, { "code": null, "e": 3802, "s": 3758, "text": "How to install Android Studio on Windows 10" }, { "code": null, "e": 3850, "s": 3802, "text": "Flask – How to install/setup Flask SQLAlchemy ?" }, { "code": null, "e": 4290, "s": 3850, "text": "\n\n\n\n\n\nZYX\nOctober 27, 2020 at 5:06 am - Reply \n\nthank you. 
but...\nyou are using a underline in ‘subscribe “java_books”‘\nand a dash in ‘publish “java-books” “java8 in action”‘\n\n\n\n\n\n\n\n\n\nchandrashekhar\nOctober 27, 2020 at 11:32 am - Reply \n\nThanks for catching 🙂 updated..\n\n\n\n\n\n\n\n\n\n\n\nprideaux\nOctober 12, 2021 at 12:21 am - Reply \n\nDude you wrote this in 2018! Yet you point to a VERY OLD VERSION of redis on github 29 Dec 2011 !!! Crazy\n\n\n\n\n" }, { "code": null, "e": 4565, "s": 4290, "text": "\n\n\n\n\nZYX\nOctober 27, 2020 at 5:06 am - Reply \n\nthank you. but...\nyou are using a underline in ‘subscribe “java_books”‘\nand a dash in ‘publish “java-books” “java8 in action”‘\n\n\n\n\n\n\n\n\n\nchandrashekhar\nOctober 27, 2020 at 11:32 am - Reply \n\nThanks for catching 🙂 updated..\n\n\n\n\n\n" }, { "code": null, "e": 4692, "s": 4565, "text": "thank you. but...\nyou are using a underline in ‘subscribe “java_books”‘\nand a dash in ‘publish “java-books” “java8 in action”‘" }, { "code": null, "e": 4787, "s": 4692, "text": "\n\n\n\n\nchandrashekhar\nOctober 27, 2020 at 11:32 am - Reply \n\nThanks for catching 🙂 updated..\n\n\n\n" }, { "code": null, "e": 4819, "s": 4787, "text": "Thanks for catching 🙂 updated.." }, { "code": null, "e": 4982, "s": 4819, "text": "\n\n\n\n\nprideaux\nOctober 12, 2021 at 12:21 am - Reply \n\nDude you wrote this in 2018! Yet you point to a VERY OLD VERSION of redis on github 29 Dec 2011 !!! Crazy\n\n\n\n" }, { "code": null, "e": 5088, "s": 4982, "text": "Dude you wrote this in 2018! Yet you point to a VERY OLD VERSION of redis on github 29 Dec 2011 !!! Crazy" }, { "code": null, "e": 5094, "s": 5092, "text": "Δ" }, { "code": null, "e": 5121, "s": 5094, "text": " Spring Boot – Hello World" }, { "code": null, "e": 5148, "s": 5121, "text": " Spring Boot – MVC Example" }, { "code": null, "e": 5182, "s": 5148, "text": " Spring Boot- Change Context Path" }, { "code": null, "e": 5223, "s": 5182, "text": " Spring Boot – Change Tomcat Port Number" }, { "code": null, "e": 5268, "s": 5223, "text": " Spring Boot – Change Tomcat to Jetty Server" }, { "code": null, "e": 5306, "s": 5268, "text": " Spring Boot – Tomcat session timeout" }, { "code": null, "e": 5340, "s": 5306, "text": " Spring Boot – Enable Random Port" }, { "code": null, "e": 5371, "s": 5340, "text": " Spring Boot – Properties File" }, { "code": null, "e": 5405, "s": 5371, "text": " Spring Boot – Beans Lazy Loading" }, { "code": null, "e": 5438, "s": 5405, "text": " Spring Boot – Set Favicon image" }, { "code": null, "e": 5471, "s": 5438, "text": " Spring Boot – Set Custom Banner" }, { "code": null, "e": 5511, "s": 5471, "text": " Spring Boot – Set Application TimeZone" }, { "code": null, "e": 5536, "s": 5511, "text": " Spring Boot – Send Mail" }, { "code": null, "e": 5567, "s": 5536, "text": " Spring Boot – FileUpload Ajax" }, { "code": null, "e": 5591, "s": 5567, "text": " Spring Boot – Actuator" }, { "code": null, "e": 5637, "s": 5591, "text": " Spring Boot – Actuator Database Health Check" }, { "code": null, "e": 5660, "s": 5637, "text": " Spring Boot – Swagger" }, { "code": null, "e": 5687, "s": 5660, "text": " Spring Boot – Enable CORS" }, { "code": null, "e": 5733, "s": 5687, "text": " Spring Boot – External Apache ActiveMQ Setup" }, { "code": null, "e": 5773, "s": 5733, "text": " Spring Boot – Inmemory Apache ActiveMq" }, { "code": null, "e": 5802, "s": 5773, "text": " Spring Boot – Scheduler Job" }, { "code": null, "e": 5836, "s": 5802, "text": " Spring Boot – Exception Handling" }, { "code": null, "e": 5866, "s": 5836, "text": " Spring 
Boot – Hibernate CRUD" }, { "code": null, "e": 5902, "s": 5866, "text": " Spring Boot – JPA Integration CRUD" }, { "code": null, "e": 5935, "s": 5902, "text": " Spring Boot – JPA DataRest CRUD" }, { "code": null, "e": 5968, "s": 5935, "text": " Spring Boot – JdbcTemplate CRUD" }, { "code": null, "e": 6012, "s": 5968, "text": " Spring Boot – Multiple Data Sources Config" }, { "code": null, "e": 6046, "s": 6012, "text": " Spring Boot – JNDI Configuration" }, { "code": null, "e": 6078, "s": 6046, "text": " Spring Boot – H2 Database CRUD" }, { "code": null, "e": 6106, "s": 6078, "text": " Spring Boot – MongoDB CRUD" }, { "code": null, "e": 6137, "s": 6106, "text": " Spring Boot – Redis Data CRUD" }, { "code": null, "e": 6178, "s": 6137, "text": " Spring Boot – MVC Login Form Validation" }, { "code": null, "e": 6212, "s": 6178, "text": " Spring Boot – Custom Error Pages" }, { "code": null, "e": 6237, "s": 6212, "text": " Spring Boot – iText PDF" }, { "code": null, "e": 6271, "s": 6237, "text": " Spring Boot – Enable SSL (HTTPs)" }, { "code": null, "e": 6307, "s": 6271, "text": " Spring Boot – Basic Authentication" }, { "code": null, "e": 6353, "s": 6307, "text": " Spring Boot – In Memory Basic Authentication" }, { "code": null, "e": 6404, "s": 6353, "text": " Spring Boot – Security MySQL Database Integration" }, { "code": null, "e": 6446, "s": 6404, "text": " Spring Boot – Redis Cache – Redis Server" }, { "code": null, "e": 6477, "s": 6446, "text": " Spring Boot – Hazelcast Cache" }, { "code": null, "e": 6500, "s": 6477, "text": " Spring Boot – EhCache" }, { "code": null, "e": 6530, "s": 6500, "text": " Spring Boot – Kafka Producer" }, { "code": null, "e": 6560, "s": 6530, "text": " Spring Boot – Kafka Consumer" }, { "code": null, "e": 6609, "s": 6560, "text": " Spring Boot – Kafka JSON Message to Kafka Topic" }, { "code": null, "e": 6643, "s": 6609, "text": " Spring Boot – RabbitMQ Publisher" }, { "code": null, "e": 6676, "s": 6643, "text": " Spring Boot – RabbitMQ Consumer" }, { "code": null, "e": 6705, "s": 6676, "text": " Spring Boot – SOAP Consumer" }, { "code": null, "e": 6737, "s": 6705, "text": " Spring Boot – Soap WebServices" }, { "code": null, "e": 6774, "s": 6737, "text": " Spring Boot – Batch Csv to Database" }, { "code": null, "e": 6803, "s": 6774, "text": " Spring Boot – Eureka Server" }, { "code": null, "e": 6832, "s": 6803, "text": " Spring Boot – MockMvc JUnit" } ]
Mobile Angular UI - APP Development
In this chapter, we will discuss the use of AngularJS and Ionic for app development.
Ionic is an open source framework used for developing mobile applications. It provides tools and services for building a mobile UI with a native look and feel. The Ionic framework needs a native wrapper to be able to run on mobile devices.
Here, we will cover just the basics of how to use Ionic and Mobile Angular UI to develop an app. For details on Ionic, refer to https://www.tutorialspoint.com/ionic/index.htm.
To start working with Ionic and AngularJS, we first need to install Cordova. The command is as follows −
npm install -g cordova
Create a folder ionic_mobileui/ and, inside it, create our project setup using the below command −
cordova create ionic-mobileui-angularjs
Here ionic-mobileui-angularjs is the name of our app.
Now let us install the packages that we need in our project. The list is given below −
npm install --save-dev angular angular-route mobile-angular-ui @ionic/core
The files are installed and the folder structure is shown below −
All the angular and ionic files are inside node_modules. We are going to make use of the www/ folder. Hence move the angular and ionic js and css files inside the www/css/ and www/js/ folders.
Let us modify the index.html with mobile angular UI components and also add app.js in the js/ folder.
index.html
<!DOCTYPE html> <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <html> <head> <!-- Customize this policy to fit your own app's needs. For more guidance, see: https://github.com/apache/cordova-plugin-whitelist/blob/master/README.md#content-security-policy Some notes: * gap: is required only on iOS (when using UIWebView) and is needed for JS->native communication * https://ssl.gstatic.com is required only on Android and is needed for TalkBack to function properly * Disables use of inline scripts in order to mitigate risk of XSS vulnerabilities.
To change this: * Enable inline JS: add 'unsafe-inline' to default-src --> <meta http-equiv="Content-Security-Policy" content="default-src 'self' data: gap: https://ssl.gstatic.com 'unsafe-eval'; style-src 'self' 'unsafe-inline'; media-src *; img-src 'self' data: content:;"> <meta name="format-detection" content="telephone=no"> <meta name="msapplication-tap-highlight" content="no"> <meta name="viewport" content="initial-scale=1, width=device-width, viewport-fit=cover"> <link rel="stylesheet" type="text/css" href="css/index.css"> <title>Mobile Angular UI Demo</title> <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1" /> <meta name="apple-mobile-web-app-capable" content="yes" /> <meta name="viewport" content="user-scalable=no, initial-scale=1.0, maximum-scale=1.0, minimal-ui" /> <meta name="apple-mobile-web-app-status-bar-style" content="yes" /> <link rel="shortcut icon" href="/assets/img/favicon.png" type="image/x-icon" /> <link rel="stylesheet" href="css/mobile-angular-ui-hover.min.css" /> <link rel="stylesheet" href="css/mobile-angular-ui-base.min.css" /> <link rel="stylesheet" href="css/mobile-angular-ui-desktop.min.css" /> <script src="js/angular.min.js"></script> <script src="js/angular-route.min.js"></script> <script src="js/mobile-angular-ui.min.js"></script> <script src="js/ionic.js"></script> <link rel="stylesheet" href="css/app.css" /> <script src="js/app.js"></script> </head> <body ng-app="myFirstApp" ng-controller="MainController"> <!-- Sidebars --> <div class="sidebar sidebar-left"> <div class="scrollable"> <h1 class="scrollable-header app-name">Tutorials</h1> <div class="scrollable-content"> <div class="list-group" ui-turn-off='uiSidebarLeft'> <a class="list-group-item" href="/">Home Page </a> <a class="list-group-item" href="#/academic"><i class="fa fa-caret-right"></i>Academic Tutorials </a> <a class="list-group-item" href="#/bigdata"><i class="fa fa-caret-right"></i>Big Data & Analytics </a> <a class="list-group-item" href="#/computerProg"><i class="fa fa-caret-right"></i>Computer Programming </a> <a class="list-group-item" href="#/computerscience"><i class="fa fa-caret-right"></i>Computer Science </a> <a class="list-group-item" href="#/databases"><i class="fa fa-caret-right"></i>Databases </a> <a class="list-group-item" href="#/devops"><i class="fa fa-caret-right"></i>DevOps </a> </div> </div> </div> </div> <div class="sidebar sidebar-right"> <div class="scrollable"> <h1 class="scrollable-header app-name">eBooks</h1> <div class="scrollable-content"> <div class="list-group" ui-toggle="uiSidebarRight"> <a class="list-group-item" href="#/php"><i class="fa fa-caret-right"></i>PHP </a> <a class="list-group-item" href="#/Javascript"><i class="fa fa-caret-right"></i>Javascript </a> </div> </div> </div> </div> <div class="app"> <div class="navbar navbar-app navbar-absolute-top"> <div class="navbar-brand navbar-brand-center" ui-yield-to="title"> TutorialsPoint </div> <div class="btn-group pull-left"> <div ui-toggle="uiSidebarLeft" class="btn sidebar-left-toggle"> <i class="fa fa-th-large "></i> Tutorials </div> </div> <div class="btn-group pull-right" ui-yield-to="navbarAction"> <div ui-toggle="uiSidebarRight" class="btn sidebar-right-toggle"> <i class="fal fa-search"></i> eBooks </div> </div> </div> <div class="navbar navbar-app navbar-absolute-bottom"> <div class="btn-group justified"> <a ui-turn-on="aboutus_modal" class="btn btn-navbar"><i class="fal fa-globe"></i> About us</a> <a ui-turn-on="contactus_overlay" class="btn btn-navbar"><i class="fal 
fa-map-marker-alt"></i> Contact us</a> </div> </div> <!-- App body --> <div class='app-body'> <div class='app-content'> <ng-view></ng-view> </div> </div> </div><!-- ~ .app --> <!-- Modals and Overlays --> <div ui-yield-to="modals"></div> <div class="app"> <h1>Apache Cordova</h1> <div id="deviceready" class="blink"> <p class="event listening">Connecting to Device</p> <p class="event received">Device is Ready</p> </div> </div> <script type="text/javascript" src="cordova.js"></script> <script type="text/javascript" src="js/index.js"></script> </body> </html>
All the js and css files are added in the head section. The module and controller are created inside app.js as shown below −
/* eslint no-alert: 0 */
'use strict';
//
// Here is how to define your module,
// which depends on mobile-angular-ui
//
var app = angular.module('myFirstApp', [
  'ngRoute',
  'mobile-angular-ui'
]);
app.config(function($routeProvider, $locationProvider) {
  $routeProvider
    .when("/", {
      templateUrl : "home/home.html"
    });
  $locationProvider.html5Mode({enabled:true, requireBase:false});
});
app.directive('dragItem', ['$drag', function($drag) {
  return {
    controller: function($scope, $element) {
      $drag.bind($element,
        {
          transform: $drag.TRANSLATE_BOTH,
          end: function(drag) {
            drag.reset();
          }
        },
        {
          sensitiveArea: $element.parent()
        }
      );
    }
  };
}]);
app.controller('MainController', function($rootScope, $scope, $routeParams) {
  $scope.msg = "Welcome to Tutorialspoint!";
});
Create a home/home.html file in the www/ folder. Following are the details inside home.html.
<div class="list-group text-center">
  <div class="list-group-item list-group-item-home">
    <h1>{{msg}}</h1>
  </div>
</div>
To run the app using cordova, execute the following command −
cordova platform add browser
Next, execute the below command to test the app in the browser −
cordova run
Hit the URL http://localhost:8000 in the browser to test the app.
Open home/home.html and add the following ionic card template −
<ion-card>
  <ion-card-header>
    <ion-card-subtitle>Ionic Card</ion-card-subtitle>
    <ion-card-title>Mobile Angular UI + Ionic</ion-card-title>
  </ion-card-header>
  <ion-card-content>
    Welcome To TutorialsPoint!
  </ion-card-content>
</ion-card>
Once done, stop cordova run and run it again. You should see the ionic card details as shown below −
So now you can create an app of your choice by using AngularJS, Mobile Angular UI and Ionic.
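As a small, hedged addition (not part of the original tutorial): wiring one of the sidebar links (say #/academic) to its own view only needs one more route and one more template. The file name academic/academic.html and the controller name AcademicController below are made up for this sketch.
// js/app.js – extra route for the "Academic Tutorials" sidebar entry
app.config(function($routeProvider) {
  $routeProvider.when('/academic', {
    templateUrl : 'academic/academic.html',   // hypothetical template under www/
    controller : 'AcademicController'
  });
});
// Small controller exposing a message to the template
app.controller('AcademicController', function($scope) {
  $scope.msg = 'Academic Tutorials';
});
A matching www/academic/academic.html can reuse the same markup pattern as home.html:
<div class="list-group text-center">
  <div class="list-group-item">
    <h1>{{msg}}</h1>
  </div>
</div>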
[ { "code": null, "e": 2402, "s": 2311, "text": "In this chapter, we will discuss the use of Using AngularJS and Ionic for app development." }, { "code": null, "e": 2632, "s": 2402, "text": "Ionic is an open source framework used for developing mobile applications. It provides tools and services for building Mobile UI with native look and feel. Ionic framework needs native wrapper to be able to run on mobile devices." }, { "code": null, "e": 2754, "s": 2632, "text": "In this chapter, we will learn just the basics on how we can make use of ionic and mobile angular UI to develop your app." }, { "code": null, "e": 2832, "s": 2754, "text": "For details of ionic refer − https://www.tutorialspoint.com/ionic/index.htm." }, { "code": null, "e": 2937, "s": 2832, "text": "To start working with ionic and angularjs, we need to first install cordova. The command is as follows −" }, { "code": null, "e": 2961, "s": 2937, "text": "npm install -g cordova\n" }, { "code": null, "e": 3063, "s": 2961, "text": "Create a folder ionic_mobileui/ and in that let us create our project setup using the below command −" }, { "code": null, "e": 3104, "s": 3063, "text": "cordova create ionic-mobileui-angularjs\n" }, { "code": null, "e": 3158, "s": 3104, "text": "Here ionic-mobileui-angularjs is the name of our app." }, { "code": null, "e": 3245, "s": 3158, "text": "Now let us install the packages that we need in our project. The list is given below −" }, { "code": null, "e": 3321, "s": 3245, "text": "npm install --save-dev angular angular-route mobile-angular-ui @ionic/core\n" }, { "code": null, "e": 3387, "s": 3321, "text": "The files are installed and the folder structure is shown below −" }, { "code": null, "e": 3572, "s": 3387, "text": "All the angular and ionic files are inside node_modules. We are going to make use of www/ folder. Hence move the angular and ionic js and css files inside www/css/ and www/js/ folders." }, { "code": null, "e": 3670, "s": 3572, "text": "Let us modify the index.html with mobile angular UI components and also add app.js in js/ folder." }, { "code": null, "e": 3681, "s": 3670, "text": "index.html" }, { "code": null, "e": 10219, "s": 3681, "text": "<!DOCTYPE html> \n<!-- \n Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. \n See the NOTICE file distributed with this work for additional information regarding copyright \n ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the \n \"License\"); you may not use this file except in compliance with the License. You may obtain a \n copy of the License at\n \n http://www.apache.org/licenses/LICENSE-2.0\n \n Unless required by applicable law or agreed to in writing, software distributed under the License \n is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either \n express or implied. See the License for the specific language governing permissions and \n limitations under the License. \n--> \n<html> \n\n <head> \n <!-- \n Customize this policy to fit your own app's needs. For more guidance, see: \n https://github.com/apache/cordova-plugin-whitelist/blob/master/README.md#content-security-policy\n Some notes: \n * gap: is required only on iOS (when using UIWebView) and is needed for JS->native communication \n * https://ssl.gstatic.com is required only on Android and is needed for TalkBack to function properly \n * Disables use of inline scripts in order to mitigate risk of XSS vulnerabilities. 
To change this: \n * Enable inline JS: add 'unsafe-inline' to default-src \n --> \n <meta http-equiv=\"Content-Security-Policy\" content=\"default-src 'self' data: gap: https://ssl.gstatic.com 'unsafe-eval'; style-src 'self' 'unsafe-inline'; media-src *; img-src 'self' data: content:;\"> \n <meta name=\"format-detection\" content=\"telephone=no\"> \n <meta name=\"msapplication-tap-highlight\" content=\"no\"> \n <meta name=\"viewport\" content=\"initial-scale=1, width=device-width, viewport-fit=cover\"> \n <link rel=\"stylesheet\" type=\"text/css\" href=\"css/index.css\"> \n <title>Mobile Angular UI Demo</title> \n <meta http-equiv=\"X-UA-Compatible\" content=\"IE=edge,chrome=1\" /> \n <meta name=\"apple-mobile-web-app-capable\" content=\"yes\" /> \n <meta name=\"viewport\" content=\"user-scalable=no, initial-scale=1.0, maximum-scale=1.0, minimal-ui\" /> \n <meta name=\"apple-mobile-web-app-status-bar-style\" content=\"yes\" /> \n <link rel=\"shortcut icon\" href=\"/assets/img/favicon.png\" type=\"image/x-icon\" />\n <link rel=\"stylesheet\" href=\"css/mobile-angular-ui-hover.min.css\" /> <link rel=\"stylesheet\" href=\"css/mobile-angular-ui-base.min.css\" /> \n <link rel=\"stylesheet\" href=\"css/mobile-angular-ui-desktop.min.css\" /> \n <script src=\"js/angular.min.js\"></script>\n <script src=\"js/angular-route.min.js\"></script> \n <script src=\"js/mobile-angular-ui.min.js\"></script> \n <script src=\"js/ionic.js\"></script> \n <link rel=\"stylesheet\" href=\"css/app.css\" /> \n <script src=\"js/app.js\"></script> \n </head> \n <body ng-app=\"myFirstApp\" ng-controller=\"MainController\"> \n <!-- Sidebars --> \n <div class=\"sidebar sidebar-left\"> \n <div class=\"scrollable\"> \n <h1 class=\"scrollable-header app-name\">Tutorials</h1> \n <div class=\"scrollable-content\"> <div class=\"list-group\" ui-turn-off='uiSidebarLeft'> \n <a class=\"list-group-item\" href=\"/\">Home Page </a> \n <a class=\"list-group-item\" href=\"#/academic\"><i class=\"fa fa-caret-right\"></i>Academic Tutorials </a> \n <a class=\"list-group-item\" href=\"#/bigdata\"><i class=\"fa fa-caret-right\"></i>Big Data & Analytics </a> \n <a class=\"list-group-item\" href=\"#/computerProg\"><i class=\"fa fa-caret-right\"></i>Computer Programming </a> \n <a class=\"list-group-item\" href=\"#/computerscience\"><i class=\"fa fa-caret-right\"></i>Computer Science </a> \n <a class=\"list-group-item\" href=\"#/databases\"><i class=\"fa fa-caret-right\"></i>Databases </a> <a class=\"list-group-item\" href=\"#/devops\"><i class=\"fa fa-caret-right\"></i>DevOps </a> \n </div> \n </div> \n </div> \n </div> \n <div class=\"sidebar sidebar-right\"> \n <div class=\"scrollable\"> \n <h1 class=\"scrollable-header app-name\">eBooks</h1>\n <div class=\"scrollable-content\"> \n <div class=\"list-group\" ui-toggle=\"uiSidebarRight\"> \n <a class=\"list-group-item\" href=\"#/php\"><i class=\"fa fa-caret-right\"></i>PHP </a> \n <a class=\"list-group-item\" href=\"#/Javascript\"><i class=\"fa fa-caret-right\"></i>Javascript </a> \n </div> \n </div> \n </div> \n </div> \n \n <div class=\"app\"> \n <div class=\"navbar navbar-app navbar-absolute-top\"> \n <div class=\"navbar-brand navbar-brand-center\" ui-yield-to=\"title\"> \n TutorialsPoint \n </div> \n <div class=\"btn-group pull-left\"> \n <div ui-toggle=\"uiSidebarLeft\" class=\"btn sidebar-left-toggle\"> \n <i class=\"fa fa-th-large \"></i> Tutorials \n </div> \n </div> \n <div class=\"btn-group pull-right\" ui-yield-to=\"navbarAction\"> \n <div ui-toggle=\"uiSidebarRight\" class=\"btn 
sidebar-right-toggle\"> \n <i class=\"fal fa-search\"></i> eBooks \n </div> \n </div> \n </div> \n <div class=\"navbar navbar-app navbar-absolute-bottom\"> \n <div class=\"btn-group justified\"> \n <a ui-turn-on=\"aboutus_modal\" class=\"btn btn-navbar\"><i class=\"fal fa-globe\"></i> About us</a> \n <a ui-turn-on=\"contactus_overlay\" class=\"btn btn-navbar\"><i class=\"fal fa-map-marker-alt\"></i> Contact us</a> \n </div> \n </div>\n \n <!-- App body --> \n <div class='app-body'> \n <div class='app-content'> \n <ng-view></ng-view> \n </div> \n </div> \n </div><!-- ~ .app -->\n\n <!-- Modals and Overlays --> \n <div ui-yield-to=\"modals\"></div> \n \n <div class=\"app\"> \n <h1>Apache Cordova</h1> \n <div id=\"deviceready\" class=\"blink\"> \n <p class=\"event listening\">Connecting to Device</p> \n <p class=\"event received\">Device is Ready</p> \n </div> \n </div> \n <script type=\"text/javascript\" src=\"cordova.js\"></script> \n <script type=\"text/javascript\" src=\"js/index.js\"></script> \n </body> \n</html>" }, { "code": null, "e": 10343, "s": 10219, "text": "All the js and css files are added in the head section. The module and controller is created inside app.js as shown below −" }, { "code": null, "e": 11319, "s": 10343, "text": "/* eslint no-alert: 0 */\n\n'use strict';\n// \n// Here is how to define your module \n// has dependent on mobile-angular-ui \n// var app=angular.module('myFirstApp', [\n 'ngRoute', \n 'mobile-angular-ui' \n]); \napp.config(function($routeProvider, $locationProvider) { \n $routeProvider \n .when(\"/\", { \n templateUrl : \"home/home.html\" \n }); \n $locationProvider.html5Mode({enabled:true, requireBase:false}); \n});\napp.directive('dragItem', ['$drag', function($drag) { \n return { \n controller: function($scope, $element) { \n $drag.bind($element, \n { \n transform: $drag.TRANSLATE_BOTH, \n end: function(drag) { \n drag.reset(); \n } \n }, \n { \n sensitiveArea: $element.parent() \n } \n ); \n } \n }; \n}]);\napp.controller('MainController', function($rootScope, $scope, $routeParams) { \n $scope.msg=\"Welcome to Tutorialspoint!\"; \n});" }, { "code": null, "e": 11402, "s": 11319, "text": "Create home/home.html file in www/ folder. Following are details inside home.html." }, { "code": null, "e": 11536, "s": 11402, "text": "<div class=\"list-group text-center\"> \n <div class=\"list-group-item list-group-item-home\">\n <h1>{{msg}}</h1> \n </div> \n</div>" }, { "code": null, "e": 11598, "s": 11536, "text": "To run the app using cordova, execute the following command −" }, { "code": null, "e": 11628, "s": 11598, "text": "cordova platform add browser\n" }, { "code": null, "e": 11693, "s": 11628, "text": "Next, execute the below command to test the app in the browser −" }, { "code": null, "e": 11706, "s": 11693, "text": "cordova run\n" }, { "code": null, "e": 11775, "s": 11706, "text": "Hit the url : http://localhost:8000 in the browser, to test the app." }, { "code": null, "e": 11836, "s": 11775, "text": "Open home/home.html, add the following ionic card template −" }, { "code": null, "e": 12108, "s": 11836, "text": "<ion-card> \n <ion-card-header> \n <ion-card-subtitle>Ionic Card</ion-card-subtitle> \n <ion-card-title>Mobile Angular UI + Ionic</ion-card-title>\n </ion-card-header>\n\n <ion-card-content> \n Welcome To TutorialsPoint! \n </ion-card-content> \n</ion-card>" }, { "code": null, "e": 12208, "s": 12108, "text": "Once done stop cordova run and run it again. 
You should see the ionic card details as shown below −" }, { "code": null, "e": 12301, "s": 12208, "text": "So now you can create an app of your choice by using AngularJs, Mobile Angular UI and Ionic." }, { "code": null, "e": 12334, "s": 12301, "text": "\n 28 Lectures \n 3 hours \n" }, { "code": null, "e": 12348, "s": 12334, "text": " Asif Hussain" }, { "code": null, "e": 12383, "s": 12348, "text": "\n 19 Lectures \n 5.5 hours \n" }, { "code": null, "e": 12411, "s": 12383, "text": " Eduonix Learning Solutions" }, { "code": null, "e": 12446, "s": 12411, "text": "\n 30 Lectures \n 3.5 hours \n" }, { "code": null, "e": 12463, "s": 12446, "text": " Abhilash Nelson" }, { "code": null, "e": 12498, "s": 12463, "text": "\n 16 Lectures \n 2.5 hours \n" }, { "code": null, "e": 12515, "s": 12498, "text": " Frahaan Hussain" }, { "code": null, "e": 12550, "s": 12515, "text": "\n 62 Lectures \n 4.5 hours \n" }, { "code": null, "e": 12562, "s": 12550, "text": " Senol Atac" }, { "code": null, "e": 12595, "s": 12562, "text": "\n 22 Lectures \n 3 hours \n" }, { "code": null, "e": 12616, "s": 12595, "text": " Sandip Bhattacharya" }, { "code": null, "e": 12623, "s": 12616, "text": " Print" }, { "code": null, "e": 12634, "s": 12623, "text": " Add Notes" } ]
Pointer to an Array in Objective-C
This chapter builds on the chapter about Pointers in Objective-C, so make sure you are through that chapter first. Assuming you have a basic understanding of pointers in the Objective-C programming language, let us start:
An array name is a constant pointer to the first element of the array. Therefore, in the declaration −
double balance[50];
balance is a pointer to &balance[0], which is the address of the first element of the array balance. Thus, the following program fragment assigns p the address of the first element of balance −
double *p;
double balance[10];
p = balance;
It is legal to use array names as constant pointers, and vice versa. Therefore, *(balance + 4) is a legitimate way of accessing the data at balance[4].
Once you store the address of the first element in p, you can access the array elements using *p, *(p+1), *(p+2) and so on. Below is an example to show all the concepts discussed above −
#import <Foundation/Foundation.h>

int main () {
   /* an array with 5 elements */
   double balance[5] = {1000.0, 2.0, 3.4, 17.0, 50.0};
   double *p;
   int i;

   p = balance;

   /* output each array element's value */
   NSLog( @"Array values using pointer\n");
   for ( i = 0; i < 5; i++ ) {
      NSLog(@"*(p + %d) : %f\n", i, *(p + i) );
   }

   NSLog(@"Array values using balance as address\n");
   for ( i = 0; i < 5; i++ ) {
      NSLog(@"*(balance + %d) : %f\n", i, *(balance + i) );
   }
   return 0;
}
When the above code is compiled and executed, it produces the following result −
2013-09-14 01:36:57.995 demo[31469] Array values using pointer
2013-09-14 01:36:57.995 demo[31469] *(p + 0) : 1000.000000
2013-09-14 01:36:57.995 demo[31469] *(p + 1) : 2.000000
2013-09-14 01:36:57.995 demo[31469] *(p + 2) : 3.400000
2013-09-14 01:36:57.995 demo[31469] *(p + 3) : 17.000000
2013-09-14 01:36:57.995 demo[31469] *(p + 4) : 50.000000
2013-09-14 01:36:57.995 demo[31469] Array values using balance as address
2013-09-14 01:36:57.995 demo[31469] *(balance + 0) : 1000.000000
2013-09-14 01:36:57.995 demo[31469] *(balance + 1) : 2.000000
2013-09-14 01:36:57.995 demo[31469] *(balance + 2) : 3.400000
2013-09-14 01:36:57.995 demo[31469] *(balance + 3) : 17.000000
2013-09-14 01:36:57.995 demo[31469] *(balance + 4) : 50.000000
In the above example, p is a pointer to double, which means it can store the address of a variable of double type. Once we have the address in p, *p will give us the value available at the address stored in p, as we have shown in the above example.
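As a small, hedged addition (not from the original page), the same pointer arithmetic also works when an array is passed to a function: the parameter receives the address of the first element and can be walked exactly like balance above. The helper name sumOfArray below is made up for this sketch.
#import <Foundation/Foundation.h>

/* Hypothetical helper: sums 'count' doubles starting at address p. */
double sumOfArray(double *p, int count) {
   double total = 0.0;
   int i;
   for ( i = 0; i < count; i++ ) {
      total += *(p + i);   /* equivalent to p[i] */
   }
   return total;
}

int main () {
   double balance[5] = {1000.0, 2.0, 3.4, 17.0, 50.0};
   /* the array name decays to a pointer to its first element */
   NSLog(@"Sum of balance : %f", sumOfArray(balance, 5));
   return 0;
}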
[ { "code": null, "e": 2691, "s": 2560, "text": "It is most likely that you would not understand this chapter until you are through the chapter related to Pointers in Objective-C." }, { "code": null, "e": 2898, "s": 2691, "text": "So assuming you have a bit understanding on pointers in Objective-C programming language, let us start: An array name is a constant pointer to the first element of the array. Therefore, in the declaration −" }, { "code": null, "e": 2918, "s": 2898, "text": "double balance[50];" }, { "code": null, "e": 3112, "s": 2918, "text": "balance is a pointer to &balance[0], which is the address of the first element of the array balance. Thus, the following program fragment assigns p the address of the first element of balance −" }, { "code": null, "e": 3157, "s": 3112, "text": "double *p;\ndouble balance[10];\n\np = balance;" }, { "code": null, "e": 3309, "s": 3157, "text": "It is legal to use array names as constant pointers, and vice versa. Therefore, *(balance + 4) is a legitimate way of accessing the data at balance[4]." }, { "code": null, "e": 3489, "s": 3309, "text": "Once you store the address of first element in p, you can access array elements using *p, *(p+1), *(p+2) and so on. Below is the example to show all the concepts discussed above −" }, { "code": null, "e": 4015, "s": 3489, "text": "#import <Foundation/Foundation.h>\n\nint main () {\n \n /* an array with 5 elements */\n double balance[5] = {1000.0, 2.0, 3.4, 17.0, 50.0};\n double *p;\n int i;\n\n p = balance;\n \n /* output each array element's value */\n NSLog( @\"Array values using pointer\\n\");\n for ( i = 0; i < 5; i++ ) {\n NSLog(@\"*(p + %d) : %f\\n\", i, *(p + i) );\n }\n\n NSLog(@\"Array values using balance as address\\n\");\n for ( i = 0; i < 5; i++ ) {\n NSLog(@\"*(balance + %d) : %f\\n\", i, *(balance + i) );\n }\n \n return 0;\n}" }, { "code": null, "e": 4096, "s": 4015, "text": "When the above code is compiled and executed, it produces the following result −" }, { "code": null, "e": 4834, "s": 4096, "text": "2013-09-14 01:36:57.995 demo[31469] Array values using pointer\n2013-09-14 01:36:57.995 demo[31469] *(p + 0) : 1000.000000\n2013-09-14 01:36:57.995 demo[31469] *(p + 1) : 2.000000\n2013-09-14 01:36:57.995 demo[31469] *(p + 2) : 3.400000\n2013-09-14 01:36:57.995 demo[31469] *(p + 3) : 17.000000\n2013-09-14 01:36:57.995 demo[31469] *(p + 4) : 50.000000\n2013-09-14 01:36:57.995 demo[31469] Array values using balance as address\n2013-09-14 01:36:57.995 demo[31469] *(balance + 0) : 1000.000000\n2013-09-14 01:36:57.995 demo[31469] *(balance + 1) : 2.000000\n2013-09-14 01:36:57.995 demo[31469] *(balance + 2) : 3.400000\n2013-09-14 01:36:57.995 demo[31469] *(balance + 3) : 17.000000\n2013-09-14 01:36:57.995 demo[31469] *(balance + 4) : 50.000000\n" }, { "code": null, "e": 5076, "s": 4834, "text": "In the above example, p is a pointer to double, which means it can store address of a variable of double type. Once we have address in p, then *p will give us value available at the address stored in p, as we have shown in the above example." }, { "code": null, "e": 5109, "s": 5076, "text": "\n 18 Lectures \n 1 hours \n" }, { "code": null, "e": 5126, "s": 5109, "text": " PARTHA MAJUMDAR" }, { "code": null, "e": 5157, "s": 5126, "text": "\n 6 Lectures \n 25 mins\n" }, { "code": null, "e": 5168, "s": 5157, "text": " Ken Burke" }, { "code": null, "e": 5175, "s": 5168, "text": " Print" }, { "code": null, "e": 5186, "s": 5175, "text": " Add Notes" } ]
How to Write Switch Statements in Python | Towards Data Science
The typical way to deal with multiway branching in programming languages is the if-else clause. When we need to code numerous scenarios, an alternative is the so-called switch or case statement that is supported by most modern languages. For Python versions < 3.10 however, there was no such statement that is able to select a specified action based on the value of a particular variable. Instead, we usually had to write a statement incorporating multiple if-else statements or even create a dictionary that we could then be indexed based on a specific variable value. In today’s short guide we will demonstrate how to write switch or case statements in the form of if-else series or dictionaries. Additionally, we will demonstrate the new pattern matching feature that was introduced as of Python 3.10 version. Let’s assume that we have a variable named choice that takes some string values and based on the value of the variable will then print a float number. Using a series of if-else statements that would look like the code snippet shown below: if choice == 'optionA': print(1.25)elif choice == 'optionB': print(2.25)elif choice == 'optionC': print(1.75)elif choice == 'optionD': print(2.5)else: print(3.25) In traditional if-else statements we enclose the default option (i.e. the option that should be selected when the value of the variable does not match any of the available options) in the else clause. Another possibility (if you are using Python < 3.10) are dictionaries since they can be easily and efficiently indexed. For instance, we could create a mapping where our options correspond to the keys of the dictionaries and the values to the desired outcome or action. choices = { 'optionA': 1.25, 'optionB': 2.25, 'optionC': 1.75, 'optionD': 2.5,} Finally, we can pick the desired choice by providing the corresponding key: choice = 'optionA'print(choices[choice]) Now we somehow need to deal with the case where our choice is not included in the specified dictionary. If we try to provide a non-existent key, we will get back a KeyError. There are essentially two ways we can handle this scenario. The first option requires the use of get() method when selecting the value from the dictionary that also allows us to specify the default value when the key is not found. For example, >>> choice = 'optionE'>>> print(choices.get(choice, 3.25)3.25 Alternatively, we can use the try statement which is a general way for handling defaults by catching the KeyError. choice = 'optionE'try: print(choices[choice])except KeyError: print(3.25) Python 3.10 introduced the new structural pattern matching (PEP 634): Structural pattern matching has been added in the form of a match statement and case statements of patterns with associated actions. Patterns consist of sequences, mappings, primitive data types as well as class instances. Pattern matching enables programs to extract information from complex data types, branch on the structure of data, and apply specific actions based on different forms of data. 
Source — Python Docs Therefore, we can now simplify our code as below: match choice: case 'optionA': print(1.25) case 'optionB': print(2.25) case 'optionC': print(1.75) case 'optionD': print(2.5) case _: print(3.25) Note that we can even combine several choices in a single case or pattern using the | (‘or’) operator, as demonstrated below: case 'optionA' | 'optionB': print(1.25) In today’s article we discussed a few alternative ways of implementing switch statements in Python, given that prior to version 3.10 there’s no built-in construct like the one found in many other programming languages. Therefore, you could instead write traditional if-else statements, or even initialise a dictionary where the keys correspond to the conditions that would have been used in if/else if statements and the values correspond to the desired value when that particular condition (key) holds. Additionally, we showcased how to use the new pattern matching switch statement that was recently introduced in Python 3.10. Become a member and read every story on Medium. Your membership fee directly supports me and other writers you read. You’ll also get full access to every story on Medium. gmyrianthous.medium.com
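To make the pattern-matching dispatch described above easy to try end to end, here is a minimal, self-contained sketch. It assumes Python 3.10 or newer; the option names and prices mirror the examples in the article and the helper function name is just an illustrative choice.

def price_for(choice: str) -> float:
    # match/case requires Python >= 3.10
    match choice:
        case 'optionA':
            return 1.25
        case 'optionB':
            return 2.25
        case 'optionC':
            return 1.75
        case 'optionD':
            return 2.5
        case _:
            return 3.25   # the wildcard acts as the default branch

if __name__ == '__main__':
    for c in ('optionA', 'optionC', 'optionE'):
        print(c, price_for(c))

Running it prints 1.25, 1.75 and 3.25, matching the behaviour of the if-else and dictionary versions.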
[ { "code": null, "e": 409, "s": 171, "text": "The typical way to deal with multiway branching in programming languages is the if-else clause. When we need to code numerous scenarios, an alternative is the so-called switch or case statement that is supported by most modern languages." }, { "code": null, "e": 741, "s": 409, "text": "For Python versions < 3.10 however, there was no such statement that is able to select a specified action based on the value of a particular variable. Instead, we usually had to write a statement incorporating multiple if-else statements or even create a dictionary that we could then be indexed based on a specific variable value." }, { "code": null, "e": 984, "s": 741, "text": "In today’s short guide we will demonstrate how to write switch or case statements in the form of if-else series or dictionaries. Additionally, we will demonstrate the new pattern matching feature that was introduced as of Python 3.10 version." }, { "code": null, "e": 1135, "s": 984, "text": "Let’s assume that we have a variable named choice that takes some string values and based on the value of the variable will then print a float number." }, { "code": null, "e": 1223, "s": 1135, "text": "Using a series of if-else statements that would look like the code snippet shown below:" }, { "code": null, "e": 1401, "s": 1223, "text": "if choice == 'optionA': print(1.25)elif choice == 'optionB': print(2.25)elif choice == 'optionC': print(1.75)elif choice == 'optionD': print(2.5)else: print(3.25)" }, { "code": null, "e": 1602, "s": 1401, "text": "In traditional if-else statements we enclose the default option (i.e. the option that should be selected when the value of the variable does not match any of the available options) in the else clause." }, { "code": null, "e": 1872, "s": 1602, "text": "Another possibility (if you are using Python < 3.10) are dictionaries since they can be easily and efficiently indexed. For instance, we could create a mapping where our options correspond to the keys of the dictionaries and the values to the desired outcome or action." }, { "code": null, "e": 1964, "s": 1872, "text": "choices = { 'optionA': 1.25, 'optionB': 2.25, 'optionC': 1.75, 'optionD': 2.5,}" }, { "code": null, "e": 2040, "s": 1964, "text": "Finally, we can pick the desired choice by providing the corresponding key:" }, { "code": null, "e": 2081, "s": 2040, "text": "choice = 'optionA'print(choices[choice])" }, { "code": null, "e": 2315, "s": 2081, "text": "Now we somehow need to deal with the case where our choice is not included in the specified dictionary. If we try to provide a non-existent key, we will get back a KeyError. There are essentially two ways we can handle this scenario." }, { "code": null, "e": 2499, "s": 2315, "text": "The first option requires the use of get() method when selecting the value from the dictionary that also allows us to specify the default value when the key is not found. For example," }, { "code": null, "e": 2561, "s": 2499, "text": ">>> choice = 'optionE'>>> print(choices.get(choice, 3.25)3.25" }, { "code": null, "e": 2676, "s": 2561, "text": "Alternatively, we can use the try statement which is a general way for handling defaults by catching the KeyError." 
}, { "code": null, "e": 2756, "s": 2676, "text": "choice = 'optionE'try: print(choices[choice])except KeyError: print(3.25)" }, { "code": null, "e": 2826, "s": 2756, "text": "Python 3.10 introduced the new structural pattern matching (PEP 634):" }, { "code": null, "e": 3225, "s": 2826, "text": "Structural pattern matching has been added in the form of a match statement and case statements of patterns with associated actions. Patterns consist of sequences, mappings, primitive data types as well as class instances. Pattern matching enables programs to extract information from complex data types, branch on the structure of data, and apply specific actions based on different forms of data." }, { "code": null, "e": 3246, "s": 3225, "text": "Source — Python Docs" }, { "code": null, "e": 3296, "s": 3246, "text": "Therefore, we can now simplify our code as below:" }, { "code": null, "e": 3485, "s": 3296, "text": "choice: case 'optionA': print(1.25) case 'optionB': print(2.25) case 'optionC': print(1.75) case 'optionD': print(2.5) case _: print(3.25)" }, { "code": null, "e": 3606, "s": 3485, "text": "Note that we can even combine several choices in a single case or pattern using | (‘or’) operator as demonstrated below:" }, { "code": null, "e": 3649, "s": 3606, "text": "case 'optionA' | 'optionB': print(1.25)" }, { "code": null, "e": 3851, "s": 3649, "text": "In today’s article we discussed a few alternative ways for implementing switch statements in Python given that prior to version 3.10 there’s no built-in construct like many other programming languages." }, { "code": null, "e": 4131, "s": 3851, "text": "Therefore, you could instead write traditional if-else statements or even initialise a dictionary where the keys correspond to the conditions that would have been used in if/else if statements and values correspond to the desired value when that particular condition (key) holds." }, { "code": null, "e": 4256, "s": 4131, "text": "Additionally, we showcased how to use the new pattern matching switch statement that was recently introduced in Python 3.10." }, { "code": null, "e": 4427, "s": 4256, "text": "Become a member and read every story on Medium. Your membership fee directly supports me and other writers you read. You’ll also get full access to every story on Medium." }, { "code": null, "e": 4451, "s": 4427, "text": "gmyrianthous.medium.com" } ]
Bokeh - Plot Tools
When a Bokeh plot is rendered, normally a toolbar appears on the right side of the figure. It contains a default set of tools. First of all, the position of the toolbar can be configured with the toolbar_location property of the figure() function. This property can take one of the following values − "above" "below" "left" "right" "None" For example, the following statement will cause the toolbar to be displayed below the plot − Fig = figure(toolbar_location = "below") The toolbar can also be configured according to requirements by adding the required tools from those defined in the bokeh.models module. For example − Fig.add_tools(WheelZoomTool()) The tools can be classified under the following categories − Pan/Drag Tools Click/Tap Tools Scroll/Pinch Tools BoxSelectTool name: 'box_select' LassoSelectTool name: 'lasso_select' PanTool name: 'pan', 'xpan', 'ypan' TapTool name: 'tap' WheelZoomTool name: 'wheel_zoom', 'xwheel_zoom', 'ywheel_zoom' WheelPanTool name: 'xwheel_pan', 'ywheel_pan' ResetTool name: 'reset' SaveTool name: 'save' ZoomInTool name: 'zoom_in', 'xzoom_in', 'yzoom_in' ZoomOutTool name: 'zoom_out', 'xzoom_out', 'yzoom_out' CrosshairTool name: 'crosshair'
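The following is a minimal sketch that combines both settings, the toolbar_location argument and add_tools(), in one script. The plotted data points and the title are arbitrary; it assumes a standard Bokeh installation.

from bokeh.plotting import figure, show
from bokeh.models import CrosshairTool, TapTool

# Position the toolbar below the plot instead of the default location
fig = figure(toolbar_location="below", tools="pan,wheel_zoom,reset,save",
             title="Toolbar configuration demo")
fig.line([1, 2, 3, 4], [1, 4, 9, 16], line_width=2)

# Add extra tools from bokeh.models on top of the ones named in `tools`
fig.add_tools(CrosshairTool(), TapTool())

show(fig)  # opens the plot in a browser with the configured toolbar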
[ { "code": null, "e": 2558, "s": 2270, "text": "When a Bokeh plot is rendered, normally a tool bar appears on the right side of the figure. It contains a default set of tools. First of all, the position of toolbar can be configured by toolbar_location property in figure() function. This property can take one of the following values −" }, { "code": null, "e": 2566, "s": 2558, "text": "\"above\"" }, { "code": null, "e": 2574, "s": 2566, "text": "\"below\"" }, { "code": null, "e": 2581, "s": 2574, "text": "\"left\"" }, { "code": null, "e": 2589, "s": 2581, "text": "\"right\"" }, { "code": null, "e": 2596, "s": 2589, "text": "\"None\"" }, { "code": null, "e": 2681, "s": 2596, "text": "For example, following statement will cause toolbar to be displayed below the plot −" }, { "code": null, "e": 2722, "s": 2681, "text": "Fig = figure(toolbar_location = \"below\")" }, { "code": null, "e": 2866, "s": 2722, "text": "This toolbar can be configured according to the requirement by adding required from various tools defined in bokeh.models module. For example −" }, { "code": null, "e": 2897, "s": 2866, "text": "Fig.add_tools(WheelZoomTool())" }, { "code": null, "e": 2954, "s": 2897, "text": "The tools can be classified under following categories −" }, { "code": null, "e": 2969, "s": 2954, "text": "Pan/Drag Tools" }, { "code": null, "e": 2985, "s": 2969, "text": "Click/Tap Tools" }, { "code": null, "e": 3004, "s": 2985, "text": "Scroll/Pinch Tools" }, { "code": null, "e": 3018, "s": 3004, "text": "BoxSelectTool" }, { "code": null, "e": 3038, "s": 3018, "text": "Name : 'box_select'" }, { "code": null, "e": 3054, "s": 3038, "text": "LassoSelectTool" }, { "code": null, "e": 3074, "s": 3054, "text": "name: 'lasso_select" }, { "code": null, "e": 3082, "s": 3074, "text": "PanTool" }, { "code": null, "e": 3111, "s": 3082, "text": "name: 'pan', 'xpan', 'ypan'," }, { "code": null, "e": 3119, "s": 3111, "text": "TapTool" }, { "code": null, "e": 3130, "s": 3119, "text": "name: 'tap" }, { "code": null, "e": 3144, "s": 3130, "text": "WheelZoomTool" }, { "code": null, "e": 3193, "s": 3144, "text": "name: 'wheel_zoom', 'xwheel_zoom', 'ywheel_zoom'" }, { "code": null, "e": 3206, "s": 3193, "text": "WheelPanTool" }, { "code": null, "e": 3239, "s": 3206, "text": "name: 'xwheel_pan', 'ywheel_pan'" }, { "code": null, "e": 3249, "s": 3239, "text": "ResetTool" }, { "code": null, "e": 3263, "s": 3249, "text": "name: 'reset'" }, { "code": null, "e": 3272, "s": 3263, "text": "SaveTool" }, { "code": null, "e": 3285, "s": 3272, "text": "name: 'save'" }, { "code": null, "e": 3296, "s": 3285, "text": "ZoomInTool" }, { "code": null, "e": 3336, "s": 3296, "text": "name: 'zoom_in', 'xzoom_in', 'yzoom_in'" }, { "code": null, "e": 3348, "s": 3336, "text": "ZoomOutTool" }, { "code": null, "e": 3391, "s": 3348, "text": "name: 'zoom_out', 'xzoom_out', 'yzoom_out'" }, { "code": null, "e": 3405, "s": 3391, "text": "CrosshairTool" }, { "code": null, "e": 3423, "s": 3405, "text": "name: 'crosshair'" }, { "code": null, "e": 3430, "s": 3423, "text": " Print" }, { "code": null, "e": 3441, "s": 3430, "text": " Add Notes" } ]
How to find the mean of each variable using dplyr, grouped by a factor variable while ignoring NA values, in R?
If there are NA’s in our data set for multiple values of numerical variables with the grouping variable then using na.rm = FALSE needs to be performed multiple times to find the mean or any other statistic for each of the variables with the mean function. But we can do it with summarise_all function of dplyr package that will result in the mean of all numerical variables in just two lines of code. Loading dplyr package − > library(dplyr) Consider the ToothGrowth data set in base R − > str(ToothGrowth) 'data.frame': 60 obs. of 3 variables: $ len : num 4.2 11.5 7.3 5.8 6.4 10 11.2 11.2 5.2 7 ... $ supp: Factor w/ 2 levels "OJ","VC": 2 2 2 2 2 2 2 2 2 2 ... $ dose: num 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 ... > grouping_by_supp <- ToothGrowth %>% group_by(supp) > grouping_by_supp %>% summarise_each(funs(mean(., na.rm = TRUE))) # A tibble: 2 x 3 supp len dose <fct> <dbl> <dbl> 1 OJ 20.7 1.17 2 VC 17.0 1.17 Consider the mtcars data set in base R − > str(mtcars) 'data.frame': 32 obs. of 11 variables: $ mpg : num 21 21 22.8 21.4 18.7 18.1 14.3 24.4 22.8 19.2 ... $ cyl : Factor w/ 3 levels "four","six","eight": 2 2 1 2 3 2 3 1 1 2 ... $ disp: num 160 160 108 258 360 ... $ hp : num 110 110 93 110 175 105 245 62 95 123 ... $ drat: num 3.9 3.9 3.85 3.08 3.15 2.76 3.21 3.69 3.92 3.92 ... $ wt : num 2.62 2.88 2.32 3.21 3.44 ... $ qsec: num 16.5 17 18.6 19.4 17 ... $ vs : num 0 0 1 1 0 1 0 1 1 1 ... $ am : num 1 1 1 0 0 0 0 0 0 0 ... $ gear: num 4 4 4 3 3 3 3 4 4 4 ... $ carb: num 4 4 1 1 2 1 4 2 2 4 ... > grouping_by_cyl <- mtcars %>% group_by(cyl) > grouping_by_cyl %>% summarise_each(funs(mean(., na.rm = TRUE))) # A tibble: 3 x 11 cyl mpg disp hp drat wt qsec vs am gear carb <fct> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> 1 four 26.7 105. 82.6 4.07 2.29 19.1 0.909 0.727 4.09 1.55 2 six 19.7 183. 122. 3.59 3.12 18.0 0.571 0.429 3.86 3.43 3 eight 15.1 353. 209. 3.23 4.00 16.8 0 0.143 3.29 3.5 Consider the CO2 data set in base R − > str(CO2) Classes ‘nfnGroupedData’, ‘nfGroupedData’, ‘groupedData’ and 'data.frame': 84 obs. of 5 variables: $ Plant : Ord.factor w/ 12 levels "Qn1"<"Qn2"<"Qn3"<..: 1 1 1 1 1 1 1 2 2 2 ... $ Type : Factor w/ 2 levels "Quebec","Mississippi": 1 1 1 1 1 1 1 1 1 1 ... $ Treatment: Factor w/ 2 levels "nonchilled","chilled": 1 1 1 1 1 1 1 1 1 1 ... $ conc : num 95 175 250 350 500 675 1000 95 175 250 ... $ uptake : num 16 30.4 34.8 37.2 35.3 39.2 39.7 13.6 27.3 37.1 ... - attr(*, "formula")=Class 'formula' language uptake ~ conc | Plant .. ..- attr(*, ".Environment")=<environment: R_EmptyEnv> - attr(*, "outer")=Class 'formula' language ~Treatment * Type .. 
..- attr(*, ".Environment")=<environment: R_EmptyEnv> - attr(*, "labels")=List of 2 ..$ x: chr "Ambient carbon dioxide concentration" ..$ y: chr "CO2 uptake rate" - attr(*, "units")=List of 2 ..$ x: chr "(uL/L)" ..$ y: chr "(umol/m^2 s)" > grouping_by_Type <- CO2 %>% group_by(Type) > grouping_by_Type %>% summarise_all(funs(mean(., na.rm = TRUE))) # A tibble: 2 x 5 Type Plant Treatment conc uptake <fct> <dbl> <dbl> <dbl> <dbl> 1 Quebec NA NA 435 33.5 2 Mississippi NA NA 435 20.9 In mean.default(Plant, na.rm = TRUE) − argument is not numeric or logical− returning NA In mean.default(Plant, na.rm = TRUE) − argument is not numeric or logical− returning NA In mean.default(Treatment, na.rm = TRUE) − argument is not numeric or logical− returning NA In mean.default(Treatment, na.rm = TRUE) − argument is not numeric or logical − returning NA Here, we are getting some warning messages because the variable Plant and Treatment are not numerical.
[ { "code": null, "e": 1463, "s": 1062, "text": "If there are NA’s in our data set for multiple values of numerical variables with the grouping variable then using na.rm = FALSE needs to be performed multiple times to find the mean or any other statistic for each of the variables with the mean function. But we can do it with summarise_all function of dplyr package that will result in the mean of all numerical variables in just two lines of code." }, { "code": null, "e": 1487, "s": 1463, "text": "Loading dplyr package −" }, { "code": null, "e": 1504, "s": 1487, "text": "> library(dplyr)" }, { "code": null, "e": 1550, "s": 1504, "text": "Consider the ToothGrowth data set in base R −" }, { "code": null, "e": 1981, "s": 1550, "text": "> str(ToothGrowth)\n'data.frame': 60 obs. of 3 variables:\n$ len : num 4.2 11.5 7.3 5.8 6.4 10 11.2 11.2 5.2 7 ...\n$ supp: Factor w/ 2 levels \"OJ\",\"VC\": 2 2 2 2 2 2 2 2 2 2 ...\n$ dose: num 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 ...\n> grouping_by_supp <- ToothGrowth %>% group_by(supp)\n> grouping_by_supp %>% summarise_each(funs(mean(., na.rm = TRUE)))\n# A tibble: 2 x 3\nsupp len dose\n<fct> <dbl> <dbl>\n1 OJ 20.7 1.17\n2 VC 17.0 1.17" }, { "code": null, "e": 2022, "s": 1981, "text": "Consider the mtcars data set in base R −" }, { "code": null, "e": 2995, "s": 2022, "text": "> str(mtcars)\n'data.frame': 32 obs. of 11 variables:\n$ mpg : num 21 21 22.8 21.4 18.7 18.1 14.3 24.4 22.8 19.2 ...\n$ cyl : Factor w/ 3 levels \"four\",\"six\",\"eight\": 2 2 1 2 3 2 3 1 1 2 ...\n$ disp: num 160 160 108 258 360 ...\n$ hp : num 110 110 93 110 175 105 245 62 95 123 ...\n$ drat: num 3.9 3.9 3.85 3.08 3.15 2.76 3.21 3.69 3.92 3.92 ...\n$ wt : num 2.62 2.88 2.32 3.21 3.44 ...\n$ qsec: num 16.5 17 18.6 19.4 17 ...\n$ vs : num 0 0 1 1 0 1 0 1 1 1 ...\n$ am : num 1 1 1 0 0 0 0 0 0 0 ...\n$ gear: num 4 4 4 3 3 3 3 4 4 4 ...\n$ carb: num 4 4 1 1 2 1 4 2 2 4 ...\n> grouping_by_cyl <- mtcars %>% group_by(cyl)\n> grouping_by_cyl %>% summarise_each(funs(mean(., na.rm = TRUE)))\n# A tibble: 3 x 11\ncyl mpg disp hp drat wt qsec vs am gear carb\n<fct> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>\n1 four 26.7 105. 82.6 4.07 2.29 19.1 0.909 0.727 4.09 1.55\n2 six 19.7 183. 122. 3.59 3.12 18.0 0.571 0.429 3.86 3.43\n3 eight 15.1 353. 209. 3.23 4.00 16.8 0 0.143 3.29 3.5" }, { "code": null, "e": 3033, "s": 2995, "text": "Consider the CO2 data set in base R −" }, { "code": null, "e": 4175, "s": 3033, "text": "> str(CO2)\nClasses ‘nfnGroupedData’, ‘nfGroupedData’, ‘groupedData’ and 'data.frame': 84 obs. of 5 variables:\n$ Plant : Ord.factor w/ 12 levels \"Qn1\"<\"Qn2\"<\"Qn3\"<..: 1 1 1 1 1 1 1 2 2 2 ...\n$ Type : Factor w/ 2 levels \"Quebec\",\"Mississippi\": 1 1 1 1 1 1 1 1 1 1 ...\n$ Treatment: Factor w/ 2 levels \"nonchilled\",\"chilled\": 1 1 1 1 1 1 1 1 1 1 ...\n$ conc : num 95 175 250 350 500 675 1000 95 175 250 ...\n$ uptake : num 16 30.4 34.8 37.2 35.3 39.2 39.7 13.6 27.3 37.1 ...\n- attr(*, \"formula\")=Class 'formula' language uptake ~ conc | Plant\n.. ..- attr(*, \".Environment\")=<environment: R_EmptyEnv>\n- attr(*, \"outer\")=Class 'formula' language ~Treatment * Type\n.. 
..- attr(*, \".Environment\")=<environment: R_EmptyEnv>\n- attr(*, \"labels\")=List of 2\n..$ x: chr \"Ambient carbon dioxide concentration\"\n..$ y: chr \"CO2 uptake rate\"\n- attr(*, \"units\")=List of 2\n..$ x: chr \"(uL/L)\"\n..$ y: chr \"(umol/m^2 s)\"\n> grouping_by_Type <- CO2 %>% group_by(Type)\n> grouping_by_Type %>% summarise_all(funs(mean(., na.rm = TRUE)))\n# A tibble: 2 x 5\nType Plant Treatment conc uptake\n<fct> <dbl> <dbl> <dbl> <dbl>\n1 Quebec NA NA 435 33.5\n2 Mississippi NA NA 435 20.9" }, { "code": null, "e": 4263, "s": 4175, "text": "In mean.default(Plant, na.rm = TRUE) − argument is not numeric or logical− returning NA" }, { "code": null, "e": 4351, "s": 4263, "text": "In mean.default(Plant, na.rm = TRUE) − argument is not numeric or logical− returning NA" }, { "code": null, "e": 4443, "s": 4351, "text": "In mean.default(Treatment, na.rm = TRUE) − argument is not numeric or logical− returning NA" }, { "code": null, "e": 4536, "s": 4443, "text": "In mean.default(Treatment, na.rm = TRUE) − argument is not numeric or logical − returning NA" }, { "code": null, "e": 4639, "s": 4536, "text": "Here, we are getting some warning messages because the variable Plant and Treatment are not numerical." } ]
MLOps with Kubernetes, RabbitMQ and FastAPI | by Andrej Baranovskij | Towards Data Science
You often could hear people saying — many ML projects are stopped before they reach the production phase. One of the reasons for this, typically ML projects are implemented as monoliths from the start and when the time comes to run them in production, it is impossible to manage, transform and maintain the code. ML project code is implemented as a few or even one large notebook, where data processing, model training, and prediction all are glued together. This makes it hard to maintain such cumbersome code when the time comes to change the code and introduce user requests. As a result, users are not happy and this leads to project termination. Much more effective to build ML system from the start and follow microservice architecture. You can use containers to encapsulate the logic. There can be a separate container for data processing, ML model training, and ML model serving. When running separate containers, not only simplifies code maintenance, but you could also scale containers separately and run them on different hardware. This can improve system performance. The question comes, how you could implement communication between these services. I was researching available tools, such as MLFlow. These tools are great, but often they are too complex and large for the task. Especially when you want simply to run ML logic in different containers and that’s pretty much it. This is why I decided to build my own small and simple open-source product Skipper to run ML workloads. In this article, I will explain how you can scale TensorFlow model on Kubernetes with Skipper. The same approach can be applied for PyTorch models or any other non-ML-related functionality. The public port is exposed through Nginx FastAPI is serving REST endpoints API. At the moment with providing two generic endpoints, one for async requests and another for sync Workflow container responsible for request routing Logger container provides generic logging capability Celery container is used to execute the async request RabbitMQ is a message broker, it enables event-based communication between Skipper containers SkipperLib is a Python library, it encapsulates API code specific to RabbitMQ A set of microservices is created as a sample containers, to show how Skipper works with ML specific (can be non ML too) services You can run Skipper containers in multiple ways: Directly with Python virtual environment on your machine On Docker containers through Docker compose. Follow the readme file for the instructions On Kubernetes. Follow the readme file for the instructions In a production environment, you should run Skipper in Kubernetes, with Kubernetes it is easier to scale containers. Skipper REST API is exposed through Kubernetes NGINX Ingress controller: apiVersion: networking.k8s.io/v1kind: Ingressmetadata: name: api-ingressspec: rules: - host: kubernetes.docker.internal http: paths: - path: /api/v1/skipper/tasks/ pathType: Prefix backend: service: name: skipper-api port: number: 8000 ingressClassName: nginx---apiVersion: networking.k8s.io/v1kind: IngressClassmetadata: name: nginxspec: controller: k8s.io/ingress-nginx Ingress redirects to /api/v1/skipper/tasks/, which is served from FastAPI container. There are two generic endpoints defined. One to serve async requests and another to serve sync requests. Async request executes a call to train model, this is a long-running task. Sync request handles model prediction calls and routes requests to model serving. All requests travel through RabbitMQ message queue. 
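To make the API layer concrete, below is a minimal FastAPI sketch of the two generic endpoints described above. The /api/v1/skipper/tasks/ prefix comes from the ingress definition in this article, but the async/sync suffixes, the TaskRequest schema and the publish_task helper are illustrative assumptions, not Skipper's actual code.

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class TaskRequest(BaseModel):   # illustrative payload shape
    task_name: str
    payload: dict

def publish_task(message: dict, wait_for_reply: bool):
    """Stand-in for the RabbitMQ publishing done by SkipperLib (assumption)."""
    # In the real system this would publish to a queue via pika and,
    # for sync calls, block until a reply arrives on a reply queue.
    return {"echo": message} if wait_for_reply else "task-123"

@app.post("/api/v1/skipper/tasks/async")
async def submit_async_task(req: TaskRequest):
    # Long-running jobs (e.g. model training) are only enqueued here;
    # a worker picks them up from RabbitMQ later.
    task_id = publish_task({"task_name": req.task_name, "payload": req.payload},
                           wait_for_reply=False)
    return {"task_id": task_id, "status": "queued"}

@app.post("/api/v1/skipper/tasks/sync")
async def submit_sync_task(req: TaskRequest):
    # Prediction requests are routed through RabbitMQ and the reply
    # is returned within the same HTTP call.
    result = publish_task({"task_name": req.task_name, "payload": req.payload},
                          wait_for_reply=True)
    return {"result": result}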
All calls to RabbitMQ are executed through SkipperLib. This allows to encapsulate RabbitMQ specific code inside the library and change it, without touching code in the containers. Kubernetes Pod for model training runs two containers. First is a master container, which trains a model. The second is a side-car container, responsible for data preparation and processing. We are running these two containers in the same Pod, because there is no need to scale them separately and it is more convenient to share data between containers when they run in the same Pod. This is not true about prediction logic, which we run in separate Pod, to be able to scale it separately, more about this in the next chapter. Model training and data preparation containers are sharing the same Kubernetes volume for data storage. Model training container: volumeMounts: - name: data mountPath: /usr/src/trainingservice/models Data preparation container: volumeMounts: - name: data mountPath: /usr/src/dataservice/models Mount paths are different, but the target location will be the same. Data accessed through these paths will be the same. Because both containers are using the same volume ‘data’: volumes:- name: data persistentVolumeClaim: claimName: training-service-claim When the model is trained and the model file is saved, we need to transfer it to the serving Pod, when the model prediction container runs. One of the solutions is to use external cloud storage and upload model files there. But if the model file is not too huge, Skipper allows to transfer it directly from training to serving Pod. Model is archived, encoded into a string, wrapped into JSON together with other metadata, and sent to RabbitMQ queue to be delivered to serving Pod. The model structure is archived into a single file: shutil.make_archive(base_name=os.getenv('MODELS_FOLDER') + str(ts), format='zip', root_dir=os.getenv('MODELS_FOLDER') + str(ts)) The archived model file is encoded into a base64 string: model_encoded = Nonetry: with open(os.getenv('MODELS_FILE'), 'rb') as model_file: model_encoded = base64.b64encode(model_file.read())except Exception as e: print(str(e)) In the last step, we wrap everything into JSON: data = { 'name': 'model_boston_' + str(ts), 'archive_name': 'model_boston_' + str(ts) + '.zip', 'model': model_encoded, 'stats': stats_encoded, 'stats_name': 'train_stats.csv'}content = json.dumps(data) This message is submitted to RabbitMQ for delivery. Model is sent through ‘fanout’ exchange on RabbitMQ, this allows to send the same data at once to all subscribers. By default, RabbitMQ would send the message to one subscriber at a time, which works great in a cluster as a load balancing. But in this case, we want all receivers in the cluster to get the new model, this is why we are using ‘fanout’ exchange. This is how the message is published to ‘fanout’ exchange through RabbitMQ: credentials = pika.PlainCredentials(self.username, self.password)connection = pika.BlockingConnection( pika.ConnectionParameters(host=self.host, port=self.port, credentials=credentials))channel = connection.channel()channel.exchange_declare(exchange='skipper_storage', exchange_type='fanout')channel.basic_publish(exchange='skipper_storage', routing_key='', body=payload)connection.close() Kubernetes Pod for model serving runs two containers. The master container is responsible to execute prediction requests using TensorFlow API. Side-car container listens for the messages from RabbitMQ, when the new model file is sent, decodes the file and extracts the model. 
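For the receiving side that the last paragraph describes, here is a minimal sketch of a pika consumer bound to the same 'skipper_storage' fanout exchange; it decodes the base64 payload and unpacks the archive. The message keys match the publisher shown above, while the host, credentials and target directories are placeholder assumptions.

import base64
import json
import shutil
import pika

def on_message(channel, method, properties, body):
    data = json.loads(body)
    # Mirror of the publisher: the model archive travels as a base64 string
    archive_path = "/tmp/" + data["archive_name"]                # placeholder location
    with open(archive_path, "wb") as f:
        f.write(base64.b64decode(data["model"]))
    shutil.unpack_archive(archive_path, "/tmp/serving/" + data["name"], "zip")

connection = pika.BlockingConnection(
    pika.ConnectionParameters(host="localhost",                  # placeholder host
                              credentials=pika.PlainCredentials("guest", "guest")))
channel = connection.channel()
channel.exchange_declare(exchange="skipper_storage", exchange_type="fanout")
# Each subscriber gets its own exclusive queue, so every replica receives the model
result = channel.queue_declare(queue="", exclusive=True)
channel.queue_bind(exchange="skipper_storage", queue=result.method.queue)
channel.basic_consume(queue=result.method.queue, on_message_callback=on_message,
                      auto_ack=True)
channel.start_consuming()

Because the exchange is declared as fanout, every consumer bound to it receives its own copy of the published model message.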
Both containers share the same storage. Serving container: volumeMounts: - name: data mountPath: /usr/src/servingservice/models/serving Side-car container for model file processing: volumeMounts: - name: data mountPath: /usr/src/servingservice/storage/models/serving/ Storage is mounted to the same volume claim: volumes:- name: data persistentVolumeClaim: claimName: serving-service-claim When the container responsible for model file processing receives the model, it executes similar steps, as the container in model training Pod, where the model was prepared to be sent through RabbitMQ: data_json = json.loads(data)model_name = data_json['name']archive_name = data_json['archive_name']stats_name = data_json['stats_name']model_decoded = base64.b64decode(data_json['model'])stats_decoded = base64.b64decode(data_json['stats']) It decodes the string, extracts the file. Model serving Pod can be scaled to multiple instances. If instances would run on separate cluster nodes, then each node would receive the new model from RabbitMQ message. But if several instances would run on the single node, both of them would try to write the model into the same storage. We are handling the exception if one of the instances would fail. The goal of this article is to introduce Skipper. Our open-source product for MLOps. Currently, this product is ready for production use. Our goal is to further enhance it, in particular, to add FastAPI security configuration, add more sophisticated workflow support and improve logging. We plan to test Skipper with Kubernetes auto-scaling functionality. We are using the Skipper platform to implement our ML services. Skipper GitHub repo. Follow readme for setup instructions
[ { "code": null, "e": 823, "s": 172, "text": "You often could hear people saying — many ML projects are stopped before they reach the production phase. One of the reasons for this, typically ML projects are implemented as monoliths from the start and when the time comes to run them in production, it is impossible to manage, transform and maintain the code. ML project code is implemented as a few or even one large notebook, where data processing, model training, and prediction all are glued together. This makes it hard to maintain such cumbersome code when the time comes to change the code and introduce user requests. As a result, users are not happy and this leads to project termination." }, { "code": null, "e": 1252, "s": 823, "text": "Much more effective to build ML system from the start and follow microservice architecture. You can use containers to encapsulate the logic. There can be a separate container for data processing, ML model training, and ML model serving. When running separate containers, not only simplifies code maintenance, but you could also scale containers separately and run them on different hardware. This can improve system performance." }, { "code": null, "e": 1666, "s": 1252, "text": "The question comes, how you could implement communication between these services. I was researching available tools, such as MLFlow. These tools are great, but often they are too complex and large for the task. Especially when you want simply to run ML logic in different containers and that’s pretty much it. This is why I decided to build my own small and simple open-source product Skipper to run ML workloads." }, { "code": null, "e": 1856, "s": 1666, "text": "In this article, I will explain how you can scale TensorFlow model on Kubernetes with Skipper. The same approach can be applied for PyTorch models or any other non-ML-related functionality." }, { "code": null, "e": 1897, "s": 1856, "text": "The public port is exposed through Nginx" }, { "code": null, "e": 2032, "s": 1897, "text": "FastAPI is serving REST endpoints API. At the moment with providing two generic endpoints, one for async requests and another for sync" }, { "code": null, "e": 2083, "s": 2032, "text": "Workflow container responsible for request routing" }, { "code": null, "e": 2136, "s": 2083, "text": "Logger container provides generic logging capability" }, { "code": null, "e": 2190, "s": 2136, "text": "Celery container is used to execute the async request" }, { "code": null, "e": 2284, "s": 2190, "text": "RabbitMQ is a message broker, it enables event-based communication between Skipper containers" }, { "code": null, "e": 2362, "s": 2284, "text": "SkipperLib is a Python library, it encapsulates API code specific to RabbitMQ" }, { "code": null, "e": 2492, "s": 2362, "text": "A set of microservices is created as a sample containers, to show how Skipper works with ML specific (can be non ML too) services" }, { "code": null, "e": 2541, "s": 2492, "text": "You can run Skipper containers in multiple ways:" }, { "code": null, "e": 2598, "s": 2541, "text": "Directly with Python virtual environment on your machine" }, { "code": null, "e": 2687, "s": 2598, "text": "On Docker containers through Docker compose. Follow the readme file for the instructions" }, { "code": null, "e": 2746, "s": 2687, "text": "On Kubernetes. Follow the readme file for the instructions" }, { "code": null, "e": 2863, "s": 2746, "text": "In a production environment, you should run Skipper in Kubernetes, with Kubernetes it is easier to scale containers." 
}, { "code": null, "e": 2936, "s": 2863, "text": "Skipper REST API is exposed through Kubernetes NGINX Ingress controller:" }, { "code": null, "e": 3419, "s": 2936, "text": "apiVersion: networking.k8s.io/v1kind: Ingressmetadata: name: api-ingressspec: rules: - host: kubernetes.docker.internal http: paths: - path: /api/v1/skipper/tasks/ pathType: Prefix backend: service: name: skipper-api port: number: 8000 ingressClassName: nginx---apiVersion: networking.k8s.io/v1kind: IngressClassmetadata: name: nginxspec: controller: k8s.io/ingress-nginx" }, { "code": null, "e": 3504, "s": 3419, "text": "Ingress redirects to /api/v1/skipper/tasks/, which is served from FastAPI container." }, { "code": null, "e": 3818, "s": 3504, "text": "There are two generic endpoints defined. One to serve async requests and another to serve sync requests. Async request executes a call to train model, this is a long-running task. Sync request handles model prediction calls and routes requests to model serving. All requests travel through RabbitMQ message queue." }, { "code": null, "e": 3998, "s": 3818, "text": "All calls to RabbitMQ are executed through SkipperLib. This allows to encapsulate RabbitMQ specific code inside the library and change it, without touching code in the containers." }, { "code": null, "e": 4189, "s": 3998, "text": "Kubernetes Pod for model training runs two containers. First is a master container, which trains a model. The second is a side-car container, responsible for data preparation and processing." }, { "code": null, "e": 4525, "s": 4189, "text": "We are running these two containers in the same Pod, because there is no need to scale them separately and it is more convenient to share data between containers when they run in the same Pod. This is not true about prediction logic, which we run in separate Pod, to be able to scale it separately, more about this in the next chapter." }, { "code": null, "e": 4629, "s": 4525, "text": "Model training and data preparation containers are sharing the same Kubernetes volume for data storage." }, { "code": null, "e": 4655, "s": 4629, "text": "Model training container:" }, { "code": null, "e": 4729, "s": 4655, "text": "volumeMounts: - name: data mountPath: /usr/src/trainingservice/models" }, { "code": null, "e": 4757, "s": 4729, "text": "Data preparation container:" }, { "code": null, "e": 4827, "s": 4757, "text": "volumeMounts: - name: data mountPath: /usr/src/dataservice/models" }, { "code": null, "e": 5006, "s": 4827, "text": "Mount paths are different, but the target location will be the same. Data accessed through these paths will be the same. Because both containers are using the same volume ‘data’:" }, { "code": null, "e": 5088, "s": 5006, "text": "volumes:- name: data persistentVolumeClaim: claimName: training-service-claim" }, { "code": null, "e": 5569, "s": 5088, "text": "When the model is trained and the model file is saved, we need to transfer it to the serving Pod, when the model prediction container runs. One of the solutions is to use external cloud storage and upload model files there. But if the model file is not too huge, Skipper allows to transfer it directly from training to serving Pod. Model is archived, encoded into a string, wrapped into JSON together with other metadata, and sent to RabbitMQ queue to be delivered to serving Pod." 
}, { "code": null, "e": 5621, "s": 5569, "text": "The model structure is archived into a single file:" }, { "code": null, "e": 5788, "s": 5621, "text": "shutil.make_archive(base_name=os.getenv('MODELS_FOLDER') + str(ts), format='zip', root_dir=os.getenv('MODELS_FOLDER') + str(ts))" }, { "code": null, "e": 5845, "s": 5788, "text": "The archived model file is encoded into a base64 string:" }, { "code": null, "e": 6028, "s": 5845, "text": "model_encoded = Nonetry: with open(os.getenv('MODELS_FILE'), 'rb') as model_file: model_encoded = base64.b64encode(model_file.read())except Exception as e: print(str(e))" }, { "code": null, "e": 6076, "s": 6028, "text": "In the last step, we wrap everything into JSON:" }, { "code": null, "e": 6294, "s": 6076, "text": "data = { 'name': 'model_boston_' + str(ts), 'archive_name': 'model_boston_' + str(ts) + '.zip', 'model': model_encoded, 'stats': stats_encoded, 'stats_name': 'train_stats.csv'}content = json.dumps(data)" }, { "code": null, "e": 6707, "s": 6294, "text": "This message is submitted to RabbitMQ for delivery. Model is sent through ‘fanout’ exchange on RabbitMQ, this allows to send the same data at once to all subscribers. By default, RabbitMQ would send the message to one subscriber at a time, which works great in a cluster as a load balancing. But in this case, we want all receivers in the cluster to get the new model, this is why we are using ‘fanout’ exchange." }, { "code": null, "e": 6783, "s": 6707, "text": "This is how the message is published to ‘fanout’ exchange through RabbitMQ:" }, { "code": null, "e": 7302, "s": 6783, "text": "credentials = pika.PlainCredentials(self.username, self.password)connection = pika.BlockingConnection( pika.ConnectionParameters(host=self.host, port=self.port, credentials=credentials))channel = connection.channel()channel.exchange_declare(exchange='skipper_storage', exchange_type='fanout')channel.basic_publish(exchange='skipper_storage', routing_key='', body=payload)connection.close()" }, { "code": null, "e": 7578, "s": 7302, "text": "Kubernetes Pod for model serving runs two containers. The master container is responsible to execute prediction requests using TensorFlow API. Side-car container listens for the messages from RabbitMQ, when the new model file is sent, decodes the file and extracts the model." }, { "code": null, "e": 7618, "s": 7578, "text": "Both containers share the same storage." 
}, { "code": null, "e": 7637, "s": 7618, "text": "Serving container:" }, { "code": null, "e": 7718, "s": 7637, "text": "volumeMounts: - name: data mountPath: /usr/src/servingservice/models/serving" }, { "code": null, "e": 7764, "s": 7718, "text": "Side-car container for model file processing:" }, { "code": null, "e": 7854, "s": 7764, "text": "volumeMounts: - name: data mountPath: /usr/src/servingservice/storage/models/serving/" }, { "code": null, "e": 7899, "s": 7854, "text": "Storage is mounted to the same volume claim:" }, { "code": null, "e": 7980, "s": 7899, "text": "volumes:- name: data persistentVolumeClaim: claimName: serving-service-claim" }, { "code": null, "e": 8182, "s": 7980, "text": "When the container responsible for model file processing receives the model, it executes similar steps, as the container in model training Pod, where the model was prepared to be sent through RabbitMQ:" }, { "code": null, "e": 8421, "s": 8182, "text": "data_json = json.loads(data)model_name = data_json['name']archive_name = data_json['archive_name']stats_name = data_json['stats_name']model_decoded = base64.b64decode(data_json['model'])stats_decoded = base64.b64decode(data_json['stats'])" }, { "code": null, "e": 8463, "s": 8421, "text": "It decodes the string, extracts the file." }, { "code": null, "e": 8820, "s": 8463, "text": "Model serving Pod can be scaled to multiple instances. If instances would run on separate cluster nodes, then each node would receive the new model from RabbitMQ message. But if several instances would run on the single node, both of them would try to write the model into the same storage. We are handling the exception if one of the instances would fail." }, { "code": null, "e": 9240, "s": 8820, "text": "The goal of this article is to introduce Skipper. Our open-source product for MLOps. Currently, this product is ready for production use. Our goal is to further enhance it, in particular, to add FastAPI security configuration, add more sophisticated workflow support and improve logging. We plan to test Skipper with Kubernetes auto-scaling functionality. We are using the Skipper platform to implement our ML services." } ]
How to add binary numbers using Python?
If you have binary numbers as strings, you can convert them to ints first using int(str, base) by providing the base as 2. Then add the numbers like you'd normally do. Finally, convert the sum back to a string using the bin function. For example,

a = '001'
b = '011'
sm = int(a, 2) + int(b, 2)
c = bin(sm)
print(c)

This will give the output:

0b100
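If you need the result as a plain bit string (without the 0b prefix) or padded to a fixed width, Python's built-in format() can do that directly; the width of 4 below is just an illustrative choice.

a, b = '001', '011'
sm = int(a, 2) + int(b, 2)
print(bin(sm))            # 0b100
print(format(sm, 'b'))    # 100   (no 0b prefix)
print(format(sm, '04b'))  # 0100  (zero-padded to 4 bits)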
[ { "code": null, "e": 1303, "s": 1062, "text": "If you have binary numbers as strings, you can convert them to ints first using int(str, base) by providing the base as 2. Then add the numbers like you'd normally do. Finally convert it back to a string using the bin function. For example," }, { "code": null, "e": 1369, "s": 1303, "text": "a = '001'\nb = '011'\nsm = int(a,2) + int(b,2)\nc = bin(sm)\nprint(c)" }, { "code": null, "e": 1396, "s": 1369, "text": "This will give the output:" }, { "code": null, "e": 1402, "s": 1396, "text": "0b100" } ]
How to start a service from notification in Android?
This example demonstrate about How to start a service from notification in Android. Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project. Step 2 − Add the following code to res/layout/activity_main.xml. <? xml version = "1.0" encoding = "utf-8" ?> <RelativeLayout xmlns:android = "http://schemas.android.com/apk/res/android" xmlns:tools = "http://schemas.android.com/tools" android:layout_width = "match_parent" android:layout_height = "match_parent" android:padding = "16dp" tools:context= ".MainActivity" > <Button android:onClick = "createNotification" android:text = "create notification" android:layout_centerInParent = "true" android:layout_width = "match_parent" android:layout_height = "wrap_content" /> </RelativeLayout> Step 3 − Add the following code to src/MainActivity. package app.tutorialspoint.com.notifyme ; import android.app.AlarmManager ; import android.app.PendingIntent ; import android.content.Intent ; import android.os.Bundle ; import android.support.v7.app.AppCompatActivity ; import android.view.View ; import java.util.Calendar ; public class MainActivity extends AppCompatActivity { @Override protected void onCreate (Bundle savedInstanceState) { super .onCreate(savedInstanceState) ; setContentView(R.layout. activity_main ) ; } public void createNotification (View view) { Intent myIntent = new Intent(getApplicationContext() , NotifyService. class ) ; AlarmManager alarmManager = (AlarmManager) getSystemService( ALARM_SERVICE ) ; PendingIntent pendingIntent = PendingIntent. getService ( this, 0 , myIntent , 0 ) ; Calendar calendar = Calendar. getInstance () ; calendar.set(Calendar. SECOND , 0 ) ; calendar.set(Calendar. MINUTE , 0 ) ; calendar.set(Calendar. HOUR , 0 ) ; calendar.set(Calendar. AM_PM , Calendar. AM ) ; calendar.add(Calendar. DAY_OF_MONTH , 1 ) ; alarmManager.setRepeating(AlarmManager. RTC_WAKEUP , calendar.getTimeInMillis() , 1000 * 60 * 60 * 24 , pendingIntent) ; } } Step 4 − Add the following code to src/NotifyService package app.tutorialspoint.com.notifyme ; import android.app.NotificationChannel ; import android.app.NotificationManager ; import android.app.PendingIntent ; import android.app.Service ; import android.content.Intent ; import android.os.IBinder ; import android.support.v4.app.NotificationCompat public class NotifyService extends Service { public static final String NOTIFICATION_CHANNEL_ID = "10001" ; private final static String default_notification_channel_id = "default" ; public NotifyService () { } @Override public IBinder onBind (Intent intent) { Intent notificationIntent = new Intent(getApplicationContext() , MainActivity. class ) ; notificationIntent.putExtra( "fromNotification" , true ) ; notificationIntent.setFlags(Intent. FLAG_ACTIVITY_CLEAR_TOP | Intent. FLAG_ACTIVITY_SINGLE_TOP ) ; PendingIntent pendingIntent = PendingIntent. getActivity ( this, 0 , notificationIntent , 0 ) ; NotificationManager mNotificationManager = (NotificationManager) getSystemService( NOTIFICATION_SERVICE ) ; NotificationCompat.Builder mBuilder = new NotificationCompat.Builder(getApplicationContext() , default_notification_channel_id ) ; mBuilder.setContentTitle( "My Notification" ) ; mBuilder.setContentIntent(pendingIntent) ; mBuilder.setContentText( "Notification Listener Service Example" ) ; mBuilder.setSmallIcon(R.drawable. ic_launcher_foreground ) ; mBuilder.setAutoCancel( true ) ; if (android.os.Build.VERSION. SDK_INT >= android.os.Build.VERSION_CODES. 
O ) { int importance = NotificationManager. IMPORTANCE_HIGH ; NotificationChannel notificationChannel = new NotificationChannel( NOTIFICATION_CHANNEL_ID , "NOTIFICATION_CHANNEL_NAME" , importance) ; mBuilder.setChannelId( NOTIFICATION_CHANNEL_ID ) ; assert mNotificationManager != null; mNotificationManager.createNotificationChannel(notificationChannel) ; } assert mNotificationManager != null; mNotificationManager.notify(( int ) System. currentTimeMillis () , mBuilder.build()) ; throw new UnsupportedOperationException( "Not yet implemented" ) ; } } Step 5 − Add the following code to AndroidManifest.xml <? xml version = "1.0" encoding= "utf-8" ?> <manifest xmlns: android = "http://schemas.android.com/apk/res/android" package= "app.tutorialspoint.com.notifyme" gt; <uses-permission android :name = "android.permission.VIBRATE" /> <uses-permission android :name= "android.permission.RECEIVE_BOOT_COMPLETED" /> <application android :allowBackup= "true" android :icon= "@mipmap/ic_launcher" android :label= "@string/app_name" android :roundIcon= "@mipmap/ic_launcher_round" android :supportsRtl= "true" android :theme= "@style/AppTheme" > <service android :name= ".NotifyService" android :enabled= "true" android :exported= "true" > </service> <activity android :name= ".MainActivity" > <intent-filter> <action android :name= "android.intent.action.MAIN" /> <category android :name= "android.intent.category.LAUNCHER" /> </intent-filter> </activity> </application> </manifest> Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen − Click here to download the project code
[ { "code": null, "e": 1146, "s": 1062, "text": "This example demonstrate about How to start a service from notification in Android." }, { "code": null, "e": 1275, "s": 1146, "text": "Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project." }, { "code": null, "e": 1340, "s": 1275, "text": "Step 2 − Add the following code to res/layout/activity_main.xml." }, { "code": null, "e": 1915, "s": 1340, "text": "<? xml version = \"1.0\" encoding = \"utf-8\" ?>\n<RelativeLayout xmlns:android = \"http://schemas.android.com/apk/res/android\"\n xmlns:tools = \"http://schemas.android.com/tools\"\n android:layout_width = \"match_parent\"\n android:layout_height = \"match_parent\"\n android:padding = \"16dp\"\n tools:context= \".MainActivity\" >\n <Button\n android:onClick = \"createNotification\"\n android:text = \"create notification\"\n android:layout_centerInParent = \"true\"\n android:layout_width = \"match_parent\"\n android:layout_height = \"wrap_content\" />\n</RelativeLayout>" }, { "code": null, "e": 1968, "s": 1915, "text": "Step 3 − Add the following code to src/MainActivity." }, { "code": null, "e": 3202, "s": 1968, "text": "package app.tutorialspoint.com.notifyme ;\nimport android.app.AlarmManager ;\nimport android.app.PendingIntent ;\nimport android.content.Intent ;\nimport android.os.Bundle ;\nimport android.support.v7.app.AppCompatActivity ;\nimport android.view.View ;\nimport java.util.Calendar ;\npublic class MainActivity extends AppCompatActivity {\n @Override\n protected void onCreate (Bundle savedInstanceState) {\n super .onCreate(savedInstanceState) ;\n setContentView(R.layout. activity_main ) ;\n }\n public void createNotification (View view) {\n Intent myIntent = new Intent(getApplicationContext() , NotifyService. class ) ;\n AlarmManager alarmManager = (AlarmManager) getSystemService( ALARM_SERVICE ) ;\n PendingIntent pendingIntent = PendingIntent. getService ( this, 0 , myIntent , 0 ) ;\n Calendar calendar = Calendar. getInstance () ;\n calendar.set(Calendar. SECOND , 0 ) ;\n calendar.set(Calendar. MINUTE , 0 ) ;\n calendar.set(Calendar. HOUR , 0 ) ;\n calendar.set(Calendar. AM_PM , Calendar. AM ) ;\n calendar.add(Calendar. DAY_OF_MONTH , 1 ) ;\n alarmManager.setRepeating(AlarmManager. RTC_WAKEUP , calendar.getTimeInMillis() ,\n 1000 * 60 * 60 * 24 , pendingIntent) ;\n }\n}" }, { "code": null, "e": 3255, "s": 3202, "text": "Step 4 − Add the following code to src/NotifyService" }, { "code": null, "e": 5449, "s": 3255, "text": "package app.tutorialspoint.com.notifyme ;\nimport android.app.NotificationChannel ;\nimport android.app.NotificationManager ;\nimport android.app.PendingIntent ;\nimport android.app.Service ;\nimport android.content.Intent ;\nimport android.os.IBinder ;\nimport android.support.v4.app.NotificationCompat\npublic class NotifyService extends Service {\n public static final String NOTIFICATION_CHANNEL_ID = \"10001\" ;\n private final static String default_notification_channel_id = \"default\" ;\n public NotifyService () {\n }\n @Override\n public IBinder onBind (Intent intent) {\n Intent notificationIntent = new Intent(getApplicationContext() ,\n MainActivity. class ) ;\n notificationIntent.putExtra( \"fromNotification\" , true ) ;\n notificationIntent.setFlags(Intent. FLAG_ACTIVITY_CLEAR_TOP | Intent. FLAG_ACTIVITY_SINGLE_TOP ) ;\n PendingIntent pendingIntent = PendingIntent. 
getActivity ( this, 0 , notificationIntent , 0 ) ;\n NotificationManager mNotificationManager = (NotificationManager) getSystemService( NOTIFICATION_SERVICE ) ;\n NotificationCompat.Builder mBuilder = new NotificationCompat.Builder(getApplicationContext() , default_notification_channel_id ) ;\n mBuilder.setContentTitle( \"My Notification\" ) ;\n mBuilder.setContentIntent(pendingIntent) ;\n mBuilder.setContentText( \"Notification Listener Service Example\" ) ;\n mBuilder.setSmallIcon(R.drawable. ic_launcher_foreground ) ;\n mBuilder.setAutoCancel( true ) ;\n if (android.os.Build.VERSION. SDK_INT >= android.os.Build.VERSION_CODES. O ) {\n int importance = NotificationManager. IMPORTANCE_HIGH ;\n NotificationChannel notificationChannel = new NotificationChannel( NOTIFICATION_CHANNEL_ID , \"NOTIFICATION_CHANNEL_NAME\" , importance) ;\n mBuilder.setChannelId( NOTIFICATION_CHANNEL_ID ) ;\n assert mNotificationManager != null;\n mNotificationManager.createNotificationChannel(notificationChannel) ;\n }\n assert mNotificationManager != null;\n mNotificationManager.notify(( int ) System. currentTimeMillis () ,\n mBuilder.build()) ;\n throw new UnsupportedOperationException( \"Not yet implemented\" ) ;\n }\n}" }, { "code": null, "e": 5504, "s": 5449, "text": "Step 5 − Add the following code to AndroidManifest.xml" }, { "code": null, "e": 6520, "s": 5504, "text": "<? xml version = \"1.0\" encoding= \"utf-8\" ?>\n<manifest xmlns: android = \"http://schemas.android.com/apk/res/android\"\n package= \"app.tutorialspoint.com.notifyme\" gt;\n <uses-permission android :name = \"android.permission.VIBRATE\" />\n <uses-permission android :name= \"android.permission.RECEIVE_BOOT_COMPLETED\" />\n <application\n android :allowBackup= \"true\"\n android :icon= \"@mipmap/ic_launcher\"\n android :label= \"@string/app_name\"\n android :roundIcon= \"@mipmap/ic_launcher_round\"\n android :supportsRtl= \"true\"\n android :theme= \"@style/AppTheme\" >\n <service\n android :name= \".NotifyService\"\n android :enabled= \"true\"\n android :exported= \"true\" >\n </service>\n <activity android :name= \".MainActivity\" >\n <intent-filter>\n <action android :name= \"android.intent.action.MAIN\" />\n <category android :name= \"android.intent.category.LAUNCHER\" />\n </intent-filter>\n </activity>\n </application>\n</manifest>" }, { "code": null, "e": 6867, "s": 6520, "text": "Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen −" }, { "code": null, "e": 6909, "s": 6867, "text": "Click here to download the project code" } ]
Transfer Learning with VGG16 and Keras | by Gabriel Cassimiro | Towards Data Science
The main goal of this article is to demonstrate with code and examples how can you use an already trained CNN (convolutional neural network) to solve your specific problem. Convolutional Networks are great for image problems however, they are computationally expensive if you use a big architecture and don’t have a GPU. For that, we have two solutions: GPUs are much more efficient to train NNs but they are not that common on regular computers. So that is where google colab come to save us. They offer virtual machines with GPUs up to 16 GB of RAM and the best part of it all: It is Free. But even with those upgraded specs, you can still struggle when training a brand new CNN. That’s where Transfer Learning can help you achieve great results with less expensive computation. So what is transfer learning? To better explain that we must first understand the basic architecture of a CNN. A CNN can be divided into two main parts: Feature learning and classification. Feature Learning In this part, the main goal of the NN is to find patterns in the pixels of the images that can be useful to identify the targets of the classification. That happens in the convolution layers of the network that specializes in those patterns for the problem at hand. I’m not going deep into how this works underneath the hood, but if you want to dig deeper I highly recommend this article and this amazing video. Classification Now we want to use those patterns to classify our images to their correct label. This part of the network does exactly that job, it uses the inputs from the previous layers to find the best class to your matched patterns in the new image. Definition So now we can define Transfer Learning in our context as utilizing the feature learning layers of a trained CNN to classify a different problem than the one it was created for. In other words, we use the patterns that the NN found to be useful to classify images of a given problem to classify a completely different problem without retraining that part of the network. Now I am going to demonstrate how you can do that with Keras, and prove that for a lot of cases this gives better results than training a new network. I will use for this demonstration a famous NN called VGG16. This is its architecture: This network was trained on the ImageNet dataset, containing more than 14 million high-resolution images belonging to 1000 different labels. If you want to dig deeper into this specific model you can study this paper. For this demonstration, I will use the tf_flowers dataset. Just as a reminder: The VGG16 network was not trained to classify different kinds of flowers. This is what the data looks like: Finally... First, we have to load the dataset from TensorFlow: Now we can load the VGG16 model. We use Include_top=False to remove the classification layer that was trained on the ImageNet dataset and set the model as not trainable. Also, we used the preprocess_input function from VGG16 to normalize the input data. We can run this code to check the model summary. base_model.summary() Two main points: the model has over 14 Million trained parameters and ends with a maxpooling layer that belongs to the Feature Learning part of the network. Now we add the last layers for our specific problem. And compile and fit the model. Evaluating this model on the test set we got a 96% Accuracy! That’s it! It is this simple. And it is kind of beautiful right? How we can find some patterns in the world that can be used to identify completely different things. 
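The code gists referenced in the walkthrough above are not reproduced in this text, so here is a compact sketch of the same recipe: load tf_flowers via tensorflow_datasets, freeze a VGG16 base created with include_top=False (note the lowercase parameter name), and train a small classification head. The image size, batch size and head architecture are reasonable assumptions, not necessarily the author's exact choices.

import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input

IMG_SIZE = 224          # assumption: VGG16's native input resolution
NUM_CLASSES = 5         # tf_flowers has 5 flower categories

def prepare(image, label):
    image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE))
    return preprocess_input(image), label

(train_ds, test_ds), _ = tfds.load("tf_flowers",
                                   split=["train[:80%]", "train[80%:]"],
                                   as_supervised=True, with_info=True)
train_ds = train_ds.map(prepare).batch(32).prefetch(tf.data.AUTOTUNE)
test_ds = test_ds.map(prepare).batch(32).prefetch(tf.data.AUTOTUNE)

# Feature-learning layers only (include_top=False), with frozen weights
base_model = VGG16(weights="imagenet", include_top=False,
                   input_shape=(IMG_SIZE, IMG_SIZE, 3))
base_model.trainable = False

model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5, validation_data=test_ds)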
If you want to check out the complete code and a Jupyter notebook, here’s the GitHub repo: github.com To confirm that this approach is better in terms of both computational cost and accuracy, I created a simple hand-made model for this problem. This is the code: I used the same final layers and fit parameters to be able to compare the impact of the convolutions. The accuracy of the hand-made model was 83%, much worse than the 96% that we got from the VGG16 model.
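The baseline model's gist is likewise not included in this text, so the snippet below is a plausible reconstruction of a small CNN trained from scratch for comparison; the number of convolution blocks and the filter sizes are assumptions rather than the author's exact architecture.

from tensorflow import keras
from tensorflow.keras import layers

# A small CNN trained from scratch, used only as a baseline for comparison
baseline = keras.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(5, activation="softmax"),   # 5 flower classes in tf_flowers
])
baseline.compile(optimizer="adam",
                 loss="sparse_categorical_crossentropy",
                 metrics=["accuracy"])
# baseline.fit(train_ds, epochs=5, validation_data=test_ds)  # same data pipeline as in the sketch above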
[ { "code": null, "e": 345, "s": 172, "text": "The main goal of this article is to demonstrate with code and examples how can you use an already trained CNN (convolutional neural network) to solve your specific problem." }, { "code": null, "e": 526, "s": 345, "text": "Convolutional Networks are great for image problems however, they are computationally expensive if you use a big architecture and don’t have a GPU. For that, we have two solutions:" }, { "code": null, "e": 764, "s": 526, "text": "GPUs are much more efficient to train NNs but they are not that common on regular computers. So that is where google colab come to save us. They offer virtual machines with GPUs up to 16 GB of RAM and the best part of it all: It is Free." }, { "code": null, "e": 953, "s": 764, "text": "But even with those upgraded specs, you can still struggle when training a brand new CNN. That’s where Transfer Learning can help you achieve great results with less expensive computation." }, { "code": null, "e": 983, "s": 953, "text": "So what is transfer learning?" }, { "code": null, "e": 1064, "s": 983, "text": "To better explain that we must first understand the basic architecture of a CNN." }, { "code": null, "e": 1143, "s": 1064, "text": "A CNN can be divided into two main parts: Feature learning and classification." }, { "code": null, "e": 1160, "s": 1143, "text": "Feature Learning" }, { "code": null, "e": 1426, "s": 1160, "text": "In this part, the main goal of the NN is to find patterns in the pixels of the images that can be useful to identify the targets of the classification. That happens in the convolution layers of the network that specializes in those patterns for the problem at hand." }, { "code": null, "e": 1572, "s": 1426, "text": "I’m not going deep into how this works underneath the hood, but if you want to dig deeper I highly recommend this article and this amazing video." }, { "code": null, "e": 1587, "s": 1572, "text": "Classification" }, { "code": null, "e": 1826, "s": 1587, "text": "Now we want to use those patterns to classify our images to their correct label. This part of the network does exactly that job, it uses the inputs from the previous layers to find the best class to your matched patterns in the new image." }, { "code": null, "e": 1837, "s": 1826, "text": "Definition" }, { "code": null, "e": 2014, "s": 1837, "text": "So now we can define Transfer Learning in our context as utilizing the feature learning layers of a trained CNN to classify a different problem than the one it was created for." }, { "code": null, "e": 2207, "s": 2014, "text": "In other words, we use the patterns that the NN found to be useful to classify images of a given problem to classify a completely different problem without retraining that part of the network." }, { "code": null, "e": 2358, "s": 2207, "text": "Now I am going to demonstrate how you can do that with Keras, and prove that for a lot of cases this gives better results than training a new network." }, { "code": null, "e": 2444, "s": 2358, "text": "I will use for this demonstration a famous NN called VGG16. This is its architecture:" }, { "code": null, "e": 2585, "s": 2444, "text": "This network was trained on the ImageNet dataset, containing more than 14 million high-resolution images belonging to 1000 different labels." }, { "code": null, "e": 2662, "s": 2585, "text": "If you want to dig deeper into this specific model you can study this paper." 
}, { "code": null, "e": 2815, "s": 2662, "text": "For this demonstration, I will use the tf_flowers dataset. Just as a reminder: The VGG16 network was not trained to classify different kinds of flowers." }, { "code": null, "e": 2849, "s": 2815, "text": "This is what the data looks like:" }, { "code": null, "e": 2860, "s": 2849, "text": "Finally..." }, { "code": null, "e": 2912, "s": 2860, "text": "First, we have to load the dataset from TensorFlow:" }, { "code": null, "e": 2945, "s": 2912, "text": "Now we can load the VGG16 model." }, { "code": null, "e": 3166, "s": 2945, "text": "We use Include_top=False to remove the classification layer that was trained on the ImageNet dataset and set the model as not trainable. Also, we used the preprocess_input function from VGG16 to normalize the input data." }, { "code": null, "e": 3215, "s": 3166, "text": "We can run this code to check the model summary." }, { "code": null, "e": 3236, "s": 3215, "text": "base_model.summary()" }, { "code": null, "e": 3393, "s": 3236, "text": "Two main points: the model has over 14 Million trained parameters and ends with a maxpooling layer that belongs to the Feature Learning part of the network." }, { "code": null, "e": 3446, "s": 3393, "text": "Now we add the last layers for our specific problem." }, { "code": null, "e": 3477, "s": 3446, "text": "And compile and fit the model." }, { "code": null, "e": 3538, "s": 3477, "text": "Evaluating this model on the test set we got a 96% Accuracy!" }, { "code": null, "e": 3549, "s": 3538, "text": "That’s it!" }, { "code": null, "e": 3603, "s": 3549, "text": "It is this simple. And it is kind of beautiful right?" }, { "code": null, "e": 3704, "s": 3603, "text": "How we can find some patterns in the world that can be used to identify completely different things." }, { "code": null, "e": 3794, "s": 3704, "text": "If you want to check out the complete code and a jupyter notebook, here’s the GitHubrepo:" }, { "code": null, "e": 3805, "s": 3794, "text": "github.com" }, { "code": null, "e": 3948, "s": 3805, "text": "To be sure that this approach can be better in both computational resources and precision I created a hand-made simple model for this problem." }, { "code": null, "e": 3966, "s": 3948, "text": "This is the code:" }, { "code": null, "e": 4068, "s": 3966, "text": "I used the same final layers and fit parameters to be able to compare the impact of the convolutions." } ]
How to hide a navigation menu on scroll down with CSS and JavaScript?
Following is the code for hiding navigation menu when scrolling using CSS and JavaScript − Live Demo <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Document</title> <style> body{ margin:0px; margin-top:60px; padding: 0px; } nav{ position: fixed; top: 0px; width: 100%; background-color: rgb(39, 39, 39); overflow: auto; height: auto; transition: 0.5s; } .links { display: inline-block; text-align: center; padding: 14px; color: rgb(178, 137, 253); text-decoration: none; font-size: 17px; } .links:hover { background-color: rgb(100, 100, 100); } .selected{ background-color: rgb(0, 18, 43); } .sample-content{ height: 150vh; } </style> </head> <body> <nav> <a class="links selected" href="#">Home</a> <a class="links" href="#"> Login</a> <a class="links" href="#"> Register</a> <a class="links" href="#"> Contact Us</a> <a class="links" href="#">More Info</a> </nav> <div class="sample-content"> <h1>Here are some headers</h1> <h2>Here are some headers</h2> <h3>Here are some headers</h3> <h4>Here are some headers</h4> <h1>Here are some headers</h1> <h2>Here are some headers</h2> <h3>Here are some headers</h3> <h4>Here are some headers</h4> </div> <script> window.onscroll = scrollShowNav; function scrollShowNav() { if (document.body.scrollTop > 20 || document.documentElement.scrollTop > 20) { document.getElementsByTagName("nav")[0].style.top = "-50px"; } else { document.getElementsByTagName("nav")[0].style.top = "0px"; } } </script> </body> </html> The above code will produce the following output − On scrolling down the navbar will disappear as follows −
[ { "code": null, "e": 1153, "s": 1062, "text": "Following is the code for hiding navigation menu when scrolling using CSS and JavaScript −" }, { "code": null, "e": 1164, "s": 1153, "text": " Live Demo" }, { "code": null, "e": 2695, "s": 1164, "text": "<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n<meta charset=\"UTF-8\">\n<meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">\n<title>Document</title>\n<style>\nbody{\n margin:0px;\n margin-top:60px;\n padding: 0px;\n}\nnav{\n position: fixed;\n top: 0px;\n width: 100%;\n background-color: rgb(39, 39, 39);\n overflow: auto;\n height: auto;\n transition: 0.5s;\n}\n.links {\n display: inline-block;\n text-align: center;\n padding: 14px;\n color: rgb(178, 137, 253);\n text-decoration: none;\n font-size: 17px;\n}\n.links:hover {\n background-color: rgb(100, 100, 100);\n}\n.selected{\n background-color: rgb(0, 18, 43);\n}\n.sample-content{\n height: 150vh;\n}\n</style>\n</head>\n<body>\n<nav>\n<a class=\"links selected\" href=\"#\">Home</a>\n<a class=\"links\" href=\"#\"> Login</a>\n<a class=\"links\" href=\"#\"> Register</a>\n<a class=\"links\" href=\"#\"> Contact Us</a>\n<a class=\"links\" href=\"#\">More Info</a>\n</nav>\n<div class=\"sample-content\">\n<h1>Here are some headers</h1>\n<h2>Here are some headers</h2>\n<h3>Here are some headers</h3>\n<h4>Here are some headers</h4>\n<h1>Here are some headers</h1>\n<h2>Here are some headers</h2>\n<h3>Here are some headers</h3>\n<h4>Here are some headers</h4>\n</div>\n<script>\nwindow.onscroll = scrollShowNav;\nfunction scrollShowNav() {\n if (document.body.scrollTop > 20 || document.documentElement.scrollTop > 20) {\n document.getElementsByTagName(\"nav\")[0].style.top = \"-50px\";\n } else {\n document.getElementsByTagName(\"nav\")[0].style.top = \"0px\";\n }\n}\n</script>\n</body>\n</html>" }, { "code": null, "e": 2746, "s": 2695, "text": "The above code will produce the following output −" }, { "code": null, "e": 2803, "s": 2746, "text": "On scrolling down the navbar will disappear as follows −" } ]
Split a column after hyphen in MySQL and display the remaining value?
To split a column after hyphen, use the SUBSTRING_INDEX() method − select substring_index(yourColumnName,'-',-1) AS anyAliasName from yourTableName; Let us first create a table − mysql> create table DemoTable -> ( -> StreetName text -> ); Query OK, 0 rows affected (0.60 sec) Insert some records in the table using insert command − mysql> insert into DemoTable values('Paris Hill St.-CA-83745646') ; Query OK, 1 row affected (0.32 sec) mysql> insert into DemoTable values('502 South Armstrong Street-9948443'); Query OK, 1 row affected (0.20 sec) Display all records from the table using select statement − mysql> select *from DemoTable; This will produce the following output − +------------------------------------+ | StreetName | +------------------------------------+ | Paris Hill St.-CA-83745646 | | 502 South Armstrong Street-9948443 | +------------------------------------+ 2 rows in set (0.00 sec) Following is the query to split a column after specific characters − mysql> select substring_index(StreetName,'-',-1) AS Split from DemoTable; This will produce the following output − +----------+ | Split | +----------+ | 83745646 | | 9948443 | +----------+ 2 rows in set (0.00 sec)
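As a complementary illustration (not part of the original example), the third argument of SUBSTRING_INDEX() controls which side of the hyphen is kept: a positive count keeps everything before that hyphen counted from the left, while a negative count keeps everything after it counted from the right −
mysql> select substring_index(StreetName,'-',1) AS BeforeFirstHyphen,
   substring_index(StreetName,'-',-1) AS AfterLastHyphen from DemoTable;
With the two rows inserted above, this produces −
+-----------------------------+-----------------+
| BeforeFirstHyphen           | AfterLastHyphen |
+-----------------------------+-----------------+
| Paris Hill St.              | 83745646        |
| 502 South Armstrong Street  | 9948443         |
+-----------------------------+-----------------+
2 rows in set (0.00 sec)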
[ { "code": null, "e": 1129, "s": 1062, "text": "To split a column after hyphen, use the SUBSTRING_INDEX() method −" }, { "code": null, "e": 1211, "s": 1129, "text": "select substring_index(yourColumnName,'-',-1) AS anyAliasName from yourTableName;" }, { "code": null, "e": 1241, "s": 1211, "text": "Let us first create a table −" }, { "code": null, "e": 1347, "s": 1241, "text": "mysql> create table DemoTable\n -> (\n -> StreetName text\n -> );\nQuery OK, 0 rows affected (0.60 sec)" }, { "code": null, "e": 1403, "s": 1347, "text": "Insert some records in the table using insert command −" }, { "code": null, "e": 1619, "s": 1403, "text": "mysql> insert into DemoTable values('Paris Hill St.-CA-83745646') ;\nQuery OK, 1 row affected (0.32 sec)\n\nmysql> insert into DemoTable values('502 South Armstrong Street-9948443');\nQuery OK, 1 row affected (0.20 sec)" }, { "code": null, "e": 1679, "s": 1619, "text": "Display all records from the table using select statement −" }, { "code": null, "e": 1710, "s": 1679, "text": "mysql> select *from DemoTable;" }, { "code": null, "e": 1751, "s": 1710, "text": "This will produce the following output −" }, { "code": null, "e": 2010, "s": 1751, "text": "+------------------------------------+\n| StreetName |\n+------------------------------------+\n| Paris Hill St.-CA-83745646 |\n| 502 South Armstrong Street-9948443 |\n+------------------------------------+\n2 rows in set (0.00 sec)" }, { "code": null, "e": 2079, "s": 2010, "text": "Following is the query to split a column after specific characters −" }, { "code": null, "e": 2153, "s": 2079, "text": "mysql> select substring_index(StreetName,'-',-1) AS Split from DemoTable;" }, { "code": null, "e": 2194, "s": 2153, "text": "This will produce the following output −" }, { "code": null, "e": 2297, "s": 2194, "text": "+----------+\n| Split |\n+----------+\n| 83745646 |\n| 9948443 |\n+----------+\n2 rows in set (0.00 sec)" } ]
How to create date object in Java?
You can create a Date object using the Date() constructor of java.util.Date constructor as shown in the following example. The object created using this constructor represents the current time. Live Demo import java.util.Date; public class CreateDate { public static void main(String args[]) { Date date = new Date(); System.out.print(date); } } Thu Nov 02 15:43:01 IST 2018 Using the SimpleDateFormat class and the parse() method of this you can parse a date string in the required format and create a Date object representing the specified date. Live Demo import java.text.ParseException; import java.text.SimpleDateFormat; import java.util.Date; public class Test { public static void main(String args[]) throws ParseException { String date_string = "26-09-1989"; //Instantiating the SimpleDateFormat class SimpleDateFormat formatter = new SimpleDateFormat("dd-MM-yyyy"); //Parsing the given String to Date object Date date = formatter.parse(date_string); System.out.println("Date value: "+date); } } Date value: Tue Sep 26 00:00:00 IST 1989 A LocalDate object is similar to the date object except it represents the date without time zone, you can use this object instead of Date. The now() method of this class returns a LocalDate object representing the current time The of() method accepts the year, month and day values as parameters an returns the respective LocalDate object. The parse() method accepts a date-string as a parameter and returns the LocalDate object5 representing the given date. Live Demo import java.time.LocalDate; public class Test { public static void main(String args[]) { LocalDate date1 = LocalDate.of(2014, 9, 11); System.out.println(date1); LocalDate date2 = LocalDate.parse("2007-12-03"); System.out.println(date2); LocalDate date3 = LocalDate.now(); System.out.println(date3); } } 2014-09-11 2007-12-03 2020-11-05
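As an additional sketch (not part of the article above), a legacy java.util.Date can be converted into the newer LocalDate when both APIs have to coexist −
import java.time.LocalDate;
import java.time.ZoneId;
import java.util.Date;
public class ConvertDate {
   public static void main(String args[]) {
      Date date = new Date();
      //Converting java.util.Date to java.time.LocalDate via an Instant
      LocalDate localDate = date.toInstant().atZone(ZoneId.systemDefault()).toLocalDate();
      System.out.println("LocalDate value: "+localDate);
   }
}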
[ { "code": null, "e": 1256, "s": 1062, "text": "You can create a Date object using the Date() constructor of java.util.Date constructor as shown in the following example. The object created using this constructor represents the current time." }, { "code": null, "e": 1266, "s": 1256, "text": "Live Demo" }, { "code": null, "e": 1432, "s": 1266, "text": "import java.util.Date;\npublic class CreateDate {\n public static void main(String args[]) { \n Date date = new Date();\n System.out.print(date);\n }\n}" }, { "code": null, "e": 1461, "s": 1432, "text": "Thu Nov 02 15:43:01 IST 2018" }, { "code": null, "e": 1634, "s": 1461, "text": "Using the SimpleDateFormat class and the parse() method of this you can parse a date string in the required format and create a Date object representing the specified date." }, { "code": null, "e": 1644, "s": 1634, "text": "Live Demo" }, { "code": null, "e": 2152, "s": 1644, "text": "import java.text.ParseException;\nimport java.text.SimpleDateFormat;\nimport java.util.Date;\npublic class Test {\n public static void main(String args[]) throws ParseException { \n String date_string = \"26-09-1989\";\n //Instantiating the SimpleDateFormat class\n SimpleDateFormat formatter = new SimpleDateFormat(\"dd-MM-yyyy\"); \n //Parsing the given String to Date object\n Date date = formatter.parse(date_string); \n System.out.println(\"Date value: \"+date);\n }\n}" }, { "code": null, "e": 2193, "s": 2152, "text": "Date value: Tue Sep 26 00:00:00 IST 1989" }, { "code": null, "e": 2332, "s": 2193, "text": "A LocalDate object is similar to the date object except it represents the date without time zone, you can use this object instead of Date." }, { "code": null, "e": 2420, "s": 2332, "text": "The now() method of this class returns a LocalDate object representing the current time" }, { "code": null, "e": 2533, "s": 2420, "text": "The of() method accepts the year, month and day values as parameters an returns the respective LocalDate object." }, { "code": null, "e": 2652, "s": 2533, "text": "The parse() method accepts a date-string as a parameter and returns the LocalDate object5 representing the given date." }, { "code": null, "e": 2662, "s": 2652, "text": "Live Demo" }, { "code": null, "e": 3009, "s": 2662, "text": "import java.time.LocalDate;\npublic class Test {\n public static void main(String args[]) { \n LocalDate date1 = LocalDate.of(2014, 9, 11);\n System.out.println(date1);\n LocalDate date2 = LocalDate.parse(\"2007-12-03\");\n System.out.println(date2);\n LocalDate date3 = LocalDate.now();\n System.out.println(date3);\n }\n}" }, { "code": null, "e": 3042, "s": 3009, "text": "2014-09-11\n2007-12-03\n2020-11-05" } ]
How to add values in columns having same name and merge them in R?
To add values in columns having same name and merge them in R, we can follow the below steps − First of all, create a data frame. Add column values that have same name and merge them by using cbind with do.call. Let's create a data frame as shown below − df<- data.frame(x=rpois(25,1),y=rpois(25,2),x=rpois(25,5),z=rpois(25,2),y=rpois(25,1),z=rpoi s(25,5),check.names=FALSE) df On executing, the above script generates the below output(this output will vary on your system due to randomization) − x y x z y z 1 1 1 7 1 1 1 2 3 2 4 0 3 4 3 1 3 4 2 0 3 4 1 1 9 2 3 3 5 1 2 3 0 3 8 6 1 3 2 1 0 4 7 1 2 5 2 2 4 8 0 1 6 3 2 7 9 1 4 4 2 1 1 10 1 2 3 3 0 3 11 2 2 10 1 0 14 12 2 2 2 2 0 1 13 0 3 5 3 0 6 14 2 1 3 2 0 5 15 0 2 5 6 1 4 16 1 2 3 0 0 5 17 0 2 7 2 2 11 18 3 2 4 1 2 5 19 0 7 5 1 0 10 20 0 3 3 1 0 6 21 1 1 5 7 5 6 22 1 0 5 3 2 5 23 3 3 3 3 0 2 24 2 0 6 4 1 7 25 0 7 1 2 1 4 Use do.call function with cbind function to add column values with colSums function as shown below − df<- data.frame(x=rpois(25,1),y=rpois(25,2),x=rpois(25,5),z=rpois(25,2),y=rpois(25,1),z=rpoi s(25,5),check.names=FALSE) Merged_df<-as.data.frame(do.call(cbind, by(t(df),INDICES=names(df),FUN=colSums))) Merged_df x y z V1 8 2 2 V2 7 5 4 V3 5 3 5 V4 10 4 5 V5 4 5 8 V6 3 3 5 V7 6 4 6 V8 6 3 10 V9 5 5 3 V10 4 2 6 V11 12 2 15 V12 4 2 3 V13 5 3 9 V14 5 1 7 V15 5 3 10 V16 4 2 5 V17 7 4 13 V18 7 4 6 V19 5 7 11 V20 3 3 7 V21 6 6 13 V22 6 2 8 V23 6 3 5 V24 8 1 11 V25 1 8 6
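If the do.call/by idiom above feels opaque, an equivalent formulation (an illustrative alternative, not taken from the original page) sums the same-named columns directly with rowSums, giving one row per observation and one column per unique name; apart from the row labels, the values agree with Merged_df above −
Merged_df2<-as.data.frame(sapply(unique(names(df)),function(nm) rowSums(df[,names(df)==nm,drop=FALSE])))
head(Merged_df2)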
[ { "code": null, "e": 1157, "s": 1062, "text": "To add values in columns having same name and merge them in R, we can follow the\nbelow steps −" }, { "code": null, "e": 1192, "s": 1157, "text": "First of all, create a data frame." }, { "code": null, "e": 1274, "s": 1192, "text": "Add column values that have same name and merge them by using cbind with\ndo.call." }, { "code": null, "e": 1317, "s": 1274, "text": "Let's create a data frame as shown below −" }, { "code": null, "e": 1440, "s": 1317, "text": "df<-\ndata.frame(x=rpois(25,1),y=rpois(25,2),x=rpois(25,5),z=rpois(25,2),y=rpois(25,1),z=rpoi\ns(25,5),check.names=FALSE)\ndf" }, { "code": null, "e": 1559, "s": 1440, "text": "On executing, the above script generates the below output(this output will vary on your system due to randomization) −" }, { "code": null, "e": 1943, "s": 1559, "text": " x y x z y z\n1 1 1 7 1 1 1\n2 3 2 4 0 3 4\n3 1 3 4 2 0 3\n4 1 1 9 2 3 3\n5 1 2 3 0 3 8\n6 1 3 2 1 0 4\n7 1 2 5 2 2 4\n8 0 1 6 3 2 7\n9 1 4 4 2 1 1\n10 1 2 3 3 0 3\n11 2 2 10 1 0 14\n12 2 2 2 2 0 1\n13 0 3 5 3 0 6\n14 2 1 3 2 0 5\n15 0 2 5 6 1 4\n16 1 2 3 0 0 5\n17 0 2 7 2 2 11\n18 3 2 4 1 2 5\n19 0 7 5 1 0 10\n20 0 3 3 1 0 6\n21 1 1 5 7 5 6\n22 1 0 5 3 2 5\n23 3 3 3 3 0 2\n24 2 0 6 4 1 7\n25 0 7 1 2 1 4" }, { "code": null, "e": 2044, "s": 1943, "text": "Use do.call function with cbind function to add column values with colSums function as shown below −" }, { "code": null, "e": 2256, "s": 2044, "text": "df<-\ndata.frame(x=rpois(25,1),y=rpois(25,2),x=rpois(25,5),z=rpois(25,2),y=rpois(25,1),z=rpoi\ns(25,5),check.names=FALSE)\nMerged_df<-as.data.frame(do.call(cbind,\nby(t(df),INDICES=names(df),FUN=colSums)))\nMerged_df" }, { "code": null, "e": 2515, "s": 2256, "text": " x y z\nV1 8 2 2\nV2 7 5 4\nV3 5 3 5\nV4 10 4 5\nV5 4 5 8\nV6 3 3 5\nV7 6 4 6\nV8 6 3 10\nV9 5 5 3\nV10 4 2 6\nV11 12 2 15\nV12 4 2 3\nV13 5 3 9\nV14 5 1 7\nV15 5 3 10\nV16 4 2 5\nV17 7 4 13\nV18 7 4 6\nV19 5 7 11\nV20 3 3 7\nV21 6 6 13\nV22 6 2 8\nV23 6 3 5\nV24 8 1 11\nV25 1 8 6" } ]
Java sql.Time valueOf() method with example
The valueOf() method of the java.sql.Time class accepts a String value representing a time in JDBC escape format and converts the given String value into Time object. Time time = Time.valueOf("time_string"); Let us create a table with name dispatches in MySQL database using CREATE statement as follows − CREATE TABLE dispatches( ProductName VARCHAR(255), CustomerName VARCHAR(255), DispatchDate date, DeliveryTime time, Price INT, Location VARCHAR(255)); Now, we will insert 5 records in dispatches table using INSERT statements − insert into dispatches values('Key-Board', 'Raja', DATE('2019-09-01'), TIME('11:00:00'), 7000, 'Hyderabad'); insert into dispatches values('Earphones', 'Roja', DATE('2019-05-01'), TIME('11:00:00'), 2000, 'Vishakhapatnam'); insert into dispatches values('Mouse', 'Puja', DATE('2019-03-01'), TIME('10:59:59'), 3000, 'Vijayawada'); insert into dispatches values('Mobile', 'Vanaja', DATE('2019-03-01'), TIME('10:10:52'), 9000, 'Chennai'); insert into dispatches values('Headset', 'Jalaja', DATE('2019-04-06'), TIME('11:08:59'), 6000, 'Goa'); Following JDBC program establishes connection with the database and inserts a new record in to the dispatches table. import java.sql.Connection; import java.sql.Date; import java.sql.DriverManager; import java.sql.PreparedStatement; import java.sql.ResultSet; import java.sql.SQLException; import java.sql.Statement; import java.sql.Time; public class Time_valueOf { public static void main(String args[]) throws SQLException { //Registering the Driver DriverManager.registerDriver(new com.mysql.jdbc.Driver()); //Getting the connection String mysqlUrl = "jdbc:mysql://localhost/mydatabase"; Connection con = DriverManager.getConnection(mysqlUrl, "root", "password"); System.out.println("Connection established......"); //Inserting values to a table String query = "INSERT INTO dispatches VALUES (?, ?, ?, ?, ?, ?)"; PreparedStatement pstmt = con.prepareStatement(query); pstmt.setString(1, "Watch"); pstmt.setString(2, "Rajan"); pstmt.setDate(3, new Date(1567315800000L)); Time time = Time.valueOf("10:59:59"); pstmt.setTime(4, time); pstmt.setInt(5, 4000); pstmt.setString(6, "Chennai"); pstmt.execute(); //Retrieving data Statement stmt = con.createStatement(); ResultSet rs = stmt.executeQuery("select * from dispatches"); while(rs.next()) { System.out.print("Name: "+rs.getString("ProductName")+", "); System.out.print("Customer Name: "+rs.getString("CustomerName")+", "); System.out.print("Dispatch Date: "+rs.getDate("DispatchDate")+", "); System.out.print("Delivery Time: "+rs.getTime("DeliveryTime")+", "); System.out.print("Price: "+rs.getInt("Price")+", "); System.out.print("Location: "+rs.getString("Location")); System.out.println(); } } } Here, in this program we are taking the time value in String format and converting it into the java.util.Time object using the valueOf() method. Connection established...... 
Name: Key-Board, Customer Name: Raja, Dispatch Date: 2019-09-01, Delivery Time: 11:00:00, Price: 7000, Location: Hyderabad, Name: Earphones, Customer Name: Roja, Dispatch Date: 2019-05-01, Delivery Time: 11:00:00, Price: 2000, Location: Vishakhapatnam, Name: Mouse, Customer Name: Puja, Dispatch Date: 2019-03-01, Delivery Time: 10:59:59, Price: 3000, Location: Vijayawada, Name: Mobile, Customer Name: Vanaja, Dispatch Date: 2019-03-01, Delivery Time: 10:10:52, Price: 9000, Location: Chennai, Name: Headset, Customer Name: Jalaja, Dispatch Date: 2019-04-06, Delivery Time: 11:08:59, Price: 6000, Location: Goa, Name: Watch, Customer Name: Rajan, Dispatch Date: 2019-09-01, Delivery Time: 10:59:59, Price: 4000, Location: Chennai
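As a small additional sketch (not part of the program above), java.sql.Time also interoperates with java.time.LocalTime from Java 8 onwards, which can be convenient alongside the JDBC API −
import java.sql.Time;
import java.time.LocalTime;
public class TimeConversion {
   public static void main(String args[]) {
      //Parsing the JDBC escape format as before
      Time time = Time.valueOf("10:59:59");
      //Converting to and from java.time.LocalTime
      LocalTime localTime = time.toLocalTime();
      System.out.println("LocalTime value: "+localTime);
      System.out.println("Time value: "+Time.valueOf(localTime));
   }
}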
[ { "code": null, "e": 1229, "s": 1062, "text": "The valueOf() method of the java.sql.Time class accepts a String value representing a time in JDBC escape format and converts the given String value into Time object." }, { "code": null, "e": 1270, "s": 1229, "text": "Time time = Time.valueOf(\"time_string\");" }, { "code": null, "e": 1367, "s": 1270, "text": "Let us create a table with name dispatches in MySQL database using CREATE statement as follows −" }, { "code": null, "e": 1536, "s": 1367, "text": "CREATE TABLE dispatches(\n ProductName VARCHAR(255),\n CustomerName VARCHAR(255),\n DispatchDate date,\n DeliveryTime time,\n Price INT,\n Location VARCHAR(255));" }, { "code": null, "e": 1612, "s": 1536, "text": "Now, we will insert 5 records in dispatches table using INSERT statements −" }, { "code": null, "e": 2150, "s": 1612, "text": "insert into dispatches values('Key-Board', 'Raja', DATE('2019-09-01'), TIME('11:00:00'), 7000, 'Hyderabad');\ninsert into dispatches values('Earphones', 'Roja', DATE('2019-05-01'), TIME('11:00:00'), 2000, 'Vishakhapatnam');\ninsert into dispatches values('Mouse', 'Puja', DATE('2019-03-01'), TIME('10:59:59'), 3000, 'Vijayawada');\ninsert into dispatches values('Mobile', 'Vanaja', DATE('2019-03-01'), TIME('10:10:52'), 9000, 'Chennai');\ninsert into dispatches values('Headset', 'Jalaja', DATE('2019-04-06'), TIME('11:08:59'), 6000, 'Goa');" }, { "code": null, "e": 2267, "s": 2150, "text": "Following JDBC program establishes connection with the database and inserts a new record in to the dispatches table." }, { "code": null, "e": 4005, "s": 2267, "text": "import java.sql.Connection;\nimport java.sql.Date;\nimport java.sql.DriverManager;\nimport java.sql.PreparedStatement;\nimport java.sql.ResultSet;\nimport java.sql.SQLException;\nimport java.sql.Statement;\nimport java.sql.Time;\npublic class Time_valueOf {\n public static void main(String args[]) throws SQLException {\n //Registering the Driver\n DriverManager.registerDriver(new com.mysql.jdbc.Driver());\n //Getting the connection\n String mysqlUrl = \"jdbc:mysql://localhost/mydatabase\";\n Connection con = DriverManager.getConnection(mysqlUrl, \"root\", \"password\");\n System.out.println(\"Connection established......\");\n //Inserting values to a table\n String query = \"INSERT INTO dispatches VALUES (?, ?, ?, ?, ?, ?)\";\n PreparedStatement pstmt = con.prepareStatement(query);\n pstmt.setString(1, \"Watch\");\n pstmt.setString(2, \"Rajan\");\n pstmt.setDate(3, new Date(1567315800000L));\n Time time = Time.valueOf(\"10:59:59\");\n pstmt.setTime(4, time);\n pstmt.setInt(5, 4000);\n pstmt.setString(6, \"Chennai\");\n pstmt.execute();\n //Retrieving data\n Statement stmt = con.createStatement();\n ResultSet rs = stmt.executeQuery(\"select * from dispatches\");\n while(rs.next()) {\n System.out.print(\"Name: \"+rs.getString(\"ProductName\")+\", \");\n System.out.print(\"Customer Name: \"+rs.getString(\"CustomerName\")+\", \");\n System.out.print(\"Dispatch Date: \"+rs.getDate(\"DispatchDate\")+\", \");\n System.out.print(\"Delivery Time: \"+rs.getTime(\"DeliveryTime\")+\", \");\n System.out.print(\"Price: \"+rs.getInt(\"Price\")+\", \");\n System.out.print(\"Location: \"+rs.getString(\"Location\"));\n System.out.println();\n }\n }\n}" }, { "code": null, "e": 4150, "s": 4005, "text": "Here, in this program we are taking the time value in String format and converting it into the java.util.Time object using the valueOf() method." 
}, { "code": null, "e": 4910, "s": 4150, "text": "Connection established......\nName: Key-Board, Customer Name: Raja, Dispatch Date: 2019-09-01, Delivery Time: 11:00:00, Price: 7000, Location: Hyderabad,\nName: Earphones, Customer Name: Roja, Dispatch Date: 2019-05-01, Delivery Time: 11:00:00, Price: 2000, Location: Vishakhapatnam,\nName: Mouse, Customer Name: Puja, Dispatch Date: 2019-03-01, Delivery Time: 10:59:59, Price: 3000, Location: Vijayawada,\nName: Mobile, Customer Name: Vanaja, Dispatch Date: 2019-03-01, Delivery Time: 10:10:52, Price: 9000, Location: Chennai,\nName: Headset, Customer Name: Jalaja, Dispatch Date: 2019-04-06, Delivery Time: 11:08:59, Price: 6000, Location: Goa,\nName: Watch, Customer Name: Rajan, Dispatch Date: 2019-09-01, Delivery Time: 10:59:59, Price: 4000, Location: Chennai" } ]
How to create a Borderless Window in Java?
To create a borderless window in Java, do not decorate the window. The following is an example to create a BorderLess Window − package my; import java.awt.GraphicsEnvironment; import java.awt.GridLayout; import java.awt.Point; import javax.swing.JLabel; import javax.swing.JPasswordField; import javax.swing.JTextField; import javax.swing.JWindow; import javax.swing.SwingConstants; public class SwingDemo { public static void main(String[] args) throws Exception { JWindow frame = new JWindow(); JLabel label1, label2, label3; frame.setLayout(new GridLayout(2, 2)); label1 = new JLabel("Id", SwingConstants.CENTER); label2 = new JLabel("Age", SwingConstants.CENTER); label3 = new JLabel("Password", SwingConstants.CENTER); JTextField emailId = new JTextField(20); JTextField rank = new JTextField(20); JPasswordField passwd = new JPasswordField(); passwd.setEchoChar('*'); frame.add(label1); frame.add(label2); frame.add(label3); frame.add(emailId); frame.add(rank); frame.add(passwd); Point center = GraphicsEnvironment.getLocalGraphicsEnvironment().getCenterPoint(); int width = 500; int height = 200; frame.setBounds(center.x - width / 2, center.y - height / 2, width, height); frame.setVisible(true); } }
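For context, JWindow has no title bar or border by default, which is why the example above appears borderless. If a JFrame is preferred instead, a similar effect can be achieved by switching off its decorations before it becomes displayable (an illustrative variation, not part of the example above) −
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.SwingConstants;
public class UndecoratedFrame {
   public static void main(String[] args) {
      JFrame frame = new JFrame();
      //Decorations must be removed before the frame is made displayable
      frame.setUndecorated(true);
      frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
      frame.add(new JLabel("Borderless JFrame", SwingConstants.CENTER));
      frame.setSize(400, 150);
      frame.setLocationRelativeTo(null);
      frame.setVisible(true);
   }
}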
[ { "code": null, "e": 1189, "s": 1062, "text": "To create a borderless window in Java, do not decorate the window. The following is an example to create a BorderLess Window −" }, { "code": null, "e": 2404, "s": 1189, "text": "package my;\nimport java.awt.GraphicsEnvironment;\nimport java.awt.GridLayout;\nimport java.awt.Point;\nimport javax.swing.JLabel;\nimport javax.swing.JPasswordField;\nimport javax.swing.JTextField;\nimport javax.swing.JWindow;\nimport javax.swing.SwingConstants;\npublic class SwingDemo {\n public static void main(String[] args) throws Exception {\n JWindow frame = new JWindow();\n JLabel label1, label2, label3;\n frame.setLayout(new GridLayout(2, 2));\n label1 = new JLabel(\"Id\", SwingConstants.CENTER);\n label2 = new JLabel(\"Age\", SwingConstants.CENTER);\n label3 = new JLabel(\"Password\", SwingConstants.CENTER);\n JTextField emailId = new JTextField(20);\n JTextField rank = new JTextField(20);\n JPasswordField passwd = new JPasswordField();\n passwd.setEchoChar('*');\n frame.add(label1);\n frame.add(label2);\n frame.add(label3);\n frame.add(emailId);\n frame.add(rank);\n frame.add(passwd);\n Point center = GraphicsEnvironment.getLocalGraphicsEnvironment().getCenterPoint();\n int width = 500;\n int height = 200;\n frame.setBounds(center.x - width / 2, center.y - height / 2, width, height);\n frame.setVisible(true);\n }\n}" } ]
Disable images in Selenium Google ChromeDriver.
We can disable images in Selenium with chromedriver. Images are sometimes disabled so that the page loads faster and execution is quicker. In Chrome, we can do this with the help of the prefs setting.
prefs.put("profile.managed_default_content_settings.images", 2);
Let us make an attempt to disable all images on the below page −
Code Implementation.
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;
public class ChromeDisableImg {
   public static void main(String[] args) throws IOException {
      System.setProperty("webdriver.chrome.driver", "C:\\Users\\ghs6kor\\Desktop\\Java\\chromedriver.exe");
      // browser preference that disables image loading
      Map<String, Object> prefs = new HashMap<String, Object>();
      prefs.put("profile.managed_default_content_settings.images", 2);
      // adding the preference to the browser options
      ChromeOptions op = new ChromeOptions();
      op.setExperimentalOption("prefs", prefs);
      // passing the options to the browser
      WebDriver driver = new ChromeDriver(op);
      driver.get("https://www.tutorialspoint.com/index.htm/");
   }
}
[ { "code": null, "e": 1267, "s": 1062, "text": "We can disable images in Selenium in chromedriver. The images are sometimes disabled so that page load takes less time and execution is quick. In Chrome, we can do this with the help of the prefs setting." }, { "code": null, "e": 1332, "s": 1267, "text": "prefs.put(\"profile.managed_default_content_settings.images\", 2);" }, { "code": null, "e": 1400, "s": 1332, "text": "Let’s us make an attempt to disable all image from the below page −" }, { "code": null, "e": 1421, "s": 1400, "text": "Code Implementation." }, { "code": null, "e": 2276, "s": 1421, "text": "import org.openqa.selenium.By;\nimport org.openqa.selenium.chrome.ChromeDriver;\nimport org.openqa.selenium.chrome.ChromeOptions;\nimport java.util.HashMap;\nimport java.util.Map;\npublic class ChromeDisableImg {\n public static void main(String[] args) throws IOException {\n System.setProperty(\"webdriver.chrome.driver\", \"C:\\\\Users\\\\ghs6kor\\\\Desktop\\\\Java\\\\chromedriver.exe\");\n Map<String, Object> prefs = new HashMap<String, Object>();\n // browser setting to disable image\n prefs.put(\"profile.managed_default_content_settings.images\", 2);\n //adding capabilities to browser\n ChromeOptions op = new ChromeOptions();\n op.setExperimentalOption(\"prefs\", prefs);\n // putting desired capabilities to browser\n WebDriver driver= new ChromeDriver(op);\n driver.get(\"https://www.tutorialspoint.com/index.htm/\");\n }\n}" } ]
How to create a bar graph using ggplot2 without horizontal gridlines and Y-axes labels in R?
A bar graph plotted with the ggplot function of ggplot2 shows horizontal and vertical gridlines by default. If we are interested only in the relative bar heights, we might prefer to remove the horizontal gridlines and the Y-axis labels. In this way, the X-axis still lets us compare the different categories of the variable of interest while the unnecessary information is dropped. This can be done by setting the breaks argument to NULL in the scale_y_discrete function.
Consider the below data frame −
> x<-1:5
> y<-c(20,18,10,15,17)
> df<-data.frame(x,y)
Loading ggplot2 package −
> library(ggplot2)
Creating the plot with all gridlines −
> ggplot(df,aes(x,y))+
+ geom_bar(stat='identity')
Creating the plot without horizontal gridlines −
> ggplot(df,aes(x,y))+
+ geom_bar(stat='identity')+
+ scale_y_discrete(breaks = NULL)
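If the Y-axis labels should be kept and only the horizontal grid lines removed, a theme-based variant can be used instead (an alternative sketch, not from the original example) −
> ggplot(df,aes(x,y))+
+ geom_bar(stat='identity')+
+ theme(panel.grid.major.y = element_blank(), panel.grid.minor.y = element_blank())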
[ { "code": null, "e": 1501, "s": 1062, "text": "A bar graph plotted with ggplot function of ggplot2 shows horizontal and vertical gridlines. If we are interested only in the bar heights then we might prefer to remove the horizontal gridlines. In this way, we can have X-axis that helps us to look at the different categories we have in our variable of interest and get rid of the unnecessary information. This can be done by setting breaks argument to NULL in scale_y_discrete function." }, { "code": null, "e": 1533, "s": 1501, "text": "Consider the below data frame −" }, { "code": null, "e": 1587, "s": 1533, "text": "> x<-1:5\n> y<-c(20,18,10,15,17)\n> df<-data.frame(x,y)" }, { "code": null, "e": 1613, "s": 1587, "text": "Loading ggplot2 package −" }, { "code": null, "e": 1632, "s": 1613, "text": "> library(ggplot2)" }, { "code": null, "e": 1671, "s": 1632, "text": "Creating the plot with all gridlines −" }, { "code": null, "e": 1722, "s": 1671, "text": "> ggplot(df,aes(x,y))+\n+ geom_bar(stat='identity')" }, { "code": null, "e": 1771, "s": 1722, "text": "Creating the plot without horizontal gridlines −" }, { "code": null, "e": 1857, "s": 1771, "text": "> ggplot(df,aes(x,y))+\n+ geom_bar(stat='identity')+\n+ scale_y_discrete(breaks = NULL)" } ]
How to Create a GraphQL API using AWS AppSync | by Janitha Tennakoon | Towards Data Science
Nowadays whenever we talk or think about creating/designing an API what pops to the mind at first is REST. REST(REpresentational State Transfer) has been the go-to standard until recently when developing an API platform. Even though REST became the standard, it did have its own disadvantages. One of the main disadvantages is the inflexibility for the clients who are going to consume them. So even though at the beginning we create our REST API according to client requirements, that API will have very few options when there is a rapid change of requirements in the client. To support these rapid changes clients would have the need to send multiple calls and get multiple unnecessary data along with it. GraphQL was developed mainly focusing on providing this flexibility for the clients. It was started as an internal project inside Facebook but later they made it open source. The main concept is to let the client have the ability to choose what data to be queried and what data needs to be returned without making multiple API calls. Yes as you read with GraphQL, there are no multiple endpoints, but rather only a single endpoint. To illustrate how GraphQL works let’s try to implement a simple GraphQL API using node.js and express framework. The data model for our GraphQL will be a User. Following npm packages will be used in our code. express, express-graphql, graphql A GraphQL API mainly consists of four components. Schema Queries Mutations Resolvers GraphQL schema is the core element where we define the functionalities clients can execute after they are connected to the API. The main building block on schema is type. Above as you can see using building block type we have created three blocks. type Query, type Mutation, and type User. Query and Mutation will be described below. For type User as you can see we have defined the attribute fields that will be available for clients when querying for User. Query type is used to define what type of queries will be available for the clients to be accessed. In REST terms Query type can be mapped to GET requests. In the above schema, we have defined three queries with arguments they accept and what the return value type will be. (! stands for required) Whatever functions which make data change should be done as a Mutation in GraphQL. POST, PUT, and DELETE requests on REST can be mapped as mutations on GraphQL. As we defined queries, mutations are also defined with arguments and return value type. In resolvers, we define the functionality of queries and mutations we defined in the schema. Resolvers map the schema defined methods to our executing functioning methods. In above as you can see, for each query and mutation we defined, we have mapped a Javascript function that will execute the functional logic. (getUser, getUserByName, getUserByStatus and updateUser are Javascript functions) All right, now that we have covered the main concepts in our code, below is the complete code for our simple GraphQL API. As mentioned above you need to install the mentioned npm packages in order to run the application. Now you can start the node.js server and to issue queries to our API we can use GraphiQL tool which will run on http://localhost:4000/graphql Now, as shown above, we can execute queries for our created API. Above we are getting the user with an id of 1, and we request only name and age fields to be returned. Same as queries, we can send mutations to update users as well. Above we have changed the age of the user to 25. 
AWS AppSync is a service provided by Amazon Web Services which simplifies the API application development by letting developers create a secure, flexible GraphQL API on their infrastructure. The benefit of using AWS AppSync is that it also provides additional features like Cognito, IAM permissions, API key, and many other AWS services to integrate with our API. With AppSync main concepts of GraphQL remain mostly the same with one additional type, Subscriptions. Subscriptions are invoked to a mutation done through the API so it can be used to create real-time GrapgQL APIs. Also, we need to talk about two more additional components in AWS AppSync before we start creating our own GraphQL in AWS AppSync. Data Source — Data source can either be persistent storage (relational database or NoSQL database) or a trigger (AWS Lambda functions or another HTTP API) Resolver — Have the same concept as a resolver in GrapQL, but here we map request payload to our data sources or triggers. These resolvers mainly compromise with mapping templates which contains execution logics. Let’s start implementing our API. First go to the AppSync service where you will greeted with the below screen if you have no APIs created already. Click on Create API which will take us to the API creation page. Here AWS will provide us with several options. We can either choose from one of the already created templates or start from scratch. For this article let’s choose Build from scratch so we will be able to learn how everything connects behind the scene. Next, provide a Name and create our API. Then we will be forwarded to a screen where we will have the option of editing our schema and running queries against our API. Since we have nothing on our API yet, let’s first define our schema. In this article, I am going to discuss on two types of data sources that AppSync supports. One is DynamoDB and the other will be AWS Lambda functions. As the above created simple GraphQL API, let’s assume our User data model. Let’s say we need our Users to be saved in a DynamoDB table. So all the queries and mutations done on type User will be directly happening on our DynamoDB table. So first let’s define our User type in the schema. On the schema page click on Create Resource where we will define our User schema. We define the new type of User and then it will ask for the DynamoDB table details. Here we can provide the name of the table and also configure different types of indexes we needed to create as well. And lastly it will show the schema blocks that it has automatically generated for our type User which will be merged to our schema. Click on Create and it will create the DynamoDB table along with resolvers as well. Now let’s look more into our schema. We can see that AppSync has automatically generated query types and mutations for us and it have already mapped these queries and mutations to resolvers as well. Let’s look at one resolver to identify how resolvers work on AppSync. Click on createUser mutation. Here we can see the template that AppSync has used for this mutation. In the request mapping template, we can see that it will take id as the key for our userTable and create a user collection inside the table. $ctx.args.input is the arguments we will pass to our mutation. Response mapping defines the response we will send back to the client. Here it will directly send the output sent from DynamoDb which will be the newly created user. We can test our API using the provided Queries tool. 
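For illustration only (the exact input type and field names are generated by AppSync from the User model and may differ), the operations issued in the Queries tool could look roughly like this −
mutation AddUser {
   createUser(input: { id: 1, name: "John", age: 30 }) {
      id
      name
   }
}

query FetchUser {
   getUser(id: 1) {
      id
      name
      age
   }
}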
Let’s first add a user and try to query the user using the id. The API so far we created can do all CRUD operations directly on our DynamoDB database. But APIs do not only contain CRUD operations. There can be different kinds of functional logics like starting a process by sending messages to a queue or rather than doing CRUD operation on a database we might need to use a different AWS resource like ElasticSearch. To cater to these kinds of scenarios we can use AWS lambda functions as our Data source. For this article let’s assume that rather than taking user data from our DynamoDb we want to take them from an AWS lambda function. For that first, let’s create our lambda function. The query that we are going to map as a resolver is getUser(id: Int!): User Create a new lambda function and add the below code for the function. Here we are checking for the event.field parameter which we will send through the resolver. So if the field parameter is getUsers then we will return the filtered user. Now let’s configure a resolver for this function. Before that, we need to register this function as a data source in our API. For that go to the Data Sources tab and click on Create data source. Next, provide a name for the data source, select DataStorageType as Lambda, and then region and lastly the lambda function we created. The next step is to assign this data source in our schema. For that first delete the resolver already mapped to getUser() when we mapped our schema to DynamoDB table. After deleting click on Attach. Then select the data source as the data source we created for our lambda function. Next will be to add templates for request mappings and response mappings. { "version": "2017-02-28", "operation": "Invoke", "payload": { "field": "getUser", "arguments": $utils.toJson($context.arguments) }} Add the above as a request mapping template. Here we specify the field as getUser which we use in our lambda function as event.field. For the response mapping, we can leave it as it is. Now let’s try to query this from our GraphQL API. That’s it. Even though here the lambda function is used to just return a user I think you might be able to figure out whatever a lambda function can do can be mapped to our GraphQL as well, which indeed creates a serverless API. I think you have learned more about what GraphQL and how to use AWS AppSync to create a GraphQL for us. There are many more concepts that have not been covered in this article. So if you are keen on this make sure to follow official documentation as well as other awesome articles that are available out there. Thank you.
[ { "code": null, "e": 880, "s": 172, "text": "Nowadays whenever we talk or think about creating/designing an API what pops to the mind at first is REST. REST(REpresentational State Transfer) has been the go-to standard until recently when developing an API platform. Even though REST became the standard, it did have its own disadvantages. One of the main disadvantages is the inflexibility for the clients who are going to consume them. So even though at the beginning we create our REST API according to client requirements, that API will have very few options when there is a rapid change of requirements in the client. To support these rapid changes clients would have the need to send multiple calls and get multiple unnecessary data along with it." }, { "code": null, "e": 1472, "s": 880, "text": "GraphQL was developed mainly focusing on providing this flexibility for the clients. It was started as an internal project inside Facebook but later they made it open source. The main concept is to let the client have the ability to choose what data to be queried and what data needs to be returned without making multiple API calls. Yes as you read with GraphQL, there are no multiple endpoints, but rather only a single endpoint. To illustrate how GraphQL works let’s try to implement a simple GraphQL API using node.js and express framework. The data model for our GraphQL will be a User." }, { "code": null, "e": 1521, "s": 1472, "text": "Following npm packages will be used in our code." }, { "code": null, "e": 1555, "s": 1521, "text": "express, express-graphql, graphql" }, { "code": null, "e": 1605, "s": 1555, "text": "A GraphQL API mainly consists of four components." }, { "code": null, "e": 1612, "s": 1605, "text": "Schema" }, { "code": null, "e": 1620, "s": 1612, "text": "Queries" }, { "code": null, "e": 1630, "s": 1620, "text": "Mutations" }, { "code": null, "e": 1640, "s": 1630, "text": "Resolvers" }, { "code": null, "e": 1811, "s": 1640, "text": "GraphQL schema is the core element where we define the functionalities clients can execute after they are connected to the API. The main building block on schema is type." }, { "code": null, "e": 2099, "s": 1811, "text": "Above as you can see using building block type we have created three blocks. type Query, type Mutation, and type User. Query and Mutation will be described below. For type User as you can see we have defined the attribute fields that will be available for clients when querying for User." }, { "code": null, "e": 2397, "s": 2099, "text": "Query type is used to define what type of queries will be available for the clients to be accessed. In REST terms Query type can be mapped to GET requests. In the above schema, we have defined three queries with arguments they accept and what the return value type will be. (! stands for required)" }, { "code": null, "e": 2646, "s": 2397, "text": "Whatever functions which make data change should be done as a Mutation in GraphQL. POST, PUT, and DELETE requests on REST can be mapped as mutations on GraphQL. As we defined queries, mutations are also defined with arguments and return value type." }, { "code": null, "e": 2818, "s": 2646, "text": "In resolvers, we define the functionality of queries and mutations we defined in the schema. Resolvers map the schema defined methods to our executing functioning methods." }, { "code": null, "e": 3042, "s": 2818, "text": "In above as you can see, for each query and mutation we defined, we have mapped a Javascript function that will execute the functional logic. 
(getUser, getUserByName, getUserByStatus and updateUser are Javascript functions)" }, { "code": null, "e": 3263, "s": 3042, "text": "All right, now that we have covered the main concepts in our code, below is the complete code for our simple GraphQL API. As mentioned above you need to install the mentioned npm packages in order to run the application." }, { "code": null, "e": 3405, "s": 3263, "text": "Now you can start the node.js server and to issue queries to our API we can use GraphiQL tool which will run on http://localhost:4000/graphql" }, { "code": null, "e": 3573, "s": 3405, "text": "Now, as shown above, we can execute queries for our created API. Above we are getting the user with an id of 1, and we request only name and age fields to be returned." }, { "code": null, "e": 3686, "s": 3573, "text": "Same as queries, we can send mutations to update users as well. Above we have changed the age of the user to 25." }, { "code": null, "e": 4050, "s": 3686, "text": "AWS AppSync is a service provided by Amazon Web Services which simplifies the API application development by letting developers create a secure, flexible GraphQL API on their infrastructure. The benefit of using AWS AppSync is that it also provides additional features like Cognito, IAM permissions, API key, and many other AWS services to integrate with our API." }, { "code": null, "e": 4396, "s": 4050, "text": "With AppSync main concepts of GraphQL remain mostly the same with one additional type, Subscriptions. Subscriptions are invoked to a mutation done through the API so it can be used to create real-time GrapgQL APIs. Also, we need to talk about two more additional components in AWS AppSync before we start creating our own GraphQL in AWS AppSync." }, { "code": null, "e": 4551, "s": 4396, "text": "Data Source — Data source can either be persistent storage (relational database or NoSQL database) or a trigger (AWS Lambda functions or another HTTP API)" }, { "code": null, "e": 4764, "s": 4551, "text": "Resolver — Have the same concept as a resolver in GrapQL, but here we map request payload to our data sources or triggers. These resolvers mainly compromise with mapping templates which contains execution logics." }, { "code": null, "e": 4912, "s": 4764, "text": "Let’s start implementing our API. First go to the AppSync service where you will greeted with the below screen if you have no APIs created already." }, { "code": null, "e": 5229, "s": 4912, "text": "Click on Create API which will take us to the API creation page. Here AWS will provide us with several options. We can either choose from one of the already created templates or start from scratch. For this article let’s choose Build from scratch so we will be able to learn how everything connects behind the scene." }, { "code": null, "e": 5397, "s": 5229, "text": "Next, provide a Name and create our API. Then we will be forwarded to a screen where we will have the option of editing our schema and running queries against our API." }, { "code": null, "e": 5692, "s": 5397, "text": "Since we have nothing on our API yet, let’s first define our schema. In this article, I am going to discuss on two types of data sources that AppSync supports. One is DynamoDB and the other will be AWS Lambda functions. As the above created simple GraphQL API, let’s assume our User data model." }, { "code": null, "e": 5905, "s": 5692, "text": "Let’s say we need our Users to be saved in a DynamoDB table. 
So all the queries and mutations done on type User will be directly happening on our DynamoDB table. So first let’s define our User type in the schema." }, { "code": null, "e": 5987, "s": 5905, "text": "On the schema page click on Create Resource where we will define our User schema." }, { "code": null, "e": 6404, "s": 5987, "text": "We define the new type of User and then it will ask for the DynamoDB table details. Here we can provide the name of the table and also configure different types of indexes we needed to create as well. And lastly it will show the schema blocks that it has automatically generated for our type User which will be merged to our schema. Click on Create and it will create the DynamoDB table along with resolvers as well." }, { "code": null, "e": 6603, "s": 6404, "text": "Now let’s look more into our schema. We can see that AppSync has automatically generated query types and mutations for us and it have already mapped these queries and mutations to resolvers as well." }, { "code": null, "e": 6673, "s": 6603, "text": "Let’s look at one resolver to identify how resolvers work on AppSync." }, { "code": null, "e": 6977, "s": 6673, "text": "Click on createUser mutation. Here we can see the template that AppSync has used for this mutation. In the request mapping template, we can see that it will take id as the key for our userTable and create a user collection inside the table. $ctx.args.input is the arguments we will pass to our mutation." }, { "code": null, "e": 7143, "s": 6977, "text": "Response mapping defines the response we will send back to the client. Here it will directly send the output sent from DynamoDb which will be the newly created user." }, { "code": null, "e": 7259, "s": 7143, "text": "We can test our API using the provided Queries tool. Let’s first add a user and try to query the user using the id." }, { "code": null, "e": 7703, "s": 7259, "text": "The API so far we created can do all CRUD operations directly on our DynamoDB database. But APIs do not only contain CRUD operations. There can be different kinds of functional logics like starting a process by sending messages to a queue or rather than doing CRUD operation on a database we might need to use a different AWS resource like ElasticSearch. To cater to these kinds of scenarios we can use AWS lambda functions as our Data source." }, { "code": null, "e": 7937, "s": 7703, "text": "For this article let’s assume that rather than taking user data from our DynamoDb we want to take them from an AWS lambda function. For that first, let’s create our lambda function. The query that we are going to map as a resolver is" }, { "code": null, "e": 7961, "s": 7937, "text": "getUser(id: Int!): User" }, { "code": null, "e": 8200, "s": 7961, "text": "Create a new lambda function and add the below code for the function. Here we are checking for the event.field parameter which we will send through the resolver. So if the field parameter is getUsers then we will return the filtered user." }, { "code": null, "e": 8395, "s": 8200, "text": "Now let’s configure a resolver for this function. Before that, we need to register this function as a data source in our API. For that go to the Data Sources tab and click on Create data source." }, { "code": null, "e": 8530, "s": 8395, "text": "Next, provide a name for the data source, select DataStorageType as Lambda, and then region and lastly the lambda function we created." }, { "code": null, "e": 8697, "s": 8530, "text": "The next step is to assign this data source in our schema. 
For that first delete the resolver already mapped to getUser() when we mapped our schema to DynamoDB table." }, { "code": null, "e": 8886, "s": 8697, "text": "After deleting click on Attach. Then select the data source as the data source we created for our lambda function. Next will be to add templates for request mappings and response mappings." }, { "code": null, "e": 9046, "s": 8886, "text": "{ \"version\": \"2017-02-28\", \"operation\": \"Invoke\", \"payload\": { \"field\": \"getUser\", \"arguments\": $utils.toJson($context.arguments) }}" }, { "code": null, "e": 9232, "s": 9046, "text": "Add the above as a request mapping template. Here we specify the field as getUser which we use in our lambda function as event.field. For the response mapping, we can leave it as it is." }, { "code": null, "e": 9282, "s": 9232, "text": "Now let’s try to query this from our GraphQL API." }, { "code": null, "e": 9511, "s": 9282, "text": "That’s it. Even though here the lambda function is used to just return a user I think you might be able to figure out whatever a lambda function can do can be mapped to our GraphQL as well, which indeed creates a serverless API." } ]
Dart Programming - Collection Queue
A Queue is a collection that can be manipulated at both ends. Queues are useful when you want to build a first-in, first-out collection. Simply put, a queue inserts data from one end and deletes from another end. The values are removed / read in the order of their insertion.
Identifier = new Queue()
The add() function can be used to insert values to the queue. This function inserts the value specified at the end of the queue. The following example illustrates the same.
import 'dart:collection';
void main() {
   Queue queue = new Queue();
   print("Default implementation ${queue.runtimeType}");
   queue.add(10);
   queue.add(20);
   queue.add(30);
   queue.add(40);
   for(var no in queue){
      print(no);
   }
}
It should produce the following output −
Default implementation ListQueue
10
20
30
40
The addAll() function enables adding multiple values to a queue, all at once. This function takes an iterable list of values.
import 'dart:collection';
void main() {
   Queue queue = new Queue();
   print("Default implementation ${queue.runtimeType}");
   queue.addAll([10,12,13,14]);
   for(var no in queue){
      print(no);
   }
}
It should produce the following output −
Default implementation ListQueue
10
12
13
14
The addFirst() method adds the specified value to the beginning of the queue. This function is passed an object that represents the value to be added. The addLast() function adds the specified object to the end of the queue.
The following example shows how you can add a value at the beginning of a Queue using the addFirst() method −
import 'dart:collection';
void main() {
   Queue numQ = new Queue();
   numQ.addAll([100,200,300]);
   print("Printing Q.. ${numQ}");
   numQ.addFirst(400);
   print("Printing Q.. ${numQ}");
}
It should produce the following output −
Printing Q.. {100, 200, 300}
Printing Q.. {400, 100, 200, 300}
The following example shows how you can add a value at the end of a Queue using the addLast() method −
import 'dart:collection';
void main() {
   Queue numQ = new Queue();
   numQ.addAll([100,200,300]);
   print("Printing Q.. ${numQ}");
   numQ.addLast(400);
   print("Printing Q.. ${numQ}");
}
It should produce the following output −
Printing Q.. {100, 200, 300}
Printing Q.. {100, 200, 300, 400}
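As a short additional sketch (not on the original page), elements leave the queue in insertion order, which is what makes Queue a first-in, first-out collection −
import 'dart:collection';
void main() {
   Queue numQ = new Queue();
   numQ.addAll([100,200,300]);
   //removeFirst() returns and removes the oldest element
   print(numQ.removeFirst());
   print(numQ.removeFirst());
   print(numQ);
}
It should produce the following output −
100
200
{300}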
[ { "code": null, "e": 2801, "s": 2525, "text": "A Queue is a collection that can be manipulated at both ends. Queues are useful when you want to build a first-in, first-out collection. Simply put, a queue inserts data from one end and deletes from another end. The values are removed / read in the order of their insertion." }, { "code": null, "e": 2827, "s": 2801, "text": "Identifier = new Queue()\n" }, { "code": null, "e": 3001, "s": 2827, "text": "The add() function can be used to insert values to the queue. This function inserts the value specified to the end of the queue. The following example illustrates the same." }, { "code": null, "e": 3270, "s": 3001, "text": "import 'dart:collection'; \nvoid main() { \n Queue queue = new Queue(); \n print(\"Default implementation ${queue.runtimeType}\"); \n queue.add(10); \n queue.add(20); \n queue.add(30); \n queue.add(40); \n \n for(var no in queue){ \n print(no); \n } \n} " }, { "code": null, "e": 3311, "s": 3270, "text": "It should produce the following output −" }, { "code": null, "e": 3361, "s": 3311, "text": "Default implementation ListQueue\n10 \n20 \n30 \n40 \n" }, { "code": null, "e": 3487, "s": 3361, "text": "The addAll() function enables adding multiple values to a queue, all at once. This function takes an iterable list of values." }, { "code": null, "e": 3703, "s": 3487, "text": "import 'dart:collection'; \nvoid main() { \n Queue queue = new Queue(); \n print(\"Default implementation ${queue.runtimeType}\"); \n queue.addAll([10,12,13,14]); \n for(var no in queue){ \n print(no); \n } \n}" }, { "code": null, "e": 3744, "s": 3703, "text": "It should produce the following output −" }, { "code": null, "e": 3795, "s": 3744, "text": "Default implementation ListQueue \n10 \n12 \n13 \n14 \n" }, { "code": null, "e": 4020, "s": 3795, "text": "The addFirst() method adds the specified value to the beginning of the queue. This function is passed an object that represents the value to be added. The addLast() function adds the specified object to the end of the queue." }, { "code": null, "e": 4130, "s": 4020, "text": "The following example shows how you can add a value at the beginning of a Queue using the addFirst() method −" }, { "code": null, "e": 4332, "s": 4130, "text": "import 'dart:collection'; \nvoid main() { \n Queue numQ = new Queue(); \n numQ.addAll([100,200,300]); \n print(\"Printing Q.. ${numQ}\");\n numQ.addFirst(400); \n print(\"Printing Q.. ${numQ}\"); \n} " }, { "code": null, "e": 4373, "s": 4332, "text": "It should produce the following output −" }, { "code": null, "e": 4438, "s": 4373, "text": "Printing Q.. {100, 200, 300} \nPrinting Q.. {400, 100, 200, 300}\n" }, { "code": null, "e": 4547, "s": 4438, "text": "The following example shows how you can add a value at the beginning of a Queue using the addLast() method −" }, { "code": null, "e": 4748, "s": 4547, "text": "import 'dart:collection'; \nvoid main() { \n Queue numQ = new Queue(); \n numQ.addAll([100,200,300]); \n print(\"Printing Q.. ${numQ}\"); \n numQ.addLast(400); \n print(\"Printing Q.. ${numQ}\"); \n} " }, { "code": null, "e": 4789, "s": 4748, "text": "It should produce the following output −" }, { "code": null, "e": 4855, "s": 4789, "text": "Printing Q.. {100, 200, 300} \nPrinting Q.. 
{100, 200, 300, 400} \n" }, { "code": null, "e": 4890, "s": 4855, "text": "\n 44 Lectures \n 4.5 hours \n" }, { "code": null, "e": 4910, "s": 4890, "text": " Sriyank Siddhartha" }, { "code": null, "e": 4943, "s": 4910, "text": "\n 34 Lectures \n 4 hours \n" }, { "code": null, "e": 4963, "s": 4943, "text": " Sriyank Siddhartha" }, { "code": null, "e": 4996, "s": 4963, "text": "\n 69 Lectures \n 4 hours \n" }, { "code": null, "e": 5013, "s": 4996, "text": " Frahaan Hussain" }, { "code": null, "e": 5048, "s": 5013, "text": "\n 117 Lectures \n 10 hours \n" }, { "code": null, "e": 5065, "s": 5048, "text": " Frahaan Hussain" }, { "code": null, "e": 5100, "s": 5065, "text": "\n 22 Lectures \n 1.5 hours \n" }, { "code": null, "e": 5120, "s": 5100, "text": " Pranjal Srivastava" }, { "code": null, "e": 5153, "s": 5120, "text": "\n 34 Lectures \n 3 hours \n" }, { "code": null, "e": 5173, "s": 5153, "text": " Pranjal Srivastava" }, { "code": null, "e": 5180, "s": 5173, "text": " Print" }, { "code": null, "e": 5191, "s": 5180, "text": " Add Notes" } ]
tee command in Linux with examples - GeeksforGeeks
19 Feb, 2021

tee command reads the standard input and writes it to both the standard output and one or more files. The command is named after the T-splitter used in plumbing. It basically breaks the output of a program so that it can be both displayed and saved in a file. It does both tasks simultaneously: it copies the result into the specified files or variables and also displays the result.

SYNTAX:

tee [OPTION]... [FILE]...

Options :

1. -a Option : It does not overwrite the file but appends to the given file.

Suppose we have file1.txt

Input: geek
       for
       geeks

and file2.txt

Input: geeks
       for
       geeks

SYNTAX :

geek@HP:~$ wc -l file1.txt|tee -a file2.txt

OUTPUT :

3 file1.txt

geek@HP:~$cat file2.txt

OUTPUT:

geeks
for
geeks
3 file1.txt

2. --help Option : It gives the help message and exits.

SYNTAX :

geek@HP:~$ tee --help

3. --version Option : It gives the version information and exits.

SYNTAX :

geek@HP:~$ tee --version

Application

Suppose we want to count the number of lines in our file and also want to save the output to a new text file; to do both activities at the same time, we use the tee command.

geek@HP:~$ wc -l file1.txt| tee file2.txt

OUTPUT:

geek@HP:~$15 file1.txt

Here we have file1 with 15 lines, so the output will be 15, and the output will be stored to file2. In order to check the output we use :

geek@HP:~$ cat file2.txt

OUTPUT:

geek@HP:~$15 file1.txt
[ { "code": null, "e": 23962, "s": 23934, "text": "\n19 Feb, 2021" }, { "code": null, "e": 24346, "s": 23962, "text": "tee command reads the standard input and writes it to both the standard output and one or more files. The command is named after the T-splitter used in plumbing. It basically breaks the output of a program so that it can be both displayed and saved in a file. It does both the tasks simultaneously, copies the result into the specified files or variables and also display the result." }, { "code": null, "e": 24354, "s": 24346, "text": "SYNTAX:" }, { "code": null, "e": 24381, "s": 24354, "text": "tee [OPTION]... [FILE]...\n" }, { "code": null, "e": 24498, "s": 24381, "text": "Options :1.-a Option : It basically do not overwrite the file but append to the given file.Suppose we have file1.txt" }, { "code": null, "e": 24535, "s": 24498, "text": "Input: geek\n for\n geeks\n" }, { "code": null, "e": 24549, "s": 24535, "text": "and file2.txt" }, { "code": null, "e": 24584, "s": 24549, "text": "Input:geeks\n for\n geeks\n" }, { "code": null, "e": 24593, "s": 24584, "text": "SYNTAX :" }, { "code": null, "e": 24638, "s": 24593, "text": "geek@HP:~$ wc -l file1.txt|tee -a file2.txt\n" }, { "code": null, "e": 24647, "s": 24638, "text": "OUTPUT :" }, { "code": null, "e": 24660, "s": 24647, "text": "3 file1.txt\n" }, { "code": null, "e": 24745, "s": 24660, "text": "geek@HP:~$cat file2.txt\nOUTPUT:\n geeks\n for\n geeks\n 3 file1.txt\n" }, { "code": null, "e": 24806, "s": 24745, "text": "2.–help Option : It gives the help message and exit.SYNTAX :" }, { "code": null, "e": 24829, "s": 24806, "text": "geek@HP:~$ tee --help\n" }, { "code": null, "e": 24900, "s": 24829, "text": "3.–version Option : It gives the version information and exit.SYNTAX :" }, { "code": null, "e": 24926, "s": 24900, "text": "geek@HP:~$ tee --version\n" }, { "code": null, "e": 24938, "s": 24926, "text": "Application" }, { "code": null, "e": 25105, "s": 24938, "text": "Suppose we want to count number of characters in our file and also want to save the output to new text file so to do both activities at same time, we use tee command." }, { "code": null, "e": 25179, "s": 25105, "text": "geek@HP:~$ wc -l file1.txt| tee file2.txt\nOUTPUT:\ngeek@HP:~$15 file1.txt\n" }, { "code": null, "e": 25321, "s": 25179, "text": "Here we have file1 with 15 characters, so the output will be 15 and the output will be stored to file2. In order to check the output we use :" }, { "code": null, "e": 25378, "s": 25321, "text": "geek@HP:~$ cat file2.txt\nOUTPUT:\ngeek@HP:~$15 file1.txt\n" }, { "code": null, "e": 26207, "s": 25378, "text": "YouTubeGeeksforGeeks500K subscribersLinux Tutorials | Pipe and tee | GeeksforGeeksWatch laterShareCopy linkInfoShoppingTap to unmuteIf playback doesn't begin shortly, try restarting your device.You're signed outVideos you watch may be added to the TV's watch history and influence TV recommendations. To avoid this, cancel and sign in to YouTube on your computer.CancelConfirmMore videosMore videosSwitch cameraShareInclude playlistAn error occurred while retrieving sharing information. 
Please try again later.Watch on0:000:002:11 / 3:37•Live•<div class=\"player-unavailable\"><h1 class=\"message\">An error occurred.</h1><div class=\"submessage\"><a href=\"https://www.youtube.com/watch?v=19mmVar-s5Y\" target=\"_blank\">Try watching this video on www.youtube.com</a>, or enable JavaScript if it is disabled in your browser.</div></div>" }, { "code": null, "e": 26221, "s": 26207, "text": "linux-command" }, { "code": null, "e": 26242, "s": 26221, "text": "Linux-Shell-Commands" }, { "code": null, "e": 26253, "s": 26242, "text": "Linux-Unix" }, { "code": null, "e": 26351, "s": 26253, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 26360, "s": 26351, "text": "Comments" }, { "code": null, "e": 26373, "s": 26360, "text": "Old Comments" }, { "code": null, "e": 26408, "s": 26373, "text": "tar command in Linux with examples" }, { "code": null, "e": 26446, "s": 26408, "text": "UDP Server-Client implementation in C" }, { "code": null, "e": 26484, "s": 26446, "text": "Conditional Statements | Shell Script" }, { "code": null, "e": 26520, "s": 26484, "text": "curl command in Linux with Examples" }, { "code": null, "e": 26556, "s": 26520, "text": "echo command in Linux with Examples" }, { "code": null, "e": 26591, "s": 26556, "text": "Cat command in Linux with examples" }, { "code": null, "e": 26628, "s": 26591, "text": "touch command in Linux with Examples" }, { "code": null, "e": 26672, "s": 26628, "text": "Mutex lock for Linux Thread Synchronization" }, { "code": null, "e": 26708, "s": 26672, "text": "Tail command in Linux with examples" } ]
How can we add multiple sub-panels to the main panel in Java?
A JPanel is a subclass of JComponent class and it is an invisible component in Java. The FlowLayout is a default layout for a JPanel. We can add most of the components like buttons, text fields, labels, tables, lists, trees, etc. to a JPanel. We can also add multiple sub-panels to the main panel using the add() method of Container class. public Component add(Component comp) import java.awt.*; import javax.swing.*; public class MultiPanelTest extends JFrame { private JPanel mainPanel, subPanel1, subPanel2; public MultiPanelTest() { setTitle("MultiPanel Test"); mainPanel = new JPanel(); // main panel mainPanel.setLayout(new GridLayout(3, 1)); mainPanel.add(new JLabel("Main Panel", SwingConstants.CENTER)); mainPanel.setBackground(Color.white); mainPanel.setBorder(BorderFactory.createLineBorder(Color.black, 1)); subPanel1 = new JPanel(); // sub-panel 1 subPanel1.add(new JLabel("Panel One", SwingConstants.CENTER)); subPanel1.setBackground(Color.red); subPanel2 = new JPanel(); // sub-panel 2 subPanel2.setBackground(Color.blue); subPanel2.add(new JLabel("Panel Two", SwingConstants.CENTER)); mainPanel.add(subPanel1); mainPanel.add(subPanel2); add(mainPanel); setSize(400, 300); setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); setLocationRelativeTo(null); setVisible(true); } public static void main(String[] args) { new MultiPanelTest(); } }
[ { "code": null, "e": 1305, "s": 1062, "text": "A JPanel is a subclass of JComponent class and it is an invisible component in Java. The FlowLayout is a default layout for a JPanel. We can add most of the components like buttons, text fields, labels, tables, lists, trees, etc. to a JPanel." }, { "code": null, "e": 1402, "s": 1305, "text": "We can also add multiple sub-panels to the main panel using the add() method of Container class." }, { "code": null, "e": 1439, "s": 1402, "text": "public Component add(Component comp)" }, { "code": null, "e": 2549, "s": 1439, "text": "import java.awt.*;\nimport javax.swing.*;\npublic class MultiPanelTest extends JFrame {\n private JPanel mainPanel, subPanel1, subPanel2;\n public MultiPanelTest() {\n setTitle(\"MultiPanel Test\");\n mainPanel = new JPanel(); // main panel\n mainPanel.setLayout(new GridLayout(3, 1));\n mainPanel.add(new JLabel(\"Main Panel\", SwingConstants.CENTER));\n mainPanel.setBackground(Color.white);\n mainPanel.setBorder(BorderFactory.createLineBorder(Color.black, 1));\n subPanel1 = new JPanel(); // sub-panel 1\n subPanel1.add(new JLabel(\"Panel One\", SwingConstants.CENTER));\n subPanel1.setBackground(Color.red);\n subPanel2 = new JPanel(); // sub-panel 2\n subPanel2.setBackground(Color.blue);\n subPanel2.add(new JLabel(\"Panel Two\", SwingConstants.CENTER));\n mainPanel.add(subPanel1);\n mainPanel.add(subPanel2);\n add(mainPanel);\n setSize(400, 300);\n setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);\n setLocationRelativeTo(null);\n setVisible(true);\n }\n public static void main(String[] args) {\n new MultiPanelTest();\n }\n}" } ]
Dealing with Multiclass Data. Forest Cover Type Prediction | by Amber Teng | Towards Data Science
Have you ever thought about what to do when you encounter a classification problem that consists of over three classes? How did you deal with multiclass data, and how did you evaluate your model? Was overfitting a challenge — and if so, how did you surmount that?

Read on to discover how I worked through these questions on my latest project — analyzing forest cover types from a dataset provided by the US Forest Service, and housed on the UCI Machine Learning Repository.

This dataset is specifically interesting because it consists of a mix of both categorical and continuous variables, which has historically required different techniques of analysis. These variables describe the geology of each sample forest region, and a multiclass label (one of seven possible tree cover types) serves as our target variable.

According to Kaggle, these seven possible cover types are the following:

Spruce/Fir

Lodgepole Pine

Ponderosa Pine

Cottonwood/Willow

Aspen

Douglas-fir

Krummholz

I thought that this was a cool application of what I've learned during my fellowship at SharpestMinds because successful forest cover type classification has so much potential for positive change, particularly in areas like environmental conservation, flora and fauna research, and geological studies.

This Medium blogpost will focus on tree-based methods I explored to analyze this dataset, but you can view my whole repo, which contains other machine learning methods, here: https://github.com/angelaaaateng/Covertype_Analysis

The first step is exploratory data analysis. I loaded the data and read the information into a dataframe using pandas.

Then, I checked the data types to ensure that there were no anomalies in my dataset — and to get a better view of what kind of data I was dealing with. Based on this, we also saw that there were no NaN values, the columns were sensible, and in general, the dataset was pretty clean and didn't need additional cleaning at this stage and for the goals of this project.

We also see that we're dealing with a pretty large dataset, so it's highly likely that we'll need to downsample from the 581,012 entries. We also know that we have 55 columns or features, which will be explored more thoroughly in the next section. Next, let's take a look at the frequency distributions of these forest cover types.

The bulk of the dataset is comprised of covertype 1 and covertype 2, and with a dataset that is this imbalanced, we have to make a choice about the way we downsample or build our toy dataset. The key lies in understanding the answer to this question: will we aim to have equal representation of all covertypes, or will we aim to represent each covertype in proportion to its frequency in the overall dataset?

One important consideration is that downsampling does reduce the number of data points we can use to train our model, so despite this analysis, it's always possible that using all samples will be the wiser choice. This depends highly on business priorities, including the relative cost of mistaking one covertype for another. For example, if we're equally concerned about classification accuracy for each covertype, then downsampling to get an even distribution of covertypes is likely to make the most sense. However, if we only cared about classifying cover type 1 versus the other 6 cover types, then we might use a different sampling method.
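If you want to try the balanced option described above yourself, a minimal pandas sketch looks roughly like the one below. This is not the project's actual code; the file name and the Cover_Type column label are assumptions based on the dataset description.

import pandas as pd

# Hypothetical file name; point this at wherever the UCI Covertype data lives locally
df = pd.read_csv("covtype.csv")

# Use the size of the smallest class as the per-class sample size for a balanced subset
n = df["Cover_Type"].value_counts().min()

balanced_df = (
    df.groupby("Cover_Type", group_keys=False)
      .apply(lambda grp: grp.sample(n=n, random_state=42))
)
print(balanced_df["Cover_Type"].value_counts())

Sampling proportionally instead would just mean replacing the fixed n with a fraction of each group, so the same few lines cover both choices.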
Nonetheless, for this project, I'll start by assuming that we're equally concerned with correctly classifying each covertype, and I'll explore the impact of this decision towards the end of the post.

Looking more deeply into the statistics of each cover type, we see that there are big differences in count across samples, and we also see that there is a wide spread of data for each variable.

Particularly for elevation, we see that the standard deviation among covertypes 1 through 7 ranges from 95 to 196. We also see that for cover type 4, we have a minimum of 2,747 data points, while for cover type 2, we have a high of 283,301, further showing that we have an unbalanced dataset.

We can explore the correlation between and among continuous variables to better understand underlying relationships in the data using a correlation matrix:

Note that I specifically left out the categorical variables in this correlation plot because the correlation matrix displays the Pearson correlation coefficients between different feature pairs, and Pearson correlations don't make sense for non-continuous inputs. Based on the correlation matrices above, it seems like the continuous variables that are linearly related to covertype are slope and elevation, as well as aspect. It also appears that our three hillshade features are highly correlated to one another. This is to be expected, and does seem logical given that they all measure similar things. The correlation between these hillshades suggests that they may contain a fair amount of redundant information — an idea we'll return to when we do feature selection.

Based on the data exploration we showed earlier, we saw that the target variable is multi-class, not binary. To determine what a "good" model looks like, and how we're defining success in this project, I first looked into establishing a baseline model.

However, before we can go about training a multiclass baseline model, we need to sample the data first. Not only will it be time consuming to train a model on all 500,000+ of our entries, it is also very inefficient.

For our particular dataset, we'll begin with a simple randomized sampling method, where we ensure a balanced class distribution, for the reasons outlined at the beginning of this post. This means that we want to take a sample of n entries from each covertype from 1 to 7, and ensure that we have the same number of entries per class.

We'll start with n = 2700, just under the 2,747 data points available for the smallest class, and we'll begin by using a simple decision tree to model our data. Our goal in doing this will be to understand our data better, by studying how our decision tree ends up interrogating our dataset, and to get a lower bound on our classification performance. This step puts us right at the fuzzy boundary between data exploration and predictive modeling, a grey zone that I've found particularly helpful to explore especially with tree-based models.

Let's start by understanding our data a bit better. I started by fitting a very short decision tree to the downsampled dataset (using a max_depth of 3), and visualized the result:

The first thing we'll note from this is the importance of the elevation parameter, which appears to dominate all others in this admittedly simple model. This is consistent with what we found in the correlation matrix earlier, which showed that elevation was the continuous feature that was most correlated with our target variable (Cover_Type).
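If you'd like to reproduce a tree like the one pictured above, a short scikit-learn sketch along these lines is enough. The variable names here are placeholders rather than the notebook's exact code, and balanced_df is assumed to be the class-balanced sample described earlier.

from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, plot_tree
import matplotlib.pyplot as plt

# Split the balanced sample into features and the Cover_Type label
X = balanced_df.drop(columns=["Cover_Type"])
y = balanced_df["Cover_Type"]
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

# A deliberately shallow tree, matching the max_depth of 3 used for the visualization
shallow_tree = DecisionTreeClassifier(max_depth=3, random_state=42)
shallow_tree.fit(X_train, y_train)

plt.figure(figsize=(16, 8))
plot_tree(shallow_tree, feature_names=list(X.columns), filled=True)
plt.show()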
Although this model's max_depth is severely constraining, it still performs much better than random guessing: the predictive accuracy it gets across all 7 classes is X% (training accuracy) and Y% (validation accuracy).

Next, I wanted to see how an unconstrained model (without the max_depth of 3 constraint) would perform, to get a better idea of the full predictive power of a simple decision tree on this dataset. I trained a new tree with the default parameters provided by sklearn (which includes a max_depth of None, meaning that the tree will grow without limit). Here were the results:

We'll be using accuracy score as a measure of performance.

from sklearn import metrics
from sklearn.metrics import classification_report

predictions = dtree.predict(X_test)
print("Decision Tree Train Accuracy:", metrics.accuracy_score(y_train, dtree.predict(X_train)))
print("Decision Tree Test Accuracy:", metrics.accuracy_score(y_test, predictions))
print(classification_report(y_test, predictions))

We can see that there is overfitting here, as the accuracy score of the train data is 1. This is an important result because it is consistent with the general trend of decision trees tending to overfit. For fun, I thought I'd show exactly how this overfitting process plays out by plotting training and validation accuracies as a function of the max_depth parameter:

The above graph shows us that the testing accuracy is less than the training accuracy, which is the definition of overfitting: our model is twisting itself into a shape that perfectly captures the trends in the training set, at the expense of being able to generalize to unseen data.

To wrap up our data exploration-and-initial modeling step, let's take a look at the confusion matrix that results from applying our deep tree to a validation set:

Looking at the confusion matrix above, it seems that most of these errors are coming from classes 1 and 2.

Now that we know what to expect from our problem, it's time to get serious about modeling. Random forests are the model I'll use here, because 1) they're robust, and generalize well; and 2) they're readily interpretable. Random forests are ensembles of decision trees: they consist of a bunch of independent decision trees, each of which is trained using only a subset of the features in our training set to ensure that they're learning to make their predictions in different ways. Their outputs are then pooled together using simple voting.

As always, my first step was to use an out of the box random forest classifier model. This resulted in a big bump in performance: 86% accuracy on the validation set, and 100% accuracy on the training set. In other words, the model is overfitting (or rather, each decision tree in the ensemble is overfitting) but we're nonetheless seeing a big improvement in performance from pooling together a bunch of overfit decision trees.

First, let's do feature selection to identify the most predictive variables that affect the accuracy rate of our random forest ensemble model. People use many different methods for this, but here, we'll focus on permutation feature importance. Permutation feature importance works by selecting a column (i.e. feature) in our validation set, then shuffling it randomly, thereby destroying the correlations between that feature and all the other features used by our model to make its predictions, and finally measuring our model's performance on this freshly shuffled validation set. If the performance drops significantly, that tells us that the feature we shuffled must have been important.
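One way to compute permutation importance with scikit-learn is sketched below. The article doesn't say which implementation was actually used, so treat the function choice and the rf, X_test and y_test names as illustrative assumptions rather than the original code.

from sklearn.inspection import permutation_importance

# rf is the fitted random forest; X_test and y_test are the held-out validation data
result = permutation_importance(rf, X_test, y_test, n_repeats=5, random_state=42, n_jobs=-1)

# Rank features by the average drop in score caused by shuffling each one
for idx in result.importances_mean.argsort()[::-1]:
    print(f"{X_test.columns[idx]:<35} "
          f"{result.importances_mean[idx]:.4f} +/- {result.importances_std[idx]:.4f}")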
Using permutation importance, we see the following results:

Based on this, it really seems like only a small number of our features dominate the others. In fact, when I tried keeping only the top 13 features indicated here, I ended up sacrificing only a negligible amount of accuracy, obtaining once again 86% accuracy on the validation set.

One key to hyperparameter tuning in random forests is that in general, the model's performance only increases with the number of decision trees that we add to our ensemble. As a result, it's actually the last parameter we'll tune, after we've finished tuning all the other relevant parameters (like max_depth, min_samples_leaf and min_samples_split) using GridSearchCV.

We get the final set of best estimators, and, given these best parameters, we then apply these to our model and compare the results:

n_best_grid = n_grid_search.best_estimator_

n_optimal_param_grid = {
    'bootstrap': [True],
    'max_depth': [20],  # setting this so as not to create a tree that's too big
    #'max_features': [2, 3, 4, 10],
    'min_samples_leaf': [1],
    'min_samples_split': [2],
    'n_estimators': [300]
}

nn_grid_search = GridSearchCV(estimator = n_rfc_gs, param_grid = n_optimal_param_grid, cv = 3, n_jobs = -1, verbose = 2)
nn_grid_search.fit(X_train, y_train)
nn_rfc_pred_gs = nn_grid_search.predict(X_test)
nn_y_pred_gs = nn_grid_search.predict(X_test)
print("Random Forest Train Accuracy Baseline After Grid Search and N-estimators Search:", metrics.accuracy_score(y_train, nn_grid_search.predict(X_train)))
print("Random Forest Test Accuracy Baseline After Grid Search and N-estimators Search:", metrics.accuracy_score(y_test, nn_grid_search.predict(X_test)))

Our model's performance has improved modestly, from 0.860 to 0.863. This isn't entirely surprising — hyperparameter tuning doesn't always have a huge impact on performance.

Finally, we'll take a look at the num_estimators parameter, taking care to plot the training and validation set accuracies together. As expected, both increase approximately monotonically with num_estimators, until reaching a plateau:

We can then compare our current models and their performance based on accuracy, and compare it to the original out of the box models.

One important point to discuss here is how in the initial run (not shown in this blogpost but in the notebook here), our testing accuracy went from 86% without parameter search to 84% after using GridSearchCV. We can see that the tuned model performs no better than the RF trained on default values. That's because of the effect of variance on our dataset — GridSearch struggles to differentiate between the best hyperparameters and the worse ones, so it might save time and computing power to go with the out of the box random forest model.

Some important possibilities to think about in relation to this point are the following:

We are using GridSearch with cv = 3 (3-fold cross-validation). That means that for every hyperparameter combination, the model gets trained on only 2/3 of the data in the training set (because 1/3 is kept for validation). So based on this, we'd expect GridSearch to produce a more pessimistic result. One way to avoid this is to increase the value of cv so that you have smaller validation sets and larger training sets.

There's always noise (or "variance") to consider: when we train our model on a different dataset, it will in general give different results.
If the variance of our model is high enough (meaning that its performance depends heavily on the particular points that it’s trained on) then it’s possible that GridSearch actually can’t differentiate between the “best” hyperparameters and the worst ones. This is the most likely explanation for our current results. This means that we might as well just go with the default values. Another area we can more deeply explore is max_features. So, how do we do better? Based on our confusion matrices, it appears that classes 1 and 2 are where most of the error is coming from. In my next project, we will explore model stacking to see if we can gain any accuracy here. For any questions or comments, feel free to reach out to me on Twitter @ambervteng Thanks for reading, and see you next time! Github Repo: https://github.com/angelaaaateng/Covertype_Analysis Heroku Web App: https://covertype.herokuapp.com/ Jupyter Notebook: https://github.com/angelaaaateng/Projects/blob/master/Covertype_Prediction/Scripts/Tree-Based%20and%20Bagging%20Methods.ipynb [1] DataCamp Decision Tree Tutorial https://www.datacamp.com/community/tutorials/decision-tree-classification-python [2] Seif, G. Decision Trees for ML https://towardsdatascience.com/a-guide-to-decision-trees-for-machine-learning-and-data-science-fe2607241956 [3] Matplotlib Documentation https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.xticks.html
[ { "code": null, "e": 436, "s": 172, "text": "Have you ever thought about what to do when you encounter a classification problem that consists of over three classes? How did you deal with multiclass data, and how did you evaluate your model? Was overfitting a challenge — and if so, how did you surmount that?" }, { "code": null, "e": 646, "s": 436, "text": "Read on to discover how I worked through these questions on my latest project — analyzing forest cover types from a dataset provided by the US Forest Service, and housed on the UCI Machine Learning Repository." }, { "code": null, "e": 990, "s": 646, "text": "This dataset is specifically interesting because it consists of a mix of both categorical and continuous variables, which has historically required different techniques of analysis. These variables describe the geology of each sample forest region, and a multiclass label (one of seven possible tree cover types) serves as our target variable." }, { "code": null, "e": 1063, "s": 990, "text": "According to Kaggle, these seven possible cover types are the following:" }, { "code": null, "e": 1144, "s": 1063, "text": "Spruce/FirLodgepole PinePonderosa PineCottonwood/WillowAspenDouglas-firKrummholz" }, { "code": null, "e": 1155, "s": 1144, "text": "Spruce/Fir" }, { "code": null, "e": 1170, "s": 1155, "text": "Lodgepole Pine" }, { "code": null, "e": 1185, "s": 1170, "text": "Ponderosa Pine" }, { "code": null, "e": 1203, "s": 1185, "text": "Cottonwood/Willow" }, { "code": null, "e": 1209, "s": 1203, "text": "Aspen" }, { "code": null, "e": 1221, "s": 1209, "text": "Douglas-fir" }, { "code": null, "e": 1231, "s": 1221, "text": "Krummholz" }, { "code": null, "e": 1533, "s": 1231, "text": "I thought that this was a cool application of what I’ve learned during my fellowship at SharpestMinds because successful forest cover type classification has so much potential for positive change, particularly in areas like environmental conservation, flora and fauna research, and geological studies." }, { "code": null, "e": 1760, "s": 1533, "text": "This Medium blogpost will focus on tree-based methods I explored to analyze this dataset, but you can view my whole repo, which contains other machine learning methods, here: https://github.com/angelaaaateng/Covertype_Analysis" }, { "code": null, "e": 1879, "s": 1760, "text": "The first step is exploratory data analysis. I loaded the data and read the information into a dataframe using pandas." }, { "code": null, "e": 2246, "s": 1879, "text": "Then, I checked the data types to ensure that there were no anomalies in my dataset — and to get a better view of what kind of data I was dealing with. Based on this, we also saw that there were no NaN values, the columns were sensible, and in general, the dataset was pretty clean and didn’t need additional cleaning at this stage and for the goals of this project." }, { "code": null, "e": 2578, "s": 2246, "text": "We also see that we’re dealing with a pretty large dataset, so it’s highly likely that we’ll need to downsample from the 581,012 entries. We also know that we have 55 columns or features, which will be explored more thoroughly in the next section. Next, let’s take a look at the frequency distributions of these forest cover types." }, { "code": null, "e": 2987, "s": 2578, "text": "The bulk of the dataset is comprised of covertype 1 and covertype 2, and with a dataset that is this imbalanced, we have to make a choice about the way we downsample or build our toy dataset. 
The key lies in understanding the answer to this question: will we aim to have equal representation of all covertypes, or will we aim to represent each covertype in proportion to its frequency in the overall dataset?" }, { "code": null, "e": 3633, "s": 2987, "text": "One important consideration is that downsampling does reduce the number of data points we can use to train our model, so despite this analysis, it’s always possible that using all samples will be the wiser choice. This depends highly on business priorities, including the relative cost of mistaking one covertype for another. For example, if we’re equally concerned about classification accuracy for each covertype, then downsampling to get an even distribution of covertypes is likely to make the most sense. However, if we only cared about classifying cover type 1 versus the other 6 cover types, then we might use a different sampling method." }, { "code": null, "e": 3833, "s": 3633, "text": "Nonetheless, for this project, I’ll start by assuming that we’re equally concerned with correctly classifying each covertype, and I’ll explore the impact of this decision towards the end of the post." }, { "code": null, "e": 4027, "s": 3833, "text": "Looking more deeply into the statistics of each cover type, we see that there are big differences in count across samples, and we also see that there is a wide spread of data for each variable." }, { "code": null, "e": 4320, "s": 4027, "text": "Particularly for elevation, we see that the standard deviation among covertypes 1 through 7 ranges from 95 to 196. We also see that for cover type 4, we have a minimum of 2,747 data points, while for cover type 2, we have a high of 28,8301, further showing that we have an unbalanced dataset." }, { "code": null, "e": 4476, "s": 4320, "text": "We can explore the correlation between and among continuous variables to better understand underlying relationships in the data using a correlation matrix:" }, { "code": null, "e": 5248, "s": 4476, "text": "Note that I specifically left out the categorical variables in this correlation plot because the correlation matrix displays the Pearson correlation coefficients between different feature pairs, and Pearson correlations don’t make sense for non-continuous inputs. Based on the correlation matrices above, it seems like the continuous variables that are linearly related to covertype are slope and elevation, as well as aspect. It also appears that our three hillshade features are highly correlated to one another. This is to be expected, and does seem logical given that they all measure similar things. The correlation between these hillshades suggests that they may contain a fair amount of redundant information — an idea we’ll return to when we do feature selection." }, { "code": null, "e": 5500, "s": 5248, "text": "Based on the data exploration we showed earlier, we saw that the target variable is multi-class and binary. To determine what a “good” model looks like, and how we’re defining success in this project, I first looked into establishing a baseline model." }, { "code": null, "e": 5717, "s": 5500, "text": "However, before we can go about training a multiclass baseline model, we need to sample the data first. Not only will it be time consuming to train a model on all 500,000+ of our entries, it is also very inefficient." 
}, { "code": null, "e": 6051, "s": 5717, "text": "For our particular dataset, we’ll begin with a simple randomized sampling method, where we ensure a balanced class distribution, for the reasons outlined at the beginning of this post. This means that we want to take a sample of n entries from each covertype from 1 to 7, and ensure that we have the same number of entries per class." }, { "code": null, "e": 6594, "s": 6051, "text": "We’ll start with n = 2700, which is the minimum number of data points for the smallest class, and we’ll begin by using a simple decision tree to model our data. Our goal in doing this will be to understand our data better, by studying how our decision tree ends up interrogating our dataset, and to get a lower bound on our classification performance. This step puts us right at the fuzzy boundary between data exploration and predictive modeling, a grey zone that I’ve found particularly helpful to explore especially with tree-based models." }, { "code": null, "e": 6774, "s": 6594, "text": "Let’s start by understanding our data a bit better. I started by fitting a very short decision tree to the downsampled dataset (using a max_depth of 3), and visualized the result:" }, { "code": null, "e": 7119, "s": 6774, "text": "The first thing we’ll note from this is the importance of the elevation parameter, which appears to dominate all others in this admittedly simple model. This is consistent with what we found in the correlation matrix earlier, which showed that elevation was the continuous feature that was most correlated with our target variable (Cover_Type)." }, { "code": null, "e": 7338, "s": 7119, "text": "Although this model’s max_depth is severely constraining, it still performs much better than random guessing: the predictive accuracy it gets across all 7 classes is X% (training accuracy) and Y% (validation accuracy)." }, { "code": null, "e": 7699, "s": 7338, "text": "Next, I wanted to see how an unconstrained model (without a max_depth of 3) would perform, to get a better idea of the full predictive power of a simple decision tree on this dataset. I trained a new tree with the default parameters provided by sklearn (which includes a max_depth of None, meaning that the tree will grow without limit). Here were the results:" }, { "code": null, "e": 7758, "s": 7699, "text": "We’ll be using accuracy score as a measure of performance." }, { "code": null, "e": 8076, "s": 7758, "text": "predictions = dtree.predict(X_test)print (\"Decision Tree Train Accuracy:\", metrics.accuracy_score(y_train, dtree.predict(X_train)))print (\"Decision Tree Test Accuracy:\", metrics.accuracy_score(y_test, dtree.predict(X_test)))from sklearn.metrics import classification_reportprint(classification_report(y_test, y_pred))" }, { "code": null, "e": 8443, "s": 8076, "text": "We can see that there is overfitting here, as the accuracy score of the train data is 1. This is an important result because it is consistent with the general trend of decision trees tending to overfit. For fun, I thought I’d show exactly how this overfitting process plays out by plotting training and validation accuracies as a function of the max_depth parameter:" }, { "code": null, "e": 8727, "s": 8443, "text": "The above graph shows us that the testing accuracy is less than the training accuracy, which is the definition of overfitting: our model is twisting itself into a shape that perfectly captures the trends in the training set, at the expense of being able to generalize to unseen data." 
}, { "code": null, "e": 8890, "s": 8727, "text": "To wrap up our data exploration-and-initial modeling step, let’s take a look at the confusion matrix that results from applying our deep tree to a validation set:" }, { "code": null, "e": 8997, "s": 8890, "text": "Looking at the confusion matrix above, it seems that most of these errors are coming from classes 1 and 2." }, { "code": null, "e": 9539, "s": 8997, "text": "Now that we know what to expect from our problem, it’s time to get serious about modeling. Random forests are the model I’ll use here, because 1) they’re robust, and generalize well; and 2) they’re readily interpretable. Random forests are ensembles of decision trees: they consist of a bunch of independent decision trees, each of which is trained using only a subset of the features in our training set to ensure that they’re learning to make their predictions in different ways. Their outputs are then pooled together using simple voting." }, { "code": null, "e": 9967, "s": 9539, "text": "As always, my first step was to use an out of the box random forest classifier model. This resulted in a big bump in performance: 86% accuracy on the validation set, and 100% accuracy on the training set. In other words, the model is overfitting (or rather, each decision tree in the ensemble is overfitting) but we’re nonetheless seeing a big improvement in performance from pooling together a bunch of overfit decision trees." }, { "code": null, "e": 10659, "s": 9967, "text": "First, let’s do feature selection to identify the most predictive variables that affect the accuracy rate of our random forest ensemble model. People use many different methods for this, but here, we’ll focus on permutation feature importance. Permutation feature importance works by selecting a column (i.e. feature) in our validation set, then shuffling it randomly, thereby destroying the correlations between that feature and all the other features used by our model to make its predictions, and finally measuring our model’s performance on this freshly shuffled validation set. If the performance drops significantly, that tells us that the feature we shuffled must have been important." }, { "code": null, "e": 10719, "s": 10659, "text": "Using permutation importance, we see the following results:" }, { "code": null, "e": 11004, "s": 10719, "text": "Based on this, it really seems like only a small number of our features dominate the others. In fact, if when I tried keeping only the top 13 features indicated here, I ended up sacrificing only a negligible amount of accuracy, obtaining once again 86% accuracy on the validation set." }, { "code": null, "e": 11374, "s": 11004, "text": "One key to hyperparameter tuning in random forests is that in general, the model’s performance only increases with the number of decision trees that we add to our ensemble. As a result, it’s actually the last parameter we’ll tune, after we’ve finished tuning all the other relevant parameters (like max_depth, min_samples_leaf and min_samples_split) using GridSearchCV." 
}, { "code": null, "e": 11507, "s": 11374, "text": "We get the final set of best estimators, and, given these best parameters, we then apply these to our model and compare the results:" }, { "code": null, "e": 12365, "s": 11507, "text": "n_best_grid = n_grid_search.best_estimator_n_optimal_param_grid = { 'bootstrap': [True], 'max_depth': [20], #setting this so as not to create a tree that's too big #'max_features': [2, 3, 4, 10], 'min_samples_leaf': [1], 'min_samples_split': [2], 'n_estimators': [300]}nn_grid_search = GridSearchCV(estimator = n_rfc_gs, param_grid = n_optimal_param_grid, cv = 3, n_jobs = -1, verbose = 2)nn_grid_search.fit(X_train, y_train)nn_rfc_pred_gs = nn_grid_search.predict(X_test)nn_y_pred_gs = nn_grid_search.predict(X_test)print (\"Random Forest Train Accuracy Baseline After Grid Search and N-estimators Search:\", metrics.accuracy_score(y_train, nn_grid_search.predict(X_train)))print (\"Random Forest Test Accuracy Baseline After Grid Search and N-estimators Search:\", metrics.accuracy_score(y_test, nn_grid_search.pre" }, { "code": null, "e": 12540, "s": 12365, "text": "Our model has performance has improved modestly, from 0.860 to 0.863. This isn’t entirely surprising — hyperparameter tuning doesn’t always have a huge impact on performance." }, { "code": null, "e": 12775, "s": 12540, "text": "Finally, we’ll take a look at the num_estimators parameter, taking care to plot the training and validation set accuracies together. As expected, both increase approximately monotonically with num_estimators, until reaching a plateau:" }, { "code": null, "e": 12909, "s": 12775, "text": "We can then compare our current models and their performance based on accuracy, and compare it to the original out of the box models." }, { "code": null, "e": 13451, "s": 12909, "text": "One important point to discuss here is how in the initial run (not shown in this blogpost but in the notebook here), our testing accuracy went from 86% without parameter search to 84% after using GridSearchCV. We can see that the tuned model performs no better than the RF trained on default values. That’s because of the effect of variance on our dataset — GridSearch struggles to differentiate between the best hyperparameters and the worse ones, so it might save time and computing power to go with the out of the box random forest model." }, { "code": null, "e": 13540, "s": 13451, "text": "Some important possibilities to think about in relation to this point are the following:" }, { "code": null, "e": 13961, "s": 13540, "text": "We are using GridSearch with cv = 3 (3-fold cross-validation). That means that for every hyperparameter combination, the model gets trained on only 2/3 of the data in the training set (because 1/3 is kept for validation). So based on this, we’d expect GridSearch to produce a more pessimistic result. One way to avoid this is to increase the value of cv so that you have smaller validation sets and larger training sets." }, { "code": null, "e": 14485, "s": 13961, "text": "There’s always noise (or “variance”) to consider: when we train our model on a different dataset, it will in general give different results. If the variance of our model is high enough (meaning that its performance depends heavily on the particular points that it’s trained on) then it’s possible that GridSearch actually can’t differentiate between the “best” hyperparameters and the worst ones. This is the most likely explanation for our current results. This means that we might as well just go with the default values." 
}, { "code": null, "e": 14542, "s": 14485, "text": "Another area we can more deeply explore is max_features." }, { "code": null, "e": 14768, "s": 14542, "text": "So, how do we do better? Based on our confusion matrices, it appears that classes 1 and 2 are where most of the error is coming from. In my next project, we will explore model stacking to see if we can gain any accuracy here." }, { "code": null, "e": 14894, "s": 14768, "text": "For any questions or comments, feel free to reach out to me on Twitter @ambervteng Thanks for reading, and see you next time!" }, { "code": null, "e": 14959, "s": 14894, "text": "Github Repo: https://github.com/angelaaaateng/Covertype_Analysis" }, { "code": null, "e": 15008, "s": 14959, "text": "Heroku Web App: https://covertype.herokuapp.com/" }, { "code": null, "e": 15152, "s": 15008, "text": "Jupyter Notebook: https://github.com/angelaaaateng/Projects/blob/master/Covertype_Prediction/Scripts/Tree-Based%20and%20Bagging%20Methods.ipynb" }, { "code": null, "e": 15269, "s": 15152, "text": "[1] DataCamp Decision Tree Tutorial https://www.datacamp.com/community/tutorials/decision-tree-classification-python" }, { "code": null, "e": 15412, "s": 15269, "text": "[2] Seif, G. Decision Trees for ML https://towardsdatascience.com/a-guide-to-decision-trees-for-machine-learning-and-data-science-fe2607241956" } ]
How to build a custom Dataset for Tensorflow | by Ivelin Ivanov | Towards Data Science
Tensorflow inspires developers to experiment with their exciting AI ideas in almost any domain that comes to mind. There are three well known factors in the ML community that make a good Deep Neural Network model do magical things.

Model Architecture

High quality training data

Sufficient Compute Capacity

My area of interest is Real Time Communication. Coming up with practical ML use cases that may add value to RTC applications is the easy part. I wrote about a few of these recently.

As my co-founder and good friend Jean Deruelle pointed out, there are many more adjacent use cases if we wander into ambient computing with new generation communication devices seamlessly enhancing home and work experiences.

So I wanted to build a simple prototype and jumped right into connecting Restcomm to Tensorflow. After a few days of research, I realized that there is no easy way to feed real time streaming audio/video media (SIP/RTP) into a tensorflow model. Something similar to the Google Cloud's Speech to Text streaming gRPC API would have been an acceptable initial fallback, but I could not find that in the open source Tensorflow community.

There are ways to read from offline audio files and video files, but that's quite different from processing real time latency sensitive media streams.

Eventually my search took me to the Tensorflow IO project led by Yong Tang. TF IO is a young project with a growing community supported by Google, IBM and others. Yong pointed me to an open github issue for live audio support waiting on contributors. That started a good conversation. A couple of weekends later I had built enough courage to take on a small coding challenge — implementing a new Tensorflow Dataset for PCAP network capture files.

PCAP files are closely related to real time media streams, because they are precise historical snapshots of network activity. PCAP files enable recording and replay of actual network packets as they come into the media processing software including dropped packets and time delays.

Back to the subject of this article — I will now walk you through the main steps in my quest to build a TF PcapDataset and contribute it to the Tensorflow IO project:

Fork Tensorflow IO and build from source

Look at the adjacent datasets in the source tree and pick one that's closest to pcap. I leveraged code from text, cifar and parquet. There is also a document on creating TF ops that proved helpful.

Ask for help on the gitter channel. There are folks who pay attention and respond within hours. I got valuable advice from Stephan Uphoff and Yong. There are also monthly conference calls where anyone can chime in on project issues.

Submit a pull request when ready. The TF IO team is quite responsive and supportive, guiding contributors through tweaks and fixes to meet best practices.
Step 2 turned out to be the one where I spent most of my weekend-hobby time learning TF infrastructure and APIs. Let me break it down for you. Fundamentally TF is a graph structure with operations at each node. Data comes into the graph, operations take data samples as inputs, process these samples and pass outputs to the next operations in the graph that their node is connected to. The figure below is an example of a TF graph from the official docs. Operations work with a common data type named tensors (hence the name TensorFlow). The term tensor has mathematical definition, but the data structure for a tensor is essentially an n-dimensional vector: 0D scalar (number, character or string), 1D list of scalars, 2D matrix of scalars or higher dimension vector of vectors. Data has to be pre-processed and formatted into a Tensor data structure before it’s fed into a TF model. This tensor format requirement is due to the linear algebra extensively used in Deep Neural Networks and the optimizations possible with these structures applying computational parallelism on GPUs or TPUs. Its helpful to understand the benefits of TF Datasets and all the convenience functions that come out of the box such as batching, mapping, shuffling, repeating. These functions make it easier and more efficient to build and train TF models with limited amounts of data and compute power. Datasets and other TF operations can be built in C++ or Python. I picked the C++ route just so I can learn some of the TF C++ framework. Then I wrapped them in Python. In the future, I plan to write a few pure Python datasets, which should be a bit easier. Let’s look at the source code file structure for a TF IO Dataset. Tensorflow uses Bazel as build system, which Google open sourced in 2015. Following is the PcapDataset BUILD file. It declares the public name of the dynamic pcap library (_pcap_ops.so). Lists the two source files to build from (pcap_input.cc and pcap_ops.cc). And declares a few TF dependencies required for the build. The next source file of significance is pcap_ops.cc where we declare the TF ops that will be registered with the TF runtime environment and be available to use in TF apps. Most of the code here is boilerplate. It says that we are introducing a PcapInput op that can read from pcap files and a PcapDataset op that is populated by a PcapInput. The relationship between the two will become more apparent in a few moments. From the time when I started my contribution work till the time it was accepted into the TF master branch, there were several simplifications introduced in the base TF 2.0 framework that reduced boilerplate code in my files. I suspect there will be more of these simplifications in the near future. The core TF team understands that in order to attract larger community of contributors, its important to lower the barrier of entry. New contributors should be able to only focus on the net new code they are writing and not sweat the details of interacting with the TF environment until they are ready for that. The next file in the package is pcap_input.cc. That’s where most of the heavy lifting takes place. I spent a fair share of time writing and testing this file. It has a section that declares the relationship between PcapDataset, PcapInput and PcapInputStream. We will see what each of these does. PcapInputStream contains most of the logic reading from a raw pcap file and converting it to a tensor. 
To get a flavor of the input, here is a screenshot of the test http.pcap file viewed with CocoaPacketAnalyzer.

Let me skip the logic specific to pcap files and point out a few defining elements for the conversion from raw binary file data to tensors.

This ReadRecord line reads from the pcap file the next pcap packet and populates two local variables: packet_timestamp double and packet_data_buffer string.

ReadRecord(packet_timestamp, &packet_data_buffer, record_count);

If a new pcap record was populated successfully, the scalars are placed into respective tensor placeholders. The shape of the resulting output tensor is a matrix with two columns. One column holds the timestamp scalars for each read pcap packet. The other column holds the corresponding packet data as a string. Each row in the output tensor (matrix) corresponds to a pcap packet.

Tensor timestamp_tensor = (*out_tensors)[0];
timestamp_tensor.flat<double>()(*record_read) = packet_timestamp;
Tensor data_tensor = (*out_tensors)[1];
data_tensor.flat<string>()(*record_read) = std::move(packet_data_buffer);

out_tensors are the placeholder tensors prepared when a new batch is requested from the PcapDataset. That is done here; before the read loop.

The packet_timestamp scalar is placed at the first column (index [0]) and (*record_read) row using the typed flat function. Respectively, packet_data_buffer is placed at the second column (index [1]) and same (*record_read) row.

This covers the key elements of the C++ code. Now let's look at the Python files.

__init__.py at the top pcap directory level instructs the TF Python documentation generator how to traverse the python code and extract API reference documentation. You can read more about the documentation best practices here.

The code above instructs the Python API docs generator to focus on the PcapDataset class and ignore other code in this module.

Next, pcap_ops.py wraps the C++ DataSet op and makes it available to Python apps. The C++ dynamic library is imported as follows:

from tensorflow_io import _load_library
pcap_ops = _load_library('_pcap_ops.so')

One of the main roles of the dataset constructor is to provide metadata about the tensor types the dataset produces. First it has to describe the tensor types in an individual data sample. PcapDataset samples are a vector of two scalars. One for the pcap packet timestamp of type tf.float64 and another for the packet data of type tf.string.

dtypes = [tf.float64, tf.string]

Batch is the number of training examples in one forward/backward pass through the neural network. In our case, when we define the size of the batch, we also define the shape of the tensor. When multiple pcap packets are grouped in one batch, both timestamp (tf.float64) and data (tf.string) are 1-D tensors and have the shapes of tf.TensorShape([batch]).

Since we don't know the number of total samples beforehand and the total samples may not be divisible by the size of batch, we would rather set the shape as tf.TensorShape([None]) to give us more flexibility.

Batch size of 0 is a special case where the shape of each individual tensor degenerates into tf.TensorShape([]), or a 0-D scalar tensor.

shapes = [ tf.TensorShape([]), tf.TensorShape([])] if batch == 0 else [ tf.TensorShape([None]), tf.TensorShape([None])]

Almost there. We just need a test case. test_pcap_eager.py exercises PcapDataset while sampling from http.pcap.

The test code is straightforward. It iterates over all pcap packets and tests the values in the first one against known constants.
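In spirit, that eager test boils down to something like the sketch below. The module path and the printed values are stand-ins here (the real expected constants live in test_pcap_eager.py), so treat this purely as an illustration of how the new dataset is consumed.

import tensorflow as tf
import tensorflow_io.pcap as pcap_io  # assumed import path for the new PcapDataset

# On TF 1.x you would first call tf.compat.v1.enable_eager_execution()
dataset = pcap_io.PcapDataset(["http.pcap"], batch=0)  # batch=0: one packet per element

for packet_timestamp, packet_data in dataset:
    # The actual test asserts these against known constants from the capture
    print("first packet timestamp:", packet_timestamp.numpy())
    print("first packet length:", len(packet_data.numpy()))
    break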
To build just the PcapDataset and run its test, I used the following lines from the local io directory:

$ bazel build -s --verbose_failures //tensorflow_io/pcap/...
$ pytest tests/test_pcap_eager.py

That's it! Hope this helps you build your own custom Dataset. When you do, I hope you will consider contributing it to the TF community to accelerate the progress of open source AI.

Feel free to ask questions in the comments section. I will try to answer to the best of my ability.
[ { "code": null, "e": 407, "s": 172, "text": "Tensorflow inspires developers to experiment with their exciting AI ideas in almost any domain that comes to mind. There are three well known factors in the ML community that make up a good Deep Neural Network model do magical things." }, { "code": null, "e": 479, "s": 407, "text": "Model ArchitectureHigh quality training dataSufficient Compute Capacity" }, { "code": null, "e": 498, "s": 479, "text": "Model Architecture" }, { "code": null, "e": 525, "s": 498, "text": "High quality training data" }, { "code": null, "e": 553, "s": 525, "text": "Sufficient Compute Capacity" }, { "code": null, "e": 735, "s": 553, "text": "My area of interest is Real Time Communication. Coming up with practical ML use cases that may add value to RTC applications is the easy part. I wrote about a few of these recently." }, { "code": null, "e": 960, "s": 735, "text": "As my co-founder and good friend Jean Deruelle pointed out, there are many more adjacent use cases if we wander into ambient computing with new generation communication devices seamlessly enhancing home and work experiences." }, { "code": null, "e": 1394, "s": 960, "text": "So I wanted to build a simple prototype and jumped right into connecting Restcomm to Tensorflow. After a few days of research, I realized that there is no easy way to feed real time streaming audio/video media (SIP/RTP) into a tensorflow model. Something similar to the Google Cloud’s Speech to Text streaming gRPC API would have been an acceptable initial fallback, but I could not find that in the open source Tensorflow community." }, { "code": null, "e": 1545, "s": 1394, "text": "There are ways to read from offline audio files and video files, but that’s quite different from processing real time latency sensitive media streams." }, { "code": null, "e": 1993, "s": 1545, "text": "Eventually my search took me to the Tensorflow IO project lead by Yong Tang. TF IO is a young project with a growing community supported by Google, IBM and others. Yong pointed me to an open github issue for live audio support waiting on contributors. That started a good conversation. A couple of weekends later I had build enough courage to take on a small coding challenge — implementing a new Tensorflow Dataset for PCAP network capture files." }, { "code": null, "e": 2275, "s": 1993, "text": "PCAP files are closely related to real time media streams, because they are precise historical snapshots of network activity. PCAP files enable recording and replay of actual network packets as they come into the media processing software including dropped packets and time delays." }, { "code": null, "e": 2447, "s": 2275, "text": "Back to the subject of this article — I will now walk you through the main steps in my quest to building a TF PcapDataset and contributing it to the Tensorflow IO project:" }, { "code": null, "e": 3070, "s": 2447, "text": "Fork Tensorflow IO and build from sourceLook at the adjacent datasets in the source tree and pick one that’s closest to pcap. I leveraged code from text, cifar and parquet. There is also a document on creating TF ops that proved helpful.Ask for help on the gitter channel. There are folks who pay attention and respond within hours. I got valuable advise from Stephan Uphoff and Yong. There are also monthly conference calls where anyone can chime in on project issues.Submit a pull request when ready. The TF IO team is quite responsive and supportive guiding contributors through tweaks and fixes to meet best practices." 
}, { "code": null, "e": 3111, "s": 3070, "text": "Fork Tensorflow IO and build from source" }, { "code": null, "e": 3309, "s": 3111, "text": "Look at the adjacent datasets in the source tree and pick one that’s closest to pcap. I leveraged code from text, cifar and parquet. There is also a document on creating TF ops that proved helpful." }, { "code": null, "e": 3542, "s": 3309, "text": "Ask for help on the gitter channel. There are folks who pay attention and respond within hours. I got valuable advise from Stephan Uphoff and Yong. There are also monthly conference calls where anyone can chime in on project issues." }, { "code": null, "e": 3696, "s": 3542, "text": "Submit a pull request when ready. The TF IO team is quite responsive and supportive guiding contributors through tweaks and fixes to meet best practices." }, { "code": null, "e": 3839, "s": 3696, "text": "Step 2 turned out to be the one where I spent most of my weekend-hobby time learning TF infrastructure and APIs. Let me break it down for you." }, { "code": null, "e": 4151, "s": 3839, "text": "Fundamentally TF is a graph structure with operations at each node. Data comes into the graph, operations take data samples as inputs, process these samples and pass outputs to the next operations in the graph that their node is connected to. The figure below is an example of a TF graph from the official docs." }, { "code": null, "e": 4476, "s": 4151, "text": "Operations work with a common data type named tensors (hence the name TensorFlow). The term tensor has mathematical definition, but the data structure for a tensor is essentially an n-dimensional vector: 0D scalar (number, character or string), 1D list of scalars, 2D matrix of scalars or higher dimension vector of vectors." }, { "code": null, "e": 4787, "s": 4476, "text": "Data has to be pre-processed and formatted into a Tensor data structure before it’s fed into a TF model. This tensor format requirement is due to the linear algebra extensively used in Deep Neural Networks and the optimizations possible with these structures applying computational parallelism on GPUs or TPUs." }, { "code": null, "e": 5076, "s": 4787, "text": "Its helpful to understand the benefits of TF Datasets and all the convenience functions that come out of the box such as batching, mapping, shuffling, repeating. These functions make it easier and more efficient to build and train TF models with limited amounts of data and compute power." }, { "code": null, "e": 5333, "s": 5076, "text": "Datasets and other TF operations can be built in C++ or Python. I picked the C++ route just so I can learn some of the TF C++ framework. Then I wrapped them in Python. In the future, I plan to write a few pure Python datasets, which should be a bit easier." }, { "code": null, "e": 5399, "s": 5333, "text": "Let’s look at the source code file structure for a TF IO Dataset." }, { "code": null, "e": 5719, "s": 5399, "text": "Tensorflow uses Bazel as build system, which Google open sourced in 2015. Following is the PcapDataset BUILD file. It declares the public name of the dynamic pcap library (_pcap_ops.so). Lists the two source files to build from (pcap_input.cc and pcap_ops.cc). And declares a few TF dependencies required for the build." }, { "code": null, "e": 5891, "s": 5719, "text": "The next source file of significance is pcap_ops.cc where we declare the TF ops that will be registered with the TF runtime environment and be available to use in TF apps." 
}, { "code": null, "e": 6138, "s": 5891, "text": "Most of the code here is boilerplate. It says that we are introducing a PcapInput op that can read from pcap files and a PcapDataset op that is populated by a PcapInput. The relationship between the two will become more apparent in a few moments." }, { "code": null, "e": 6437, "s": 6138, "text": "From the time when I started my contribution work till the time it was accepted into the TF master branch, there were several simplifications introduced in the base TF 2.0 framework that reduced boilerplate code in my files. I suspect there will be more of these simplifications in the near future." }, { "code": null, "e": 6749, "s": 6437, "text": "The core TF team understands that in order to attract larger community of contributors, its important to lower the barrier of entry. New contributors should be able to only focus on the net new code they are writing and not sweat the details of interacting with the TF environment until they are ready for that." }, { "code": null, "e": 6908, "s": 6749, "text": "The next file in the package is pcap_input.cc. That’s where most of the heavy lifting takes place. I spent a fair share of time writing and testing this file." }, { "code": null, "e": 7045, "s": 6908, "text": "It has a section that declares the relationship between PcapDataset, PcapInput and PcapInputStream. We will see what each of these does." }, { "code": null, "e": 7259, "s": 7045, "text": "PcapInputStream contains most of the logic reading from a raw pcap file and converting it to a tensor. To get a flavor of the input, here is a screenshot of the test http.pcap file viewed with CocoaPacketAnalyzer." }, { "code": null, "e": 7399, "s": 7259, "text": "Let me skip the logic specific to pcap files and point out a few defining elements for the conversion from raw binary file data to tensors." }, { "code": null, "e": 7556, "s": 7399, "text": "This ReadRecord line reads from the pcap file the next pcap packet and populates two local variables: packet_timestamp double and packet_data_buffer string." }, { "code": null, "e": 7621, "s": 7556, "text": "ReadRecord(packet_timestamp, &packet_data_buffer, record_count);" }, { "code": null, "e": 8002, "s": 7621, "text": "If a new pcap record was populated successfully, the scalars are placed into respective tensor placeholders. The shape of the resulting output tensor is a matrix with two columns. One column holds the timestamp scalars for each read pcap packet. The other column holds the corresponding packet data as a string. Each row in the output tensor (matrix) corresponds to a pcap packet." }, { "code": null, "e": 8224, "s": 8002, "text": "Tensor timestamp_tensor = (*out_tensors)[0];timestamp_tensor.flat<double>()(*record_read) = packet_timestamp;Tensor data_tensor = (*out_tensors)[1];data_tensor.flat<string>()(*record_read) = std::move(packet_data_buffer);" }, { "code": null, "e": 8366, "s": 8224, "text": "out_tensors are the placeholder tensors prepared when a new batch is requested from the PcapDataset. That is done here; before the read loop." }, { "code": null, "e": 8594, "s": 8366, "text": "The packet_timestamp scalar is placed at the first column (index [0]) and (*record_read) row using the typed flat function. Respectively packet_data_buffer is placed at the second column (index [1]) and same (*record_read) row." }, { "code": null, "e": 8675, "s": 8594, "text": "This covers the key elements of the C++ code. Now lets look at the Python files." 
}, { "code": null, "e": 8901, "s": 8675, "text": "_init_.py at the top pcap directory level instructs the TF Python documentation generator how to traverse the python code and extract API reference documentation. You can read more about the documentation best practices here." }, { "code": null, "e": 9027, "s": 8901, "text": "The code above instructs the Pyhton API docs generator to focus on the PcapDataset class and ignore other code in this model." }, { "code": null, "e": 9109, "s": 9027, "text": "Next, pcap_ops.py wraps the C++ DataSet op and makes it available to Python apps." }, { "code": null, "e": 9157, "s": 9109, "text": "The C++ dynamic library is imported as follows:" }, { "code": null, "e": 9237, "s": 9157, "text": "from tensorflow_io import _load_librarypcap_ops = _load_library('_pcap_ops.so')" }, { "code": null, "e": 9579, "s": 9237, "text": "One of the main roles of the dataset constructor is to provide metadata about the dataset tensors types it produces. First it has to describe the tensor types in an individual data sample. PcapDataset samples are a vector of two scalars. One for the pcap packet timestamp of type tf.float64 and another for the packet data of type tf.string." }, { "code": null, "e": 9612, "s": 9579, "text": "dtypes = [tf.float64, tf.string]" }, { "code": null, "e": 9801, "s": 9612, "text": "Batch is the number of training examples in one forward/backward pass through the neural network. In our case, when we define the size of the batch, we also define the shape of the tensor." }, { "code": null, "e": 9967, "s": 9801, "text": "When multiple pcap packets are grouped in one batch, both timestamp (tf.float64) and data (tf.string) are 1-D tensors and have the shapes of tf.TensorShape([batch])." }, { "code": null, "e": 10176, "s": 9967, "text": "Since we don’t know the number of total samples beforehand and the total samples may not be divisible by the size of batch, we would rather set the shape as tf.TensorShape([None]) to give us more flexibility." }, { "code": null, "e": 10311, "s": 10176, "text": "Batch size of 0 is a special case where the shape of each individual tensor degenerates into tf.TensorShape([]), or 0-D scalar tensor." }, { "code": null, "e": 10449, "s": 10311, "text": "shapes = [ tf.TensorShape([]), tf.TensorShape([])] if batch == 0 else [ tf.TensorShape([None]), tf.TensorShape([None])]" }, { "code": null, "e": 10561, "s": 10449, "text": "Almost there. We just need a test case. test_pcap_eager.py exercises PcapDataset while sampling from http.pcap." }, { "code": null, "e": 10689, "s": 10561, "text": "The test code is straightforward. Iterates over all pcap packets and tests the values in the first one against known constants." }, { "code": null, "e": 10793, "s": 10689, "text": "To build just the PcapDataset and run its test, I used the following lines from the local io directory:" }, { "code": null, "e": 10887, "s": 10793, "text": "$ bazel build -s --verbose_failures //tensorflow_io/pcap/...$ pytest tests/test_pcap_eager.py" }, { "code": null, "e": 11069, "s": 10887, "text": "That’s it! Hope this helps you build your own custom Dataset. When you do, I hope you will consider contributing it to the TF community to accelerate the progress of open source AI." } ]
Python - Remove Negative Elements in List - GeeksforGeeks
03 Jul, 2020

Sometimes, while working with Python lists, we can have a problem in which we need to remove all the negative elements from the list. This kind of problem can have applications in many domains such as school programming and web development. Let’s discuss certain ways in which this task can be performed.

Input : test_list = [6, 4, 3]
Output : [6, 4, 3]

Input : test_list = [-6, -4]
Output : []

Method #1 : Using list comprehension
A list comprehension can be used to solve this problem. In this, we perform the task of removing negative elements by iterating over the list in a single line.

# Python3 code to demonstrate working of
# Remove Negative Elements in List
# Using list comprehension

# initializing list
test_list = [5, 6, -3, -8, 9, 11, -12, 2]

# printing original list
print("The original list is : " + str(test_list))

# Remove Negative Elements in List
# Using list comprehension
res = [ele for ele in test_list if ele >= 0]

# printing result
print("List after filtering : " + str(res))

Output:
The original list is : [5, 6, -3, -8, 9, 11, -12, 2]
List after filtering : [5, 6, 9, 11, 2]

Method #2 : Using filter() + lambda
filter() combined with a lambda function offers an alternative to this problem. In this, the lambda expresses the logic of retaining non-negative elements and filter() applies it across the whole list.

# Python3 code to demonstrate working of
# Remove Negative Elements in List
# Using filter() + lambda

# initializing list
test_list = [5, 6, -3, -8, 9, 11, -12, 2]

# printing original list
print("The original list is : " + str(test_list))

# Remove Negative Elements in List
# Using filter() + lambda
res = list(filter(lambda x : x >= 0, test_list))

# printing result
print("List after filtering : " + str(res))

Output:
The original list is : [5, 6, -3, -8, 9, 11, -12, 2]
List after filtering : [5, 6, 9, 11, 2]
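Method #3 : Using an in-place loop (an additional sketch, not part of the original write-up)
If the original list object must be modified in place, for example when other references to the same list should see the change, a plain loop over a copy of the list works as well. The variable names below simply mirror the earlier examples.

# Python3 code to demonstrate working of
# Remove Negative Elements in List
# Using an in-place loop

# initializing list
test_list = [5, 6, -3, -8, 9, 11, -12, 2]

# printing original list
print("The original list is : " + str(test_list))

# iterate over a copy so that removing elements does not skip any
for ele in test_list[:]:
    if ele < 0:
        test_list.remove(ele)

# printing result
print("List after filtering : " + str(test_list))

Output:
The original list is : [5, 6, -3, -8, 9, 11, -12, 2]
List after filtering : [5, 6, 9, 11, 2]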
[ { "code": null, "e": 25647, "s": 25619, "text": "\n03 Jul, 2020" }, { "code": null, "e": 25947, "s": 25647, "text": "Sometimes, while working with Python lists, we can have a problem in which we need to remove all the negative elements from list. This kind of problem can have application in many domains such as school programming and web development. Let’s discuss certain ways in which this task can be performed." }, { "code": null, "e": 25995, "s": 25947, "text": "Input : test_list = [6, 4, 3]Output : [6, 4, 3]" }, { "code": null, "e": 26035, "s": 25995, "text": "Input : test_list = [-6, -4]Output : []" }, { "code": null, "e": 26251, "s": 26035, "text": "Method #1 : Using list comprehensionThe combination of above functions can be used to solve this problem. In this, we perform the task of removing negative elements by iteration in one liner using list comprehension" }, { "code": "# Python3 code to demonstrate working of # Remove Negative Elements in List# Using list comprehension # initializing listtest_list = [5, 6, -3, -8, 9, 11, -12, 2] # printing original listprint(\"The original list is : \" + str(test_list)) # Remove Negative Elements in List# Using list comprehensionres = [ele for ele in test_list if ele > 0] # printing result print(\"List after filtering : \" + str(res))", "e": 26659, "s": 26251, "text": null }, { "code": null, "e": 26753, "s": 26659, "text": "The original list is : [5, 6, -3, -8, 9, 11, -12, 2]\nList after filtering : [5, 6, 9, 11, 2]\n" }, { "code": null, "e": 26977, "s": 26755, "text": "Method #2 : Using filter() + lambdaThe combination of above functions can also offer an alternative to this problem. In this, we extend logic of retaining positive formed using lambda function and extended using filter()." }, { "code": "# Python3 code to demonstrate working of # Remove Negative Elements in List# Using filter() + lambda # initializing listtest_list = [5, 6, -3, -8, 9, 11, -12, 2] # printing original listprint(\"The original list is : \" + str(test_list)) # Remove Negative Elements in List# Using filter() + lambdares = list(filter(lambda x : x > 0, test_list)) # printing result print(\"List after filtering : \" + str(res))", "e": 27387, "s": 26977, "text": null }, { "code": null, "e": 27481, "s": 27387, "text": "The original list is : [5, 6, -3, -8, 9, 11, -12, 2]\nList after filtering : [5, 6, 9, 11, 2]\n" }, { "code": null, "e": 27502, "s": 27481, "text": "Python list-programs" }, { "code": null, "e": 27509, "s": 27502, "text": "Python" }, { "code": null, "e": 27525, "s": 27509, "text": "Python Programs" }, { "code": null, "e": 27623, "s": 27525, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 27655, "s": 27623, "text": "How to Install PIP on Windows ?" }, { "code": null, "e": 27697, "s": 27655, "text": "Check if element exists in list in Python" }, { "code": null, "e": 27739, "s": 27697, "text": "How To Convert Python Dictionary To JSON?" 
}, { "code": null, "e": 27795, "s": 27739, "text": "How to drop one or multiple columns in Pandas Dataframe" }, { "code": null, "e": 27822, "s": 27795, "text": "Python Classes and Objects" }, { "code": null, "e": 27844, "s": 27822, "text": "Defaultdict in Python" }, { "code": null, "e": 27883, "s": 27844, "text": "Python | Get dictionary keys as a list" }, { "code": null, "e": 27929, "s": 27883, "text": "Python | Split string into list of characters" }, { "code": null, "e": 27967, "s": 27929, "text": "Python | Convert a list to dictionary" } ]
How to Download historical stock prices in Python? - GeeksforGeeks
05 Apr, 2021

Stock prices refer to the current price of the share of that stock. Stock prices are widely used in the field of Machine Learning for the demonstration of the regression problem. Stock prediction is an application of Machine Learning where we predict the stocks of a particular firm by looking at its past data. Now, to build something like this, the first step is to get our historical stock data.

We can get our historical stock data using APIs provided as library support in Python. A few of the APIs are mentioned below:

Yahoo Finance
Pandas DataReader
Quandl

Approach:

Each of the methods uses a different Python module, but they have a similar procedural structure which includes the following steps:

1. Import required libraries

We are using the datetime module to get the dates for the starting and ending limits of the stock data required.
We are using the matplotlib module to display the extracted data in a graphical format.

2. Initialize the start and end date for getting the stock data during that time period.

3. Get the data using the dedicated functions provided in each of the modules.

4. Display the data using the matplotlib library. We use the plot() function to plot the data in a graphical format.

We can get stock data using the yfinance.download() function provided in the yfinance module, which is a module for Yahoo's Finance API. We can download the module using the following command.

pip install yfinance

We need to supply 3 required parameters in the yfinance.download() function, which are

Stock Symbol
Start date
End date

Below is the implementation.

Python3

# import modules
from datetime import datetime
import yfinance as yf
import matplotlib.pyplot as plt

# initialize parameters
start_date = datetime(2020, 1, 1)
end_date = datetime(2021, 1, 1)

# get the data
data = yf.download('SPY', start = start_date, end = end_date)

# display
plt.figure(figsize = (20,10))
plt.title('Opening Prices from {} to {}'.format(start_date, end_date))
plt.plot(data['Open'])
plt.show()

Output:

Another way of getting the historical stock data is to use the pandas_datareader library. It also uses Yahoo's Finance API to load in the data. We can download the module using the following command.

pip install pandas_datareader

It also requires the same three fields to load in the data, which are

Stock Symbol
Start date
End date

Below is the implementation:

Python3

# import modules
from pandas_datareader import data as pdr
import matplotlib.pyplot as plt

# initializing Parameters
start = "2020-01-01"
end = "2021-01-01"
symbols = ["AAPL"]

# Getting the data
data = pdr.get_data_yahoo(symbols, start, end)

# Display
plt.figure(figsize = (20,10))
plt.title('Opening Prices from {} to {}'.format(start, end))
plt.plot(data['Open'])
plt.show()

Output:

Quandl has hundreds of free and paid data sources, across equities, fixed incomes, commodities, exchange rates, etc. In order to get access, we need to create an account on Quandl and get an API key to access the data for free. After that, we need to download the quandl library of Python using the following command.

pip install quandl

We will use the quandl.get() function to get the data. It takes four fields to load in the data:

Symbol
start_date
end_date
Authentication token

Below is the implementation:

Python3

# import modules
import quandl
from datetime import datetime
import matplotlib.pyplot as plt

# initialize parameters
start = datetime(2015, 1, 1)
end = datetime(2020, 1, 1)

# get the data
df = quandl.get('NSE/OIL', start_date = start, end_date = end,
                authtoken = 'enter_your_api_key')

# display
plt.figure(figsize=(20,10))
plt.title('Opening Prices from {} to {}'.format(start, end))
plt.plot(df['Open'])
plt.show()

Output:
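A small additional note, not part of the original article: each of the three approaches above returns a pandas DataFrame, so the downloaded prices can be cached locally instead of being re-downloaded on every run. Below is a minimal sketch using the yfinance example; the file name is arbitrary.

# download once, save to disk, and reload later without hitting the API
import pandas as pd
import yfinance as yf

data = yf.download('SPY', start = "2020-01-01", end = "2021-01-01")
data.to_csv("spy_2020.csv")

cached = pd.read_csv("spy_2020.csv", index_col = 0, parse_dates = True)
print(cached['Open'].head())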
It takes four fields to load in the data Symbol start_date end_date Authentication token Below is the implementation: Python3 # import modulesimport quandlfrom datetime import datetimeimport matplotlib.pyplot as plt # initialize parametersstart = datetime(2015, 1, 1)end = datetime(2020, 1, 1) # get the datadf = quandl.get('NSE/OIL', start_date = start, end_date = end, authtoken = 'enter_your_api_key') # displayplt.figure(figsize=(20,10))plt.title('Opening Prices from {} to {}'.format(start, end))plt.plot(df['Open'])plt.show() Output: Picked python-utility Python Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. Python Dictionary Read a file line by line in Python How to Install PIP on Windows ? Enumerate() in Python Different ways to create Pandas Dataframe Iterate over a list in Python *args and **kwargs in Python Reading and Writing to text files in Python Create a Pandas DataFrame from Lists Check if element exists in list in Python
[ { "code": null, "e": 26213, "s": 26185, "text": "\n05 Apr, 2021" }, { "code": null, "e": 26606, "s": 26213, "text": "Stock prices refer to the current price of the share of that stock. Stock prices are widely used in the field of Machine Learning for the demonstration of the regression problem. Stock prediction is an application of Machine learning where we predict the stocks of a particular firm by looking at its past data. Now to build something like this first step is to get our historical stock data." }, { "code": null, "e": 26734, "s": 26606, "text": "We can get our historical stock data using API’s provided as library support in Python. A few of the API’s are mentioned below:" }, { "code": null, "e": 26748, "s": 26734, "text": "Yahoo Finance" }, { "code": null, "e": 26766, "s": 26748, "text": "Pandas DataReader" }, { "code": null, "e": 26773, "s": 26766, "text": "Quandl" }, { "code": null, "e": 26783, "s": 26773, "text": "Approach:" }, { "code": null, "e": 26916, "s": 26783, "text": "Each of the methods uses a different python module, but they have a similar procedural structure which includes the following steps:" }, { "code": null, "e": 26945, "s": 26916, "text": "1. Import required libraries" }, { "code": null, "e": 27051, "s": 26945, "text": "We are using datetime module to get the date of the starting and ending limit of the stock data required." }, { "code": null, "e": 27135, "s": 27051, "text": "We are using matplotlib module to display the data extracted in a graphical format." }, { "code": null, "e": 27224, "s": 27135, "text": "2. Initialize the start and end date for getting the stock data during that time period." }, { "code": null, "e": 27303, "s": 27224, "text": "3. Get the data using the dedicated functions provided in each of the modules." }, { "code": null, "e": 27421, "s": 27303, "text": "4. Display the data using the matplotlib library. We use the plot() function to plot the data in a graphical format." }, { "code": null, "e": 27608, "s": 27421, "text": "We can get stock using the yfinance.download() function provided in the yfinance module which is a module for Yahoo’s Finance API. We can download the module using the following command." }, { "code": null, "e": 27629, "s": 27608, "text": "pip install yfinance" }, { "code": null, "e": 27716, "s": 27629, "text": "We need to supply 3 required parameters in the yfinance.download() function which are " }, { "code": null, "e": 27729, "s": 27716, "text": "Stock Symbol" }, { "code": null, "e": 27740, "s": 27729, "text": "Start date" }, { "code": null, "e": 27749, "s": 27740, "text": "End date" }, { "code": null, "e": 27778, "s": 27749, "text": "Below is the implementation." }, { "code": null, "e": 27786, "s": 27778, "text": "Python3" }, { "code": "# import modulesfrom datetime import datetimeimport yfinance as yfimport matplotlib.pyplot as plt # initialize parametersstart_date = datetime(2020, 1, 1)end_date = datetime(2021, 1, 1) # get the datadata = yf.download('SPY', start = start_date, end = end_date) # displayplt.figure(figsize = (20,10))plt.title('Opening Prices from {} to {}'.format(start_date, end_date))plt.plot(data['Open'])plt.show()", "e": 28257, "s": 27786, "text": null }, { "code": null, "e": 28265, "s": 28257, "text": "Output:" }, { "code": null, "e": 28465, "s": 28265, "text": "Another way of getting the historical stock data is to use the pandas_datareader library. It also uses Yahoo’s Finance API to load in the data. We can download the module using the following command." 
}, { "code": null, "e": 28495, "s": 28465, "text": "pip install pandas_datareader" }, { "code": null, "e": 28567, "s": 28495, "text": "It also requires the similar three fields to load in the data which are" }, { "code": null, "e": 28580, "s": 28567, "text": "Stock Symbol" }, { "code": null, "e": 28591, "s": 28580, "text": "Start date" }, { "code": null, "e": 28600, "s": 28591, "text": "End date" }, { "code": null, "e": 28629, "s": 28600, "text": "Below is the implementation:" }, { "code": null, "e": 28637, "s": 28629, "text": "Python3" }, { "code": "# import modulesfrom pandas_datareader import data as pdrimport matplotlib.pyplot as plt # initializing Parametersstart = \"2020-01-01\"end = \"2021-01-01\"symbols = [\"AAPL\"] # Getting the datadata = pdr.get_data_yahoo(symbols, start, end) # Displayplt.figure(figsize = (20,10))plt.title('Opening Prices from {} to {}'.format(start, end))plt.plot(data['Open'])plt.show()", "e": 29007, "s": 28637, "text": null }, { "code": null, "e": 29015, "s": 29007, "text": "Output:" }, { "code": null, "e": 29349, "s": 29015, "text": "Quandl has hundreds of free and paid data sources, across equities, fixed incomes, commodities, exchange rates, etc. In order to get the access, we need to create an account on Quandl and get an API Key to access the data for free. After that, we need to download the API support quandl library of python using the following command." }, { "code": null, "e": 29368, "s": 29349, "text": "pip install quandl" }, { "code": null, "e": 29461, "s": 29368, "text": " We will use quandl.get() function to get the data. It takes four fields to load in the data" }, { "code": null, "e": 29468, "s": 29461, "text": "Symbol" }, { "code": null, "e": 29479, "s": 29468, "text": "start_date" }, { "code": null, "e": 29488, "s": 29479, "text": "end_date" }, { "code": null, "e": 29509, "s": 29488, "text": "Authentication token" }, { "code": null, "e": 29538, "s": 29509, "text": "Below is the implementation:" }, { "code": null, "e": 29546, "s": 29538, "text": "Python3" }, { "code": "# import modulesimport quandlfrom datetime import datetimeimport matplotlib.pyplot as plt # initialize parametersstart = datetime(2015, 1, 1)end = datetime(2020, 1, 1) # get the datadf = quandl.get('NSE/OIL', start_date = start, end_date = end, authtoken = 'enter_your_api_key') # displayplt.figure(figsize=(20,10))plt.title('Opening Prices from {} to {}'.format(start, end))plt.plot(df['Open'])plt.show()", "e": 29986, "s": 29546, "text": null }, { "code": null, "e": 29994, "s": 29986, "text": "Output:" }, { "code": null, "e": 30001, "s": 29994, "text": "Picked" }, { "code": null, "e": 30016, "s": 30001, "text": "python-utility" }, { "code": null, "e": 30023, "s": 30016, "text": "Python" }, { "code": null, "e": 30121, "s": 30023, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 30139, "s": 30121, "text": "Python Dictionary" }, { "code": null, "e": 30174, "s": 30139, "text": "Read a file line by line in Python" }, { "code": null, "e": 30206, "s": 30174, "text": "How to Install PIP on Windows ?" 
}, { "code": null, "e": 30228, "s": 30206, "text": "Enumerate() in Python" }, { "code": null, "e": 30270, "s": 30228, "text": "Different ways to create Pandas Dataframe" }, { "code": null, "e": 30300, "s": 30270, "text": "Iterate over a list in Python" }, { "code": null, "e": 30329, "s": 30300, "text": "*args and **kwargs in Python" }, { "code": null, "e": 30373, "s": 30329, "text": "Reading and Writing to text files in Python" }, { "code": null, "e": 30410, "s": 30373, "text": "Create a Pandas DataFrame from Lists" } ]
How to extract the names of vector values from a named vector in R?
How to extract the names of vector values from a named vector in R? The names of vector values are created by using name function and the names can be extracted by using the same function. For example, if we have a vector called x that contains five values(1 to 5) and their names are defined as first, second, third, fourth and fifth then the names of values in x can be extracted by using names(x)[x==1]. Live Demo > x1<-1:4 > names(x1)<-c("one","two","three","four") > x1 one two three four 1 2 3 4 > names(x1)[x1==1] [1] "one" > names(x1)[x1==2] [1] "two" > names(x1)[x1==3] [1] "three" > names(x1)[x1==4] [1] "four" Live Demo > x2<-sample(0:5,120,replace=TRUE) > x2 [1] 3 5 1 4 5 2 3 2 4 5 1 4 2 4 4 3 1 0 5 0 2 1 5 1 3 1 0 2 3 0 3 2 4 2 3 2 0 [38] 3 5 3 4 3 2 3 3 4 4 2 3 2 3 5 5 5 2 1 2 5 2 3 2 2 4 4 2 4 1 4 0 5 5 5 4 0 [75] 1 3 0 3 3 5 1 2 4 5 3 0 3 5 0 3 5 4 3 5 5 2 5 2 1 3 0 0 3 1 2 1 2 3 1 3 4 [112] 0 4 1 4 4 4 4 5 2 > names(x2)<-sample(c("India","Russia","Turkey","France","Germany"),120,replace=TRUE) > x2 France Russia Russia Russia France Germany Germany France Turkey Germany 3 5 1 4 5 2 3 2 4 5 Germany India Turkey Russia Germany Turkey Germany Germany Russia Russia 1 4 2 4 4 3 1 0 5 0 Turkey France India France Germany Russia Russia Russia India Turkey 2 1 5 1 3 1 0 2 3 0 Turkey France India Germany Turkey India France India France Russia 3 2 4 2 3 2 0 3 5 3 India Turkey Germany Germany Turkey Russia Germany Turkey Germany Russia 4 3 2 3 3 4 4 2 3 2 Russia India Turkey Germany India France Russia Russia India France 3 5 5 5 2 1 2 5 2 3 France Russia India India India Turkey France India France India 2 2 4 4 2 4 1 4 0 5 Russia France India India France Turkey France Turkey Turkey Turkey 5 5 4 0 1 3 0 3 3 5 Turkey Turkey India India Russia Germany Turkey Germany France Russia 1 2 4 5 3 0 3 5 0 3 India Germany India India Germany Russia India Russia Russia Russia 5 4 3 5 5 2 5 2 1 3 Germany Russia Germany Turkey India Russia Germany Russia France India 0 0 3 1 2 1 2 3 1 3 France Germany Germany Germany Russia France India Turkey Russia Germany 4 0 4 1 4 4 4 4 5 2 > names(x2)[x2==0] [1] "Germany" "Russia" "Russia" "Turkey" "France" "France" "India" [8] "France" "Germany" "France" "Germany" "Russia" "Germany" > names(x2)[x2==1] [1] "Russia" "Germany" "Germany" "France" "France" "Russia" "France" [8] "France" "France" "Turkey" "Russia" "Turkey" "Russia" "France" [15] "Germany" > names(x2)[x2==2] [1] "Germany" "France" "Turkey" "Turkey" "Russia" "France" "Germany" [8] "India" "Germany" "Turkey" "Russia" "India" "Russia" "India" [15] "France" "Russia" "India" "Turkey" "Russia" "Russia" "India" [22] "Germany" "Germany" > names(x2)[x2==3] [1] "France" "Germany" "Turkey" "Germany" "India" "Turkey" "Turkey" [8] "India" "Russia" "Turkey" "Germany" "Turkey" "Germany" "Russia" [15] "France" "Turkey" "Turkey" "Turkey" "Russia" "Turkey" "Russia" [22] "India" "Russia" "Germany" "Russia" "India" > names(x2)[x2==4] [1] "Russia" "Turkey" "India" "Russia" "Germany" "India" "India" [8] "Russia" "Germany" "India" "India" "Turkey" "India" "India" [15] "India" "Germany" "France" "Germany" "Russia" "France" "India" [22] "Turkey" > names(x2)[x2==5] [1] "Russia" "France" "Germany" "Russia" "India" "France" "India" [8] "Turkey" "Germany" "Russia" "India" "Russia" "France" "Turkey" [15] "India" "Germany" "India" "India" "Germany" "India" "Russia" Live Demo > x3<-sample(LETTERS[1:26],100,replace=TRUE) > x3 [1] "A" "U" "E" "B" "X" "S" "N" "C" "I" "W" "K" "R" "O" "G" "X" "G" "H" "C" [19] "G" "M" "M" "U" "O" "C" "C" "N" 
"W" "O" "W" "C" "B" "D" "L" "P" "W" "M" [37] "J" "L" "W" "A" "R" "M" "B" "U" "J" "B" "L" "W" "D" "Y" "J" "N" "I" "V" [55] "J" "S" "Q" "K" "K" "K" "J" "C" "H" "E" "C" "T" "R" "X" "I" "T" "Z" "H" [73] "E" "I" "Q" "E" "D" "C" "I" "T" "Y" "H" "K" "W" "X" "M" "Y" "E" "Q" "F" [91] "G" "Q" "Q" "O" "K" "L" "R" "Z" "F" "K" > names(x3)<-sample(c("C1","C2","C3","C4","C5"),100,replace=TRUE) > x3 C2 C4 C4 C4 C1 C3 C3 C1 C5 C2 C5 C5 C4 C1 C2 C5 C1 C3 C5 C3 "A" "U" "E" "B" "X" "S" "N" "C" "I" "W" "K" "R" "O" "G" "X" "G" "H" "C" "G" "M" C5 C3 C1 C1 C3 C4 C3 C1 C4 C2 C4 C1 C3 C3 C2 C1 C1 C3 C2 C2 "M" "U" "O" "C" "C" "N" "W" "O" "W" "C" "B" "D" "L" "P" "W" "M" "J" "L" "W" "A" C1 C2 C4 C1 C5 C1 C5 C1 C1 C3 C3 C1 C3 C1 C3 C4 C5 C1 C3 C3 "R" "M" "B" "U" "J" "B" "L" "W" "D" "Y" "J" "N" "I" "V" "J" "S" "Q" "K" "K" "K" C2 C1 C1 C5 C1 C3 C5 C1 C2 C1 C4 C5 C3 C1 C4 C4 C5 C3 C4 C5 "J" "C" "H" "E" "C" "T" "R" "X" "I" "T" "Z" "H" "E" "I" "Q" "E" "D" "C" "I" "T" C2 C4 C4 C4 C1 C5 C5 C1 C3 C2 C4 C2 C4 C1 C1 C1 C2 C3 C5 C5 "Y" "H" "K" "W" "X" "M" "Y" "E" "Q" "F" "G" "Q" "Q" "O" "K" "L" "R" "Z" "F" "K" > names(x3)[x3=="A"] [1] "C2" "C2" > names(x3)[x3=="B"] [1] "C4" "C4" "C4" "C1" > names(x3)[x3=="C"] [1] "C1" "C3" "C1" "C3" "C2" "C1" "C1" "C3" > names(x3)[x3=="D"] [1] "C1" "C1" "C5" > names(x3)[x3=="E"] [1] "C4" "C5" "C3" "C4" "C1" > names(x3)[x3=="T"] [1] "C3" "C1" "C5" > names(x3)[x3=="U"] [1] "C4" "C3" "C1" > names(x3)[x3=="W"] [1] "C2" "C3" "C4" "C2" "C2" "C1" "C4" > names(x3)[x3=="X"] [1] "C1" "C2" "C1" "C1" > names(x3)[x3=="Y"] [1] "C3" "C2" "C5" > names(x3)[x3=="Z"] [1] "C4" "C3"
[ { "code": null, "e": 1130, "s": 1062, "text": "How to extract the names of vector values from a named vector in R?" }, { "code": null, "e": 1469, "s": 1130, "text": "The names of vector values are created by using name function and the names can be extracted by using the same function. For example, if we have a vector called x that contains five values(1 to 5) and their names are defined as first, second, third, fourth and fifth then the names of values in x can be extracted by using names(x)[x==1]." }, { "code": null, "e": 1479, "s": 1469, "text": "Live Demo" }, { "code": null, "e": 1537, "s": 1479, "text": "> x1<-1:4\n> names(x1)<-c(\"one\",\"two\",\"three\",\"four\")\n> x1" }, { "code": null, "e": 1564, "s": 1537, "text": "one two three four\n1 2 3 4" }, { "code": null, "e": 1583, "s": 1564, "text": "> names(x1)[x1==1]" }, { "code": null, "e": 1593, "s": 1583, "text": "[1] \"one\"" }, { "code": null, "e": 1612, "s": 1593, "text": "> names(x1)[x1==2]" }, { "code": null, "e": 1623, "s": 1612, "text": "[1] \"two\"\n" }, { "code": null, "e": 1642, "s": 1623, "text": "> names(x1)[x1==3]" }, { "code": null, "e": 1654, "s": 1642, "text": "[1] \"three\"" }, { "code": null, "e": 1673, "s": 1654, "text": "> names(x1)[x1==4]" }, { "code": null, "e": 1685, "s": 1673, "text": "[1] \"four\"\n" }, { "code": null, "e": 1695, "s": 1685, "text": "Live Demo" }, { "code": null, "e": 1735, "s": 1695, "text": "> x2<-sample(0:5,120,replace=TRUE)\n> x2" }, { "code": null, "e": 1996, "s": 1735, "text": " [1] 3 5 1 4 5 2 3 2 4 5 1 4 2 4 4 3 1 0 5 0 2 1 5 1 3 1 0 2 3 0 3 2 4 2 3 2 0\n[38] 3 5 3 4 3 2 3 3 4 4 2 3 2 3 5 5 5 2 1 2 5 2 3 2 2 4 4 2 4 1 4 0 5 5 5 4 0\n[75] 1 3 0 3 3 5 1 2 4 5 3 0 3 5 0 3 5 4 3 5 5 2 5 2 1 3 0 0 3 1 2 1 2 3 1 3 4\n[112] 0 4 1 4 4 4 4 5 2" }, { "code": null, "e": 2087, "s": 1996, "text": "> names(x2)<-sample(c(\"India\",\"Russia\",\"Turkey\",\"France\",\"Germany\"),120,replace=TRUE)\n> x2" }, { "code": null, "e": 3166, "s": 2087, "text": "France Russia Russia Russia France Germany Germany France Turkey Germany\n3 5 1 4 5 2 3 2 4 5\nGermany India Turkey Russia Germany Turkey Germany Germany Russia Russia\n1 4 2 4 4 3 1 0 5 0\nTurkey France India France Germany Russia Russia Russia India Turkey\n2 1 5 1 3 1 0 2 3 0\nTurkey France India Germany Turkey India France India France Russia\n3 2 4 2 3 2 0 3 5 3\nIndia Turkey Germany Germany Turkey Russia Germany Turkey Germany Russia\n4 3 2 3 3 4 4 2 3 2\nRussia India Turkey Germany India France Russia Russia India France\n3 5 5 5 2 1 2 5 2 3\nFrance Russia India India India Turkey France India France India\n2 2 4 4 2 4 1 4 0 5\nRussia France India India France Turkey France Turkey Turkey Turkey\n5 5 4 0 1 3 0 3 3 5\nTurkey Turkey India India Russia Germany Turkey Germany France Russia\n1 2 4 5 3 0 3 5 0 3\nIndia Germany India India Germany Russia India Russia Russia Russia\n5 4 3 5 5 2 5 2 1 3\nGermany Russia Germany Turkey India Russia Germany Russia France India\n0 0 3 1 2 1 2 3 1 3\nFrance Germany Germany Germany Russia France India Turkey Russia Germany\n4 0 4 1 4 4 4 4 5 2" }, { "code": null, "e": 3185, "s": 3166, "text": "> names(x2)[x2==0]" }, { "code": null, "e": 3313, "s": 3185, "text": "[1] \"Germany\" \"Russia\" \"Russia\" \"Turkey\" \"France\" \"France\" \"India\"\n[8] \"France\" \"Germany\" \"France\" \"Germany\" \"Russia\" \"Germany\"" }, { "code": null, "e": 3332, "s": 3313, "text": "> names(x2)[x2==1]" }, { "code": null, "e": 3483, "s": 3332, "text": "[1] \"Russia\" \"Germany\" \"Germany\" \"France\" \"France\" \"Russia\" 
\"France\"\n[8] \"France\" \"France\" \"Turkey\" \"Russia\" \"Turkey\" \"Russia\" \"France\"\n[15] \"Germany\"" }, { "code": null, "e": 3502, "s": 3483, "text": "> names(x2)[x2==2]" }, { "code": null, "e": 3727, "s": 3502, "text": "[1] \"Germany\" \"France\" \"Turkey\" \"Turkey\" \"Russia\" \"France\" \"Germany\"\n[8] \"India\" \"Germany\" \"Turkey\" \"Russia\" \"India\" \"Russia\" \"India\"\n[15] \"France\" \"Russia\" \"India\" \"Turkey\" \"Russia\" \"Russia\" \"India\"\n[22] \"Germany\" \"Germany\"" }, { "code": null, "e": 3746, "s": 3727, "text": "> names(x2)[x2==3]" }, { "code": null, "e": 3999, "s": 3746, "text": "[1] \"France\" \"Germany\" \"Turkey\" \"Germany\" \"India\" \"Turkey\" \"Turkey\"\n[8] \"India\" \"Russia\" \"Turkey\" \"Germany\" \"Turkey\" \"Germany\" \"Russia\"\n[15] \"France\" \"Turkey\" \"Turkey\" \"Turkey\" \"Russia\" \"Turkey\" \"Russia\"\n[22] \"India\" \"Russia\" \"Germany\" \"Russia\" \"India\"" }, { "code": null, "e": 4018, "s": 3999, "text": "> names(x2)[x2==4]" }, { "code": null, "e": 4229, "s": 4018, "text": "[1] \"Russia\" \"Turkey\" \"India\" \"Russia\" \"Germany\" \"India\" \"India\"\n[8] \"Russia\" \"Germany\" \"India\" \"India\" \"Turkey\" \"India\" \"India\"\n[15] \"India\" \"Germany\" \"France\" \"Germany\" \"Russia\" \"France\" \"India\"\n[22] \"Turkey\"" }, { "code": null, "e": 4248, "s": 4229, "text": "> names(x2)[x2==5]" }, { "code": null, "e": 4447, "s": 4248, "text": "[1] \"Russia\" \"France\" \"Germany\" \"Russia\" \"India\" \"France\" \"India\"\n[8] \"Turkey\" \"Germany\" \"Russia\" \"India\" \"Russia\" \"France\" \"Turkey\"\n[15] \"India\" \"Germany\" \"India\" \"India\" \"Germany\" \"India\" \"Russia\"" }, { "code": null, "e": 4457, "s": 4447, "text": "Live Demo" }, { "code": null, "e": 4507, "s": 4457, "text": "> x3<-sample(LETTERS[1:26],100,replace=TRUE)\n> x3" }, { "code": null, "e": 4936, "s": 4507, "text": "[1] \"A\" \"U\" \"E\" \"B\" \"X\" \"S\" \"N\" \"C\" \"I\" \"W\" \"K\" \"R\" \"O\" \"G\" \"X\" \"G\" \"H\" \"C\"\n[19] \"G\" \"M\" \"M\" \"U\" \"O\" \"C\" \"C\" \"N\" \"W\" \"O\" \"W\" \"C\" \"B\" \"D\" \"L\" \"P\" \"W\" \"M\"\n[37] \"J\" \"L\" \"W\" \"A\" \"R\" \"M\" \"B\" \"U\" \"J\" \"B\" \"L\" \"W\" \"D\" \"Y\" \"J\" \"N\" \"I\" \"V\"\n[55] \"J\" \"S\" \"Q\" \"K\" \"K\" \"K\" \"J\" \"C\" \"H\" \"E\" \"C\" \"T\" \"R\" \"X\" \"I\" \"T\" \"Z\" \"H\"\n[73] \"E\" \"I\" \"Q\" \"E\" \"D\" \"C\" \"I\" \"T\" \"Y\" \"H\" \"K\" \"W\" \"X\" \"M\" \"Y\" \"E\" \"Q\" \"F\"\n[91] \"G\" \"Q\" \"Q\" \"O\" \"K\" \"L\" \"R\" \"Z\" \"F\" \"K\"" }, { "code": null, "e": 5007, "s": 4936, "text": "> names(x3)<-sample(c(\"C1\",\"C2\",\"C3\",\"C4\",\"C5\"),100,replace=TRUE)\n> x3" }, { "code": null, "e": 5707, "s": 5007, "text": "C2 C4 C4 C4 C1 C3 C3 C1 C5 C2 C5 C5 C4 C1 C2 C5 C1 C3 C5 C3\n\"A\" \"U\" \"E\" \"B\" \"X\" \"S\" \"N\" \"C\" \"I\" \"W\" \"K\" \"R\" \"O\" \"G\" \"X\" \"G\" \"H\" \"C\" \"G\" \"M\"\nC5 C3 C1 C1 C3 C4 C3 C1 C4 C2 C4 C1 C3 C3 C2 C1 C1 C3 C2 C2\n\"M\" \"U\" \"O\" \"C\" \"C\" \"N\" \"W\" \"O\" \"W\" \"C\" \"B\" \"D\" \"L\" \"P\" \"W\" \"M\" \"J\" \"L\" \"W\" \"A\"\nC1 C2 C4 C1 C5 C1 C5 C1 C1 C3 C3 C1 C3 C1 C3 C4 C5 C1 C3 C3\n\"R\" \"M\" \"B\" \"U\" \"J\" \"B\" \"L\" \"W\" \"D\" \"Y\" \"J\" \"N\" \"I\" \"V\" \"J\" \"S\" \"Q\" \"K\" \"K\" \"K\"\nC2 C1 C1 C5 C1 C3 C5 C1 C2 C1 C4 C5 C3 C1 C4 C4 C5 C3 C4 C5\n\"J\" \"C\" \"H\" \"E\" \"C\" \"T\" \"R\" \"X\" \"I\" \"T\" \"Z\" \"H\" \"E\" \"I\" \"Q\" \"E\" \"D\" \"C\" \"I\" \"T\"\nC2 C4 C4 C4 C1 C5 C5 C1 C3 C2 C4 C2 C4 C1 C1 C1 C2 C3 C5 C5\n\"Y\" \"H\" \"K\" \"W\" \"X\" \"M\" \"Y\" \"E\" \"Q\" 
\"F\" \"G\" \"Q\" \"Q\" \"O\" \"K\" \"L\" \"R\" \"Z\" \"F\" \"K\"" }, { "code": null, "e": 5728, "s": 5707, "text": "> names(x3)[x3==\"A\"]" }, { "code": null, "e": 5742, "s": 5728, "text": "[1] \"C2\" \"C2\"" }, { "code": null, "e": 5763, "s": 5742, "text": "> names(x3)[x3==\"B\"]" }, { "code": null, "e": 5787, "s": 5763, "text": "[1] \"C4\" \"C4\" \"C4\" \"C1\"" }, { "code": null, "e": 5808, "s": 5787, "text": "> names(x3)[x3==\"C\"]" }, { "code": null, "e": 5852, "s": 5808, "text": "[1] \"C1\" \"C3\" \"C1\" \"C3\" \"C2\" \"C1\" \"C1\" \"C3\"" }, { "code": null, "e": 5873, "s": 5852, "text": "> names(x3)[x3==\"D\"]" }, { "code": null, "e": 5892, "s": 5873, "text": "[1] \"C1\" \"C1\" \"C5\"" }, { "code": null, "e": 5913, "s": 5892, "text": "> names(x3)[x3==\"E\"]" }, { "code": null, "e": 5942, "s": 5913, "text": "[1] \"C4\" \"C5\" \"C3\" \"C4\" \"C1\"" }, { "code": null, "e": 5963, "s": 5942, "text": "> names(x3)[x3==\"T\"]" }, { "code": null, "e": 5982, "s": 5963, "text": "[1] \"C3\" \"C1\" \"C5\"" }, { "code": null, "e": 6003, "s": 5982, "text": "> names(x3)[x3==\"U\"]" }, { "code": null, "e": 6022, "s": 6003, "text": "[1] \"C4\" \"C3\" \"C1\"" }, { "code": null, "e": 6043, "s": 6022, "text": "> names(x3)[x3==\"W\"]" }, { "code": null, "e": 6082, "s": 6043, "text": "[1] \"C2\" \"C3\" \"C4\" \"C2\" \"C2\" \"C1\" \"C4\"" }, { "code": null, "e": 6103, "s": 6082, "text": "> names(x3)[x3==\"X\"]" }, { "code": null, "e": 6127, "s": 6103, "text": "[1] \"C1\" \"C2\" \"C1\" \"C1\"" }, { "code": null, "e": 6148, "s": 6127, "text": "> names(x3)[x3==\"Y\"]" }, { "code": null, "e": 6167, "s": 6148, "text": "[1] \"C3\" \"C2\" \"C5\"" }, { "code": null, "e": 6188, "s": 6167, "text": "> names(x3)[x3==\"Z\"]" }, { "code": null, "e": 6202, "s": 6188, "text": "[1] \"C4\" \"C3\"" } ]
How to check if a string contains only decimal characters?
There is a method called isdigit() in the str class that returns True if all characters in the string are digits and there is at least one character, and False otherwise. You can call it as follows:

>>> "12345".isdigit()
True
>>> "12345a".isdigit()
False

But this would fail for floating-point numbers. We can use the following method for those numbers:

def isfloat(value):
   try:
      float(value)
      return True
   except ValueError:
      return False

Calling it gives the expected results:

>>> isfloat('12.345')
True
>>> isfloat('12a')
False

You can also use regexes for the same result. For matching decimals, we can call re.match(regex, string) with the regex "^\d+?\.\d+?$". For example,

>>> import re
>>> bool(re.match("^\d+?\.\d+?$", '123abc'))
False
>>> bool(re.match("^\d+?\.\d+?$", '12.345'))
True
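Another idiom worth knowing, not covered above, is to strip a single decimal point and fall back on isdigit(); it avoids both exception handling and regular expressions. A small sketch follows; the helper name is arbitrary, and note that it does not handle a leading sign.

def is_decimal(value):
   # digits with at most one '.' anywhere in the string
   return value.replace('.', '', 1).isdigit()

>>> is_decimal('12.345')
True
>>> is_decimal('1.2.3')
False
>>> is_decimal('12a')
False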
[ { "code": null, "e": 1256, "s": 1062, "text": "There is a method called isdigit() in String class that returns true if all characters in the string are digits and there is at least one character, false otherwise. You can call it as follows:" }, { "code": null, "e": 1312, "s": 1256, "text": ">>> \"12345\".isdigit()\nTrue\n>>> \"12345a\".isdigit()\nFalse" }, { "code": null, "e": 1411, "s": 1312, "text": "But this would fail for floating-point numbers. We can use the following method for those numbers:" }, { "code": null, "e": 1580, "s": 1411, "text": "def isfloat(value):\n try:\n float(value)\n return True\n except ValueError:\n return False\nisfloat('12.345')\nisfloat('12a')\nThis will give the output:\nTrue\nFalse" }, { "code": null, "e": 1735, "s": 1580, "text": "You can also use regexes for the same result. For matching decimals, we can call the re.match(regex, string) using the regex: \"^\\d+?\\.\\d+?$\". For example," }, { "code": null, "e": 1836, "s": 1735, "text": ">>> bool(re.match(\"^\\d+?\\.\\d+?$\", '123abc'))\nFalse\n>>> bool(re.match(\"^\\d+?\\.\\d+?$\", '12.345'))\nTrue" } ]
MFC - Libraries
A library is a group of functions, classes, or other resources that can be made available to programs that need already implemented entities without the need to know how these functions, classes, or resources were created or how they function. A library makes it easy for a programmer to use functions, classes, and resources etc. created by another person or company and trust that this external source is reliable and efficient. Some unique features related to libraries are − A library is created and functions like a normal regular program, using functions or other resources and communicating with other programs. A library is created and functions like a normal regular program, using functions or other resources and communicating with other programs. To implement its functionality, a library contains functions that other programs would need to complete their functionality. To implement its functionality, a library contains functions that other programs would need to complete their functionality. At the same time, a library may use some functions that other programs would not need. At the same time, a library may use some functions that other programs would not need. The program that uses the library, are also called the clients of the library. The program that uses the library, are also called the clients of the library. There are two types of functions you will create or include in your libraries − An internal function is one used only by the library itself and clients of the library will not need access to these functions. An internal function is one used only by the library itself and clients of the library will not need access to these functions. External functions are those that can be accessed by the clients of the library. External functions are those that can be accessed by the clients of the library. There are two broad categories of libraries you will deal with in your programs − Static libraries Dynamic libraries A static library is a file that contains functions, classes, or resources that an external program can use to complement its functionality. To use a library, the programmer has to create a link to it. The project can be a console application, a Win32 or an MFC application. The library file has the lib extension. Step 1 − Let us look into a simple example of static library by creating a new Win32 Project. Step 2 − On Application Wizard dialog box, choose the Static Library option. Step 3 − Click Finish to continue. Step 4 − Right-click on the project in solution explorer and add a header file from Add → New Item...menu option. Step 5 − Enter Calculator.h in the Name field and click Add. Add the following code in the header file − #pragma once #ifndef _CALCULATOR_H_ #define _CALCULATOR_H_ double Min(const double *Numbers, const int Count); double Max(const double *Numbers, const int Count); double Sum(const double *Numbers, const int Count); double Average(const double *Numbers, const int Count); long GreatestCommonDivisor(long Nbr1, long Nbr2); #endif // _CALCULATOR_H_ Step 6 − Add a source (*.cpp) file in the project. Step 7 − Enter Calculator.cpp in the Name field and click Add. 
Step 8 − Add the following code in the *.cpp file − #include "StdAfx.h" #include "Calculator.h" double Min(const double *Nbr, const int Total) { double Minimum = Nbr[0]; for (int i = 0; i < Total; i++) if (Minimum > Nbr[i]) Minimum = Nbr[i]; return Minimum; } double Max(const double *Nbr, const int Total) { double Maximum = Nbr[0]; for (int i = 0; i < Total; i++) if (Maximum < Nbr[i]) Maximum = Nbr[i]; return Maximum; } double Sum(const double *Nbr, const int Total) { double S = 0; for (int i = 0; i < Total; i++) S += Nbr[i]; return S; } double Average(const double *Nbr, const int Total) { double avg, S = 0; for (int i = 0; i < Total; i++) S += Nbr[i]; avg = S / Total; return avg; } long GreatestCommonDivisor(long Nbr1, long Nbr2) { while (true) { Nbr1 = Nbr1 % Nbr2; if (Nbr1 == 0) return Nbr2; Nbr2 = Nbr2 % Nbr1; if (Nbr2 == 0) return Nbr1; } } Step 9 − Build this library from the main menu, by clicking Build → Build MFCLib. Step 10 − When library is built successfully, it will display the above message. Step 11 − To use these functions from the library, let us add another MFC dialog application based from File → New → Project. Step 12 − Go to the MFCLib\Debug folder and copy the header file and *.lib files to the MFCLibTest project as shown in the following snapshot. Step 13 − To add the library to the current project, on the main menu, click Project → Add Existing Item and select MFCLib.lib. Step 14 − Design your dialog box as shown in the following snapshot. Step 15 − Add value variable for both edit controls of value type double. Step 16 − Add value variable for Static text control, which is at the end of the dialog box. Step 17 − Add the event handler for Calculate button. To add functionality from the library, we need to include the header file in CMFCLibTestDlg.cpp file. #include "stdafx.h" #include "MFCLibTest.h" #include "MFCLibTestDlg.h" #include "afxdialogex.h" #include "Calculator.h" Step 18 − Here is the implementation of button event handler. void CMFCLibTestDlg::OnBnClickedButtonCal() { // TODO: Add your control notification handler code here UpdateData(TRUE); CString strTemp; double numbers[2]; numbers[0] = m_Num1; numbers[1] = m_Num2; strTemp.Format(L"%.2f", Max(numbers,2)); m_strText.Append(L"Max is:\t" + strTemp); strTemp.Format(L"%.2f", Min(numbers, 2)); m_strText.Append(L"\nMin is:\t" + strTemp); strTemp.Format(L"%.2f", Sum(numbers, 2)); m_strText.Append(L"\nSum is:\t" + strTemp); strTemp.Format(L"%.2f", Average(numbers, 2)); m_strText.Append(L"\nAverage is:\t" + strTemp); strTemp.Format(L"%d", GreatestCommonDivisor(m_Num1, m_Num2)); m_strText.Append(L"\nGDC is:\t" + strTemp); UpdateData(FALSE); } Step 19 − When the above code is compiled and executed, you will see the following output. Step 20 − Enter two values in the edit field and click Calculate. You will now see the result after calculating from the library. A Win32 DLL is a library that can be made available to programs that run on a Microsoft Windows computer. As a normal library, it is made of functions and/or other resources grouped in a file. The DLL abbreviation stands for Dynamic Link Library. This means that, as opposed to a static library, a DLL allows the programmer to decide on when and how other applications will be linked to this type of library. For example, a DLL allows difference applications to use its library as they see fit and as necessary. In fact, applications created on different programming environments can use functions or resources stored in one particular DLL. 
For this reason, an application dynamically links to the library. Step 1 − Let us look into a simple example by creating a new Win32 Project. Step 2 − In the Application Type section, click the DLL radio button. Step 3 − Click Finish to continue. Step 4 − Add the following functions in MFCDynamicLib.cpp file and expose its definitions by using − extern "C" _declspec(dllexport) Step 5 − Use the _declspec(dllexport) modifier for each function that will be accessed outside the DLL. // MFCDynamicLib.cpp : Defines the exported functions for the DLL application.// #include "stdafx.h" extern "C" _declspec(dllexport) double Min(const double *Numbers, const int Count); extern "C" _declspec(dllexport) double Max(const double *Numbers, const int Count); extern "C" _declspec(dllexport) double Sum(const double *Numbers, const int Count); extern "C" _declspec(dllexport) double Average(const double *Numbers, const int Count); extern "C" _declspec(dllexport) long GreatestCommonDivisor(long Nbr1, long Nbr2); double Min(const double *Nbr, const int Total) { double Minimum = Nbr[0]; for (int i = 0; i < Total; i++) if (Minimum > Nbr[i]) Minimum = Nbr[i]; return Minimum; } double Max(const double *Nbr, const int Total) { double Maximum = Nbr[0]; for (int i = 0; i < Total; i++) if (Maximum < Nbr[i]) Maximum = Nbr[i]; return Maximum; } double Sum(const double *Nbr, const int Total) { double S = 0; for (int i = 0; i < Total; i++) S += Nbr[i]; return S; } double Average(const double *Nbr, const int Total){ double avg, S = 0; for (int i = 0; i < Total; i++) S += Nbr[i]; avg = S / Total; return avg; } long GreatestCommonDivisor(long Nbr1, long Nbr2) { while (true) { Nbr1 = Nbr1 % Nbr2; if (Nbr1 == 0) return Nbr2; Nbr2 = Nbr2 % Nbr1; if (Nbr2 == 0) return Nbr1; } } Step 6 − To create the DLL, on the main menu, click Build > Build MFCDynamicLib from the main menu. Step 7 − Once the DLL is successfully created, you will see amessage display in output window. Step 8 − Open Windows Explorer and then the Debug folder of the current project. Step 9 − Notice that a file with dll extension and another file with lib extension has been created. Step 10 − To test this file with dll extension, we need to create a new MFC dialog based application from File → New → Project. Step 11 − Go to the MFCDynamicLib\Debug folder and copy the *.dll and *.lib files to the MFCLibTest project as shown in the following snapshot. Step 12 − To add the DLL to the current project, on the main menu, click Project → Add Existing Item and then, select MFCDynamicLib.lib file. Step 13 − Design your dialog box as shown in the following snapshot. Step 14 − Add value variable for both edit controls of value type double. Step 15 − Add value variable for Static text control, which is at the end of the dialog box. Step 16 − Add the event handler for Calculate button. Step 17 − In the project that is using the DLL, each function that will be accessed must be declared using the _declspec(dllimport) modifier. Step 18 − Add the following function declaration in MFCLibTestDlg.cpp file. extern "C" _declspec(dllimport) double Min(const double *Numbers, const int Count); extern "C" _declspec(dllimport) double Max(const double *Numbers, const int Count); extern "C" _declspec(dllimport) double Sum(const double *Numbers, const int Count); extern "C" _declspec(dllimport) double Average(const double *Numbers, const int Count); extern "C" _declspec(dllimport) long GreatestCommonDivisor(long Nbr1, long Nbr2); Step 19 − Here is the implementation of button event handler. 
void CMFCLibTestDlg::OnBnClickedButtonCal() { // TODO: Add your control notification handler code here UpdateData(TRUE); CString strTemp; double numbers[2]; numbers[0] = m_Num1; numbers[1] = m_Num2; strTemp.Format(L"%.2f", Max(numbers,2)); m_strText.Append(L"Max is:\t" + strTemp); strTemp.Format(L"%.2f", Min(numbers, 2)); m_strText.Append(L"\nMin is:\t" + strTemp); strTemp.Format(L"%.2f", Sum(numbers, 2)); m_strText.Append(L"\nSum is:\t" + strTemp); strTemp.Format(L"%.2f", Average(numbers, 2)); m_strText.Append(L"\nAverage is:\t" + strTemp); strTemp.Format(L"%d", GreatestCommonDivisor(m_Num1, m_Num2)); m_strText.Append(L"\nGDC is:\t" + strTemp); UpdateData(FALSE); } Step 20 − When the above code is compiled and executed, you will see the following output. Step 21 − Enter two values in the edit field and click Calculate. You will now see the result after calculating from the DLL. Print Add Notes Bookmark this page
[ { "code": null, "e": 2546, "s": 2067, "text": "A library is a group of functions, classes, or other resources that can be made available to programs that need already implemented entities without the need to know how these functions, classes, or resources were created or how they function. A library makes it easy for a programmer to use functions, classes, and resources etc. created by another person or company and trust that this external source is reliable and efficient. Some unique features related to libraries are −" }, { "code": null, "e": 2686, "s": 2546, "text": "A library is created and functions like a normal regular program, using functions or other resources and communicating with other programs." }, { "code": null, "e": 2826, "s": 2686, "text": "A library is created and functions like a normal regular program, using functions or other resources and communicating with other programs." }, { "code": null, "e": 2951, "s": 2826, "text": "To implement its functionality, a library contains functions that other programs would need to complete their functionality." }, { "code": null, "e": 3076, "s": 2951, "text": "To implement its functionality, a library contains functions that other programs would need to complete their functionality." }, { "code": null, "e": 3163, "s": 3076, "text": "At the same time, a library may use some functions that other programs would not need." }, { "code": null, "e": 3250, "s": 3163, "text": "At the same time, a library may use some functions that other programs would not need." }, { "code": null, "e": 3329, "s": 3250, "text": "The program that uses the library, are also called the clients of the library." }, { "code": null, "e": 3408, "s": 3329, "text": "The program that uses the library, are also called the clients of the library." }, { "code": null, "e": 3488, "s": 3408, "text": "There are two types of functions you will create or include in your libraries −" }, { "code": null, "e": 3616, "s": 3488, "text": "An internal function is one used only by the library itself and clients of the library will not need access to these functions." }, { "code": null, "e": 3744, "s": 3616, "text": "An internal function is one used only by the library itself and clients of the library will not need access to these functions." }, { "code": null, "e": 3825, "s": 3744, "text": "External functions are those that can be accessed by the clients of the library." }, { "code": null, "e": 3906, "s": 3825, "text": "External functions are those that can be accessed by the clients of the library." }, { "code": null, "e": 3988, "s": 3906, "text": "There are two broad categories of libraries you will deal with in your programs −" }, { "code": null, "e": 4005, "s": 3988, "text": "Static libraries" }, { "code": null, "e": 4023, "s": 4005, "text": "Dynamic libraries" }, { "code": null, "e": 4337, "s": 4023, "text": "A static library is a file that contains functions, classes, or resources that an external program can use to complement its functionality. To use a library, the programmer has to create a link to it. The project can be a console application, a Win32 or an MFC application. The library file has the lib extension." }, { "code": null, "e": 4431, "s": 4337, "text": "Step 1 − Let us look into a simple example of static library by creating a new Win32 Project." }, { "code": null, "e": 4508, "s": 4431, "text": "Step 2 − On Application Wizard dialog box, choose the Static Library option." }, { "code": null, "e": 4543, "s": 4508, "text": "Step 3 − Click Finish to continue." 
}, { "code": null, "e": 4657, "s": 4543, "text": "Step 4 − Right-click on the project in solution explorer and add a header file from Add → New Item...menu option." }, { "code": null, "e": 4718, "s": 4657, "text": "Step 5 − Enter Calculator.h in the Name field and click Add." }, { "code": null, "e": 4762, "s": 4718, "text": "Add the following code in the header file −" }, { "code": null, "e": 5109, "s": 4762, "text": "#pragma once\n#ifndef _CALCULATOR_H_\n#define _CALCULATOR_H_\ndouble Min(const double *Numbers, const int Count);\ndouble Max(const double *Numbers, const int Count);\ndouble Sum(const double *Numbers, const int Count);\ndouble Average(const double *Numbers, const int Count);\nlong GreatestCommonDivisor(long Nbr1, long Nbr2);\n#endif // _CALCULATOR_H_\n" }, { "code": null, "e": 5160, "s": 5109, "text": "Step 6 − Add a source (*.cpp) file in the project." }, { "code": null, "e": 5223, "s": 5160, "text": "Step 7 − Enter Calculator.cpp in the Name field and click Add." }, { "code": null, "e": 5275, "s": 5223, "text": "Step 8 − Add the following code in the *.cpp file −" }, { "code": null, "e": 6211, "s": 5275, "text": "#include \"StdAfx.h\"\n#include \"Calculator.h\"\ndouble Min(const double *Nbr, const int Total) {\n double Minimum = Nbr[0];\n for (int i = 0; i < Total; i++)\n if (Minimum > Nbr[i])\n Minimum = Nbr[i];\n return Minimum;\n}\ndouble Max(const double *Nbr, const int Total) {\n double Maximum = Nbr[0];\n for (int i = 0; i < Total; i++)\n if (Maximum < Nbr[i])\n Maximum = Nbr[i];\n return Maximum;\n}\ndouble Sum(const double *Nbr, const int Total) {\n double S = 0;\n for (int i = 0; i < Total; i++)\n S += Nbr[i];\n return S;\n}\ndouble Average(const double *Nbr, const int Total) {\n double avg, S = 0;\n for (int i = 0; i < Total; i++)\n S += Nbr[i];\n avg = S / Total;\n return avg;\n}\nlong GreatestCommonDivisor(long Nbr1, long Nbr2) {\n while (true) {\n Nbr1 = Nbr1 % Nbr2;\n if (Nbr1 == 0)\n return Nbr2;\n Nbr2 = Nbr2 % Nbr1;\n if (Nbr2 == 0)\n return Nbr1;\n }\n}" }, { "code": null, "e": 6293, "s": 6211, "text": "Step 9 − Build this library from the main menu, by clicking Build → Build MFCLib." }, { "code": null, "e": 6374, "s": 6293, "text": "Step 10 − When library is built successfully, it will display the above message." }, { "code": null, "e": 6500, "s": 6374, "text": "Step 11 − To use these functions from the library, let us add another MFC dialog application based from File → New → Project." }, { "code": null, "e": 6643, "s": 6500, "text": "Step 12 − Go to the MFCLib\\Debug folder and copy the header file and *.lib files to the MFCLibTest project as shown in the following snapshot." }, { "code": null, "e": 6771, "s": 6643, "text": "Step 13 − To add the library to the current project, on the main menu, click Project → Add Existing Item and select MFCLib.lib." }, { "code": null, "e": 6840, "s": 6771, "text": "Step 14 − Design your dialog box as shown in the following snapshot." }, { "code": null, "e": 6914, "s": 6840, "text": "Step 15 − Add value variable for both edit controls of value type double." }, { "code": null, "e": 7007, "s": 6914, "text": "Step 16 − Add value variable for Static text control, which is at the end of the dialog box." }, { "code": null, "e": 7061, "s": 7007, "text": "Step 17 − Add the event handler for Calculate button." }, { "code": null, "e": 7163, "s": 7061, "text": "To add functionality from the library, we need to include the header file in CMFCLibTestDlg.cpp file." 
}, { "code": null, "e": 7284, "s": 7163, "text": "#include \"stdafx.h\"\n#include \"MFCLibTest.h\"\n#include \"MFCLibTestDlg.h\"\n#include \"afxdialogex.h\"\n#include \"Calculator.h\"\n" }, { "code": null, "e": 7346, "s": 7284, "text": "Step 18 − Here is the implementation of button event handler." }, { "code": null, "e": 8081, "s": 7346, "text": "void CMFCLibTestDlg::OnBnClickedButtonCal() {\n // TODO: Add your control notification handler code here\n UpdateData(TRUE);\n CString strTemp;\n double numbers[2];\n numbers[0] = m_Num1;\n numbers[1] = m_Num2;\n\n strTemp.Format(L\"%.2f\", Max(numbers,2));\n m_strText.Append(L\"Max is:\\t\" + strTemp);\n\n strTemp.Format(L\"%.2f\", Min(numbers, 2));\n m_strText.Append(L\"\\nMin is:\\t\" + strTemp);\n \n strTemp.Format(L\"%.2f\", Sum(numbers, 2));\n m_strText.Append(L\"\\nSum is:\\t\" + strTemp);\n\n strTemp.Format(L\"%.2f\", Average(numbers, 2));\n m_strText.Append(L\"\\nAverage is:\\t\" + strTemp);\n\n strTemp.Format(L\"%d\", GreatestCommonDivisor(m_Num1, m_Num2));\n m_strText.Append(L\"\\nGDC is:\\t\" + strTemp);\n\n UpdateData(FALSE);\n}" }, { "code": null, "e": 8172, "s": 8081, "text": "Step 19 − When the above code is compiled and executed, you will see the following output." }, { "code": null, "e": 8302, "s": 8172, "text": "Step 20 − Enter two values in the edit field and click Calculate. You will now see the result after calculating from the library." }, { "code": null, "e": 8495, "s": 8302, "text": "A Win32 DLL is a library that can be made available to programs that run on a Microsoft Windows computer. As a normal library, it is made of functions and/or other resources grouped in a file." }, { "code": null, "e": 8711, "s": 8495, "text": "The DLL abbreviation stands for Dynamic Link Library. This means that, as opposed to a static library, a DLL allows the programmer to decide on when and how other applications will be linked to this type of library." }, { "code": null, "e": 9009, "s": 8711, "text": "For example, a DLL allows difference applications to use its library as they see fit and as necessary. In fact, applications created on different programming environments can use functions or resources stored in one particular DLL. For this reason, an application dynamically links to the library." }, { "code": null, "e": 9085, "s": 9009, "text": "Step 1 − Let us look into a simple example by creating a new Win32 Project." }, { "code": null, "e": 9155, "s": 9085, "text": "Step 2 − In the Application Type section, click the DLL radio button." }, { "code": null, "e": 9190, "s": 9155, "text": "Step 3 − Click Finish to continue." }, { "code": null, "e": 9291, "s": 9190, "text": "Step 4 − Add the following functions in MFCDynamicLib.cpp file and expose its definitions by using −" }, { "code": null, "e": 9324, "s": 9291, "text": "extern \"C\" _declspec(dllexport)\n" }, { "code": null, "e": 9428, "s": 9324, "text": "Step 5 − Use the _declspec(dllexport) modifier for each function that will be accessed outside the DLL." 
}, { "code": null, "e": 10844, "s": 9428, "text": "// MFCDynamicLib.cpp : Defines the exported functions for the DLL application.//\n\n#include \"stdafx.h\"\n\nextern \"C\" _declspec(dllexport) double Min(const double *Numbers, const int Count);\nextern \"C\" _declspec(dllexport) double Max(const double *Numbers, const int Count);\nextern \"C\" _declspec(dllexport) double Sum(const double *Numbers, const int Count);\nextern \"C\" _declspec(dllexport) double Average(const double *Numbers, const int Count);\nextern \"C\" _declspec(dllexport) long GreatestCommonDivisor(long Nbr1, long Nbr2);\n\ndouble Min(const double *Nbr, const int Total) {\n double Minimum = Nbr[0];\n for (int i = 0; i < Total; i++)\n if (Minimum > Nbr[i])\n Minimum = Nbr[i];\n return Minimum;\n}\ndouble Max(const double *Nbr, const int Total) {\n double Maximum = Nbr[0];\n for (int i = 0; i < Total; i++)\n if (Maximum < Nbr[i])\n Maximum = Nbr[i];\n return Maximum;\n}\ndouble Sum(const double *Nbr, const int Total) {\n double S = 0;\n for (int i = 0; i < Total; i++)\n S += Nbr[i];\n return S;\n}\ndouble Average(const double *Nbr, const int Total){\n double avg, S = 0;\n for (int i = 0; i < Total; i++)\n S += Nbr[i];\n avg = S / Total;\n return avg;\n}\nlong GreatestCommonDivisor(long Nbr1, long Nbr2) {\n while (true) {\n Nbr1 = Nbr1 % Nbr2;\n if (Nbr1 == 0)\n return Nbr2;\n Nbr2 = Nbr2 % Nbr1;\n if (Nbr2 == 0)\n return Nbr1;\n }\n}" }, { "code": null, "e": 10944, "s": 10844, "text": "Step 6 − To create the DLL, on the main menu, click Build > Build MFCDynamicLib from the main menu." }, { "code": null, "e": 11039, "s": 10944, "text": "Step 7 − Once the DLL is successfully created, you will see amessage display in output window." }, { "code": null, "e": 11120, "s": 11039, "text": "Step 8 − Open Windows Explorer and then the Debug folder of the current project." }, { "code": null, "e": 11221, "s": 11120, "text": "Step 9 − Notice that a file with dll extension and another file with lib extension has been created." }, { "code": null, "e": 11349, "s": 11221, "text": "Step 10 − To test this file with dll extension, we need to create a new MFC dialog based application from File → New → Project." }, { "code": null, "e": 11493, "s": 11349, "text": "Step 11 − Go to the MFCDynamicLib\\Debug folder and copy the *.dll and *.lib files to the MFCLibTest project as shown in the following snapshot." }, { "code": null, "e": 11635, "s": 11493, "text": "Step 12 − To add the DLL to the current project, on the main menu, click Project → Add Existing Item and then, select MFCDynamicLib.lib file." }, { "code": null, "e": 11704, "s": 11635, "text": "Step 13 − Design your dialog box as shown in the following snapshot." }, { "code": null, "e": 11778, "s": 11704, "text": "Step 14 − Add value variable for both edit controls of value type double." }, { "code": null, "e": 11871, "s": 11778, "text": "Step 15 − Add value variable for Static text control, which is at the end of the dialog box." }, { "code": null, "e": 11925, "s": 11871, "text": "Step 16 − Add the event handler for Calculate button." }, { "code": null, "e": 12067, "s": 11925, "text": "Step 17 − In the project that is using the DLL, each function that will be accessed must be declared using the _declspec(dllimport) modifier." }, { "code": null, "e": 12143, "s": 12067, "text": "Step 18 − Add the following function declaration in MFCLibTestDlg.cpp file." 
}, { "code": null, "e": 12566, "s": 12143, "text": "extern \"C\" _declspec(dllimport) double Min(const double *Numbers, const int Count);\nextern \"C\" _declspec(dllimport) double Max(const double *Numbers, const int Count);\nextern \"C\" _declspec(dllimport) double Sum(const double *Numbers, const int Count);\nextern \"C\" _declspec(dllimport) double Average(const double *Numbers, const int Count);\nextern \"C\" _declspec(dllimport) long GreatestCommonDivisor(long Nbr1, long Nbr2);\n" }, { "code": null, "e": 12628, "s": 12566, "text": "Step 19 − Here is the implementation of button event handler." }, { "code": null, "e": 13363, "s": 12628, "text": "void CMFCLibTestDlg::OnBnClickedButtonCal() {\n\n // TODO: Add your control notification handler code here\n UpdateData(TRUE);\n\n CString strTemp;\n double numbers[2];\n numbers[0] = m_Num1;\n numbers[1] = m_Num2;\n\n strTemp.Format(L\"%.2f\", Max(numbers,2));\n m_strText.Append(L\"Max is:\\t\" + strTemp);\n\n strTemp.Format(L\"%.2f\", Min(numbers, 2));\n m_strText.Append(L\"\\nMin is:\\t\" + strTemp);\n\n strTemp.Format(L\"%.2f\", Sum(numbers, 2));\n m_strText.Append(L\"\\nSum is:\\t\" + strTemp);\n\n strTemp.Format(L\"%.2f\", Average(numbers, 2));\n m_strText.Append(L\"\\nAverage is:\\t\" + strTemp);\n\n strTemp.Format(L\"%d\", GreatestCommonDivisor(m_Num1, m_Num2));\n m_strText.Append(L\"\\nGDC is:\\t\" + strTemp);\n \n UpdateData(FALSE);\n}" }, { "code": null, "e": 13454, "s": 13363, "text": "Step 20 − When the above code is compiled and executed, you will see the following output." }, { "code": null, "e": 13580, "s": 13454, "text": "Step 21 − Enter two values in the edit field and click Calculate. You will now see the result after calculating from the DLL." }, { "code": null, "e": 13587, "s": 13580, "text": " Print" }, { "code": null, "e": 13598, "s": 13587, "text": " Add Notes" } ]
How to find out all the indexes for a DB2 table TAB1?
To find out all the indexes built on the DB2 table TAB1, we can use the DB2 catalog table SYSIBM.SYSINDEXES. The SYSINDEXES table has one row for every index present in DB2. We can find the indexes built on a particular table using the below SQL query.

SELECT NAME, UNIQUERULE, CLUSTERING
   FROM SYSIBM.SYSINDEXES WHERE TBNAME='TAB1'

The column UNIQUERULE in the SELECT statement returns 'P' for a primary index and 'U' for an alternate (unique) index. The CLUSTERING column will be returned as 'YES' for a clustered index and 'NO' for a non-clustered index.
[ { "code": null, "e": 1312, "s": 1062, "text": "To find out all the indexes built on the DB2 table TAB1 we can use the DB2 system table SYSIBM.SYSINDEXES. The SYSINDEXES database has one row for every index present in DB2. We can find indexes built on a particular table using the below SQL query." }, { "code": null, "e": 1394, "s": 1312, "text": "SELECT NAME, UNIQUERULE, CLUSTERING\n FROM SYSIBM.SYSINDEXES WHERE TBNAME=’TAB1’" }, { "code": null, "e": 1601, "s": 1394, "text": "The column UNIQUERULE in the SELECT statement returns ‘P’ for primary index and ‘U’ for alternate index. The CLUSTERING column will be returned as ‘YES’ for clustered index and ‘NO’ for non-clustered index." } ]
Fit a Linear Regression Model with Gradient Descent from Scratch | by GreekDataGuy | Towards Data Science
We all know sklearn can fit models for us. But do we know what it's actually doing when we call .fit()? Keep reading to find out.

Today we'll write a set of functions which implement gradient descent to fit a linear regression model. Then we'll compare our model's weights to the weights from a fitted sklearn model.

Fitting = finding a model's bias and coefficient(s) that minimize error.

Error = Mean Squared Error (MSE). The mean of the squared differences between actual and predicted values, across a dataset.

Simple Linear Regression = A model based on the equation of a line, "y=mx+b". It takes a single feature as input, applies a bias and a coefficient, and predicts y.

Also, coefficient and bias together will sometimes be referred to as just, "weights".

Download the California Housing Dataset from Kaggle, and load it into a dataframe.

import pandas as pd
df = pd.read_csv('california-housing-dataset.csv')
df.head()

For this dataset, we typically try to predict median_house_value using all the other features. But in our case, we're concerned with fitting a simple linear regression (which only takes a single input feature) so we'll choose median_income as that feature and ignore the rest. This won't create the best model possible, but it will make implementing gradient descent simpler.

Count examples in the dataset.

len(df)
#=> 20640

That's a lot of data. Let's reduce the size by 75%.

df = df.sample(frac=0.25)
len(df)
#=> 5160

That's more manageable. Collect our input features and labels.

X = df['median_income'].tolist()
y = df['median_house_value'].tolist()

According to Wikipedia,

Gradient descent is a first-order iterative optimization algorithm for finding the local minimum of a function. To find a local minimum of a function using gradient descent, one takes steps proportional to the negative of the gradient (or approximate gradient) of the function at the current point.

My translation: Gradient descent uses the gradient of the error function to predict in which direction the coefficient, m, and bias, b, should be updated to reduce error on a given dataset.

The gradient of each weight, m and b, is calculated separately. A gradient is calculated by averaging a weight's partial derivative across all examples. The direction and steepness of the gradient determine in which direction weights are updated, and by how much. The latter is also influenced by a hyper-parameter, the learning rate.

This algorithm repeatedly iterates over the training set and updates weights until the cost function is at its minimum.

You can find the partial derivatives of the MSE function (as below) all over the internet, so we won't derive them here. We'll implement them with the following, while iterating across the dataset.

m_gradient += -(2/N) * x * (y - y_hat)
b_gradient += -(2/N) * (y - y_hat)

Let's write a function which takes current weights, features, labels and learning rate, and outputs updated weights.

def bias_coef_update(m, b, X, Y, learning_rate):
    m_gradient = 0
    b_gradient = 0
    N = len(Y)
    # iterate over examples
    for idx in range(len(Y)):
        x = X[idx]
        y = Y[idx]
        # predict y with current bias and coefficient
        y_hat = (m * x) + b
        m_gradient += -(2/N) * x * (y - y_hat)
        b_gradient += -(2/N) * (y - y_hat)
    # use gradient with learning_rate to nudge bias and coefficient
    new_coef = m - (m_gradient * learning_rate)
    new_bias = b - (b_gradient * learning_rate)
    return new_coef, new_bias

The latter part of this function multiplies the gradient by the learning rate and uses the result to update current weights.
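Written out, the partial derivatives referred to above (standard results, matching the m_gradient and b_gradient accumulations in the code) are, with N examples and predictions y_hat = m*x + b:

E(m, b) = (1/N) * Σ_i (y_i - (m*x_i + b))^2
∂E/∂m = -(2/N) * Σ_i x_i * (y_i - (m*x_i + b))
∂E/∂b = -(2/N) * Σ_i (y_i - (m*x_i + b))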
The higher the learning rate the faster the model fits, at the cost of finding the exact local minimum (note: it will never actually reach the true minimum).

Write another function which iteratively applies the above function for a set number of epochs. This is a less sophisticated approach (for simplicity) than returning fitted weights at some predetermined gradient steepness. (It also relies on a cost() helper that is not defined in this excerpt; a sketch of one is given further below.)

def run(epoch_count=1000):
    # store output to plot later
    epochs = []
    costs = []
    m = 0
    b = 0
    learning_rate = 0.01
    for i in range(epoch_count):
        m, b = bias_coef_update(m, b, X, y, learning_rate)
        print(m, b)
        C = cost(b, m, x_y_pairs)
        epochs.append(i)
        costs.append(C)
    return epochs, costs, m, b

epochs, costs, m, b = run()

Let's output the final cost and weights of the fitted model.

print(m)
print(b)
print(costs[-1])
# I've rounded these myself so they're nicer to look at
#=> 46,804
#=> 19,963
#=> 7,261,908,362

And chart how it improved over epochs.

import matplotlib.pyplot as plt
plt.xlabel('Epoch')
plt.ylabel('Error')
plt.suptitle('Cost by epoch')
plt.plot(epochs, costs, linewidth=1)

Cool. We can see that it made most of its progress within the first 100 epochs.

Now we'll check the same values in a model fitted with sklearn. Reshape features.

import numpy as np
X_array = np.array(X).reshape(5160,1)

Fit a model.
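As promised, here is a minimal sketch of the cost() helper that run() expects. This is an assumption on the editor's part: the original excerpt never defines cost() or x_y_pairs, so x_y_pairs is taken here to be the list of (x, y) training pairs, and any equivalent MSE implementation would do just as well.

x_y_pairs = list(zip(X, y))

def cost(b, m, x_y_pairs):
    # mean squared error of y_hat = m*x + b over the (x, y) pairs
    total_error = 0.0
    for x, y in x_y_pairs:
        y_hat = (m * x) + b
        total_error += (y - y_hat) ** 2
    return total_error / len(x_y_pairs)

With such a helper in place, the run() loop is self-contained and its costs list tracks the MSE defined earlier.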
[ { "code": null, "e": 302, "s": 172, "text": "We all know sklearn can fit models for us. But do we know what it’s actually doing when we call .fit(). Keep reading to find out." }, { "code": null, "e": 489, "s": 302, "text": "Today we’ll write a set of functions which implement gradient descent to fit a linear regression model. Then we’ll compare our model’s weights to the weights from a fitted sklearn model." }, { "code": null, "e": 562, "s": 489, "text": "Fitting = finding a model’s bias and coefficient(s) that minimize error." }, { "code": null, "e": 687, "s": 562, "text": "Error = Mean Squared Error (MSE). The mean of the squared differences between actual and predicted values, across a dataset." }, { "code": null, "e": 851, "s": 687, "text": "Simple Linear Regression = A model based on the equation of a line, “y=mx+b”. It takes a single feature as input, applies and bias and coefficient, and predicts y." }, { "code": null, "e": 937, "s": 851, "text": "Also, coefficient and bias together will sometimes be referred to as just, “weights”." }, { "code": null, "e": 1020, "s": 937, "text": "Download the California Housing Dataset from kaggle, and load it into a dataframe." }, { "code": null, "e": 1099, "s": 1020, "text": "import pandas as pddf = pd.read_csv('california-housing-dataset.csv')df.head()" }, { "code": null, "e": 1194, "s": 1099, "text": "For this dataset, we typically try to predict median_house_value using all the other features." }, { "code": null, "e": 1475, "s": 1194, "text": "But in our case, we’re concerned with fitting a simple linear regression (which only takes a single input feature) so we’ll choose median_income as that feature and ignore the rest. This won’t create the best model possible, but it will make implementing gradient descent simpler." }, { "code": null, "e": 1506, "s": 1475, "text": "Count examples in the dataset." }, { "code": null, "e": 1523, "s": 1506, "text": "len(df)#=> 20640" }, { "code": null, "e": 1575, "s": 1523, "text": "That’s a lot of data. Let’s reduce the size by 75%." }, { "code": null, "e": 1616, "s": 1575, "text": "df = df.sample(frac=0.25)len(df)#=> 5160" }, { "code": null, "e": 1640, "s": 1616, "text": "That’s more manageable." }, { "code": null, "e": 1679, "s": 1640, "text": "Collect our input features and labels." }, { "code": null, "e": 1749, "s": 1679, "text": "X = df['median_income'].tolist()y = df['median_house_value'].tolist()" }, { "code": null, "e": 1773, "s": 1749, "text": "According to wikipedia," }, { "code": null, "e": 2072, "s": 1773, "text": "Gradient descent is a first-order iterative optimization algorithm for finding the local minimum of a function. To find a local minimum of a function using gradient descent, one takes steps proportional to the negative of the gradient (or approximate gradient) of the function at the current point." }, { "code": null, "e": 2088, "s": 2072, "text": "My translation:" }, { "code": null, "e": 2262, "s": 2088, "text": "Gradient descent uses the gradient of the error function to predict in which direction the coefficient, m, and bias, b, should be updated to reduce error on a given dataset." }, { "code": null, "e": 2416, "s": 2262, "text": "The gradient of each weight, m and b, are calculated separately. A gradient is calculated by averaging a weight’s partial derivative across all examples." }, { "code": null, "e": 2599, "s": 2416, "text": "The direction and steepness of the gradient determines in which direction weights are updated, and by how much. 
The latter is also influenced by a hyper-parameter, the learning rate." }, { "code": null, "e": 2720, "s": 2599, "text": "This algorithm repeatedly iterates over the training set and updates weights until the cost function is at it’s minimum." }, { "code": null, "e": 2839, "s": 2720, "text": "You can find the partial derivatives of the MSE function (as below), all over the internet so we won’t derive it here." }, { "code": null, "e": 2921, "s": 2839, "text": "We’ll implement the above with the following, while iterating across the dataset." }, { "code": null, "e": 2994, "s": 2921, "text": "m_gradient += -(2/N) * x * (y - y_hat)b_gradient += -(2/N) * (y - y_hat)" }, { "code": null, "e": 3111, "s": 2994, "text": "Let’s write a function which takes current weights, features, labels and learning rate, and outputs updated weights." }, { "code": null, "e": 3702, "s": 3111, "text": "def bias_coef_update(m, b, X, Y, learning_rate): m_gradient = 0 b_gradient = 0 N = len(Y) # iterate over examples for idx in range(len(Y)): x = X[idx] y = Y[idx] # predict y with current bias and coefficient y_hat = (m * x) + b m_gradient += -(2/N) * x * (y - y_hat) b_gradient += -(2/N) * (y - y_hat) # use gradient with learning_rate to nudge bias and coefficient new_coef = m - (m_gradient * learning_rate) new_bias = b - (b_gradient * learning_rate) return new_coef, new_bias" }, { "code": null, "e": 3985, "s": 3702, "text": "The latter part of this function multiplies the gradient by the learning rate and uses the result to update current weights. The higher the learning rate the faster the model fits, at the cost of finding the exact local minimum (note: it will never actually reach the true minimum)." }, { "code": null, "e": 4208, "s": 3985, "text": "Write another function which iteratively applies the above function for a set number of epochs. This is a less sophisticated approach (for simplicity) than returning fitted weights at some predetermined gradient steepness." }, { "code": null, "e": 4613, "s": 4208, "text": "def run(epoch_count=1000): # store output to plot later epochs = [] costs = [] m = 0 b = 0 learning_rate = 0.01 for i in range(epoch_count): m, b = bias_coef_update(m, b, X, y, learning_rate) print(m,b) C = cost(b, m, x_y_pairs) epochs.append(i) costs.append(C) return epochs, costs, m, bepochs, costs, m, b = run()" }, { "code": null, "e": 4674, "s": 4613, "text": "Let’s output the final cost and weights of the fitted model." }, { "code": null, "e": 4799, "s": 4674, "text": "print(m)print(b)print(costs[-1])# I've rounded these myself so they're nicer to look at#=> 46,804#=> 19,963#=> 7,261,908,362" }, { "code": null, "e": 4838, "s": 4799, "text": "And chart how it improved over epochs." }, { "code": null, "e": 4972, "s": 4838, "text": "import matplotlib.pyplot as pltplt.xlabel('Epoch')plt.ylabel('Error')plt.suptitle('Cost by epoch')plt.plot(epochs,costs, linewidth=1)" }, { "code": null, "e": 5053, "s": 4972, "text": "Cool. We can see that it made most of it’s progress within the first 100 epochs." }, { "code": null, "e": 5117, "s": 5053, "text": "Now we’ll check the same values in a model fitted with sklearn." }, { "code": null, "e": 5135, "s": 5117, "text": "Reshape features." }, { "code": null, "e": 5191, "s": 5135, "text": "import numpy as npX_array = np.array(X).reshape(5160,1)" }, { "code": null, "e": 5204, "s": 5191, "text": "Fit a model." 
}, { "code": null, "e": 5300, "s": 5204, "text": "from sklearn.linear_model import LinearRegressionmodel = LinearRegression()model.fit(X_array,y)" }, { "code": null, "e": 5325, "s": 5300, "text": "Check weights and error." }, { "code": null, "e": 5573, "s": 5325, "text": "from sklearn.metrics import mean_squared_errorm = model.coef_[0]b = model.intercept_mse = mean_squared_error(y_test, y_pred, sample_weight=None, multioutput='uniform_average')print(m)print(b)print(mse)# rounded#=> 42,324#=> 41,356#=> 7,134,555,443" }, { "code": null, "e": 5683, "s": 5573, "text": "Sklearn’s tuning outperformed ours by a small margin, 7,134,555,443 VS 7,261,908,362 but we got pretty close." }, { "code": null, "e": 5768, "s": 5683, "text": "Our bias is also quite different from the bias that sklearn found, 41,356 VS 19,963." } ]
C# Program to access first element in a Dictionary
The following is our Dictionary with some elements −

Dictionary<int, string> d = new Dictionary<int, string>() {
   {1,"Electronics"},
   {2, "Clothing"},
   {3,"Toys"},
   {4,"Footwear"},
   {5, "Accessories"}
};

Now, to display the first element, access it using its key like this.

d[1];

The above displays the first element.

using System;
using System.Collections.Generic;
public class Program {
   public static void Main() {
      Dictionary<int, string> d = new Dictionary<int, string>() {
         {1,"Electronics"},
         {2, "Clothing"},
         {3,"Toys"},
         {4,"Footwear"},
         {5, "Accessories"}
      };
      foreach (KeyValuePair<int, string> ele in d) {
         Console.WriteLine("Key = {0}, Value = {1}", ele.Key, ele.Value);
      }
      Console.WriteLine("First element: "+d[1]);
   }
}

Key = 1, Value = Electronics
Key = 2, Value = Clothing
Key = 3, Value = Toys
Key = 4, Value = Footwear
Key = 5, Value = Accessories
First element: Electronics
[ { "code": null, "e": 1115, "s": 1062, "text": "The following is our Dictionary with some elements −" }, { "code": null, "e": 1276, "s": 1115, "text": "Dictionary<int, string> d = new Dictionary<int, string>() {\n {1,\"Electronics\"},\n {2, \"Clothing\"},\n {3,\"Toys\"},\n {4,\"Footwear\"},\n {5, \"Accessories\"}\n};" }, { "code": null, "e": 1333, "s": 1276, "text": "Now to display the first element, set the key like this." }, { "code": null, "e": 1339, "s": 1333, "text": "d[1];" }, { "code": null, "e": 1377, "s": 1339, "text": "The above displays the first element." }, { "code": null, "e": 1388, "s": 1377, "text": " Live Demo" }, { "code": null, "e": 1884, "s": 1388, "text": "using System;\nusing System.Collections.Generic;\npublic class Program {\n public static void Main() {\n Dictionary<int, string> d = new Dictionary<int, string>() {\n {1,\"Electronics\"},\n {2, \"Clothing\"},\n {3,\"Toys\"},\n {4,\"Footwear\"},\n {5, \"Accessories\"}\n };\n foreach (KeyValuePair<int, string> ele in d) {\n Console.WriteLine(\"Key = {0}, Value = {1}\", ele.Key, ele.Value);\n }\n Console.WriteLine(\"First element: \"+d[1]);\n }\n}" }, { "code": null, "e": 2043, "s": 1884, "text": "Key = 1, Value = Electronics\nKey = 2, Value = Clothing\nKey = 3, Value = Toys\nKey = 4, Value = Footwear\nKey = 5, Value = Accessories\nFirst element: Electronics" } ]
What is multithreading in C#?
In C#, the System.Threading.Thread class is used for working with threads. It allows creating and accessing individual threads in a multithreaded application. The first thread to be executed in a process is called the main thread.

When a C# program starts execution, the main thread is automatically created. The threads created using the Thread class are called the child threads of the main thread.

The following is an example showing how to create a thread in C# −

using System;
using System.Threading;

namespace Demo {
   class Program {
      static void Main(string[] args) {
         Thread th = Thread.CurrentThread;
         th.Name = "MainThread";
         Console.WriteLine("This is {0}", th.Name);
         Console.ReadKey();
      }
   }
}

Here is another example showing how to manage threads in C# −

using System;
using System.Threading;

namespace MultithreadingApplication {
   class ThreadCreationProgram {
      public static void CallToChildThread() {
         Console.WriteLine("Child thread starts");
         // the thread is paused for 5000 milliseconds
         int sleepfor = 5000;
         Console.WriteLine("Child Thread Paused for {0} seconds", sleepfor / 1000);
         Thread.Sleep(sleepfor);
         Console.WriteLine("Child thread resumes");
      }

      static void Main(string[] args) {
         ThreadStart childref = new ThreadStart(CallToChildThread);
         Console.WriteLine("In Main: Creating the Child thread");

         Thread childThread = new Thread(childref);
         childThread.Start();
         Console.ReadKey();
      }
   }
}

In Main: Creating the Child thread
Child thread starts
Child Thread Paused for 5 seconds
Child thread resumes
[ { "code": null, "e": 1293, "s": 1062, "text": "In C#, the System.Threading.Thread class is used for working with threads. It allows creating and accessing individual threads in a multithreaded application. The first thread to be executed in a process is called the main thread." }, { "code": null, "e": 1463, "s": 1293, "text": "When a C# program starts execution, the main thread is automatically created. The threads created using the Thread class are called the child threads of the main thread." }, { "code": null, "e": 1530, "s": 1463, "text": "The following is an example showing how to create a thread in C# −" }, { "code": null, "e": 1816, "s": 1530, "text": "using System;\nusing System.Threading;\n\nnamespace Demo {\n class Program {\n static void Main(string[] args) {\n Thread th = Thread.CurrentThread;\n th.Name = \"MainThread\";\n Console.WriteLine(\"This is {0}\", th.Name);\n Console.ReadKey();\n }\n }\n}" }, { "code": null, "e": 1878, "s": 1816, "text": "Here is another example showing how to manage threads in C# −" }, { "code": null, "e": 1889, "s": 1878, "text": " Live Demo" }, { "code": null, "e": 2660, "s": 1889, "text": "using System;\nusing System.Threading;\n\nnamespace MultithreadingApplication {\n class ThreadCreationProgram {\n public static void CallToChildThread() {\n Console.WriteLine(\"Child thread starts\");\n // the thread is paused for 5000 milliseconds\n int sleepfor = 5000;\n Console.WriteLine(\"Child Thread Paused for {0} seconds\", sleepfor / 1000);\n Thread.Sleep(sleepfor);\n Console.WriteLine(\"Child thread resumes\");\n }\n\n static void Main(string[] args) {\n ThreadStart childref = new ThreadStart(CallToChildThread);\n Console.WriteLine(\"In Main: Creating the Child thread\");\n\n Thread childThread = new Thread(childref);\n childThread.Start();\n Console.ReadKey();\n }\n }\n}" }, { "code": null, "e": 2770, "s": 2660, "text": "In Main: Creating the Child thread\nChild thread starts\nChild Thread Paused for 5 seconds\nChild thread resumes" } ]
Count elements smaller than or equal to x in a sorted matrix in C++
We are given a matrix of size n x n and an integer variable x; the elements in the matrix are placed in sorted order, and the task is to calculate the count of those elements that are less than or equal to x.

Input −

matrix[3][3] = {{1, 2, 3}, {4, 5, 6}, {6, 7, 8}} and X = 4

Output −

count is 4

Explanation − we have to match our matrix data with the value x, so the elements less than or equal to x (i.e. 4) are 1, 2, 3, 4. So the count is 4.

Input −

matrix[3][3] = {{1, 2, 3}, {4, 5, 6}, {6, 7, 8}} and X = 0

Output −

count is 0

Explanation − we have to match our matrix data with the value x, so there is no element that is less than or equal to x. So the count is 0.

Input the size of the matrix and then create the matrix of size nxn

Start a loop, i, from 0 to row size

Inside loop i, start another loop, j, from 0 to column size

Now, check if matrix[i][j] <= x; IF yes, then increase the count by 1, ELSE ignore the element

Return the total count

Print the result.

#include <bits/stdc++.h>
using namespace std;
#define size 3
//function to count the total elements
int count(int matrix[size][size], int x){
   int count=0;
   //traversing the matrix row-wise
   for(int i = 0 ;i<size; i++){
      for (int j = 0; j<size ; j++){
         //check if value of matrix is less than or
         //equals to the x
         if(matrix[i][j]<= x){
            count++;
         }
      }
   }
   return count;
}
int main(){
   int matrix[size][size] ={
      {1, 2, 3},
      {4, 5, 6},
      {7, 8, 9}
   };
   int x = 5;
   cout<<"Count of elements smaller than or equal to x in a sorted matrix is: "<<count(matrix,x);
   return 0;
}

If we run the above code we will get the following output −

Count of elements smaller than or equal to x in a sorted matrix is: 5
[ { "code": null, "e": 1275, "s": 1062, "text": "We are given a matrix of size n x n, an integer variable x, and also, the elements in a matrix are placed in sorted order and the task is to calculate the count of those elements that are equal to or less than x." }, { "code": null, "e": 1283, "s": 1275, "text": "Input −" }, { "code": null, "e": 1342, "s": 1283, "text": "matrix[3][3] = {{1, 2, 3}, {4, 5, 6}, {6, 7, 8}} and X = 4" }, { "code": null, "e": 1351, "s": 1342, "text": "Output −" }, { "code": null, "e": 1362, "s": 1351, "text": "count is 4" }, { "code": null, "e": 1510, "s": 1362, "text": "Explanation − we have to match our matrix data with the value x, so the elements less than or equals to x i.e. 4 are 1, 2, 3, 4. So the count is 4." }, { "code": null, "e": 1518, "s": 1510, "text": "Input −" }, { "code": null, "e": 1577, "s": 1518, "text": "matrix[3][3] = {{1, 2, 3}, {4, 5, 6}, {6, 7, 8}} and X = 0" }, { "code": null, "e": 1586, "s": 1577, "text": "Output −" }, { "code": null, "e": 1597, "s": 1586, "text": "count is 0" }, { "code": null, "e": 1738, "s": 1597, "text": "Explanation − we have to match our matrix data with the value x, so there is no element that is less than or equals to x. So the count is 0." }, { "code": null, "e": 1806, "s": 1738, "text": "Input the size of the matrix and then create the matrix of size nxn" }, { "code": null, "e": 1874, "s": 1806, "text": "Input the size of the matrix and then create the matrix of size nxn" }, { "code": null, "e": 1911, "s": 1874, "text": "Start the loop, I from 0 to row size" }, { "code": null, "e": 1948, "s": 1911, "text": "Start the loop, I from 0 to row size" }, { "code": null, "e": 2011, "s": 1948, "text": "Inside the loop, I, start another loop j from 0 to column size" }, { "code": null, "e": 2074, "s": 2011, "text": "Inside the loop, I, start another loop j from 0 to column size" }, { "code": null, "e": 2168, "s": 2074, "text": "Now, check if matrix[i][j] = x, IF yes then increase the count by 1 Else ignore the\ncondition" }, { "code": null, "e": 2262, "s": 2168, "text": "Now, check if matrix[i][j] = x, IF yes then increase the count by 1 Else ignore the\ncondition" }, { "code": null, "e": 2285, "s": 2262, "text": "Return the total count" }, { "code": null, "e": 2308, "s": 2285, "text": "Return the total count" }, { "code": null, "e": 2326, "s": 2308, "text": "Print the result." }, { "code": null, "e": 2344, "s": 2326, "text": "Print the result." }, { "code": null, "e": 2355, "s": 2344, "text": " Live Demo" }, { "code": null, "e": 3016, "s": 2355, "text": "#include <bits/stdc++.h>\nusing namespace std;\n#define size 3\n//function to count the total elements\nint count(int matrix[size][size], int x){\n int count=0;\n //traversing the matrix row-wise\n for(int i = 0 ;i<size; i++){\n for (int j = 0; j<size ; j++){\n //check if value of matrix is less than or\n //equals to the x\n if(matrix[i][j]<= x){\n count++;\n }\n }\n }\n return count;\n}\nint main(){\n int matrix[size][size] ={\n {1, 2, 3},\n {4, 5, 6},\n {7, 8, 9}\n };\n int x = 5;\n cout<<\"Count of elements smaller than or equal to x in a sorted matrix is: \"<<count(matrix,x);\n return 0;\n}" }, { "code": null, "e": 3076, "s": 3016, "text": "If we run the above code we will get the following output −" }, { "code": null, "e": 3146, "s": 3076, "text": "Count of elements smaller than or equal to x in a sorted matrix is: 5" } ]
How to simulate a keypress event in JavaScript?
To simulate a key press event, use event handlers. You can try to run the following code to simulate a key press event.

<html>
   <head>
      <script src = "https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js">
      </script>
      <script>
         jQuery(document).ready(function($) {
            $('body').keypress(function(e) {
               alert(String.fromCharCode(e.which));
            });
         });
         jQuery.fn.simulateKeyPress = function(character) {
            jQuery(this).trigger({
               type: 'keypress',
               which: character.charCodeAt(0)
            });
         };
         setTimeout(function() {
            $('body').simulateKeyPress('z');
         }, 2000);
      </script>
   </head>
   <body>
      <p>press any key</p>
   </body>
</html>
[ { "code": null, "e": 1181, "s": 1062, "text": "To simulate a key press event, use event handlers. You can try to run the following code to simulate a key press event" }, { "code": null, "e": 1191, "s": 1181, "text": "Live Demo" }, { "code": null, "e": 1881, "s": 1191, "text": "<html>\n <head>\n <script src = \"https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js\">\n </script>\n <script>\n jQuery(document).ready(function($) {\n $('body').keypress(function(e) {\n alert(String.fromCharCode(e.which));\n });\n });\n jQuery.fn.simulateKeyPress = function(character) {\n jQuery(this).trigger({\n type: 'keypress',\n which: character.charCodeAt(0)\n });\n };\n setTimeout(function() {\n $('body').simulateKeyPress('z');\n }, 2000);\n </script>\n </head>\n <body>\n <p>press any key</p>\n </body>\n</html>" } ]
Count all sub-sequences having product <= K – Recursive approach in C++
In this tutorial, we will be discussing a program to find the number of sub-sequences having product <= K.

For this we will be provided with an array and a value K. Our task is to find the number of sub-sequences having their product less than or equal to K.

#include <bits/stdc++.h>
#define ll long long
using namespace std;
//keeping count of discarded sub sequences
ll discard_count = 0;
ll power(ll a, ll n){
   if (n == 0)
      return 1;
   ll p = power(a, n / 2);
   p = p * p;
   if (n & 1)
      p = p * a;
   return p;
}
//recursive approach to count
//discarded sub sequences
void solve(int i, int n, float sum, float k,
float* a, float* prefix){
   if (sum > k) {
      discard_count += power(2, n - i);
      return;
   }
   if (i == n)
      return;
   float rem = prefix[n - 1] - prefix[i];
   if (sum + a[i] + rem > k)
      solve(i + 1, n, sum + a[i], k, a, prefix);
   if (sum + rem > k)
      solve(i + 1, n, sum, k, a, prefix);
}
int countSubsequences(const int* arr,
int n, ll K){
   float sum = 0.0;
   float k = log2(K);
   float prefix[n], a[n];
   for (int i = 0; i < n; i++) {
      a[i] = log2(arr[i]);
      sum += a[i];
   }
   prefix[0] = a[0];
   for (int i = 1; i < n; i++) {
      prefix[i] = prefix[i - 1] + a[i];
   }
   ll total = power(2, n) - 1;
   if (sum <= k) {
      return total;
   }
   solve(0, n, 0.0, k, a, prefix);
   return total - discard_count;
}
int main() {
   int arr[] = { 4, 8, 7, 2 };
   int n = sizeof(arr) / sizeof(arr[0]);
   ll k = 50;
   cout << countSubsequences(arr, n, k);
   return 0;
}

On running the above code, we will get the following output −

9
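A note on why the code converts everything with log2: for positive array values, checking whether a running product stays within K is equivalent to checking whether the corresponding sum of logarithms stays within log2(K), which avoids overflowing the product for long sub-sequences:

a_1 * a_2 * ... * a_m <= K  ⟺  log2(a_1) + log2(a_2) + ... + log2(a_m) <= log2(K)

This equivalence is exactly what the prefix array of log2 values and the sum parameter of solve() implement.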
[ { "code": null, "e": 1169, "s": 1062, "text": "In this tutorial, we will be discussing a program to find the number of sub-sequences having product <= k." }, { "code": null, "e": 1302, "s": 1169, "text": "For this we will be provided with an array and a value K. Our task is to find the number of sub sequences having their product as K." }, { "code": null, "e": 1313, "s": 1302, "text": " Live Demo" }, { "code": null, "e": 2610, "s": 1313, "text": "#include <bits/stdc++.h>\n#define ll long long\nusing namespace std;\n//keeping count of discarded sub sequences\nll discard_count = 0;\nll power(ll a, ll n){\n if (n == 0)\n return 1;\n ll p = power(a, n / 2);\n p = p * p;\n if (n & 1)\n p = p * a;\n return p;\n}\n//recursive approach to count\n//discarded sub sequences\nvoid solve(int i, int n, float sum, float k,\nfloat* a, float* prefix){\n if (sum > k) {\n discard_count += power(2, n - i);\n return;\n }\n if (i == n)\n return;\n float rem = prefix[n - 1] - prefix[i];\n if (sum + a[i] + rem > k)\n solve(i + 1, n, sum + a[i], k, a, prefix);\n if (sum + rem > k)\n solve(i + 1, n, sum, k, a, prefix);\n}\nint countSubsequences(const int* arr,\nint n, ll K){\n float sum = 0.0;\n float k = log2(K);\n float prefix[n], a[n];\n for (int i = 0; i < n; i++) {\n a[i] = log2(arr[i]);\n sum += a[i];\n }\n prefix[0] = a[0];\n for (int i = 1; i < n; i++) {\n prefix[i] = prefix[i - 1] + a[i];\n }\n ll total = power(2, n) - 1;\n if (sum <= k) {\n return total;\n }\n solve(0, n, 0.0, k, a, prefix);\n return total - discard_count;\n}\nint main() {\n int arr[] = { 4, 8, 7, 2 };\n int n = sizeof(arr) / sizeof(arr[0]);\n ll k = 50;\n cout << countSubsequences(arr, n, k);\n return 0;\n}" }, { "code": null, "e": 2612, "s": 2610, "text": "9" } ]
How to solve the simultaneous linear equations in R?
The data in simultaneous equations can be read as matrices and then we can solve those matrices to find the value of the variables. For example, if we have three equations as −

x + y + 2z = 6
3x + 2y + 4z = 9
2x + 3y – 6z = 3

then we will convert these equations into matrices and solve them using the solve function in R.

> A<-matrix(c(1,1,2,3,2,4,2,3,-6),nrow=3,byrow=TRUE)
> A
     [,1] [,2] [,3]
[1,]    1    1    2
[2,]    3    2    4
[3,]    2    3   -6

> b<-matrix(c(6,9,3))
> b
     [,1]
[1,]    6
[2,]    9
[3,]    3

> solve(A,b)
     [,1]
[1,] -3.0
[2,]  6.0
[3,]  1.5

Hence, the answer is x = -3, y = 6, and z = 1.5.

4x - 3y + z = -10
2x + y + 3z = 0
-1x + 2y - 5z = 17

> A<-matrix(c(4,-3,1,2,1,3,-1,2,-5),nrow=3,byrow=TRUE)
> A
     [,1] [,2] [,3]
[1,]    4   -3    1
[2,]    2    1    3
[3,]   -1    2   -5

> b<-matrix(c(-10,0,17))
> b
     [,1]
[1,]  -10
[2,]    0
[3,]   17

> solve(A,b)
     [,1]
[1,]    1
[2,]    4
[3,]   -2

4x – 2y + 3z = 1
x + 3y – 4z = -7
3x + y + 2z = 5

> A<-matrix(c(4,-2,3,1,3,-4,3,1,2),nrow=3,byrow=TRUE)
> A
     [,1] [,2] [,3]
[1,]    4   -2    3
[2,]    1    3   -4
[3,]    3    1    2

> b<-matrix(c(1,-7,5))
> b
     [,1]
[1,]    1
[2,]   -7
[3,]    5

> solve(A,b)
     [,1]
[1,]   -1
[2,]    2
[3,]    3

x + 2y – 3z + 4t = 12
2x + 2y – 2z + 3t = 10
y + z = -1
x - y + z – 2t = -4

> A<-matrix(c(1,2,-3,4,2,2,-2,3,0,1,1,0,1,-1,1,-2),nrow=4,byrow=TRUE)
> A
     [,1] [,2] [,3] [,4]
[1,]    1    2   -3    4
[2,]    2    2   -2    3
[3,]    0    1    1    0
[4,]    1   -1    1   -2

> b<-matrix(c(12,10,-1,-4))
> b
     [,1]
[1,]   12
[2,]   10
[3,]   -1
[4,]   -4

> solve(A,b)
     [,1]
[1,]    1
[2,]    0
[3,]   -1
[4,]    2
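Each of the systems above is just the matrix equation A x = b, which is what solve(A, b) solves. For the first system, written out, this is:

A = [ 1  1  2 ; 3  2  4 ; 2  3  -6 ],  x = (x, y, z)^T,  b = (6, 9, 3)^T

and solve(A, b) returns x = A^(-1) b, here (-3, 6, 1.5)^T.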
[ { "code": null, "e": 1237, "s": 1062, "text": "The data in simultaneous equations can be read as matrix and then we can solve those matrices to find the value of the variables. For example, if we have three equations as −" }, { "code": null, "e": 1285, "s": 1237, "text": "x + y + z = 6\n3x + 2y + 4z = 9\n2x + 2y – 6z = 3" }, { "code": null, "e": 1378, "s": 1285, "text": "then we will convert these equations into matrices and solve them using solve function in R." }, { "code": null, "e": 1389, "s": 1378, "text": " Live Demo" }, { "code": null, "e": 1446, "s": 1389, "text": "> A<-matrix(c(1,1,2,3,2,4,2,3,-6),nrow=3,byrow=TRUE)\n> A" }, { "code": null, "e": 1516, "s": 1446, "text": " [,1] [,2] [,3]\n[1,] 1 1 2\n[2,] 3 2 4\n[3,] 2 3 -6" }, { "code": null, "e": 1527, "s": 1516, "text": " Live Demo" }, { "code": null, "e": 1553, "s": 1527, "text": "> b<-matrix(c(6,9,3))\n> b" }, { "code": null, "e": 1579, "s": 1553, "text": "[,1]\n[1,] 6\n[2,] 9\n[3,] 3" }, { "code": null, "e": 1592, "s": 1579, "text": "> solve(A,b)" }, { "code": null, "e": 1625, "s": 1592, "text": "[,1]\n[1,] -3.0\n[2,] 6.0\n[3,] 1.5" }, { "code": null, "e": 1674, "s": 1625, "text": "Hence, the answer is x = -3, y = 6, and z = 1.5." }, { "code": null, "e": 1727, "s": 1674, "text": "4x - 3y + x = -10\n2x + y + 3z = 0\n-1x + 2y - 5z = 17" }, { "code": null, "e": 1738, "s": 1727, "text": " Live Demo" }, { "code": null, "e": 1797, "s": 1738, "text": "> A<-matrix(c(4,-3,1,2,1,3,-1,2,-5),nrow=3,byrow=TRUE)\n> A" }, { "code": null, "e": 1870, "s": 1797, "text": " [,1] [,2] [,3]\n[1,] 4 -3 1\n[2,] 2 1 3\n[3,] -1 2 -5" }, { "code": null, "e": 1880, "s": 1870, "text": "Live Demo" }, { "code": null, "e": 1909, "s": 1880, "text": "> b<-matrix(c(-10,0,17))\n> b" }, { "code": null, "e": 1938, "s": 1909, "text": "[,1]\n[1,] -10\n[2,] 0\n[3,] 17" }, { "code": null, "e": 1951, "s": 1938, "text": "> solve(A,b)" }, { "code": null, "e": 1978, "s": 1951, "text": "[,1]\n[1,] 1\n[2,] 4\n[3,] -2" }, { "code": null, "e": 2028, "s": 1978, "text": "4x – 2y + 3z = 1\nx + 3y – 4z = -7\n3x + y + 2z = 5" }, { "code": null, "e": 2038, "s": 2028, "text": "Live Demo" }, { "code": null, "e": 2096, "s": 2038, "text": "> A<-matrix(c(4,-2,3,1,3,-4,3,1,2),nrow=3,byrow=TRUE)\n> A" }, { "code": null, "e": 2167, "s": 2096, "text": " [,1] [,2] [,3]\n[1,] 4 -2 3\n[2,] 1 3 -4\n[3,] 3 1 2" }, { "code": null, "e": 2178, "s": 2167, "text": " Live Demo" }, { "code": null, "e": 2205, "s": 2178, "text": "> b<-matrix(c(1,-7,5))\n> b" }, { "code": null, "e": 2232, "s": 2205, "text": "[,1]\n[1,] 1\n[2,] -7\n[3,] 5" }, { "code": null, "e": 2245, "s": 2232, "text": "> solve(A,b)" }, { "code": null, "e": 2272, "s": 2245, "text": "[,1]\n[1,] -1\n[2,] 2\n[3,] 3" }, { "code": null, "e": 2348, "s": 2272, "text": "x + 2y – 3z + 4t = 12\n2x + 2y – 2z + 3t = 10\ny + z = -1\nx - y + z – 2t = -4" }, { "code": null, "e": 2358, "s": 2348, "text": "Live Demo" }, { "code": null, "e": 2432, "s": 2358, "text": "> A<-matrix(c(1,2,-3,4,2,2,-2,3,0,1,1,0,1,-1,1,-2),nrow=4,byrow=TRUE)\n> A" }, { "code": null, "e": 2547, "s": 2432, "text": " [,1] [,2] [,3] [,4]\n[1,] 1 2 -3 4\n[2,] 2 2 -2 3\n[3,] 0 1 1 0\n[4,] 1 -1 1 -2" }, { "code": null, "e": 2558, "s": 2547, "text": " Live Demo" }, { "code": null, "e": 2590, "s": 2558, "text": "> b<-matrix(c(12,10,-1,-4))\n> b" }, { "code": null, "e": 2627, "s": 2590, "text": "[,1]\n[1,] 12\n[2,] 10\n[3,] -1\n[4,] -4" }, { "code": null, "e": 2640, "s": 2627, "text": "> solve(A,b)" }, { "code": null, "e": 2674, "s": 2640, "text": "[,1]\n[1,] 1\n[2,] 0\n[3,] -1\n[4,] 
2" } ]
How to write a simple calculator program using C language?
Begin by writing the C code to create a simple calculator. Then, follow the algorithm given below to write a C program.

Step 1: Declare variables
Step 2: Enter any operator at runtime
Step 3: Enter any two numeric values at runtime
Step 4: Apply switch case to select the operator:
   case '+': result = num1 + num2;
      break;
   case '-': result = num1 - num2;
      break;
   case '*': result = num1 * num2;
      break;
   case '/': result = num1 / num2;
      break;
   default: printf("\n Invalid Operator ");
Step 5: Print the result

Following is the C program for the calculator using the switch case −

#include <stdio.h>
int main(){
   char Operator;
   float num1, num2, result = 0;
   printf("\n Enter any one operator like +, -, *, / : ");
   scanf("%c", &Operator);
   printf("Enter the values of Operands num1 and num2 \n : ");
   scanf("%f%f", &num1, &num2);
   switch(Operator){
      case '+': result = num1 + num2;
         break;
      case '-': result = num1 - num2;
         break;
      case '*': result = num1 * num2;
         break;
      case '/': result = num1 / num2;
         break;
      default: printf("\n Invalid Operator ");
   }
   printf("The value = %f", result);
   return 0;
}

When the above program is executed, it produces the following result −

Enter any one operator: +
Enter values of Operands num1 and num2:
23
45
The value = 68.000000
[ { "code": null, "e": 1182, "s": 1062, "text": "Begin by writing the C code to create a simple calculator. Then, follow the algorithm given below to write a C program." }, { "code": null, "e": 1681, "s": 1182, "text": "Step 1: Declare variables\nStep 2: Enter any operator at runtime\nStep 3: Enter any two integer values at runtime\nStep 4: Apply switch case to select the operator:\n // case '+': result = num1 + num2;\n break;\n case '-': result = num1 - num2;\n break;\n case '*': result = num1 * num2;\n break;\n case '/': result = num1 / num2;\n break;\n default: printf(\"\\n Invalid Operator \");\nStep 5: Print the result" }, { "code": null, "e": 1750, "s": 1681, "text": "Following is the C program for calculator by using the Switch Case −" }, { "code": null, "e": 1761, "s": 1750, "text": " Live Demo" }, { "code": null, "e": 2365, "s": 1761, "text": "#include <stdio.h>\nint main(){\n char Operator;\n float num1, num2, result = 0;\n printf(\"\\n Enter any one operator like +, -, *, / : \");\n scanf(\"%c\", &Operator);\n printf(\"Enter the values of Operands num1 and num2 \\n : \");\n scanf(\"%f%f\", &num1, &num2);\n switch(Operator){\n case '+': result = num1 + num2;\n break;\n case '-': result = num1 - num2;\n break;\n case '*': result = num1 * num2;\n break;\n case '/': result = num1 / num2;\n break;\n default: printf(\"\\n Invalid Operator \");\n }\n printf(\"The value = %f\", result);\n return 0;\n}" }, { "code": null, "e": 2436, "s": 2365, "text": "When the above program is executed, it produces the following result −" }, { "code": null, "e": 2530, "s": 2436, "text": "Enter any one operator: +\nEnter values of Operands num1 and num2:\n23\n45\nThe value = 68.000000" } ]
How to reuse plots in Matplotlib?
To reuse plots in Matplotlib, we can take the following steps −

Set the figure size and adjust the padding between and around the subplots.
Create a new figure or activate an existing figure using the figure() method.
Plot a line with some input lists.
To reuse the plot, update the y data and the linewidth of the plot.
To display the figure, use the show() method.

from matplotlib import pyplot as plt

plt.rcParams["figure.figsize"] = [7.50, 3.50]
plt.rcParams["figure.autolayout"] = True

line, = plt.plot([1, 3], [3, 4], label="line plot", color='red', lw=0.5)

line.set_ydata([3.5])
line.set_linewidth(4)

plt.show()
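One caveat about the snippet above: set_ydata([3.5]) replaces the line's two-point y data with a single value, so its x and y data no longer have the same length, and Matplotlib will typically refuse to draw the line in that state. A safer sketch of the same "reuse the Line2D object" idea, with the new y values chosen purely for illustration, keeps the lengths matched:

from matplotlib import pyplot as plt

plt.rcParams["figure.figsize"] = [7.50, 3.50]
plt.rcParams["figure.autolayout"] = True

line, = plt.plot([1, 3], [3, 4], label="line plot", color='red', lw=0.5)

# update the reused line; keep y the same length as its x data (two points)
line.set_ydata([3.5, 4.5])
line.set_linewidth(4)

plt.show()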
[ { "code": null, "e": 1126, "s": 1062, "text": "To reuse plots in Matplotlib, we can take the following steps −" }, { "code": null, "e": 1202, "s": 1126, "text": "Set the figure size and adjust the padding between and around the subplots." }, { "code": null, "e": 1276, "s": 1202, "text": "Create a new figure or activate an existing figure using figure() method." }, { "code": null, "e": 1311, "s": 1276, "text": "Plot a line with some input lists." }, { "code": null, "e": 1374, "s": 1311, "text": "To reuse the plot, update y data and the linewidth of the plot" }, { "code": null, "e": 1416, "s": 1374, "text": "To display the figure, use show() method." }, { "code": null, "e": 1672, "s": 1416, "text": "from matplotlib import pyplot as plt\n\nplt.rcParams[\"figure.figsize\"] = [7.50, 3.50]\nplt.rcParams[\"figure.autolayout\"] = True\n\nline, = plt.plot([1, 3], [3, 4], label=\"line plot\", color='red', lw=0.5)\n\nline.set_ydata([3.5])\nline.set_linewidth(4)\n\nplt.show()" } ]
LESS - Lighten
It lightens the color in the element. It has the following parameters −

color − It represents the color object.
amount − It contains a percentage between 0 and 100%.
method − It is an optional parameter which is used to make the adjustment relative to the current value by setting it to relative.

The following example demonstrates the use of the lighten color operation in the LESS file −

<html>
   <head>
      <title>Lighten</title>
      <link rel = "stylesheet" type = "text/css" href = "style.css"/>
   </head>

   <body>
      <h2>Example of Lighten Color Operation</h2>
      <div class = "myclass1">
         <p>color :<br> #426105</p>
      </div><br>

      <div class = "myclass2">
         <p>result :<br> #639108</p>
      </div>
   </body>
</html>

Next, create the style.less file.

.myclass1 {
   height:100px;
   width:100px;
   padding: 30px 0px 0px 25px;
   background-color: hsl(80, 90%, 20%);
   color:white;
}

.myclass2 {
   height:100px;
   width:100px;
   padding: 30px 0px 0px 25px;
   background-color: lighten(hsl(80, 90%, 20%), 10%);
   color:white;
}

You can compile the style.less to style.css by using the following command −

lessc style.less style.css

Execute the above command; it will create the style.css file automatically with the following code −

.myclass1 {
   height: 100px;
   width: 100px;
   padding: 30px 0px 0px 25px;
   background-color: #426105;
   color: white;
}

.myclass2 {
   height: 100px;
   width: 100px;
   padding: 30px 0px 0px 25px;
   background-color: #639108;
   color: white;
}

Follow these steps to see how the above code works −

Save the above html code in the lighten.html file.
Open this HTML file in a browser; the following output will get displayed.
[ { "code": null, "e": 2622, "s": 2550, "text": "It lightens the color in the element. It has the following parameters −" }, { "code": null, "e": 2662, "s": 2622, "text": "color − It represents the color object." }, { "code": null, "e": 2702, "s": 2662, "text": "color − It represents the color object." }, { "code": null, "e": 2752, "s": 2702, "text": "amount − It contains percentage between 0 - 100%." }, { "code": null, "e": 2802, "s": 2752, "text": "amount − It contains percentage between 0 - 100%." }, { "code": null, "e": 2927, "s": 2802, "text": "method − It is an optional parameter which is used for adjustment to be relative to current value by setting it to relative." }, { "code": null, "e": 3052, "s": 2927, "text": "method − It is an optional parameter which is used for adjustment to be relative to current value by setting it to relative." }, { "code": null, "e": 3141, "s": 3052, "text": "The following example demonstrates the use of lighten color operation in the LESS file −" }, { "code": null, "e": 3514, "s": 3141, "text": "<html>\n <head>\n <title>Lighten</title>\n <link rel = \"stylesheet\" type = \"text/css\" href = \"style.css\"/>\n </head>\n\n <body>\n <h2>Example of Lighten Color Operation</h2>\n <div class = \"myclass1\">\n <p>color :<br> #426105</p>\n </div><br>\n\n <div class = \"myclass2\">\n <p>result :<br> #639108</p>\n </div>\n </body>\n</html>" }, { "code": null, "e": 3548, "s": 3514, "text": "Next, create the style.less file." }, { "code": null, "e": 3831, "s": 3548, "text": ".myclass1 {\n height:100px;\n width:100px;\n padding: 30px 0px 0px 25px;\n background-color: hsl(80, 90%, 20%);\n color:white;\n}\n\n.myclass2 {\n height:100px;\n width:100px;\n padding: 30px 0px 0px 25px;\n background-color: lighten(hsl(80, 90%, 20%), 10%);\n color:white;\n}" }, { "code": null, "e": 3908, "s": 3831, "text": "You can compile the style.less to style.css by using the following command −" }, { "code": null, "e": 3936, "s": 3908, "text": "lessc style.less style.css\n" }, { "code": null, "e": 4037, "s": 3936, "text": "Execute the above command; it will create the style.css file automatically with the following code −" }, { "code": null, "e": 4292, "s": 4037, "text": ".myclass1 {\n height: 100px;\n width: 100px;\n padding: 30px 0px 0px 25px;\n background-color: #426105;\n color: white;\n}\n\n.myclass2 {\n height: 100px;\n width: 100px;\n padding: 30px 0px 0px 25px;\n background-color: #639108;\n color: white;\n}" }, { "code": null, "e": 4345, "s": 4292, "text": "Follow these steps to see how the above code works −" }, { "code": null, "e": 4396, "s": 4345, "text": "Save the above html code in the lighten.html file." }, { "code": null, "e": 4447, "s": 4396, "text": "Save the above html code in the lighten.html file." }, { "code": null, "e": 4522, "s": 4447, "text": "Open this HTML file in a browser, the following output will get displayed." }, { "code": null, "e": 4597, "s": 4522, "text": "Open this HTML file in a browser, the following output will get displayed." 
}, { "code": null, "e": 4630, "s": 4597, "text": "\n 20 Lectures \n 1 hours \n" }, { "code": null, "e": 4644, "s": 4630, "text": " Anadi Sharma" }, { "code": null, "e": 4679, "s": 4644, "text": "\n 44 Lectures \n 7.5 hours \n" }, { "code": null, "e": 4707, "s": 4679, "text": " Eduonix Learning Solutions" }, { "code": null, "e": 4740, "s": 4707, "text": "\n 17 Lectures \n 2 hours \n" }, { "code": null, "e": 4753, "s": 4740, "text": " Zach Miller" }, { "code": null, "e": 4788, "s": 4753, "text": "\n 23 Lectures \n 1.5 hours \n" }, { "code": null, "e": 4801, "s": 4788, "text": " Zach Miller" }, { "code": null, "e": 4834, "s": 4801, "text": "\n 34 Lectures \n 4 hours \n" }, { "code": null, "e": 4845, "s": 4834, "text": " Syed Raza" }, { "code": null, "e": 4878, "s": 4845, "text": "\n 31 Lectures \n 3 hours \n" }, { "code": null, "e": 4898, "s": 4878, "text": " Harshit Srivastava" }, { "code": null, "e": 4905, "s": 4898, "text": " Print" }, { "code": null, "e": 4916, "s": 4905, "text": " Add Notes" } ]
State Space Model and Kalman Filter for Time-Series Prediction | by Sarit Maitra | Towards Data Science
https://sarit-maitra.medium.com/membership

Time series consist of four major components: Seasonal variations (SV), Trend variations (TV), Cyclical variations (CV), and Random variations (RV). Here, we will perform predictive analytics using a state space model on uni-variate time series data. This model has continuous hidden and observed states.

Let us use historical data of Schlumberger Limited (SLB) from 1986 onwards.

df1 = ts(df1$Open, start = c(1986,1), end = c(2019,12), frequency = 12)
xyplot(df1, ylab = "Price (US $)", main = "Time series plot for Schlumberger price")

Here, data is on monthly frequency (12 months) for the ease of computation.

The line plot shows fluctuating price all throughout with high volatility.

The distribution plot comprising density and normal QQ plot below clearly shows that the data distribution is not normal.

par(mfrow=c(2,1))          # set up the graphics
hist(df1, prob=TRUE, 12)   # histogram
lines(density(df1))        # density for details
qqnorm(df1)                # normal Q-Q plot
qqline(df1)

Let us perform stationarity tests (ADF, Phillips-Perron & KPSS) on the original data.

stationary.test(df1, method = "adf")
stationary.test(df1, method = "pp")    # same as pp.test(x)
stationary.test(df1, method = "kpss")

Augmented Dickey-Fuller Test
alternative: stationary
Type 1: no drift no trend
     lag   ADF p.value
[1,]   0 0.843   0.887
[2,]   1 0.886   0.899
[3,]   2 0.937   0.906
[4,]   3 0.924   0.904
[5,]   4 0.864   0.893
[6,]   5 1.024   0.917
Type 2: with drift no trend
     lag     ADF p.value
[1,]   0 -0.1706   0.936
[2,]   1 -0.0728   0.950
[3,]   2 -0.0496   0.952
[4,]   3 -0.0435   0.952
[5,]   4 -0.0883   0.947
[6,]   5  0.3066   0.978
Type 3: with drift and trend
     lag   ADF p.value
[1,]   0 -2.84   0.224
[2,]   1 -2.83   0.228
[3,]   2 -2.72   0.272
[4,]   3 -2.79   0.242
[5,]   4 -2.96   0.172
[6,]   5 -2.96   0.173
----
Note: in fact, p.value = 0.01 means p.value <= 0.01

Phillips-Perron Unit Root Test
alternative: stationary
Type 1: no drift no trend
 lag Z_rho p.value
   5 0.343   0.768
-----
Type 2: with drift no trend
 lag   Z_rho p.value
   5 -0.0692   0.953
-----
Type 3: with drift and trend
 lag Z_rho p.value
   5 -11.6   0.386
---------------
Note: p-value = 0.01 means p.value <= 0.01

KPSS Unit Root Test
alternative: nonstationary
Type 1: no drift no trend
 lag  stat p.value
   4 0.261     0.1
-----
Type 2: with drift no trend
 lag  stat p.value
   4 0.367  0.0914
-----
Type 3: with drift and trend
 lag  stat p.value
   4 0.123  0.0924
-----------
Note: p.value = 0.01 means p.value <= 0.01
    : p.value = 0.10 means p.value >= 0.10

I have normalized the dataset using the mean and std. dev.

Two reference signals are used for comparison: a stationary one (Gaussian noise) and one with a trend (the cumulative sum of Gaussian noise). Here, we will check each for characteristics of stationarity by looking at the auto-correlation function of each signal. We would expect the ACF to go to 0 for each time lag (τ) for a stationary signal, because we expect no dependence with time.

We see here that the stationary signal has very few lags exceeding the confidence interval of the ACF, while the trend resulted in almost all lags exceeding the confidence interval. It can be concluded that the noise signal is stationary, but the trend signal is not. The stationary series has a better variance around the mean level, and the peaks are evidence of the interventions in the original series.

We will further decompose the time series, which involves a combination of level, trend, seasonality, and noise components. Decomposition helps to provide a better understanding of problems during analysis and forecasting.

We may apply differencing or a log transform to the data to eliminate trend and seasonality. Such a process may not be a shortcoming if we are only concerned with forecasting. However, in many contexts of statistical and econometric application, knowledge of these components has underlying importance. Estimates of the trend and seasonal components can be recovered from differenced series by maximizing the residual mean square, but this is not as appealing as modeling the components directly. We have to remember that real series are never stationary.

Here, we will use the simple moving average smoothing method on the time series to estimate the trend component.

df1SMA8 <- SMA(df1, n=8)    # smoothing with moving average 8
plot.ts(df1SMA8)

df1Comp <- decompose(df1SMA8)    # decomposing
plot(df1Comp, yax.flip=TRUE)

The plot shows the original time series (top), the estimated trend component (second from top), the estimated seasonal component (third from top), and the estimated irregular component (bottom).

We see that the estimated trend component shows a small decrease from about 9 in 1997 to about 7 in 1999, followed by a steady increase from then on to about 12 in 2019.

df1.Comp.seasonal <- sapply(df1Comp$seasonal, nchar)
df1SeasonAdj <- df1 - df1.Comp.seasonal
plot.ts(df1SeasonAdj)

We will also explore the Kalman filter for series filtering & smoothing purposes prior to prediction.

Structural time series models are (linear Gaussian) state-space models for (uni-variate) time series. When considering state space architecture, normally we are interested in three primary areas:

Prediction, which is forecasting subsequent values of the state

Filtering, which is estimating the current values of the state from past and current observations

Smoothing, which is estimating the past values of the state given the observations

We will use the Kalman filter to carry out the various types of inference.

Filtering helps us to update our knowledge of the system as each observation comes in. Smoothing helps us to base our estimates of quantities of interest on the entire sample.

The structural model has the advantage of simple usage and is quite reliable. It gives the main tools for fitting a structural model for a time series by maximum likelihood.

A structural time series state-space model is based on a decomposition of the series into a number of components. They are specified by a set of error variances, some of which may be zero. We will use a basic structural model to fit the stochastic level model and forecast. The two main components which make up state space models are (1) the observed data and (2) the unobserved states.

The simplest model, the local level model, has an underlying level μt which evolves as a random walk, μt = μt-1 + ξt (ξt white noise).

We need to see the observations, since the states are hidden to us by system noise. The observations are a linear combination of the current state and some additional random variation known as measurement noise. The observations are yt = μt + εt (εt white noise).

It is in fact an ARIMA(0,1,1) model, but with restrictions on the parameter set. This is a stochastically varying level (random walk) observed with noise.

The local linear trend model has the same measurement equation, but with a time-varying slope νt in the dynamics for μt, given by μt = μt-1 + νt-1 + ξt and νt = νt-1 + ζt, with three variance parameters. Here εt, ξt and ζt are independent Gaussian white noise processes. The basic structural model is a local trend model with an additional seasonal component. Thus the measurement equation is yt = μt + γt + εt, where γt is a seasonal component with dynamics γt = -(γt-1 + ... + γt-s+1) + ωt, with ωt another independent white noise term.

It's best practice to check the convergence of the structural procedure. As with any structural process, we need to have appropriate initial starting points to ensure the algorithm will converge to the right maximum.

autoplot(training, series = "Training data") +
  autolayer(fitted(train, h=12), series = "12-step fitted values")

Cross validation is an important step of time series analysis.

Fit the model to data y1, . . . , yt
Generate a 1-step ahead forecast ŷt+1
Compute the forecast error e*t+1 = yt+1 − ŷt+1
Repeat steps 1–3 for t = m, . . . , n − 1, where m is the minimum number of observations needed to fit the model
Compute the forecast MSE from e*m+1, . . . , e*n

The p-value of the Ljung-Box test of residuals is 0.2131015 > the significance level (0.05); therefore, it is not advisable to use the result of the cross-validation as the model is clearly under-fitting the data.

The first diagnostic that we do with any statistical analysis is to check that our residuals correspond to our assumed error structure. We have two types of errors in a uni-variate state-space model: process errors, the wt, and observation errors, the vt. They should not have a temporal trend.

vt are the difference between the data and the predicted data at time t: vt = yt − Zxt − a.

In a state-space model, xt is stochastic and the model residuals are a random variable. yt is also stochastic, though often observed, unlike xt. The model residual random variable is: Vt = Yt − ZXt − a.

The unconditional mean and variance of Vt are 0 and R.

checkresiduals(train)

The Kalman filter algorithm uses a series of measurements observed over time, containing noise and other inaccuracies, and produces estimates of unknown variables. These estimates tend to be more accurate than those based on a single measurement alone. Using a Kalman filter does not assume that the errors are Gaussian; however, the filter yields the exact conditional probability estimate in the special case that all errors are Gaussian.

The Kalman filter is a means to find the estimates of the process. Filtering comes from its primitive use of reducing or "filtering out" unwanted variables, which in our case is the estimation error.

sm <- tsSmooth(train)
plot(df1)
lines(sm[, 1], col = 'blue')
lines(fitted(train)[, 1], col = 'red')
# Seasonally adjusted data
training.sa <- df1 - sm[, 1]
lines(training.sa, col = 'black')
legend("topleft", col = c('blue', 'red', 'black'), lty = 1,
       legend = c("Filtered level", "Smoothed level"))

x <- training
miss <- sample(1:length(x), 12)
x[miss] <- NA
estim <- sm[, 1] + sm[, 2]
plot(x, ylim = range(df1))
points(time(x)[miss], estim[miss], col = 'red', pch = 1)
points(time(x)[miss], df1[miss], col = 'blue', pch = 1)
legend("topleft", pch = 1, col = c(2, 1), legend = c("Estimate", "Actual"))

plot(sm, main = "")
mtext(text = "decomposition of the basic structural", side = 3, adj = 0, line = 1)

sm %>% forecast(h=12) %>% autoplot() + autolayer(testing)

The plot below shows the forecasted Schlumberger data together with 50% and 90% probability intervals.

As we can see, the BSM model has been able to pick up the seasonal component quite well. One can experiment here with the SMA-based decomposition (as shown earlier) and compare the forecast accuracy.

dlm models are a special case of state space models where the errors of the state and observed components are normally distributed. Here, the Kalman filter will be used to obtain:

filtered values of the state vectors,

smoothed values of the state vectors, and finally

forecasts, which provide means and variances of future observations and states.

We have to define the parameters before fitting a dlm model. The parameters are V, W (covariance matrices of the measurement and state equations, respectively), FF and GG (measurement equation matrix and transition matrix respectively), and m0, C0 (prior mean and covariance matrix of the state vector).

However, here, we start the dlm model by writing a small function that builds the model.

With dlm, I have considered a polynomial DLM (a local linear trend is a polynomial DLM of order 2) and a seasonal component of period 12. It's good practice, indeed part of best practice, to check the convergence of the MLE procedure.

The Kalman filter and smoother have been applied as well.

We can see that the dlm model's prediction accuracy is fairly good. The filtered and smoothed lines move almost together in the series and do not differ much from each other. The seasonal components are ignored here. The lines of the forecast series and the original series are quite close.

A good example of state-space models with time series analysis can be found here.

State space models come in lots of flavors; they are a flexible way of handling lots of time series models and provide a framework for handling missing values, likelihood estimation, smoothing, forecasting, etc. Both uni-variate and multi-variate data can be used to fit a state space model. We have shown a basic level model in this exercise.

I can be reached here.

Reference:

Durbin, J., & Koopman, S. J. (2012). Time series analysis by state space methods. Oxford University Press.
Giovanni Petris & Sonia Petrone (2011). State Space Models in R. Journal of Statistical Software.
G. Petris, S. Petrone, and P. Campagnoli (2009). Dynamic Linear Models with R. Springer.
Hyndman, R. J., & Athanasopoulos, G. (2018). Forecasting: principles and practice. OTexts.
[ { "code": null, "e": 215, "s": 172, "text": "https://sarit-maitra.medium.com/membership" }, { "code": null, "e": 517, "s": 215, "text": "Time series consist of four major components: Seasonal variations (SV), Trend variations (TV), Cyclical variations (CV), and Random variations (RV). Here, we will perform predictive analytics using state space model on uni-variate time series data. This model has continuous hidden and observed state." }, { "code": null, "e": 593, "s": 517, "text": "Let us use historical data of Schlumberger Limited (SLB) from 1986 onwards." }, { "code": null, "e": 748, "s": 593, "text": "df1 = ts(df1$Open, start= c(1986,1), end = c(2019,12), frequency = 12)xyplot(df1, ylab = “Price (US $)”, main = “Time series plot for Schlumberger price”)" }, { "code": null, "e": 824, "s": 748, "text": "Here, data is on monthly frequency (12 months) for the ease of computation." }, { "code": null, "e": 899, "s": 824, "text": "The line plot shows fluctuating price all throughout with high volatility." }, { "code": null, "e": 1017, "s": 899, "text": "The distribution plot comprising density and normal QQ plot below clearly shows that data distribution is not normal." }, { "code": null, "e": 1178, "s": 1017, "text": "par(mfrow=c(2,1)) # set up the graphics hist(df1, prob=TRUE, 12) # histogram lines(density(df1)) # density for details qqnorm(df1) # normal Q-Q plot qqline(df1)" }, { "code": null, "e": 1259, "s": 1178, "text": "Let us perform stationarity test (ADF, Phillips-Perron & KPSS) on original data." }, { "code": null, "e": 2733, "s": 1259, "text": "stationary.test(df1, method = “adf”)stationary.test(df1, method = “pp”) # same as pp.test(x)stationary.test(df1, method = “kpss”)Augmented Dickey-Fuller Test alternative: stationary Type 1: no drift no trend lag ADF p.value[1,] 0 0.843 0.887[2,] 1 0.886 0.899[3,] 2 0.937 0.906[4,] 3 0.924 0.904[5,] 4 0.864 0.893[6,] 5 1.024 0.917Type 2: with drift no trend lag ADF p.value[1,] 0 -0.1706 0.936[2,] 1 -0.0728 0.950[3,] 2 -0.0496 0.952[4,] 3 -0.0435 0.952[5,] 4 -0.0883 0.947[6,] 5 0.3066 0.978Type 3: with drift and trend lag ADF p.value[1,] 0 -2.84 0.224[2,] 1 -2.83 0.228[3,] 2 -2.72 0.272[4,] 3 -2.79 0.242[5,] 4 -2.96 0.172[6,] 5 -2.96 0.173---- Note: in fact, p.value = 0.01 means p.value <= 0.01 Phillips-Perron Unit Root Test alternative: stationary Type 1: no drift no trend lag Z_rho p.value 5 0.343 0.768----- Type 2: with drift no trend lag Z_rho p.value 5 -0.0692 0.953----- Type 3: with drift and trend lag Z_rho p.value 5 -11.6 0.386--------------- Note: p-value = 0.01 means p.value <= 0.01 KPSS Unit Root Test alternative: nonstationary Type 1: no drift no trend lag stat p.value 4 0.261 0.1----- Type 2: with drift no trend lag stat p.value 4 0.367 0.0914----- Type 1: with drift and trend lag stat p.value 4 0.123 0.0924----------- Note: p.value = 0.01 means p.value <= 0.01 : p.value = 0.10 means p.value >= 0.10" }, { "code": null, "e": 2788, "s": 2733, "text": "I have normalized the dataset using mean and std. dev." }, { "code": null, "e": 2877, "s": 2788, "text": "The stationary = Gaussian noise and one with a trend = cumulative sum of Gaussian noise." }, { "code": null, "e": 3124, "s": 2877, "text": "Here, we will check each for characteristics of stationarity by looking at the auto-correlation functions of each signal. We would expect the ACF to go to 0 for each time lag (τ) for a stationary signal, because we expect no dependence with time." 
}, { "code": null, "e": 3521, "s": 3124, "text": "We see here that, the stationary signal has very few lags exceeding the CI of the ACF . The trend resulted in almost all lags exceeding the confidence interval. It can be concluded that the ACF signal is stationary. But, the trend signal is not stationary . The stationary series has a better variance around the mean level, and the peaks are evidence of the interventions in the original series." }, { "code": null, "e": 3743, "s": 3521, "text": "We will further decompose the time series which involves a combination of level, trend, seasonality, and noise components. Decomposition helps to provide a better understanding of problems during analysis and forecasting." }, { "code": null, "e": 4282, "s": 3743, "text": "We may apply differencing the data or log transform the data to eliminate trend and seasonality. Such process may not be a shortcoming if we are only concerned with forecasting. However, in many contexts of statistics and econometric application, knowledge of this components has underlying importance. Estimates of trend & seasonal can be recovered from differenced series by maximizing the residual mean square but this is not as appealing as modeling the components directly. We have to remember that, real series are never stationary." }, { "code": null, "e": 4391, "s": 4282, "text": "Here, we will use simple moving average smoothing method of the time series to estimate the trend component." }, { "code": null, "e": 4466, "s": 4391, "text": "df1SMA8 <- SMA(df1, n=8) # smoothing with moving average 8plot.ts(df1SMA8)" }, { "code": null, "e": 4538, "s": 4466, "text": "df1Comp <- decompose(df1SMA8) # decomposingplot(df1Comp, yax.flip=TRUE)" }, { "code": null, "e": 4733, "s": 4538, "text": "The plot shows the original time series (top), the estimated trend component (second from top), the estimated seasonal component (third from top), and the estimated irregular component (bottom)." }, { "code": null, "e": 4903, "s": 4733, "text": "We see that the estimated trend component shows a small decrease from about 9 in 1997 to about 7 in 1999, followed by a steady increase from then on to about 12 in 2019." }, { "code": null, "e": 5016, "s": 4903, "text": "df1.Comp.seasonal <- sapply(df1Comp$seasonal, nchar)df1SeasonAdj <- df1 — df1.Comp.seasonalplot.ts(df1SeasonAdj)" }, { "code": null, "e": 5115, "s": 5016, "text": "We will also explore Kalman filter for series filtering & smoothening purpose prior to prediction." }, { "code": null, "e": 5323, "s": 5115, "text": "Structural time series models are (linear Gaussian) state-space models for (uni-variate) time series. When considering state space architecture, normally we are interested in considering three primary areas:" }, { "code": null, "e": 5386, "s": 5323, "text": "Prediction which is forecasting subsequent values of the state" }, { "code": null, "e": 5483, "s": 5386, "text": "Filtering which is estimating the current values of the state from past and current observations" }, { "code": null, "e": 5565, "s": 5483, "text": "Smoothing which is estimating the past values of the state given the observations" }, { "code": null, "e": 5636, "s": 5565, "text": "We will use Kalman Filter to carry out the various types of inference." }, { "code": null, "e": 5812, "s": 5636, "text": "Filtering helps us to update our knowledge of the system as each observation comes in. Smoothing helps us to base our estimates of quantities of interest on the entire sample." 
}, { "code": null, "e": 5987, "s": 5812, "text": "Structural mode has the advantage of being of simple usage and quite reliable. It gives the main tools for fitting a structural model for a time series by maximum likelihood." }, { "code": null, "e": 6368, "s": 5987, "text": "Structural time series state-space model based on a decomposition of the series into a number of components. They are specified by a set of error variances, some of which may be zero. We will use a basic structural model to fit the stochastic level model to forecast. The two main components which make up state space models are (1) an observed data and (2) the unobserved states." }, { "code": null, "e": 6457, "s": 6368, "text": "The simplest model is the local level model has an underlying level μt which evolves by:" }, { "code": null, "e": 6691, "s": 6457, "text": "We need to see the observations, since the states are hidden to us by system noise. The observations are a linear combination of the current state and some additional random variation known as measurement noise. The observations are:" }, { "code": null, "e": 6844, "s": 6691, "text": "It is in fact an ARIMA(0,1,1) model, but with restrictions on the parameter set. This is stochastically varying level (random walk) observed with noise." }, { "code": null, "e": 6971, "s": 6844, "text": "The local linear trend model has the same measurement equation, but with a time-varying slope in the dynamics for μt, given by" }, { "code": null, "e": 7195, "s": 6971, "text": "with three variance parameters. Here εt , ξt and ζt are independent Gaussian white noise processes. The basic structural model, is a local trend model with an additional seasonal component. Thus the measurement equation is:" }, { "code": null, "e": 7242, "s": 7195, "text": "where γt is a seasonal component with dynamics" }, { "code": null, "e": 7458, "s": 7242, "text": "It’s best practice to check the convergence of the structural procedure. As with any structural process we need to have appropriate initial starting points to ensure the algorithm will converge to the right maximum." }, { "code": null, "e": 7566, "s": 7458, "text": "autoplot(training, series=”Training data”) + autolayer(fitted(train, h=12), series=”12-step fitted values”)" }, { "code": null, "e": 7629, "s": 7566, "text": "Cross validation is an important step of time series analysis." }, { "code": null, "e": 7662, "s": 7629, "text": "Fit model to data y1, . . . , yt" }, { "code": null, "e": 7699, "s": 7662, "text": "Generate 1-step ahead forecast ˆyt+1" }, { "code": null, "e": 7745, "s": 7699, "text": "Compute forecast error e ∗ t+1 = yt+1 − yˆt+1" }, { "code": null, "e": 7842, "s": 7745, "text": "Repeat steps 1–3 for t = m, . . . , n − 1 where m is minimum number of observations to fit model" }, { "code": null, "e": 7889, "s": 7842, "text": "Compute forecast MSE from e ∗ m+1, . . . , e ∗" }, { "code": null, "e": 8093, "s": 7889, "text": "The p-value of Ljung-Box test of residuals is 0.2131015 > significant level(0.05); therefore, it is not advisable to use the result of the cross-validation as the model is clearly under-fitting the data." }, { "code": null, "e": 8385, "s": 8093, "text": "The first diagnostic that we do with any statistical analysis is check that our residuals correspond to our assumed error structure. We have two types of errors in a uni-variate state-space model: process errors, the wt, and observation errors, the vt. They should not have a temporal trend." 
}, { "code": null, "e": 8476, "s": 8385, "text": "vt are the difference between the data and the predicted data at time t: vt = yt − Zxt − a" }, { "code": null, "e": 8677, "s": 8476, "text": "In a state-space model, xt is stochastic and the model residuals are a random variable. yt is also stochastic, though often observed unlike xt. The model residual random variable is: Vt = Yt − ZXt − a" }, { "code": null, "e": 8730, "s": 8677, "text": "The unconditional mean and variance of Vt is 0 and R" }, { "code": null, "e": 8752, "s": 8730, "text": "checkresiduals(train)" }, { "code": null, "e": 9187, "s": 8752, "text": "Kalman filter algorithm uses a series of measurements observed over time, containing noise and other inaccuracies, and produces estimates of unknown variables. This estimate tend to be more accurate than those based on a single measurement alone. Using a Kalman filter does not assume that the errors are Gaussian; however, the filter yields the exact conditional probability estimate in the special case that all errors are Gaussian." }, { "code": null, "e": 9382, "s": 9187, "text": "Kalman filter is a means to find the estimates of the process. Filtering comes from its primitive use of reducing or “filtering out” unwanted variables which in our case is the estimation error." }, { "code": null, "e": 9655, "s": 9382, "text": "sm <- tsSmooth(train)plot(df1)lines(sm[,1],col=’blue’)lines(fitted(train)[,1],col=’red’)# Seasonally adjusted datatraining.sa <- df1 — sm[, 1]lines(training.sa, col=’black’)legend(“topleft”,col=c(‘blue’, ’red’, ‘black’), lty=1, legend=c(“Filtered level”, ”Smoothed level”)" }, { "code": null, "e": 9947, "s": 9655, "text": "x <- trainingmiss <- sample(1:length(x), 12)x[miss] <- NAestim <- sm[,1] + sm[, 2]plot(x, ylim=range(df1))points(time(x)[miss], estim[miss], col = ’red’, pch = 1)points(time(x)[miss], df1[miss], col = ’blue’, pch = 1)legend(“topleft”, pch = 1, col = c(2,1), legend = c(“Estimate”, ”Actual”))" }, { "code": null, "e": 10049, "s": 9947, "text": "plot(sm, main = “”)mtext(text = “decomposition of the basic structural”, side = 3, adj = 0, line = 1)" }, { "code": null, "e": 10107, "s": 10049, "text": "sm %>% forecast(h=12) %>% autoplot() + autolayer(testing)" }, { "code": null, "e": 10206, "s": 10107, "text": "Below plot shows the foretasted Schlumberger data together with 50% and 90% probability intervals." }, { "code": null, "e": 10404, "s": 10206, "text": "As we can see that, BSM model is been able to pick up the seasonal component quite well . One can experiment here with SMA based decomposition ( as shown earlier) and compare the forecast accuracy." }, { "code": null, "e": 10573, "s": 10404, "text": "dlm models are a special case of state space models where the errors of the state and observed components are normally distributed. Here, Kalman filter will be used to:" }, { "code": null, "e": 10607, "s": 10573, "text": "filtered values of state vectors." }, { "code": null, "e": 10653, "s": 10607, "text": "smoothed values of state vectors and finally," }, { "code": null, "e": 10726, "s": 10653, "text": "forecast provides means and variances of future observations and states." }, { "code": null, "e": 11030, "s": 10726, "text": "We have to define the parameters before fitting a dlm model. The parameters are V, W (covariance matrices of the measurement and state equations, respectively), FF and GG (measurement equation matrix and transition matrix respectively), and m0, C0 (prior mean and covariance matrix of the state vector)." 
}, { "code": null, "e": 11106, "s": 11030, "text": "However, here, we start the dlm model by writing a small function as below:" }, { "code": null, "e": 11346, "s": 11106, "text": "I have considered a local level model with dlm A polynomial DLM (a local linear trend is a polynomial DLM of order 2) and seasonal component 12. It’s good practice rather part of best practice to check the convergence of the MLE procedure." }, { "code": null, "e": 11400, "s": 11346, "text": "Kalman filter and smoother have been applied as well." }, { "code": null, "e": 11679, "s": 11400, "text": "We can see that, dlm model’s prediction accuracy fairly well. Filter and smooth lines are almost moving together in the series and do not differ much from each other. The seasonal components are ignored here. The lines of forecast series and the original series are quite close." }, { "code": null, "e": 11761, "s": 11679, "text": "A good example of state-space models with time series analysis can be found here." }, { "code": null, "e": 12097, "s": 11761, "text": "State space models come in lots of flavors and a flexible way of handling lots of time series models and provide a framework for handling missing values, likelihood estimation, smoothing, forecasting, etc. Both uni-variate and multi-variate data can be used to fit state space model. We have shown a basic level model in this exercise." }, { "code": null, "e": 12120, "s": 12097, "text": "I can be reached here." }, { "code": null, "e": 12131, "s": 12120, "text": "Reference:" }, { "code": null, "e": 12508, "s": 12131, "text": "Durbin, J., & Koopman, S. J. (2012). Time series analysis by state space methods. Oxford university press.Giovanni Petris & Sonia Petrone (2011), State Space Models in R, Journal of Statistical SoftwareG Petris, S Petrone, and P Campagnoli (2009). Dynamic Linear Models with R. SpringerHyndman, R. J., & Athanasopoulos, G. (2018). Forecasting: principles and practice. OTexts." }, { "code": null, "e": 12615, "s": 12508, "text": "Durbin, J., & Koopman, S. J. (2012). Time series analysis by state space methods. Oxford university press." }, { "code": null, "e": 12712, "s": 12615, "text": "Giovanni Petris & Sonia Petrone (2011), State Space Models in R, Journal of Statistical Software" }, { "code": null, "e": 12797, "s": 12712, "text": "G Petris, S Petrone, and P Campagnoli (2009). Dynamic Linear Models with R. Springer" } ]
Program to find maximum score in stone game in Python
Suppose there are several stones placed in a row, and each of these stones has an associated number which is given in an array stoneValue. In each round Amal divides the row into two parts, then Bimal calculates the value of each part, which is the sum of the values of all the stones in that part. Bimal throws away the part which has the maximum value, and Amal's score increases by the value of the remaining part. When the values of the two parts are the same, Bimal lets Amal decide which part will be thrown away. The next round starts with the remaining part. The game ends when there is only one stone remaining. We have to find the maximum score that Amal can get.

So, if the input is like stoneValue = [7,3,4,5,6,6], then the output will be 24.

At round 1, Amal divides the row like [7,3,4], [5,6,6]. The sum of the left row is 14 and the sum of the right row is 17. Bimal throws away the right row and Amal's score is now 14.

At round 2, Amal divides the row to [7], [3,4]. So Bimal throws away the left row and Amal's score becomes (14 + 7) = 21.

At round 3, Amal has only one choice to divide the row, which is [3], [4]. Bimal throws away the right row and Amal's score is now (21 + 3) = 24.

To solve this, we will follow these steps −

Define a function dfs(). This will take start, end
   if start >= end, then
      return 0
   max_score := 0
   for cut in range start to end, do
      sum1 := partial_sum[start, cut]
      sum2 := partial_sum[cut+1, end]
      if sum1 > sum2, then
         score := sum2 + dfs(cut+1, end)
      otherwise when sum1 < sum2, then
         score := sum1 + dfs(start, cut)
      otherwise,
         score := sum1 + maximum of dfs(start, cut) and dfs(cut+1, end)
      max_score := maximum of score and max_score
   return max_score

Define a function getPartialSum().
   for i in range 0 to n - 1, do
      partial_sum[i, i] := stoneValue[i]
   for i in range 0 to n - 1, do
      for j in range i+1 to n - 1, do
         partial_sum[i, j] := partial_sum[i, j-1] + stoneValue[j]

From the main method, do the following −
   n := size of stoneValue
   partial_sum := a square array of size n x n, filled with 0
   getPartialSum()
   return dfs(0, n-1)

Let us see the following implementation to get a better understanding −

def solve(stoneValue):
   def dfs(start, end):
      if start >= end:
         return 0
      max_score = 0

      for cut in range(start, end):
         sum1 = partial_sum[start][cut]
         sum2 = partial_sum[cut+1][end]
         if sum1 > sum2:
            score = sum2+dfs(cut+1, end)
         elif sum1 < sum2:
            score = sum1+dfs(start, cut)
         else:
            score = sum1+max(dfs(start, cut), dfs(cut+1, end))
         max_score = max(score, max_score)
      return max_score

   def getPartialSum():
      for i in range(n):
         partial_sum[i][i] = stoneValue[i]
      for i in range(n):
         for j in range(i+1, n):
            partial_sum[i][j] = partial_sum[i][j-1]+stoneValue[j]

   n = len(stoneValue)
   partial_sum = [[0]*n for _ in range(n)]
   getPartialSum()
   return dfs(0, n-1)

stoneValue = [7,3,4,5,6,6]
print(solve(stoneValue))

Input:
[7,3,4,5,6,6]
Output:
24
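The recursive dfs above solves the same (start, end) sub-rows repeatedly, so it can become slow for longer rows. A variation that memoizes dfs with functools.lru_cache is sketched below; this is an optional optimization layered on the same idea, not part of the original solution. For the sample row it prints 24, the score derived in the walkthrough above.

from functools import lru_cache

def solve(stoneValue):
   n = len(stoneValue)

   # prefix sums so that any range sum is available in O(1)
   prefix = [0] * (n + 1)
   for i, v in enumerate(stoneValue):
      prefix[i + 1] = prefix[i] + v

   def range_sum(i, j):
      # sum of stoneValue[i..j] inclusive
      return prefix[j + 1] - prefix[i]

   @lru_cache(maxsize=None)
   def dfs(start, end):
      if start >= end:
         return 0
      max_score = 0
      for cut in range(start, end):
         sum1 = range_sum(start, cut)
         sum2 = range_sum(cut + 1, end)
         if sum1 > sum2:
            score = sum2 + dfs(cut + 1, end)
         elif sum1 < sum2:
            score = sum1 + dfs(start, cut)
         else:
            score = sum1 + max(dfs(start, cut), dfs(cut + 1, end))
         max_score = max(max_score, score)
      return max_score

   return dfs(0, n - 1)

stoneValue = [7,3,4,5,6,6]
print(solve(stoneValue))   # 24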
[ { "code": null, "e": 1726, "s": 1062, "text": "Suppose there are several stones placed in a row, and each of these stones has an associated number which is given in an array stoneValue. In each round Amal divides the row into two parts then Bimal calculates the value of each part which is the sum of the values of all the stones in this part. Bimal throws away the part which has the maximum value, and Amal's score increases by the value of the remaining part. When the values of two parts are same, Bimal lets Amal decide which part will be thrown away. The next round starts with the remaining part. The game ends when there is only one stone remaining. We have to find the maximum score that Amal can get." }, { "code": null, "e": 1803, "s": 1726, "text": "So, if the input is like stoneValue = [7,3,4,5,6,6], then the output will be" }, { "code": null, "e": 1969, "s": 1803, "text": "At round 1, Amal divides the row like [7,3,4], [5,6,6]. Sum of left row is 14 and sum of right row is 17. Bimal throws away the right row and Amal's score is now 14." }, { "code": null, "e": 2135, "s": 1969, "text": "At round 1, Amal divides the row like [7,3,4], [5,6,6]. Sum of left row is 14 and sum of right row is 17. Bimal throws away the right row and Amal's score is now 14." }, { "code": null, "e": 2257, "s": 2135, "text": "At round 2, Amal divides the row to [7], [3,4]. So Bimal throws away the left row and Amal's score becomes (14 + 7) = 21." }, { "code": null, "e": 2379, "s": 2257, "text": "At round 2, Amal divides the row to [7], [3,4]. So Bimal throws away the left row and Amal's score becomes (14 + 7) = 21." }, { "code": null, "e": 2524, "s": 2379, "text": "At round 3, Amal has only one choice to divide the row which is [3], [4]. Bimal throws away the right row and Amal's score is now (21 + 3) = 24." }, { "code": null, "e": 2669, "s": 2524, "text": "At round 3, Amal has only one choice to divide the row which is [3], [4]. Bimal throws away the right row and Amal's score is now (21 + 3) = 24." }, { "code": null, "e": 2713, "s": 2669, "text": "To solve this, we will follow these steps −" }, { "code": null, "e": 2765, "s": 2713, "text": "Define a function dfs() . This will take start, end" }, { "code": null, "e": 2817, "s": 2765, "text": "Define a function dfs() . 
This will take start, end" }, { "code": null, "e": 2847, "s": 2817, "text": "if start >= end, thenreturn 0" }, { "code": null, "e": 2869, "s": 2847, "text": "if start >= end, then" }, { "code": null, "e": 2878, "s": 2869, "text": "return 0" }, { "code": null, "e": 2887, "s": 2878, "text": "return 0" }, { "code": null, "e": 2902, "s": 2887, "text": "max_score := 0" }, { "code": null, "e": 2917, "s": 2902, "text": "max_score := 0" }, { "code": null, "e": 3243, "s": 2917, "text": "for cut in range start to end, dosum1 := partial_sum[start, cut]sum2 := partial_sum[cut+1, end]if sum1 > sum2 , thenscore := sum2 + dfs(cut+1, end)otherwise when sum1 < sum2, thenscore := sum1 + dfs(start, cut)otherwise,score := sum1 + maximum of dfs(start, cut) and dfs(cut+1, end)max_score := maximum of score and max_score" }, { "code": null, "e": 3277, "s": 3243, "text": "for cut in range start to end, do" }, { "code": null, "e": 3309, "s": 3277, "text": "sum1 := partial_sum[start, cut]" }, { "code": null, "e": 3341, "s": 3309, "text": "sum1 := partial_sum[start, cut]" }, { "code": null, "e": 3373, "s": 3341, "text": "sum2 := partial_sum[cut+1, end]" }, { "code": null, "e": 3405, "s": 3373, "text": "sum2 := partial_sum[cut+1, end]" }, { "code": null, "e": 3458, "s": 3405, "text": "if sum1 > sum2 , thenscore := sum2 + dfs(cut+1, end)" }, { "code": null, "e": 3480, "s": 3458, "text": "if sum1 > sum2 , then" }, { "code": null, "e": 3512, "s": 3480, "text": "score := sum2 + dfs(cut+1, end)" }, { "code": null, "e": 3544, "s": 3512, "text": "score := sum2 + dfs(cut+1, end)" }, { "code": null, "e": 3608, "s": 3544, "text": "otherwise when sum1 < sum2, thenscore := sum1 + dfs(start, cut)" }, { "code": null, "e": 3641, "s": 3608, "text": "otherwise when sum1 < sum2, then" }, { "code": null, "e": 3673, "s": 3641, "text": "score := sum1 + dfs(start, cut)" }, { "code": null, "e": 3705, "s": 3673, "text": "score := sum1 + dfs(start, cut)" }, { "code": null, "e": 3778, "s": 3705, "text": "otherwise,score := sum1 + maximum of dfs(start, cut) and dfs(cut+1, end)" }, { "code": null, "e": 3789, "s": 3778, "text": "otherwise," }, { "code": null, "e": 3852, "s": 3789, "text": "score := sum1 + maximum of dfs(start, cut) and dfs(cut+1, end)" }, { "code": null, "e": 3915, "s": 3852, "text": "score := sum1 + maximum of dfs(start, cut) and dfs(cut+1, end)" }, { "code": null, "e": 3959, "s": 3915, "text": "max_score := maximum of score and max_score" }, { "code": null, "e": 4003, "s": 3959, "text": "max_score := maximum of score and max_score" }, { "code": null, "e": 4020, "s": 4003, "text": "return max_score" }, { "code": null, "e": 4037, "s": 4020, "text": "return max_score" }, { "code": null, "e": 4088, "s": 4037, "text": "Define a function getPartialSum() . This will take" }, { "code": null, "e": 4139, "s": 4088, "text": "Define a function getPartialSum() . 
This will take" }, { "code": null, "e": 4203, "s": 4139, "text": "for i in range 0 to n - 1, dopartial_sum[i, i] := stoneValue[i]" }, { "code": null, "e": 4233, "s": 4203, "text": "for i in range 0 to n - 1, do" }, { "code": null, "e": 4268, "s": 4233, "text": "partial_sum[i, i] := stoneValue[i]" }, { "code": null, "e": 4303, "s": 4268, "text": "partial_sum[i, i] := stoneValue[i]" }, { "code": null, "e": 4420, "s": 4303, "text": "for i in range 0 to n - 1, dofor j in range i+1 to n - 1, dopartial_sum[i, j] := partial_sum[i, j-1] + stoneValue[j]" }, { "code": null, "e": 4450, "s": 4420, "text": "for i in range 0 to n - 1, do" }, { "code": null, "e": 4538, "s": 4450, "text": "for j in range i+1 to n - 1, dopartial_sum[i, j] := partial_sum[i, j-1] + stoneValue[j]" }, { "code": null, "e": 4570, "s": 4538, "text": "for j in range i+1 to n - 1, do" }, { "code": null, "e": 4627, "s": 4570, "text": "partial_sum[i, j] := partial_sum[i, j-1] + stoneValue[j]" }, { "code": null, "e": 4684, "s": 4627, "text": "partial_sum[i, j] := partial_sum[i, j-1] + stoneValue[j]" }, { "code": null, "e": 4723, "s": 4684, "text": "From the main method, do the following" }, { "code": null, "e": 4762, "s": 4723, "text": "From the main method, do the following" }, { "code": null, "e": 4786, "s": 4762, "text": "n := size of stoneValue" }, { "code": null, "e": 4810, "s": 4786, "text": "n := size of stoneValue" }, { "code": null, "e": 4872, "s": 4810, "text": "partial_sum := A square array of size n x n and filled with 0" }, { "code": null, "e": 4934, "s": 4872, "text": "partial_sum := A square array of size n x n and filled with 0" }, { "code": null, "e": 4950, "s": 4934, "text": "getPartialSum()" }, { "code": null, "e": 4966, "s": 4950, "text": "getPartialSum()" }, { "code": null, "e": 4985, "s": 4966, "text": "return dfs(0, n-1)" }, { "code": null, "e": 5004, "s": 4985, "text": "return dfs(0, n-1)" }, { "code": null, "e": 5072, "s": 5004, "text": "Let us see the following implementation to get better understanding" }, { "code": null, "e": 5958, "s": 5072, "text": "def solve(stoneValue):\n def dfs(start, end):\n if start >= end:\n return 0\n max_score = 0\n\n for cut in range(start, end):\n sum1 = partial_sum[start][cut]\n sum2 = partial_sum[cut+1][end]\n if sum1 > sum2:\n score = sum2+dfs(cut+1, end)\n elif sum1 < sum2:\n score = sum1+dfs(start, cut)\n else:\n score = sum1+max(dfs(start, cut), dfs(cut+1, end))\n max_score = max(score, max_score)\n return max_score\n\n\n def getPartialSum():\n for i in range(n):\n partial_sum[i][i] = stoneValue[i]\n for i in range(n):\n for j in range(i+1, n):\n partial_sum[i][j] = partial_sum[i][j-1]+stoneValue[j]\n\n\n n = len(stoneValue)\n partial_sum = [[0]*n for _ in range(n)]\n getPartialSum()\n return dfs(0, n-1)\n\nstoneValue = [7,3,4,5,6,6]\nprint(solve(stoneValue))" }, { "code": null, "e": 5972, "s": 5958, "text": "[7,3,4,5,6,6]" }, { "code": null, "e": 5974, "s": 5972, "text": "0" } ]
C# | Convert.ToInt32(String, IFormatProvider) Method - GeeksforGeeks
05 Dec, 2019

This method converts the specified string representation of a number to an equivalent 32-bit signed integer, using the specified culture-specific formatting information.

Syntax:

public static int ToInt32 (string value, IFormatProvider provider);

Parameters:

value: It is a string that contains the number to convert.

provider: An object that supplies culture-specific formatting information.

Return Value: This method returns a 32-bit signed integer that is equivalent to the number in value, or 0 (zero) if value is null.

Exceptions:

FormatException: If the value does not consist of an optional sign followed by a sequence of digits (0 through 9).

OverflowException: If the value represents a number that is less than MinValue or greater than MaxValue.

Below programs illustrate the use of the Convert.ToInt32(String, IFormatProvider) method:

Example 1:

// C# program to demonstrate the
// Convert.ToInt32() Method
using System;
using System.Globalization;

class GFG {

// Main Method
public static void Main()
{
    try {
        // creating object of CultureInfo
        CultureInfo cultures = new CultureInfo("en-US");

        // declaring and initializing String array
        string[] values = {"12345", "+12345", "-12345"};

        // calling get() Method
        Console.Write("Converted int value " +
                      "of specified strings: ");

        for (int j = 0; j < values.Length; j++) {
            get(values[j], cultures);
        }
    }
    catch (FormatException e) {
        Console.WriteLine("\n");
        Console.Write("Exception Thrown: ");
        Console.Write("{0}", e.GetType(), e.Message);
    }
    catch (OverflowException e) {
        Console.WriteLine("\n");
        Console.Write("Exception Thrown: ");
        Console.Write("{0}", e.GetType(), e.Message);
    }
}

// Defining get() method
public static void get(string s, CultureInfo cultures)
{
    // converting string to the specified int
    int val = Convert.ToInt32(s, cultures);

    // display the converted int value
    Console.Write(" {0}, ", val);
}
}

Output:

Converted int value of specified strings: 12345, 12345, -12345,

Example 2: For FormatException

// C# program to demonstrate the
// Convert.ToInt32() Method
using System;
using System.Globalization;

class GFG {

// Main Method
public static void Main()
{
    try {
        // creating object of CultureInfo
        CultureInfo cultures = new CultureInfo("en-US");

        // declaring and initializing String array
        string[] values = {"12345", "+12345", "-12345"};

        // calling get() Method
        Console.WriteLine("Converted int value" +
                          " of specified strings: ");

        for (int j = 0; j < values.Length; j++) {
            get(values[j], cultures);
        }
        Console.WriteLine("\n");

        string s = "123 456, 789";
        Console.WriteLine("format of s is invalid ");

        // converting string to the specified int
        int val = Convert.ToInt32(s, cultures);

        // display the converted int value
        Console.Write(" {0}, ", val);
    }
    catch (FormatException e) {
        Console.Write("Exception Thrown: ");
        Console.Write("{0}", e.GetType(), e.Message);
    }
    catch (OverflowException e) {
        Console.Write("Exception Thrown: ");
        Console.Write("{0}", e.GetType(), e.Message);
    }
}

// Defining get() method
public static void get(string s, CultureInfo cultures)
{
    // converting string to the specified int value
    int val = Convert.ToInt32(s, cultures);

    // display the converted int value
    Console.Write(" {0}, ", val);
}
}

Output:

Converted int value of specified strings: 
 12345, 12345, -12345, 

format of s is invalid 
Exception Thrown: System.FormatException

Example 3: For OverflowException

// C# program to demonstrate the
// Convert.ToInt32() Method
using System;
using System.Globalization;

class GFG {

// Main Method
public static void Main()
{
    try {
        // creating object of CultureInfo
        CultureInfo cultures = new CultureInfo("en-US");

        // declaring and initializing String array
        string[] values = {"12345", "+12345", "-12345"};

        // calling get() Method
        Console.WriteLine("Converted int value " +
                          "of specified strings: ");

        for (int j = 0; j < values.Length; j++) {
            get(values[j], cultures);
        }
        Console.WriteLine("\n");

        string s = "-7922816251426433759354395033500000";
        Console.WriteLine("s is less than the MinValue");

        // converting string to the specified int
        int val = Convert.ToInt32(s, cultures);

        // display the converted int value
        Console.Write(" {0}, ", val);
    }
    catch (FormatException e) {
        Console.Write("Exception Thrown: ");
        Console.Write("{0}", e.GetType(), e.Message);
    }
    catch (OverflowException e) {
        Console.Write("Exception Thrown: ");
        Console.Write("{0}", e.GetType(), e.Message);
    }
}

// Defining get() method
public static void get(string s, CultureInfo cultures)
{
    // converting string to the specified int value
    int val = Convert.ToInt32(s, cultures);

    // display the converted int value
    Console.Write(" {0}, ", val);
}
}

Output:

Converted int value of specified strings: 
 12345, 12345, -12345, 

s is less than the MinValue
Exception Thrown: System.OverflowException

Reference: https://docs.microsoft.com/en-us/dotnet/api/system.convert.toint32?view=netframework-4.7.2#System_Convert_ToInt32_System_String_System_IFormatProvider_
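The return-value note above, that a null string yields 0 rather than an exception, is not exercised by any of the examples; a minimal sketch follows (the class name here is illustrative only):

// Convert.ToInt32 returns 0 for a null string,
// unlike int.Parse, which would throw an ArgumentNullException.
using System;
using System.Globalization;

class NullValueDemo {
    public static void Main()
    {
        string s = null;
        int val = Convert.ToInt32(s, new CultureInfo("en-US"));
        Console.WriteLine(val);   // prints 0
    }
}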
[ { "code": null, "e": 24302, "s": 24274, "text": "\n05 Dec, 2019" }, { "code": null, "e": 24483, "s": 24302, "text": "This method is used to converts the specified string representation of a number to an equivalent 32-bit signed integer, using the specified culture-specific formatting information." }, { "code": null, "e": 24491, "s": 24483, "text": "Syntax:" }, { "code": null, "e": 24559, "s": 24491, "text": "public static int ToInt32 (string value, IFormatProvider provider);" }, { "code": null, "e": 24571, "s": 24559, "text": "Parameters:" }, { "code": null, "e": 24630, "s": 24571, "text": "value: It is a string that contains the number to convert." }, { "code": null, "e": 24705, "s": 24630, "text": "provider: An object that supplies culture-specific formatting information." }, { "code": null, "e": 24836, "s": 24705, "text": "Return Value: This method returns a 32-bit signed integer that is equivalent to the number in value, or 0 (zero) if value is null." }, { "code": null, "e": 24848, "s": 24836, "text": "Exceptions:" }, { "code": null, "e": 24963, "s": 24848, "text": "FormatException: If the value does not consist of an optional sign followed by a sequence of digits (0 through 9)." }, { "code": null, "e": 25068, "s": 24963, "text": "OverflowException: If the value represents a number that is less than MinValue or greater than MaxValue." }, { "code": null, "e": 25154, "s": 25068, "text": "Below programs illustrate the use of Convert.ToInt32(String, IFormatProvider) Method:" }, { "code": null, "e": 25165, "s": 25154, "text": "Example 1:" }, { "code": "// C# program to demonstrate the// Convert.ToInt32() Methodusing System;using System.Globalization; class GFG { // Main Methodpublic static void Main(){ try { // creating object of CultureInfo CultureInfo cultures = new CultureInfo(\"en-US\"); // declaring and initializing String array string[] values = {\"12345\", \"+12345\", \"-12345\"}; // calling get() Method Console.Write(\"Converted int value \" + \"of specified strings: \"); for (int j = 0; j < values.Length; j++) { get(values[j], cultures); } } catch (FormatException e) { Console.WriteLine(\"\\n\"); Console.Write(\"Exception Thrown: \"); Console.Write(\"{0}\", e.GetType(), e.Message); } catch (OverflowException e) { Console.WriteLine(\"\\n\"); Console.Write(\"Exception Thrown: \"); Console.Write(\"{0}\", e.GetType(), e.Message); }} // Defining get() methodpublic static void get(string s, CultureInfo cultures){ // converting string to specified int int val = Convert.ToInt32(s, cultures); // display the converted char value Console.Write(\" {0}, \", val);}}", "e": 26407, "s": 25165, "text": null }, { "code": null, "e": 26475, "s": 26407, "text": "Converted int value of specified strings: 12345, 12345, -12345,\n" }, { "code": null, "e": 26506, "s": 26475, "text": "Example 2: For FormatException" }, { "code": "// C# program to demonstrate the// Convert.ToInt32() Methodusing System;using System.Globalization; class GFG { // Main Methodpublic static void Main(){ try { // creating object of CultureInfo CultureInfo cultures = new CultureInfo(\"en-US\"); // declaring and initializing String array string[] values = {\"12345\", \"+12345\", \"-12345\" }; // calling get() Method Console.WriteLine(\"Converted int value\" + \" of specified strings: \"); for (int j = 0; j < values.Length; j++) { get(values[j], cultures); } Console.WriteLine(\"\\n\"); string s = \"123 456, 789\"; Console.WriteLine(\"format of s is invalid \"); // converting string to specified char int val = Convert.ToInt32(s, 
cultures); // display the converted char value Console.Write(\" {0}, \", val);} catch (FormatException e) { Console.Write(\"Exception Thrown: \"); Console.Write(\"{0}\", e.GetType(), e.Message);}catch (OverflowException e) { Console.Write(\"Exception Thrown: \"); Console.Write(\"{0}\", e.GetType(), e.Message);}} // Defining get() methodpublic static void get(string s, CultureInfo cultures){ // converting string to// specified int valueint val = Convert.ToInt32(s, cultures); // display the converted int valueConsole.Write(\" {0}, \", val);}}", "e": 27858, "s": 26506, "text": null }, { "code": null, "e": 27994, "s": 27858, "text": "Converted int value of specified strings: \n 12345, 12345, -12345, \n\nformat of s is invalid \nException Thrown: System.FormatException\n" }, { "code": null, "e": 28027, "s": 27994, "text": "Example 3: For OverFlowException" }, { "code": "// C# program to demonstrate the// Convert.ToInt32() Methodusing System;using System.Globalization; class GFG { // Main Methodpublic static void Main(){ try { // creating object of CultureInfo CultureInfo cultures = new CultureInfo(\"en-US\"); // declaring and initializing String array string[] values = {\"12345\", \"+12345\", \"-12345\" }; // calling get() Method Console.WriteLine(\"Converted int value \" + \"of specified strings: \"); for (int j = 0; j < values.Length; j++) { get(values[j], cultures); } Console.WriteLine(\"\\n\"); string s = \"-7922816251426433759354395033500000\"; Console.WriteLine(\"s is less than the MinValue\"); // converting string to specified char int val = Convert.ToInt32(s, cultures); // display the converted char value Console.Write(\" {0}, \", val); } catch (FormatException e) { Console.Write(\"Exception Thrown: \"); Console.Write(\"{0}\", e.GetType(), e.Message); } catch (OverflowException e) { Console.Write(\"Exception Thrown: \"); Console.Write(\"{0}\", e.GetType(), e.Message); }} // Defining get() methodpublic static void get(string s, CultureInfo cultures){ // converting string to // specified int value int val = Convert.ToInt32(s, cultures); // display the converted int value Console.Write(\" {0}, \", val);}}", "e": 29559, "s": 28027, "text": null }, { "code": null, "e": 29701, "s": 29559, "text": "Converted int value of specified strings: \n 12345, 12345, -12345, \n\ns is less than the MinValue\nException Thrown: System.OverflowException\n" }, { "code": null, "e": 29712, "s": 29701, "text": "Reference:" }, { "code": null, "e": 29864, "s": 29712, "text": "https://docs.microsoft.com/en-us/dotnet/api/system.convert.toint32?view=netframework-4.7.2#System_Convert_ToInt32_System_String_System_IFormatProvider_" }, { "code": null, "e": 29885, "s": 29864, "text": "CSharp Convert Class" }, { "code": null, "e": 29899, "s": 29885, "text": "CSharp-method" }, { "code": null, "e": 29902, "s": 29899, "text": "C#" }, { "code": null, "e": 30000, "s": 29902, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 30023, "s": 30000, "text": "Extension Method in C#" }, { "code": null, "e": 30051, "s": 30023, "text": "HashSet in C# with Examples" }, { "code": null, "e": 30091, "s": 30051, "text": "Top 50 C# Interview Questions & Answers" }, { "code": null, "e": 30134, "s": 30091, "text": "C# | How to insert an element in an Array?" 
}, { "code": null, "e": 30150, "s": 30134, "text": "C# | List Class" }, { "code": null, "e": 30167, "s": 30150, "text": "C# | Inheritance" }, { "code": null, "e": 30189, "s": 30167, "text": "Partial Classes in C#" }, { "code": null, "e": 30229, "s": 30189, "text": "Convert String to Character Array in C#" }, { "code": null, "e": 30254, "s": 30229, "text": "Lambda Expressions in C#" } ]
JavaScript | Object.isExtensible() Method - GeeksforGeeks
17 Sep, 2021

The Object.isExtensible() method in JavaScript is a standard built-in method which checks whether an object is extensible or not.

Syntax:

Object.isExtensible( obj )

Parameters: This method accepts a single parameter as mentioned above and described below:

obj: This parameter holds the object which should be checked for extensibility.

Return value: This method returns a Boolean value indicating if the given object is extensible or not.

Below examples illustrate the Object.isExtensible() method in JavaScript:

Example 1:

const geeks1 = {};
console.log(Object.isExtensible(geeks1));
Object.preventExtensions(geeks1);
console.log(Object.isExtensible(geeks1));

const geeks2 = {};
Object.preventExtensions(geeks2);
console.log( Object.isExtensible(geeks2) );

Output:

true
false
false

Example 2:

var geeks1 = {};
document.writeln(Object.isExtensible(geeks1));
document.writeln("<br>");
document.writeln(Object.preventExtensions(geeks1));
document.writeln("<br>");
document.writeln(Object.isExtensible(geeks1));
document.writeln("<br>");

var geeks2 = Object.seal({});
document.writeln(Object.isExtensible(geeks2));
document.writeln("<br>");

var geeks3 = Object.freeze({});
document.writeln(Object.isExtensible(geeks3));

Output:

true
[object Object]
false
false
false

Supported Browsers: The browsers supported by the Object.isExtensible() method are listed below:

Google Chrome 6 and above
Edge 12 and above
Firefox 4 and above
Internet Explorer 9
Opera 12 and above
Safari 5.1 and above
}, { "code": null, "e": 26592, "s": 26583, "text": "Comments" }, { "code": null, "e": 26605, "s": 26592, "text": "Old Comments" }, { "code": null, "e": 26666, "s": 26605, "text": "Difference between var, let and const keywords in JavaScript" }, { "code": null, "e": 26707, "s": 26666, "text": "Difference Between PUT and PATCH Request" }, { "code": null, "e": 26761, "s": 26707, "text": "How to get character array from string in JavaScript?" }, { "code": null, "e": 26801, "s": 26761, "text": "Remove elements from a JavaScript Array" }, { "code": null, "e": 26863, "s": 26801, "text": "How to get selected value in dropdown list using JavaScript ?" }, { "code": null, "e": 26919, "s": 26863, "text": "Top 10 Front End Developer Skills That You Need in 2022" }, { "code": null, "e": 26952, "s": 26919, "text": "Installation of Node.js on Linux" }, { "code": null, "e": 27014, "s": 26952, "text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills" }, { "code": null, "e": 27057, "s": 27014, "text": "How to fetch data from an API in ReactJS ?" } ]
Pascal - Constants
A constant is an entity that remains unchanged during program execution. Pascal allows only constants of the following types to be declared − Ordinal types Set types Pointer types (but the only allowed value is Nil). Real types Char String Syntax for declaring constants is as follows − const identifier = constant_value; The following are examples of some valid constant declarations, grouped by type − Real type constants: e = 2.7182818; velocity_light = 3.0E+10; Ordinal (Integer) type constant: valid_age = 21; Set type constant: Vowels = set of (A,E,I,O,U); Pointer type constant: P = NIL; Character type constant: Operator = '+'; String type constant: president = 'Johnny Depp'; The following example illustrates the concept − program const_circle (input,output); const PI = 3.141592654; var r, d, c : real; {variable declaration: radius, dia, circumference} begin writeln('Enter the radius of the circle'); readln(r); d := 2 * r; c := PI * d; writeln('The circumference of the circle is ',c:7:2); end. When the above code is compiled and executed, it produces the following result − Enter the radius of the circle 23 The circumference of the circle is 144.51 Observe the formatting in the output statement of the program. The variable c is to be formatted with total number of digits 7 and 2 digits after the decimal sign. Pascal allows such output formatting with the numerical variables.
[ { "code": null, "e": 2225, "s": 2083, "text": "A constant is an entity that remains unchanged during program execution. Pascal allows only constants of the following types to be declared −" }, { "code": null, "e": 2239, "s": 2225, "text": "Ordinal types" }, { "code": null, "e": 2249, "s": 2239, "text": "Set types" }, { "code": null, "e": 2300, "s": 2249, "text": "Pointer types (but the only allowed value is Nil)." }, { "code": null, "e": 2311, "s": 2300, "text": "Real types" }, { "code": null, "e": 2316, "s": 2311, "text": "Char" }, { "code": null, "e": 2323, "s": 2316, "text": "String" }, { "code": null, "e": 2370, "s": 2323, "text": "Syntax for declaring constants is as follows −" }, { "code": null, "e": 2405, "s": 2370, "text": "const\nidentifier = constant_value;" }, { "code": null, "e": 2481, "s": 2405, "text": "The following table provides examples of some valid constant declarations −" }, { "code": null, "e": 2500, "s": 2481, "text": "Real type constant" }, { "code": null, "e": 2530, "s": 2500, "text": "Ordinal(Integer)type constant" }, { "code": null, "e": 2546, "s": 2530, "text": "valid_age = 21;" }, { "code": null, "e": 2564, "s": 2546, "text": "Set type constant" }, { "code": null, "e": 2593, "s": 2564, "text": "Vowels = set of (A,E,I,O,U);" }, { "code": null, "e": 2615, "s": 2593, "text": "Pointer type constant" }, { "code": null, "e": 2624, "s": 2615, "text": "P = NIL;" }, { "code": null, "e": 2639, "s": 2624, "text": "e = 2.7182818;" }, { "code": null, "e": 2665, "s": 2639, "text": "velocity_light = 3.0E+10;" }, { "code": null, "e": 2689, "s": 2665, "text": "Character type constant" }, { "code": null, "e": 2705, "s": 2689, "text": "Operator = '+';" }, { "code": null, "e": 2726, "s": 2705, "text": "String type constant" }, { "code": null, "e": 2753, "s": 2726, "text": "president = 'Johnny Depp';" }, { "code": null, "e": 2801, "s": 2753, "text": "The following example illustrates the concept −" }, { "code": null, "e": 3101, "s": 2801, "text": "program const_circle (input,output);\nconst\nPI = 3.141592654;\n\nvar\nr, d, c : real; {variable declaration: radius, dia, circumference}\n\nbegin\n writeln('Enter the radius of the circle');\n readln(r);\n \n d := 2 * r;\n c := PI * d;\n writeln('The circumference of the circle is ',c:7:2);\nend." }, { "code": null, "e": 3182, "s": 3101, "text": "When the above code is compiled and executed, it produces the following result −" }, { "code": null, "e": 3259, "s": 3182, "text": "Enter the radius of the circle\n23\nThe circumference of the circle is 144.51\n" }, { "code": null, "e": 3490, "s": 3259, "text": "Observe the formatting in the output statement of the program. The variable c is to be formatted with total number of digits 7 and 2 digits after the decimal sign. Pascal allows such output formatting with the numerical variables." }, { "code": null, "e": 3525, "s": 3490, "text": "\n 94 Lectures \n 8.5 hours \n" }, { "code": null, "e": 3548, "s": 3525, "text": " Stone River ELearning" }, { "code": null, "e": 3555, "s": 3548, "text": " Print" }, { "code": null, "e": 3566, "s": 3555, "text": " Add Notes" } ]
MySQL Tryit Editor v1.0
SELECT LOCATE("3", "W3Schools.com") AS MatchPosition; ​ Edit the SQL Statement, and click "Run SQL" to see the result. This SQL-Statement is not supported in the WebSQL Database. The example still works, because it uses a modified version of SQL. Your browser does not support WebSQL. Your are now using a light-version of the Try-SQL Editor, with a read-only Database. If you switch to a browser with WebSQL support, you can try any SQL statement, and play with the Database as much as you like. The Database can also be restored at any time. Our Try-SQL Editor uses WebSQL to demonstrate SQL. A Database-object is created in your browser, for testing purposes. You can try any SQL statement, and play with the Database as much as you like. The Database can be restored at any time, simply by clicking the "Restore Database" button. WebSQL stores a Database locally, on the user's computer. Each user gets their own Database object. WebSQL is supported in Chrome, Safari, and Opera. If you use another browser you will still be able to use our Try SQL Editor, but a different version, using a server-based ASP application, with a read-only Access Database, where users are not allowed to make any changes to the data.
[ { "code": null, "e": 54, "s": 0, "text": "SELECT LOCATE(\"3\", \"W3Schools.com\") AS MatchPosition;" }, { "code": null, "e": 56, "s": 54, "text": "​" }, { "code": null, "e": 128, "s": 65, "text": "Edit the SQL Statement, and click \"Run SQL\" to see the result." }, { "code": null, "e": 188, "s": 128, "text": "This SQL-Statement is not supported in the WebSQL Database." }, { "code": null, "e": 256, "s": 188, "text": "The example still works, because it uses a modified version of SQL." }, { "code": null, "e": 294, "s": 256, "text": "Your browser does not support WebSQL." }, { "code": null, "e": 379, "s": 294, "text": "Your are now using a light-version of the Try-SQL Editor, with a read-only Database." }, { "code": null, "e": 553, "s": 379, "text": "If you switch to a browser with WebSQL support, you can try any SQL statement, and play with the Database as much as you like. The Database can also be restored at any time." }, { "code": null, "e": 604, "s": 553, "text": "Our Try-SQL Editor uses WebSQL to demonstrate SQL." }, { "code": null, "e": 672, "s": 604, "text": "A Database-object is created in your browser, for testing purposes." }, { "code": null, "e": 843, "s": 672, "text": "You can try any SQL statement, and play with the Database as much as you like. The Database can be restored at any time, simply by clicking the \"Restore Database\" button." }, { "code": null, "e": 943, "s": 843, "text": "WebSQL stores a Database locally, on the user's computer. Each user gets their own Database object." }, { "code": null, "e": 993, "s": 943, "text": "WebSQL is supported in Chrome, Safari, and Opera." } ]
Sort array of points by ascending distance from a given point JavaScript
Let’s say, we have an array of objects with each object having exactly two properties, x and y, that represent the coordinates of a point. We have to write a function that takes in this array and an object with x and y coordinates of a point, and we have to sort the points (objects) in the array according to their distance from the given point (nearest to farthest). The distance formula is a mathematical formula that states that the shortest distance between two points (x1, y1) and (x2, y2) in a two-dimensional plane is given by − S = √((x2−x1)^2 + (y2−y1)^2) We will be using this formula to calculate the distance of each point from the given point and sort them according to that. const coordinates = [{x:2,y:6},{x:14,y:10},{x:7,y:10},{x:11,y:6},{x:6,y:2}]; const distance = (coor1, coor2) => { const x = coor2.x - coor1.x; const y = coor2.y - coor1.y; return Math.sqrt((x*x) + (y*y)); }; const sortByDistance = (coordinates, point) => { const sorter = (a, b) => distance(a, point) - distance(b, point); coordinates.sort(sorter); }; sortByDistance(coordinates, {x: 5, y: 4}); console.log(coordinates); The output in the console will be − [ { x: 6, y: 2 }, { x: 2, y: 6 }, { x: 7, y: 10 }, { x: 11, y: 6 }, { x: 14, y: 10 } ] And this is in fact the correct order as (6, 2) is nearest to (5, 4), then comes (2, 6), then (7, 10) and so on.
[ { "code": null, "e": 1427, "s": 1062, "text": "Let’s say, we have an array of objects with each object having exactly two properties, x and y\nthat represent the coordinates of a point. We have to write a function that takes in this array and\nan object with x and y coordinates of a point and we have to sort the points (objects) in the array\naccording to the distance from the given point (nearest to farthest)." }, { "code": null, "e": 1577, "s": 1427, "text": "It is a mathematical formula that states that the shortest distance between two points (x1, y1)\nand (x2, y2) in a two-dimensional plane is given by −" }, { "code": null, "e": 1599, "s": 1577, "text": "S=((x2−x1)2+(y2−y1)2)" }, { "code": null, "e": 1723, "s": 1599, "text": "We will be using this formula to calculate the distance of each point from the given point and\nsort them according to that." }, { "code": null, "e": 2159, "s": 1723, "text": "const coordinates =\n[{x:2,y:6},{x:14,y:10},{x:7,y:10},{x:11,y:6},{x:6,y:2}];\nconst distance = (coor1, coor2) => {\n const x = coor2.x - coor1.x;\n const y = coor2.y - coor1.y;\n return Math.sqrt((x*x) + (y*y));\n};\nconst sortByDistance = (coordinates, point) => {\n const sorter = (a, b) => distance(a, point) - distance(b, point);\n coordinates.sort(sorter);\n};\nsortByDistance(coordinates, {x: 5, y: 4});\nconsole.log(coordinates);" }, { "code": null, "e": 2195, "s": 2159, "text": "The output in the console will be −" }, { "code": null, "e": 2297, "s": 2195, "text": "[\n { x: 6, y: 2 },\n { x: 2, y: 6 },\n { x: 7, y: 10 },\n { x: 11, y: 6 },\n { x: 14, y: 10 }\n]" }, { "code": null, "e": 2408, "s": 2297, "text": "And this is in fact the correct order as (6, 2) is nearest to (5,4), then comes (2, 6) then (7, 10)\nand so on." } ]
Instruction type PUSH rp in 8085 Microprocessor
In the 8085 instruction set, the PUSH rp instruction stores the contents of register pair rp by pushing them into the two locations just above the top of the stack. rp stands for one of the following register pairs. rp = BC, DE, HL, or PSW As rp can have any of the four values, there are four opcodes for this type of instruction. It occupies only 1-Byte in memory. In these opcodes, 2 bits are used to specify the register pair, and 2 bits allow 4 combinations, so 4 register pairs can be encoded with PUSH. As mentioned earlier, they are BC, DE, HL and AF (PSW). Note that with the LXI instruction the 4 possible register pairs are BC, DE, HL and SP, so SP and PSW can never both be applicable to the same instruction. Let us consider PUSH B as an example instruction of this category. It is a 1-Byte instruction. The result of execution of this instruction is shown below with an example: if SP initially holds 4000H, PUSH B stores the contents of B at location 3FFFH and the contents of C at location 3FFEH, leaving SP = 3FFEH. The timing diagram against this instruction PUSH B execution is as follows − Summary − So this instruction PUSH B requires 1-Byte, 3-Machine Cycles (Opcode Fetch, Memory Write, Memory Write) and 12 T-States for execution as shown in the timing diagram.
[ { "code": null, "e": 1255, "s": 1062, "text": "In 8085 Instruction set, PUSH rp instruction stores contents of register pair rp by pushing it into two locations above the top of the stack. rp stands for one of the following register pairs." }, { "code": null, "e": 1280, "s": 1255, "text": "rp = BC, DE, HL, or PSW\n" }, { "code": null, "e": 1407, "s": 1280, "text": "As rp can have any of the four values, there are four opcodes for this type of instruction. It occupies only 1-Byte in memory." }, { "code": null, "e": 1623, "s": 1407, "text": "In the above mention Opcodes, 2-bits are occupied to mention the register pair. 2-bits can have 4 combinations. So 4 register pairs can be mentioned with POP. As mentioned earlier, they are BC, DE, HL and AF or PSW." }, { "code": null, "e": 1809, "s": 1623, "text": "Note with LXI instruction, we are having 4 possible register pairs can be used i.e. BC, DE, HL and SP. So at the same time we can’t have SP and PSW applicable with the same instruction." }, { "code": null, "e": 1904, "s": 1809, "text": "Let us consider PUSH B as an example instruction of this category. It is a 1-Byte instruction." }, { "code": null, "e": 1980, "s": 1904, "text": "The result of execution of this instruction is shown below with an example." }, { "code": null, "e": 1985, "s": 1980, "text": "(BC)" }, { "code": null, "e": 1990, "s": 1985, "text": "(SP)" }, { "code": null, "e": 1998, "s": 1990, "text": "(3FFFH)" }, { "code": null, "e": 2006, "s": 1998, "text": "(3FFEH)" }, { "code": null, "e": 2083, "s": 2006, "text": "The timing diagram against this instruction PUSH B execution is as follows −" }, { "code": null, "e": 2259, "s": 2083, "text": "Summary − So this instruction PUSH B requires 1-Byte, 3-Machine Cycles (Opcode Fetch, Memory Write, Memory Write) and 12 T-States for execution as shown in the timing diagram." } ]
Sum of all the elements in an array divisible by a given number K - GeeksforGeeks
03 Aug, 2021 Given an array containing N elements and a number K. The task is to find the sum of all such elements which are divisible by K.Examples: Input : arr[] = {15, 16, 10, 9, 6, 7, 17} K = 3 Output : 30 Explanation: As 15, 9, 6 are divisible by 3. So, sum of elements divisible by K = 15 + 9 + 6 = 30. Input : arr[] = {5, 3, 6, 8, 4, 1, 2, 9} K = 2 Output : 20 The idea is to traverse the array and check the elements one by one. If an element is divisible by K then add that element’s value with the sum so far and continue this process while the end of the array reached.Below is the implementation of the above approach: C++ Java Python3 C# PHP Javascript // C++ program to find sum of all the elements// in an array divisible by a given number K #include <iostream>using namespace std; // Function to find sum of all the elements// in an array divisible by a given number Kint findSum(int arr[], int n, int k){ int sum = 0; // Traverse the array for (int i = 0; i < n; i++) { // If current element is divisible by k // add it to sum if (arr[i] % k == 0) { sum += arr[i]; } } // Return calculated sum return sum;} // Driver codeint main(){ int arr[] = { 15, 16, 10, 9, 6, 7, 17 }; int n = sizeof(arr) / sizeof(arr[0]); int k = 3; cout << findSum(arr, n, k); return 0;} // Java program to find sum of all the elements// in an array divisible by a given number K import java.io.*; class GFG { // Function to find sum of all the elements// in an array divisible by a given number Kstatic int findSum(int arr[], int n, int k){ int sum = 0; // Traverse the array for (int i = 0; i < n; i++) { // If current element is divisible by k // add it to sum if (arr[i] % k == 0) { sum += arr[i]; } } // Return calculated sum return sum;} // Driver code public static void main (String[] args) { int arr[] = { 15, 16, 10, 9, 6, 7, 17 }; int n = arr.length; int k = 3; System.out.println( findSum(arr, n, k)); }} // this code is contributed by anuj_67.. # Python3 program to find sum of# all the elements in an array# divisible by a given number K # Function to find sum of all# the elements in an array# divisible by a given number Kdef findSum(arr, n, k) : sum = 0 # Traverse the array for i in range(n) : # If current element is divisible # by k add it to sum if arr[i] % k == 0 : sum += arr[i] # Return calculated sum return sum # Driver codeif __name__ == "__main__" : arr = [ 15, 16, 10, 9, 6, 7, 17] n = len(arr) k = 3 print(findSum(arr, n, k)) # This code is contributed by ANKITRAI1 // C# program to find sum of all the elements// in an array divisible by a given number K using System; public class GFG{ // Function to find sum of all the elements// in an array divisible by a given number Kstatic int findSum(int []arr, int n, int k){ int sum = 0; // Traverse the array for (int i = 0; i < n; i++) { // If current element is divisible by k // add it to sum if (arr[i] % k == 0) { sum += arr[i]; } } // Return calculated sum return sum;} // Driver code static public void Main (){ int []arr = { 15, 16, 10, 9, 6, 7, 17 }; int n = arr.Length; int k = 3; Console.WriteLine( findSum(arr, n, k)); }}//This code is contributed by anuj_67.. 
<?php// PHP program to find sum of all// the elements in an array divisible// by a given number K // Function to find sum of all// the elements in an array// divisible by a given number Kfunction findSum($arr, $n, $k){ $sum = 0; // Traverse the array for ($i = 0; $i < $n; $i++) { // If current element is divisible // by k add it to sum if ($arr[$i] % $k == 0) { $sum += $arr[$i]; } } // Return calculated sum return $sum;} // Driver code$arr = array(15, 16, 10, 9, 6, 7, 17);$n = sizeof($arr);$k = 3; echo findSum($arr, $n, $k); // This code is contributed// by Akanksha Rai(Abby_akku) <script> // Javascript program to find sum of all the elements // in an array divisible by a given number K // Function to find sum of all the elements // in an array divisible by a given number K function findSum(arr, n, k) { let sum = 0; // Traverse the array for (let i = 0; i < n; i++) { // If current element is divisible by k // add it to sum if (arr[i] % k == 0) { sum += arr[i]; } } // Return calculated sum return sum; } let arr = [ 15, 16, 10, 9, 6, 7, 17 ]; let n = arr.length; let k = 3; document.write(findSum(arr, n, k)); </script> 30 Time Complexity: O(N), where N is the number of elements in the array.Auxiliary Space: O(1) vt_m ankthon Akanksha_Rai decode2207 pankajsharmagfg Technical Scripter 2018 Arrays Data Structures Data Structures Arrays Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. Comments Old Comments Stack Data Structure (Introduction and Program) Top 50 Array Coding Problems for Interviews Introduction to Arrays Multidimensional Arrays in Java Linear Search SDE SHEET - A Complete Guide for SDE Preparation Top 50 Array Coding Problems for Interviews DSA Sheet by Love Babbar Doubly Linked List | Set 1 (Introduction and Insertion) Implementing a Linked List in Java using Class
[ { "code": null, "e": 24536, "s": 24508, "text": "\n03 Aug, 2021" }, { "code": null, "e": 24675, "s": 24536, "text": "Given an array containing N elements and a number K. The task is to find the sum of all such elements which are divisible by K.Examples: " }, { "code": null, "e": 24910, "s": 24675, "text": "Input : arr[] = {15, 16, 10, 9, 6, 7, 17}\n K = 3\nOutput : 30\nExplanation: As 15, 9, 6 are divisible by 3. So, sum of elements divisible by K = 15 + 9 + 6 = 30.\n\nInput : arr[] = {5, 3, 6, 8, 4, 1, 2, 9}\n K = 2\nOutput : 20" }, { "code": null, "e": 25177, "s": 24912, "text": "The idea is to traverse the array and check the elements one by one. If an element is divisible by K then add that element’s value with the sum so far and continue this process while the end of the array reached.Below is the implementation of the above approach: " }, { "code": null, "e": 25181, "s": 25177, "text": "C++" }, { "code": null, "e": 25186, "s": 25181, "text": "Java" }, { "code": null, "e": 25194, "s": 25186, "text": "Python3" }, { "code": null, "e": 25197, "s": 25194, "text": "C#" }, { "code": null, "e": 25201, "s": 25197, "text": "PHP" }, { "code": null, "e": 25212, "s": 25201, "text": "Javascript" }, { "code": "// C++ program to find sum of all the elements// in an array divisible by a given number K #include <iostream>using namespace std; // Function to find sum of all the elements// in an array divisible by a given number Kint findSum(int arr[], int n, int k){ int sum = 0; // Traverse the array for (int i = 0; i < n; i++) { // If current element is divisible by k // add it to sum if (arr[i] % k == 0) { sum += arr[i]; } } // Return calculated sum return sum;} // Driver codeint main(){ int arr[] = { 15, 16, 10, 9, 6, 7, 17 }; int n = sizeof(arr) / sizeof(arr[0]); int k = 3; cout << findSum(arr, n, k); return 0;}", "e": 25902, "s": 25212, "text": null }, { "code": "// Java program to find sum of all the elements// in an array divisible by a given number K import java.io.*; class GFG { // Function to find sum of all the elements// in an array divisible by a given number Kstatic int findSum(int arr[], int n, int k){ int sum = 0; // Traverse the array for (int i = 0; i < n; i++) { // If current element is divisible by k // add it to sum if (arr[i] % k == 0) { sum += arr[i]; } } // Return calculated sum return sum;} // Driver code public static void main (String[] args) { int arr[] = { 15, 16, 10, 9, 6, 7, 17 }; int n = arr.length; int k = 3; System.out.println( findSum(arr, n, k)); }} // this code is contributed by anuj_67..", "e": 26660, "s": 25902, "text": null }, { "code": "# Python3 program to find sum of# all the elements in an array# divisible by a given number K # Function to find sum of all# the elements in an array# divisible by a given number Kdef findSum(arr, n, k) : sum = 0 # Traverse the array for i in range(n) : # If current element is divisible # by k add it to sum if arr[i] % k == 0 : sum += arr[i] # Return calculated sum return sum # Driver codeif __name__ == \"__main__\" : arr = [ 15, 16, 10, 9, 6, 7, 17] n = len(arr) k = 3 print(findSum(arr, n, k)) # This code is contributed by ANKITRAI1", "e": 27264, "s": 26660, "text": null }, { "code": "// C# program to find sum of all the elements// in an array divisible by a given number K using System; public class GFG{ // Function to find sum of all the elements// in an array divisible by a given number Kstatic int findSum(int []arr, int n, int k){ int sum = 0; // Traverse the array for (int i = 0; i < n; i++) { // If current element is 
divisible by k // add it to sum if (arr[i] % k == 0) { sum += arr[i]; } } // Return calculated sum return sum;} // Driver code static public void Main (){ int []arr = { 15, 16, 10, 9, 6, 7, 17 }; int n = arr.Length; int k = 3; Console.WriteLine( findSum(arr, n, k)); }}//This code is contributed by anuj_67.. ", "e": 28014, "s": 27264, "text": null }, { "code": "<?php// PHP program to find sum of all// the elements in an array divisible// by a given number K // Function to find sum of all// the elements in an array// divisible by a given number Kfunction findSum($arr, $n, $k){ $sum = 0; // Traverse the array for ($i = 0; $i < $n; $i++) { // If current element is divisible // by k add it to sum if ($arr[$i] % $k == 0) { $sum += $arr[$i]; } } // Return calculated sum return $sum;} // Driver code$arr = array(15, 16, 10, 9, 6, 7, 17);$n = sizeof($arr);$k = 3; echo findSum($arr, $n, $k); // This code is contributed// by Akanksha Rai(Abby_akku)", "e": 28672, "s": 28014, "text": null }, { "code": "<script> // Javascript program to find sum of all the elements // in an array divisible by a given number K // Function to find sum of all the elements // in an array divisible by a given number K function findSum(arr, n, k) { let sum = 0; // Traverse the array for (let i = 0; i < n; i++) { // If current element is divisible by k // add it to sum if (arr[i] % k == 0) { sum += arr[i]; } } // Return calculated sum return sum; } let arr = [ 15, 16, 10, 9, 6, 7, 17 ]; let n = arr.length; let k = 3; document.write(findSum(arr, n, k)); </script>", "e": 29368, "s": 28672, "text": null }, { "code": null, "e": 29371, "s": 29368, "text": "30" }, { "code": null, "e": 29467, "s": 29373, "text": "Time Complexity: O(N), where N is the number of elements in the array.Auxiliary Space: O(1) " }, { "code": null, "e": 29472, "s": 29467, "text": "vt_m" }, { "code": null, "e": 29480, "s": 29472, "text": "ankthon" }, { "code": null, "e": 29493, "s": 29480, "text": "Akanksha_Rai" }, { "code": null, "e": 29504, "s": 29493, "text": "decode2207" }, { "code": null, "e": 29520, "s": 29504, "text": "pankajsharmagfg" }, { "code": null, "e": 29544, "s": 29520, "text": "Technical Scripter 2018" }, { "code": null, "e": 29551, "s": 29544, "text": "Arrays" }, { "code": null, "e": 29567, "s": 29551, "text": "Data Structures" }, { "code": null, "e": 29583, "s": 29567, "text": "Data Structures" }, { "code": null, "e": 29590, "s": 29583, "text": "Arrays" }, { "code": null, "e": 29688, "s": 29590, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 29697, "s": 29688, "text": "Comments" }, { "code": null, "e": 29710, "s": 29697, "text": "Old Comments" }, { "code": null, "e": 29758, "s": 29710, "text": "Stack Data Structure (Introduction and Program)" }, { "code": null, "e": 29802, "s": 29758, "text": "Top 50 Array Coding Problems for Interviews" }, { "code": null, "e": 29825, "s": 29802, "text": "Introduction to Arrays" }, { "code": null, "e": 29857, "s": 29825, "text": "Multidimensional Arrays in Java" }, { "code": null, "e": 29871, "s": 29857, "text": "Linear Search" }, { "code": null, "e": 29920, "s": 29871, "text": "SDE SHEET - A Complete Guide for SDE Preparation" }, { "code": null, "e": 29964, "s": 29920, "text": "Top 50 Array Coding Problems for Interviews" }, { "code": null, "e": 29989, "s": 29964, "text": "DSA Sheet by Love Babbar" }, { "code": null, "e": 30045, "s": 29989, "text": "Doubly Linked List | Set 1 (Introduction and Insertion)" } ]
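One more formulation to sit beside the implementations above: in Python the whole traversal collapses into a single generator expression passed to sum(). This addition is ours; it performs the same O(N) scan as the article's approach:

# sum of the elements divisible by k, written as one expression
def find_sum(arr, k):
    return sum(x for x in arr if x % k == 0)

print(find_sum([15, 16, 10, 9, 6, 7, 17], 3))  # 30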
Go Programming Language (Introduction) - GeeksforGeeks
05 Mar, 2021 Introduction Go is a procedural programming language. It was developed in 2007 by Robert Griesemer, Rob Pike, and Ken Thompson at Google but launched in 2009 as an open-source programming language. Programs are assembled by using packages, for efficient management of dependencies. This language also supports environment adopting patterns alike to dynamic languages. For eg., type inference (y := 0 is a valid declaration of a variable y of type float). Beginning with Go programming There are various online IDEs such as The Go Playground, repl.it, etc. which can be used to run Go programs without installing. For installing Go in own PCs or Laptop we need of following two software: Text editor and Compiler Text Editor: Text editor gives you a platform where you write your source code. Following are the list of text editors: Windows notepad OS Edit command Brief Epsilon vm or vi Emacs VS Code Finding a Go Compiler: Go distribution comes as a binary installable for FreeBSD (release 8 and above), Linux, Mac OS X (Snow Leopard and above), and Windows operating systems with 32-bit (386) and 64-bit (amd64) x86 processor architectures. For more instructions about installing. Please visit For installing GO distribution Note: Extension of source code file of go language must be .go Writing first program in Go: Go package main import "fmt" func main() { // prints geeksforgeeks fmt.Println("Hello, geeksforgeeks")} Output: Hello, geeksforgeeks Explanation of the syntax of Go program: Line 1: It contains the package main of the program, which have overall content of the program.It is the initial point to run the program, So it is compulsory to write. Line 2: It contains import “fmt”, it is a preprocessor command which tells the compiler to include the files lying in the package. Line 3: main function, it is beginning of execution of program. Line 4: fmt.Println() is a standard library function to print something as a output on screen.In this, fmt package has transmitted Println method which is used to display the output. Comment: Comments are used for explaining code and are used in similar manner as in Java or C or C++. Compilers ignore the comment entries and does not execute them. Comments can be of single line or multiple lines. Single Line Comment: Syntax: // single line comment Multi-line Comment: Syntax: /* multiline comment */ Following is another example: Go package mainimport "fmt" func main() { fmt.Println("1 + 1 =", 1 + 1)} Output: 1 + 1 = 2 Explanation of the above program: In this above program, the same package line, the same import line, the same function declaration and uses the same Println function as we have used in 1st GO program. This time instead of printing the string “Hello, geeksforgeeks” we print the string 1 + 1 = followed by the result of the expression 1 + 1. This expression is made up of three parts: the numeric literal 1 (which is of type int), the + operator (which represents addition) and another numeric literal 1. Why this “Go language”? Because Go language is an effort to combine the ease of programming of an interpreted, dynamically typed language with the efficiency and safety of a statically typed, compiled language. It also aims to be modern, with support for networked and multicore computing. What excluding in Go which is present in other languages? Go attempts to reduce the amount of typing in both senses of the word. Throughout its design, developers tried to reduce clutter and complexity. 
There are no forward declarations and no header files; everything is declared exactly once. Stuttering is reduced by simple type derivation using the := declare-and-initialize construct. There is no type hierarchy: types just are, they don’t have to announce their relationships. Hardware Limitations We have observed that in a decade, the hardware and processing configuration is changing at a very slow rate. In 2004, P4 was having the clock speed of 3.0 GHz and now in 2018, Macbook pro has the clock speed of Approx (2.3Ghz v 2.66Ghz). To speed up, the functionality we use more processors, but using more processor the cost also increases. And due to this we use limited processors and using limited processor we have a heavy programming language whose threading takes more memory and slows down the performance of our system. Hence, to overcome such problem Golang has been designed in such a way that instead of using threading it uses Goroutine, which is similar to threading but consumes very less memory. Like threading consumes 1MB whereas Goroutine consumes 2KB of memory, hence at the same time, we can have millions of goroutine triggered. So the above-discussed point makes golang a strong language that handles concurrency like C++ and Java. Advantages and Disadvantages of Go Language Advantages: Flexible- It is concise, simple and easy to read.Concurrency- It allows multiple process running simultaneously and effectively.Quick Outcome- Its compilation time is very fast.Library- It provides a rich standard library.Garbage collection- It is a key feature of go. Go excels in giving a lot of control over memory allocation and has dramatically reduced latency in the most recent versions of the garbage collector.It validates for the interface and type embedding. Flexible- It is concise, simple and easy to read. Concurrency- It allows multiple process running simultaneously and effectively. Quick Outcome- Its compilation time is very fast. Library- It provides a rich standard library. Garbage collection- It is a key feature of go. Go excels in giving a lot of control over memory allocation and has dramatically reduced latency in the most recent versions of the garbage collector. It validates for the interface and type embedding. Disadvantages: It has no support for generics, even if there are many discussions about it.The packages distributed with this programming language is quite useful but Go is not so object-oriented in the conventional sense.There is absence of some libraries especially a UI tool kit. It has no support for generics, even if there are many discussions about it. The packages distributed with this programming language is quite useful but Go is not so object-oriented in the conventional sense. There is absence of some libraries especially a UI tool kit. Some popular Applications developed in Go Language Docker: a set of tools for deploying linux containers Openshift: a cloud computing platform as a service by Red Hat. Kubernetes: The future of seamlessly automated deployment processes Dropbox: migrated some of their critical components from Python to Go. Netflix: for two part of their server architecture. InfluxDB: is an open-source time series database developed by InfluxData. Golang: The language itself was written in Go. Country wise Companies which are currently using Go Language. Features of go language Language Design: The designers of the language made a conscious purposeful to keep the language simple and easy to understand. 
The entire detailing is in a few pages and some interesting design decisions were made through Object-Oriented support in the language.Towards this, the language is opinionated and recommends an idiomatic way of achieving things. It prefers Composition over Inheritance. In Go Language, “Do More with Less” is the mantra. Package Management: Go merges modern day developer workflow of working with Open Source projects and includes that in the way it manages external packages. Support is provided directly in the tooling to get external packages and publish your own packages in a set of easy commands. Powerful standard library: Go has powerful standard library, which is distributed as packages. Static Typing:Go is static typed language. So, in this compiler not just work on compiling the code successfully but also ensures on type conversions and compatibility. Because of this feature Go avoid all those problems which we face in dynamically typed languages. Testing Support: Go provides us the unit testing features by itself i.e., a simple mechanism to write your unit test parallel with your code because of this you can understand you code coverage by your own tests. And that can be easily used in generating your code documentation as an example. Platform Independent: Go language is just like Java language as it support platform independency. Due to its modular design and modularity i.e., the code is compiled and is converted into binary form which is as small as possible and hence, it requires no dependency. Its code can be compiled in any platform or any server and application you work on. kautukdwivedi1 Go-Basics Golang Go Language Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. Comments Old Comments Different ways to concatenate two strings in Golang time.Sleep() Function in Golang With Examples strings.Replace() Function in Golang With Examples strings.Contains Function in Golang with Examples Time Formatting in Golang fmt.Sprintf() Function in Golang With Examples Golang Maps How to convert a string in lower case in Golang? Different Ways to Find the Type of Variable in Golang Inheritance in GoLang
[ { "code": null, "e": 24539, "s": 24511, "text": "\n05 Mar, 2021" }, { "code": null, "e": 24552, "s": 24539, "text": "Introduction" }, { "code": null, "e": 24994, "s": 24552, "text": "Go is a procedural programming language. It was developed in 2007 by Robert Griesemer, Rob Pike, and Ken Thompson at Google but launched in 2009 as an open-source programming language. Programs are assembled by using packages, for efficient management of dependencies. This language also supports environment adopting patterns alike to dynamic languages. For eg., type inference (y := 0 is a valid declaration of a variable y of type float)." }, { "code": null, "e": 25025, "s": 24994, "text": " Beginning with Go programming" }, { "code": null, "e": 25154, "s": 25025, "text": "There are various online IDEs such as The Go Playground, repl.it, etc. which can be used to run Go programs without installing. " }, { "code": null, "e": 25375, "s": 25154, "text": "For installing Go in own PCs or Laptop we need of following two software: Text editor and Compiler Text Editor: Text editor gives you a platform where you write your source code. Following are the list of text editors: " }, { "code": null, "e": 25391, "s": 25375, "text": "Windows notepad" }, { "code": null, "e": 25407, "s": 25391, "text": "OS Edit command" }, { "code": null, "e": 25413, "s": 25407, "text": "Brief" }, { "code": null, "e": 25421, "s": 25413, "text": "Epsilon" }, { "code": null, "e": 25430, "s": 25421, "text": "vm or vi" }, { "code": null, "e": 25436, "s": 25430, "text": "Emacs" }, { "code": null, "e": 25444, "s": 25436, "text": "VS Code" }, { "code": null, "e": 25772, "s": 25444, "text": "Finding a Go Compiler: Go distribution comes as a binary installable for FreeBSD (release 8 and above), Linux, Mac OS X (Snow Leopard and above), and Windows operating systems with 32-bit (386) and 64-bit (amd64) x86 processor architectures. For more instructions about installing. Please visit For installing GO distribution " }, { "code": null, "e": 25835, "s": 25772, "text": "Note: Extension of source code file of go language must be .go" }, { "code": null, "e": 25866, "s": 25835, "text": "Writing first program in Go: " }, { "code": null, "e": 25869, "s": 25866, "text": "Go" }, { "code": "package main import \"fmt\" func main() { // prints geeksforgeeks fmt.Println(\"Hello, geeksforgeeks\")}", "e": 25981, "s": 25869, "text": null }, { "code": null, "e": 25991, "s": 25981, "text": "Output: " }, { "code": null, "e": 26012, "s": 25991, "text": "Hello, geeksforgeeks" }, { "code": null, "e": 26054, "s": 26012, "text": "Explanation of the syntax of Go program: " }, { "code": null, "e": 26223, "s": 26054, "text": "Line 1: It contains the package main of the program, which have overall content of the program.It is the initial point to run the program, So it is compulsory to write." }, { "code": null, "e": 26354, "s": 26223, "text": "Line 2: It contains import “fmt”, it is a preprocessor command which tells the compiler to include the files lying in the package." }, { "code": null, "e": 26418, "s": 26354, "text": "Line 3: main function, it is beginning of execution of program." }, { "code": null, "e": 26601, "s": 26418, "text": "Line 4: fmt.Println() is a standard library function to print something as a output on screen.In this, fmt package has transmitted Println method which is used to display the output." }, { "code": null, "e": 26817, "s": 26601, "text": "Comment: Comments are used for explaining code and are used in similar manner as in Java or C or C++. 
Compilers ignore the comment entries and does not execute them. Comments can be of single line or multiple lines." }, { "code": null, "e": 26839, "s": 26817, "text": "Single Line Comment: " }, { "code": null, "e": 26848, "s": 26839, "text": "Syntax: " }, { "code": null, "e": 26871, "s": 26848, "text": "// single line comment" }, { "code": null, "e": 26892, "s": 26871, "text": "Multi-line Comment: " }, { "code": null, "e": 26901, "s": 26892, "text": "Syntax: " }, { "code": null, "e": 26925, "s": 26901, "text": "/* multiline comment */" }, { "code": null, "e": 26957, "s": 26925, "text": "Following is another example: " }, { "code": null, "e": 26960, "s": 26957, "text": "Go" }, { "code": "package mainimport \"fmt\" func main() { fmt.Println(\"1 + 1 =\", 1 + 1)}", "e": 27032, "s": 26960, "text": null }, { "code": null, "e": 27042, "s": 27032, "text": "Output: " }, { "code": null, "e": 27052, "s": 27042, "text": "1 + 1 = 2" }, { "code": null, "e": 27558, "s": 27052, "text": "Explanation of the above program: In this above program, the same package line, the same import line, the same function declaration and uses the same Println function as we have used in 1st GO program. This time instead of printing the string “Hello, geeksforgeeks” we print the string 1 + 1 = followed by the result of the expression 1 + 1. This expression is made up of three parts: the numeric literal 1 (which is of type int), the + operator (which represents addition) and another numeric literal 1. " }, { "code": null, "e": 27582, "s": 27558, "text": "Why this “Go language”?" }, { "code": null, "e": 27849, "s": 27582, "text": "Because Go language is an effort to combine the ease of programming of an interpreted, dynamically typed language with the efficiency and safety of a statically typed, compiled language. It also aims to be modern, with support for networked and multicore computing. " }, { "code": null, "e": 27908, "s": 27849, "text": "What excluding in Go which is present in other languages? " }, { "code": null, "e": 28053, "s": 27908, "text": "Go attempts to reduce the amount of typing in both senses of the word. Throughout its design, developers tried to reduce clutter and complexity." }, { "code": null, "e": 28145, "s": 28053, "text": "There are no forward declarations and no header files; everything is declared exactly once." }, { "code": null, "e": 28240, "s": 28145, "text": "Stuttering is reduced by simple type derivation using the := declare-and-initialize construct." }, { "code": null, "e": 28333, "s": 28240, "text": "There is no type hierarchy: types just are, they don’t have to announce their relationships." }, { "code": null, "e": 28354, "s": 28333, "text": "Hardware Limitations" }, { "code": null, "e": 29312, "s": 28354, "text": "We have observed that in a decade, the hardware and processing configuration is changing at a very slow rate. In 2004, P4 was having the clock speed of 3.0 GHz and now in 2018, Macbook pro has the clock speed of Approx (2.3Ghz v 2.66Ghz). To speed up, the functionality we use more processors, but using more processor the cost also increases. And due to this we use limited processors and using limited processor we have a heavy programming language whose threading takes more memory and slows down the performance of our system. Hence, to overcome such problem Golang has been designed in such a way that instead of using threading it uses Goroutine, which is similar to threading but consumes very less memory. 
Like threading consumes 1MB whereas Goroutine consumes 2KB of memory, hence at the same time, we can have millions of goroutine triggered. So the above-discussed point makes golang a strong language that handles concurrency like C++ and Java. " }, { "code": null, "e": 29356, "s": 29312, "text": "Advantages and Disadvantages of Go Language" }, { "code": null, "e": 29370, "s": 29356, "text": "Advantages: " }, { "code": null, "e": 29840, "s": 29370, "text": "Flexible- It is concise, simple and easy to read.Concurrency- It allows multiple process running simultaneously and effectively.Quick Outcome- Its compilation time is very fast.Library- It provides a rich standard library.Garbage collection- It is a key feature of go. Go excels in giving a lot of control over memory allocation and has dramatically reduced latency in the most recent versions of the garbage collector.It validates for the interface and type embedding." }, { "code": null, "e": 29890, "s": 29840, "text": "Flexible- It is concise, simple and easy to read." }, { "code": null, "e": 29970, "s": 29890, "text": "Concurrency- It allows multiple process running simultaneously and effectively." }, { "code": null, "e": 30020, "s": 29970, "text": "Quick Outcome- Its compilation time is very fast." }, { "code": null, "e": 30066, "s": 30020, "text": "Library- It provides a rich standard library." }, { "code": null, "e": 30264, "s": 30066, "text": "Garbage collection- It is a key feature of go. Go excels in giving a lot of control over memory allocation and has dramatically reduced latency in the most recent versions of the garbage collector." }, { "code": null, "e": 30315, "s": 30264, "text": "It validates for the interface and type embedding." }, { "code": null, "e": 30332, "s": 30315, "text": "Disadvantages: " }, { "code": null, "e": 30600, "s": 30332, "text": "It has no support for generics, even if there are many discussions about it.The packages distributed with this programming language is quite useful but Go is not so object-oriented in the conventional sense.There is absence of some libraries especially a UI tool kit." }, { "code": null, "e": 30677, "s": 30600, "text": "It has no support for generics, even if there are many discussions about it." }, { "code": null, "e": 30809, "s": 30677, "text": "The packages distributed with this programming language is quite useful but Go is not so object-oriented in the conventional sense." }, { "code": null, "e": 30870, "s": 30809, "text": "There is absence of some libraries especially a UI tool kit." }, { "code": null, "e": 30921, "s": 30870, "text": "Some popular Applications developed in Go Language" }, { "code": null, "e": 30975, "s": 30921, "text": "Docker: a set of tools for deploying linux containers" }, { "code": null, "e": 31038, "s": 30975, "text": "Openshift: a cloud computing platform as a service by Red Hat." }, { "code": null, "e": 31106, "s": 31038, "text": "Kubernetes: The future of seamlessly automated deployment processes" }, { "code": null, "e": 31177, "s": 31106, "text": "Dropbox: migrated some of their critical components from Python to Go." }, { "code": null, "e": 31229, "s": 31177, "text": "Netflix: for two part of their server architecture." }, { "code": null, "e": 31303, "s": 31229, "text": "InfluxDB: is an open-source time series database developed by InfluxData." }, { "code": null, "e": 31350, "s": 31303, "text": "Golang: The language itself was written in Go." 
}, { "code": null, "e": 31413, "s": 31350, "text": "Country wise Companies which are currently using Go Language. " }, { "code": null, "e": 31438, "s": 31413, "text": "Features of go language " }, { "code": null, "e": 31887, "s": 31438, "text": "Language Design: The designers of the language made a conscious purposeful to keep the language simple and easy to understand. The entire detailing is in a few pages and some interesting design decisions were made through Object-Oriented support in the language.Towards this, the language is opinionated and recommends an idiomatic way of achieving things. It prefers Composition over Inheritance. In Go Language, “Do More with Less” is the mantra." }, { "code": null, "e": 32169, "s": 31887, "text": "Package Management: Go merges modern day developer workflow of working with Open Source projects and includes that in the way it manages external packages. Support is provided directly in the tooling to get external packages and publish your own packages in a set of easy commands." }, { "code": null, "e": 32264, "s": 32169, "text": "Powerful standard library: Go has powerful standard library, which is distributed as packages." }, { "code": null, "e": 32531, "s": 32264, "text": "Static Typing:Go is static typed language. So, in this compiler not just work on compiling the code successfully but also ensures on type conversions and compatibility. Because of this feature Go avoid all those problems which we face in dynamically typed languages." }, { "code": null, "e": 32825, "s": 32531, "text": "Testing Support: Go provides us the unit testing features by itself i.e., a simple mechanism to write your unit test parallel with your code because of this you can understand you code coverage by your own tests. And that can be easily used in generating your code documentation as an example." }, { "code": null, "e": 33177, "s": 32825, "text": "Platform Independent: Go language is just like Java language as it support platform independency. Due to its modular design and modularity i.e., the code is compiled and is converted into binary form which is as small as possible and hence, it requires no dependency. Its code can be compiled in any platform or any server and application you work on." }, { "code": null, "e": 33192, "s": 33177, "text": "kautukdwivedi1" }, { "code": null, "e": 33202, "s": 33192, "text": "Go-Basics" }, { "code": null, "e": 33209, "s": 33202, "text": "Golang" }, { "code": null, "e": 33221, "s": 33209, "text": "Go Language" }, { "code": null, "e": 33319, "s": 33221, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 33328, "s": 33319, "text": "Comments" }, { "code": null, "e": 33341, "s": 33328, "text": "Old Comments" }, { "code": null, "e": 33393, "s": 33341, "text": "Different ways to concatenate two strings in Golang" }, { "code": null, "e": 33439, "s": 33393, "text": "time.Sleep() Function in Golang With Examples" }, { "code": null, "e": 33490, "s": 33439, "text": "strings.Replace() Function in Golang With Examples" }, { "code": null, "e": 33540, "s": 33490, "text": "strings.Contains Function in Golang with Examples" }, { "code": null, "e": 33566, "s": 33540, "text": "Time Formatting in Golang" }, { "code": null, "e": 33613, "s": 33566, "text": "fmt.Sprintf() Function in Golang With Examples" }, { "code": null, "e": 33625, "s": 33613, "text": "Golang Maps" }, { "code": null, "e": 33674, "s": 33625, "text": "How to convert a string in lower case in Golang?" 
}, { "code": null, "e": 33728, "s": 33674, "text": "Different Ways to Find the Type of Variable in Golang" } ]
How to plot single data with two Y-axes (two units) in Matplotlib?
To plot single data with two Y-Axes (Two units) in Matplotlib, we can take the following steps − Set the figure size and adjust the padding between and around the subplots. Create speed and acceleration data points using numpy. Add a subplot to the current figure. Plot speed data points using plot() method. Create a twin Axes sharing the X-axis. Plot acceleration data point using plot() method. Place a legend on the figure. To display the figure, use show() method. import matplotlib.pyplot as plt import numpy as np plt.rcParams["figure.figsize"] = [7.50, 3.50] plt.rcParams["figure.autolayout"] = True speed = np.array([3, 1, 2, 0, 5]) acceleration = np.array([6, 5, 7, 1, 5]) ax1 = plt.subplot() l1, = ax1.plot(speed, color='red') ax2 = ax1.twinx() l2, = ax2.plot(acceleration, color='orange') plt.legend([l1, l2], ["speed", "acceleration"]) plt.show()
[ { "code": null, "e": 1159, "s": 1062, "text": "To plot single data with two Y-Axes (Two units) in Matplotlib, we can take the following steps −" }, { "code": null, "e": 1235, "s": 1159, "text": "Set the figure size and adjust the padding between and around the subplots." }, { "code": null, "e": 1290, "s": 1235, "text": "Create speed and acceleration data points using numpy." }, { "code": null, "e": 1327, "s": 1290, "text": "Add a subplot to the current figure." }, { "code": null, "e": 1371, "s": 1327, "text": "Plot speed data points using plot() method." }, { "code": null, "e": 1410, "s": 1371, "text": "Create a twin Axes sharing the X-axis." }, { "code": null, "e": 1460, "s": 1410, "text": "Plot acceleration data point using plot() method." }, { "code": null, "e": 1490, "s": 1460, "text": "Place a legend on the figure." }, { "code": null, "e": 1532, "s": 1490, "text": "To display the figure, use show() method." }, { "code": null, "e": 1927, "s": 1532, "text": "import matplotlib.pyplot as plt\nimport numpy as np\n\nplt.rcParams[\"figure.figsize\"] = [7.50, 3.50]\nplt.rcParams[\"figure.autolayout\"] = True\n\nspeed = np.array([3, 1, 2, 0, 5])\nacceleration = np.array([6, 5, 7, 1, 5])\n\nax1 = plt.subplot()\nl1, = ax1.plot(speed, color='red')\nax2 = ax1.twinx()\nl2, = ax2.plot(acceleration, color='orange')\n\nplt.legend([l1, l2], [\"speed\", \"acceleration\"])\n\nplt.show()" } ]
strdup() and strndup() in C/C++
The function strdup() is used to duplicate a string. It returns a pointer to a null-terminated byte string that is a copy of its argument. Here is the syntax of strdup() in C language, char *strdup(const char *string); Here is an example of strdup() in C language, #include <stdio.h> #include<string.h> int main() { char *str = "Helloworld"; char *result; result = strdup(str); printf("The string : %s", result); return 0; } The string : Helloworld The function strndup() works similarly to the function strdup(). This function duplicates at most size bytes of the string, i.e. the size given in the call. It also returns a pointer to a null-terminated byte string. Here is the syntax of strndup() in C language, char *strndup(const char *string , size_t size); Here is an example of strndup() in C language, #include <stdio.h> #include<string.h> int main() { char *str = "Helloworld"; char *result; result = strndup(str, 3); printf("The string : %s", result); return 0; } The string : Hel
[ { "code": null, "e": 1168, "s": 1062, "text": "The function strdup() is used to duplicate a string. It returns a pointer to null-terminated byte string." }, { "code": null, "e": 1214, "s": 1168, "text": "Here is the syntax of strdup() in C language," }, { "code": null, "e": 1248, "s": 1214, "text": "char *strdup(const char *string);" }, { "code": null, "e": 1294, "s": 1248, "text": "Here is an example of strdup() in C language," }, { "code": null, "e": 1305, "s": 1294, "text": " Live Demo" }, { "code": null, "e": 1480, "s": 1305, "text": "#include <stdio.h>\n#include<string.h>\nint main() {\n char *str = \"Helloworld\";\n char *result;\n result = strdup(str);\n printf(\"The string : %s\", result);\n return 0;\n}" }, { "code": null, "e": 1504, "s": 1480, "text": "The string : Helloworld" }, { "code": null, "e": 1716, "s": 1504, "text": "The function strndup works similar to the function strndup(). This function duplicates the string at most size bytes i.e. the given size in the function. It also returns a pointer to null-terminated byte string." }, { "code": null, "e": 1763, "s": 1716, "text": "Here is the syntax of strndup() in C language," }, { "code": null, "e": 1812, "s": 1763, "text": "char *strndup(const char *string , size_t size);" }, { "code": null, "e": 1859, "s": 1812, "text": "Here is an example of strndup() in C language," }, { "code": null, "e": 1870, "s": 1859, "text": " Live Demo" }, { "code": null, "e": 2049, "s": 1870, "text": "#include <stdio.h>\n#include<string.h>\nint main() {\n char *str = \"Helloworld\";\n char *result;\n result = strndup(str, 3);\n printf(\"The string : %s\", result);\n return 0;\n}" }, { "code": null, "e": 2066, "s": 2049, "text": "The string : Hel" } ]
Google Guice - Provider Class
As a @Provides method becomes more complex, it can be moved into a separate class using the Provider interface. class SpellCheckerProvider implements Provider<SpellChecker> { @Override public SpellChecker get() { String dbUrl = "jdbc:mysql://localhost:5326/emp"; String user = "user"; int timeout = 100; SpellChecker SpellChecker = new SpellCheckerImpl(dbUrl, user, timeout); return SpellChecker; } } Next, you have to map the provider to the type. bind(SpellChecker.class).toProvider(SpellCheckerProvider.class); See the complete example below. Create a Java class named GuiceTester. GuiceTester.java import com.google.inject.AbstractModule; import com.google.inject.Guice; import com.google.inject.Inject; import com.google.inject.Injector; import com.google.inject.Provider; public class GuiceTester { public static void main(String[] args) { Injector injector = Guice.createInjector(new TextEditorModule()); TextEditor editor = injector.getInstance(TextEditor.class); editor.makeSpellCheck(); } } class TextEditor { private SpellChecker spellChecker; @Inject public TextEditor( SpellChecker spellChecker) { this.spellChecker = spellChecker; } public void makeSpellCheck() { spellChecker.checkSpelling(); } } //Binding Module class TextEditorModule extends AbstractModule { @Override protected void configure() { bind(SpellChecker.class).toProvider(SpellCheckerProvider.class); } } //spell checker interface interface SpellChecker { public void checkSpelling(); } //spell checker implementation class SpellCheckerImpl implements SpellChecker { private String dbUrl; private String user; private Integer timeout; @Inject public SpellCheckerImpl(String dbUrl, String user, Integer timeout) { this.dbUrl = dbUrl; this.user = user; this.timeout = timeout; } @Override public void checkSpelling() { System.out.println("Inside checkSpelling." ); System.out.println(dbUrl); System.out.println(user); System.out.println(timeout); } } class SpellCheckerProvider implements Provider<SpellChecker> { @Override public SpellChecker get() { String dbUrl = "jdbc:mysql://localhost:5326/emp"; String user = "user"; int timeout = 100; SpellChecker SpellChecker = new SpellCheckerImpl(dbUrl, user, timeout); return SpellChecker; } } Now, compile and run the file. You can see the following output − Inside checkSpelling. jdbc:mysql://localhost:5326/emp user 100
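For contrast, a rough sketch of what the same binding could look like as a @Provides method inside TextEditorModule before it grows complex enough to justify a separate Provider class. This sketch reuses the values from the example above and is illustrative, not part of the original listing:
import com.google.inject.Provides;

class TextEditorModule extends AbstractModule {
   @Override
   protected void configure() { }

   @Provides
   SpellChecker provideSpellChecker() {
      // Same configuration values as in SpellCheckerProvider above
      String dbUrl = "jdbc:mysql://localhost:5326/emp";
      String user = "user";
      int timeout = 100;
      return new SpellCheckerImpl(dbUrl, user, timeout);
   }
}
Moving this logic into SpellCheckerProvider, as shown above, keeps the module itself small.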
[ { "code": null, "e": 2215, "s": 2102, "text": "As @provides method becomes more complex, this method can be moved to separate classes using Provider interface." }, { "code": null, "e": 2544, "s": 2215, "text": "class SpellCheckerProvider implements Provider<SpellChecker> {\n @Override\n public SpellChecker get() {\n String dbUrl = \"jdbc:mysql://localhost:5326/emp\";\n String user = \"user\";\n int timeout = 100;\n SpellChecker SpellChecker = new SpellCheckerImpl(dbUrl, user, timeout);\n return SpellChecker;\n } \n}" }, { "code": null, "e": 2588, "s": 2544, "text": "Next, you have to map the provider to type." }, { "code": null, "e": 2654, "s": 2588, "text": "bind(SpellChecker.class).toProvider(SpellCheckerProvider.class);\n" }, { "code": null, "e": 2686, "s": 2654, "text": "See the complete example below." }, { "code": null, "e": 2725, "s": 2686, "text": "Create a java class named GuiceTester." }, { "code": null, "e": 2742, "s": 2725, "text": "GuiceTester.java" }, { "code": null, "e": 4576, "s": 2742, "text": "import com.google.inject.AbstractModule;\nimport com.google.inject.Guice;\nimport com.google.inject.Inject;\nimport com.google.inject.Injector;\nimport com.google.inject.Provider;\n\npublic class GuiceTester {\n public static void main(String[] args) {\n Injector injector = Guice.createInjector(new TextEditorModule());\n TextEditor editor = injector.getInstance(TextEditor.class);\n editor.makeSpellCheck();\n } \n}\nclass TextEditor {\n private SpellChecker spellChecker;\n \n @Inject\n public TextEditor( SpellChecker spellChecker) {\n this.spellChecker = spellChecker;\n }\n public void makeSpellCheck() {\n spellChecker.checkSpelling();\n } \n}\n\n//Binding Module\nclass TextEditorModule extends AbstractModule {\n @Override\n \n protected void configure() {\n bind(SpellChecker.class).toProvider(SpellCheckerProvider.class);\n } \n}\n\n//spell checker interface\ninterface SpellChecker {\n public void checkSpelling();\n}\n\n//spell checker implementation\nclass SpellCheckerImpl implements SpellChecker {\n\n private String dbUrl;\n private String user;\n private Integer timeout;\n\n @Inject\n public SpellCheckerImpl(String dbUrl, \n String user, \n Integer timeout) {\n this.dbUrl = dbUrl;\n this.user = user;\n this.timeout = timeout;\n } \n @Override\n public void checkSpelling() { \n System.out.println(\"Inside checkSpelling.\" );\n System.out.println(dbUrl);\n System.out.println(user);\n System.out.println(timeout);\n }\n}\nclass SpellCheckerProvider implements Provider<SpellChecker> {\n @Override\n \n public SpellChecker get() {\n String dbUrl = \"jdbc:mysql://localhost:5326/emp\";\n String user = \"user\";\n int timeout = 100;\n\n SpellChecker SpellChecker = new SpellCheckerImpl(dbUrl, user, timeout);\n return SpellChecker;\n }\n}" }, { "code": null, "e": 4642, "s": 4576, "text": "Now, compile and run the file. You can see the following output −" }, { "code": null, "e": 4706, "s": 4642, "text": "Inside checkSpelling.\njdbc:mysql://localhost:5326/emp\nuser\n100\n" }, { "code": null, "e": 4741, "s": 4706, "text": "\n 27 Lectures \n 1.5 hours \n" }, { "code": null, "e": 4758, "s": 4741, "text": " Lemuel Ogbunude" }, { "code": null, "e": 4765, "s": 4758, "text": " Print" }, { "code": null, "e": 4776, "s": 4765, "text": " Add Notes" } ]
Count 1's in a sorted binary array - GeeksforGeeks
28 Feb, 2022 Given a binary array sorted in non-increasing order, count the number of 1’s in it. Examples: Input: arr[] = {1, 1, 0, 0, 0, 0, 0} Output: 2 Input: arr[] = {1, 1, 1, 1, 1, 1, 1} Output: 7 Input: arr[] = {0, 0, 0, 0, 0, 0, 0} Output: 0 A simple solution is to linearly traverse the array. The time complexity of the simple solution is O(n). We can use Binary Search to find count in O(Logn) time. The idea is to look for last occurrence of 1 using Binary Search. Once we find the index last occurrence, we return index + 1 as count.The following is the implementation of above idea. C++ Python3 Java C# PHP Javascript // C++ program to count one's in a boolean array#include <bits/stdc++.h>using namespace std; /* Returns counts of 1's in arr[low..high]. The array is assumed to be sorted in non-increasing order */int countOnes(bool arr[], int low, int high){ if (high >= low) { // get the middle index int mid = low + (high - low)/2; // check if the element at middle index is last 1 if ( (mid == high || arr[mid+1] == 0) && (arr[mid] == 1)) return mid+1; // If element is not last 1, recur for right side if (arr[mid] == 1) return countOnes(arr, (mid + 1), high); // else recur for left side return countOnes(arr, low, (mid -1)); } return 0;} /* Driver Code */int main(){ bool arr[] = {1, 1, 1, 1, 0, 0, 0}; int n = sizeof(arr)/sizeof(arr[0]); cout << "Count of 1's in given array is " << countOnes(arr, 0, n-1); return 0;} # Python program to count one's in a boolean array # Returns counts of 1's in arr[low..high]. The array is# assumed to be sorted in non-increasing orderdef countOnes(arr,low,high): if high>=low: # get the middle index mid = low + (high-low)//2 # check if the element at middle index is last 1 if ((mid == high or arr[mid+1]==0) and (arr[mid]==1)): return mid+1 # If element is not last 1, recur for right side if arr[mid]==1: return countOnes(arr, (mid+1), high) # else recur for left side return countOnes(arr, low, mid-1) return 0 # Driver Codearr=[1, 1, 1, 1, 0, 0, 0]print ("Count of 1's in given array is",countOnes(arr, 0 , len(arr)-1)) # This code is contributed by __Devesh Agrawal__ // Java program to count 1's in a sorted arrayclass CountOnes{ /* Returns counts of 1's in arr[low..high]. The array is assumed to be sorted in non-increasing order */ int countOnes(int arr[], int low, int high) { if (high >= low) { // get the middle index int mid = low + (high - low) / 2; // check if the element at middle index is last // 1 if ((mid == high || arr[mid + 1] == 0) && (arr[mid] == 1)) return mid + 1; // If element is not last 1, recur for right // side if (arr[mid] == 1) return countOnes(arr, (mid + 1), high); // else recur for left side return countOnes(arr, low, (mid - 1)); } return 0; } /* Driver code */ public static void main(String args[]) { CountOnes ob = new CountOnes(); int arr[] = { 1, 1, 1, 1, 0, 0, 0 }; int n = arr.length; System.out.println("Count of 1's in given array is " + ob.countOnes(arr, 0, n - 1)); }}/* This code is contributed by Rajat Mishra */ // C# program to count 1's in a sorted arrayusing System; class GFG { /* Returns counts of 1's in arr[low..high]. 
The array is assumed to be sorted in non-increasing order */ static int countOnes(int[] arr, int low, int high) { if (high >= low) { // get the middle index int mid = low + (high - low) / 2; // check if the element at middle // index is last 1 if ((mid == high || arr[mid + 1] == 0) && (arr[mid] == 1)) return mid + 1; // If element is not last 1, recur // for right side if (arr[mid] == 1) return countOnes(arr, (mid + 1), high); // else recur for left side return countOnes(arr, low, (mid - 1)); } return 0; } /* Driver code */ public static void Main() { int[] arr = { 1, 1, 1, 1, 0, 0, 0 }; int n = arr.Length; Console.WriteLine("Count of 1's in given " + "array is " + countOnes(arr, 0, n - 1)); }} // This code is contributed by Sam007 <?php// PHP program to count one's in a// boolean array /* Returns counts of 1's in arr[low..high].The array is assumed to be sorted innon-increasing order */function countOnes( $arr, $low, $high){ if ($high >= $low) { // get the middle index $mid = $low + ($high - $low)/2; // check if the element at middle // index is last 1 if ( ($mid == $high or $arr[$mid+1] == 0) and ($arr[$mid] == 1)) return $mid+1; // If element is not last 1, recur for // right side if ($arr[$mid] == 1) return countOnes($arr, ($mid + 1), $high); // else recur for left side return countOnes($arr, $low, ($mid -1)); } return 0;} /* Driver code */$arr = array(1, 1, 1, 1, 0, 0, 0);$n = count($arr);echo "Count of 1's in given array is " , countOnes($arr, 0, $n-1); // This code is contributed by anuj_67.?> <script> // Javascript program to count one's in a boolean array /* Returns counts of 1's in arr[low..high]. The array is assumed to be sorted in non-increasing order */function countOnes( arr, low, high){ if (high >= low) { // get the middle index let mid = Math.trunc(low + (high - low)/2); // check if the element at middle index is last 1 if ( (mid == high || arr[mid+1] == 0) && (arr[mid] == 1)) return mid+1; // If element is not last 1, recur for right side if (arr[mid] == 1) return countOnes(arr, (mid + 1), high); // else recur for left side return countOnes(arr, low, (mid -1)); } return 0;} // Driver program let arr = [ 1, 1, 1, 1, 0, 0, 0 ]; let n = arr.length; document.write("Count of 1's in given array is " + countOnes(arr, 0, n-1)); </script> Count of 1's in given array is 4 Time complexity of the above solution is O(Logn) Space complexity o(log n) (function call stack) The same approach with iterative solution would be C++ Java Python3 C# Javascript #include <bits/stdc++.h>using namespace std;/* Returns counts of 1's in arr[low..high]. 
The array is assumed to be sorted in non-increasing order */ int countOnes(bool arr[], int n){ int ans; int low = 0, high = n - 1; while (low <= high) { // get the middle index int mid = (low + high) / 2; // else recur for left side if (arr[mid] < 1) high = mid - 1; // If element is not last 1, recur for right side else if (arr[mid] > 1) low = mid + 1; else // check if the element at middle index is last 1 { if (mid == n - 1 || arr[mid + 1] != 1) return mid + 1; else low = mid + 1; } }} int main(){ bool arr[] = { 1, 1, 1, 1, 0, 0, 0 }; int n = sizeof(arr) / sizeof(arr[0]); cout << "Count of 1's in given array is " << countOnes(arr, n); return 0;} /*package whatever //do not write package name here */import java.io.*; class GFG{ static int countOnes(int arr[], int n){ int ans; int low = 0, high = n - 1; while (low <= high) { // get the middle index int mid = (low + high) / 2; // else recur for left side if (arr[mid] < 1) high = mid - 1; // If element is not last 1, recur for right side else if (arr[mid] > 1) low = mid + 1; else // check if the element at middle index is last 1 { if (mid == n - 1 || arr[mid + 1] != 1) return mid + 1; else low = mid + 1; } } return 0;} // Driver code public static void main (String[] args) { int arr[] = { 1, 1, 1, 1, 0, 0, 0 }; int n = arr.length; System.out.println("Count of 1's in given array is "+ countOnes(arr, n)); }} // This code is contributed by patel2127. '''package whatever #do not write package name here '''def countOnes(arr, n): low = 0; high = n - 1; while (low <= high): # get the middle index mid = (low + high) // 2; # else recur for left side if (arr[mid] < 1): high = mid - 1; # If element is not last 1, recur for right side elif(arr[mid] > 1): low = mid + 1; else: # check if the element at middle index is last 1 if (mid == n - 1 or arr[mid + 1] != 1): return mid + 1; else: low = mid + 1; return 0; # Driver codeif __name__ == '__main__': arr = [ 1, 1, 1, 1, 0, 0, 0 ]; n = len(arr); print("Count of 1's in given array is " , countOnes(arr, n)); # This code is contributed by umadevi9616 /*package whatever //do not write package name here */using System;public class GFG { static int countOnes(int []arr, int n) { int low = 0, high = n - 1; while (low <= high) { // get the middle index int mid = (low + high) / 2; // else recur for left side if (arr[mid] < 1) high = mid - 1; // If element is not last 1, recur for right side else if (arr[mid] > 1) low = mid + 1; else // check if the element at middle index is last 1 { if (mid == n - 1 || arr[mid + 1] != 1) return mid + 1; else low = mid + 1; } } return 0; } // Driver code public static void Main(String[] args) { int []arr = { 1, 1, 1, 1, 0, 0, 0 }; int n = arr.Length; Console.WriteLine("Count of 1's in given array is " + countOnes(arr, n)); }} // This code is contributed by umadevi9616 <script>/* Returns counts of 1's in arr[low..high]. 
The array is assumed to be sorted in non-increasing order */ function countOnes(arr, n){ let ans; let low = 0, high = n - 1; while (low <= high) { // get the middle index let mid = Math.floor((low + high) / 2); // else recur for left side if (arr[mid] < 1) high = mid - 1; // If element is not last 1, recur for right side else if (arr[mid] > 1) low = mid + 1; else // check if the element at middle index is last 1 { if (mid == n - 1 || arr[mid + 1] != 1) return mid + 1; else low = mid + 1; } }} let arr=[ 1, 1, 1, 1, 0, 0, 0];let n = arr.length;document.write( "Count of 1's in given array is "+ countOnes(arr, n)); // This code is contributed by unknown2108</script> Count of 1's in given array is 4 Time complexity of the above solution is O(Logn) Space complexity is O(1) The std::upper_bound() function returns an iterator to the position just after the last occurrence of the element we pass to it. We can use it here because the array is sorted in non-increasing order. int arr[] = {1,1,1,1,0,0,0,0}; int size = sizeof(arr)/sizeof(arr[0]); auto ptr = upper_bound( arr, arr + size, 1, greater<int>() ); upper_bound() will return the iterator to the index after the last occurrence of 1. Here, ptr will point to index 4 of the array, i.e., just after the last 1. We need to use the greater<int>() comparator since the array is reverse sorted, i.e., 1s occur before 0s. Now, subtracting the pointer to the array beginning from ptr (ptr - arr) gives us the number of 1s, because (pointer to the first 0) - (pointer to the array beginning) = number of 1s, just as index of the first 0 (4) - index of the array beginning (0) = number of 1s. int no_of_1 = ptr - arr; C++ #include <bits/stdc++.h> using namespace std; int main(){ int arr[] = { 1, 1, 1, 1, 0, 0, 0, 0, 0 }; int size = sizeof(arr) / sizeof(arr[0]); // Pointer to the first occurrence of zero auto ptr = upper_bound(arr, arr + size, 1, greater<int>()); cout << "Count of 1's in given array is " << (ptr - arr); return 0;} Count of 1's in given array is 4 The time complexity of the above solution is O(Logn). Space complexity is O(1).
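As an aside (not part of the original article), the same upper-bound idea can be sketched in Python 3.10+, where the bisect functions accept a key function; because the array is non-increasing, comparing on negated values makes it non-decreasing:
import bisect
arr = [1, 1, 1, 1, 0, 0, 0]
# bisect_right finds the first position whose negated value exceeds -1, i.e. the first 0
count_of_ones = bisect.bisect_right(arr, -1, key=lambda v: -v)
print("Count of 1's in given array is", count_of_ones)  # 4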
[ { "code": null, "e": 24968, "s": 24940, "text": "\n28 Feb, 2022" }, { "code": null, "e": 25053, "s": 24968, "text": "Given a binary array sorted in non-increasing order, count the number of 1’s in it. " }, { "code": null, "e": 25064, "s": 25053, "text": "Examples: " }, { "code": null, "e": 25207, "s": 25064, "text": "Input: arr[] = {1, 1, 0, 0, 0, 0, 0}\nOutput: 2\n\nInput: arr[] = {1, 1, 1, 1, 1, 1, 1}\nOutput: 7\n\nInput: arr[] = {0, 0, 0, 0, 0, 0, 0}\nOutput: 0" }, { "code": null, "e": 25555, "s": 25207, "text": "A simple solution is to linearly traverse the array. The time complexity of the simple solution is O(n). We can use Binary Search to find count in O(Logn) time. The idea is to look for last occurrence of 1 using Binary Search. Once we find the index last occurrence, we return index + 1 as count.The following is the implementation of above idea. " }, { "code": null, "e": 25559, "s": 25555, "text": "C++" }, { "code": null, "e": 25567, "s": 25559, "text": "Python3" }, { "code": null, "e": 25572, "s": 25567, "text": "Java" }, { "code": null, "e": 25575, "s": 25572, "text": "C#" }, { "code": null, "e": 25579, "s": 25575, "text": "PHP" }, { "code": null, "e": 25590, "s": 25579, "text": "Javascript" }, { "code": "// C++ program to count one's in a boolean array#include <bits/stdc++.h>using namespace std; /* Returns counts of 1's in arr[low..high]. The array is assumed to be sorted in non-increasing order */int countOnes(bool arr[], int low, int high){ if (high >= low) { // get the middle index int mid = low + (high - low)/2; // check if the element at middle index is last 1 if ( (mid == high || arr[mid+1] == 0) && (arr[mid] == 1)) return mid+1; // If element is not last 1, recur for right side if (arr[mid] == 1) return countOnes(arr, (mid + 1), high); // else recur for left side return countOnes(arr, low, (mid -1)); } return 0;} /* Driver Code */int main(){ bool arr[] = {1, 1, 1, 1, 0, 0, 0}; int n = sizeof(arr)/sizeof(arr[0]); cout << \"Count of 1's in given array is \" << countOnes(arr, 0, n-1); return 0;}", "e": 26451, "s": 25590, "text": null }, { "code": "# Python program to count one's in a boolean array # Returns counts of 1's in arr[low..high]. The array is# assumed to be sorted in non-increasing orderdef countOnes(arr,low,high): if high>=low: # get the middle index mid = low + (high-low)//2 # check if the element at middle index is last 1 if ((mid == high or arr[mid+1]==0) and (arr[mid]==1)): return mid+1 # If element is not last 1, recur for right side if arr[mid]==1: return countOnes(arr, (mid+1), high) # else recur for left side return countOnes(arr, low, mid-1) return 0 # Driver Codearr=[1, 1, 1, 1, 0, 0, 0]print (\"Count of 1's in given array is\",countOnes(arr, 0 , len(arr)-1)) # This code is contributed by __Devesh Agrawal__", "e": 27282, "s": 26451, "text": null }, { "code": "// Java program to count 1's in a sorted arrayclass CountOnes{ /* Returns counts of 1's in arr[low..high]. 
The array is assumed to be sorted in non-increasing order */ int countOnes(int arr[], int low, int high) { if (high >= low) { // get the middle index int mid = low + (high - low) / 2; // check if the element at middle index is last // 1 if ((mid == high || arr[mid + 1] == 0) && (arr[mid] == 1)) return mid + 1; // If element is not last 1, recur for right // side if (arr[mid] == 1) return countOnes(arr, (mid + 1), high); // else recur for left side return countOnes(arr, low, (mid - 1)); } return 0; } /* Driver code */ public static void main(String args[]) { CountOnes ob = new CountOnes(); int arr[] = { 1, 1, 1, 1, 0, 0, 0 }; int n = arr.length; System.out.println(\"Count of 1's in given array is \" + ob.countOnes(arr, 0, n - 1)); }}/* This code is contributed by Rajat Mishra */", "e": 28454, "s": 27282, "text": null }, { "code": "// C# program to count 1's in a sorted arrayusing System; class GFG { /* Returns counts of 1's in arr[low..high]. The array is assumed to be sorted in non-increasing order */ static int countOnes(int[] arr, int low, int high) { if (high >= low) { // get the middle index int mid = low + (high - low) / 2; // check if the element at middle // index is last 1 if ((mid == high || arr[mid + 1] == 0) && (arr[mid] == 1)) return mid + 1; // If element is not last 1, recur // for right side if (arr[mid] == 1) return countOnes(arr, (mid + 1), high); // else recur for left side return countOnes(arr, low, (mid - 1)); } return 0; } /* Driver code */ public static void Main() { int[] arr = { 1, 1, 1, 1, 0, 0, 0 }; int n = arr.Length; Console.WriteLine(\"Count of 1's in given \" + \"array is \" + countOnes(arr, 0, n - 1)); }} // This code is contributed by Sam007", "e": 29601, "s": 28454, "text": null }, { "code": "<?php// PHP program to count one's in a// boolean array /* Returns counts of 1's in arr[low..high].The array is assumed to be sorted innon-increasing order */function countOnes( $arr, $low, $high){ if ($high >= $low) { // get the middle index $mid = $low + ($high - $low)/2; // check if the element at middle // index is last 1 if ( ($mid == $high or $arr[$mid+1] == 0) and ($arr[$mid] == 1)) return $mid+1; // If element is not last 1, recur for // right side if ($arr[$mid] == 1) return countOnes($arr, ($mid + 1), $high); // else recur for left side return countOnes($arr, $low, ($mid -1)); } return 0;} /* Driver code */$arr = array(1, 1, 1, 1, 0, 0, 0);$n = count($arr);echo \"Count of 1's in given array is \" , countOnes($arr, 0, $n-1); // This code is contributed by anuj_67.?>", "e": 30597, "s": 29601, "text": null }, { "code": "<script> // Javascript program to count one's in a boolean array /* Returns counts of 1's in arr[low..high]. 
The array is assumed to be sorted in non-increasing order */function countOnes( arr, low, high){ if (high >= low) { // get the middle index let mid = Math.trunc(low + (high - low)/2); // check if the element at middle index is last 1 if ( (mid == high || arr[mid+1] == 0) && (arr[mid] == 1)) return mid+1; // If element is not last 1, recur for right side if (arr[mid] == 1) return countOnes(arr, (mid + 1), high); // else recur for left side return countOnes(arr, low, (mid -1)); } return 0;} // Driver program let arr = [ 1, 1, 1, 1, 0, 0, 0 ]; let n = arr.length; document.write(\"Count of 1's in given array is \" + countOnes(arr, 0, n-1)); </script>", "e": 31440, "s": 30597, "text": null }, { "code": null, "e": 31473, "s": 31440, "text": "Count of 1's in given array is 4" }, { "code": null, "e": 31522, "s": 31473, "text": "Time complexity of the above solution is O(Logn)" }, { "code": null, "e": 31570, "s": 31522, "text": "Space complexity o(log n) (function call stack)" }, { "code": null, "e": 31621, "s": 31570, "text": "The same approach with iterative solution would be" }, { "code": null, "e": 31625, "s": 31621, "text": "C++" }, { "code": null, "e": 31630, "s": 31625, "text": "Java" }, { "code": null, "e": 31638, "s": 31630, "text": "Python3" }, { "code": null, "e": 31641, "s": 31638, "text": "C#" }, { "code": null, "e": 31652, "s": 31641, "text": "Javascript" }, { "code": "#include <bits/stdc++.h>using namespace std;/* Returns counts of 1's in arr[low..high]. The array is assumed to be sorted in non-increasing order */ int countOnes(bool arr[], int n){ int ans; int low = 0, high = n - 1; while (low <= high) { // get the middle index int mid = (low + high) / 2; // else recur for left side if (arr[mid] < 1) high = mid - 1; // If element is not last 1, recur for right side else if (arr[mid] > 1) low = mid + 1; else // check if the element at middle index is last 1 { if (mid == n - 1 || arr[mid + 1] != 1) return mid + 1; else low = mid + 1; } }} int main(){ bool arr[] = { 1, 1, 1, 1, 0, 0, 0 }; int n = sizeof(arr) / sizeof(arr[0]); cout << \"Count of 1's in given array is \" << countOnes(arr, n); return 0;}", "e": 32568, "s": 31652, "text": null }, { "code": "/*package whatever //do not write package name here */import java.io.*; class GFG{ static int countOnes(int arr[], int n){ int ans; int low = 0, high = n - 1; while (low <= high) { // get the middle index int mid = (low + high) / 2; // else recur for left side if (arr[mid] < 1) high = mid - 1; // If element is not last 1, recur for right side else if (arr[mid] > 1) low = mid + 1; else // check if the element at middle index is last 1 { if (mid == n - 1 || arr[mid + 1] != 1) return mid + 1; else low = mid + 1; } } return 0;} // Driver code public static void main (String[] args) { int arr[] = { 1, 1, 1, 1, 0, 0, 0 }; int n = arr.length; System.out.println(\"Count of 1's in given array is \"+ countOnes(arr, n)); }} // This code is contributed by patel2127.", "e": 33558, "s": 32568, "text": null }, { "code": "'''package whatever #do not write package name here '''def countOnes(arr, n): low = 0; high = n - 1; while (low <= high): # get the middle index mid = (low + high) // 2; # else recur for left side if (arr[mid] < 1): high = mid - 1; # If element is not last 1, recur for right side elif(arr[mid] > 1): low = mid + 1; else: # check if the element at middle index is last 1 if (mid == n - 1 or arr[mid + 1] != 1): return mid + 1; else: low = mid + 1; return 0; # Driver codeif __name__ == '__main__': arr = [ 1, 1, 1, 1, 0, 0, 0 ]; n = 
len(arr); print(\"Count of 1's in given array is \" , countOnes(arr, n)); # This code is contributed by umadevi9616", "e": 34363, "s": 33558, "text": null }, { "code": "/*package whatever //do not write package name here */using System;public class GFG { static int countOnes(int []arr, int n) { int low = 0, high = n - 1; while (low <= high) { // get the middle index int mid = (low + high) / 2; // else recur for left side if (arr[mid] < 1) high = mid - 1; // If element is not last 1, recur for right side else if (arr[mid] > 1) low = mid + 1; else // check if the element at middle index is last 1 { if (mid == n - 1 || arr[mid + 1] != 1) return mid + 1; else low = mid + 1; } } return 0; } // Driver code public static void Main(String[] args) { int []arr = { 1, 1, 1, 1, 0, 0, 0 }; int n = arr.Length; Console.WriteLine(\"Count of 1's in given array is \" + countOnes(arr, n)); }} // This code is contributed by umadevi9616", "e": 35249, "s": 34363, "text": null }, { "code": "<script>/* Returns counts of 1's in arr[low..high]. The array is assumed to be sorted in non-increasing order */ function countOnes(arr,n){ let ans; let low = 0, high = n - 1; while (low <= high) { // get the middle index let mid = Math.floor((low + high) / 2); // else recur for left side if (arr[mid] < 1) high = mid - 1; // If element is not last 1, recur for right side else if (arr[mid] > 1) low = mid + 1; else // check if the element at middle index is last 1 { if (mid == n - 1 || arr[mid + 1] != 1) return mid + 1; else low = mid + 1; } }} let arr=[ 1, 1, 1, 1, 0, 0, 0];let n = arr.length;document.write( \"Count of 1's in given array is \"+ countOnes(arr, n)); // This code is contributed by unknown2108</script>", "e": 36126, "s": 35249, "text": null }, { "code": null, "e": 36159, "s": 36126, "text": "Count of 1's in given array is 4" }, { "code": null, "e": 36208, "s": 36159, "text": "Time complexity of the above solution is O(Logn)" }, { "code": null, "e": 36233, "s": 36208, "text": "Space complexity is O(1)" }, { "code": null, "e": 36357, "s": 36233, "text": "The function will return the pointer to the index after the last occurrence of the element which we pass into the function." }, { "code": null, "e": 36425, "s": 36357, "text": "We can use it here as the array is sorted in non-increasing order. " }, { "code": null, "e": 36557, "s": 36425, "text": "int arr[] = {1,1,1,1,0,0,0,0};\nint size = sizeof(arr)/sizeof(arr[0]);\nauto ptr = upper_bound( arr, arr + size, 1, greater<int>() );" }, { "code": null, "e": 36642, "s": 36557, "text": "upper_bound() will return the iterator to the index after the last occurrence of 1. " }, { "code": null, "e": 36711, "s": 36642, "text": "Here, ptr will point to index 4 of the array, i.e, after the last 1." }, { "code": null, "e": 36814, "s": 36711, "text": "We need to use the greater<int>() function since the array is reverse sorted, i.e, 1s occur before 0s." }, { "code": null, "e": 36920, "s": 36814, "text": "Now, subtracting the pointer to the array beginning from ptr ( ptr – arr ) will give us the number of 1s." 
}, { "code": null, "e": 36995, "s": 36920, "text": "since the (pointer to first 0 – pointer to array beginning) = number of 1s" }, { "code": null, "e": 37073, "s": 36995, "text": "just like index of first 0 (4) – index of array beginning (0) = number of 1s " }, { "code": null, "e": 37098, "s": 37073, "text": "int no_of_1 = ptr - arr;" }, { "code": null, "e": 37102, "s": 37098, "text": "C++" }, { "code": "#include <bits/stdc++.h>using namespace std; int main(){ int arr[] = { 1, 1, 1, 1, 0, 0, 0, 0, 0 }; int size = sizeof(arr) / sizeof(arr[0]); // Pointer to the first occurence of zero auto ptr = upper_bound(arr, arr + size, 1, greater<int>()); cout << \"Count of 1's in given array is \" << (ptr - arr); return 0;}", "e": 37451, "s": 37102, "text": null }, { "code": null, "e": 37484, "s": 37451, "text": "Count of 1's in given array is 4" }, { "code": null, "e": 37538, "s": 37484, "text": "The time complexity of the above solution is O(Logn)." }, { "code": null, "e": 37564, "s": 37538, "text": "Space complexity is O(1)." }, { "code": null, "e": 38397, "s": 37564, "text": "YouTubeGeeksforGeeks502K subscribersCount 1’s in a sorted binary array | GeeksforGeeksWatch laterShareCopy linkInfoShoppingTap to unmuteIf playback doesn't begin shortly, try restarting your device.You're signed outVideos you watch may be added to the TV's watch history and influence TV recommendations. To avoid this, cancel and sign in to YouTube on your computer.CancelConfirmMore videosMore videosSwitch cameraShareInclude playlistAn error occurred while retrieving sharing information. Please try again later.Watch on0:000:000:00 / 2:44•Live•<div class=\"player-unavailable\"><h1 class=\"message\">An error occurred.</h1><div class=\"submessage\"><a href=\"https://www.youtube.com/watch?v=EoO9c9UOyww\" target=\"_blank\">Try watching this video on www.youtube.com</a>, or enable JavaScript if it is disabled in your browser.</div></div>" }, { "code": null, "e": 38522, "s": 38397, "text": "Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above." }, { "code": null, "e": 38529, "s": 38522, "text": "Sam007" }, { "code": null, "e": 38534, "s": 38529, "text": "vt_m" }, { "code": null, "e": 38549, "s": 38534, "text": "jeetpareshshah" }, { "code": null, "e": 38563, "s": 38549, "text": "jana_sayantan" }, { "code": null, "e": 38577, "s": 38563, "text": "anushikasethh" }, { "code": null, "e": 38589, "s": 38577, "text": "unknown2108" }, { "code": null, "e": 38599, "s": 38589, "text": "patel2127" }, { "code": null, "e": 38611, "s": 38599, "text": "umadevi9616" }, { "code": null, "e": 38627, "s": 38611, "text": "amartyaghoshgfg" }, { "code": null, "e": 38646, "s": 38627, "text": "sudip5banerjee1974" }, { "code": null, "e": 38660, "s": 38646, "text": "Binary Search" }, { "code": null, "e": 38674, "s": 38660, "text": "binary-string" }, { "code": null, "e": 38684, "s": 38674, "text": "Searching" }, { "code": null, "e": 38694, "s": 38684, "text": "Searching" }, { "code": null, "e": 38708, "s": 38694, "text": "Binary Search" }, { "code": null, "e": 38806, "s": 38708, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 38842, "s": 38806, "text": "Best First Search (Informed Search)" }, { "code": null, "e": 38893, "s": 38842, "text": "3 Different ways to print Fibonacci series in Java" }, { "code": null, "e": 38932, "s": 38893, "text": "Program to remove vowels from a String" }, { "code": null, "e": 38998, "s": 38932, "text": "Find whether an array is subset of another array | Added Method 5" }, { "code": null, "e": 39042, "s": 38998, "text": "Find common elements in three sorted arrays" }, { "code": null, "e": 39064, "s": 39042, "text": "Non-Repeating Element" }, { "code": null, "e": 39129, "s": 39064, "text": "Recursive Programs to find Minimum and Maximum elements of array" }, { "code": null, "e": 39158, "s": 39129, "text": "Find closest number in array" }, { "code": null, "e": 39202, "s": 39158, "text": "Search, insert and delete in a sorted array" } ]
A Practical Guide on Missing Values with Pandas | by Soner Yıldırım | Towards Data Science
Missing values indicate we do not have the information about a feature (column) of a particular observation (row). Why not just remove that observation from the dataset and go ahead? We can but should not. The reasons are: We typically have many features of an observation so we don’t want to lose the observation just because of one missing feature. Data is valuable. We typically have more than one observation with missing values. In some cases, we cannot afford to remove many observations from the dataset. Again, data is valuable. In this post, we will go through how to detect and handle missing values as well as some key points to keep in mind. The outline of the post: Missing value markers Detecting missing values Calculations with missing values Handling missing values As always, we start with importing numpy and pandas. import numpy as npimport pandas as pd The default missing value representation in Pandas is NaN but Python’s None is also detected as missing value. s = pd.Series([1, 3, 4, np.nan, None, 8])s Although we created a series with integers, the values are upcasted to float because np.nan is float. A new representation for missing values is introduced with Pandas 1.0 which is <NA>. It can be used with integers without causing upcasting. We need to explicitly request the dtype to be pd.Int64Dtype(). s = pd.Series([1, 3, 4, np.nan, None, 8], dtype=pd.Int64Dtype())s The integer values are not upcasted to float. Another missing value representation is NaT which is used to represent datetime64[ns] datatypes. Note: np.nan’s do not compare equal whereas None’s are considered as equal. Note: Not all missing values come in nice and clean np.nan or None format. For example, the dataset we work on may include “?” and “- -“ values in some cells. We can convert them to np.nan representation when reading the dataset into a pandas dataframe. We just need to pass these values to na_values parameter. Let’s first create a sample dataframe and add some missing values to it. df = pd.DataFrame({'col_a':np.random.randint(10, size=8),'col_b':np.random.random(8),'col_c':[True, False, True, False, False, True, True, False],'col_d':pd.date_range('2020-01-01', periods=8),'col_e':['A','A','A','B','B','B','C','C']})df.iloc[2:4, 1:2] = np.nandf.iloc[3:5, 3] = np.nandf.iloc[[1,4,6], 0] = np.nandf As we mentioned earlier, NaT is used to represent datetime missing values. isna() returns the dataframe indicating missing values with booleans. isna().sum() returns the number of missing values in each column. notna is the opposite of isna so notna().sum() returns the number of non-missing values. isna().any() returns a boolean value for each column. If there is at least one missing value in that column, the result is True. Arithmetical operations between np.nan and numbers return np.nan. df['sum_a_b'] = df['col_a'] + df['col_b']df Cumulative methods like cumsum and cumprod ignore missing values by default but they preserve the positions of missing values. df[['col_a','col_b']].cumsum() We can change this behavior by setting skipna parameter as False. df[['col_a','col_b']].cumsum(skipna=False) The missing values are included in the summation now. Thus, all the values after the first nan are also nan. Groupby function excludes missing values by default. df[['col_e','col_a']].groupby('col_e').sum() There are mainly two ways to handle missing values. We can either drop the missing values or replace them with an appropriate value. 
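Picking up the earlier note about markers such as "?" and "- -": a minimal hedged sketch of converting them at read time (the file name and marker list here are made up for illustration):
import pandas as pd
# Every cell matching one of these markers is parsed as NaN while the file is read
df_raw = pd.read_csv("data.csv", na_values=["?", "- -"])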
The better option is to replace missing values but in some cases, we may need to drop them. Dropping missing values We can drop a row or column with missing values using dropna() function. how parameter is used to set condition to drop. how=’any’ : drop if there is any missing value how=’all’ : drop if all values are missing Let’s first modify our dataframe a little: df.iloc[7,:] = np.nandf how=’any’ will drop all rows except for the first and sixth: how=’all’ will only drop the last row: Note: In order to save these changes in the original dataframe, we need to set inplace parameter as True. Using thresh parameter, we can set a threshold for missing values in order for a row/column to be dropped. Dropna also does column-wise operation if axis parameter is set to 1. Replacing missing values fillna() function of Pandas conveniently handles missing values. Using fillna(), missing values can be replaced by a special value or an aggreate value such as mean, median. Furthermore, missing values can be replaced with the value before or after it which is pretty useful for time-series datasets. We can select one value to replace all missing values in a dataframe but it does not make any sense. Instead, we can create a dictionary indicating a separate value to be used in different columns. replacements = {'col_a':0, 'col_b':0.5, 'col_e':'Other'}df.fillna(replacements) We can use an aggregate function as the value to replace missing values: df['col_b'].fillna(df['col_b'].mean()) We can also fill missing values with the values before or after them using the method parameter. ffill stands for “forward fill” and replaces missing values with the values in the previous row. As the name suggests, bfill (backward fill) does the opposite. If there are many consecutive missing values in a column or row, we can use limit parameter to limit the number of missing values to be forward or backward filled. All the missing values are filled with the values in the previous cell. Let’s limit the number of missing values to be filled as 1: Let’s try to fill missing values using bfill method: The values at the end remain as missing because there are no values after them. interpolate fills missing values by interpolation which is especially useful for sequential or time series data. The default method is linear but it can be changed using method parameter. Some available options are polynomial, quadratic, cubic. Let’s do an example with linear interpolation. s = pd.Series(np.random.random(50))s[4, 5, 9, 11, 18, 19, 33, 34, 46, 47, 48] = np.nans.plot() ts.interpolate().plot() Thank you for reading. Please let me know if you have any feedback.
[ { "code": null, "e": 395, "s": 172, "text": "Missing values indicate we do not have the information about a feature (column) of a particular observation (row). Why not just remove that observation from the dataset and go ahead? We can but should not. The reasons are:" }, { "code": null, "e": 541, "s": 395, "text": "We typically have many features of an observation so we don’t want to lose the observation just because of one missing feature. Data is valuable." }, { "code": null, "e": 709, "s": 541, "text": "We typically have more than one observation with missing values. In some cases, we cannot afford to remove many observations from the dataset. Again, data is valuable." }, { "code": null, "e": 826, "s": 709, "text": "In this post, we will go through how to detect and handle missing values as well as some key points to keep in mind." }, { "code": null, "e": 851, "s": 826, "text": "The outline of the post:" }, { "code": null, "e": 873, "s": 851, "text": "Missing value markers" }, { "code": null, "e": 898, "s": 873, "text": "Detecting missing values" }, { "code": null, "e": 931, "s": 898, "text": "Calculations with missing values" }, { "code": null, "e": 955, "s": 931, "text": "Handling missing values" }, { "code": null, "e": 1008, "s": 955, "text": "As always, we start with importing numpy and pandas." }, { "code": null, "e": 1046, "s": 1008, "text": "import numpy as npimport pandas as pd" }, { "code": null, "e": 1157, "s": 1046, "text": "The default missing value representation in Pandas is NaN but Python’s None is also detected as missing value." }, { "code": null, "e": 1200, "s": 1157, "text": "s = pd.Series([1, 3, 4, np.nan, None, 8])s" }, { "code": null, "e": 1506, "s": 1200, "text": "Although we created a series with integers, the values are upcasted to float because np.nan is float. A new representation for missing values is introduced with Pandas 1.0 which is <NA>. It can be used with integers without causing upcasting. We need to explicitly request the dtype to be pd.Int64Dtype()." }, { "code": null, "e": 1572, "s": 1506, "text": "s = pd.Series([1, 3, 4, np.nan, None, 8], dtype=pd.Int64Dtype())s" }, { "code": null, "e": 1618, "s": 1572, "text": "The integer values are not upcasted to float." }, { "code": null, "e": 1715, "s": 1618, "text": "Another missing value representation is NaT which is used to represent datetime64[ns] datatypes." }, { "code": null, "e": 1791, "s": 1715, "text": "Note: np.nan’s do not compare equal whereas None’s are considered as equal." }, { "code": null, "e": 2103, "s": 1791, "text": "Note: Not all missing values come in nice and clean np.nan or None format. For example, the dataset we work on may include “?” and “- -“ values in some cells. We can convert them to np.nan representation when reading the dataset into a pandas dataframe. We just need to pass these values to na_values parameter." }, { "code": null, "e": 2176, "s": 2103, "text": "Let’s first create a sample dataframe and add some missing values to it." }, { "code": null, "e": 2493, "s": 2176, "text": "df = pd.DataFrame({'col_a':np.random.randint(10, size=8),'col_b':np.random.random(8),'col_c':[True, False, True, False, False, True, True, False],'col_d':pd.date_range('2020-01-01', periods=8),'col_e':['A','A','A','B','B','B','C','C']})df.iloc[2:4, 1:2] = np.nandf.iloc[3:5, 3] = np.nandf.iloc[[1,4,6], 0] = np.nandf" }, { "code": null, "e": 2568, "s": 2493, "text": "As we mentioned earlier, NaT is used to represent datetime missing values." 
}, { "code": null, "e": 2638, "s": 2568, "text": "isna() returns the dataframe indicating missing values with booleans." }, { "code": null, "e": 2704, "s": 2638, "text": "isna().sum() returns the number of missing values in each column." }, { "code": null, "e": 2793, "s": 2704, "text": "notna is the opposite of isna so notna().sum() returns the number of non-missing values." }, { "code": null, "e": 2922, "s": 2793, "text": "isna().any() returns a boolean value for each column. If there is at least one missing value in that column, the result is True." }, { "code": null, "e": 2988, "s": 2922, "text": "Arithmetical operations between np.nan and numbers return np.nan." }, { "code": null, "e": 3032, "s": 2988, "text": "df['sum_a_b'] = df['col_a'] + df['col_b']df" }, { "code": null, "e": 3159, "s": 3032, "text": "Cumulative methods like cumsum and cumprod ignore missing values by default but they preserve the positions of missing values." }, { "code": null, "e": 3190, "s": 3159, "text": "df[['col_a','col_b']].cumsum()" }, { "code": null, "e": 3256, "s": 3190, "text": "We can change this behavior by setting skipna parameter as False." }, { "code": null, "e": 3299, "s": 3256, "text": "df[['col_a','col_b']].cumsum(skipna=False)" }, { "code": null, "e": 3408, "s": 3299, "text": "The missing values are included in the summation now. Thus, all the values after the first nan are also nan." }, { "code": null, "e": 3461, "s": 3408, "text": "Groupby function excludes missing values by default." }, { "code": null, "e": 3506, "s": 3461, "text": "df[['col_e','col_a']].groupby('col_e').sum()" }, { "code": null, "e": 3731, "s": 3506, "text": "There are mainly two ways to handle missing values. We can either drop the missing values or replace them with an appropriate value. The better option is to replace missing values but in some cases, we may need to drop them." }, { "code": null, "e": 3755, "s": 3731, "text": "Dropping missing values" }, { "code": null, "e": 3876, "s": 3755, "text": "We can drop a row or column with missing values using dropna() function. how parameter is used to set condition to drop." }, { "code": null, "e": 3923, "s": 3876, "text": "how=’any’ : drop if there is any missing value" }, { "code": null, "e": 3966, "s": 3923, "text": "how=’all’ : drop if all values are missing" }, { "code": null, "e": 4009, "s": 3966, "text": "Let’s first modify our dataframe a little:" }, { "code": null, "e": 4033, "s": 4009, "text": "df.iloc[7,:] = np.nandf" }, { "code": null, "e": 4094, "s": 4033, "text": "how=’any’ will drop all rows except for the first and sixth:" }, { "code": null, "e": 4133, "s": 4094, "text": "how=’all’ will only drop the last row:" }, { "code": null, "e": 4239, "s": 4133, "text": "Note: In order to save these changes in the original dataframe, we need to set inplace parameter as True." }, { "code": null, "e": 4416, "s": 4239, "text": "Using thresh parameter, we can set a threshold for missing values in order for a row/column to be dropped. Dropna also does column-wise operation if axis parameter is set to 1." }, { "code": null, "e": 4441, "s": 4416, "text": "Replacing missing values" }, { "code": null, "e": 4742, "s": 4441, "text": "fillna() function of Pandas conveniently handles missing values. Using fillna(), missing values can be replaced by a special value or an aggreate value such as mean, median. Furthermore, missing values can be replaced with the value before or after it which is pretty useful for time-series datasets." 
}, { "code": null, "e": 4940, "s": 4742, "text": "We can select one value to replace all missing values in a dataframe but it does not make any sense. Instead, we can create a dictionary indicating a separate value to be used in different columns." }, { "code": null, "e": 5020, "s": 4940, "text": "replacements = {'col_a':0, 'col_b':0.5, 'col_e':'Other'}df.fillna(replacements)" }, { "code": null, "e": 5093, "s": 5020, "text": "We can use an aggregate function as the value to replace missing values:" }, { "code": null, "e": 5132, "s": 5093, "text": "df['col_b'].fillna(df['col_b'].mean())" }, { "code": null, "e": 5553, "s": 5132, "text": "We can also fill missing values with the values before or after them using the method parameter. ffill stands for “forward fill” and replaces missing values with the values in the previous row. As the name suggests, bfill (backward fill) does the opposite. If there are many consecutive missing values in a column or row, we can use limit parameter to limit the number of missing values to be forward or backward filled." }, { "code": null, "e": 5685, "s": 5553, "text": "All the missing values are filled with the values in the previous cell. Let’s limit the number of missing values to be filled as 1:" }, { "code": null, "e": 5738, "s": 5685, "text": "Let’s try to fill missing values using bfill method:" }, { "code": null, "e": 5818, "s": 5738, "text": "The values at the end remain as missing because there are no values after them." }, { "code": null, "e": 6110, "s": 5818, "text": "interpolate fills missing values by interpolation which is especially useful for sequential or time series data. The default method is linear but it can be changed using method parameter. Some available options are polynomial, quadratic, cubic. Let’s do an example with linear interpolation." }, { "code": null, "e": 6205, "s": 6110, "text": "s = pd.Series(np.random.random(50))s[4, 5, 9, 11, 18, 19, 33, 34, 46, 47, 48] = np.nans.plot()" }, { "code": null, "e": 6229, "s": 6205, "text": "ts.interpolate().plot()" } ]
C program to perform union operation on two arrays
Note that union here means the set operation on the arrays' elements, not the C union keyword (a data type that stores different members in the same memory location). The union of two arrays is the set of all their elements taken together, without repetition. If array 1 = { 1,2,3,4,6} Array 2 = {1,2,5,6,7} Then, union of array1 and array 2 is Array1 U array 2 = {1,2,3,4,6} U {1,2,5,6,7} = {1,2,3,4,5,6,7} The logic for union is as follows − for(i=0;i<size1;i++){ uni[j]=a[i]; j++; } for(i=0;i<size2;i++){ uni[j]=b[i]; j++; } The logic for removing repeated elements is as follows − int removerepeated(int size,int a[]){ int i,j,k; for(i=0;i<size;i++){ for(j=i+1;j<size;){ if(a[i]==a[j]){ for(k=j;k<size;k++){ a[k]=a[k+1]; } size--; }else{ j++; } } } return(size); } Following is the C program to perform union operation on two arrays − Live Demo #include<stdio.h> int removerepeated(int size,int a[]); void sort(int size,int a[]); main(){ int i,size1,size2,size,j=0,k; printf("Enter size of an array1\n"); scanf("%d",&size1); printf("Enter size of an array2\n"); scanf("%d",&size2); int a[size1],b[size2],uni[size1+size2]; printf("Enter numbers for array 1\n"); for(i=0;i<size1;i++){ scanf("%d",&a[i]); } printf("Enter numbers for array 2\n"); for(i=0;i<size2;i++){ scanf("%d",&b[i]); } //union start for(i=0;i<size1;i++){ uni[j]=a[i]; j++; } for(i=0;i<size2;i++){ uni[j]=b[i]; j++; } //Sorting sort(size1+size2,uni); //Remove repeated elements size=removerepeated(size1+size2,uni); printf("Array after Union \n"); for(i=0;i<size;i++){ printf("%d\n",uni[i]); } } int removerepeated(int size,int a[]){ int i,j,k; for(i=0;i<size;i++){ for(j=i+1;j<size;){ if(a[i]==a[j]){ for(k=j;k<size;k++){ a[k]=a[k+1]; } size--; }else{ j++; } } } return(size); } void sort(int size,int a[]){ int i,j,temp; for(i=0;i<size;i++){ for(j=i+1;j<size;j++){ if(a[i]>a[j]){ temp=a[i]; a[i]=a[j]; a[j]=temp; } } } } When the above program is executed, it produces the following result − Enter size of an array1 4 Enter size of an array2 3 Enter numbers for array 1 1 2 3 4 Enter numbers for array 2 3 5 6 Array after Union 1 2 3 4 5 6
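Since the merged array is sorted before duplicates are removed, equal values end up next to each other, so a single pass is enough. A hedged alternative to the nested-loop removerepeated() above, keeping the same signature so main() would not need to change:
int removerepeated(int size, int a[]) {
   if (size == 0)
      return 0;
   int i, k = 0;                 /* k is the index of the last kept element */
   for (i = 1; i < size; i++) {
      if (a[i] != a[k]) {        /* new value in the sorted array: keep it */
         a[++k] = a[i];
      }
   }
   return k + 1;                 /* new logical size of the array */
}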
[ { "code": null, "e": 1288, "s": 1062, "text": "A union is a special data type available in C programming language that allows to store different data types in the same memory location. Unions provide an efficient way of using the same memory location for multiple-purpose." }, { "code": null, "e": 1314, "s": 1288, "text": "If array 1 = { 1,2,3,4,6}" }, { "code": null, "e": 1340, "s": 1314, "text": " Array 2 = {1,2,5,6,7}" }, { "code": null, "e": 1377, "s": 1340, "text": "Then, union of array1 and array 2 is" }, { "code": null, "e": 1422, "s": 1377, "text": "Array1 U array 2 = {1,2,3,4,6} U {1,2,5,6,7}" }, { "code": null, "e": 1470, "s": 1422, "text": " = {1,2,3,4,5,6,7}" }, { "code": null, "e": 1526, "s": 1470, "text": "Set of all elements without repetition is called union." }, { "code": null, "e": 1562, "s": 1526, "text": "The logic for union is as follows −" }, { "code": null, "e": 1658, "s": 1562, "text": "for(i=0;i<size1;i++){\n uni[j]=a[i];\n j++;\n}\nfor(i=0;i<size2;i++){\n uni[j]=b[i];\n j++;\n}" }, { "code": null, "e": 1715, "s": 1658, "text": "The logic for removing repeated elements is as follows −" }, { "code": null, "e": 2013, "s": 1715, "text": "int removerepeated(int size,int a[]){\n int i,j,k;\n for(i=0;i<size;i++){\n for(j=i+1;j<size;){\n if(a[i]==a[j]){\n for(k=j;k<size;k++){\n a[k]=a[k+1];\n }\n size--;\n }else{\n j++;\n }\n }\n }\n return(size);\n}" }, { "code": null, "e": 2083, "s": 2013, "text": "Following is the C program to perform union operation on two arrays −" }, { "code": null, "e": 2094, "s": 2083, "text": " Live Demo" }, { "code": null, "e": 3454, "s": 2094, "text": "#include<stdio.h>\nint removerepeated(int size,int a[]);\nvoid sort(int size,int a[]);\nmain(){\n int i,size1,size2,size,j=0,k;\n printf(\"Enter size of an array1\\n\");\n scanf(\"%d\",&size1);\n printf(\"Enter size of an array2\\n\");\n scanf(\"%d\",&size2);\n int a[size1],b[size2],uni[size1+size2];\n printf(\"Enter numbers for array 1\\n\");\n for(i=0;i<size1;i++){\n scanf(\"%d\",&a[i]);\n }\n printf(\"Enter numbers for array 2\\n\");\n for(i=0;i<size2;i++){\n scanf(\"%d\",&b[i]);\n }\n //union start\n for(i=0;i<size1;i++){\n uni[j]=a[i];\n j++;\n }\n for(i=0;i<size2;i++){\n uni[j]=b[i];\n j++;\n }\n //Sorting\n sort(size1+size2,uni);\n //Remove repeated elements\n size=removerepeated(size1+size2,uni);\n printf(\"Array afetr Union \\n\");\n for(i=0;i<size;i++){\n printf(\"%d\\n\",uni[i]);\n }\n //Sorting\n}\nint removerepeated(int size,int a[]){\n int i,j,k;\n for(i=0;i<size;i++){\n for(j=i+1;j<size;){\n if(a[i]==a[j]){\n for(k=j;k<size;k++){\n a[k]=a[k+1];\n }\n size--;\n }else{\n j++;\n }\n }\n }\n return(size);\n}\nvoid sort(int size,int a[]){\n int i,j,temp;\n for(i=0;i<size;i++){\n for(j=i+1;j<size;j++){\n if(a[i]>a[j]){\n temp=a[i];\n a[i]=a[j];\n a[j]=temp;\n }\n }\n }\n}" }, { "code": null, "e": 3525, "s": 3454, "text": "When the above program is executed, it produces the following result −" }, { "code": null, "e": 3673, "s": 3525, "text": "Enter size of an array1\n4\nEnter size of an array2\n3\nEnter numbers for array 1\n1\n2\n3\n4\nEnter numbers for array 2\n3\n5\n6\nArray after Union\n1\n2\n3\n4\n5\n6" } ]
JavaScript - Math sqrt Method
This method returns the square root of a number. If the number is negative, sqrt() returns NaN. Its syntax is as follows − Math.sqrt( x ) ; x − A number Returns the square root of the given number. Try the following example program. <html> <head> <title>JavaScript Math sqrt() Method</title> </head> <body> <script type = "text/javascript"> var value = Math.sqrt( 0.5 ); document.write("First Test Value : " + value ); var value = Math.sqrt( 81 ); document.write("<br />Second Test Value : " + value ); var value = Math.sqrt( 13 ); document.write("<br />Third Test Value : " + value ); var value = Math.sqrt( -4 ); document.write("<br />Fourth Test Value : " + value ); </script> </body> </html> First Test Value : 0.7071067811865476 Second Test Value : 9 Third Test Value : 3.605551275463989 Fourth Test Value : NaN
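A small usage sketch (plain JavaScript; the function name and values are made up for illustration): Math.sqrt() is the natural choice when computing a Euclidean distance by hand.
function distance(x1, y1, x2, y2) {
   var dx = x2 - x1;
   var dy = y2 - y1;
   return Math.sqrt(dx * dx + dy * dy); // square root of the sum of squares
}
console.log(distance(0, 0, 3, 4)); // 5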
[ { "code": null, "e": 2571, "s": 2466, "text": "This method returns the square root of a number. If the value of a number is negative, sqrt returns NaN." }, { "code": null, "e": 2598, "s": 2571, "text": "Its syntax is as follows −" }, { "code": null, "e": 2616, "s": 2598, "text": "Math.sqrt( x ) ;\n" }, { "code": null, "e": 2629, "s": 2616, "text": "x − A number" }, { "code": null, "e": 2672, "s": 2629, "text": "Returns the square root of a given number." }, { "code": null, "e": 2707, "s": 2672, "text": "Try the following example program." }, { "code": null, "e": 3324, "s": 2707, "text": "<html> \n <head>\n <title>JavaScript Math sqrt() Method</title>\n </head>\n \n <body> \n <script type = \"text/javascript\">\n var value = Math.sqrt( 0.5 );\n document.write(\"First Test Value : \" + value );\n \n var value = Math.sqrt( 81 );\n document.write(\"<br />Second Test Value : \" + value ); \n \n var value = Math.sqrt( 13 );\n document.write(\"<br />Third Test Value : \" + value ); \n \n var value = Math.sqrt( -4 );\n document.write(\"<br />Fourth Test Value : \" + value ); \n </script> \n </body>\n</html>" }, { "code": null, "e": 3448, "s": 3324, "text": "First Test Value : 0.7071067811865476\nSecond Test Value : 9\nThird Test Value : 3.605551275463989\nFourth Test Value : NaN \n" }, { "code": null, "e": 3483, "s": 3448, "text": "\n 25 Lectures \n 2.5 hours \n" }, { "code": null, "e": 3497, "s": 3483, "text": " Anadi Sharma" }, { "code": null, "e": 3531, "s": 3497, "text": "\n 74 Lectures \n 10 hours \n" }, { "code": null, "e": 3545, "s": 3531, "text": " Lets Kode It" }, { "code": null, "e": 3580, "s": 3545, "text": "\n 72 Lectures \n 4.5 hours \n" }, { "code": null, "e": 3597, "s": 3580, "text": " Frahaan Hussain" }, { "code": null, "e": 3632, "s": 3597, "text": "\n 70 Lectures \n 4.5 hours \n" }, { "code": null, "e": 3649, "s": 3632, "text": " Frahaan Hussain" }, { "code": null, "e": 3682, "s": 3649, "text": "\n 46 Lectures \n 6 hours \n" }, { "code": null, "e": 3710, "s": 3682, "text": " Eduonix Learning Solutions" }, { "code": null, "e": 3744, "s": 3710, "text": "\n 88 Lectures \n 14 hours \n" }, { "code": null, "e": 3772, "s": 3744, "text": " Eduonix Learning Solutions" }, { "code": null, "e": 3779, "s": 3772, "text": " Print" }, { "code": null, "e": 3790, "s": 3779, "text": " Add Notes" } ]
Using else conditional statement with for loop in python
In this article, we will be learning about loop-else statements in Python 3.x or earlier. In this tutorial, we will focus on how the else statement works with a for loop.
In other languages, the else functionality is only provided in if-else pairs. Python, however, allows us to use the else functionality with for loops as well.
The else block runs only when the loop terminates normally. If the loop is terminated forcefully, the else statement is skipped by the interpreter and is never executed.
Now let's take a quick glance over some illustrations to understand the loop-else statement in a better manner.
 Live Demo
for i in ['T','P']:
   print(i)
else: # Loop else statement
   print("Loop-else statement successfully executed")
T
P
Loop-else statement successfully executed
 Live Demo
for i in ['T','P']:
   print(i)
   break
else: # Loop else statement
   print("Loop-else statement successfully executed")
T
Explanation − The loop-else statement is executed in ILLUSTRATION 1 because the for loop terminates normally after completing its iteration over the list ['T','P']. In ILLUSTRATION 2, the loop-else statement is not executed because the loop is forcefully terminated by the jump statement break.
These illustrations clearly indicate that the loop-else statement is not executed when the loop is terminated forcefully.
Now let's look at an illustration where, under one condition, the loop-else statement is executed and, under another, it is not.
 Live Demo
def pos_nev_test():
   for i in [5,6,7]:
      if i>=0:
         print ("Positive number")
      else:
         print ("Negative number")
         break
   else:
      print ("Loop-else Executed")
# main function
pos_nev_test()
Positive number
Positive number
Positive number
Loop-else Executed
Explanation − Here the else branch of the if-else construct (the one containing break) is never executed, because the if condition is true for every element; the loop therefore completes normally and the loop-else statement is executed.
If we replace the list in the for loop, [5, 6, 7], by [7, -1, 3], then the output changes to
Positive number
Negative number
In this article, we learnt the implementation of the loop-else statement and a variety of ways in which it can be used.
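Beyond the illustrations above, one of the most common practical uses of the for-else construct is a search loop: break ends the loop as soon as a match is found, and the else block becomes the "not found" branch. The short sketch below is an illustrative addition to the article (the find_user helper and the list of names are made-up examples, not part of the original illustrations):
def find_user(users, name):
   for user in users:
      if user == name:
         print(name, "found")
         break   # forceful termination: the loop-else below is skipped
   else:
      # runs only when the loop finishes without hitting break
      print(name, "not found")

find_user(["Tom", "Priya", "Asha"], "Priya")   # prints: Priya found
find_user(["Tom", "Priya", "Asha"], "Zoya")    # prints: Zoya not found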
[ { "code": null, "e": 1232, "s": 1062, "text": "In this article, we will be learning about loop-else statements in Python 3.x. Or earlier. In this tutorial, we will focus on for loop & else statement way of execution." }, { "code": null, "e": 1392, "s": 1232, "text": "In other languages, the else functionality is only provided in if-else pairs. But Python allows us to implement the else functionality with for loops as well ." }, { "code": null, "e": 1600, "s": 1392, "text": "The else functionality is available for use only when the loop terminates normally. In case of forceful termination of loop else statement is overlooked by the interpreter and hence its execution is skipped." }, { "code": null, "e": 1712, "s": 1600, "text": "Now let’s take a quick glance over some illustrations to understand the loop else statement in a better manner." }, { "code": null, "e": 1723, "s": 1712, "text": " Live Demo" }, { "code": null, "e": 1837, "s": 1723, "text": "for i in ['T','P']:\n print(i)\nelse: # Loop else statement\n print(\"Loop-else statement successfully executed\")" }, { "code": null, "e": 1883, "s": 1837, "text": "T\nP\nLoop-else statement successfully executed" }, { "code": null, "e": 1894, "s": 1883, "text": " Live Demo" }, { "code": null, "e": 2017, "s": 1894, "text": "for i in ['T','P']:\n print(i)\n break\nelse: # Loop else statement\n print(\"Loop-else statement successfully executed\")" }, { "code": null, "e": 2019, "s": 2017, "text": "T" }, { "code": null, "e": 2315, "s": 2019, "text": "Explanation − The loop else statement is executed in ILLUSTRATION 1 as the for loop terminates normally after completing its iteration over the list[‘T’,’P’].But in ILLUSTRATION 2 ,the loop-else statement is not executed as the loop is forcefully terminated by using jump statements like break ." }, { "code": null, "e": 2429, "s": 2315, "text": "These ILLUSTRATIONS clearly indicates the loop-else statement is not executed when loop is terminated forcefully." }, { "code": null, "e": 2545, "s": 2429, "text": "Now Let’s look at an illustration wherein some condition the loop-else statement is executed and in some, it’s not." }, { "code": null, "e": 2556, "s": 2545, "text": " Live Demo" }, { "code": null, "e": 2769, "s": 2556, "text": "def pos_nev_test():\n for i in [5,6,7]:\n if i>=0:\n print (\"Positive number\")\n else:\n print (\"Negative number\")\n break\n else:\n print (\"Loop-else Executed\")\n# main function\npos_nev_test()" }, { "code": null, "e": 2836, "s": 2769, "text": "Positive number\nPositive number\nPositive number\nLoop-else Executed" }, { "code": null, "e": 2993, "s": 2836, "text": "Explanation − Here as the else block in the if-else construct is not executed as if the condition evaluates to be true, the Loop-Else statement is executed." }, { "code": null, "e": 3081, "s": 2993, "text": "If we replace the list in for loop [5, 6, 7 ] by [7, -1, 3 ] then the output changes to" }, { "code": null, "e": 3113, "s": 3081, "text": "Positive number\nNegative number" }, { "code": null, "e": 3236, "s": 3113, "text": "In this article, we learnt the implementation of loop-else statement and a variety of ways in which it can be implemented." } ]
Case Study: Breast Cancer Classification Using a Support Vector Machine | by Mahsa Mir | Towards Data Science
In this tutorial, we’re going to create a model to predict whether a patient has a positive breast cancer diagnosis based on several tumor features. The breast cancer database is a publicly available dataset from the UCI Machine learning Repository. It gives information on tumor features such as tumor size, density, and texture. Goal: To create a classification model that looks at predicts if the cancer diagnosis is benign or malignant based on several features. Data used: Kaggle-Breast Cancer Prediction Dataset First, let’s understand our dataset: #import required librariesimport pandas as pdimport numpy as npimport matplotlib.pyplot as plt%matplotlib inlineimport seaborn as sns#import models from scikit learn module:from sklearn.model_selection import train_test_splitfrom sklearn import metricsfrom sklearn.svm import SVC#import Datadf_cancer = pd.read_csv('Breast_cancer_data.csv')df_cancer.head()#get some information about our Data-Setdf_cancer.info()df_cancer.describe()#visualizing datasns.pairplot(df_cancer, hue = 'diagnosis')plt.figure(figsize=(7,7))sns.heatmap(df_cancer['mean_radius mean_texture mean_perimeter mean_area mean_smoothness diagnosis'.split()].corr(), annot=True)sns.scatterplot(x = 'mean_texture', y = 'mean_perimeter', hue = 'diagnosis', data = df_cancer) #visualizing features correlationpalette ={0 : 'orange', 1 : 'blue'}edgecolor = 'grey'fig = plt.figure(figsize=(12,12))plt.subplot(221)ax1 = sns.scatterplot(x = df_cancer['mean_radius'], y = df_cancer['mean_texture'], hue = "diagnosis",data = df_cancer, palette =palette, edgecolor=edgecolor)plt.title('mean_radius vs mean_texture')plt.subplot(222)ax2 = sns.scatterplot(x = df_cancer['mean_radius'], y = df_cancer['mean_perimeter'], hue = "diagnosis",data = df_cancer, palette =palette, edgecolor=edgecolor)plt.title('mean_radius vs mean_perimeter')plt.subplot(223)ax3 = sns.scatterplot(x = df_cancer['mean_radius'], y = df_cancer['mean_area'], hue = "diagnosis",data = df_cancer, palette =palette, edgecolor=edgecolor)plt.title('mean_radius vs mean_area')plt.subplot(224)ax4 = sns.scatterplot(x = df_cancer['mean_radius'], y = df_cancer['mean_smoothness'], hue = "diagnosis",data = df_cancer, palette =palette, edgecolor=edgecolor)plt.title('mean_radius vs mean_smoothness')fig.suptitle('Features Correlation', fontsize = 20)plt.savefig('2')plt.show() Before applying any method, we need to check if any values are missing and then deal with them if so. In this dataset, there are no missing values — but always keep the habit of checking for null values in a dataset! Since machine learning models are based on mathematical equations, we need to encode the categorical variables. Here I used label encoding since we have two distinct values in the “diagnosis” column: #check how many values are missing (NaN)here we do not have any missing valuesdf_cancer.isnull().sum()#handling categorical datadf_cancer['diagnosis'].unique()df_cancer['diagnosis'] = df_cancer['diagnosis'].map({'benign':0,'malignant':1})df_cancer.head() Let’s keep climbing our dataset: #visualizing diagnosis column >>> 'benign':0,'malignant':1sns.countplot(x='diagnosis',data = df_cancer)plt.title('number of Benign_0 vs Malignan_1')# correlation between featuresdf_cancer.corr()['diagnosis'][:-1].sort_values().plot(kind ='bar')plt.title('Corr. between features and target') Data is divided into the Train set and Test set. We use the Train set to make the algorithm learn the data’s behavior and then check the accuracy of our model on the Test set. 
Features (X): The columns that are inserted into our model will be used to make predictions. Prediction (y): Target variable that will be predicted by the features. #define X variables and our target(y)X = df_cancer.drop(['diagnosis'],axis=1).valuesy = df_cancer['diagnosis'].values#split Train and Testfrom sklearn.model_selection import train_test_splitX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=101) Support Vector Machine (SVM) is one of the most useful supervised ML algorithms. It can be used for both classification and regression tasks. There are a couple of concepts we first need to understand: What is the SVM Job? SVM chooses the hyperplane that does maximum separation between classes. What are hard and soft margins? If data can be linearly separable, SVM might return maximum accuracy (Hard Margin). When data is not linearly separable, all we need do is relax the margin to allow misclassifications (Soft Margin). What is Hyper-parameter C? The number of misclassifications errors can be controlled using the C parameter, which has a direct effect on the hyperplane. What is Hyper-parameter gamma? Gamma is used to give weightage to points close to support vector. In other words, changing the value of gamma would change the shape of the hyperplane. What is Kernel Trick? if our data is not linearly separable, we could apply a “Kernel Trick” method which maps the nonlinear data to higher dimensional space. Now let’s get back to our code! #Support Vector Classification modelfrom sklearn.svm import SVCsvc_model = SVC()svc_model.fit(X_train, y_train) from sklearn.metrics import classification_report, confusion_matrixy_predict = svc_model.predict(X_test)cm = confusion_matrix(y_test, y_predict)sns.heatmap(cm, annot=True) What does the confusion_matrix information result mean?: We had 143 women in our test set. Out of 55 women predicted to not have breast cancer, two were classified as not having when actually they had (type one error). Out of 88 women predicted to have breast cancer, 14 were classified as having breast cancer whey they did not (type two error). What does this classification report result mean? Basically it means that the SVM Model was able to classify tumors into malignant and benign with 89% accuracy. Note: Precision is the fraction of relevant results. Recall is the fraction of all relevant results that were correctly classified. F1-score is the harmonic mean between precision and recall that ranges between 0 (terrible) to 1 (perfection). 
Feature scaling will help us to see all the variables from the same lens (same scale), in this way we will bring all values into the range [0,1]: #normalized scaler - fit&transform on train, fit only on testfrom sklearn.preprocessing import MinMaxScalern_scaler = MinMaxScaler()X_train_scaled = n_scaler.fit_transform(X_train.astype(np.float))X_test_scaled = n_scaler.transform(X_test.astype(np.float))#Support Vector Classification model - apply on scaled datafrom sklearn.svm import SVCsvc_model = SVC()svc_model.fit(X_train_scaled, y_train)from sklearn.metrics import classification_report, confusion_matrixy_predict_scaled = svc_model.predict(X_test_scaled)cm = confusion_matrix(y_test, y_predict_scaled)sns.heatmap(cm, annot=True)print(classification_report(y_test, y_predict_scaled)) What does the confusion matrix information result mean?: We had 143 women in our test set Out of 55 women predicted to not have breast cancer, 4 women were classified as not having when actually they had (Type 1 error) out of 88 women predicted to have breast cancer, 7 were classified as having breast cancer whey they did not (Type 2 error) What does this classification report result mean? Basically, it means that the SVM model was able to classify tumors into malignant/benign with 92% accuracy. C parameter — as we said, it controls the cost of misclassification on Train data. Smaller C: Lower variance but higher bias (soft margin) and reduce the cost of miss-classification (less penalty). Larger C: Lower bias and higher variance (hard margin) and increase the cost of miss-classification (more strict). Gamma:Smaller Gamma: Large variance, far reach, and more generalized solution.Larger Gamma: High variance and low bias, close reach, and also closer data points have a higher weight. So, let’s find the optimal parameters for our model using grid search: #find best hyper parametersfrom sklearn.model_selection import GridSearchCVparam_grid = {'C':[0.1,1,10,100,1000],'gamma':[1,0.1,0.01,0.001,0.001], 'kernel':['rbf']}grid = GridSearchCV(SVC(),param_grid,verbose = 4)grid.fit(X_train_scaled,y_train)grid.best_params_grid.best_estimator_grid_predictions = grid.predict(X_test_scaled)cmG = confusion_matrix(y_test,grid_predictions)sns.heatmap(cmG, annot=True)print(classification_report(y_test,grid_predictions)) As you can see in this case the last model improvement did not yield the percentage of accuracy. However, we were succeed to decrease an error type II. I hope this has helped you to understand the topic better. Any feedback is welcome as it allows me to get new insights and correct any mistakes!
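As a compact recap of the workflow walked through above (train/test split, Min-Max scaling, an RBF-kernel SVC and a grid search over C and gamma), the sketch below strings the same steps together in one place using a scikit-learn Pipeline. It is a minimal outline rather than the author's original notebook: the file name and label mapping mirror the code above, but the exact parameter grid and the 5-fold cross-validation setting are assumptions made for illustration.
# Minimal end-to-end sketch of the pipeline described above (grid values and cv=5 are assumed)
import pandas as pd
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
from sklearn.metrics import classification_report

df = pd.read_csv('Breast_cancer_data.csv')                 # same dataset as above
# As in the article: encode the labels if they are strings ('benign'/'malignant' -> 0/1)
if df['diagnosis'].dtype == object:
    df['diagnosis'] = df['diagnosis'].map({'benign': 0, 'malignant': 1})

X = df.drop(['diagnosis'], axis=1).values
y = df['diagnosis'].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=101)

# Putting the scaler and the classifier in one Pipeline keeps the scaler fitted on training folds only
model = Pipeline([('scale', MinMaxScaler()), ('svc', SVC(kernel='rbf'))])
param_grid = {'svc__C': [0.1, 1, 10, 100, 1000],
              'svc__gamma': [1, 0.1, 0.01, 0.001]}
grid = GridSearchCV(model, param_grid, cv=5)
grid.fit(X_train, y_train)

print(grid.best_params_)
print(classification_report(y_test, grid.predict(X_test)))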
[ { "code": null, "e": 320, "s": 171, "text": "In this tutorial, we’re going to create a model to predict whether a patient has a positive breast cancer diagnosis based on several tumor features." }, { "code": null, "e": 502, "s": 320, "text": "The breast cancer database is a publicly available dataset from the UCI Machine learning Repository. It gives information on tumor features such as tumor size, density, and texture." }, { "code": null, "e": 638, "s": 502, "text": "Goal: To create a classification model that looks at predicts if the cancer diagnosis is benign or malignant based on several features." }, { "code": null, "e": 689, "s": 638, "text": "Data used: Kaggle-Breast Cancer Prediction Dataset" }, { "code": null, "e": 726, "s": 689, "text": "First, let’s understand our dataset:" }, { "code": null, "e": 1465, "s": 726, "text": "#import required librariesimport pandas as pdimport numpy as npimport matplotlib.pyplot as plt%matplotlib inlineimport seaborn as sns#import models from scikit learn module:from sklearn.model_selection import train_test_splitfrom sklearn import metricsfrom sklearn.svm import SVC#import Datadf_cancer = pd.read_csv('Breast_cancer_data.csv')df_cancer.head()#get some information about our Data-Setdf_cancer.info()df_cancer.describe()#visualizing datasns.pairplot(df_cancer, hue = 'diagnosis')plt.figure(figsize=(7,7))sns.heatmap(df_cancer['mean_radius mean_texture mean_perimeter mean_area mean_smoothness diagnosis'.split()].corr(), annot=True)sns.scatterplot(x = 'mean_texture', y = 'mean_perimeter', hue = 'diagnosis', data = df_cancer)" }, { "code": null, "e": 2518, "s": 1465, "text": "#visualizing features correlationpalette ={0 : 'orange', 1 : 'blue'}edgecolor = 'grey'fig = plt.figure(figsize=(12,12))plt.subplot(221)ax1 = sns.scatterplot(x = df_cancer['mean_radius'], y = df_cancer['mean_texture'], hue = \"diagnosis\",data = df_cancer, palette =palette, edgecolor=edgecolor)plt.title('mean_radius vs mean_texture')plt.subplot(222)ax2 = sns.scatterplot(x = df_cancer['mean_radius'], y = df_cancer['mean_perimeter'], hue = \"diagnosis\",data = df_cancer, palette =palette, edgecolor=edgecolor)plt.title('mean_radius vs mean_perimeter')plt.subplot(223)ax3 = sns.scatterplot(x = df_cancer['mean_radius'], y = df_cancer['mean_area'], hue = \"diagnosis\",data = df_cancer, palette =palette, edgecolor=edgecolor)plt.title('mean_radius vs mean_area')plt.subplot(224)ax4 = sns.scatterplot(x = df_cancer['mean_radius'], y = df_cancer['mean_smoothness'], hue = \"diagnosis\",data = df_cancer, palette =palette, edgecolor=edgecolor)plt.title('mean_radius vs mean_smoothness')fig.suptitle('Features Correlation', fontsize = 20)plt.savefig('2')plt.show()" }, { "code": null, "e": 2735, "s": 2518, "text": "Before applying any method, we need to check if any values are missing and then deal with them if so. In this dataset, there are no missing values — but always keep the habit of checking for null values in a dataset!" }, { "code": null, "e": 2935, "s": 2735, "text": "Since machine learning models are based on mathematical equations, we need to encode the categorical variables. 
Here I used label encoding since we have two distinct values in the “diagnosis” column:" }, { "code": null, "e": 3190, "s": 2935, "text": "#check how many values are missing (NaN)here we do not have any missing valuesdf_cancer.isnull().sum()#handling categorical datadf_cancer['diagnosis'].unique()df_cancer['diagnosis'] = df_cancer['diagnosis'].map({'benign':0,'malignant':1})df_cancer.head()" }, { "code": null, "e": 3223, "s": 3190, "text": "Let’s keep climbing our dataset:" }, { "code": null, "e": 3514, "s": 3223, "text": "#visualizing diagnosis column >>> 'benign':0,'malignant':1sns.countplot(x='diagnosis',data = df_cancer)plt.title('number of Benign_0 vs Malignan_1')# correlation between featuresdf_cancer.corr()['diagnosis'][:-1].sort_values().plot(kind ='bar')plt.title('Corr. between features and target')" }, { "code": null, "e": 3690, "s": 3514, "text": "Data is divided into the Train set and Test set. We use the Train set to make the algorithm learn the data’s behavior and then check the accuracy of our model on the Test set." }, { "code": null, "e": 3783, "s": 3690, "text": "Features (X): The columns that are inserted into our model will be used to make predictions." }, { "code": null, "e": 3855, "s": 3783, "text": "Prediction (y): Target variable that will be predicted by the features." }, { "code": null, "e": 4137, "s": 3855, "text": "#define X variables and our target(y)X = df_cancer.drop(['diagnosis'],axis=1).valuesy = df_cancer['diagnosis'].values#split Train and Testfrom sklearn.model_selection import train_test_splitX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=101)" }, { "code": null, "e": 4279, "s": 4137, "text": "Support Vector Machine (SVM) is one of the most useful supervised ML algorithms. It can be used for both classification and regression tasks." }, { "code": null, "e": 4339, "s": 4279, "text": "There are a couple of concepts we first need to understand:" }, { "code": null, "e": 4433, "s": 4339, "text": "What is the SVM Job? SVM chooses the hyperplane that does maximum separation between classes." }, { "code": null, "e": 4664, "s": 4433, "text": "What are hard and soft margins? If data can be linearly separable, SVM might return maximum accuracy (Hard Margin). When data is not linearly separable, all we need do is relax the margin to allow misclassifications (Soft Margin)." }, { "code": null, "e": 4817, "s": 4664, "text": "What is Hyper-parameter C? The number of misclassifications errors can be controlled using the C parameter, which has a direct effect on the hyperplane." }, { "code": null, "e": 5001, "s": 4817, "text": "What is Hyper-parameter gamma? Gamma is used to give weightage to points close to support vector. In other words, changing the value of gamma would change the shape of the hyperplane." }, { "code": null, "e": 5160, "s": 5001, "text": "What is Kernel Trick? if our data is not linearly separable, we could apply a “Kernel Trick” method which maps the nonlinear data to higher dimensional space." }, { "code": null, "e": 5192, "s": 5160, "text": "Now let’s get back to our code!" 
}, { "code": null, "e": 5304, "s": 5192, "text": "#Support Vector Classification modelfrom sklearn.svm import SVCsvc_model = SVC()svc_model.fit(X_train, y_train)" }, { "code": null, "e": 5476, "s": 5304, "text": "from sklearn.metrics import classification_report, confusion_matrixy_predict = svc_model.predict(X_test)cm = confusion_matrix(y_test, y_predict)sns.heatmap(cm, annot=True)" }, { "code": null, "e": 5533, "s": 5476, "text": "What does the confusion_matrix information result mean?:" }, { "code": null, "e": 5567, "s": 5533, "text": "We had 143 women in our test set." }, { "code": null, "e": 5695, "s": 5567, "text": "Out of 55 women predicted to not have breast cancer, two were classified as not having when actually they had (type one error)." }, { "code": null, "e": 5823, "s": 5695, "text": "Out of 88 women predicted to have breast cancer, 14 were classified as having breast cancer whey they did not (type two error)." }, { "code": null, "e": 5984, "s": 5823, "text": "What does this classification report result mean? Basically it means that the SVM Model was able to classify tumors into malignant and benign with 89% accuracy." }, { "code": null, "e": 5990, "s": 5984, "text": "Note:" }, { "code": null, "e": 6037, "s": 5990, "text": "Precision is the fraction of relevant results." }, { "code": null, "e": 6116, "s": 6037, "text": "Recall is the fraction of all relevant results that were correctly classified." }, { "code": null, "e": 6227, "s": 6116, "text": "F1-score is the harmonic mean between precision and recall that ranges between 0 (terrible) to 1 (perfection)." }, { "code": null, "e": 6373, "s": 6227, "text": "Feature scaling will help us to see all the variables from the same lens (same scale), in this way we will bring all values into the range [0,1]:" }, { "code": null, "e": 7018, "s": 6373, "text": "#normalized scaler - fit&transform on train, fit only on testfrom sklearn.preprocessing import MinMaxScalern_scaler = MinMaxScaler()X_train_scaled = n_scaler.fit_transform(X_train.astype(np.float))X_test_scaled = n_scaler.transform(X_test.astype(np.float))#Support Vector Classification model - apply on scaled datafrom sklearn.svm import SVCsvc_model = SVC()svc_model.fit(X_train_scaled, y_train)from sklearn.metrics import classification_report, confusion_matrixy_predict_scaled = svc_model.predict(X_test_scaled)cm = confusion_matrix(y_test, y_predict_scaled)sns.heatmap(cm, annot=True)print(classification_report(y_test, y_predict_scaled))" }, { "code": null, "e": 7075, "s": 7018, "text": "What does the confusion matrix information result mean?:" }, { "code": null, "e": 7108, "s": 7075, "text": "We had 143 women in our test set" }, { "code": null, "e": 7237, "s": 7108, "text": "Out of 55 women predicted to not have breast cancer, 4 women were classified as not having when actually they had (Type 1 error)" }, { "code": null, "e": 7361, "s": 7237, "text": "out of 88 women predicted to have breast cancer, 7 were classified as having breast cancer whey they did not (Type 2 error)" }, { "code": null, "e": 7519, "s": 7361, "text": "What does this classification report result mean? Basically, it means that the SVM model was able to classify tumors into malignant/benign with 92% accuracy." }, { "code": null, "e": 7602, "s": 7519, "text": "C parameter — as we said, it controls the cost of misclassification on Train data." }, { "code": null, "e": 7717, "s": 7602, "text": "Smaller C: Lower variance but higher bias (soft margin) and reduce the cost of miss-classification (less penalty)." 
}, { "code": null, "e": 7832, "s": 7717, "text": "Larger C: Lower bias and higher variance (hard margin) and increase the cost of miss-classification (more strict)." }, { "code": null, "e": 8015, "s": 7832, "text": "Gamma:Smaller Gamma: Large variance, far reach, and more generalized solution.Larger Gamma: High variance and low bias, close reach, and also closer data points have a higher weight." }, { "code": null, "e": 8086, "s": 8015, "text": "So, let’s find the optimal parameters for our model using grid search:" }, { "code": null, "e": 8543, "s": 8086, "text": "#find best hyper parametersfrom sklearn.model_selection import GridSearchCVparam_grid = {'C':[0.1,1,10,100,1000],'gamma':[1,0.1,0.01,0.001,0.001], 'kernel':['rbf']}grid = GridSearchCV(SVC(),param_grid,verbose = 4)grid.fit(X_train_scaled,y_train)grid.best_params_grid.best_estimator_grid_predictions = grid.predict(X_test_scaled)cmG = confusion_matrix(y_test,grid_predictions)sns.heatmap(cmG, annot=True)print(classification_report(y_test,grid_predictions))" }, { "code": null, "e": 8695, "s": 8543, "text": "As you can see in this case the last model improvement did not yield the percentage of accuracy. However, we were succeed to decrease an error type II." } ]
SharePoint - Feature\Event Receiver
In this chapter, we will learn to add code handle. Code handles are events that are raised when a Feature is activated or deactivated. In other words, we will be examining Feature Receivers. The Visual Studio project that we created in the last chapter had one Feature and when it was activated, it provisioned our Contacts list, our SitePage, and the link to the SitePage. However, when the Feature is deactivated, SharePoint only removes the link, the SitePage and the Contacts list still remain. We can write the code when the Feature is deactivated to remove the list and the page, if we want to. In this chapter, we will learn how to remove content and elements, when a Feature is deactivated. To handle the events for a Feature, we need a Feature Receiver. Step 1 − To get Feature receiver, right-click on the Feature in the Solution Explorer and then choose Add Event Receiver. using System; using System.Runtime.InteropServices; using System.Security.Permissions; using Microsoft.SharePoint; namespace FeaturesAndElements.Features.Sample { /// <summary> /// This class handles events raised during feature activation, deactivation, installation, uninstallation, and upgrade. /// </summary> /// <remarks> /// The GUID attached to this class may be used during packaging and should not be modified. /// </remarks> [Guid("e873932c-d514-46f9-9d17-320bd3fbcb86")] public class SampleEventReceiver : SPFeatureReceiver { // Uncomment the method below to handle the event raised after a feature has been activated. //public override void FeatureActivated(SPFeatureReceiverProperties properties)//{ // } // Uncomment the method below to handle the event raised before a feature is deactivated. //public override void FeatureDeactivating(SPFeatureReceiverProperties properties)// { // } // Uncomment the method below to handle the event raised after a feature has been installed. //public override void FeatureInstalled(SPFeatureReceiverProperties properties)// { // } // Uncomment the method below to handle the event raised before a feature is uninstalled. //public override void FeatureUninstalling(SPFeatureReceiverProperties properties)// { // } // Uncomment the method below to handle the event raised when a feature is upgrading. //public override void FeatureUpgrading(SPFeatureReceiverProperties properties, string upgradeActionName, System.Collections.Generic.IDictionary<string, string> parameters) // { // } } } You can see what we get is a class that inherits from SPFeatureReceiver. In SharePoint, there are different classes for different kinds of events you can handle. For example, events on lists, events on list items, events on sites. You can create a class that is derived from a specific event receiver and then you can override methods inside of that class to handle the events. The Events of a Feature are used when it is being − Activated Deactivated Installed Uninstalled Upgrading Next, you need to attach that class as the event handler for the specific item. For example, if there is an event handler that handles list events, you need to attach that class to the list. Therefore, we will handle two Features − When the feature is activated and When the feature is activated and When it is being deactivated. When it is being deactivated. 
Step 2 − We will implement the FeatureActivated and FeatureDeactivated methods as shown below − using System; using System.Runtime.InteropServices; using System.Security.Permissions; using Microsoft.SharePoint; namespace FeaturesAndElements.Features.Sample { /// <summary> /// This class handles events raised during feature activation, deactivation, installation, uninstallation, and upgrade. /// </summary> /// <remarks> /// The GUID attached to this class may be used during packaging and should not be modified. /// </remarks> [Guid("e873932c-d514-46f9-9d17-320bd3fbcb86")] public class SampleEventReceiver : SPFeatureReceiver { private const string listName = "Announcements"; public override void FeatureActivated(SPFeatureReceiverProperties properties) { var web = properties.Feature.Parent as SPWeb; if (web == null) return; var list = web.Lists.TryGetList(listName); if (list != null) return; var listId = web.Lists.Add(listName, string.Empty, SPListTemplateType.Announcements); list = web.Lists[listId]; list.OnQuickLaunch = true; list.Update(); } public override void FeatureDeactivating(SPFeatureReceiverProperties properties) { var web = properties.Feature.Parent as SPWeb; if (web == null) return; var list = web.Lists.TryGetList(listName); if (list == null) return; if (list.ItemCount == 0) { list.Delete(); } } } } Note − When the feature is activated, we will create an Announcements list. When the feature is activated, we will create an Announcements list. When the feature is deactivated, we will check to see if the Announcements list is empty and if it is, we will delete it. When the feature is deactivated, we will check to see if the Announcements list is empty and if it is, we will delete it. Step 3 − Now right-click on the Project and choose deploy. You will see the following Deployment Conflict warning. Visual Studio is telling us that we are trying to create a list called contacts, but there is already a list in the site called Contacts. It is asking us if we want to overwrite the existing list, and in this case click Resolve. Step 4 − Go back to SharePoint and then refresh your site and go to Site Actions → Site settings → Manage site features → Sample feature. You can see that there are no announcements list in the left pane. Step 5 − Let us Activate Sample feature and you will see the Announcements list, but it is empty right now. Note − If you deactivate your Sample Feature then you will notice that the Announcements list goes away. Step 6 − Let us reactivate the feature. Go to Announcements and then Add a new announcement. We will call this Test and then click Save. You will see the Test file under Announcements. Now when you Deactivate Announcements, you will see that the Announcements list stays because it was not empty. 13 Lectures 3 hours Darwish 124 Lectures 6.5 hours JM Ekhteyari 44 Lectures 3.5 hours Simon Sez IT 23 Lectures 1.5 hours Sonic Performance Print Add Notes Bookmark this page
[ { "code": null, "e": 2506, "s": 2315, "text": "In this chapter, we will learn to add code handle. Code handles are events that are raised when a Feature is activated or deactivated. In other words, we will be examining Feature Receivers." }, { "code": null, "e": 2689, "s": 2506, "text": "The Visual Studio project that we created in the last chapter had one Feature and when it was activated, it provisioned our Contacts list, our SitePage, and the link to the SitePage." }, { "code": null, "e": 2814, "s": 2689, "text": "However, when the Feature is deactivated, SharePoint only removes the link, the SitePage and the Contacts list still remain." }, { "code": null, "e": 3014, "s": 2814, "text": "We can write the code when the Feature is deactivated to remove the list and the page, if we want to. In this chapter, we will learn how to remove content and elements, when a Feature is deactivated." }, { "code": null, "e": 3078, "s": 3014, "text": "To handle the events for a Feature, we need a Feature Receiver." }, { "code": null, "e": 3200, "s": 3078, "text": "Step 1 − To get Feature receiver, right-click on the Feature in the Solution Explorer and then choose Add Event Receiver." }, { "code": null, "e": 4936, "s": 3200, "text": "using System;\nusing System.Runtime.InteropServices;\nusing System.Security.Permissions;\nusing Microsoft.SharePoint;\n\nnamespace FeaturesAndElements.Features.Sample {\n /// <summary>\n /// This class handles events raised during feature activation, deactivation,\n installation, uninstallation, and upgrade.\n /// </summary>\n /// <remarks>\n /// The GUID attached to this class may be used during packaging and should not be modified.\n /// </remarks>\n [Guid(\"e873932c-d514-46f9-9d17-320bd3fbcb86\")]\n \n public class SampleEventReceiver : SPFeatureReceiver {\n // Uncomment the method below to handle the event raised after a feature has been activated.\n //public override void FeatureActivated(SPFeatureReceiverProperties properties)//{\n //\n }\n // Uncomment the method below to handle the event raised before a feature is deactivated.\n //public override void FeatureDeactivating(SPFeatureReceiverProperties properties)// {\n //\n }\n // Uncomment the method below to handle the event raised after a feature has been installed.\n //public override void FeatureInstalled(SPFeatureReceiverProperties properties)// {\n //\n }\n // Uncomment the method below to handle the event raised before a feature is uninstalled.\n //public override void FeatureUninstalling(SPFeatureReceiverProperties properties)// {\n //\n }\n // Uncomment the method below to handle the event raised when a feature is upgrading.\n //public override void FeatureUpgrading(SPFeatureReceiverProperties\n properties, string upgradeActionName,\n System.Collections.Generic.IDictionary<string, string> parameters) // {\n //\n }\n }\n}" }, { "code": null, "e": 5009, "s": 4936, "text": "You can see what we get is a class that inherits from SPFeatureReceiver." }, { "code": null, "e": 5314, "s": 5009, "text": "In SharePoint, there are different classes for different kinds of events you can handle. For example, events on lists, events on list items, events on sites. You can create a class that is derived from a specific event receiver and then you can override methods inside of that class to handle the events." 
}, { "code": null, "e": 5366, "s": 5314, "text": "The Events of a Feature are used when it is being −" }, { "code": null, "e": 5376, "s": 5366, "text": "Activated" }, { "code": null, "e": 5388, "s": 5376, "text": "Deactivated" }, { "code": null, "e": 5398, "s": 5388, "text": "Installed" }, { "code": null, "e": 5410, "s": 5398, "text": "Uninstalled" }, { "code": null, "e": 5420, "s": 5410, "text": "Upgrading" }, { "code": null, "e": 5611, "s": 5420, "text": "Next, you need to attach that class as the event handler for the specific item. For example, if there is an event handler that handles list events, you need to attach that class to the list." }, { "code": null, "e": 5652, "s": 5611, "text": "Therefore, we will handle two Features −" }, { "code": null, "e": 5686, "s": 5652, "text": "When the feature is activated and" }, { "code": null, "e": 5720, "s": 5686, "text": "When the feature is activated and" }, { "code": null, "e": 5750, "s": 5720, "text": "When it is being deactivated." }, { "code": null, "e": 5780, "s": 5750, "text": "When it is being deactivated." }, { "code": null, "e": 5876, "s": 5780, "text": "Step 2 − We will implement the FeatureActivated and FeatureDeactivated methods as shown below −" }, { "code": null, "e": 7388, "s": 5876, "text": "using System;\nusing System.Runtime.InteropServices;\nusing System.Security.Permissions;\nusing Microsoft.SharePoint;\n\nnamespace FeaturesAndElements.Features.Sample {\n /// <summary>\n /// This class handles events raised during feature activation, deactivation,\n installation, uninstallation, and upgrade.\n /// </summary>\n /// <remarks>\n /// The GUID attached to this class may be used during packaging and should\n not be modified.\n /// </remarks>\n\n [Guid(\"e873932c-d514-46f9-9d17-320bd3fbcb86\")]\n public class SampleEventReceiver : SPFeatureReceiver {\n private const string listName = \"Announcements\";\n \n public override void FeatureActivated(SPFeatureReceiverProperties properties) {\n var web = properties.Feature.Parent as SPWeb;\n \n if (web == null) return;\n var list = web.Lists.TryGetList(listName);\n \n if (list != null) return;\n var listId = web.Lists.Add(listName, string.Empty,\n SPListTemplateType.Announcements);\n list = web.Lists[listId];\n list.OnQuickLaunch = true;\n list.Update();\n }\n public override void FeatureDeactivating(SPFeatureReceiverProperties properties) {\n var web = properties.Feature.Parent as SPWeb;\n \n if (web == null) return;\n var list = web.Lists.TryGetList(listName);\n \n if (list == null) return;\n if (list.ItemCount == 0) {\n list.Delete();\n }\n }\n }\n}" }, { "code": null, "e": 7395, "s": 7388, "text": "Note −" }, { "code": null, "e": 7464, "s": 7395, "text": "When the feature is activated, we will create an Announcements list." }, { "code": null, "e": 7533, "s": 7464, "text": "When the feature is activated, we will create an Announcements list." }, { "code": null, "e": 7655, "s": 7533, "text": "When the feature is deactivated, we will check to see if the Announcements list is empty and if it is, we will delete it." }, { "code": null, "e": 7777, "s": 7655, "text": "When the feature is deactivated, we will check to see if the Announcements list is empty and if it is, we will delete it." }, { "code": null, "e": 7892, "s": 7777, "text": "Step 3 − Now right-click on the Project and choose deploy. You will see the following Deployment Conflict warning." 
}, { "code": null, "e": 8121, "s": 7892, "text": "Visual Studio is telling us that we are trying to create a list called contacts, but there is already a list in the site called Contacts. It is asking us if we want to overwrite the existing list, and in this case click Resolve." }, { "code": null, "e": 8259, "s": 8121, "text": "Step 4 − Go back to SharePoint and then refresh your site and go to Site Actions → Site settings → Manage site features → Sample feature." }, { "code": null, "e": 8326, "s": 8259, "text": "You can see that there are no announcements list in the left pane." }, { "code": null, "e": 8434, "s": 8326, "text": "Step 5 − Let us Activate Sample feature and you will see the Announcements list, but it is empty right now." }, { "code": null, "e": 8539, "s": 8434, "text": "Note − If you deactivate your Sample Feature then you will notice that the Announcements list goes away." }, { "code": null, "e": 8676, "s": 8539, "text": "Step 6 − Let us reactivate the feature. Go to Announcements and then Add a new announcement. We will call this Test and then click Save." }, { "code": null, "e": 8724, "s": 8676, "text": "You will see the Test file under Announcements." }, { "code": null, "e": 8836, "s": 8724, "text": "Now when you Deactivate Announcements, you will see that the Announcements list stays because it was not empty." }, { "code": null, "e": 8869, "s": 8836, "text": "\n 13 Lectures \n 3 hours \n" }, { "code": null, "e": 8878, "s": 8869, "text": " Darwish" }, { "code": null, "e": 8914, "s": 8878, "text": "\n 124 Lectures \n 6.5 hours \n" }, { "code": null, "e": 8928, "s": 8914, "text": " JM Ekhteyari" }, { "code": null, "e": 8963, "s": 8928, "text": "\n 44 Lectures \n 3.5 hours \n" }, { "code": null, "e": 8977, "s": 8963, "text": " Simon Sez IT" }, { "code": null, "e": 9012, "s": 8977, "text": "\n 23 Lectures \n 1.5 hours \n" }, { "code": null, "e": 9031, "s": 9012, "text": " Sonic Performance" }, { "code": null, "e": 9038, "s": 9031, "text": " Print" }, { "code": null, "e": 9049, "s": 9038, "text": " Add Notes" } ]
ChronoZonedDateTime format() method in Java with Examples - GeeksforGeeks
28 May, 2019 The format() method of ChronoZonedDateTime interface in Java is used to format this date-time using the specified formatter passed as parameter.This date-time will be passed to the formatter to produce a string. Syntax: default String format(DateTimeFormatter formatter) Parameters: This method accepts a single parameter formatter which represents the formatter to use. This is a mandatory parameter and should not be NULL. Return value: This method returns a String represents the formatted date-time string. Exception: This method throws a DateTimeException if an error occurs during printing. Below programs illustrate the format() method:Program 1: // Java program to demonstrate// ChronoZonedDateTime.format() method import java.time.*;import java.time.chrono.*;import java.time.format.*; public class GFG { public static void main(String[] args) { // create ChronoZonedDateTime objects ChronoZonedDateTime zoneddatetime = ZonedDateTime.parse( "2018-12-06T19:21:12.123+05:30[Asia/Calcutta]"); // create a formatter DateTimeFormatter formatter = DateTimeFormatter.ISO_TIME; // apply format() String value = zoneddatetime.format(formatter); // print result System.out.println("Result: " + value); }} Result: 19:21:12.123+05:30 Program 2: // Java program to demonstrate// ChronoZonedDateTime.format() method import java.time.*;import java.time.chrono.*;import java.time.format.*; public class GFG { public static void main(String[] args) { // create ChronoZonedDateTime objects ChronoZonedDateTime zoneddatetime = ZonedDateTime.parse( "2018-10-25T23:12:31.123+02:00[Europe/Paris]"); // create a formatter DateTimeFormatter formatter = DateTimeFormatter.BASIC_ISO_DATE; // apply format() String value = zoneddatetime.format(formatter); // print result System.out.println("Result: " + value); }} Result: 20181025+0200 Reference: https://docs.oracle.com/javase/9/docs/api/java/time/chrono/ChronoZonedDateTime.html#format-java.time.format.DateTimeFormatter- Java-ChronoZonedDateTime Java-Functions Java-Time-Chrono package Java Java Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. Stream In Java Constructors in Java Different ways of Reading a text file in Java Exceptions in Java Functional Interfaces in Java Generics in Java Comparator Interface in Java with Examples Introduction to Java HashMap get() Method in Java Strings in Java
[ { "code": null, "e": 23948, "s": 23920, "text": "\n28 May, 2019" }, { "code": null, "e": 24160, "s": 23948, "text": "The format() method of ChronoZonedDateTime interface in Java is used to format this date-time using the specified formatter passed as parameter.This date-time will be passed to the formatter to produce a string." }, { "code": null, "e": 24168, "s": 24160, "text": "Syntax:" }, { "code": null, "e": 24220, "s": 24168, "text": "default String format(DateTimeFormatter formatter)\n" }, { "code": null, "e": 24374, "s": 24220, "text": "Parameters: This method accepts a single parameter formatter which represents the formatter to use. This is a mandatory parameter and should not be NULL." }, { "code": null, "e": 24460, "s": 24374, "text": "Return value: This method returns a String represents the formatted date-time string." }, { "code": null, "e": 24546, "s": 24460, "text": "Exception: This method throws a DateTimeException if an error occurs during printing." }, { "code": null, "e": 24603, "s": 24546, "text": "Below programs illustrate the format() method:Program 1:" }, { "code": "// Java program to demonstrate// ChronoZonedDateTime.format() method import java.time.*;import java.time.chrono.*;import java.time.format.*; public class GFG { public static void main(String[] args) { // create ChronoZonedDateTime objects ChronoZonedDateTime zoneddatetime = ZonedDateTime.parse( \"2018-12-06T19:21:12.123+05:30[Asia/Calcutta]\"); // create a formatter DateTimeFormatter formatter = DateTimeFormatter.ISO_TIME; // apply format() String value = zoneddatetime.format(formatter); // print result System.out.println(\"Result: \" + value); }}", "e": 25276, "s": 24603, "text": null }, { "code": null, "e": 25304, "s": 25276, "text": "Result: 19:21:12.123+05:30\n" }, { "code": null, "e": 25315, "s": 25304, "text": "Program 2:" }, { "code": "// Java program to demonstrate// ChronoZonedDateTime.format() method import java.time.*;import java.time.chrono.*;import java.time.format.*; public class GFG { public static void main(String[] args) { // create ChronoZonedDateTime objects ChronoZonedDateTime zoneddatetime = ZonedDateTime.parse( \"2018-10-25T23:12:31.123+02:00[Europe/Paris]\"); // create a formatter DateTimeFormatter formatter = DateTimeFormatter.BASIC_ISO_DATE; // apply format() String value = zoneddatetime.format(formatter); // print result System.out.println(\"Result: \" + value); }}", "e": 25993, "s": 25315, "text": null }, { "code": null, "e": 26016, "s": 25993, "text": "Result: 20181025+0200\n" }, { "code": null, "e": 26154, "s": 26016, "text": "Reference: https://docs.oracle.com/javase/9/docs/api/java/time/chrono/ChronoZonedDateTime.html#format-java.time.format.DateTimeFormatter-" }, { "code": null, "e": 26179, "s": 26154, "text": "Java-ChronoZonedDateTime" }, { "code": null, "e": 26194, "s": 26179, "text": "Java-Functions" }, { "code": null, "e": 26219, "s": 26194, "text": "Java-Time-Chrono package" }, { "code": null, "e": 26224, "s": 26219, "text": "Java" }, { "code": null, "e": 26229, "s": 26224, "text": "Java" }, { "code": null, "e": 26327, "s": 26229, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 26342, "s": 26327, "text": "Stream In Java" }, { "code": null, "e": 26363, "s": 26342, "text": "Constructors in Java" }, { "code": null, "e": 26409, "s": 26363, "text": "Different ways of Reading a text file in Java" }, { "code": null, "e": 26428, "s": 26409, "text": "Exceptions in Java" }, { "code": null, "e": 26458, "s": 26428, "text": "Functional Interfaces in Java" }, { "code": null, "e": 26475, "s": 26458, "text": "Generics in Java" }, { "code": null, "e": 26518, "s": 26475, "text": "Comparator Interface in Java with Examples" }, { "code": null, "e": 26539, "s": 26518, "text": "Introduction to Java" }, { "code": null, "e": 26568, "s": 26539, "text": "HashMap get() Method in Java" } ]
Elasticsearch Search Engine | An introduction - GeeksforGeeks
07 Feb, 2019 Elasticsearch is a full-text search and analytics engine based on Apache Lucene. Elasticsearch makes it easier to perform data aggregation operations on data from multiple sources and to perform unstructured queries such as Fuzzy Searches on the stored data. It stores data in a document-like format, similar to how MongoDB does it. Data is serialized in JSON format. This adds a Non-relational nature to it and thus, it can also be used as a NoSQL/Non-relational database. A typical Elasticsearch document would look like: { "first_name": "Divij", "last_name":"Sehgal", "email":"[email protected]", "dob":"04-11-1995", "city":"Mumbai", "state":"Maharashtra", "country":"India", "occupation":"Software Engineer", } It is distributed, horizontally scalable, as in more Elasticsearch instances can be added toa cluster as and when need arises, as opposed to increasing the capability of one machine running an Elasticsearch instance. It is RESTful and API centric, thus making it more usable. Its operations can easily be accessed over HTTP through the RestFul API so it can be integrated seamlessly into any application. Further, numerous wrappers are available in various Programming languages, obviating the need to use the API manually and most operations can be accessed via library function calls that handle communication with the engine themselves. Through the use of CRUD operations – Create, Read, Update, Delete – it is possible to effectively operate on the data present in persistent storage. These are similar to the CRUD achieved by relational databases and can be performed through HTTP interface present in the RESTful APIs. Where do we use Elasticsearch? Elasticsearch is a good fit for – Storing and operating on unstructured or semi-structured data, which may often change in structure. Due to schema-less nature, adding new columns does not require the overhead of adding a new column to the table. By simply adding new columns to incoming data to an index, Elasticsearch is able to accommodate new column and make it available to further operations. Full-text searches: By ranking each document for relevance to a search by correlating search terms with document content using TF-IDF count for each document, fuzzy searches are able to rank documents by relevance to the search made. It is common to have Elasticsearch to be used as a storage and analysis tool for Logs generated by disparate systems. Aggregation tools such as Kibana can be used to build aggregations and visualizations in real-time from the collected data. It works well with Time-series analysis of data as it can extract metrics from the incoming data in real time. Infrastructure monitoring in CI/CD pipelines. Elasticsearch Concepts Elasticsearch works on a concept known as inverse indexing. This concept comes from the Lucene library(Remember Apache Lucene from above). This index is similar to terms present at the back of a book, that show the pages on which each important term in the book may be present or discussed. The inverted index makes it easier to resolve queries to specific documents they could be related to, based on the keywords present in the query, and speeds up a document retrieval process by limiting the search space of documents to be considered for that query. 
Let’s take the following three Game of Thrones dialogues: “Winter is coming.”“A mind needs books as a sword needs a whetstone, if it is to keep its edge.”“Every flight begins with a fall.”“Words can accomplish what swords cannot.” “Winter is coming.” “A mind needs books as a sword needs a whetstone, if it is to keep its edge.” “Every flight begins with a fall.” “Words can accomplish what swords cannot.” Consider each of these dialogues as a single document, i.e, each document has a structure like: { "dialogue": "....." } After some simple text processing: After lowercasing the text and removing punctuations, we can construct the “inverted index” as follows: The first two columns form what is called the Dictionary. This is where Elasticsearch searches for the search terms to get to know which documents could be relevant to the current search. The third column is also referred to as Postings. This links each individual term with the document it could be present in. Few common terms associated with Elasticsearch are as follows: Cluster: A cluster is a group of systems running Elasticsearch engine, that participate and operate in close correspondence with each other to store data and resolve a query. These are further classified, based on their role in the cluster. Node: A node is a JVM Process running an instance of the Elasticsearch runtime, independently accessible over a network by other machines or nodes in a cluster. Index: An index in Elasticsearch is analogous to tables in relational databases. Mapping: Each index has a mapping associated with it, which is essentially a schema-definition of the data that each individual document in the index can hold. This can be manually created for each index or it can be automatically be added when data is pushed to an index. Document: A JSON document. In relational terms, this would represent a single row in a table. Shard: Shards are blocks of data that may or may not belong to the same index. Since data belonging to a single index may get very large, say a few hundred GBs or even a few TBs in size, it is infeasible to vertically grow storage. Instead, data is logically divided into shards stored on different nodes, which individually operate on the data contained in them. This allows for horizontal scaling. Replicas: Each shard in a cluster may be replicated to one or more nodes in a the cluster. This allows for a failover backup. In case one of the nodes goes down or cannot utilize its resources at the moment, a replica with the data is always available to work on the data. By default, one replica for each shard is created and the number is configurable. In addition to Failover, use of replicas are also increases search performance. Advanced Computer Subject Web Technologies Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. Comments Old Comments Decision Tree ML | Stochastic Gradient Descent (SGD) KDD Process in Data Mining ML | Linear Regression Reinforcement learning Installation of Node.js on Linux Roadmap to Become a Web Developer in 2022 How to fetch data from an API in ReactJS ? Top 10 Projects For Beginners To Practice HTML and CSS Skills How to insert spaces/tabs in text using HTML/CSS?
[ { "code": null, "e": 24328, "s": 24300, "text": "\n07 Feb, 2019" }, { "code": null, "e": 24587, "s": 24328, "text": "Elasticsearch is a full-text search and analytics engine based on Apache Lucene. Elasticsearch makes it easier to perform data aggregation operations on data from multiple sources and to perform unstructured queries such as Fuzzy Searches on the stored data." }, { "code": null, "e": 24802, "s": 24587, "text": "It stores data in a document-like format, similar to how MongoDB does it. Data is serialized in JSON format. This adds a Non-relational nature to it and thus, it can also be used as a NoSQL/Non-relational database." }, { "code": null, "e": 24852, "s": 24802, "text": "A typical Elasticsearch document would look like:" }, { "code": null, "e": 25056, "s": 24852, "text": "{\n \"first_name\": \"Divij\",\n \"last_name\":\"Sehgal\",\n \"email\":\"[email protected]\",\n \"dob\":\"04-11-1995\",\n \"city\":\"Mumbai\",\n \"state\":\"Maharashtra\",\n \"country\":\"India\",\n \"occupation\":\"Software Engineer\",\n}\n" }, { "code": null, "e": 25273, "s": 25056, "text": "It is distributed, horizontally scalable, as in more Elasticsearch instances can be added toa cluster as and when need arises, as opposed to increasing the capability of one machine running an Elasticsearch instance." }, { "code": null, "e": 25696, "s": 25273, "text": "It is RESTful and API centric, thus making it more usable. Its operations can easily be accessed over HTTP through the RestFul API so it can be integrated seamlessly into any application. Further, numerous wrappers are available in various Programming languages, obviating the need to use the API manually and most operations can be accessed via library function calls that handle communication with the engine themselves." }, { "code": null, "e": 25981, "s": 25696, "text": "Through the use of CRUD operations – Create, Read, Update, Delete – it is possible to effectively operate on the data present in persistent storage. These are similar to the CRUD achieved by relational databases and can be performed through HTTP interface present in the RESTful APIs." }, { "code": null, "e": 26012, "s": 25981, "text": "Where do we use Elasticsearch?" }, { "code": null, "e": 26046, "s": 26012, "text": "Elasticsearch is a good fit for –" }, { "code": null, "e": 26411, "s": 26046, "text": "Storing and operating on unstructured or semi-structured data, which may often change in structure. Due to schema-less nature, adding new columns does not require the overhead of adding a new column to the table. By simply adding new columns to incoming data to an index, Elasticsearch is able to accommodate new column and make it available to further operations." }, { "code": null, "e": 26645, "s": 26411, "text": "Full-text searches: By ranking each document for relevance to a search by correlating search terms with document content using TF-IDF count for each document, fuzzy searches are able to rank documents by relevance to the search made." }, { "code": null, "e": 26887, "s": 26645, "text": "It is common to have Elasticsearch to be used as a storage and analysis tool for Logs generated by disparate systems. Aggregation tools such as Kibana can be used to build aggregations and visualizations in real-time from the collected data." }, { "code": null, "e": 26998, "s": 26887, "text": "It works well with Time-series analysis of data as it can extract metrics from the incoming data in real time." 
}, { "code": null, "e": 27044, "s": 26998, "text": "Infrastructure monitoring in CI/CD pipelines." }, { "code": null, "e": 27067, "s": 27044, "text": "Elasticsearch Concepts" }, { "code": null, "e": 27206, "s": 27067, "text": "Elasticsearch works on a concept known as inverse indexing. This concept comes from the Lucene library(Remember Apache Lucene from above)." }, { "code": null, "e": 27622, "s": 27206, "text": "This index is similar to terms present at the back of a book, that show the pages on which each important term in the book may be present or discussed. The inverted index makes it easier to resolve queries to specific documents they could be related to, based on the keywords present in the query, and speeds up a document retrieval process by limiting the search space of documents to be considered for that query." }, { "code": null, "e": 27680, "s": 27622, "text": "Let’s take the following three Game of Thrones dialogues:" }, { "code": null, "e": 27853, "s": 27680, "text": "“Winter is coming.”“A mind needs books as a sword needs a whetstone, if it is to keep its edge.”“Every flight begins with a fall.”“Words can accomplish what swords cannot.”" }, { "code": null, "e": 27873, "s": 27853, "text": "“Winter is coming.”" }, { "code": null, "e": 27951, "s": 27873, "text": "“A mind needs books as a sword needs a whetstone, if it is to keep its edge.”" }, { "code": null, "e": 27986, "s": 27951, "text": "“Every flight begins with a fall.”" }, { "code": null, "e": 28029, "s": 27986, "text": "“Words can accomplish what swords cannot.”" }, { "code": null, "e": 28125, "s": 28029, "text": "Consider each of these dialogues as a single document, i.e, each document has a structure like:" }, { "code": null, "e": 28154, "s": 28125, "text": "{\n \"dialogue\": \".....\"\n}\n" }, { "code": null, "e": 28293, "s": 28154, "text": "After some simple text processing: After lowercasing the text and removing punctuations, we can construct the “inverted index” as follows:" }, { "code": null, "e": 28481, "s": 28293, "text": "The first two columns form what is called the Dictionary. This is where Elasticsearch searches for the search terms to get to know which documents could be relevant to the current search." }, { "code": null, "e": 28605, "s": 28481, "text": "The third column is also referred to as Postings. This links each individual term with the document it could be present in." }, { "code": null, "e": 28668, "s": 28605, "text": "Few common terms associated with Elasticsearch are as follows:" }, { "code": null, "e": 28909, "s": 28668, "text": "Cluster: A cluster is a group of systems running Elasticsearch engine, that participate and operate in close correspondence with each other to store data and resolve a query. These are further classified, based on their role in the cluster." }, { "code": null, "e": 29070, "s": 28909, "text": "Node: A node is a JVM Process running an instance of the Elasticsearch runtime, independently accessible over a network by other machines or nodes in a cluster." }, { "code": null, "e": 29151, "s": 29070, "text": "Index: An index in Elasticsearch is analogous to tables in relational databases." }, { "code": null, "e": 29424, "s": 29151, "text": "Mapping: Each index has a mapping associated with it, which is essentially a schema-definition of the data that each individual document in the index can hold. This can be manually created for each index or it can be automatically be added when data is pushed to an index." 
}, { "code": null, "e": 29518, "s": 29424, "text": "Document: A JSON document. In relational terms, this would represent a single row in a table." }, { "code": null, "e": 29918, "s": 29518, "text": "Shard: Shards are blocks of data that may or may not belong to the same index. Since data belonging to a single index may get very large, say a few hundred GBs or even a few TBs in size, it is infeasible to vertically grow storage. Instead, data is logically divided into shards stored on different nodes, which individually operate on the data contained in them. This allows for horizontal scaling." }, { "code": null, "e": 30353, "s": 29918, "text": "Replicas: Each shard in a cluster may be replicated to one or more nodes in a the cluster. This allows for a failover backup. In case one of the nodes goes down or cannot utilize its resources at the moment, a replica with the data is always available to work on the data. By default, one replica for each shard is created and the number is configurable. In addition to Failover, use of replicas are also increases search performance." }, { "code": null, "e": 30379, "s": 30353, "text": "Advanced Computer Subject" }, { "code": null, "e": 30396, "s": 30379, "text": "Web Technologies" }, { "code": null, "e": 30494, "s": 30396, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 30503, "s": 30494, "text": "Comments" }, { "code": null, "e": 30516, "s": 30503, "text": "Old Comments" }, { "code": null, "e": 30530, "s": 30516, "text": "Decision Tree" }, { "code": null, "e": 30569, "s": 30530, "text": "ML | Stochastic Gradient Descent (SGD)" }, { "code": null, "e": 30596, "s": 30569, "text": "KDD Process in Data Mining" }, { "code": null, "e": 30619, "s": 30596, "text": "ML | Linear Regression" }, { "code": null, "e": 30642, "s": 30619, "text": "Reinforcement learning" }, { "code": null, "e": 30675, "s": 30642, "text": "Installation of Node.js on Linux" }, { "code": null, "e": 30717, "s": 30675, "text": "Roadmap to Become a Web Developer in 2022" }, { "code": null, "e": 30760, "s": 30717, "text": "How to fetch data from an API in ReactJS ?" }, { "code": null, "e": 30822, "s": 30760, "text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills" } ]
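The parsed notes above describe Elasticsearch's inverted index: a dictionary of terms, where each term carries postings pointing to the documents that contain it. The sketch below is a minimal, illustrative Python version of that idea only; the document names, the tokenize/search helpers, and the simple lowercase-and-alphabetic tokenization are my own assumptions and are not how Elasticsearch or Lucene actually implements its index.

import re
from collections import defaultdict

# The four example dialogues, treated as documents 0..3
documents = [
    "Winter is coming.",
    "A mind needs books as a sword needs a whetstone, if it is to keep its edge.",
    "Every flight begins with a fall.",
    "Words can accomplish what swords cannot.",
]

def tokenize(text):
    # Lowercase and keep only alphabetic runs; a stand-in for real text analysis
    return re.findall(r"[a-z]+", text.lower())

# Dictionary: term -> postings (the set of document ids containing that term)
inverted_index = defaultdict(set)
for doc_id, text in enumerate(documents):
    for term in tokenize(text):
        inverted_index[term].add(doc_id)

print(sorted(inverted_index["is"]))    # [0, 1] - documents containing "is"

def search(query):
    # Only the postings of the query terms are inspected, not every document
    matches = set()
    for term in tokenize(query):
        matches |= inverted_index.get(term, set())
    return sorted(matches)

print(search("needs a sword"))         # [1, 2] - documents containing any query term

A real engine stores richer postings (term frequencies, positions) so that results can be ranked for relevance, for example with the TF-IDF scoring mentioned above.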
Find whether a given number is a power of 4 or not
12 Jan, 2022 Given an integer n, find whether it is a power of 4 or not. Example : Input : 16 Output : 16 is a power of 4 Input : 20 Output : 20 is not a power of 4 1. A simple method is to take a log of the given number on base 4, and if we get an integer then the number is the power of 4. 2. Another solution is to keep dividing the number by 4, i.e, do n = n/4 iteratively. In any iteration, if n%4 becomes non-zero and n is not 1 then n is not a power of 4, otherwise, n is a power of 4. C++ C Java Python3 C# PHP Javascript // C++ program to find whether a given// number is a power of 4 or not#include<iostream> using namespace std;#define bool int class GFG{ /* Function to check if x is power of 4*/public : bool isPowerOfFour(int n){ if(n == 0) return 0; while(n != 1) { if(n % 4 != 0) return 0; n = n / 4; } return 1;}}; /*Driver code*/int main(){ GFG g; int test_no = 64; if(g.isPowerOfFour(test_no)) cout << test_no << " is a power of 4"; else cout << test_no << "is not a power of 4"; getchar();} // This code is contributed by SoM15242 #include<stdio.h>#define bool int /* Function to check if x is power of 4*/bool isPowerOfFour(int n){ if(n == 0) return 0; while(n != 1) { if(n % 4 != 0) return 0; n = n / 4; } return 1;} /*Driver program to test above function*/int main(){ int test_no = 64; if(isPowerOfFour(test_no)) printf("%d is a power of 4", test_no); else printf("%d is not a power of 4", test_no); getchar();} // Java code to check if given// number is power of 4 or not class GFG { // Function to check if // x is power of 4 static int isPowerOfFour(int n) { if(n == 0) return 0; while(n != 1) { if(n % 4 != 0) return 0; n = n / 4; } return 1; } // Driver program public static void main(String[] args) { int test_no = 64; if(isPowerOfFour(test_no) == 1) System.out.println(test_no + " is a power of 4"); else System.out.println(test_no + "is not a power of 4"); }} // This code is contributed// by prerna saini # Python3 program to check if given# number is power of 4 or not # Function to check if x is power of 4def isPowerOfFour(n): if (n == 0): return False while (n != 1): if (n % 4 != 0): return False n = n // 4 return True # Driver codetest_no = 64if(isPowerOfFour(64)): print(test_no, 'is a power of 4')else: print(test_no, 'is not a power of 4') # This code is contributed by Danish Raza // C# code to check if given// number is power of 4 or notusing System; class GFG { // Function to check if // x is power of 4 static int isPowerOfFour(int n) { if (n == 0) return 0; while (n != 1) { if (n % 4 != 0) return 0; n = n / 4; } return 1; } // Driver code public static void Main() { int test_no = 64; if (isPowerOfFour(test_no) == 1) Console.Write(test_no + " is a power of 4"); else Console.Write(test_no + " is not a power of 4"); }} // This code is contributed by Sam007 <?php// PHP code to check if given// number is power of 4 or not // Function to check if// x is power of 4function isPowerOfFour($n){ if($n == 0) return 0; while($n != 1) { if($n % 4 != 0) return 0; $n = $n / 4; } return 1;} // Driver Code$test_no = 64; if(isPowerOfFour($test_no)) echo $test_no," is a power of 4";else echo $test_no," is not a power of 4"; // This code is contributed by Rajesh?> <script> /* Function to check if x is power of 4*/function isPowerOfFour( n){ if(n == 0) return false; while(n != 1) { if(n % 4 != 0) return false; n = n / 4; } return true;} /*Driver program to test above function*/let test_no = 64; if(isPowerOfFour(test_no)) document.write(test_no+" is a power of 4"); else document.write(test_no+" is not a 
power of 4"); // This code is contributed by gauravrajput1 </script> 64 is a power of 4 Time Complexity: O(log4n) Auxiliary Space: O(1)3. A number n is a power of 4 if the following conditions are met. a) There is only one bit set in the binary representation of n (or n is a power of 2) b) The count of zero bits before the (only) set bit is even.For example 16 (10000) is the power of 4 because there is only one bit set and count of 0s before the set bit is 4 which is even.Thanks to Geek4u for suggesting the approach and providing the code. C++ C Java Python3 C# PHP Javascript // C++ program to check// if given number is// power of 4 or not#include<bits/stdc++.h> using namespace std; bool isPowerOfFour(unsigned int n){ int count = 0; /*Check if there is only one bit set in n*/ if ( n && !(n&(n-1)) ) { /* count 0 bits before set bit */ while(n > 1) { n >>= 1; count += 1; } /*If count is even then return true else false*/ return (count%2 == 0)? 1 :0; } /* If there are more than 1 bit set then n is not a power of 4*/ return 0;} /*Driver code*/int main(){ int test_no = 64; if(isPowerOfFour(test_no)) cout << test_no << " is a power of 4" ; else cout << test_no << " is not a power of 4";} // This code is contributed by Shivi_Aggarwal #include<stdio.h>#define bool int bool isPowerOfFour(unsigned int n){ int count = 0; /*Check if there is only one bit set in n*/ if ( n && !(n&(n-1)) ) { /* count 0 bits before set bit */ while(n > 1) { n >>= 1; count += 1; } /*If count is even then return true else false*/ return (count%2 == 0)? 1 :0; } /* If there are more than 1 bit set then n is not a power of 4*/ return 0;} /*Driver program to test above function*/int main(){ int test_no = 64; if(isPowerOfFour(test_no)) printf("%d is a power of 4", test_no); else printf("%d is not a power of 4", test_no); getchar();} // Java program to check// if given number is// power of 4 or notimport java.io.*;class GFG{ static int isPowerOfFour(int n) { int count = 0; /*Check if there is only one bit set in n*/ int x = n & (n - 1); if ( n > 0 && x == 0) { /* count 0 bits before set bit */ while(n > 1) { n >>= 1; count += 1; } /*If count is even then return true else false*/ return (count % 2 == 0) ? 1 : 0; } /* If there are more than 1 bit set then n is not a power of 4*/ return 0; } // Driver Code public static void main(String[] args) { int test_no = 64; if(isPowerOfFour(test_no)>0) System.out.println(test_no + " is a power of 4"); else System.out.println(test_no + " is not a power of 4"); }} // This code is contributed by mits # Python3 program to check if given# number is power of 4 or not # Function to check if x is power of 4def isPowerOfFour(n): count = 0 # Check if there is only one # bit set in n if (n and (not(n & (n - 1)))): # count 0 bits before set bit while(n > 1): n >>= 1 count += 1 # If count is even then return # true else false if(count % 2 == 0): return True else: return False # Driver codetest_no = 64if(isPowerOfFour(64)): print(test_no, 'is a power of 4')else: print(test_no, 'is not a power of 4') # This code is contributed by Danish Raza // C# program to check if given// number is power of 4 or notusing System; class GFG { static int isPowerOfFour(int n) { int count = 0; /*Check if there is only one bit set in n*/ int x = n & (n-1); if ( n > 0 && x == 0) { /* count 0 bits before set bit */ while(n > 1) { n >>= 1; count += 1; } /*If count is even then return true else false*/ return (count % 2 == 0) ? 
1 : 0; } /* If there are more than 1 bit set then n is not a power of 4*/ return 0; } /*Driver program to test above function*/ static void Main() { int test_no = 64; if(isPowerOfFour(test_no)>0) Console.WriteLine("{0} is a power of 4", test_no); else Console.WriteLine("{0} is not a power of 4", test_no); }} // This Code is Contributed by mits <?php function isPowerOfFour($n){ $count = 0; /*Check if there is only one bit set in n*/if ( $n && !($n&($n-1)) ){ /* count 0 bits before set bit */ while($n > 1) { $n >>= 1; $count += 1; } /*If count is even then return true else false*/ return ($count%2 == 0)? 1 :0;} /* If there are more than 1 bit set then n is not a power of 4*/return 0;} /*Driver program to test above function*/ $test_no = 64; if(isPowerOfFour($test_no)) echo $test_no, " is a power of 4"; else echo $test_no, " not is a power of 4"; #This Code is Contributed by Ajit?> <script> // javascript program to check// if given number is// power of 4 or not function isPowerOfFour( n){ let count = 0; /*Check if there is only one bit set in n*/ if ( n && !(n&(n-1)) ) { /* count 0 bits before set bit */ while(n > 1) { n >>= 1; count += 1; } /*If count is even then return true else false*/ return (count%2 == 0)? 1 :0; } /* If there are more than 1 bit set then n is not a power of 4*/ return 0;} /*Driver code*/ let test_no = 64; if(isPowerOfFour(test_no)) document.write( test_no +" is a power of 4" ); else document.write(test_no + " is not a power of 4"); // This code contributed by aashish1995 </script> 64 is a power of 4 Time Complexity: O(log4n) Auxiliary Space: O(1) 4. A number n is a power of 4 if the following conditions are met. a) There is only one bit set in the binary representation of n (or n is a power of 2) b) The bits don’t AND(&) any part of the pattern 0xAAAAAAAAFor example: 16 (10000) is power of 4 because there is only one bit set and 0x10 & 0xAAAAAAAA is zero.Thanks to Sarthak Sahu for suggesting the approach. 
C++ C Java Python3 C# Javascript // C++ program to check// if given number is// power of 4 or not#include<bits/stdc++.h> using namespace std; bool isPowerOfFour(unsigned int n){ return n !=0 && ((n&(n-1)) == 0) && !(n & 0xAAAAAAAA);} /*Driver code*/int main(){ int test_no = 64; if(isPowerOfFour(test_no)) cout << test_no << " is a power of 4" ; else cout << test_no << " is not a power of 4";} // C program to check// if given number is// power of 4 or not#include<stdio.h>#define bool int bool isPowerOfFour(unsigned int n){ return n != 0 && ((n&(n-1)) == 0) && !(n & 0xAAAAAAAA);} /*Driver program to test above function*/int main() { int test_no = 64; if(isPowerOfFour(test_no)) printf("%d is a power of 4", test_no); else printf("%d is not a power of 4", test_no); getchar();} // Java program to check// if given number is// power of 4 or notimport java.io.*;class GFG { static boolean isPowerOfFour(int n) { return n != 0 && ((n&(n-1)) == 0) && (n & 0xAAAAAAAA) == 0; } // Driver Code public static void main(String[] args) { int test_no = 64; if(isPowerOfFour(test_no)) System.out.println(test_no + " is a power of 4"); else System.out.println(test_no + " is not a power of 4"); }} # Python3 program to check# if given number is# power of 4 or notdef isPowerOfFour(n): return (n != 0 and ((n & (n - 1)) == 0) and not(n & 0xAAAAAAAA)); # Driver codetest_no = 64;if(isPowerOfFour(test_no)): print(test_no ,"is a power of 4");else: print(test_no , "is not a power of 4"); # This code contributed by Rajput-Ji // C# program to check// if given number is// power of 4 or notusing System; class GFG{ static bool isPowerOfFour(int n) { return n != 0 && ((n&(n-1)) == 0) && (n & 0xAAAAAAAA) == 0; } // Driver Code static void Main() { int test_no = 64; if(isPowerOfFour(test_no)) Console.WriteLine("{0} is a power of 4", test_no); else Console.WriteLine("{0} is not a power of 4", test_no); }} // This code is contributed by mohit kumar 29 <script>// C++ program to check// if given number is// power of 4 or not function isPowerOfFour( n){ return n !=0 && ((n&(n-1)) == 0) && !(n & 0xAAAAAAAA);} /*Driver code*/ test_no = 64; if(isPowerOfFour(test_no)) document.write(test_no + " is a power of 4"); else document.write(test_no + " is not a power of 4");//This code is contributed by simranarora5sos</script> 64 is a power of 4 Time Complexity: O(log4n) Auxiliary Space: O(1)Why 0xAAAAAAAA ? This is because the bit representation is of powers of 2 that are not of 4. Like 2, 8, 32 so on.5. A number will be a power of 4 if floor(log4(num))=ceil(log4(num) because log4 of a number that is a power of 4 will always be an integer. Below is the implementation of the above approach. 
C++ Java Python3 C# Javascript // C++ program to check // if given number is // power of 4 or not #include<bits/stdc++.h> using namespace std; float logn(int n, int r){ return log(n) / log(r);} bool isPowerOfFour(int n){ //0 is not considered as a power //of 4 if(n == 0) return false; return floor(logn(n,4))==ceil(logn(n,4));} /*Driver code*/int main(){ int test_no = 64; if(isPowerOfFour(test_no)) cout << test_no << " is a power of 4" ; else cout << test_no << " is not a power of 4"; return 0;} // Java program to check// if given number is// power of 4 or notimport java.util.*;class GFG{ static double logn(int n, int r){ return Math.log(n) / Math.log(r);} static boolean isPowerOfFour(int n){ // 0 is not considered // as a power of 4 if (n == 0) return false; return Math.floor(logn(n, 4)) == Math.ceil(logn(n, 4));} // Driver codepublic static void main(String[] args){ int test_no = 64; if (isPowerOfFour(test_no)) System.out.print(test_no + " is a power of 4"); else System.out.print(test_no + " is not a power of 4");}} // This code is contributed by Amit Katiyar # Python3 program to check# if given number is# power of 4 or notimport math def logn(n, r): return math.log(n) / math.log(r) def isPowerOfFour(n): # 0 is not considered # as a power of 4 if (n == 0): return False return (math.floor(logn(n, 4)) == math.ceil(logn(n, 4))) # Driver codeif __name__ == '__main__': test_no = 64 if (isPowerOfFour(test_no)): print(test_no, " is a power of 4") else: print(test_no, " is not a power of 4") # This code is contributed by Amit Katiyar // C# program to check// if given number is// power of 4 or notusing System;class GFG{ static double logn(int n, int r){ return Math.Log(n) / Math.Log(r);} static bool isPowerOfFour(int n){ // 0 is not considered // as a power of 4 if (n == 0) return false; return Math.Floor(logn(n, 4)) == Math.Ceiling(logn(n, 4));} // Driver codepublic static void Main(String[] args){ int test_no = 64; if (isPowerOfFour(test_no)) Console.Write(test_no + " is a power of 4"); else Console.Write(test_no + " is not a power of 4");}} // This code is contributed by 29AjayKumar <script>// javascript program to check // if given number is // power of 4 or not function logn( n, r){ return Math.log(n) / Math.log(r);} function isPowerOfFour( n){ //0 is not considered as a power //of 4 if(n == 0) return false; return Math.floor(logn(n,4))==Math.ceil(logn(n,4));} /*Driver code*/ let test_no = 64; if(isPowerOfFour(test_no)) document.write(test_no + " is a power of 4") ; else document.write( test_no + " is not a power of 4"); // This code contributed by gauravrajput1 </script> 64 is a power of 4 Time Complexity: O(log4n) Auxiliary Space: O(1) 6. 
Using Log and without using ceil (Oneliner) We can easily calculate without using ceil by checking the log of number base 4 to the power of 4 is equal to that number C++ Java Python3 C# Javascript // C++ program to check// if given number is// power of 4 or not#include <bits/stdc++.h>using namespace std; int isPowerOfFour(int n){ return (n > 0 and pow(4, int(log2(n) / log2(4))) == n);} // Driver codeint main(){ int test_no = 64; if (isPowerOfFour(test_no)) cout << test_no << " is a power of 4"; else cout << test_no << " is not a power of 4"; return 0;} // This code is contributed by ukasp // Java program to check// if given number isimport java.io.*; class GFG{ static boolean isPowerOfFour(int n){ return (n > 0 && Math.pow( 4, (int)((Math.log(n) / Math.log(2)) / (Math.log(4) / Math.log(2)))) == n);} // Driver codepublic static void main(String[] args){ int test_no = 64; if (isPowerOfFour(test_no)) System.out.println(test_no + " is a power of 4"); else System.out.println(test_no + " is not a power of 4");}} // This code is contributed by rag2127 # Python3 program to check# if given number is# power of 4 or notimport math def isPowerOfFour(n): return (n > 0 and 4**int(math.log(n, 4)) == n) # Driver codeif __name__ == '__main__': test_no = 64 if (isPowerOfFour(test_no)): print(test_no, " is a power of 4") else: print(test_no, " is not a power of 4") # This code is contributed by vikkycirus // C# program to check// if given number isusing System;class GFG{ static Boolean isPowerOfFour(int n){ return (n > 0 && Math.Pow( 4, (int)((Math.Log(n) / Math.Log(2)) / (Math.Log(4) / Math.Log(2)))) == n);} // Driver codepublic static void Main(String[] args){ int test_no = 64; if (isPowerOfFour(test_no)) Console.WriteLine(test_no + " is a power of 4"); else Console.WriteLine(test_no + " is not a power of 4");}} // This code is contributed by shivanisinghss2110 <script> // Javascript program to check// if given number is// power of 4 or notfunction isPowerOfFour(n){ return (n > 0 && Math.pow(4, (Math.log2(n) / Math.log2(4)) == n));} // Driver codelet test_no = 64; if (isPowerOfFour(test_no)) document.write(test_no + " is a power of 4");else document.write(test_no + " is not a power of 4"); // This code is contributed by avijitmondal1998 </script> 64 is a power of 4 Time Complexity: O(log4n) Auxiliary Space: O(1) 7. A number ‘n’ is a power of 4 if – a) It is a perfect square b) It is a power of two Below is the implementation of the above idea. C++ Java Python3 C# Javascript // C++ program to check// if given number is// power of 4 or not#include<bits/stdc++.h>using namespace std; // Function to check perfect squarebool isPerfectSqaure(int n){ int x = sqrt(n); return (x*x == n);} bool isPowerOfFour(int n){ // If n <= 0, it is not the power of four if(n <= 0) return false; // Check whether 'n' is a perfect square or not if(!isPerfectSqaure(n)) return false; // If 'n' is the perfect square // Check for the second condition i.e. 
'n' must be power of two return !(n & (n-1));} /*Driver code*/int main(){ int test_no = 64; if(isPowerOfFour(test_no)) cout << test_no << " is a power of 4" ; else cout << test_no << " is not a power of 4"; return 0;} // This code is contributed by Fiza Shaikh // Java program to check// if given number is// power of 4 or notimport java.util.*; class GFG { // Function to check perfect square static boolean isPerfectSqaure(int n) { int x = (int) Math.sqrt(n); return (x * x == n); } static boolean isPowerOfFour(int n) { // If n <= 0, it is not the power of four if (n <= 0) return false; // Check whether 'n' is a perfect square or not if (!isPerfectSqaure(n)) return false; // If 'n' is the perfect square // Check for the second condition i.e. 'n' must be power of two return (n & (n - 1)) != 1 ? true : false; } /* Driver code */ public static void main(String[] args) { int test_no = 64; if (isPowerOfFour(test_no)) System.out.print(test_no + " is a power of 4"); else System.out.print(test_no + " is not a power of 4"); }} // This code is contributed by gauravrajput1 # Python program to check# if given number is# power of 4 or not# Function to check perfect square# import the math moduleimport math def isPerfectSqaure(n): x = math.sqrt(n) return (x*x == n) def isPowerOfFour(n): # If n <= 0, it is not the power of four if(n <= 0): return False # Check whether 'n' is a perfect square or not if(isPerfectSqaure(n)): return False # If 'n' is the perfect square # Check for the second condition i.e. 'n' must be power of two return (n & (n - 1)) # Driver codetest_no = 64if(isPowerOfFour(test_no)): print(test_no ," is a power of 4")else: print(test_no ," is not a power of 4") # This code is contributed by shivanisinghss2110 // C# program to check// if given number is// power of 4 or notusing System; public class GFG { // Function to check perfect square static bool isPerfectSqaure(int n) { int x = (int) Math.Sqrt(n); return (x * x == n); } static bool isPowerOfFour(int n) { // If n <= 0, it is not the power of four if (n <= 0) return false; // Check whether 'n' is a perfect square or not if (!isPerfectSqaure(n)) return false; // If 'n' is the perfect square // Check for the second condition i.e. 'n' must be power of two return (n & (n - 1)) != 1 ? true : false; } /* Driver code */ public static void Main(String[] args) { int test_no = 64; if (isPowerOfFour(test_no)) Console.Write(test_no + " is a power of 4"); else Console.Write(test_no + " is not a power of 4"); }} // This code is contributed by umadevi9616 <script> // JavaScript program to check// if given number is// power of 4 or not// Function to check perfect squarefunction isPerfectSqaure( n) { var x = Math.sqrt(n); return (x * x == n); } function isPowerOfFour(n) { // If n <= 0, it is not the power of four if (n <= 0) return false; // Check whether 'n' is a perfect square or not if (!isPerfectSqaure(n)) return false; // If 'n' is the perfect square // Check for the second condition i.e. 'n' must be power of two return (n & (n - 1)) != 1 ? 
true : false; } /* Driver code */ var test_no = 64; if (isPowerOfFour(test_no)) document.write(test_no + " is a power of 4"); else document.write(test_no + " is not a power of 4"); // This code is contributed by shivanisinghss2110 </script> 64 is a power of 4 Time Complexity: O(log2n) Auxiliary Space: O(1) Please write comments if you find any of the above codes/algorithms incorrect, or find other ways to solve the same problem.
[ { "code": null, "e": 24569, "s": 24541, "text": "\n12 Jan, 2022" }, { "code": null, "e": 24629, "s": 24569, "text": "Given an integer n, find whether it is a power of 4 or not." }, { "code": null, "e": 24640, "s": 24629, "text": "Example : " }, { "code": null, "e": 24723, "s": 24640, "text": "Input : 16\nOutput : 16 is a power of 4\n\nInput : 20\nOutput : 20 is not a power of 4" }, { "code": null, "e": 25052, "s": 24723, "text": "1. A simple method is to take a log of the given number on base 4, and if we get an integer then the number is the power of 4. 2. Another solution is to keep dividing the number by 4, i.e, do n = n/4 iteratively. In any iteration, if n%4 becomes non-zero and n is not 1 then n is not a power of 4, otherwise, n is a power of 4. " }, { "code": null, "e": 25056, "s": 25052, "text": "C++" }, { "code": null, "e": 25058, "s": 25056, "text": "C" }, { "code": null, "e": 25063, "s": 25058, "text": "Java" }, { "code": null, "e": 25071, "s": 25063, "text": "Python3" }, { "code": null, "e": 25074, "s": 25071, "text": "C#" }, { "code": null, "e": 25078, "s": 25074, "text": "PHP" }, { "code": null, "e": 25089, "s": 25078, "text": "Javascript" }, { "code": "// C++ program to find whether a given// number is a power of 4 or not#include<iostream> using namespace std;#define bool int class GFG{ /* Function to check if x is power of 4*/public : bool isPowerOfFour(int n){ if(n == 0) return 0; while(n != 1) { if(n % 4 != 0) return 0; n = n / 4; } return 1;}}; /*Driver code*/int main(){ GFG g; int test_no = 64; if(g.isPowerOfFour(test_no)) cout << test_no << \" is a power of 4\"; else cout << test_no << \"is not a power of 4\"; getchar();} // This code is contributed by SoM15242", "e": 25690, "s": 25089, "text": null }, { "code": "#include<stdio.h>#define bool int /* Function to check if x is power of 4*/bool isPowerOfFour(int n){ if(n == 0) return 0; while(n != 1) { if(n % 4 != 0) return 0; n = n / 4; } return 1;} /*Driver program to test above function*/int main(){ int test_no = 64; if(isPowerOfFour(test_no)) printf(\"%d is a power of 4\", test_no); else printf(\"%d is not a power of 4\", test_no); getchar();}", "e": 26111, "s": 25690, "text": null }, { "code": "// Java code to check if given// number is power of 4 or not class GFG { // Function to check if // x is power of 4 static int isPowerOfFour(int n) { if(n == 0) return 0; while(n != 1) { if(n % 4 != 0) return 0; n = n / 4; } return 1; } // Driver program public static void main(String[] args) { int test_no = 64; if(isPowerOfFour(test_no) == 1) System.out.println(test_no + \" is a power of 4\"); else System.out.println(test_no + \"is not a power of 4\"); }} // This code is contributed// by prerna saini", "e": 26812, "s": 26111, "text": null }, { "code": "# Python3 program to check if given# number is power of 4 or not # Function to check if x is power of 4def isPowerOfFour(n): if (n == 0): return False while (n != 1): if (n % 4 != 0): return False n = n // 4 return True # Driver codetest_no = 64if(isPowerOfFour(64)): print(test_no, 'is a power of 4')else: print(test_no, 'is not a power of 4') # This code is contributed by Danish Raza", "e": 27271, "s": 26812, "text": null }, { "code": "// C# code to check if given// number is power of 4 or notusing System; class GFG { // Function to check if // x is power of 4 static int isPowerOfFour(int n) { if (n == 0) return 0; while (n != 1) { if (n % 4 != 0) return 0; n = n / 4; } return 1; } // Driver code public static void Main() { int test_no = 64; if (isPowerOfFour(test_no) == 1) 
Console.Write(test_no + \" is a power of 4\"); else Console.Write(test_no + \" is not a power of 4\"); }} // This code is contributed by Sam007", "e": 27942, "s": 27271, "text": null }, { "code": "<?php// PHP code to check if given// number is power of 4 or not // Function to check if// x is power of 4function isPowerOfFour($n){ if($n == 0) return 0; while($n != 1) { if($n % 4 != 0) return 0; $n = $n / 4; } return 1;} // Driver Code$test_no = 64; if(isPowerOfFour($test_no)) echo $test_no,\" is a power of 4\";else echo $test_no,\" is not a power of 4\"; // This code is contributed by Rajesh?>", "e": 28406, "s": 27942, "text": null }, { "code": "<script> /* Function to check if x is power of 4*/function isPowerOfFour( n){ if(n == 0) return false; while(n != 1) { if(n % 4 != 0) return false; n = n / 4; } return true;} /*Driver program to test above function*/let test_no = 64; if(isPowerOfFour(test_no)) document.write(test_no+\" is a power of 4\"); else document.write(test_no+\" is not a power of 4\"); // This code is contributed by gauravrajput1 </script>", "e": 28853, "s": 28406, "text": null }, { "code": null, "e": 28872, "s": 28853, "text": "64 is a power of 4" }, { "code": null, "e": 28898, "s": 28872, "text": "Time Complexity: O(log4n)" }, { "code": null, "e": 29331, "s": 28898, "text": "Auxiliary Space: O(1)3. A number n is a power of 4 if the following conditions are met. a) There is only one bit set in the binary representation of n (or n is a power of 2) b) The count of zero bits before the (only) set bit is even.For example 16 (10000) is the power of 4 because there is only one bit set and count of 0s before the set bit is 4 which is even.Thanks to Geek4u for suggesting the approach and providing the code. " }, { "code": null, "e": 29335, "s": 29331, "text": "C++" }, { "code": null, "e": 29337, "s": 29335, "text": "C" }, { "code": null, "e": 29342, "s": 29337, "text": "Java" }, { "code": null, "e": 29350, "s": 29342, "text": "Python3" }, { "code": null, "e": 29353, "s": 29350, "text": "C#" }, { "code": null, "e": 29357, "s": 29353, "text": "PHP" }, { "code": null, "e": 29368, "s": 29357, "text": "Javascript" }, { "code": "// C++ program to check// if given number is// power of 4 or not#include<bits/stdc++.h> using namespace std; bool isPowerOfFour(unsigned int n){ int count = 0; /*Check if there is only one bit set in n*/ if ( n && !(n&(n-1)) ) { /* count 0 bits before set bit */ while(n > 1) { n >>= 1; count += 1; } /*If count is even then return true else false*/ return (count%2 == 0)? 1 :0; } /* If there are more than 1 bit set then n is not a power of 4*/ return 0;} /*Driver code*/int main(){ int test_no = 64; if(isPowerOfFour(test_no)) cout << test_no << \" is a power of 4\" ; else cout << test_no << \" is not a power of 4\";} // This code is contributed by Shivi_Aggarwal", "e": 30153, "s": 29368, "text": null }, { "code": "#include<stdio.h>#define bool int bool isPowerOfFour(unsigned int n){ int count = 0; /*Check if there is only one bit set in n*/ if ( n && !(n&(n-1)) ) { /* count 0 bits before set bit */ while(n > 1) { n >>= 1; count += 1; } /*If count is even then return true else false*/ return (count%2 == 0)? 
1 :0; } /* If there are more than 1 bit set then n is not a power of 4*/ return 0;} /*Driver program to test above function*/int main(){ int test_no = 64; if(isPowerOfFour(test_no)) printf(\"%d is a power of 4\", test_no); else printf(\"%d is not a power of 4\", test_no); getchar();}", "e": 30804, "s": 30153, "text": null }, { "code": "// Java program to check// if given number is// power of 4 or notimport java.io.*;class GFG{ static int isPowerOfFour(int n) { int count = 0; /*Check if there is only one bit set in n*/ int x = n & (n - 1); if ( n > 0 && x == 0) { /* count 0 bits before set bit */ while(n > 1) { n >>= 1; count += 1; } /*If count is even then return true else false*/ return (count % 2 == 0) ? 1 : 0; } /* If there are more than 1 bit set then n is not a power of 4*/ return 0; } // Driver Code public static void main(String[] args) { int test_no = 64; if(isPowerOfFour(test_no)>0) System.out.println(test_no + \" is a power of 4\"); else System.out.println(test_no + \" is not a power of 4\"); }} // This code is contributed by mits", "e": 31907, "s": 30804, "text": null }, { "code": "# Python3 program to check if given# number is power of 4 or not # Function to check if x is power of 4def isPowerOfFour(n): count = 0 # Check if there is only one # bit set in n if (n and (not(n & (n - 1)))): # count 0 bits before set bit while(n > 1): n >>= 1 count += 1 # If count is even then return # true else false if(count % 2 == 0): return True else: return False # Driver codetest_no = 64if(isPowerOfFour(64)): print(test_no, 'is a power of 4')else: print(test_no, 'is not a power of 4') # This code is contributed by Danish Raza", "e": 32579, "s": 31907, "text": null }, { "code": "// C# program to check if given// number is power of 4 or notusing System; class GFG { static int isPowerOfFour(int n) { int count = 0; /*Check if there is only one bit set in n*/ int x = n & (n-1); if ( n > 0 && x == 0) { /* count 0 bits before set bit */ while(n > 1) { n >>= 1; count += 1; } /*If count is even then return true else false*/ return (count % 2 == 0) ? 1 : 0; } /* If there are more than 1 bit set then n is not a power of 4*/ return 0; } /*Driver program to test above function*/ static void Main() { int test_no = 64; if(isPowerOfFour(test_no)>0) Console.WriteLine(\"{0} is a power of 4\", test_no); else Console.WriteLine(\"{0} is not a power of 4\", test_no); }} // This Code is Contributed by mits", "e": 33680, "s": 32579, "text": null }, { "code": "<?php function isPowerOfFour($n){ $count = 0; /*Check if there is only one bit set in n*/if ( $n && !($n&($n-1)) ){ /* count 0 bits before set bit */ while($n > 1) { $n >>= 1; $count += 1; } /*If count is even then return true else false*/ return ($count%2 == 0)? 1 :0;} /* If there are more than 1 bit set then n is not a power of 4*/return 0;} /*Driver program to test above function*/ $test_no = 64; if(isPowerOfFour($test_no)) echo $test_no, \" is a power of 4\"; else echo $test_no, \" not is a power of 4\"; #This Code is Contributed by Ajit?>", "e": 34282, "s": 33680, "text": null }, { "code": "<script> // javascript program to check// if given number is// power of 4 or not function isPowerOfFour( n){ let count = 0; /*Check if there is only one bit set in n*/ if ( n && !(n&(n-1)) ) { /* count 0 bits before set bit */ while(n > 1) { n >>= 1; count += 1; } /*If count is even then return true else false*/ return (count%2 == 0)? 
1 :0; } /* If there are more than 1 bit set then n is not a power of 4*/ return 0;} /*Driver code*/ let test_no = 64; if(isPowerOfFour(test_no)) document.write( test_no +\" is a power of 4\" ); else document.write(test_no + \" is not a power of 4\"); // This code contributed by aashish1995 </script>", "e": 35037, "s": 34282, "text": null }, { "code": null, "e": 35056, "s": 35037, "text": "64 is a power of 4" }, { "code": null, "e": 35082, "s": 35056, "text": "Time Complexity: O(log4n)" }, { "code": null, "e": 35104, "s": 35082, "text": "Auxiliary Space: O(1)" }, { "code": null, "e": 35471, "s": 35104, "text": "4. A number n is a power of 4 if the following conditions are met. a) There is only one bit set in the binary representation of n (or n is a power of 2) b) The bits don’t AND(&) any part of the pattern 0xAAAAAAAAFor example: 16 (10000) is power of 4 because there is only one bit set and 0x10 & 0xAAAAAAAA is zero.Thanks to Sarthak Sahu for suggesting the approach. " }, { "code": null, "e": 35475, "s": 35471, "text": "C++" }, { "code": null, "e": 35477, "s": 35475, "text": "C" }, { "code": null, "e": 35482, "s": 35477, "text": "Java" }, { "code": null, "e": 35490, "s": 35482, "text": "Python3" }, { "code": null, "e": 35493, "s": 35490, "text": "C#" }, { "code": null, "e": 35504, "s": 35493, "text": "Javascript" }, { "code": "// C++ program to check// if given number is// power of 4 or not#include<bits/stdc++.h> using namespace std; bool isPowerOfFour(unsigned int n){ return n !=0 && ((n&(n-1)) == 0) && !(n & 0xAAAAAAAA);} /*Driver code*/int main(){ int test_no = 64; if(isPowerOfFour(test_no)) cout << test_no << \" is a power of 4\" ; else cout << test_no << \" is not a power of 4\";}", "e": 35892, "s": 35504, "text": null }, { "code": "// C program to check// if given number is// power of 4 or not#include<stdio.h>#define bool int bool isPowerOfFour(unsigned int n){ return n != 0 && ((n&(n-1)) == 0) && !(n & 0xAAAAAAAA);} /*Driver program to test above function*/int main() { int test_no = 64; if(isPowerOfFour(test_no)) printf(\"%d is a power of 4\", test_no); else printf(\"%d is not a power of 4\", test_no); getchar();}", "e": 36299, "s": 35892, "text": null }, { "code": "// Java program to check// if given number is// power of 4 or notimport java.io.*;class GFG { static boolean isPowerOfFour(int n) { return n != 0 && ((n&(n-1)) == 0) && (n & 0xAAAAAAAA) == 0; } // Driver Code public static void main(String[] args) { int test_no = 64; if(isPowerOfFour(test_no)) System.out.println(test_no + \" is a power of 4\"); else System.out.println(test_no + \" is not a power of 4\"); }}", "e": 36841, "s": 36299, "text": null }, { "code": "# Python3 program to check# if given number is# power of 4 or notdef isPowerOfFour(n): return (n != 0 and ((n & (n - 1)) == 0) and not(n & 0xAAAAAAAA)); # Driver codetest_no = 64;if(isPowerOfFour(test_no)): print(test_no ,\"is a power of 4\");else: print(test_no , \"is not a power of 4\"); # This code contributed by Rajput-Ji", "e": 37194, "s": 36841, "text": null }, { "code": "// C# program to check// if given number is// power of 4 or notusing System; class GFG{ static bool isPowerOfFour(int n) { return n != 0 && ((n&(n-1)) == 0) && (n & 0xAAAAAAAA) == 0; } // Driver Code static void Main() { int test_no = 64; if(isPowerOfFour(test_no)) Console.WriteLine(\"{0} is a power of 4\", test_no); else Console.WriteLine(\"{0} is not a power of 4\", test_no); }} // This code is contributed by mohit kumar 29", "e": 37806, "s": 37194, "text": null }, { "code": "<script>// 
C++ program to check// if given number is// power of 4 or not function isPowerOfFour( n){ return n !=0 && ((n&(n-1)) == 0) && !(n & 0xAAAAAAAA);} /*Driver code*/ test_no = 64; if(isPowerOfFour(test_no)) document.write(test_no + \" is a power of 4\"); else document.write(test_no + \" is not a power of 4\");//This code is contributed by simranarora5sos</script>", "e": 38201, "s": 37806, "text": null }, { "code": null, "e": 38220, "s": 38201, "text": "64 is a power of 4" }, { "code": null, "e": 38246, "s": 38220, "text": "Time Complexity: O(log4n)" }, { "code": null, "e": 38522, "s": 38246, "text": "Auxiliary Space: O(1)Why 0xAAAAAAAA ? This is because the bit representation is of powers of 2 that are not of 4. Like 2, 8, 32 so on.5. A number will be a power of 4 if floor(log4(num))=ceil(log4(num) because log4 of a number that is a power of 4 will always be an integer. " }, { "code": null, "e": 38574, "s": 38522, "text": "Below is the implementation of the above approach. " }, { "code": null, "e": 38578, "s": 38574, "text": "C++" }, { "code": null, "e": 38583, "s": 38578, "text": "Java" }, { "code": null, "e": 38591, "s": 38583, "text": "Python3" }, { "code": null, "e": 38594, "s": 38591, "text": "C#" }, { "code": null, "e": 38605, "s": 38594, "text": "Javascript" }, { "code": "// C++ program to check // if given number is // power of 4 or not #include<bits/stdc++.h> using namespace std; float logn(int n, int r){ return log(n) / log(r);} bool isPowerOfFour(int n){ //0 is not considered as a power //of 4 if(n == 0) return false; return floor(logn(n,4))==ceil(logn(n,4));} /*Driver code*/int main(){ int test_no = 64; if(isPowerOfFour(test_no)) cout << test_no << \" is a power of 4\" ; else cout << test_no << \" is not a power of 4\"; return 0;}", "e": 39127, "s": 38605, "text": null }, { "code": "// Java program to check// if given number is// power of 4 or notimport java.util.*;class GFG{ static double logn(int n, int r){ return Math.log(n) / Math.log(r);} static boolean isPowerOfFour(int n){ // 0 is not considered // as a power of 4 if (n == 0) return false; return Math.floor(logn(n, 4)) == Math.ceil(logn(n, 4));} // Driver codepublic static void main(String[] args){ int test_no = 64; if (isPowerOfFour(test_no)) System.out.print(test_no + \" is a power of 4\"); else System.out.print(test_no + \" is not a power of 4\");}} // This code is contributed by Amit Katiyar", "e": 39795, "s": 39127, "text": null }, { "code": "# Python3 program to check# if given number is# power of 4 or notimport math def logn(n, r): return math.log(n) / math.log(r) def isPowerOfFour(n): # 0 is not considered # as a power of 4 if (n == 0): return False return (math.floor(logn(n, 4)) == math.ceil(logn(n, 4))) # Driver codeif __name__ == '__main__': test_no = 64 if (isPowerOfFour(test_no)): print(test_no, \" is a power of 4\") else: print(test_no, \" is not a power of 4\") # This code is contributed by Amit Katiyar", "e": 40352, "s": 39795, "text": null }, { "code": "// C# program to check// if given number is// power of 4 or notusing System;class GFG{ static double logn(int n, int r){ return Math.Log(n) / Math.Log(r);} static bool isPowerOfFour(int n){ // 0 is not considered // as a power of 4 if (n == 0) return false; return Math.Floor(logn(n, 4)) == Math.Ceiling(logn(n, 4));} // Driver codepublic static void Main(String[] args){ int test_no = 64; if (isPowerOfFour(test_no)) Console.Write(test_no + \" is a power of 4\"); else Console.Write(test_no + \" is not a power of 4\");}} // This code is contributed by 
29AjayKumar", "e": 40999, "s": 40352, "text": null }, { "code": "<script>// javascript program to check // if given number is // power of 4 or not function logn( n, r){ return Math.log(n) / Math.log(r);} function isPowerOfFour( n){ //0 is not considered as a power //of 4 if(n == 0) return false; return Math.floor(logn(n,4))==Math.ceil(logn(n,4));} /*Driver code*/ let test_no = 64; if(isPowerOfFour(test_no)) document.write(test_no + \" is a power of 4\") ; else document.write( test_no + \" is not a power of 4\"); // This code contributed by gauravrajput1 </script>", "e": 41561, "s": 40999, "text": null }, { "code": null, "e": 41580, "s": 41561, "text": "64 is a power of 4" }, { "code": null, "e": 41606, "s": 41580, "text": "Time Complexity: O(log4n)" }, { "code": null, "e": 41628, "s": 41606, "text": "Auxiliary Space: O(1)" }, { "code": null, "e": 41675, "s": 41628, "text": "6. Using Log and without using ceil (Oneliner)" }, { "code": null, "e": 41797, "s": 41675, "text": "We can easily calculate without using ceil by checking the log of number base 4 to the power of 4 is equal to that number" }, { "code": null, "e": 41801, "s": 41797, "text": "C++" }, { "code": null, "e": 41806, "s": 41801, "text": "Java" }, { "code": null, "e": 41814, "s": 41806, "text": "Python3" }, { "code": null, "e": 41817, "s": 41814, "text": "C#" }, { "code": null, "e": 41828, "s": 41817, "text": "Javascript" }, { "code": "// C++ program to check// if given number is// power of 4 or not#include <bits/stdc++.h>using namespace std; int isPowerOfFour(int n){ return (n > 0 and pow(4, int(log2(n) / log2(4))) == n);} // Driver codeint main(){ int test_no = 64; if (isPowerOfFour(test_no)) cout << test_no << \" is a power of 4\"; else cout << test_no << \" is not a power of 4\"; return 0;} // This code is contributed by ukasp", "e": 42298, "s": 41828, "text": null }, { "code": "// Java program to check// if given number isimport java.io.*; class GFG{ static boolean isPowerOfFour(int n){ return (n > 0 && Math.pow( 4, (int)((Math.log(n) / Math.log(2)) / (Math.log(4) / Math.log(2)))) == n);} // Driver codepublic static void main(String[] args){ int test_no = 64; if (isPowerOfFour(test_no)) System.out.println(test_no + \" is a power of 4\"); else System.out.println(test_no + \" is not a power of 4\");}} // This code is contributed by rag2127", "e": 42871, "s": 42298, "text": null }, { "code": "# Python3 program to check# if given number is# power of 4 or notimport math def isPowerOfFour(n): return (n > 0 and 4**int(math.log(n, 4)) == n) # Driver codeif __name__ == '__main__': test_no = 64 if (isPowerOfFour(test_no)): print(test_no, \" is a power of 4\") else: print(test_no, \" is not a power of 4\") # This code is contributed by vikkycirus", "e": 43250, "s": 42871, "text": null }, { "code": "// C# program to check// if given number isusing System;class GFG{ static Boolean isPowerOfFour(int n){ return (n > 0 && Math.Pow( 4, (int)((Math.Log(n) / Math.Log(2)) / (Math.Log(4) / Math.Log(2)))) == n);} // Driver codepublic static void Main(String[] args){ int test_no = 64; if (isPowerOfFour(test_no)) Console.WriteLine(test_no + \" is a power of 4\"); else Console.WriteLine(test_no + \" is not a power of 4\");}} // This code is contributed by shivanisinghss2110", "e": 43825, "s": 43250, "text": null }, { "code": "<script> // Javascript program to check// if given number is// power of 4 or notfunction isPowerOfFour(n){ return (n > 0 && Math.pow(4, (Math.log2(n) / Math.log2(4)) == n));} // Driver codelet test_no = 64; if 
(isPowerOfFour(test_no)) document.write(test_no + \" is a power of 4\");else document.write(test_no + \" is not a power of 4\"); // This code is contributed by avijitmondal1998 </script>", "e": 44322, "s": 43825, "text": null }, { "code": null, "e": 44341, "s": 44322, "text": "64 is a power of 4" }, { "code": null, "e": 44367, "s": 44341, "text": "Time Complexity: O(log4n)" }, { "code": null, "e": 44389, "s": 44367, "text": "Auxiliary Space: O(1)" }, { "code": null, "e": 44427, "s": 44389, "text": "7. A number ‘n’ is a power of 4 if –" }, { "code": null, "e": 44453, "s": 44427, "text": "a) It is a perfect square" }, { "code": null, "e": 44477, "s": 44453, "text": "b) It is a power of two" }, { "code": null, "e": 44524, "s": 44477, "text": "Below is the implementation of the above idea." }, { "code": null, "e": 44528, "s": 44524, "text": "C++" }, { "code": null, "e": 44533, "s": 44528, "text": "Java" }, { "code": null, "e": 44541, "s": 44533, "text": "Python3" }, { "code": null, "e": 44544, "s": 44541, "text": "C#" }, { "code": null, "e": 44555, "s": 44544, "text": "Javascript" }, { "code": "// C++ program to check// if given number is// power of 4 or not#include<bits/stdc++.h>using namespace std; // Function to check perfect squarebool isPerfectSqaure(int n){ int x = sqrt(n); return (x*x == n);} bool isPowerOfFour(int n){ // If n <= 0, it is not the power of four if(n <= 0) return false; // Check whether 'n' is a perfect square or not if(!isPerfectSqaure(n)) return false; // If 'n' is the perfect square // Check for the second condition i.e. 'n' must be power of two return !(n & (n-1));} /*Driver code*/int main(){ int test_no = 64; if(isPowerOfFour(test_no)) cout << test_no << \" is a power of 4\" ; else cout << test_no << \" is not a power of 4\"; return 0;} // This code is contributed by Fiza Shaikh", "e": 45376, "s": 44555, "text": null }, { "code": "// Java program to check// if given number is// power of 4 or notimport java.util.*; class GFG { // Function to check perfect square static boolean isPerfectSqaure(int n) { int x = (int) Math.sqrt(n); return (x * x == n); } static boolean isPowerOfFour(int n) { // If n <= 0, it is not the power of four if (n <= 0) return false; // Check whether 'n' is a perfect square or not if (!isPerfectSqaure(n)) return false; // If 'n' is the perfect square // Check for the second condition i.e. 'n' must be power of two return (n & (n - 1)) != 1 ? true : false; } /* Driver code */ public static void main(String[] args) { int test_no = 64; if (isPowerOfFour(test_no)) System.out.print(test_no + \" is a power of 4\"); else System.out.print(test_no + \" is not a power of 4\"); }} // This code is contributed by gauravrajput1", "e": 46359, "s": 45376, "text": null }, { "code": "# Python program to check# if given number is# power of 4 or not# Function to check perfect square# import the math moduleimport math def isPerfectSqaure(n): x = math.sqrt(n) return (x*x == n) def isPowerOfFour(n): # If n <= 0, it is not the power of four if(n <= 0): return False # Check whether 'n' is a perfect square or not if(isPerfectSqaure(n)): return False # If 'n' is the perfect square # Check for the second condition i.e. 
'n' must be power of two return (n & (n - 1)) # Driver codetest_no = 64if(isPowerOfFour(test_no)): print(test_no ,\" is a power of 4\")else: print(test_no ,\" is not a power of 4\") # This code is contributed by shivanisinghss2110", "e": 47110, "s": 46359, "text": null }, { "code": "// C# program to check// if given number is// power of 4 or notusing System; public class GFG { // Function to check perfect square static bool isPerfectSqaure(int n) { int x = (int) Math.Sqrt(n); return (x * x == n); } static bool isPowerOfFour(int n) { // If n <= 0, it is not the power of four if (n <= 0) return false; // Check whether 'n' is a perfect square or not if (!isPerfectSqaure(n)) return false; // If 'n' is the perfect square // Check for the second condition i.e. 'n' must be power of two return (n & (n - 1)) != 1 ? true : false; } /* Driver code */ public static void Main(String[] args) { int test_no = 64; if (isPowerOfFour(test_no)) Console.Write(test_no + \" is a power of 4\"); else Console.Write(test_no + \" is not a power of 4\"); }} // This code is contributed by umadevi9616", "e": 48078, "s": 47110, "text": null }, { "code": "<script> // JavaScript program to check// if given number is// power of 4 or not// Function to check perfect squarefunction isPerfectSqaure( n) { var x = Math.sqrt(n); return (x * x == n); } function isPowerOfFour(n) { // If n <= 0, it is not the power of four if (n <= 0) return false; // Check whether 'n' is a perfect square or not if (!isPerfectSqaure(n)) return false; // If 'n' is the perfect square // Check for the second condition i.e. 'n' must be power of two return (n & (n - 1)) != 1 ? true : false; } /* Driver code */ var test_no = 64; if (isPowerOfFour(test_no)) document.write(test_no + \" is a power of 4\"); else document.write(test_no + \" is not a power of 4\"); // This code is contributed by shivanisinghss2110 </script>", "e": 48968, "s": 48078, "text": null }, { "code": null, "e": 48987, "s": 48968, "text": "64 is a power of 4" }, { "code": null, "e": 49013, "s": 48987, "text": "Time Complexity: O(log2n)" }, { "code": null, "e": 49035, "s": 49013, "text": "Auxiliary Space: O(1)" }, { "code": null, "e": 49159, "s": 49035, "text": "Please write comments if you find any of the above codes/algorithms incorrect, or find other ways to solve the same problem" }, { "code": null, "e": 49171, "s": 49159, "text": "kisanrajesh" }, { "code": null, "e": 49177, "s": 49171, "text": "jit_t" }, { "code": null, "e": 49190, "s": 49177, "text": "Mithun Kumar" }, { "code": null, "e": 49203, "s": 49190, "text": "SoumikMondal" }, { "code": null, "e": 49218, "s": 49203, "text": "Shivi_Aggarwal" }, { "code": null, "e": 49239, "s": 49218, "text": "Vikramaditya Kukreja" }, { "code": null, "e": 49254, "s": 49239, "text": "mohit kumar 29" }, { "code": null, "e": 49264, "s": 49254, "text": "Rajput-Ji" }, { "code": null, "e": 49280, "s": 49264, "text": "yashbeersingh42" }, { "code": null, "e": 49295, "s": 49280, "text": "amit143katiyar" }, { "code": null, "e": 49307, "s": 49295, "text": "29AjayKumar" }, { "code": null, "e": 49323, "s": 49307, "text": "rishavmahato348" }, { "code": null, "e": 49337, "s": 49323, "text": "GauravRajput1" }, { "code": null, "e": 49349, "s": 49337, "text": "aashish1995" }, { "code": null, "e": 49360, "s": 49349, "text": "vikkycirus" }, { "code": null, "e": 49376, "s": 49360, "text": "simranarora5sos" }, { "code": null, "e": 49382, "s": 49376, "text": "ukasp" }, { "code": null, "e": 49399, "s": 49382, "text": "avijitmondal1998" }, { "code": null, "e": 49407, 
"s": 49399, "text": "rag2127" }, { "code": null, "e": 49426, "s": 49407, "text": "shivanisinghss2110" }, { "code": null, "e": 49445, "s": 49426, "text": "surindertarika1234" }, { "code": null, "e": 49457, "s": 49445, "text": "fizaashaikh" }, { "code": null, "e": 49469, "s": 49457, "text": "umadevi9616" }, { "code": null, "e": 49485, "s": 49469, "text": "subhammahato348" }, { "code": null, "e": 49495, "s": 49485, "text": "subham348" }, { "code": null, "e": 49509, "s": 49495, "text": "aakashrajak02" }, { "code": null, "e": 49519, "s": 49509, "text": "Bit Magic" }, { "code": null, "e": 49532, "s": 49519, "text": "Mathematical" }, { "code": null, "e": 49545, "s": 49532, "text": "Mathematical" }, { "code": null, "e": 49555, "s": 49545, "text": "Bit Magic" }, { "code": null, "e": 49653, "s": 49555, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 49662, "s": 49653, "text": "Comments" }, { "code": null, "e": 49675, "s": 49662, "text": "Old Comments" }, { "code": null, "e": 49710, "s": 49675, "text": "Find the element that appears once" }, { "code": null, "e": 49748, "s": 49710, "text": "Bits manipulation (Important tactics)" }, { "code": null, "e": 49799, "s": 49748, "text": "Set, Clear and Toggle a given bit of a number in C" }, { "code": null, "e": 49815, "s": 49799, "text": "Bit Fields in C" }, { "code": null, "e": 49846, "s": 49815, "text": "C++ bitset and its application" }, { "code": null, "e": 49876, "s": 49846, "text": "Program for Fibonacci numbers" }, { "code": null, "e": 49936, "s": 49876, "text": "Write a program to print all permutations of a given string" }, { "code": null, "e": 49951, "s": 49936, "text": "C++ Data Types" }, { "code": null, "e": 49994, "s": 49951, "text": "Set in C++ Standard Template Library (STL)" } ]
Data Structure and Algorithms - Linked List
A linked list is a sequence of data structures, which are connected together via links. A linked list is a sequence of links which contain items. Each link contains a connection to another link. Linked list is the second most-used data structure after array. Following are the important terms to understand the concept of Linked List.

Link − Each link of a linked list can store a data item called an element.
Next − Each link of a linked list contains a link to the next link called Next.
LinkedList − A Linked List contains the connection link to the first link called First.

A linked list can be visualized as a chain of nodes, where every node points to the next node. As per the above illustration, following are the important points to be considered.

Linked List contains a link element called first.
Each link carries a data field(s) and a link field called next.
Each link is linked with its next link using its next link.
Last link carries a link as null to mark the end of the list.

Following are the various types of linked list.

Simple Linked List − Item navigation is forward only.
Doubly Linked List − Items can be navigated forward and backward.
Circular Linked List − Last item contains link of the first element as next and the first element has a link to the last element as previous.

Following are the basic operations supported by a list.

Insertion − Adds an element at the beginning of the list.
Deletion − Deletes an element at the beginning of the list.
Display − Displays the complete list.
Search − Searches an element using the given key.
Delete − Deletes an element using the given key.

Adding a new node to a linked list is a multi-step activity. We shall learn this with diagrams here. First, create a node using the same structure and find the location where it has to be inserted. Imagine that we are inserting a node B (NewNode) between A (LeftNode) and C (RightNode). Then point B.next to C −

NewNode.next −> RightNode;

It should look like this −

Now, the next node at the left should point to the new node.

LeftNode.next −> NewNode;

This will put the new node in the middle of the two. The new list should look like this −

Similar steps should be taken if the node is being inserted at the beginning of the list. While inserting it at the end, the second last node of the list should point to the new node and the new node will point to NULL.

Deletion is also a multi-step process. We shall learn it with pictorial representation. First, locate the target node to be removed by using searching algorithms.
The left (previous) node of the target node now should point to the next node of the target node − LeftNode.next −> TargetNode.next; This will remove the link that was pointing to the target node. Now, using the following code, we will remove what the target node is pointing at. TargetNode.next −> NULL; If we need to use the deleted node, we can keep it in memory; otherwise, we can simply deallocate the memory and wipe off the target node completely. This operation is a thorough one. We need to make the last node to be pointed by the head node and reverse the whole linked list. First, we traverse to the end of the list. It should be pointing to NULL. Now, we shall make it point to its previous node − We have to make sure that the last node is not lost. So we'll have some temp node, which looks like the head node pointing to the last node. Now, we shall make all left side nodes point to their previous nodes one by one. Except the node (first node) pointed by the head node, all nodes should point to their predecessor, making them their new successor. The first node will point to NULL. We'll make the head node point to the new first node by using the temp node. The linked list is now reversed. To see linked list implementation in C programming language, please click here.
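The insertion and deletion steps described above can be summarised in a short, self-contained C sketch. This is only an illustrative example, not the full implementation linked from this page; the names node, insertFirst and deleteFirst are chosen here purely for clarity.

#include <stdio.h>
#include <stdlib.h>

/* A link stores an element (data) and a pointer to the next link. */
struct node {
    int data;
    struct node *next;
};

struct node *head = NULL;   /* first link of the list */

/* Insertion − adds an element at the beginning of the list. */
void insertFirst(int data) {
    struct node *link = malloc(sizeof(struct node));
    link->data = data;
    link->next = head;      /* new node points to the old first node */
    head = link;            /* head now points to the new node */
}

/* Deletion − deletes the element at the beginning of the list. */
void deleteFirst(void) {
    if (head == NULL) return;
    struct node *temp = head;
    head = head->next;      /* head skips over the deleted node */
    free(temp);
}

/* Display − displays the complete list. */
void printList(void) {
    for (struct node *p = head; p != NULL; p = p->next)
        printf("%d -> ", p->data);
    printf("NULL\n");
}

int main(void) {
    insertFirst(30);
    insertFirst(20);
    insertFirst(10);
    printList();            /* 10 -> 20 -> 30 -> NULL */
    deleteFirst();
    printList();            /* 20 -> 30 -> NULL */
    return 0;
}

Insertion at the beginning only repoints two links (the new node's next and the head), which is why it runs in constant time; deletion at the beginning works the same way in reverse.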
[ { "code": null, "e": 2668, "s": 2580, "text": "A linked list is a sequence of data structures, which are connected together via links." }, { "code": null, "e": 2914, "s": 2668, "text": "Linked List is a sequence of links which contains items. Each link contains a connection to another link. Linked list is the second most-used data structure after array. Following are the important terms to understand the concept of Linked List." }, { "code": null, "e": 2984, "s": 2914, "text": "Link − Each link of a linked list can store a data called an element." }, { "code": null, "e": 3054, "s": 2984, "text": "Link − Each link of a linked list can store a data called an element." }, { "code": null, "e": 3134, "s": 3054, "text": "Next − Each link of a linked list contains a link to the next link called Next." }, { "code": null, "e": 3214, "s": 3134, "text": "Next − Each link of a linked list contains a link to the next link called Next." }, { "code": null, "e": 3302, "s": 3214, "text": "LinkedList − A Linked List contains the connection link to the first link called First." }, { "code": null, "e": 3390, "s": 3302, "text": "LinkedList − A Linked List contains the connection link to the first link called First." }, { "code": null, "e": 3483, "s": 3390, "text": "Linked list can be visualized as a chain of nodes, where every node points to the next node." }, { "code": null, "e": 3567, "s": 3483, "text": "As per the above illustration, following are the important points to be considered." }, { "code": null, "e": 3617, "s": 3567, "text": "Linked List contains a link element called first." }, { "code": null, "e": 3667, "s": 3617, "text": "Linked List contains a link element called first." }, { "code": null, "e": 3731, "s": 3667, "text": "Each link carries a data field(s) and a link field called next." }, { "code": null, "e": 3795, "s": 3731, "text": "Each link carries a data field(s) and a link field called next." }, { "code": null, "e": 3855, "s": 3795, "text": "Each link is linked with its next link using its next link." }, { "code": null, "e": 3915, "s": 3855, "text": "Each link is linked with its next link using its next link." }, { "code": null, "e": 3977, "s": 3915, "text": "Last link carries a link as null to mark the end of the list." }, { "code": null, "e": 4039, "s": 3977, "text": "Last link carries a link as null to mark the end of the list." }, { "code": null, "e": 4087, "s": 4039, "text": "Following are the various types of linked list." }, { "code": null, "e": 4141, "s": 4087, "text": "Simple Linked List − Item navigation is forward only." }, { "code": null, "e": 4195, "s": 4141, "text": "Simple Linked List − Item navigation is forward only." }, { "code": null, "e": 4261, "s": 4195, "text": "Doubly Linked List − Items can be navigated forward and backward." }, { "code": null, "e": 4327, "s": 4261, "text": "Doubly Linked List − Items can be navigated forward and backward." }, { "code": null, "e": 4469, "s": 4327, "text": "Circular Linked List − Last item contains link of the first element as next and the first element has a link to the last element as previous." }, { "code": null, "e": 4611, "s": 4469, "text": "Circular Linked List − Last item contains link of the first element as next and the first element has a link to the last element as previous." }, { "code": null, "e": 4667, "s": 4611, "text": "Following are the basic operations supported by a list." }, { "code": null, "e": 4725, "s": 4667, "text": "Insertion − Adds an element at the beginning of the list." 
}, { "code": null, "e": 4783, "s": 4725, "text": "Insertion − Adds an element at the beginning of the list." }, { "code": null, "e": 4843, "s": 4783, "text": "Deletion − Deletes an element at the beginning of the list." }, { "code": null, "e": 4903, "s": 4843, "text": "Deletion − Deletes an element at the beginning of the list." }, { "code": null, "e": 4941, "s": 4903, "text": "Display − Displays the complete list." }, { "code": null, "e": 4979, "s": 4941, "text": "Display − Displays the complete list." }, { "code": null, "e": 5029, "s": 4979, "text": "Search − Searches an element using the given key." }, { "code": null, "e": 5079, "s": 5029, "text": "Search − Searches an element using the given key." }, { "code": null, "e": 5128, "s": 5079, "text": "Delete − Deletes an element using the given key." }, { "code": null, "e": 5177, "s": 5128, "text": "Delete − Deletes an element using the given key." }, { "code": null, "e": 5381, "s": 5177, "text": "Adding a new node in linked list is a more than one step activity. We shall learn this with diagrams here. First, create a node using the same structure and find the location where it has to be inserted." }, { "code": null, "e": 5496, "s": 5381, "text": "Imagine that we are inserting a node B (NewNode), between A (LeftNode) and C (RightNode). Then point B.next to C −" }, { "code": null, "e": 5524, "s": 5496, "text": "NewNode.next −> RightNode;\n" }, { "code": null, "e": 5551, "s": 5524, "text": "It should look like this −" }, { "code": null, "e": 5612, "s": 5551, "text": "Now, the next node at the left should point to the new node." }, { "code": null, "e": 5639, "s": 5612, "text": "LeftNode.next −> NewNode;\n" }, { "code": null, "e": 5729, "s": 5639, "text": "This will put the new node in the middle of the two. The new list should look like this −" }, { "code": null, "e": 5949, "s": 5729, "text": "Similar steps should be taken if the node is being inserted at the beginning of the list. While inserting it at the end, the second last node of the list should point to the new node and the new node will point to NULL." }, { "code": null, "e": 6118, "s": 5949, "text": "Deletion is also a more than one step process. We shall learn with pictorial representation. First, locate the target node to be removed, by using searching algorithms." }, { "code": null, "e": 6217, "s": 6118, "text": "The left (previous) node of the target node now should point to the next node of the target node −" }, { "code": null, "e": 6252, "s": 6217, "text": "LeftNode.next −> TargetNode.next;\n" }, { "code": null, "e": 6399, "s": 6252, "text": "This will remove the link that was pointing to the target node. Now, using the following code, we will remove what the target node is pointing at." }, { "code": null, "e": 6425, "s": 6399, "text": "TargetNode.next −> NULL;\n" }, { "code": null, "e": 6568, "s": 6425, "text": "We need to use the deleted node. We can keep that in memory otherwise we can simply deallocate memory and wipe off the target node completely." }, { "code": null, "e": 6698, "s": 6568, "text": "This operation is a thorough one. We need to make the last node to be pointed by the head node and reverse the whole linked list." }, { "code": null, "e": 6823, "s": 6698, "text": "First, we traverse to the end of the list. It should be pointing to NULL. Now, we shall make it point to its previous node −" }, { "code": null, "e": 7054, "s": 6823, "text": "We have to make sure that the last node is not the last node. 
So we'll have some temp node, which looks like the head node pointing to the last node. Now, we shall make all left side nodes point to their previous nodes one by one." }, { "code": null, "e": 7222, "s": 7054, "text": "Except the node (first node) pointed by the head node, all nodes should point to their predecessor, making them their new successor. The first node will point to NULL." }, { "code": null, "e": 7299, "s": 7222, "text": "We'll make the head node point to the new first node by using the temp node." }, { "code": null, "e": 7412, "s": 7299, "text": "The linked list is now reversed. To see linked list implementation in C programming language, please click here." }, { "code": null, "e": 7447, "s": 7412, "text": "\n 42 Lectures \n 1.5 hours \n" }, { "code": null, "e": 7459, "s": 7447, "text": " Ravi Kiran" }, { "code": null, "e": 7494, "s": 7459, "text": "\n 141 Lectures \n 13 hours \n" }, { "code": null, "e": 7513, "s": 7494, "text": " Arnab Chakraborty" }, { "code": null, "e": 7548, "s": 7513, "text": "\n 26 Lectures \n 8.5 hours \n" }, { "code": null, "e": 7563, "s": 7548, "text": " Parth Panjabi" }, { "code": null, "e": 7596, "s": 7563, "text": "\n 65 Lectures \n 6 hours \n" }, { "code": null, "e": 7615, "s": 7596, "text": " Arnab Chakraborty" }, { "code": null, "e": 7649, "s": 7615, "text": "\n 75 Lectures \n 13 hours \n" }, { "code": null, "e": 7677, "s": 7649, "text": " Eduonix Learning Solutions" }, { "code": null, "e": 7713, "s": 7677, "text": "\n 64 Lectures \n 10.5 hours \n" }, { "code": null, "e": 7741, "s": 7713, "text": " Eduonix Learning Solutions" }, { "code": null, "e": 7748, "s": 7741, "text": " Print" }, { "code": null, "e": 7759, "s": 7748, "text": " Add Notes" } ]
Modeling COVID-19 epidemic with Python | by Andrea Amparore | Towards Data Science
Because of the country lockdown currently enforced in Italy, also this weekend I had to stay at home, like billions of other people in this world. So, I decided to make use of this time for playing with data on COVID-19 pandemics in Italy, which is released daily by the Italian Civil Protection Department. In this article I will show how to inspect the data, represent it on chart and model future trends with Python, using some open source data science libraries such as Pandas, Matplotlib and Scikit-learn. I really appreciate the effort made by the Civil Protection to publish daily data at different scales (national, regional and provincial base), and especially to release it in machine-readable format through their official GitHub repository. As usual, Open Data approach facilitates scientific collaboration, enriches collaborative research and can improve collective analytical capacity to ultimately inform decisions. This work has been developed with Python-3.7.1 on macOS, and it has been inspired by similar papers that I found online. The full version of the code is available on my GitHub repository. It would be interesting to apply the proposed methodology also to datasets related to other COVID affected countries, such as Spain or France. The standard Python distribution does not come with the modules used in this tutorial. To use these third party modules, we must install them. pip install pandas matplotlib scikit-learnorconda install -c anaconda pandas matplotlib scikit-learn First, we import all the necessary libraries, then we import the latest version of the Italian COVID dataset from the Civil Protection GitHub account and we store it in a Pandas Data Frame. Then, we explore the structure of the table, in order to have a clearer view on the variables, and eventually identify the one we should consider for the epidemic modeling. import pandas as pdfrom datetime import datetime, timedeltaimport matplotlib.pyplot as pltfrom matplotlib.dates import DateFormatterimport matplotlib.dates as mdatesdata = pd.read_csv(“https://raw.githubusercontent.com/pcm-dpc/COVID-19/master/dati-andamento-nazionale/dpc-covid19-ita-andamento-nazionale.csv")print (data.columns) We obtained a list of all the fields contained in the data frame. In particular, deceduti and tamponi report respectively the cumulative numbers of deaths and medical swab (tests) recorded since the beginning of the crisis. In order to calculate the daily increase of those two variables, we run a simple pandas function and we save the outputs into two new fields: diff_deceduti and diff_tamponi. Then, we create a list containing all the dates and we convert them from string to a more suitable datetime format. data[‘diff_deceduti’] = data[‘deceduti’].diff()data[‘diff_tamponi’] = data[‘tamponi’].diff()dates = data[‘data’]date_format = [pd.to_datetime(d) for d in dates] We are now ready to create our first scatter plot, and display the variable that intuitively, at first glance, seems to be the most important for estimating the epidemic progression: the number of daily new positive cases (nuovi_positivi). 
variable = ‘nuovi_positivi’fig, ax = plt.subplots(figsize=(12, 5))ax.grid()ax.scatter(date_format,data[variable])ax.set(xlabel=”Date”,ylabel=variable,title=variable)date_form = DateFormatter(“%d-%m”)ax.xaxis.set_major_formatter(date_form)ax.xaxis.set_major_locator(mdates.DayLocator(interval = 3))fig.savefig(variable + ‘.png’)plt.show() There is a recurrent pattern in the plot above, which is given by the anomalies occurring just after every weekend: it seems that the epidemic slows down on Mondays and Tuesdays. This is caused by the fact that during weekends (and public holidays in general such as Easter Monday) a considerably smaller number of swabs is processed. A possible methodology for correcting this systematic bias consists in the calculation of the moving average, which is normally used to analyze time-series by calculating averages of different subsets of the complete dataset, in our case 7 days. The first moving average is calculated by averaging the first subset of 7 days, and then the subset is changed by moving forward to the next fixed subset, and so on. In general, the moving average smoothens the data, and it reduces anomalies like our weekend bias. rolling_average_days = 7data[‘nuovi_positivi_moving’] = data[‘nuovi_positivi’].rolling(window=rolling_average_days).mean()variable = ‘nuovi_positivi_moving’fig, ax = plt.subplots(figsize=(12, 5))ax.grid()ax.scatter(date_format,data[variable])ax.set(xlabel=”Date”,ylabel=variable,title=variable)date_form = DateFormatter(“%d-%m”)ax.xaxis.set_major_formatter(date_form)ax.xaxis.set_major_locator(mdates.DayLocator(interval = 3))fig.savefig(variable + ‘.png’)plt.show() The chart above represents the trend of new cases averaged over a 7-days period, masking out weekend anomalies. Nevertheless, the effect of the daily number of swabs tested every day is not fully compensated yet. Clearly, the number of positive cases is strictly correlated with the quantity of tests performed. Let’s take a look to the trend of daily tests performed in Italy (tamponi): data['diff_tamponi_moving'] = data['tamponi'].diff().rolling(window=rolling_average_days).mean()variable = 'diff_tamponi_moving'fig, ax = plt.subplots(figsize=(12, 5))ax.grid()ax.scatter(date_format,data[variable])ax.set(xlabel="Date",ylabel=variable,title=variable)date_form = DateFormatter("%d-%m")ax.xaxis.set_major_formatter(date_form)ax.xaxis.set_major_locator(mdates.DayLocator(interval = 3))fig.savefig(variable + '.png')plt.show() Our doubts have been confirmed: at the end our time series, the number of daily swabs is about 20 times higher than the beginning, therefore the variable nuovi_positivi is suffering from this important bias. In order to find a more representing trend, we now calculate the percentage of new positive over the total daily tests, and we inspect the variation over time. data[‘perc_positive’] = ((data[‘nuovi_positivi_moving’])/(data[‘diff_tamponi_moving’])*100)variable = ‘perc_positive’fig, ax = plt.subplots(figsize=(12, 5))ax.grid()ax.scatter(date_format,data[variable])ax.set(xlabel=”Date”,ylabel=variable,title=variable)date_form = DateFormatter(“%d-%m”)ax.xaxis.set_major_formatter(date_form)ax.xaxis.set_major_locator(mdates.DayLocator(interval = 3))fig.savefig(variable + ‘.png’)plt.show() The derived variable perc_positive provides a more reliable representation of the epidemic progression, since it fixes the two systematic biases identified before: weekend fluctuations and overall number of tests. 
Unfortunately, there are still several unconsidered factors that undermines the validity of this analysis. Just to mention some of these: testing procedures have changed considerably over time, ranging from testing only severe symptomatic patients to mass testing of entire populations in specific districts, and the swabs methodology varies region by region. Moreover, data on new positive cases can refer to tests conducted in previous days, from 1 up to 7 days before. And just to make things even more complicated, some patients can also be tested multiple times, and there is no way to detect this from our dataset. For those and other reasons, perc_positive is not yet the variable we need for modeling the epidemic trend, and ultimately for forecasting its evolution. There are other important fields that we should further inspect: in particular, let’s take a closer look to terapia_intensiva and diff_deceduti ( intensive care and daily deaths, respectively). variable = ‘terapia_intensiva’fig, ax = plt.subplots(figsize=(12, 5))ax.grid()ax.scatter(date_format,data[variable])ax.set(xlabel=”Date”,ylabel=variable,title=variable)date_form = DateFormatter(“%d-%m”)ax.xaxis.set_major_formatter(date_form)ax.xaxis.set_major_locator(mdates.DayLocator(interval = 3))fig.savefig(variable + ‘.png’)plt.show()variable = ‘diff_deceduti’fig, ax = plt.subplots(figsize=(12, 5))ax.grid()ax.scatter(date_format,data[variable])ax.set(xlabel=”Date”,ylabel=variable,title=variable)date_form = DateFormatter(“%d-%m”)ax.xaxis.set_major_formatter(date_form)ax.xaxis.set_major_locator(mdates.DayLocator(interval = 3))fig.savefig(variable + ‘.png’)plt.show() As shown in the chart above, the number of patients currently in intensive care seems to follow a more regular trend. In fact, it refers to an easily measurable information which doesn’t suffer from sampling methodology or weekly fluctuations. Surely it is not perfect, since it can also be prone to underestimation, mostly during the acute peak of the crisis where the health system was stressed and hospitals saturated. But after that critical phase it should reflect pretty well the number of patients that are affected most severely by the virus. However, probably we are still missing something: are we sure that a decreasing number of intensive cares always corresponds to an improvement of the situation? In fact, it could be that this number is lowering as a consequence of an increasing number of daily deaths. The above charts shows that daily deaths have been increasing until march 28th, and then it started decreasing at a slower pace. The main assumption of this paper is that the combined value of intensive care and daily deaths can be a reliable variable for estimating the current epidemic progression, and for modeling future trends. Let’s create a new field gravi_deceduti, calculate the sum of patients in severe distress with daily deaths, and plot the resulting values. 
data['gravi_deceduti'] = data['diff_deceduti'] + data['terapia_intensiva']variable = 'gravi_deceduti'fig, ax = plt.subplots(figsize=(12, 5))ax.grid()ax.scatter(date_format,data[variable])ax.set(xlabel="Date",ylabel=variable,title=variable)date_form = DateFormatter("%d-%m")ax.xaxis.set_major_formatter(date_form)ax.xaxis.set_major_locator(mdates.DayLocator(interval = 3))ax.axvline(datetime(2020, 4, 1), c="green", zorder=0)fig.savefig(variable + '.png')plt.show() We explained how both terapia_intensiva and diff_deceduti do not suffer from major systematic bias, and how the latter compensates the former in case of an increase in daily deaths. The combination of the two variables can now be used for modeling the trend of the epidemic in Italy, since this derived number suffers from fewer systematic biases. We added a green line on the chart in order to highlight the day of the peak, which is the point where the descending trend has started. We can now build a Linear Regression model and train it with the data of gravi_deceduti starting from that date, April 1st. Linear Regression is one of the most popular classical machine learning algorithms for supervised learning. This algorithm is relatively easy to implement and works well when the relationship between the covariates and the response variable is known to be linear (in our case: date VS gravi_deceduti). A clear disadvantage is that Linear Regression oversimplifies many real world problems. The code below is an adaptation of some parts of the work done by Angelica Lo Duca on March 31st, in her attempt at modeling the epidemic using the number of positive cases. First we import linear_model from the sklearn module. Then we exclude from X and y all data for the period before the epidemic peak registered on April 1st, and we fit the LinearRegression model with X and y. Finally, we evaluate the model by running the function score(), which returns the coefficient of determination R2 of the prediction (the proportion of the variance in the dependent variable that is predictable from the independent variable). R2 will give some information about the goodness of fit of a model. In regression, the R2 coefficient of determination is a statistical measure of how well the regression predictions approximate the real data points. An R2 of 1 indicates that the regression predictions perfectly fit the data, therefore the closer the value gets to 1, the more we can trust our model. import numpy as npfrom sklearn import linear_model# prepare the lists for the modelX = date_formaty = data['gravi_deceduti'].tolist()[1:]# date format is not suitable for modeling, let's transform the date into incremental numbers starting from April 1ststarting_date = 37 # April 1st is the 37th day of the seriesday_numbers = []for i in range(1, len(X)): day_numbers.append([i])X = day_numbers# # let's train our model only with data after the peakX = X[starting_date:]y = y[starting_date:]# Instantiate Linear Regressionlinear_regr = linear_model.LinearRegression()# Train the model using the training setslinear_regr.fit(X, y)print ("Linear Regression Model Score: %s" % (linear_regr.score(X, y))) Now that we have fitted the model and positively evaluated its R2 score, we are ready for predicting the evolution of gravi_deceduti in the future. To do this, we call the function predict() and we keep track of the maximum error made by the model with the function max_error().
Using that value, we will create two lines that will depict the tolerance buffer, with both minimum and maximum errors of the model’s predictions. # Predict future trendfrom sklearn.metrics import max_errorimport mathy_pred = linear_regr.predict(X)error = max_error(y, y_pred) The model is now ready for predicting gravi_deceduti for next days. We define a variable X_test, which contains both past and future days. We also create the variable future_days containing the number of days for which we want to estimate the epidemic trend. Then we apply our model to X_test. X_test = []future_days = 55for i in range(starting_date, starting_date + future_days): X_test.append([i])y_pred_linear = linear_regr.predict(X_test) The variable y_pred_linear contains the predicted gravi_deceduti for next 55 days. In order to consider the errors made by the model, we define y_pred_max and y_pred_min containing the y_pred + error and y_pred - error, respectively. y_pred_max = []y_pred_min = []for i in range(0, len(y_pred_linear)): y_pred_max.append(y_pred_linear[i] + error) y_pred_min.append(y_pred_linear[i] - error) We have three output variables ready to be displayed on a chart: y_pred, y_pred_max and y_pred_min, containing the predictions, the maximum error and minimum error, respectively. In order to make the plot more appealing, we should convert numbers (represented by the X_test variable) to dates. # convert date of the epidemic peak into datetime formatfrom datetime import datetime, timedeltadate_zero = datetime.strptime(data['data'][starting_date], '%Y-%m-%dT%H:%M:%S')# creating x_ticks for making the plot more appealingdate_prev = []x_ticks = []step = 5data_curr = date_zerox_current = starting_daten = int(future_days / step)for i in range(0, n): date_prev.append(str(data_curr.day) + "/" + str(data_curr.month)) x_ticks.append(x_current) data_curr = data_curr + timedelta(days=step) x_current = x_current + step Now we can plot the known data together with the forecast and the error lines # plot known dataplt.grid()plt.scatter(X, y)# plot linear regression predictionplt.plot(X_test, y_pred_linear, color='green', linewidth=2)# plot maximum errorplt.plot(X_test, y_pred_max, color='red', linewidth=1, linestyle='dashed')#plot minimum errorplt.plot(X_test, y_pred_min, color='red', linewidth=1, linestyle='dashed')plt.xlabel('Days')plt.xlim(starting_date, starting_date + future_days)plt.xticks(x_ticks, date_prev)plt.ylabel('gravi_deceduti')plt.yscale("log")plt.savefig("prediction.png")plt.show() The above analysis is treating Italian COVID-19 epidemic on a national basis, but it is known that regions of Lombardia, Piemonte, Veneto and Emilia-Romagna have been affected more severely then the others. In order to quantify this, we can inspect the regional COVID-19 dataset provided by the Civil Protection, and calculate the proportion of the deaths registered in those regions (we call them Zone 1) data = pd.read_csv("https://raw.githubusercontent.com/pcm-dpc/COVID-19/master/dati-regioni/dpc-covid19-ita-regioni.csv")zone1_df = data[data.denominazione_regione.isin(['Piemonte','Emilia-Romagna','Veneto','Lombardia'])]zone1_df['deceduti'].sum()print("Zone 1 accounts for %s percent of the total deaths" % (round(zone1_df['deceduti'].sum() / data['deceduti'].sum() * 100,2))) If just those 4 regions (out of 20) account for more than 80% of the total deaths, having a single model that predicts the trend for the whole country is a major oversimplification. 
In fact, the situation varies considerably from region to region, as shown in the image below. For each region a different model should be applied, in order to better understand the epidemic trends in the different areas of the country. Given that the Civil Protection provides data also on a provincial basis, it would be possible to model the situation even better than this. Our model tells us that the COVID-19 epidemic will end in Italy between 19th and 22nd May 2020, if the current trend is maintained over time. Regional differences apart, we have to consider that any new external factor can change the effective trend of the epidemic. The current model can not take into account unexpected changes in the system, such as the gradual loosening of lockdown restrictions, or the effects of warmer temperature over the virus spread. In my next article I will create different models for every region separately, and I will upload the code on my GitHub repository.
[ { "code": null, "e": 480, "s": 172, "text": "Because of the country lockdown currently enforced in Italy, also this weekend I had to stay at home, like billions of other people in this world. So, I decided to make use of this time for playing with data on COVID-19 pandemics in Italy, which is released daily by the Italian Civil Protection Department." }, { "code": null, "e": 683, "s": 480, "text": "In this article I will show how to inspect the data, represent it on chart and model future trends with Python, using some open source data science libraries such as Pandas, Matplotlib and Scikit-learn." }, { "code": null, "e": 1103, "s": 683, "text": "I really appreciate the effort made by the Civil Protection to publish daily data at different scales (national, regional and provincial base), and especially to release it in machine-readable format through their official GitHub repository. As usual, Open Data approach facilitates scientific collaboration, enriches collaborative research and can improve collective analytical capacity to ultimately inform decisions." }, { "code": null, "e": 1291, "s": 1103, "text": "This work has been developed with Python-3.7.1 on macOS, and it has been inspired by similar papers that I found online. The full version of the code is available on my GitHub repository." }, { "code": null, "e": 1434, "s": 1291, "text": "It would be interesting to apply the proposed methodology also to datasets related to other COVID affected countries, such as Spain or France." }, { "code": null, "e": 1577, "s": 1434, "text": "The standard Python distribution does not come with the modules used in this tutorial. To use these third party modules, we must install them." }, { "code": null, "e": 1678, "s": 1577, "text": "pip install pandas matplotlib scikit-learnorconda install -c anaconda pandas matplotlib scikit-learn" }, { "code": null, "e": 2041, "s": 1678, "text": "First, we import all the necessary libraries, then we import the latest version of the Italian COVID dataset from the Civil Protection GitHub account and we store it in a Pandas Data Frame. Then, we explore the structure of the table, in order to have a clearer view on the variables, and eventually identify the one we should consider for the epidemic modeling." }, { "code": null, "e": 2371, "s": 2041, "text": "import pandas as pdfrom datetime import datetime, timedeltaimport matplotlib.pyplot as pltfrom matplotlib.dates import DateFormatterimport matplotlib.dates as mdatesdata = pd.read_csv(“https://raw.githubusercontent.com/pcm-dpc/COVID-19/master/dati-andamento-nazionale/dpc-covid19-ita-andamento-nazionale.csv\")print (data.columns)" }, { "code": null, "e": 2437, "s": 2371, "text": "We obtained a list of all the fields contained in the data frame." }, { "code": null, "e": 2885, "s": 2437, "text": "In particular, deceduti and tamponi report respectively the cumulative numbers of deaths and medical swab (tests) recorded since the beginning of the crisis. In order to calculate the daily increase of those two variables, we run a simple pandas function and we save the outputs into two new fields: diff_deceduti and diff_tamponi. Then, we create a list containing all the dates and we convert them from string to a more suitable datetime format." 
}, { "code": null, "e": 3046, "s": 2885, "text": "data[‘diff_deceduti’] = data[‘deceduti’].diff()data[‘diff_tamponi’] = data[‘tamponi’].diff()dates = data[‘data’]date_format = [pd.to_datetime(d) for d in dates]" }, { "code": null, "e": 3286, "s": 3046, "text": "We are now ready to create our first scatter plot, and display the variable that intuitively, at first glance, seems to be the most important for estimating the epidemic progression: the number of daily new positive cases (nuovi_positivi)." }, { "code": null, "e": 3624, "s": 3286, "text": "variable = ‘nuovi_positivi’fig, ax = plt.subplots(figsize=(12, 5))ax.grid()ax.scatter(date_format,data[variable])ax.set(xlabel=”Date”,ylabel=variable,title=variable)date_form = DateFormatter(“%d-%m”)ax.xaxis.set_major_formatter(date_form)ax.xaxis.set_major_locator(mdates.DayLocator(interval = 3))fig.savefig(variable + ‘.png’)plt.show()" }, { "code": null, "e": 3959, "s": 3624, "text": "There is a recurrent pattern in the plot above, which is given by the anomalies occurring just after every weekend: it seems that the epidemic slows down on Mondays and Tuesdays. This is caused by the fact that during weekends (and public holidays in general such as Easter Monday) a considerably smaller number of swabs is processed." }, { "code": null, "e": 4205, "s": 3959, "text": "A possible methodology for correcting this systematic bias consists in the calculation of the moving average, which is normally used to analyze time-series by calculating averages of different subsets of the complete dataset, in our case 7 days." }, { "code": null, "e": 4470, "s": 4205, "text": "The first moving average is calculated by averaging the first subset of 7 days, and then the subset is changed by moving forward to the next fixed subset, and so on. In general, the moving average smoothens the data, and it reduces anomalies like our weekend bias." }, { "code": null, "e": 4937, "s": 4470, "text": "rolling_average_days = 7data[‘nuovi_positivi_moving’] = data[‘nuovi_positivi’].rolling(window=rolling_average_days).mean()variable = ‘nuovi_positivi_moving’fig, ax = plt.subplots(figsize=(12, 5))ax.grid()ax.scatter(date_format,data[variable])ax.set(xlabel=”Date”,ylabel=variable,title=variable)date_form = DateFormatter(“%d-%m”)ax.xaxis.set_major_formatter(date_form)ax.xaxis.set_major_locator(mdates.DayLocator(interval = 3))fig.savefig(variable + ‘.png’)plt.show()" }, { "code": null, "e": 5325, "s": 4937, "text": "The chart above represents the trend of new cases averaged over a 7-days period, masking out weekend anomalies. Nevertheless, the effect of the daily number of swabs tested every day is not fully compensated yet. Clearly, the number of positive cases is strictly correlated with the quantity of tests performed. 
Let’s take a look to the trend of daily tests performed in Italy (tamponi):" }, { "code": null, "e": 5764, "s": 5325, "text": "data['diff_tamponi_moving'] = data['tamponi'].diff().rolling(window=rolling_average_days).mean()variable = 'diff_tamponi_moving'fig, ax = plt.subplots(figsize=(12, 5))ax.grid()ax.scatter(date_format,data[variable])ax.set(xlabel=\"Date\",ylabel=variable,title=variable)date_form = DateFormatter(\"%d-%m\")ax.xaxis.set_major_formatter(date_form)ax.xaxis.set_major_locator(mdates.DayLocator(interval = 3))fig.savefig(variable + '.png')plt.show()" }, { "code": null, "e": 6132, "s": 5764, "text": "Our doubts have been confirmed: at the end our time series, the number of daily swabs is about 20 times higher than the beginning, therefore the variable nuovi_positivi is suffering from this important bias. In order to find a more representing trend, we now calculate the percentage of new positive over the total daily tests, and we inspect the variation over time." }, { "code": null, "e": 6560, "s": 6132, "text": "data[‘perc_positive’] = ((data[‘nuovi_positivi_moving’])/(data[‘diff_tamponi_moving’])*100)variable = ‘perc_positive’fig, ax = plt.subplots(figsize=(12, 5))ax.grid()ax.scatter(date_format,data[variable])ax.set(xlabel=”Date”,ylabel=variable,title=variable)date_form = DateFormatter(“%d-%m”)ax.xaxis.set_major_formatter(date_form)ax.xaxis.set_major_locator(mdates.DayLocator(interval = 3))fig.savefig(variable + ‘.png’)plt.show()" }, { "code": null, "e": 7549, "s": 6560, "text": "The derived variable perc_positive provides a more reliable representation of the epidemic progression, since it fixes the two systematic biases identified before: weekend fluctuations and overall number of tests. Unfortunately, there are still several unconsidered factors that undermines the validity of this analysis. Just to mention some of these: testing procedures have changed considerably over time, ranging from testing only severe symptomatic patients to mass testing of entire populations in specific districts, and the swabs methodology varies region by region. Moreover, data on new positive cases can refer to tests conducted in previous days, from 1 up to 7 days before. And just to make things even more complicated, some patients can also be tested multiple times, and there is no way to detect this from our dataset. For those and other reasons, perc_positive is not yet the variable we need for modeling the epidemic trend, and ultimately for forecasting its evolution." }, { "code": null, "e": 7743, "s": 7549, "text": "There are other important fields that we should further inspect: in particular, let’s take a closer look to terapia_intensiva and diff_deceduti ( intensive care and daily deaths, respectively)." 
}, { "code": null, "e": 8420, "s": 7743, "text": "variable = ‘terapia_intensiva’fig, ax = plt.subplots(figsize=(12, 5))ax.grid()ax.scatter(date_format,data[variable])ax.set(xlabel=”Date”,ylabel=variable,title=variable)date_form = DateFormatter(“%d-%m”)ax.xaxis.set_major_formatter(date_form)ax.xaxis.set_major_locator(mdates.DayLocator(interval = 3))fig.savefig(variable + ‘.png’)plt.show()variable = ‘diff_deceduti’fig, ax = plt.subplots(figsize=(12, 5))ax.grid()ax.scatter(date_format,data[variable])ax.set(xlabel=”Date”,ylabel=variable,title=variable)date_form = DateFormatter(“%d-%m”)ax.xaxis.set_major_formatter(date_form)ax.xaxis.set_major_locator(mdates.DayLocator(interval = 3))fig.savefig(variable + ‘.png’)plt.show()" }, { "code": null, "e": 9240, "s": 8420, "text": "As shown in the chart above, the number of patients currently in intensive care seems to follow a more regular trend. In fact, it refers to an easily measurable information which doesn’t suffer from sampling methodology or weekly fluctuations. Surely it is not perfect, since it can also be prone to underestimation, mostly during the acute peak of the crisis where the health system was stressed and hospitals saturated. But after that critical phase it should reflect pretty well the number of patients that are affected most severely by the virus. However, probably we are still missing something: are we sure that a decreasing number of intensive cares always corresponds to an improvement of the situation? In fact, it could be that this number is lowering as a consequence of an increasing number of daily deaths." }, { "code": null, "e": 9369, "s": 9240, "text": "The above charts shows that daily deaths have been increasing until march 28th, and then it started decreasing at a slower pace." }, { "code": null, "e": 9713, "s": 9369, "text": "The main assumption of this paper is that the combined value of intensive care and daily deaths can be a reliable variable for estimating the current epidemic progression, and for modeling future trends. Let’s create a new field gravi_deceduti, calculate the sum of patients in severe distress with daily deaths, and plot the resulting values." }, { "code": null, "e": 10178, "s": 9713, "text": "data[‘gravi_deceduti’] = data[‘diff_deceduti’] + data[‘terapia_intensiva’]variable = 'gravi_deceduti'fig, ax = plt.subplots(figsize=(12, 5))ax.grid()ax.scatter(date_format,data[variable])ax.set(xlabel=\"Date\",ylabel=variable,title=variable)date_form = DateFormatter(\"%d-%m\")ax.xaxis.set_major_formatter(date_form)ax.xaxis.set_major_locator(mdates.DayLocator(interval = 3))ax.axvline(datetime(2020, 4, 1), c=\"green\", zorder=0)fig.savefig(variable + '.png')plt.show()" }, { "code": null, "e": 10653, "s": 10178, "text": "We explained how both terapia_intensiva and diff_deceduti do not suffer from major systemic bias, and how the latter compensates the former in case of increase of daily deaths. The combination of the two variables can now be used for modeling the trend of the epidemic in Italy, since this derived number suffers from fewer systematic bias. We added a green line on the chart in order to highlight the day the peak, which is the point where the descending trend has started." }, { "code": null, "e": 10776, "s": 10653, "text": "We now can build a Linear Regression model and train it with the data of gravi_decedutistarting from that date, April 1st." 
}, { "code": null, "e": 11162, "s": 10776, "text": "Linear Regression is one of the most popular classical machine learning algorithms for supervised learning. This algorithm is relatively easy to implement and works well when the relationship to between covariates and response variable is known to be linear (in our case: date VS gravi_deceduti). A clear disadvantage is that Linear Regression over simplifies many real world problems." }, { "code": null, "e": 11336, "s": 11162, "text": "The code below is an adaptation of some parts of the work done by Angelica Lo Duca on March 31st, in her attempt of modeling the epidemic using the number of positive cases." }, { "code": null, "e": 12153, "s": 11336, "text": "First we import linear_model from sklearn module. Then we exclude from X and y all data for the period before the epidemic peak registered on April 1st, and we fit the LinearRegression model with X and y. Finally, we evaluate the model by running the function score() , which returns the coefficient of determination R2 of the prediction (the proportion of the variance in the dependent variable that is predictable from the independent variable). R2 will give some information about the goodness of fit of a model. In regression, the R2 coefficient of determination is a statistical measure of how well the regression predictions approximate the real data points. An R2 of 1 indicates that the regression predictions perfectly fit the data, therefore the closer the value gets to 1, the more we can trust our model." }, { "code": null, "e": 12859, "s": 12153, "text": "import numpy as npfrom sklearn import linear_model# prepare the lists for the modelX = date_formaty = data['gravi_deceduti'].tolist()[1:]# date format is not suitable for modeling, let's transform the date into incrementals number starting from April 1ststarting_date = 37 # April 1st is the 37th day of the seriesday_numbers = []for i in range(1, len(X)): day_numbers.append([i])X = day_numbers# # let's train our model only with data after the peakX = X[starting_date:]y = y[starting_date:]# Instantiate Linear Regressionlinear_regr = linear_model.LinearRegression()# Train the model using the training setslinear_regr.fit(X, y)print (\"Linear Regression Model Score: %s\" % (linear_regr.score(X, y)))" }, { "code": null, "e": 13293, "s": 12859, "text": "Now that we have fitted the model and evaluated positively its R2 score, we are ready for predicting the evolution of gravi_deceduti in the future. For doing this, we call the function predict() and we keep track of the the maximum error done by the model with the function max_error(). Using that value, we will create two lines that will depict the tolerance buffer, with both minimum and maximum errors of the model’s predictions." }, { "code": null, "e": 13423, "s": 13293, "text": "# Predict future trendfrom sklearn.metrics import max_errorimport mathy_pred = linear_regr.predict(X)error = max_error(y, y_pred)" }, { "code": null, "e": 13717, "s": 13423, "text": "The model is now ready for predicting gravi_deceduti for next days. We define a variable X_test, which contains both past and future days. We also create the variable future_days containing the number of days for which we want to estimate the epidemic trend. Then we apply our model to X_test." 
}, { "code": null, "e": 13869, "s": 13717, "text": "X_test = []future_days = 55for i in range(starting_date, starting_date + future_days): X_test.append([i])y_pred_linear = linear_regr.predict(X_test)" }, { "code": null, "e": 14103, "s": 13869, "text": "The variable y_pred_linear contains the predicted gravi_deceduti for next 55 days. In order to consider the errors made by the model, we define y_pred_max and y_pred_min containing the y_pred + error and y_pred - error, respectively." }, { "code": null, "e": 14266, "s": 14103, "text": "y_pred_max = []y_pred_min = []for i in range(0, len(y_pred_linear)): y_pred_max.append(y_pred_linear[i] + error) y_pred_min.append(y_pred_linear[i] - error)" }, { "code": null, "e": 14560, "s": 14266, "text": "We have three output variables ready to be displayed on a chart: y_pred, y_pred_max and y_pred_min, containing the predictions, the maximum error and minimum error, respectively. In order to make the plot more appealing, we should convert numbers (represented by the X_test variable) to dates." }, { "code": null, "e": 15095, "s": 14560, "text": "# convert date of the epidemic peak into datetime formatfrom datetime import datetime, timedeltadate_zero = datetime.strptime(data['data'][starting_date], '%Y-%m-%dT%H:%M:%S')# creating x_ticks for making the plot more appealingdate_prev = []x_ticks = []step = 5data_curr = date_zerox_current = starting_daten = int(future_days / step)for i in range(0, n): date_prev.append(str(data_curr.day) + \"/\" + str(data_curr.month)) x_ticks.append(x_current) data_curr = data_curr + timedelta(days=step) x_current = x_current + step" }, { "code": null, "e": 15173, "s": 15095, "text": "Now we can plot the known data together with the forecast and the error lines" }, { "code": null, "e": 15683, "s": 15173, "text": "# plot known dataplt.grid()plt.scatter(X, y)# plot linear regression predictionplt.plot(X_test, y_pred_linear, color='green', linewidth=2)# plot maximum errorplt.plot(X_test, y_pred_max, color='red', linewidth=1, linestyle='dashed')#plot minimum errorplt.plot(X_test, y_pred_min, color='red', linewidth=1, linestyle='dashed')plt.xlabel('Days')plt.xlim(starting_date, starting_date + future_days)plt.xticks(x_ticks, date_prev)plt.ylabel('gravi_deceduti')plt.yscale(\"log\")plt.savefig(\"prediction.png\")plt.show()" }, { "code": null, "e": 16089, "s": 15683, "text": "The above analysis is treating Italian COVID-19 epidemic on a national basis, but it is known that regions of Lombardia, Piemonte, Veneto and Emilia-Romagna have been affected more severely then the others. In order to quantify this, we can inspect the regional COVID-19 dataset provided by the Civil Protection, and calculate the proportion of the deaths registered in those regions (we call them Zone 1)" }, { "code": null, "e": 16466, "s": 16089, "text": "data = pd.read_csv(\"https://raw.githubusercontent.com/pcm-dpc/COVID-19/master/dati-regioni/dpc-covid19-ita-regioni.csv\")zone1_df = data[data.denominazione_regione.isin(['Piemonte','Emilia-Romagna','Veneto','Lombardia'])]zone1_df['deceduti'].sum()print(\"Zone 1 accounts for %s percent of the total deaths\" % (round(zone1_df['deceduti'].sum() / data['deceduti'].sum() * 100,2)))" }, { "code": null, "e": 16743, "s": 16466, "text": "If just those 4 regions (out of 20) account for more than 80% of the total deaths, having a single model that predicts the trend for the whole country is a major oversimplification. In fact, the situation varies considerably from region to region, as shown in the image below." 
}, { "code": null, "e": 17026, "s": 16743, "text": "For each region a different model should be applied, in order to better understand the epidemic trends in the different areas of the country. Given that the Civil Protection provides data also on a provincial basis, it would be possible to model the situation even better than this." }, { "code": null, "e": 17168, "s": 17026, "text": "Our model tells us that the COVID-19 epidemic will end in Italy between 19th and 22nd May 2020, if the current trend is maintained over time." }, { "code": null, "e": 17487, "s": 17168, "text": "Regional differences apart, we have to consider that any new external factor can change the effective trend of the epidemic. The current model can not take into account unexpected changes in the system, such as the gradual loosening of lockdown restrictions, or the effects of warmer temperature over the virus spread." } ]
CodeIgniter - Page Redirection
While building a web application, we often need to redirect the user from one page to another page. CodeIgniter makes this job easy for us. The redirect() function is used for this purpose. Syntax redirect($uri = '', $method = 'auto', $code = NULL) Parameters $uri (string) − URI string $method (string) − Redirect method (‘auto’, ‘location’ or ‘refresh’) $code (string) − HTTP Response code (usually 302 or 303) Return type void The first argument can have two types of URI. We can pass a full site URL or URI segments to the controller you want to direct to. The second optional parameter can have any of the three values from auto, location or refresh. The default is auto. The third optional parameter is only available with location redirects and it allows you to send a specific HTTP response code. Create a controller called Redirect_controller.php and save it in application/controller/Redirect_controller.php <?php class Redirect_controller extends CI_Controller { public function index() { /*Load the URL helper*/ $this->load->helper('url'); /*Redirect the user to some site*/ redirect('http://www.tutorialspoint.com'); } public function computer_graphics() { /*Load the URL helper*/ $this->load->helper('url'); redirect('http://www.tutorialspoint.com/computer_graphics/index.htm'); } public function version2() { /*Load the URL helper*/ $this->load->helper('url'); /*Redirect the user to some internal controller’s method*/ redirect('redirect/computer_graphics'); } } ?> Change the routes.php file in application/config/routes.php to add a route for the above controller and add the following lines at the end of the file. $route['redirect'] = 'Redirect_controller'; $route['redirect/version2'] = 'Redirect_controller/version2'; $route['redirect/computer_graphics'] = 'Redirect_controller/computer_graphics'; Type the following URL in the browser to execute the example. http://yoursite.com/index.php/redirect The above URL will redirect you to the tutorialspoint.com website and if you visit the following URL, then it will redirect you to the computer graphics tutorial at tutorialspoint.com. http://yoursite.com/index.php/redirect/computer_graphics
[ { "code": null, "e": 2507, "s": 2319, "text": "While building web application, we often need to redirect the user from one page to another page. CodeIgniter makes this job easy for us. The redirect() function is used for this purpose." }, { "code": null, "e": 2514, "s": 2507, "text": "Syntax" }, { "code": null, "e": 2525, "s": 2514, "text": "Parameters" }, { "code": null, "e": 2552, "s": 2525, "text": "$uri (string) − URI string" }, { "code": null, "e": 2579, "s": 2552, "text": "$uri (string) − URI string" }, { "code": null, "e": 2648, "s": 2579, "text": "$method (string) − Redirect method (‘auto’, ‘location’ or ‘refresh’)" }, { "code": null, "e": 2717, "s": 2648, "text": "$method (string) − Redirect method (‘auto’, ‘location’ or ‘refresh’)" }, { "code": null, "e": 2774, "s": 2717, "text": "$code (string) − HTTP Response code (usually 302 or 303)" }, { "code": null, "e": 2831, "s": 2774, "text": "$code (string) − HTTP Response code (usually 302 or 303)" }, { "code": null, "e": 2843, "s": 2831, "text": "Return type" }, { "code": null, "e": 2969, "s": 2843, "text": "The first argument can have two types of URI. We can pass full site URL or URI segments to the controller you want to direct." }, { "code": null, "e": 3085, "s": 2969, "text": "The second optional parameter can have any of the three values from auto, location or refresh. The default is auto." }, { "code": null, "e": 3211, "s": 3085, "text": "The third optional parameter is only available with location redirects and it allows you to send specific HTTP response code." }, { "code": null, "e": 3324, "s": 3211, "text": "Create a controller called Redirect_controller.php and save it in application/controller/Redirect_controller.php" }, { "code": null, "e": 4067, "s": 3324, "text": "<?php \n class Redirect_controller extends CI_Controller { \n\t\n public function index() { \n /*Load the URL helper*/ \n $this->load->helper('url'); \n \n /*Redirect the user to some site*/ \n redirect('http://www.tutorialspoint.com'); \n }\n\t\t\n public function computer_graphics() { \n /*Load the URL helper*/ \n $this->load->helper('url'); \n redirect('http://www.tutorialspoint.com/computer_graphics/index.htm'); \n } \n \n public function version2() { \n /*Load the URL helper*/ \n $this->load->helper('url'); \n \n /*Redirect the user to some internal controller’s method*/ \n redirect('redirect/computer_graphics'); \n } \n\t\t\n } \n?>" }, { "code": null, "e": 4216, "s": 4067, "text": "Change the routes.php file in application/config/routes.php to add route for the above controller and add the following line at the end of the file." }, { "code": null, "e": 4404, "s": 4216, "text": "$route['redirect'] = 'Redirect_controller'; \n$route['redirect/version2'] = 'Redirect_controller/version2'; \n$route['redirect/computer_graphics'] = 'Redirect_controller/computer_graphics';" }, { "code": null, "e": 4467, "s": 4404, "text": "Type the following URL in the browser, to execute the example." }, { "code": null, "e": 4507, "s": 4467, "text": "http://yoursite.com/index.php/redirect\n" }, { "code": null, "e": 4692, "s": 4507, "text": "The above URL will redirect you to the tutorialspoint.com website and if you visit the following URL, then it will redirect you to the computer graphics tutorial at tutorialspoint.com." }, { "code": null, "e": 4750, "s": 4692, "text": "http://yoursite.com/index.php/redirect/computer_graphics\n" }, { "code": null, "e": 4757, "s": 4750, "text": " Print" }, { "code": null, "e": 4768, "s": 4757, "text": " Add Notes" } ]
Floyd Warshall | Practice | GeeksforGeeks
The problem is to find shortest distances between every pair of vertices in a given edge weighted directed Graph. The Graph is represented as adjancency matrix, and the matrix denotes the weight of the edegs (if it exists) else -1. Do it in-place. Example 1: Input: matrix = {{0,25},{-1,0}} Output: {{0,25},{-1,0}} Explanation: The shortest distance between every pair is already given(if it exists). Example 2: Input: matrix = {{0,1,43},{1,0,6},{-1,-1,0}} Output: {{0,1,7},{1,0,6},{-1,-1,0}} Explanation: We can reach 3 from 1 as 1->2->3 and the cost will be 1+6=7 which is less than 43. Your Task: You don't need to read, return or print anything. Your task is to complete the function shortest_distance() which takes the matrix as input parameter and modify the distances for every pair in-place. Expected Time Complexity: O(n3) Expected Space Compelxity: O(1) Constraints: 1 <= n <= 100 +2 khatritaukir1 month ago class Solution { public: void shortest_distance(vector<vector<int>>&g){ int n = g.size(); for(int k=0;k<n;k++) { for(int i=0;i<n;i++) { for(int j=0;j<n;j++) { if(i==k || j==k || g[i][k]==-1 || g[k][j] == -1) continue; g[i][j] = min(g[i][j]==-1 ? INT_MAX: g[i][j], g[i][k] + g[k][j]); } } } } }; 0 tvaishnavi10111 month ago class Solution { public:void shortest_distance(vector<vector<int>>&a){ // Code here int n=a.size(); for(int k=0;k<n;k++){ for(int i=0;i<n;i++){ for(int j=0;j<n;j++){ if(i==k || j==k || i==j || a[k][j]==-1 || a[i][k]==-1 ){ continue; } if(a[i][j]==-1){ a[i][j]=a[i][k]+a[k][j]; } else a[i][j]=min(a[i][j],a[i][k]+a[k][j]); } } } }}; -1 mohdhammadsiddiqui1 month ago This is the shortest solution you could find on GFG (2 Line condition):- class Solution { public:void shortest_distance(vector<vector<int>>&matrix){ int n=matrix.size(); for(int k=0;k<n;k++){ for(int row=0;row<n;row++){ for(int col=0;col<n;col++){ if(row==k ||col==k || row==k ||row==col||matrix[row][k]==-1 || matrix[k][col]==-1) continue; matrix[row][col]= matrix[row][col]==-1?matrix[row][col]=matrix[row][k]+matrix[k][col]:min(matrix[row][col],matrix[row][k]+matrix[k][col]); } } } }}; -1 mohdhammadsiddiqui1 month ago 2 Line condition Soution using 3 loops- class Solution { public:void shortest_distance(vector<vector<int>>&matrix){ int n=matrix.size(); for(int k=0;k<n;k++){ for(int row=0;row<n;row++){ for(int col=0;col<n;col++){ if(row==k ||col==k || row==k ||row==col|| (matrix[row][k]==-1 && matrix[k][col]==-1) || matrix[row][k]==-1 || matrix[k][col]==-1) continue; matrix[row][col]= matrix[row][col]==-1?matrix[row][col]=matrix[row][k]+matrix[k][col]:min(matrix[row][col],matrix[row][k]+matrix[k][col]); } } } }}; -1 mohdhammadsiddiqui1 month ago class Solution { public:void shortest_distance(vector<vector<int>>&matrix){ int n=matrix.size(); for(int k=0;k<n;k++){ for(int row=0;row<n;row++){ for(int col=0;col<n;col++){ if(row==k ||col==k || row==k ||row==col|| matrix[row][k]==-1 || matrix[k][col]==-1) continue; if(matrix[row][k]==-1 && matrix[k][col]==-1) continue; if(matrix[row][col]==-1) matrix[row][col]=matrix[row][k]+matrix[k][col]; else matrix[row][col]=min(matrix[row][col],matrix[row][k]+matrix[k][col]); } } } }}; +1 decostarsharma1132 months ago DESCRIPTION OF EXAMPLES IS BAD Just because I was not able to understand the example I looked the solution . 0 sanjay raz2 months ago class Solution { shortest_distance(matrix){ for(let k = 0; k < matrix.length; k++) { for(let i = 0; i < matrix.length; i++) { for(let j = 0; j < matrix.length; j++) { const ij = matrix[i][j] === -1 ? 
Infinity : matrix[i][j]; const ik = matrix[i][k] === -1 ? Infinity : matrix[i][k]; const kj = matrix[k][j] === -1 ? Infinity : matrix[k][j]; const min = Math.min(ij, ik + kj); matrix[i][j] = min === Infinity ? -1 : min; } } } return matrix; } } +1 aloksinghbais022 months ago C++ solution having time complexity as O(N^3+2*(N^2)) and auxiliary space complexity as O(1) is as follows :- Execution Time :- 0.3 / 1.5 sec void shortest_distance(vector<vector<int>>&matrix){ int n = matrix.size(); for(int i = 0; i < n; i++){ for(int j = 0; j < n; j++){ if(matrix[i][j] == -1) matrix[i][j] = INT_MAX; } } for(int k = 0; k < n; k++){ for(int i = 0; i < n; i++){ for(int j = 0; j < n; j++){ if(matrix[i][k] == INT_MAX || matrix[k][j] == INT_MAX){ continue; } else{ matrix[i][j] = min(matrix[i][j],matrix[i][k] + matrix[k][j]); } } } } for(int i = 0; i < n; i++){ for(int j = 0; j < n; j++){ if(matrix[i][j] == INT_MAX) matrix[i][j] = -1; } }} 0 puranjanprithu2 months ago Formula : matrix[i][j] = min(matrix[i][j],matrix[i][k]+matrix[k][j]) for k in range(n): for i in range(n): for j in range(n): if matrix[i][k]!=-1 and matrix[k][j]!=-1 : if matrix[i][j]==-1: matrix[i][j]=matrix[i][k]+matrix[k][j] else: matrix[i][j]=min(matrix[i][j],matrix[i][k]+matrix[k][j]) +2 sanketbhagat2 months ago SIMPLE JAVA SOLUTION class Solution{ public void shortest_distance(int[][] adj){ // Code here int n = adj.length; for(int k=0;k<n;k++){ for(int i=0;i<n;i++){ for(int j=0;j<n;j++){ if(adj[i][k]==-1 || adj[k][j]==-1) continue; if(adj[i][j]==-1) adj[i][j] = adj[i][k]+adj[k][j]; else adj[i][j] = Math.min(adj[i][j],adj[i][k]+adj[k][j]); } } } } } We strongly recommend solving this problem on your own before viewing its editorial. Do you still want to view the editorial? Login to access your submissions. Problem Contest Reset the IDE using the second button on the top right corner. Avoid using static/global variables in your code as your code is tested against multiple test cases and these tend to retain their previous values. Passing the Sample/Custom Test cases does not guarantee the correctness of code. On submission, your code is tested against multiple test cases consisting of all possible corner cases and stress constraints. You can access the hints to get an idea about what is expected of you as well as the final solution code. You can view the solutions submitted by other users from the submission tab.
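For reference, here is a minimal Python sketch of the in-place Floyd-Warshall update that most of the comments above converge on (an illustration, not an official editorial solution); it treats -1 as "no edge" and skips any intermediate vertex that is unreachable:

def shortest_distance(matrix):
    n = len(matrix)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # Skip if either leg of the i -> k -> j path is missing
                if matrix[i][k] == -1 or matrix[k][j] == -1:
                    continue
                via_k = matrix[i][k] + matrix[k][j]
                if matrix[i][j] == -1 or via_k < matrix[i][j]:
                    matrix[i][j] = via_k
    return matrix

# Example 2 from the problem statement
print(shortest_distance([[0, 1, 43], [1, 0, 6], [-1, -1, 0]]))
# [[0, 1, 7], [1, 0, 6], [-1, -1, 0]]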
[ { "code": null, "e": 488, "s": 238, "text": "The problem is to find shortest distances between every pair of vertices in a given edge weighted directed Graph. The Graph is represented as adjancency matrix, and the matrix denotes the weight of the edegs (if it exists) else -1. Do it in-place.\n " }, { "code": null, "e": 499, "s": 488, "text": "Example 1:" }, { "code": null, "e": 642, "s": 499, "text": "Input: matrix = {{0,25},{-1,0}}\nOutput: {{0,25},{-1,0}}\nExplanation: The shortest distance between\nevery pair is already given(if it exists).\n" }, { "code": null, "e": 653, "s": 642, "text": "Example 2:" }, { "code": null, "e": 832, "s": 653, "text": "Input: matrix = {{0,1,43},{1,0,6},{-1,-1,0}}\nOutput: {{0,1,7},{1,0,6},{-1,-1,0}}\nExplanation: We can reach 3 from 1 as 1->2->3\nand the cost will be 1+6=7 which is less than \n43.\n" }, { "code": null, "e": 1047, "s": 834, "text": "Your Task:\nYou don't need to read, return or print anything. Your task is to complete the function shortest_distance() which takes the matrix as input parameter and modify the distances for every pair in-place.\n " }, { "code": null, "e": 1113, "s": 1047, "text": "Expected Time Complexity: O(n3)\nExpected Space Compelxity: O(1)\n " }, { "code": null, "e": 1140, "s": 1113, "text": "Constraints:\n1 <= n <= 100" }, { "code": null, "e": 1143, "s": 1140, "text": "+2" }, { "code": null, "e": 1167, "s": 1143, "text": "khatritaukir1 month ago" }, { "code": null, "e": 1559, "s": 1167, "text": "class Solution {\n public:\n\tvoid shortest_distance(vector<vector<int>>&g){\n\t int n = g.size();\n\t for(int k=0;k<n;k++) {\n\t for(int i=0;i<n;i++) {\n\t for(int j=0;j<n;j++) {\n\t if(i==k || j==k || g[i][k]==-1 || g[k][j] == -1) continue;\n\t g[i][j] = min(g[i][j]==-1 ? INT_MAX: g[i][j], g[i][k] + g[k][j]);\n\t }\n\t }\n\t }\n\t}\n};" }, { "code": null, "e": 1561, "s": 1559, "text": "0" }, { "code": null, "e": 1587, "s": 1561, "text": "tvaishnavi10111 month ago" }, { "code": null, "e": 1693, "s": 1587, "text": "class Solution { public:void shortest_distance(vector<vector<int>>&a){ // Code here int n=a.size();" }, { "code": null, "e": 2059, "s": 1693, "text": " for(int k=0;k<n;k++){ for(int i=0;i<n;i++){ for(int j=0;j<n;j++){ if(i==k || j==k || i==j || a[k][j]==-1 || a[i][k]==-1 ){ continue; } if(a[i][j]==-1){ a[i][j]=a[i][k]+a[k][j]; } else a[i][j]=min(a[i][j],a[i][k]+a[k][j]); } } } }};" }, { "code": null, "e": 2062, "s": 2059, "text": "-1" }, { "code": null, "e": 2092, "s": 2062, "text": "mohdhammadsiddiqui1 month ago" }, { "code": null, "e": 2165, "s": 2092, "text": "This is the shortest solution you could find on GFG (2 Line condition):-" }, { "code": null, "e": 2653, "s": 2167, "text": "class Solution { public:void shortest_distance(vector<vector<int>>&matrix){ int n=matrix.size(); for(int k=0;k<n;k++){ for(int row=0;row<n;row++){ for(int col=0;col<n;col++){ if(row==k ||col==k || row==k ||row==col||matrix[row][k]==-1 || matrix[k][col]==-1) continue; matrix[row][col]= matrix[row][col]==-1?matrix[row][col]=matrix[row][k]+matrix[k][col]:min(matrix[row][col],matrix[row][k]+matrix[k][col]); } } } }};" }, { "code": null, "e": 2658, "s": 2655, "text": "-1" }, { "code": null, "e": 2688, "s": 2658, "text": "mohdhammadsiddiqui1 month ago" }, { "code": null, "e": 2728, "s": 2688, "text": "2 Line condition Soution using 3 loops-" }, { "code": null, "e": 3274, "s": 2728, "text": "class Solution { public:void shortest_distance(vector<vector<int>>&matrix){ int n=matrix.size(); for(int k=0;k<n;k++){ for(int row=0;row<n;row++){ for(int 
col=0;col<n;col++){ if(row==k ||col==k || row==k ||row==col|| (matrix[row][k]==-1 && matrix[k][col]==-1) || matrix[row][k]==-1 || matrix[k][col]==-1) continue; matrix[row][col]= matrix[row][col]==-1?matrix[row][col]=matrix[row][k]+matrix[k][col]:min(matrix[row][col],matrix[row][k]+matrix[k][col]); } } } }};" }, { "code": null, "e": 3279, "s": 3276, "text": "-1" }, { "code": null, "e": 3309, "s": 3279, "text": "mohdhammadsiddiqui1 month ago" }, { "code": null, "e": 3905, "s": 3311, "text": "class Solution { public:void shortest_distance(vector<vector<int>>&matrix){ int n=matrix.size(); for(int k=0;k<n;k++){ for(int row=0;row<n;row++){ for(int col=0;col<n;col++){ if(row==k ||col==k || row==k ||row==col|| matrix[row][k]==-1 || matrix[k][col]==-1) continue; if(matrix[row][k]==-1 && matrix[k][col]==-1) continue; if(matrix[row][col]==-1) matrix[row][col]=matrix[row][k]+matrix[k][col]; else matrix[row][col]=min(matrix[row][col],matrix[row][k]+matrix[k][col]); } } } }};" }, { "code": null, "e": 3910, "s": 3907, "text": "+1" }, { "code": null, "e": 3940, "s": 3910, "text": "decostarsharma1132 months ago" }, { "code": null, "e": 3971, "s": 3940, "text": "DESCRIPTION OF EXAMPLES IS BAD" }, { "code": null, "e": 4049, "s": 3971, "text": "Just because I was not able to understand the example I looked the solution ." }, { "code": null, "e": 4051, "s": 4049, "text": "0" }, { "code": null, "e": 4074, "s": 4051, "text": "sanjay raz2 months ago" }, { "code": null, "e": 4694, "s": 4074, "text": "class Solution {\n shortest_distance(matrix){\n for(let k = 0; k < matrix.length; k++) {\n for(let i = 0; i < matrix.length; i++) {\n for(let j = 0; j < matrix.length; j++) {\n const ij = matrix[i][j] === -1 ? Infinity : matrix[i][j];\n const ik = matrix[i][k] === -1 ? Infinity : matrix[i][k];\n const kj = matrix[k][j] === -1 ? Infinity : matrix[k][j];\n \n const min = Math.min(ij, ik + kj);\n \n matrix[i][j] = min === Infinity ? 
-1 : min;\n }\n }\n }\n \n return matrix;\n }\n}" }, { "code": null, "e": 4697, "s": 4694, "text": "+1" }, { "code": null, "e": 4725, "s": 4697, "text": "aloksinghbais022 months ago" }, { "code": null, "e": 4836, "s": 4725, "text": "C++ solution having time complexity as O(N^3+2*(N^2)) and auxiliary space complexity as O(1) is as follows :- " }, { "code": null, "e": 4870, "s": 4838, "text": "Execution Time :- 0.3 / 1.5 sec" }, { "code": null, "e": 5637, "s": 4872, "text": "void shortest_distance(vector<vector<int>>&matrix){ int n = matrix.size(); for(int i = 0; i < n; i++){ for(int j = 0; j < n; j++){ if(matrix[i][j] == -1) matrix[i][j] = INT_MAX; } } for(int k = 0; k < n; k++){ for(int i = 0; i < n; i++){ for(int j = 0; j < n; j++){ if(matrix[i][k] == INT_MAX || matrix[k][j] == INT_MAX){ continue; } else{ matrix[i][j] = min(matrix[i][j],matrix[i][k] + matrix[k][j]); } } } } for(int i = 0; i < n; i++){ for(int j = 0; j < n; j++){ if(matrix[i][j] == INT_MAX) matrix[i][j] = -1; } }}" }, { "code": null, "e": 5639, "s": 5637, "text": "0" }, { "code": null, "e": 5666, "s": 5639, "text": "puranjanprithu2 months ago" }, { "code": null, "e": 5676, "s": 5666, "text": "Formula :" }, { "code": null, "e": 5735, "s": 5676, "text": "matrix[i][j] = min(matrix[i][j],matrix[i][k]+matrix[k][j])" }, { "code": null, "e": 6065, "s": 5739, "text": "for k in range(n):\n for i in range(n):\n for j in range(n):\n if matrix[i][k]!=-1 and matrix[k][j]!=-1 :\n if matrix[i][j]==-1:\n matrix[i][j]=matrix[i][k]+matrix[k][j]\n else:\n matrix[i][j]=min(matrix[i][j],matrix[i][k]+matrix[k][j])" }, { "code": null, "e": 6068, "s": 6065, "text": "+2" }, { "code": null, "e": 6093, "s": 6068, "text": "sanketbhagat2 months ago" }, { "code": null, "e": 6114, "s": 6093, "text": "SIMPLE JAVA SOLUTION" }, { "code": null, "e": 6594, "s": 6114, "text": "class Solution{\n public void shortest_distance(int[][] adj){\n // Code here \n int n = adj.length;\n for(int k=0;k<n;k++){\n for(int i=0;i<n;i++){\n for(int j=0;j<n;j++){\n if(adj[i][k]==-1 || adj[k][j]==-1) continue;\n if(adj[i][j]==-1) adj[i][j] = adj[i][k]+adj[k][j];\n else adj[i][j] = Math.min(adj[i][j],adj[i][k]+adj[k][j]);\n }\n }\n }\n }\n}" }, { "code": null, "e": 6740, "s": 6594, "text": "We strongly recommend solving this problem on your own before viewing its editorial. Do you still\n want to view the editorial?" }, { "code": null, "e": 6776, "s": 6740, "text": " Login to access your submissions. " }, { "code": null, "e": 6786, "s": 6776, "text": "\nProblem\n" }, { "code": null, "e": 6796, "s": 6786, "text": "\nContest\n" }, { "code": null, "e": 6859, "s": 6796, "text": "Reset the IDE using the second button on the top right corner." }, { "code": null, "e": 7007, "s": 6859, "text": "Avoid using static/global variables in your code as your code is tested against multiple test cases and these tend to retain their previous values." }, { "code": null, "e": 7215, "s": 7007, "text": "Passing the Sample/Custom Test cases does not guarantee the correctness of code. On submission, your code is tested against multiple test cases consisting of all possible corner cases and stress constraints." }, { "code": null, "e": 7321, "s": 7215, "text": "You can access the hints to get an idea about what is expected of you as well as the final solution code." } ]
How to Install and Configure NFS Server on Linux
In this article we will learn and configure NFS (Network File System) which is basically used to share the files and folders between Linux systems. This was developed by Sun Microsystems in 1980 which allows us to mount the file system in the network and remote users can interact and the share just like local file and folders. NFS can be configured as a centralized storage solution. No need of running the same OS on both machines. Can be secured with Firewalls. It can be shared along with all the flavors of *nix. The NFS share folder can be mounted as a local file system. NFS mount needed at least two machines. The machine hosting the shared folders is called as server and which connects is called as clients. Server: 192.168.87.156 Client: 192.168.87.158 We needed to install the packages for NFS # yum install nfs-utils nfs-utils-lib Output: Loaded plugins: fastestmirror, security Setting up Install Process Loading mirror speeds from cached hostfile epel/metalink | 4.0 kB 00:00 * base: mirror.digistar.vn * epel: mirrors.ustc.edu.cn * extras: mirror.digistar.vn * updates: mirror.digistar.vn Resolving Dependencies --> Running transaction check ---> Package nfs-utils.x86_64 1:1.2.3-64.el6 will be installed ---> Package nfs-utils-lib.x86_64 0:1.1.5-11.el6 will be installed --> Finished Dependency Resolution Dependencies Resolved ================================================================================================ Package Arch Version Repository Size ================================================================================================ Installing: nfs-utils x86_64 1:1.2.3-64.el6 base 331 k nfs-utils-lib x86_64 1.1.5-11.el6 base 68 k Transaction Summary ================================================================================================ Install 2 Package(s) Total download size: 399 k Installed size: 1.1 M Is this ok [y/N]: y Downloading Packages: (1/2): nfs-utils-1.2.3-64.el6.x86_64.rpm | 331 kB 00:00 (2/2): nfs-utils-lib-1.1.5-11.el6.x86_64.rpm | 68 kB 00:00 ------------------------------------------------------------------------------------------------ Total 60 kB/s | 399 kB 00:06 Running rpm_check_debug Running Transaction Test Transaction Test Succeeded Running Transaction Installing : nfs-utils-lib-1.1.5-11.el6.x86_64 1/2 Installing : 1:nfs-utils-1.2.3-64.el6.x86_64 2/2 Verifying : 1:nfs-utils-1.2.3-64.el6.x86_64 1/2 Verifying : nfs-utils-lib-1.1.5-11.el6.x86_64 2/2 Installed: nfs-utils.x86_64 1:1.2.3-64.el6 nfs-utils-lib.x86_64 0:1.1.5-11.el6 Complete! After this run the below commands to start the NFS servers and make sure it start at boot time. # chkconfig nfs on # service rpcbind start # service nfs start Output: Starting NFS services: [ OK ] Starting NFS quotas: [ OK ] Starting NFS mountd: [ OK ] Starting NFS daemon: [ OK ] Starting RPC idmapd: [ OK ] We need to decide a directory which we want to share with the client. The directory should be added to /etc/exports # vi /etc/exports All the below lines to the file. /share 192.168.87.158(rw,sync,no_root_squash,no_subtree_check) /share – is the share folder which server wants to share 192.168.87.158 – is the IP address of the client to whom want to share rw – This will all the clients to read and write the files to the share directory. sync – which will confirm the shared directory once the changes are committed. 
no_subtree_check – Will prevents the scanning the shared directory, as nfs performs the scans of every share directory, Disabling the subtree check will increase the reliability, but reduces the security. no_root_squash – This will all the root user to connect to the designated directory. Once, we enter the details of the share in config file, run the below command to export them # exportfs -a Install the required packages to connect to NFS # yum install nfs-utils nfs-utils-lib -y Once the packages are installed on the client, create the directory to mount point the shared folder # mkdir -p /mnt/share # mount 192.168.87.156:/share /mnt/share/ To confirm if the share is mounted or not run the command ‘df -h’, this will show the list of mounted folders. # df -h Output: Filesystem Size Used Avail Use% Mounted on /dev/mapper/VolGroup-lv_root 50G 5.2G 42G 12% / tmpfs 427M 80K 427M 1% /dev/shm /dev/sda1 477M 42M 410M 10% /boot /dev/mapper/VolGroup-lv_home 95G 60M 90G 1% /home 192.168.87.156:/share 18G 2.0G 15G 13% /mnt/share To see the list of all the mounted file systems. # mount Output: /dev/mapper/VolGroup-lv_root on / type ext4 (rw) proc on /proc type proc (rw) sysfs on /sys type sysfs (rw) devpts on /dev/pts type devpts (rw,gid=5,mode=620) tmpfs on /dev/shm type tmpfs (rw) /dev/sda1 on /boot type ext4 (rw) /dev/mapper/VolGroup-lv_home on /home type ext4 (rw) none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw) sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw) 192.168.87.156:/share on /mnt/share type nfs (rw,vers=4,addr=192.168.87.156,clientaddr=192.168.87.158) Create a file and folders in the server share directory # touch test1 # mkdir test Then goto the client side machine and check the /mnt/share folders # ls /mnt/share/ -lh total 4.0K drwxr-xr-x 2 root root 4.0K Apr 20 2016 test -rw-r--r-- 1 root root 0 Apr 20 2016 test1 To automatically mount the share folder permanently while boot in the client machine, add the entries in the /etc/fstab file # vi /etc/fstab # # /etc/fstab # Created by anaconda on Sat Apr 2 00:11:04 2016 # # Accessible filesystems, by reference, are maintained under '/dev/disk' # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info # /dev/mapper/VolGroup-lv_root / ext4 defaults 1 1 UUID=1adb2ad5-d0c7-48a5-9b10-f846a3f9258c /boot ext4 defaults 1 2 /dev/mapper/VolGroup-lv_home /home ext4 defaults 1 2 /dev/mapper/VolGroup-lv_swap swap swap defaults 0 0 tmpfs /dev/shm tmpfs defaults 0 0 devpts /dev/pts devpts gid=5,mode=620 0 0 sysfs /sys sysfs defaults 0 0 proc /proc proc defaults 0 0 192.168.87.156:/share /mnt/share nfs auto,noatime,nolock,bg,nfsvers=3,intr,tcp,actimeo=1800 0 0 Some options and important command of NFS # showmount -e Export list for localhost.localdomain: /share 192.168.87.158 This will show the available share on the local machine, so needed to run on the server side. # showmount -e 192.168.87.156 Export list for 192.168.87.156: /share 192.168.87.158 This will show the remote server shared folders needed to run on the client side – # exportfs -v /share 192.168.87.158(rw,wdelay,no_root_squash,no_subtree_check,sec=sys,rw,no_root_squash,no_all_squash) List all the share files and folders with options on the server # exportfs -u /share 192.168.87.158 This will un-export the shared folders or files which are in /etc/exports # exports -r This will refresh the servers list and check for the changes if any. 
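As a small, hypothetical extension of the configuration above (not part of the original article): if you later need to expose the same directory to an entire subnet rather than a single client, /etc/exports accepts a CIDR address, for example:

# Hypothetical /etc/exports entry: read-only share for the whole 192.168.87.0/24 subnet
/share 192.168.87.0/24(ro,sync,no_subtree_check)

After editing /etc/exports, re-export and verify the result with exportfs -ra followed by exportfs -v.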
After this configuration and setup, you should be able to use NFS to share files between *nix machines without any problem. We can also restrict each shared folder to only the clients we want to share it with, which improves security.
[ { "code": null, "e": 1391, "s": 1062, "text": "In this article we will learn and configure NFS (Network File System) which is basically used to share the files and folders between Linux systems. This was developed by Sun Microsystems in 1980 which allows us to mount the file system in the network and remote users can interact and the share just like local file and folders." }, { "code": null, "e": 1448, "s": 1391, "text": "NFS can be configured as a centralized storage solution." }, { "code": null, "e": 1497, "s": 1448, "text": "No need of running the same OS on both machines." }, { "code": null, "e": 1528, "s": 1497, "text": "Can be secured with Firewalls." }, { "code": null, "e": 1581, "s": 1528, "text": "It can be shared along with all the flavors of *nix." }, { "code": null, "e": 1641, "s": 1581, "text": "The NFS share folder can be mounted as a local file system." }, { "code": null, "e": 1781, "s": 1641, "text": "NFS mount needed at least two machines. The machine hosting the shared folders is called as server and which connects is called as clients." }, { "code": null, "e": 1804, "s": 1781, "text": "Server: 192.168.87.156" }, { "code": null, "e": 1827, "s": 1804, "text": "Client: 192.168.87.158" }, { "code": null, "e": 1869, "s": 1827, "text": "We needed to install the packages for NFS" }, { "code": null, "e": 3730, "s": 1869, "text": "# yum install nfs-utils nfs-utils-lib\nOutput:\nLoaded plugins: fastestmirror, security\nSetting up Install Process\nLoading mirror speeds from cached hostfile\nepel/metalink | 4.0 kB 00:00\n* base: mirror.digistar.vn\n* epel: mirrors.ustc.edu.cn\n* extras: mirror.digistar.vn\n* updates: mirror.digistar.vn\nResolving Dependencies\n--> Running transaction check\n---> Package nfs-utils.x86_64 1:1.2.3-64.el6 will be installed\n---> Package nfs-utils-lib.x86_64 0:1.1.5-11.el6 will be installed\n--> Finished Dependency Resolution\nDependencies Resolved\n================================================================================================\nPackage Arch Version Repository Size\n================================================================================================\nInstalling:\nnfs-utils x86_64 1:1.2.3-64.el6 base 331 k\nnfs-utils-lib x86_64 1.1.5-11.el6 base 68 k\nTransaction Summary\n================================================================================================\nInstall 2 Package(s)\nTotal download size: 399 k\nInstalled size: 1.1 M\nIs this ok [y/N]: y\nDownloading Packages:\n(1/2): nfs-utils-1.2.3-64.el6.x86_64.rpm | 331 kB 00:00\n(2/2): nfs-utils-lib-1.1.5-11.el6.x86_64.rpm | 68 kB 00:00\n------------------------------------------------------------------------------------------------\nTotal 60 kB/s | 399 kB 00:06\nRunning rpm_check_debug\nRunning Transaction Test\nTransaction Test Succeeded\nRunning Transaction\nInstalling : nfs-utils-lib-1.1.5-11.el6.x86_64 1/2\nInstalling : 1:nfs-utils-1.2.3-64.el6.x86_64 2/2\nVerifying : 1:nfs-utils-1.2.3-64.el6.x86_64 1/2\nVerifying : nfs-utils-lib-1.1.5-11.el6.x86_64 2/2\nInstalled:\nnfs-utils.x86_64 1:1.2.3-64.el6 nfs-utils-lib.x86_64 0:1.1.5-11.el6\nComplete!" }, { "code": null, "e": 3826, "s": 3730, "text": "After this run the below commands to start the NFS servers and make sure it start at boot time." 
}, { "code": null, "e": 4042, "s": 3826, "text": "# chkconfig nfs on\n# service rpcbind start\n# service nfs start\n\nOutput:\nStarting NFS services: [ OK ]\nStarting NFS quotas: [ OK ]\nStarting NFS mountd: [ OK ]\nStarting NFS daemon: [ OK ]\nStarting RPC idmapd: [ OK ]\n\n" }, { "code": null, "e": 4158, "s": 4042, "text": "We need to decide a directory which we want to share with the client. The directory should be added to /etc/exports" }, { "code": null, "e": 4176, "s": 4158, "text": "# vi /etc/exports" }, { "code": null, "e": 4209, "s": 4176, "text": "All the below lines to the file." }, { "code": null, "e": 4272, "s": 4209, "text": "/share 192.168.87.158(rw,sync,no_root_squash,no_subtree_check)" }, { "code": null, "e": 4329, "s": 4272, "text": "/share – is the share folder which server wants to share" }, { "code": null, "e": 4400, "s": 4329, "text": "192.168.87.158 – is the IP address of the client to whom want to share" }, { "code": null, "e": 4483, "s": 4400, "text": "rw – This will all the clients to read and write the files to the share directory." }, { "code": null, "e": 4562, "s": 4483, "text": "sync – which will confirm the shared directory once the changes are committed." }, { "code": null, "e": 4767, "s": 4562, "text": "no_subtree_check – Will prevents the scanning the shared directory, as nfs performs the scans of every share directory, Disabling the subtree check will increase the reliability, but reduces the security." }, { "code": null, "e": 4852, "s": 4767, "text": "no_root_squash – This will all the root user to connect to the designated directory." }, { "code": null, "e": 4945, "s": 4852, "text": "Once, we enter the details of the share in config file, run the below command to export them" }, { "code": null, "e": 4959, "s": 4945, "text": "# exportfs -a" }, { "code": null, "e": 5007, "s": 4959, "text": "Install the required packages to connect to NFS" }, { "code": null, "e": 5048, "s": 5007, "text": "# yum install nfs-utils nfs-utils-lib -y" }, { "code": null, "e": 5149, "s": 5048, "text": "Once the packages are installed on the client, create the directory to mount point the shared folder" }, { "code": null, "e": 5171, "s": 5149, "text": "# mkdir -p /mnt/share" }, { "code": null, "e": 5213, "s": 5171, "text": "# mount 192.168.87.156:/share /mnt/share/" }, { "code": null, "e": 5324, "s": 5213, "text": "To confirm if the share is mounted or not run the command ‘df -h’, this will show the list of mounted folders." }, { "code": null, "e": 5599, "s": 5324, "text": "# df -h\nOutput:\nFilesystem Size Used Avail Use% Mounted on\n/dev/mapper/VolGroup-lv_root\n50G 5.2G 42G 12% /\ntmpfs 427M 80K 427M 1% /dev/shm\n/dev/sda1 477M 42M 410M 10% /boot\n/dev/mapper/VolGroup-lv_home\n95G 60M 90G 1% /home\n192.168.87.156:/share\n18G 2.0G 15G 13% /mnt/share\n\n" }, { "code": null, "e": 5648, "s": 5599, "text": "To see the list of all the mounted file systems." 
}, { "code": null, "e": 6157, "s": 5648, "text": "# mount\nOutput:\n/dev/mapper/VolGroup-lv_root on / type ext4 (rw)\nproc on /proc type proc (rw)\nsysfs on /sys type sysfs (rw)\ndevpts on /dev/pts type devpts (rw,gid=5,mode=620)\ntmpfs on /dev/shm type tmpfs (rw)\n/dev/sda1 on /boot type ext4 (rw)\n/dev/mapper/VolGroup-lv_home on /home type ext4 (rw)\nnone on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)\nsunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)\n192.168.87.156:/share on /mnt/share type nfs (rw,vers=4,addr=192.168.87.156,clientaddr=192.168.87.158)" }, { "code": null, "e": 6213, "s": 6157, "text": "Create a file and folders in the server share directory" }, { "code": null, "e": 6240, "s": 6213, "text": "# touch test1\n# mkdir test" }, { "code": null, "e": 6307, "s": 6240, "text": "Then goto the client side machine and check the /mnt/share folders" }, { "code": null, "e": 6427, "s": 6307, "text": "# ls /mnt/share/ -lh\ntotal 4.0K\ndrwxr-xr-x 2 root root 4.0K Apr 20 2016 test\n-rw-r--r-- 1 root root 0 Apr 20 2016 test1" }, { "code": null, "e": 6552, "s": 6427, "text": "To automatically mount the share folder permanently while boot in the client machine, add the entries in the /etc/fstab file" }, { "code": null, "e": 7238, "s": 6552, "text": "# vi /etc/fstab\n#\n# /etc/fstab\n# Created by anaconda on Sat Apr 2 00:11:04 2016\n#\n# Accessible filesystems, by reference, are maintained under '/dev/disk'\n# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info\n#\n/dev/mapper/VolGroup-lv_root / ext4 defaults 1 1\nUUID=1adb2ad5-d0c7-48a5-9b10-f846a3f9258c /boot ext4 defaults 1 2\n/dev/mapper/VolGroup-lv_home /home ext4 defaults 1 2\n/dev/mapper/VolGroup-lv_swap swap swap defaults 0 0\ntmpfs /dev/shm tmpfs defaults 0 0\ndevpts /dev/pts devpts gid=5,mode=620 0 0\nsysfs /sys sysfs defaults 0 0\nproc /proc proc defaults 0 0\n192.168.87.156:/share /mnt/share nfs auto,noatime,nolock,bg,nfsvers=3,intr,tcp,actimeo=1800 0 0\n\n" }, { "code": null, "e": 7280, "s": 7238, "text": "Some options and important command of NFS" }, { "code": null, "e": 7356, "s": 7280, "text": "# showmount -e\nExport list for localhost.localdomain:\n/share 192.168.87.158" }, { "code": null, "e": 7450, "s": 7356, "text": "This will show the available share on the local machine, so needed to run on the server side." }, { "code": null, "e": 7534, "s": 7450, "text": "# showmount -e 192.168.87.156\nExport list for 192.168.87.156:\n/share 192.168.87.158" }, { "code": null, "e": 7617, "s": 7534, "text": "This will show the remote server shared folders needed to run on the client side –" }, { "code": null, "e": 7736, "s": 7617, "text": "# exportfs -v\n/share 192.168.87.158(rw,wdelay,no_root_squash,no_subtree_check,sec=sys,rw,no_root_squash,no_all_squash)" }, { "code": null, "e": 7800, "s": 7736, "text": "List all the share files and folders with options on the server" }, { "code": null, "e": 7836, "s": 7800, "text": "# exportfs -u\n/share 192.168.87.158" }, { "code": null, "e": 7910, "s": 7836, "text": "This will un-export the shared folders or files which are in /etc/exports" }, { "code": null, "e": 7923, "s": 7910, "text": "# exports -r" }, { "code": null, "e": 7992, "s": 7923, "text": "This will refresh the servers list and check for the changes if any." 
}, { "code": null, "e": 8249, "s": 7992, "text": "After this configuration and setup, you should be able to use NFS to share the files between *inx machines without any problem, then we should be able share the folders to only the client to whom we want to share the folder, this will improve the security." } ]
How to Adjust Title Position in Matplotlib? - GeeksforGeeks
28 Nov, 2021 In this article, you learn how to modify the Title position in matplotlib in Python. The title() method in matplotlib module is used to specify title of the visualization depicted and displays the title using various attributes. Syntax: matplotlib.pyplot.title(label, fontdict=None, loc=’center’, pad=None, **kwargs) Example 1: In this example, we will look at how to give a title, Matplotlib provides a function title() which is used to give a title for the plots. Python3 #import matplotlibimport matplotlib.pyplot as plt # Points to markplt.plot([1, 2, 3, 4, 5], [1, 4, 6, 14, 25]) # X labelplt.xlabel('X-axis') # Y labelplt.ylabel('Y-axis') # Titleplt.title('Title') Output: By default, TitleTitle is placed in the center; it is pretty simple to change them. Example 2: In this example, we have placedTitleTitle to the right of the plot using matplotlib.pyplot.title() function by initializing the argument as right. Python3 #import matplotlibimport matplotlib.pyplot as plt # Points to markplt.plot([1, 2, 3, 4, 5], [1, 4, 6, 14, 25]) # X labelplt.xlabel('X-axis') # Y labelplt.ylabel('Y-axis') # Titleplt.title('Title', loc='right') Output: right Example 3: In this example, we have placedTitleTitle to the left of the plot using matplotlib.pyplot.title() function by initializing the argument as left. Python3 #import matplotlibimport matplotlib.pyplot as plt # Points to markplt.plot([1, 2, 3, 4, 5], [1, 4, 6, 14, 25]) # X labelplt.xlabel('X-axis') # Y labelplt.ylabel('Y-axis') # Titleplt.title('Title', loc='left') Output: In this method, we will place title inside the plot. Instead of giving the location in the “loc” parameter, we will give the exact location where it should be placed by using X and Y coordinates. Syntax: matplotlib.pyplot.title('Title', x=value, y=value) Example: In this example, we will be assigning the value of the x and y at the position where the title is to be placed in the python programming language. Python3 #import matplotlibimport matplotlib.pyplot as plt # Points to markplt.plot([1, 2, 3, 4, 5], [1, 4, 6, 14, 25]) # X labelplt.xlabel('X-axis') # Y labelplt.ylabel('Y-axis') # Titleplt.title('Title', x=0.4, y=0.8) Output: In this method, we will be using the pad argument of the title() function to change the title location in the given plot in the python programming language. Syntax: matplotlib.pyplot.title('Title', pad=value) Example: In this example, We will elevateTitleTitle by using the “pad” parameter. The offset of the Title from the top of the axes, in points. The default value is none. Python3 #import matplotlibimport matplotlib.pyplot as plt # Points to markplt.plot([1, 2, 3, 4, 5], [1, 4, 6, 14, 25]) # X labelplt.xlabel('X-axis') # Y labelplt.ylabel('Y-axis') # Title with padplt.title('Title', pad=50) Output: Picked Python-matplotlib Python Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. Comments Old Comments Python OOPs Concepts How to Install PIP on Windows ? Bar Plot in Matplotlib Defaultdict in Python Python Classes and Objects Deque in Python Check if element exists in list in Python How to drop one or multiple columns in Pandas Dataframe Python - Ways to remove duplicates from list Class method vs Static method in Python
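As an extra illustration that is not part of the original article, the options above can also be combined in a single call; the sketch below assumes a reasonably recent Matplotlib and uses loc together with pad and a font-size keyword:

import matplotlib.pyplot as plt

plt.plot([1, 2, 3, 4, 5], [1, 4, 6, 14, 25])
plt.xlabel('X-axis')
plt.ylabel('Y-axis')

# Left-aligned title, offset 20 points above the axes, slightly larger text
plt.title('Title', loc='left', pad=20, fontsize=14)

plt.show()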
[ { "code": null, "e": 23901, "s": 23873, "text": "\n28 Nov, 2021" }, { "code": null, "e": 23986, "s": 23901, "text": "In this article, you learn how to modify the Title position in matplotlib in Python." }, { "code": null, "e": 24130, "s": 23986, "text": "The title() method in matplotlib module is used to specify title of the visualization depicted and displays the title using various attributes." }, { "code": null, "e": 24138, "s": 24130, "text": "Syntax:" }, { "code": null, "e": 24218, "s": 24138, "text": "matplotlib.pyplot.title(label, fontdict=None, loc=’center’, pad=None, **kwargs)" }, { "code": null, "e": 24229, "s": 24218, "text": "Example 1:" }, { "code": null, "e": 24367, "s": 24229, "text": "In this example, we will look at how to give a title, Matplotlib provides a function title() which is used to give a title for the plots." }, { "code": null, "e": 24375, "s": 24367, "text": "Python3" }, { "code": "#import matplotlibimport matplotlib.pyplot as plt # Points to markplt.plot([1, 2, 3, 4, 5], [1, 4, 6, 14, 25]) # X labelplt.xlabel('X-axis') # Y labelplt.ylabel('Y-axis') # Titleplt.title('Title')", "e": 24576, "s": 24375, "text": null }, { "code": null, "e": 24584, "s": 24576, "text": "Output:" }, { "code": null, "e": 24669, "s": 24584, "text": "By default, TitleTitle is placed in the center; it is pretty simple to change them. " }, { "code": null, "e": 24680, "s": 24669, "text": "Example 2:" }, { "code": null, "e": 24828, "s": 24680, "text": "In this example, we have placedTitleTitle to the right of the plot using matplotlib.pyplot.title() function by initializing the argument as right. " }, { "code": null, "e": 24836, "s": 24828, "text": "Python3" }, { "code": "#import matplotlibimport matplotlib.pyplot as plt # Points to markplt.plot([1, 2, 3, 4, 5], [1, 4, 6, 14, 25]) # X labelplt.xlabel('X-axis') # Y labelplt.ylabel('Y-axis') # Titleplt.title('Title', loc='right')", "e": 25050, "s": 24836, "text": null }, { "code": null, "e": 25058, "s": 25050, "text": "Output:" }, { "code": null, "e": 25064, "s": 25058, "text": "right" }, { "code": null, "e": 25075, "s": 25064, "text": "Example 3:" }, { "code": null, "e": 25221, "s": 25075, "text": "In this example, we have placedTitleTitle to the left of the plot using matplotlib.pyplot.title() function by initializing the argument as left. " }, { "code": null, "e": 25229, "s": 25221, "text": "Python3" }, { "code": "#import matplotlibimport matplotlib.pyplot as plt # Points to markplt.plot([1, 2, 3, 4, 5], [1, 4, 6, 14, 25]) # X labelplt.xlabel('X-axis') # Y labelplt.ylabel('Y-axis') # Titleplt.title('Title', loc='left')", "e": 25442, "s": 25229, "text": null }, { "code": null, "e": 25450, "s": 25442, "text": "Output:" }, { "code": null, "e": 25647, "s": 25450, "text": "In this method, we will place title inside the plot. Instead of giving the location in the “loc” parameter, we will give the exact location where it should be placed by using X and Y coordinates. " }, { "code": null, "e": 25655, "s": 25647, "text": "Syntax:" }, { "code": null, "e": 25707, "s": 25655, "text": " matplotlib.pyplot.title('Title', x=value, y=value)" }, { "code": null, "e": 25716, "s": 25707, "text": "Example:" }, { "code": null, "e": 25863, "s": 25716, "text": "In this example, we will be assigning the value of the x and y at the position where the title is to be placed in the python programming language." 
}, { "code": null, "e": 25871, "s": 25863, "text": "Python3" }, { "code": "#import matplotlibimport matplotlib.pyplot as plt # Points to markplt.plot([1, 2, 3, 4, 5], [1, 4, 6, 14, 25]) # X labelplt.xlabel('X-axis') # Y labelplt.ylabel('Y-axis') # Titleplt.title('Title', x=0.4, y=0.8)", "e": 26086, "s": 25871, "text": null }, { "code": null, "e": 26094, "s": 26086, "text": "Output:" }, { "code": null, "e": 26251, "s": 26094, "text": "In this method, we will be using the pad argument of the title() function to change the title location in the given plot in the python programming language." }, { "code": null, "e": 26259, "s": 26251, "text": "Syntax:" }, { "code": null, "e": 26303, "s": 26259, "text": "matplotlib.pyplot.title('Title', pad=value)" }, { "code": null, "e": 26312, "s": 26303, "text": "Example:" }, { "code": null, "e": 26473, "s": 26312, "text": "In this example, We will elevateTitleTitle by using the “pad” parameter. The offset of the Title from the top of the axes, in points. The default value is none." }, { "code": null, "e": 26481, "s": 26473, "text": "Python3" }, { "code": "#import matplotlibimport matplotlib.pyplot as plt # Points to markplt.plot([1, 2, 3, 4, 5], [1, 4, 6, 14, 25]) # X labelplt.xlabel('X-axis') # Y labelplt.ylabel('Y-axis') # Title with padplt.title('Title', pad=50)", "e": 26699, "s": 26481, "text": null }, { "code": null, "e": 26707, "s": 26699, "text": "Output:" }, { "code": null, "e": 26714, "s": 26707, "text": "Picked" }, { "code": null, "e": 26732, "s": 26714, "text": "Python-matplotlib" }, { "code": null, "e": 26739, "s": 26732, "text": "Python" }, { "code": null, "e": 26837, "s": 26739, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 26846, "s": 26837, "text": "Comments" }, { "code": null, "e": 26859, "s": 26846, "text": "Old Comments" }, { "code": null, "e": 26880, "s": 26859, "text": "Python OOPs Concepts" }, { "code": null, "e": 26912, "s": 26880, "text": "How to Install PIP on Windows ?" }, { "code": null, "e": 26935, "s": 26912, "text": "Bar Plot in Matplotlib" }, { "code": null, "e": 26957, "s": 26935, "text": "Defaultdict in Python" }, { "code": null, "e": 26984, "s": 26957, "text": "Python Classes and Objects" }, { "code": null, "e": 27000, "s": 26984, "text": "Deque in Python" }, { "code": null, "e": 27042, "s": 27000, "text": "Check if element exists in list in Python" }, { "code": null, "e": 27098, "s": 27042, "text": "How to drop one or multiple columns in Pandas Dataframe" }, { "code": null, "e": 27143, "s": 27098, "text": "Python - Ways to remove duplicates from list" } ]
Query returning no data in SAP Business One using Table Relationship
This looks like an issue with Join in queries. Try replacing Inner join with Left join like this. I ran this query and it is working fine:

select T0.DocNum as 'Payment Number',T0.DocDate 'Payment Date',T0.CardCode,
T0.CardName 'Customer Name',T1.BankCode 'Bankcode',T3.BankName 'Bank Name', T2.Phone1 ,
T0.CreditSum,
T0.CashSum,
T0.TrsfrSum,
t0.CheckSum,
t1.CheckNum as 'Check Number',
t1.DueDate as 'check date',
t6.VoucherNum as 'Voucher Number',
t0.TrsfrRef as 'Transfer No',
t0.TrsfrDate AS 'Transfer Date',
ousr.USER_code as 'user code',
T5.DocNum, t11.U_P_BuildingName as 'Building Name',
CASE when T5.DocNum is null then 'On Account' else 'Paid For Invoice' END AS 'Payment Status',
CASE when T5.DocStatus = 'O' then 'Open' else 'Closed' END AS ' Invoice Status',
T4.SumApplied as 'Amount Paid on Invoice',T9.U_FloorNo,T5.U_UnitCode,T5.U_Type,
t0.DocTotal as 'Payment Total',t5.DocTotal as 'Invoice Total' , t8.City,
t0.Comments as 'Remarks'
from ORCT T0
left join rct1 T1 on T0.DocNum=T1.DocNum
left join ocrd T2 on T2.CardCode=T0.CardCode
left outer join ODSC T3 on T3.BankCode=T0.BankCode
left join RCT2 T4 on T0.DocNum = T4.DocNum
left join RCT3 T6 on T0.DocNum = T6.DocNum
left join OINV T5 on T4.DocEntry = T5.DocEntry and T5.ObjType = T4.InvType
left join oitm t11 on t5.u_unitcode = t11.ItemCode
LEFT JOIN OWHS T8 ON T11.U_P_BuildingNum = T8.WhsCode
LEFT JOIN [dbo].[@AUND] T9 ON T5.[U_UnitCode] = T9.[Code]
INNER JOIN OSLP T10 ON T5.[SlpCode] = T10.[SlpCode]
inner join ousr on ousr.USERID = t0.usersign
where
T4.InvType <> '14' and T0.[Canceled] = 'N' and t0.docnum=200001
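To make the difference concrete, here is a minimal, hypothetical illustration (the table and column names below are invented for demonstration and are not SAP Business One objects): an INNER JOIN drops every payment row that has no matching invoice row, which can make the whole result disappear, while a LEFT JOIN keeps those rows and fills the invoice columns with NULL.

-- Hypothetical tables: Payments(PayId, InvId) and Invoices(InvId, Total).
-- A payment on account has InvId = NULL, so it never matches an invoice.

-- INNER JOIN: only payments with a matching invoice are returned
SELECT p.PayId, i.Total
FROM Payments p
INNER JOIN Invoices i ON p.InvId = i.InvId;

-- LEFT JOIN: every payment is returned; Total is NULL where there is no match
SELECT p.PayId, i.Total
FROM Payments p
LEFT JOIN Invoices i ON p.InvId = i.InvId;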
[ { "code": null, "e": 1201, "s": 1062, "text": "This looks like an issue with Join in queries. Try replacing Inner join with Left join like this. I ran this query and it is working fine:" }, { "code": null, "e": 2650, "s": 1201, "text": "select T0.DocNum as 'Payment Number',T0.DocDate 'Payment Date',T0.CardCode,\nT0.CardName 'Customer Name',T1.BankCode 'Bankcode',T3.BankName 'Bank Name', T2.Phone1 ,\nT0.CreditSum,\nT0.CashSum,\nT0.TrsfrSum,\nt0.CheckSum,\nt1.CheckNum as 'Check Number',\nt1.DueDate as 'check date',\nt6.VoucherNum as 'Voucher Number',\nt0.TrsfrRef as 'Transfer No',\nt0.TrsfrDate AS 'Transfer Date',\nousr.USER_code as 'user code',\nT5.DocNum, t11.U_P_BuildingName as 'Building Name',\nCASE when T5.DocNum is null then 'On Account' else 'Paid For Invoice' END AS 'Payment Status',\nCASE when T5.DocStatus = 'O' then 'Open' else 'Closed' END AS ' Invoice Status',\nT4.SumApplied as 'Amount Paid on Invoice',T9.U_FloorNo,T5.U_UnitCode,T5.U_Type,\nt0.DocTotal as 'Payment Total',t5.DocTotal as'Invoice Total' , t8.City,\nt0.Comments as'Remarks'\nfrom ORCT T0\nleft join rct1 T1 on T0.DocNum=T1.DocNum\nleft join ocrd T2 on T2.CardCode=T0.CardCode\nleft outer join ODSC T3 on T3.BankCode=T0.BankCode\nleft join RCT2 T4 on T0.DocNum = T4.DocNum\nleft join RCT3 T6 on T0.DocNum = T6.DocNum\nleft join OINV T5 on T4.DocEntry = T5.DocEntry and T5.ObjType = T4.InvType\nleft join oitm t11 on t5.u_unitcode = t11.ItemCode\nLEFT JOIN OWHS T8 ON T11.U_P_BuildingNum = T8.WhsCode\nLEFT JOIN [dbo].[@AUND] T9 ON T5.[U_UnitCode] = T9.[Code]\nINNER JOIN OSLP T10 ON T5.[SlpCode] = T10.[SlpCode]\ninner join ousr on ousr.USERID = t0.usersign\nwhere\nT4.InvType <> '14' and T0.[Canceled] = 'N' and t0.docnum=200001" } ]
numpy.where() in Python - GeeksforGeeks
03 Dec, 2020

The numpy.where() function returns the indices of elements in an input array where the given condition is satisfied.

Syntax: numpy.where(condition[, x, y])

Parameters:
condition : When True, yield x, otherwise yield y.
x, y : Values from which to choose. x, y and condition need to be broadcastable to some shape.

Returns:
out : [ndarray or tuple of ndarrays] If both x and y are specified, the output array contains elements of x where condition is True, and elements from y elsewhere. If only condition is given, return the tuple condition.nonzero(), the indices where condition is True.

Code #1:

# Python program explaining
# where() function
import numpy as np

np.where([[True, False], [True, True]],
         [[1, 2], [3, 4]], [[5, 6], [7, 8]])

Output :

array([[1, 6],
       [3, 4]])

Code #2:

# Python program explaining
# where() function
import numpy as np

# a is an array of integers.
a = np.array([[1, 2, 3], [4, 5, 6]])

print(a)

print('Indices of elements <4')

b = np.where(a < 4)
print(b)

print("Elements which are <4")
print(a[b])

Output :

[[1 2 3]
 [4 5 6]]

Indices of elements <4
(array([0, 0, 0], dtype=int64), array([0, 1, 2], dtype=int64))

Elements which are <4
array([1, 2, 3])
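To complement the two examples above, here is one more small snippet (not from the original article) showing the three-argument form, which selects values from x or y elementwise instead of returning indices:

import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6]])

# Keep elements smaller than 4, replace everything else with 0
print(np.where(a < 4, a, 0))
# [[1 2 3]
#  [0 0 0]]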
[ { "code": null, "e": 24407, "s": 24379, "text": "\n03 Dec, 2020" }, { "code": null, "e": 24524, "s": 24407, "text": "The numpy.where() function returns the indices of elements in an input array where the given condition is satisfied." }, { "code": null, "e": 24718, "s": 24524, "text": "Syntax :numpy.where(condition[, x, y])Parameters:condition : When True, yield x, otherwise yield y.x, y : Values from which to choose. x, y and condition need to be broadcastable to some shape." }, { "code": null, "e": 24890, "s": 24718, "text": "Returns:out : [ndarray or tuple of ndarrays] If both x and y are specified, the output array contains elements of x where condition is True, and elements from y elsewhere." }, { "code": null, "e": 24993, "s": 24890, "text": "If only condition is given, return the tuple condition.nonzero(), the indices where condition is True." }, { "code": null, "e": 25002, "s": 24993, "text": "Code #1:" }, { "code": "# Python program explaining # where() function import numpy as np np.where([[True, False], [True, True]], [[1, 2], [3, 4]], [[5, 6], [7, 8]])", "e": 25155, "s": 25002, "text": null }, { "code": null, "e": 25164, "s": 25155, "text": "Output :" }, { "code": null, "e": 25195, "s": 25164, "text": "array([[1, 6],\n [3, 4]])" }, { "code": null, "e": 25206, "s": 25197, "text": "Code #2:" }, { "code": "# Python program explaining # where() function import numpy as np # a is an array of integers.a = np.array([[1, 2, 3], [4, 5, 6]]) print(a) print ('Indices of elements <4') b = np.where(a<4)print(b) print(\"Elements which are <4\")print(a[b])", "e": 25454, "s": 25206, "text": null }, { "code": null, "e": 25463, "s": 25454, "text": "Output :" }, { "code": null, "e": 25610, "s": 25463, "text": "[[1 2 3]\n [4 5 6]]\n\nIndices of elements <4\n(array([0, 0, 0], dtype=int64), array([0, 1, 2], dtype=int64))\n\nElements which are <4\narray([1, 2, 3])\n" }, { "code": null, "e": 25632, "s": 25610, "text": "Python numpy-Indexing" }, { "code": null, "e": 25645, "s": 25632, "text": "Python-numpy" }, { "code": null, "e": 25652, "s": 25645, "text": "Python" }, { "code": null, "e": 25750, "s": 25652, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 25759, "s": 25750, "text": "Comments" }, { "code": null, "e": 25772, "s": 25759, "text": "Old Comments" }, { "code": null, "e": 25790, "s": 25772, "text": "Python Dictionary" }, { "code": null, "e": 25812, "s": 25790, "text": "Enumerate() in Python" }, { "code": null, "e": 25847, "s": 25812, "text": "Read a file line by line in Python" }, { "code": null, "e": 25868, "s": 25847, "text": "Python OOPs Concepts" }, { "code": null, "e": 25910, "s": 25868, "text": "Different ways to create Pandas Dataframe" }, { "code": null, "e": 25935, "s": 25910, "text": "sum() function in Python" }, { "code": null, "e": 25967, "s": 25935, "text": "How to Install PIP on Windows ?" }, { "code": null, "e": 25983, "s": 25967, "text": "Stack in Python" }, { "code": null, "e": 26006, "s": 25983, "text": "Bar Plot in Matplotlib" } ]
My Google Foobar journey. Level 2.1 — Elevator Maintenance | by Pratick Roy | Towards Data Science
Level 2.1 — Elevator Maintenance

My Google FooBar Journey: Level 1 — Getting the Invitation.
My Google FooBar Journey: Level 2.1 — Elevator Maintenance. (This one)

You survived a week in Commander Lambda's organization, and you even managed to get yourself promoted. Hooray! Henchmen still don't have the kind of security access you'll need to take down Commander Lambda, though, so you'd better keep working. Chop chop!

So now I had Level 2 access, but defeating Commander Lambda was still a long ways off. The next obstacle in my quest to free my bunny brethren, my next immediate key to progression, was a rather fun challenge.

At a cursory level, this is quite a simple one really; the devil is hidden in the edge cases. I initially started with a really unclean solution to force myself through, and for the most part it worked. However, I simply couldn't crack open some of the test cases, no matter how many more hacks I introduced.

So I took a deep breath and got a good night's sleep. Next morning, I got up, drank some coffee, and refactored the code. It passed all test cases in one shot.

Before I get to the code, here is something I want to stress: always try to write clean code. I know it's not really necessary in a competitive coding environment; in fact, sometimes it becomes detrimental to the cause. In future levels, I have taken many shortcuts that I would never write in a production environment, and would never let anyone whose code I am reviewing write. They made my life easier, and the code ran faster, and so I went with it, and you should too in your fooBar journey, but here is the important point: you should feel guilty about doing that.

If you can write working but unclean code without fear of the great man above[1], then you may be a great coder, but you will be a terrible developer.

If you disagree with this, I'd strongly suggest you read a post I wrote a while back, and even after reading that, if you disagree, feel free to drop me a note/comment and we can discuss this further.

towardsdatascience.com

You've been assigned the onerous task of elevator maintenance — ugh! It wouldn't be so bad, except that all the elevator documentation has been lying in a disorganized pile at the bottom of a filing cabinet for years, and you don't even know what elevator version numbers you'll be working on.

Elevator versions are represented by a series of numbers, divided up into major, minor and revision integers. New versions of an elevator increase the major number, e.g. 1, 2, 3, and so on. When new features are added to an elevator without being a complete new version, a second number named "minor" can be used to represent those new additions, e.g. 1.0, 1.1, 1.2, etc.
If the version contains a revision number, then it will also have a minor number. For example, given the list l as [“1.1.2”, “1.0”, “1.3.3”, “1.0.12”, “1.0.2”], the function solution(l) would return the list [“1.0”, “1.0.2”, “1.0.12”, “1.1.2”, “1.3.3”]. If two or more versions are equivalent but one version contains more numbers than the others, then these versions must be sorted ascending based onhow many numbers they have, e.g [“1”, “1.0”, “1.0.0”]. The number of elements in the list l will be at least 1 and will not exceed 100. — Test cases — Input:Solution.solution({“1.11”, “2.0.0”, “1.2”, “2”, “0.1”, “1.2.1”, “1.1.1”, “2.0”})Output: 0.1,1.1.1,1.2,1.2.1,1.11,2,2.0,2.0.0 Input:Solution.solution({“1.1.2”, “1.0”, “1.3.3”, “1.0.12”, “1.0.2”})Output: 1.0,1.0.2,1.0.12,1.1.2,1.3.3 The problem statement is fairly straight forward. Given a list of versions, sort them in ascending order with order of precedence : major > minor > revision. This is simple enough, let’s come to the edge cases. Minor and revision numbers are optional If two or more versions are equivalent but one version contains more numbers than the others, then these versions must be sorted in ascending order based on how many numbers they have. Before addressing these, lets first see the solution code Before running down the code, let’s first discuss a common coding construct. Simply put a comparator always takes the form: Now if you observe, here we are not using any if statements to make the comparison, but simply subtracting the two hash-codes. This subtraction is actually a simple and common convention to achieve this. For example consider two ints, a and b, then a — b < 0, if a is smallera — b == 0, if a and b is samea — b > 0, if a is larger which exactly echos our desired output. This becomes super useful with sorting, as the Java Standard Library provides sort functions that can sort a collection of any object based on a comparator. We will be making use of this concept not once but twice in our code. So with comparators out of the way. lets rundown the code and cover the above mentioned edge cases First we sort the array based on a custom comparator. Inside our comparator, we first split the versions to its major, minor and revision types. Again, we write another comparator ( this time without the pomp of inheritance, but basically it does the same thing ), for comparing the specific version types. We compare the version types in the required precedence of major > minor > revision. If at any stage, both are not equal, we simply return the output of the inner comparator. If they are, we move on to the next type. If at the end, there is no difference (Edge case 2), we sort based on the version subtype counts. In our inner comparator, we first check if the specific version subpart exists in either of the versions (Edge case 1). If it does not we default it to 0, else we pick it up and then compare them as we would do in any integer comparator. In the above section, I gave a rundown of the code to help beginners understand the core concepts, but for anyone who has spent any decent part of their lives dabbling in front of an IDE, did I need to? What the code is doing is doing is super clear without the need for any documentation. Sure, it could be even better, we could add enums to make the version types more clear, and use properly named private methods to make the edge case handlings more intuitive. 
But what we should appreciate here, is that following clean code principles, handling of edge cases are no longer ugly outliers that break the flow of thought, but are natural extensions to it. Now coming to the fact that I did not make it as clean as I could have, makes me feel super guilty. So let’s do some penance. I won’t give a rundown here as frankly it’s not needed. Moreover I would encourage you to only look at the solution method, and see if you can understand the flow without needing to see the implementation of Version and VersionType, which is why I have kept them at the bottom. If you can understand what the code is doing by only having to read about 1/4th of codebase, then my job here is done! In my next post, I’ll go into the second challenge for Level 2: Gearing Up for Destruction. When I do, I will link it here. To be notified of the same, consider following me on medium and subscribing to get an email of the same sent straight to your inbox! There are 2 kinds of writers, those that write more and those that write less. I am the latter. I obsess over creating value and shun noise. If you want to read such content, consider subscribing. [1] Robert C. Martin, Wikipedia [2] Comparator, JavaDoc
Hexagonal Architecture in Java - GeeksforGeeks
17 Sep, 2021

As per the software development design principles, software which requires the minimum effort of maintenance is considered good design. That is, maintainability should be a key point which an architect must consider. In this article, one such architecture, known as Hexagonal Architecture, which makes the software easy to maintain, manage, test, and scale, is discussed. Hexagonal architecture is a term coined by Alistair Cockburn in 2006. The other name of Hexagonal architecture is Ports and Adapters architecture. This architecture divides an application into two parts, namely the inside part and the outside part. The core logic of an application is considered the inside part, while the database, UI, and messaging queues could be the outside part. In doing so, the core application logic is isolated completely from the outside world, and the communication between these two parts happens through Ports and Adapters. Now, let's understand what each of these means.

The Ports: A Port acts as a gateway through which communication takes place, as an inbound or an outbound port. An inbound port is something like a service interface that exposes the core logic to the outside world. An outbound port is something like a repository interface that facilitates communication from the application to the persistence system.

The Adapters: An adapter acts as an implementation of a port that handles user input and translates it into language-specific calls. It basically encapsulates the logic to interact with outer systems such as message queues, databases, etc. It also transforms the communication between external objects and the core. The adapters are again of two types:

Primary Adapters: They drive the application using the inbound port of an application and are also called Driving adapters. Examples of primary adapters could be WebViews or REST controllers.

Secondary Adapters: These are implementations of an outbound port that are driven by the application and are also called Driven adapters. Connections with messaging queues, databases, and external API calls are some of the examples of secondary adapters.

Therefore, the hexagonal architecture talks about exposing multiple endpoints in an application for communication purposes. If we have the right adapter for our port, our request will get handled. This architecture is a layered architecture and mainly consists of three layers: Framework, Application, and Domain.
Domain: It is the core business logic layer, and the implementation details of the outer layers are hidden from it.

Application: It acts as a mediator between the Domain layer and the Framework layer.

Framework: This layer has all the implementation details of how the domain layer will interact with the external world.

Illustrative Example: Let's understand this architecture with a real-time example. We will be designing a Cake Service application using Spring Boot. You can create a normal Spring or Maven-based project as well, depending on your convenience. The following are the different parts in the example:

Domain: Core of the application. Create a Cake class with its attributes; to keep it simple, we will just add a name here.

Java

import java.io.Serializable;

// Consider this as a value object
// around which the domain logic revolves.
public class Cake implements Serializable {

    private static final long serialVersionUID = 100000000L;
    private String name;

    // Getters and setters for the name
    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    @Override
    public String toString() {
        return "Cake [name=" + name + "]";
    }
}

Inbound port: Define an interface through which our core application will enable its communication. It exposes the core application to the outside world.

Java

import java.util.List;

// Interface through which the core
// application communicates. For
// all the classes implementing the
// interface, we need to implement
// the methods in this interface
public interface CakeService {
    public void createCake(Cake cake);
    public Cake getCake(String cakeName);
    public List<Cake> listCake();
}

Outbound port: Create one more interface to create or access the outside world, i.e., the Cake.

Java

import java.util.List;

// Interface to access the cake
public interface CakeRepository {
    public void createCake(Cake cake);
    public Cake getCake(String cakeName);
    public List<Cake> getAllCake();
}

Primary Adapters: A controller could be our primary adapter, which will provide endpoints for creating and fetching the resources.
Java

import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// This is the REST endpoint
@RestController
@RequestMapping("/cake")
public class CakeRestController implements CakeRestUI {

    @Autowired
    private CakeService cakeService;

    @Override
    public void createCake(Cake cake) {
        cakeService.createCake(cake);
    }

    @Override
    public Cake getCake(String cakeName) {
        return cakeService.getCake(cakeName);
    }

    @Override
    public List<Cake> listCake() {
        return cakeService.listCake();
    }
}

We can create one more interface for CakeRestUI as follows:

Java

import java.util.List;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;

public interface CakeRestUI {

    @PostMapping
    void createCake(@RequestBody Cake cake);

    @GetMapping("/{name}")
    public Cake getCake(@PathVariable String name);

    @GetMapping
    public List<Cake> listCake();
}

Secondary Adapters: This will be the implementation of an outbound port. Since CakeRepository is our outbound port, let's implement it.

Java

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

import org.springframework.stereotype.Repository;

// Implementing the interface and
// all the methods which have been
// defined in the interface
@Repository
public class CakeRepositoryImpl implements CakeRepository {

    private Map<String, Cake> cakeStore = new HashMap<String, Cake>();

    @Override
    public void createCake(Cake cake) {
        cakeStore.put(cake.getName(), cake);
    }

    @Override
    public Cake getCake(String cakeName) {
        return cakeStore.get(cakeName);
    }

    @Override
    public List<Cake> getAllCake() {
        return cakeStore.values().stream().collect(Collectors.toList());
    }
}

Communication between the core and the data source: Finally, let's create an implementation class that will be responsible for communication between the core application and the data source using an outbound port.

Java

import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

// This is the implementation class
// for the CakeService
@Service
public class CakeServiceImpl implements CakeService {

    // Overriding the methods defined
    // in the interface
    @Autowired
    private CakeRepository cakeRepository;

    @Override
    public void createCake(Cake cake) {
        cakeRepository.createCake(cake);
    }

    @Override
    public Cake getCake(String cakeName) {
        return cakeRepository.getCake(cakeName);
    }

    @Override
    public List<Cake> listCake() {
        return cakeRepository.getAllCake();
    }
}

We have finally implemented all the required methods in the given example. The following is the output on running the above code:

Now, let's create some Cake for the above example using the REST API. The following API is used to push the cakes into the repository. Since we are creating and adding the data, we use the POST request.
For example:

API [POST]: http://localhost:8080/cake
Input Body:
{
    "name" : "Black Forest"
}

API [POST]: http://localhost:8080/cake
Input Body:
{
    "name" : "Red Velvet"
}

API [GET]: http://localhost:8080/cake
Output:
[
    {
        "name": "Black Forest"
    },
    {
        "name": "Red Velvet"
    }
]

Advantages of the Hexagonal architecture:

Easy to maintain: Since the core application logic (classes and objects) is isolated from the outside world and is loosely coupled, it is easier to maintain. It is easier to add new features in either of the layers without touching the other one.

Easy to adapt to new changes: Since all the layers are independent, if we want to add or replace a new database, we just need to replace or add the database adapters, without changing the domain logic of the application.

Easy to test: Testing becomes easy. We can write the test cases for each layer by just mocking the ports using the mock adapters.
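As a rough illustration (not part of the original article), the endpoints above can also be exercised from Java 11+ with java.net.http.HttpClient; the URL and request bodies below simply mirror the examples shown earlier, and the class name is hypothetical:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical client class, for illustration only.
public class CakeClientSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // POST a new cake (same body as the first example above)
        HttpRequest post = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/cake"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"name\":\"Black Forest\"}"))
                .build();
        System.out.println(client.send(post, HttpResponse.BodyHandlers.ofString()).statusCode());

        // GET all cakes
        HttpRequest get = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/cake"))
                .GET()
                .build();
        System.out.println(client.send(get, HttpResponse.BodyHandlers.ofString()).body());
    }
}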
Connecting to Azure SQL Server using Python | by James Ho | Towards Data Science
This article provides a step-by-step tutorial of connecting to Azure SQL Server using Python on Linux OS.

After creating an Azure SQL Database/Server, you can find the server name on the overview page. Azure SQL Server uses ODBC (Open Database Connectivity) as the driver.

A database driver is a computer program that implements a protocol (ODBC or JDBC) for a database connection. Let me explain it in plain language. Do you still remember the time when we purchased hardware or software, and it would come with a disk of its driver, and you had to install that driver before using the application? Well, you can think of the database as the application, and the database driver is essentially the driver that enables us to access the database, or DBMS (Database Management System). Different database systems (PostgreSQL, MySQL, SQL Server, Oracle, etc.) have different drivers, mostly either ODBC or JDBC.

Pyodbc is an open-source Python package that makes accessing ODBC databases easy. Some use pymssql, but pyodbc is the most popular one.

Let's get our hands dirty! Firstly, import the required packages. We use sqlalchemy, which is a popular Python SQL toolkit, to create the connection, and use urllib to create the connection string.

import os
import pyodbc
import sqlalchemy as sa
from sqlalchemy import create_engine
import urllib                            # (Python 2.7)
from urllib.parse import quote_plus      # (Python 3)

Note that quote_plus, which we will be using to generate the connection string, is imported differently in Python 2.7 and Python 3. The first step of setting up the connection is to declare the environment variables. We use os.getenv to specify the variables and credentials so they are in safe hands.

server = os.getenv('SERVER_NAME')
database = os.getenv('DB_NAME')
username = os.getenv('USERNAME')
password = os.getenv('PASSWORD')
port = os.getenv('PORT')

The default port of Azure SQL Server is 1433. We need one more variable, driver. In order to find the right driver, run the following command lines in your terminal (make sure you have pyodbc installed):

$ odbcinst -j
unixODBC 2.3.4
DRIVERS............: /etc/odbcinst.ini
SYSTEM DATA SOURCES: /etc/odbc.ini
FILE DATA SOURCES..: /etc/ODBCDataSources
USER DATA SOURCES..: /home/jamesho/.odbc.ini
SQLULEN Size.......: 8
SQLLEN Size........: 8

$ cat /etc/odbcinst.ini
[ODBC Driver 13 for SQL Server]
Description=Microsoft ODBC Driver 13 for SQL Server
Driver=/opt/microsoft/msodbcsql/lib64/libmsodbcsql-13.1.so.9.0
UsageCount=1

The information of the driver is stored in the odbcinst.ini file. Set the driver to the name of your driver.

driver = '{ODBC Driver 13 for SQL Server}'

Next, we're going to set up the connection string. There are 2 ways to define the connection string: one is using quote_plus under the urllib package to parse the string, the other is using sqlalchemy's URL format.

# Using urllib
odbc_str = 'DRIVER='+driver+';SERVER='+server+';PORT='+port+';DATABASE='+database+';UID='+username+';PWD='+password
connect_str = 'mssql+pyodbc:///?odbc_connect=' + quote_plus(odbc_str)

# Using sa URL format
sa_url = f"mssql+pyodbc://{username}:{password}@{server}:{port}/{database}?driver={driver}"

A full list of connection strings of different databases can be found here. Finally, create an engine and pass the string to the engine. Use the execute function of the engine to run your query, which should be passed as a string too.
engine = create_engine(connect_str)   # or: create_engine(sa_url)
print(engine.execute(''' YOUR SQL QUERY ''').fetchall())

Full script:

import os
import pyodbc
import sqlalchemy as sa
from sqlalchemy import create_engine
import urllib                            # (Python 2.7)
from urllib.parse import quote_plus      # (Python 3)

server = os.getenv('SERVER_NAME')
database = os.getenv('DB_NAME')
username = os.getenv('USERNAME')
password = os.getenv('PASSWORD')
port = os.getenv('PORT', default='1433')   # default Azure SQL port; kept as a string for concatenation below
driver = '{ODBC Driver 13 for SQL Server}'

# connect using parsed URL
odbc_str = 'DRIVER='+driver+';SERVER='+server+';PORT='+port+';DATABASE='+database+';UID='+username+';PWD='+password
connect_str = 'mssql+pyodbc:///?odbc_connect=' + quote_plus(odbc_str)

# connect with sa url format
sa_url = f"mssql+pyodbc://{username}:{password}@{server}:{port}/{database}?driver={driver}"

engine = create_engine(connect_str)   # or: create_engine(sa_url)
print(engine.execute(''' YOUR SQL QUERY ''').fetchall())

Go to your terminal, export the environment variables, and run this Python script.

$ export SERVER_NAME=
$ export DB_NAME=
$ export USERNAME=
$ export PASSWORD=
$ export PORT=
$ python {your script name}.py

You've successfully connected to your Azure SQL Database and you can interact with it using sqlalchemy.
How are Spectrum and Bandwidth defined in Wireless Communications?
Spectrum refers to the entire range of frequencies, right from the starting frequency (the lowest frequency) to the ending frequency (the highest frequency). Spectrum basically refers to the entire group of frequencies.

The electromagnetic spectrum is one good example. The electromagnetic (EM) spectrum covers frequencies right from zero (DC) to gamma band frequencies. This spectrum includes the human voice frequency band (audio band), the ISM (Industrial Scientific Medical) band and the optical frequency bands.

Microwave radiations span in frequency from 300 MHz to 300 GHz. What is the spectrum of microwave radiation? Spectrum refers to the entire range of frequencies that exist in the microwave band. Therefore, the spectrum of microwave radiation is (300 MHz to 300 GHz).

The difference between spectrum and bandwidth is that spectrum refers to the 'entirety' while bandwidth is a 'sub-section' of the spectrum. Spectrum refers to the whole of the quantity while bandwidth, on the other hand, is a portion of the entire spectrum. Bandwidth is a sub-section, a portion, of the spectrum.

If frequencies from 12 MHz up to 40 MHz are allocated for an application, the spectrum refers to the entire range of frequencies right from 12 MHz to 40 MHz. Therefore, the spectrum is (12 to 40) MHz. In some cases, the entire allocated range of frequencies may not be used by the application. So, if only 17 MHz to 20 MHz is used by an application, then that range of frequencies is called the 'bandwidth'.

12 MHz to 40 MHz = Spectrum
17 MHz to 20 MHz = Bandwidth

The commercial FM radio is one good example that explains the difference between the spectrum and the bandwidth. Spectrum here refers to the entire range of frequencies allocated for the FM radio application, which is usually around 20 MHz (87.5 MHz to 108 MHz). An FM radio station doesn't occupy the entire frequency band but just a fraction of it. Let us say that an FM station operates over the center frequency 93.5 MHz. FM radio stations are usually allocated a bandwidth of 200 kHz. So, the range of frequencies over which the FM radio station operates is 93.4 MHz to 93.6 MHz.

Spectrum = 20 MHz
Bandwidth of the radio station = 200 kHz

Let us look at one more example to understand the difference between spectrum and bandwidth. Grapes come in different colours, sizes and tastes depending on the growing conditions and methods. The entire family is still called grapes even though there are differences in colour, size and taste. Bandwidth, however, refers to a particular group: a group of grapes grown in area X might have black colour, slightly sour taste and oval figure, while a group of grapes grown in area Y might have green colour, sweet taste and oval figure.

Entire family of grapes = Spectrum
Different groups among grapes = Bandwidth
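Putting the FM example above into a small worked calculation (an illustrative recap of the same numbers):

\text{Allocated FM spectrum} = 108~\text{MHz} - 87.5~\text{MHz} = 20.5~\text{MHz} \approx 20~\text{MHz}

\text{Bandwidth of one station} = f_{\text{high}} - f_{\text{low}} = 93.6~\text{MHz} - 93.4~\text{MHz} = 0.2~\text{MHz} = 200~\text{kHz}

\text{Station range} = 93.5~\text{MHz} \pm \frac{200~\text{kHz}}{2} = 93.4~\text{MHz} \text{ to } 93.6~\text{MHz}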
Angular PrimeNG Button Component - GeeksforGeeks
11 Sep, 2021

Angular PrimeNG is an open-source framework with a rich set of native Angular UI components that are used for great styling, and this framework is used to make responsive websites with great ease. In this article, we will see how to use the Button Component in Angular PrimeNG. We will also learn about the properties and styling, along with the syntaxes, that will be used in the code.

Button component: It is used to make a standard button that will indicate a possible user action.

Properties of pButton:

label: It is the text of the button. It is of string data type & the default value is null.
icon: It is the name of the icon. It is of string data type & the default value is null.
iconPos: It specifies the position of the icon; the valid values are "left" and "right". It is of string type & the default value is left.
loading: It specifies whether the button is in a loading state. It is of boolean data type & the default value is false.
loadingIcon: It is an icon to display in the loading state. It is of string type & the default value is pi pi-spinner pi-spin.

Properties of p-button:

type: It specifies the type of the button. It is of string type & the default value is null.
label: It specifies the text of the button. It is of string type & the default value is null.
icon: It specifies the name of the icon. It is of string type & the default value is null.
iconPos: It specifies the position of the icon; the valid values are "left" and "right". It is of string type & the default value is left.
badge: It specifies the badge value. It is of string type & the default value is null.
badgeClass: It specifies the badge style class. It is of string type & the default value is null.
loading: It specifies whether the button is in a loading state. It is of boolean type & the default value is false.
loadingIcon: It specifies the icon to display in the loading state. It is of string type & the default value is pi pi-spinner pi-spin.
disabled: It specifies that the component should be disabled. It is of boolean type & the default value is false.
style: It specifies the inline style of the element. It is of string type & the default value is null.
styleClass: It specifies the style class of the element. It is of string type & the default value is null.
onClick: It is the callback to execute when the button is clicked. It is of the event type & the default value is null.
onFocus: It is the callback to execute when the button is focused. It is of the event type & the default value is null.
onBlur: It is the callback to execute when the button loses focus. It is of the event type & the default value is null.

Styling:

p-button: It is the button element.
p-button-icon: It is the icon element.
p-button-label: It is the label element of the button.

Creating Angular application & module installation:

Step 1: Create an Angular application using the following command.

ng new appname

Step 2: After creating your project folder i.e. appname, move to it using the following command.

cd appname

Step 3: Install PrimeNG in your given directory.

npm install primeng --save
npm install primeicons --save

Project Structure: It will look like the following:

Example 1: This is the basic example that illustrates how to use the Button Component.
app.component.html

<h2>GeeksforGeeks</h2>
<h5>PrimeNG Button Component</h5>
<button pButton pRipple label="Primary" class="p-button-raised"></button>
<button pButton pRipple label="Secondary" class="p-button-raised p-button-secondary"></button>
<button pButton pRipple label="Success" class="p-button-raised p-button-success"></button>
<button pButton pRipple label="Info" class="p-button-raised p-button-info"></button>
<button pButton pRipple label="Warning" class="p-button-raised p-button-warning"></button>
<button pButton pRipple label="Danger" class="p-button-raised p-button-danger"></button>

app.component.ts

import { Component } from '@angular/core';

@Component({
  selector: 'my-app',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.scss']
})
export class AppComponent {}

app.module.ts

import { NgModule } from "@angular/core";
import { BrowserModule } from "@angular/platform-browser";
import { BrowserAnimationsModule } from "@angular/platform-browser/animations";

import { AppComponent } from "./app.component";
import { ButtonModule } from "primeng/button";
import { RippleModule } from "primeng/ripple";

@NgModule({
  imports: [BrowserModule, BrowserAnimationsModule, ButtonModule, RippleModule],
  declarations: [AppComponent],
  bootstrap: [AppComponent],
})
export class AppModule {}

Output:

Example 2: In this example, we will see how to use the various available class properties in the Button Component.

app.component.html

<h2>GeeksforGeeks</h2>
<h5>PrimeNG Button Component</h5>
<h6>Small Outlined, Raised & Rounded Button</h6>
<button pButton pRipple label="Small button" class="p-button-raised p-button-sm p-button-rounded p-button-outlined"></button>
<h6>Normal Raised & Rounded Button</h6>
<button pButton pRipple label="Normal button" class="p-button-raised p-button-success p-button-rounded"></button>
<h6>Large, Text, Raised & Rounded Button</h6>
<button pButton pRipple label="Large button" class="p-button-text p-button-raised p-button-warning p-button-lg p-button-rounded"></button>

app.component.ts

import { Component } from "@angular/core";

@Component({
  selector: "my-app",
  templateUrl: "./app.component.html",
  styleUrls: ["./app.component.scss"],
})
export class AppComponent {}

app.module.ts

import { NgModule } from "@angular/core";
import { BrowserModule } from "@angular/platform-browser";
import { BrowserAnimationsModule } from "@angular/platform-browser/animations";

import { AppComponent } from "./app.component";
import { ButtonModule } from "primeng/button";
import { RippleModule } from "primeng/ripple";

@NgModule({
  imports: [BrowserModule, BrowserAnimationsModule, ButtonModule, RippleModule],
  declarations: [AppComponent],
  bootstrap: [AppComponent],
})
export class AppModule {}

Output:

Reference: https://primefaces.org/primeng/showcase/#/button
[ { "code": null, "e": 26464, "s": 26436, "text": "\n11 Sep, 2021" }, { "code": null, "e": 26854, "s": 26464, "text": "Angular PrimeNG is an open-source framework with a rich set of native Angular UI components that are used for great styling and this framework is used to make responsive websites with very much ease. In this article, we will know how to use the Button Component in Angular PrimeNG. We will also learn about the properties, styling along with their syntaxes that will be used in the code. " }, { "code": null, "e": 26952, "s": 26854, "text": "Button component: It is used to make a standard button that will indicate a possible user action." }, { "code": null, "e": 26975, "s": 26952, "text": "Properties of pButton:" }, { "code": null, "e": 27069, "s": 26975, "text": "label: It is the text of the button. It is of string data type & the default value is null. " }, { "code": null, "e": 27158, "s": 27069, "text": "icon: It is the name of the icon. It is of string data type & the default value is null." }, { "code": null, "e": 27297, "s": 27158, "text": "iconPos: It specifies the position of the icon, the valid values are “left” and “right”. It is of string type & the default value is left." }, { "code": null, "e": 27417, "s": 27297, "text": "loading: It specifies whether the button is in a loading state. It is of boolean data type & the default value is false" }, { "code": null, "e": 27544, "s": 27417, "text": "loadingIcon: It is an icon to display in the loading state. It is of string type & the default value is pi pi-spinner pi-spin." }, { "code": null, "e": 27568, "s": 27544, "text": "Properties of p-button:" }, { "code": null, "e": 27664, "s": 27568, "text": "type: It specifies the types of the button. It is of string type & the default value is null. " }, { "code": null, "e": 27760, "s": 27664, "text": "label: It specifies the text of the button. It is of string type & the default value is null. " }, { "code": null, "e": 27851, "s": 27760, "text": "icon: It specifies the name of the icon. It is of string type & the default value is null." }, { "code": null, "e": 27990, "s": 27851, "text": "iconPos: It specifies the position of the icon, the valid values are “left” and “right”. It is of string type & the default value is left." }, { "code": null, "e": 28077, "s": 27990, "text": "badge: It specifies the badge value. It is of string type & the default value is null." }, { "code": null, "e": 28175, "s": 28077, "text": "badgeClass: It specifies the badge style class. It is of string type & the default value is null." }, { "code": null, "e": 28291, "s": 28175, "text": "loading: It specifies whether the button is in a loading state. It is of boolean type & the default value is false." }, { "code": null, "e": 28426, "s": 28291, "text": "loadingIcon: It specifies the icon to display in the loading state. It is of string type & the default value is pi pi-spinner pi-spin." }, { "code": null, "e": 28536, "s": 28426, "text": "disabled: It specifies the component should be disabled. It is of boolean type & the default value is false." }, { "code": null, "e": 28639, "s": 28536, "text": "style: It specifies the inline style of the element. It is of string type & the default value is null." }, { "code": null, "e": 28746, "s": 28639, "text": "styleClass: It specifies the style class of the element. It is of string type & the default value is null." }, { "code": null, "e": 28870, "s": 28746, "text": "onClick: It is used to callback to execute when the button is clicked. 
It is of the event type & the default value is null." }, { "code": null, "e": 28994, "s": 28870, "text": "onFocus: It is used to callback to execute when the button is focused. It is of the event type & the default value is null." }, { "code": null, "e": 29118, "s": 28994, "text": "onBlur: It is used to callback to execute when the button loses focus. It is of the event type & the default value is null." }, { "code": null, "e": 29129, "s": 29120, "text": "Styling:" }, { "code": null, "e": 29165, "s": 29129, "text": "p-button: It is the button element." }, { "code": null, "e": 29204, "s": 29165, "text": "p-button-icon: It is the icon element." }, { "code": null, "e": 29259, "s": 29204, "text": "p-button-label: It is the label element of the button." }, { "code": null, "e": 29311, "s": 29259, "text": "Creating Angular application & module installation:" }, { "code": null, "e": 29378, "s": 29311, "text": "Step 1: Create an Angular application using the following command." }, { "code": null, "e": 29393, "s": 29378, "text": "ng new appname" }, { "code": null, "e": 29490, "s": 29393, "text": "Step 2: After creating your project folder i.e. appname, move to it using the following command." }, { "code": null, "e": 29501, "s": 29490, "text": "cd appname" }, { "code": null, "e": 29550, "s": 29501, "text": "Step 3: Install PrimeNG in your given directory." }, { "code": null, "e": 29607, "s": 29550, "text": "npm install primeng --save\nnpm install primeicons --save" }, { "code": null, "e": 29659, "s": 29607, "text": "Project Structure: It will look like the following:" }, { "code": null, "e": 29743, "s": 29659, "text": "Example 1: This is the basic example that illustrates how to use Button Component. " }, { "code": null, "e": 29762, "s": 29743, "text": "app.component.html" }, { "code": "<h2>GeeksforGeeks</h2><h5>PrimeNG Button Component</h5><button pButton pRipple label=\"Primary\" class=\"p-button-raised\"></button><button pButton pRipple label=\"Secondary\" class=\"p-button-raised p-button-secondary\"></button><button pButton pRipple label=\"Success\" class=\"p-button-raised p-button-success\"></button><button pButton pRipple label=\"Info\" class=\"p-button-raised p-button-info\"></button><button pButton pRipple label=\"Warning\" class=\"p-button-raised p-button-warning\"></button><button pButton pRipple label=\"Danger\" class=\"p-button-raised p-button-danger\"></button>", "e": 30359, "s": 29762, "text": null }, { "code": null, "e": 30376, "s": 30359, "text": "app.component.ts" }, { "code": "import { Component } from '@angular/core'; @Component({ selector: 'my-app', templateUrl: './app.component.html', styleUrls: ['./app.component.scss']})export class AppComponent {}", "e": 30559, "s": 30376, "text": null }, { "code": null, "e": 30575, "s": 30561, "text": "app.module.ts" }, { "code": "import { NgModule } from \"@angular/core\";import { BrowserModule } from \"@angular/platform-browser\";import { BrowserAnimationsModule } from \"@angular/platform-browser/animations\"; import { AppComponent } from \"./app.component\";import { ButtonModule } from \"primeng/button\";import { RippleModule } from \"primeng/ripple\"; @NgModule({ imports: [BrowserModule, BrowserAnimationsModule, ButtonModule, RippleModule], declarations: [AppComponent], bootstrap: [AppComponent],})export class AppModule {}", "e": 31104, "s": 30575, "text": null }, { "code": null, "e": 31112, "s": 31104, "text": "Output:" }, { "code": null, "e": 31226, "s": 31112, "text": "Example 2: In this example, we will know how to use the various 
available class property in the Button Component." }, { "code": null, "e": 31245, "s": 31226, "text": "app.component.html" }, { "code": "<h2>GeeksforGeeks</h2><h5>PrimeNG Button Component</h5><h6>Small Outlined, Raised & Rounded Button</h6><button pButton pRipple label=\"Small button\" class=\"p-button-raised p-button-sm p-button-rounded p-button-outlined\"></button><h6>Normal Raised & Rounded Button</h6><button pButton pRipple label=\"Normal button\" class=\"p-button-raised p-button-success p-button-rounded\"></button><h6>Large,Text, Raised & Rounded Button</h6><button pButton pRipple label=\"Large button\" class=\"p-button-text p-button-raised p-button-warning p-button-lg p-button-rounded\"></button>", "e": 31820, "s": 31245, "text": null }, { "code": null, "e": 31837, "s": 31820, "text": "app.component.ts" }, { "code": "import { Component } from \"@angular/core\"; @Component({ selector: \"my-app\", templateUrl: \"./app.component.html\", styleUrls: [\"./app.component.scss\"],})export class AppComponent {}", "e": 32021, "s": 31837, "text": null }, { "code": null, "e": 32035, "s": 32021, "text": "app.module.ts" }, { "code": "import { NgModule } from \"@angular/core\";import { BrowserModule } from \"@angular/platform-browser\";import { BrowserAnimationsModule } from \"@angular/platform-browser/animations\"; import { AppComponent } from \"./app.component\";import { ButtonModule } from \"primeng/button\";import { RippleModule } from \"primeng/ripple\"; @NgModule({ imports: [BrowserModule, BrowserAnimationsModule, ButtonModule, RippleModule], declarations: [AppComponent], bootstrap: [AppComponent],})export class AppModule {}", "e": 32568, "s": 32035, "text": null }, { "code": null, "e": 32576, "s": 32568, "text": "Output:" }, { "code": null, "e": 32636, "s": 32576, "text": "Reference: https://primefaces.org/primeng/showcase/#/button" }, { "code": null, "e": 32652, "s": 32636, "text": "Angular-PrimeNG" }, { "code": null, "e": 32662, "s": 32652, "text": "AngularJS" }, { "code": null, "e": 32679, "s": 32662, "text": "Web Technologies" }, { "code": null, "e": 32777, "s": 32679, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 32812, "s": 32777, "text": "Angular PrimeNG Dropdown Component" }, { "code": null, "e": 32847, "s": 32812, "text": "Angular PrimeNG Calendar Component" }, { "code": null, "e": 32882, "s": 32847, "text": "Angular PrimeNG Messages Component" }, { "code": null, "e": 32906, "s": 32882, "text": "Angular 10 (blur) Event" }, { "code": null, "e": 32959, "s": 32906, "text": "How to make a Bootstrap Modal Popup in Angular 9/8 ?" }, { "code": null, "e": 32999, "s": 32959, "text": "Remove elements from a JavaScript Array" }, { "code": null, "e": 33032, "s": 32999, "text": "Installation of Node.js on Linux" }, { "code": null, "e": 33077, "s": 33032, "text": "Convert a string to an integer in JavaScript" }, { "code": null, "e": 33120, "s": 33077, "text": "How to fetch data from an API in ReactJS ?" } ]
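Note: The onClick, onFocus, and onBlur properties described above are ordinary Angular event bindings on the p-button component. A minimal, hypothetical sketch is shown below; the handler names handleClick, handleFocus, and handleBlur are assumptions and would need to be defined in the component class.
<!-- Hypothetical sketch: wiring the p-button event callbacks to component methods -->
<p-button label="Submit" icon="pi pi-check"
          (onClick)="handleClick($event)"
          (onFocus)="handleFocus($event)"
          (onBlur)="handleBlur($event)"></p-button>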
How to display video controls in HTML5? - GeeksforGeeks
06 Apr, 2021 The HTML <video> controls attribute is used to display video controls in HTML5. It is the Boolean value. HTML5 most commonly uses ogg, mp4, ogm and ogv as a video formats in the video tag because the browser support for them differs. Syntax <video controls> <source> </video> From above Syntax controls attribute adds video controls like volume, pause, and play and <source> element allows you to specify alternative video files. The video control should include: Play Pause Volume Full-screen Mode Seeking Captions/Subtitles(if available) Track(if available) Attributes: Video tag supports mainly 5 attributes as mentioned below: autoplay : Makes the video start playing automatically, without waiting for the entire video file to finish downloading.loop : Through loop you can play the video again and again.muted : Makes the player muted by default.preload : This can be set to following value.auto : This implies whether the video should be load as soon as the page loads.metadata : This implies whether only the video metadata should be loaded.none : This implies browser should not load the video when the page loads.src : This defines the URL of the video that should be played by the video tag. autoplay : Makes the video start playing automatically, without waiting for the entire video file to finish downloading. loop : Through loop you can play the video again and again. muted : Makes the player muted by default. preload : This can be set to following value.auto : This implies whether the video should be load as soon as the page loads.metadata : This implies whether only the video metadata should be loaded.none : This implies browser should not load the video when the page loads. auto : This implies whether the video should be load as soon as the page loads. metadata : This implies whether only the video metadata should be loaded. none : This implies browser should not load the video when the page loads. src : This defines the URL of the video that should be played by the video tag. Note: Always specify the width and height of the video else web page will be confused that how much space the video will be required due to the reason the web page becomes slow down. Example 1: Using src attribute in below code. HTML <!DOCTYPE html><html> <body> <center> <h1 style="color:green;">GeeksforGeeks</h1> <h3>HTML video controls Attribute</h3> <video width="400" height="200" controls> <source src= "https://media.geeksforgeeks.org/wp-content/uploads/20190616234019/Canvas.move_.mp4" type="video/mp4"> <source src= "https://media.geeksforgeeks.org/wp-content/uploads/20190616234019/Canvas.move_.ogg" type="video/ogg"> </video> </center></body> </html> Output: Example 2: Using autoplay attribute to play video automatically. HTML <!DOCTYPE html><html> <body> <center> <h1 style="color:green;">GeeksforGeeks</h1> <h3>HTML video controls Attribute</h3> <video width="400" height="200" autoplay controls> <source src= "https://media.geeksforgeeks.org/wp-content/uploads/20190616234019/Canvas.move_.mp4" type="video/mp4"> <source src= "https://media.geeksforgeeks.org/wp-content/uploads/20190616234019/Canvas.move_.ogg" type="video/ogg"> </video> </center></body> </html> Output: Example 3: Poster attribute is used to display the image while video downloading or when user click the play button. 
HTML <!DOCTYPE html><html><body> <center> <h1 style="color:green;">GeeksforGeeks</h1> <h3>HTML video poster Attribute</h3> <video width="400" height="200" controls poster= "https://media.geeksforgeeks.org/wp-content/uploads/20190627130930/a218.png"> <source src= "https://media.geeksforgeeks.org/wp-content/uploads/20190616234019/Canvas.move_.mp4" type="video/mp4"> <source src= "https://media.geeksforgeeks.org/wp-content/uploads/20190616234019/Canvas.move_.ogg" type="video/ogg"> </video> </center></body> </html> Output: Supported Browsers: The browsers supported by HTML video tag are listed below: Google Chrome 4.0 Firefox 4.0 Apple Safari 4.0 Opera 10.5 Attention reader! Don’t stop learning now. Get hold of all the important HTML concepts with the Web Design for Beginners | HTML course. CSS-Properties CSS-Questions HTML-Questions HTML-Tags HTML5 Picked CSS HTML Web Technologies HTML Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. How to insert spaces/tabs in text using HTML/CSS? Top 10 Projects For Beginners To Practice HTML and CSS Skills How to update Node.js and NPM to next version ? How to create footer to stay at the bottom of a Web page? How to apply style to parent if it has child with CSS? How to insert spaces/tabs in text using HTML/CSS? Top 10 Projects For Beginners To Practice HTML and CSS Skills How to update Node.js and NPM to next version ? How to set the default value for an HTML <select> element ? Hide or show elements in HTML using display property
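Note: The loop, muted, and preload attributes described above are not shown in the examples; the snippet below is an illustrative sketch only, reusing the video URL from Example 1.
<!-- Illustrative sketch: loop, muted and preload combined with controls -->
<!-- preload="metadata" asks the browser to fetch only the video metadata up front -->
<video width="400" height="200" controls loop muted preload="metadata">
  <source src="https://media.geeksforgeeks.org/wp-content/uploads/20190616234019/Canvas.move_.mp4" type="video/mp4">
</video>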
[ { "code": null, "e": 32981, "s": 32953, "text": "\n06 Apr, 2021" }, { "code": null, "e": 33216, "s": 32981, "text": "The HTML <video> controls attribute is used to display video controls in HTML5. It is the Boolean value. HTML5 most commonly uses ogg, mp4, ogm and ogv as a video formats in the video tag because the browser support for them differs." }, { "code": null, "e": 33223, "s": 33216, "text": "Syntax" }, { "code": null, "e": 33260, "s": 33223, "text": "<video controls>\n <source>\n</video>" }, { "code": null, "e": 33448, "s": 33260, "text": "From above Syntax controls attribute adds video controls like volume, pause, and play and <source> element allows you to specify alternative video files. The video control should include:" }, { "code": null, "e": 33453, "s": 33448, "text": "Play" }, { "code": null, "e": 33459, "s": 33453, "text": "Pause" }, { "code": null, "e": 33466, "s": 33459, "text": "Volume" }, { "code": null, "e": 33483, "s": 33466, "text": "Full-screen Mode" }, { "code": null, "e": 33491, "s": 33483, "text": "Seeking" }, { "code": null, "e": 33524, "s": 33491, "text": "Captions/Subtitles(if available)" }, { "code": null, "e": 33544, "s": 33524, "text": "Track(if available)" }, { "code": null, "e": 33615, "s": 33544, "text": "Attributes: Video tag supports mainly 5 attributes as mentioned below:" }, { "code": null, "e": 34188, "s": 33615, "text": "autoplay : Makes the video start playing automatically, without waiting for the entire video file to finish downloading.loop : Through loop you can play the video again and again.muted : Makes the player muted by default.preload : This can be set to following value.auto : This implies whether the video should be load as soon as the page loads.metadata : This implies whether only the video metadata should be loaded.none : This implies browser should not load the video when the page loads.src : This defines the URL of the video that should be played by the video tag." }, { "code": null, "e": 34309, "s": 34188, "text": "autoplay : Makes the video start playing automatically, without waiting for the entire video file to finish downloading." }, { "code": null, "e": 34370, "s": 34309, "text": "loop : Through loop you can play the video again and again." }, { "code": null, "e": 34413, "s": 34370, "text": "muted : Makes the player muted by default." }, { "code": null, "e": 34685, "s": 34413, "text": "preload : This can be set to following value.auto : This implies whether the video should be load as soon as the page loads.metadata : This implies whether only the video metadata should be loaded.none : This implies browser should not load the video when the page loads." }, { "code": null, "e": 34765, "s": 34685, "text": "auto : This implies whether the video should be load as soon as the page loads." }, { "code": null, "e": 34839, "s": 34765, "text": "metadata : This implies whether only the video metadata should be loaded." }, { "code": null, "e": 34914, "s": 34839, "text": "none : This implies browser should not load the video when the page loads." }, { "code": null, "e": 34994, "s": 34914, "text": "src : This defines the URL of the video that should be played by the video tag." }, { "code": null, "e": 35177, "s": 34994, "text": "Note: Always specify the width and height of the video else web page will be confused that how much space the video will be required due to the reason the web page becomes slow down." }, { "code": null, "e": 35223, "s": 35177, "text": "Example 1: Using src attribute in below code." 
}, { "code": null, "e": 35228, "s": 35223, "text": "HTML" }, { "code": "<!DOCTYPE html><html> <body> <center> <h1 style=\"color:green;\">GeeksforGeeks</h1> <h3>HTML video controls Attribute</h3> <video width=\"400\" height=\"200\" controls> <source src= \"https://media.geeksforgeeks.org/wp-content/uploads/20190616234019/Canvas.move_.mp4\" type=\"video/mp4\"> <source src= \"https://media.geeksforgeeks.org/wp-content/uploads/20190616234019/Canvas.move_.ogg\" type=\"video/ogg\"> </video> </center></body> </html>", "e": 35784, "s": 35228, "text": null }, { "code": null, "e": 35792, "s": 35784, "text": "Output:" }, { "code": null, "e": 35857, "s": 35792, "text": "Example 2: Using autoplay attribute to play video automatically." }, { "code": null, "e": 35862, "s": 35857, "text": "HTML" }, { "code": "<!DOCTYPE html><html> <body> <center> <h1 style=\"color:green;\">GeeksforGeeks</h1> <h3>HTML video controls Attribute</h3> <video width=\"400\" height=\"200\" autoplay controls> <source src= \"https://media.geeksforgeeks.org/wp-content/uploads/20190616234019/Canvas.move_.mp4\" type=\"video/mp4\"> <source src= \"https://media.geeksforgeeks.org/wp-content/uploads/20190616234019/Canvas.move_.ogg\" type=\"video/ogg\"> </video> </center></body> </html>", "e": 36427, "s": 35862, "text": null }, { "code": null, "e": 36435, "s": 36427, "text": "Output:" }, { "code": null, "e": 36552, "s": 36435, "text": "Example 3: Poster attribute is used to display the image while video downloading or when user click the play button." }, { "code": null, "e": 36557, "s": 36552, "text": "HTML" }, { "code": "<!DOCTYPE html><html><body> <center> <h1 style=\"color:green;\">GeeksforGeeks</h1> <h3>HTML video poster Attribute</h3> <video width=\"400\" height=\"200\" controls poster= \"https://media.geeksforgeeks.org/wp-content/uploads/20190627130930/a218.png\"> <source src= \"https://media.geeksforgeeks.org/wp-content/uploads/20190616234019/Canvas.move_.mp4\" type=\"video/mp4\"> <source src= \"https://media.geeksforgeeks.org/wp-content/uploads/20190616234019/Canvas.move_.ogg\" type=\"video/ogg\"> </video> </center></body> </html>", "e": 37250, "s": 36557, "text": null }, { "code": null, "e": 37258, "s": 37250, "text": "Output:" }, { "code": null, "e": 37337, "s": 37258, "text": "Supported Browsers: The browsers supported by HTML video tag are listed below:" }, { "code": null, "e": 37355, "s": 37337, "text": "Google Chrome 4.0" }, { "code": null, "e": 37367, "s": 37355, "text": "Firefox 4.0" }, { "code": null, "e": 37384, "s": 37367, "text": "Apple Safari 4.0" }, { "code": null, "e": 37395, "s": 37384, "text": "Opera 10.5" }, { "code": null, "e": 37532, "s": 37395, "text": "Attention reader! Don’t stop learning now. Get hold of all the important HTML concepts with the Web Design for Beginners | HTML course." 
}, { "code": null, "e": 37547, "s": 37532, "text": "CSS-Properties" }, { "code": null, "e": 37561, "s": 37547, "text": "CSS-Questions" }, { "code": null, "e": 37576, "s": 37561, "text": "HTML-Questions" }, { "code": null, "e": 37586, "s": 37576, "text": "HTML-Tags" }, { "code": null, "e": 37592, "s": 37586, "text": "HTML5" }, { "code": null, "e": 37599, "s": 37592, "text": "Picked" }, { "code": null, "e": 37603, "s": 37599, "text": "CSS" }, { "code": null, "e": 37608, "s": 37603, "text": "HTML" }, { "code": null, "e": 37625, "s": 37608, "text": "Web Technologies" }, { "code": null, "e": 37630, "s": 37625, "text": "HTML" }, { "code": null, "e": 37728, "s": 37630, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 37778, "s": 37728, "text": "How to insert spaces/tabs in text using HTML/CSS?" }, { "code": null, "e": 37840, "s": 37778, "text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills" }, { "code": null, "e": 37888, "s": 37840, "text": "How to update Node.js and NPM to next version ?" }, { "code": null, "e": 37946, "s": 37888, "text": "How to create footer to stay at the bottom of a Web page?" }, { "code": null, "e": 38001, "s": 37946, "text": "How to apply style to parent if it has child with CSS?" }, { "code": null, "e": 38051, "s": 38001, "text": "How to insert spaces/tabs in text using HTML/CSS?" }, { "code": null, "e": 38113, "s": 38051, "text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills" }, { "code": null, "e": 38161, "s": 38113, "text": "How to update Node.js and NPM to next version ?" }, { "code": null, "e": 38221, "s": 38161, "text": "How to set the default value for an HTML <select> element ?" } ]
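Note: The src attribute listed above can also be set directly on the <video> element instead of using nested <source> tags. This is a minimal sketch only (a single format, so no fallback source is provided), reusing the URL from the examples above.
<!-- Minimal sketch: src set directly on the video element, no <source> fallback -->
<video width="400" height="200" controls src="https://media.geeksforgeeks.org/wp-content/uploads/20190616234019/Canvas.move_.mp4"></video>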