Dataset schema:
- title: string (length 3 to 221)
- text: string (length 17 to 477k)
- parsed: list (length 0 to 3.17k)
Java program to add two integers
The + operator in Java is used to add two numbers. Read the required numbers from the user using the Scanner class and add these two integers using the + operator.

import java.util.Scanner;
public class AddTwoNumbers {
   public static void main(String args[]){
      Scanner sc = new Scanner(System.in);
      System.out.println("Enter the value of the first number ::");
      int a = sc.nextInt();
      System.out.println("Enter the value of the second number ::");
      int b = sc.nextInt();
      int result = a+b;
      System.out.println("Sum of the given two numbers is ::"+result);
   }
}

Output:

Enter the value of the first number ::
5564
Enter the value of the second number ::
2234
Sum of the given two numbers is ::7798
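Note that Scanner.nextInt() throws an InputMismatchException if the user types something that is not an integer. The following defensive variant is only an illustrative sketch (the class name and messages are assumptions, not part of the original tutorial); it uses hasNextInt() to validate the input first:

import java.util.Scanner;

public class AddTwoNumbersSafe {
   public static void main(String[] args) {
      Scanner sc = new Scanner(System.in);
      System.out.println("Enter the value of the first number ::");
      // hasNextInt() reports whether the next token can be read as an int
      if (!sc.hasNextInt()) {
         System.out.println("Please enter a whole number.");
         return;
      }
      int a = sc.nextInt();
      System.out.println("Enter the value of the second number ::");
      if (!sc.hasNextInt()) {
         System.out.println("Please enter a whole number.");
         return;
      }
      int b = sc.nextInt();
      System.out.println("Sum of the given two numbers is ::" + (a + b));
   }
}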
How to write a conditional expression in lambda expression in Java?
The conditional operator is used to make conditional expressions in Java. It is also called the ternary operator because it has three operands: a boolean condition, a first expression, and a second expression. We can also write a conditional expression inside a lambda expression, as in the program below.

interface Algebra {
   int subtraction(int a, int b);
}
public class ConditionalExpressionLambdaTest {
   public static void main(String args[]) {
      System.out.println("The value is: " + getAlgebra(false).subtraction(20, 40));
      System.out.println("The value is: " + getAlgebra(true).subtraction(40, 10));
   }
   static Algebra getAlgebra(boolean reverse) {
      Algebra alg = reverse ? (a, b) -> a - b : (a, b) -> b - a;   // conditional expression
      return alg;
   }
}

Output:

The value is: 20
The value is: 30
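For reference, the same condition ? firstExpression : secondExpression form can be used outside a lambda as well. This is a minimal illustrative snippet, not part of the original example:

// General form: condition ? value-if-true : value-if-false
int a = 10, b = 20;
int max = (a > b) ? a : b;   // evaluates to 20 because (a > b) is false
System.out.println("Max value is: " + max);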
10 Steps to Setup a Comprehensive Data Science Workspace with VSCode on Windows | by Gong Na | Towards Data Science
To start a data science project, choosing an effective development environment is always the first step. Jupyter Notebook/Lab is a common choice. I like it very much, but it's really not enough.

When I started my first professional data science project, I used many tools together: Jupyter Notebook for model training, PyCharm (free Community version) for structured Python scripts, Anaconda Prompt for Python package management, MobaXterm for remote SSH connections, Excel for raw data review and so on. Every day, I jumped between different windows all the time.

Fortunately, I found a single tool which supports all of the above functions and, most importantly, is free: VSCode.

If you are suffering from the same problem, or you are a beginner in the data science field, this guide may help you build your own effective data science workspace and have a good starting point. With this setup you can:

- Work with Anaconda
- Write structured and readable Python scripts
- Use Jupyter Notebook
- Manage Python packages and conda virtual environments
- Work on a remote server (e.g. GPU nodes) via SSH with a very user-friendly interface
- View and edit different types of files (e.g. .txt, .csv, .xlsx, .png, .md, .yml, Dockerfile ...)
- Use Git version control
- Be flexible for future project extensions such as adding a Docker-based deployment

Now, let's start ~🚀

Overview of Contents
Step 1: Install Anaconda
Step 2: (Optional) Create a Virtualenv in Anaconda Prompt
Step 3: Open VSCode
Step 4: Open Your Project Folder from VSCode
Step 5: Install Python Extension in VSCode
Step 6: Enable Your Virtualenv in VSCode
Step 7: Config the VSCode Terminal for 'conda' & 'python'
Step 8: Enable Jupyter Notebook in VSCode
Step 9: Connect to the Remote Server from VSCode via SSH
Step 10: Do Git Push in VSCode

[The following setup is based on Win10.]

Step 1: Install Anaconda

You can install the latest edition from here, or an archived version from here. If you are going to use the integrated terminal in VSCode, I highly recommend installing a version of Anaconda ≥ 4.6.

After a successful installation, you have also installed Jupyter Notebook, Python and some common Python packages for data science usage. You can now find the Anaconda Navigator, Anaconda Prompt and Jupyter Notebook icons in the Start Menu.

Step 2: (Optional) Create a Virtualenv in Anaconda Prompt

To avoid mutual interference between different projects, it's better to develop in an isolated virtual environment. If you really prefer to develop in the default base env, you can skip this step.

Open Anaconda Prompt from the Start Menu.

Check your current env list:

conda env list

Create your own virtualenv with a name and a specified Python version (example name: ssl):

conda create --name ssl python==3.6.4

Activate the virtualenv:

conda activate ssl

Check which packages exist in this virtualenv (Python should have been installed during the virtualenv creation):

conda list

After finishing the setup in VSCode, you can manage your virtualenv directly in the VSCode terminal. For more on conda environment management, please check here.

Step 3: Open VSCode

Option 1: open Anaconda Navigator > launch VSCode from the Home page.

Option 2: you can also install VSCode from the website. This way you get a shortcut and can launch VSCode quickly from the desktop or your taskbar. From my own experience, this way is faster than Option 1.

Step 4: Open Your Project Folder from VSCode

Create a new project folder as your root directory anywhere you like. Open VSCode > click File in the top left > click Open Folder > click the target folder name. Now you can put all your project files and scripts here.

Step 5: Install Python Extension in VSCode

VSCode is a source-code editor supporting a bunch of languages.
In order to enable Python, we need to install the Python extension. Click the Extensions icon in the left bar > search Python > click Install.

Now you can create standard Python scripts simply by using the .py file extension.

If you also like the cute Python icon in front of helloWorld.py, you can get it by adding the VSCode Great Icons extension. It helps you easily distinguish different types of files and folders.

Step 6: Enable Your Virtualenv in VSCode

In VSCode, press ctrl+shift+p and a Command Palette will pop up in the top middle. Click Python: Select Interpreter > you will see a list containing all your virtualenvs and the base env > choose the proper virtualenv. If you cannot see Python: Select Interpreter, just type and search for it.

Afterwards, the Python interpreter with the env name will show in the bottom left of the Status Bar.

Step 7: Config the VSCode Terminal for 'conda' & 'python'

Open a new VSCode terminal by clicking Terminal in the top menu or using the shortcut ctrl+shift+`. The terminal opens at the bottom.

Choose a default shell based on your personal preference (default: cmd). If you want to use Linux commands on Windows just like me, PowerShell is a good choice.

After selecting the shell, open another new terminal to pick up the change. If you successfully selected a Python interpreter in the last step, the virtualenv will now be automatically activated in the newly opened terminal via conda activate, and the virtualenv name will show in () at the front.

Now you can execute .py files in the terminal by

python helloWorld.py

💢 Two common errors you may encounter:

1. conda activate fails in PowerShell.
Solution: conda supports conda activate in PowerShell only from version 4.6 onwards. So if you want to use PowerShell as your default shell in VSCode, make sure that your conda version is at least 4.6 (it may differ between Windows systems).

2. Unrecognized term error when you run conda or python in the terminal.
Solution: Get your python and conda paths by running the commands below in a terminal:

where python
where conda

Copy and paste both the python and conda paths into the Path environment variable.

Step 8: Enable Jupyter Notebook in VSCode

Install the Jupyter package, which includes the notebook, qtconsole, and the IPython kernel, via

pip install jupyter
# or
conda install -c conda-forge jupyter

Create a new Jupyter notebook file: press ctrl+shift+p > search Create New Blank Jupyter Notebook > click it > save the created .ipynb file with a name.

In a few seconds, the top-right Jupyter Server will automatically be set to local, and the kernel to the conda environment chosen in Step 6. If not, or if you want to change the kernel/env: press ctrl+shift+p > search Select Interpreter to start Jupyter server > click it > choose a proper env from the shown list.

Now you can use Jupyter Notebook as usual.

Step 9: Connect to the Remote Server from VSCode via SSH

Another awesome feature, which I like the most, is the remote SSH connection, which supports opening any remote folder with VSCode's full feature set.

Install the Remote-SSH extension. You will then see a little connection icon in the bottom left of the Status Bar.

Click the icon > select Remote-SSH: Connect to Host... > add the standard ssh connection command ssh username@hostname > hit Enter. An empty window will open and set up the remote SSH connection automatically.
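If you connect to the same remote machine often, the connection can also be stored in an SSH config file (on Windows typically C:\Users\<you>\.ssh\config), so that Remote-SSH lists the host by its alias. The alias, host address, user name and key path below are placeholders, not values from the article:

Host gpu-node
    HostName your.server.address
    User your_username
    IdentityFile ~/.ssh/id_rsa

After saving this file, Remote-SSH: Connect to Host... will offer gpu-node directly in the host list.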
Once connected, you will see the host name in the bottom left of the Status Bar as SSH:username@hostname. Now you can open any folder under your home directory on that connected remote machine and work on it just as if it were local.

💢 If you get the "corrupted MAC on input" error during the remote connection, you can solve it by adding an available MAC cryptographic algorithm to the ssh configuration. I posted the detailed steps in this story.

Step 10: Do Git Push in VSCode

VSCode supports Git version control out of the box, so you can track your code in a very pleasant format and push it easily with a few clicks.

If your opened folder is under a git repository, the branch name will be shown in the bottom left with a * in VSCode, and the total number of changed files will be shown with the Source Control icon in the left bar. In the example below, it's on the master branch with one changed file in Untracked status.

Click the Source Control icon > click the file name > you can check the detailed code changes in a comparison view.

Stage changes by clicking the + icon after the file > add a commit message in the box above > click ✔ above to commit.

A Synchronize Changes action will then show in the Status Bar with a little 1↑ icon, which indicates that you have 1 commit not pushed yet. Just click it to push all committed changes. After a successful push, the 1↑ indicator will disappear.

Now your Data Science workspace in VSCode is ready!

VSCode is far more powerful than what I shared above. There are a lot of diverse extensions which support different development requirements. If you are new to them and feel a little lost in the large extension pool at the beginning, the following essential extensions may help you start. Just keep trying!

- Predawn Theme Kit # readable code
- Sublime Text Keymap and Settings Importer # readable code
- Excel Viewer # preview csv file in excel format
- Prettify JSON # parameter config
- YAML # parameter config
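If you want the interpreter and terminal choices from Steps 6 and 7 to travel with the project, they can also be written into the workspace's .vscode/settings.json. The snippet below is only a sketch; the exact setting names and paths are assumptions that depend on your VSCode and Python extension versions and install locations:

{
    // path to the python.exe of the conda virtualenv (example path)
    "python.pythonPath": "C:\\Users\\<you>\\Anaconda3\\envs\\ssl\\python.exe",
    // use PowerShell as the default integrated shell on Windows
    "terminal.integrated.shell.windows": "C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe"
}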
How to appropriately plot the losses values acquired by (loss_curve_) from MLPClassifier? (Matplotlib)
To appropriately plot the loss values acquired from an MLPClassifier's loss_curve_ attribute, we can take the following steps −

- Set the figure size and adjust the padding between and around the subplots.
- Make params, a list of dictionaries.
- Make a list of labels and plot arguments.
- Create a figure and a set of subplots, with nrows=2 and ncols=2.
- Load and return the iris dataset (classification).
- Get X_digits and y_digits from the dataset.
- Build a customized data_sets list of tuples.
- Iterate over the zipped axes, data_sets and the list of title names.
- In the plot_on_dataset() method, set the title of the current axis.
- Create the Multi-layer Perceptron classifier instances.
- Get mlps, i.e., a list of MLPClassifier instances.
- Iterate over mlps and plot each mlp.loss_curve_ using the plot() method.
- To display the figure, use the show() method.

import warnings
import matplotlib.pyplot as plt
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import MinMaxScaler
from sklearn import datasets
from sklearn.exceptions import ConvergenceWarning

plt.rcParams["figure.figsize"] = [7.50, 3.50]
plt.rcParams["figure.autolayout"] = True

params = [{'solver': 'sgd', 'learning_rate': 'constant', 'momentum': 0, 'learning_rate_init': 0.2},
   {'solver': 'sgd', 'learning_rate': 'constant', 'momentum': .9, 'nesterovs_momentum': False, 'learning_rate_init': 0.2},
   {'solver': 'sgd', 'learning_rate': 'constant', 'momentum': .9, 'nesterovs_momentum': True, 'learning_rate_init': 0.2},
   {'solver': 'sgd', 'learning_rate': 'invscaling', 'momentum': 0, 'learning_rate_init': 0.2},
   {'solver': 'sgd', 'learning_rate': 'invscaling', 'momentum': .9, 'nesterovs_momentum': True, 'learning_rate_init': 0.2},
   {'solver': 'sgd', 'learning_rate': 'invscaling', 'momentum': .9, 'nesterovs_momentum': False, 'learning_rate_init': 0.2},
   {'solver': 'adam', 'learning_rate_init': 0.01}]

labels = ["constant learning-rate", "constant with momentum", "constant with Nesterov's momentum", "inv-scaling learning-rate", "inv-scaling with momentum", "inv-scaling with Nesterov's momentum", "adam"]

plot_args = [{'c': 'red', 'linestyle': '-'},
   {'c': 'green', 'linestyle': '-'},
   {'c': 'blue', 'linestyle': '-'},
   {'c': 'red', 'linestyle': '--'},
   {'c': 'green', 'linestyle': '--'},
   {'c': 'blue', 'linestyle': '--'},
   {'c': 'black', 'linestyle': '-'}]

def plot_on_dataset(X, y, ax, name):
   ax.set_title(name)
   X = MinMaxScaler().fit_transform(X)
   mlps = []
   if name == "digits":
      max_iter = 15
   else:
      max_iter = 400
   for label, param in zip(labels, params):
      mlp = MLPClassifier(random_state=0, max_iter=max_iter, **param)
      with warnings.catch_warnings():
         warnings.filterwarnings("ignore", category=ConvergenceWarning, module="sklearn")
         mlp.fit(X, y)
      mlps.append(mlp)
   for mlp, label, args in zip(mlps, labels, plot_args):
      ax.plot(mlp.loss_curve_, label=label, **args)

fig, axes = plt.subplots(2, 2)
iris = datasets.load_iris()
X_digits, y_digits = datasets.load_digits(return_X_y=True)
data_sets = [(iris.data, iris.target), (X_digits, y_digits), datasets.make_circles(noise=0.2, factor=0.5, random_state=1), datasets.make_moons(noise=0.3, random_state=0)]

for ax, data, name in zip(axes.ravel(), data_sets, ['iris', 'digits', 'circles', 'moons']):
   plot_on_dataset(*data, ax=ax, name=name)

fig.legend(ax.get_lines(), labels, ncol=3, loc="upper center")
plt.show()
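If you only need the core idea rather than the full comparison grid above, a minimal sketch could look like the following. The toy dataset and parameter values here are illustrative assumptions, not part of the original example:

from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
import matplotlib.pyplot as plt

# Illustrative toy data
X, y = make_classification(n_samples=200, random_state=0)

# loss_curve_ stores one training-loss value per iteration
# (available for the sgd and adam solvers after fitting)
clf = MLPClassifier(solver="adam", max_iter=300, random_state=0).fit(X, y)

plt.plot(clf.loss_curve_)
plt.xlabel("Iteration")
plt.ylabel("Training loss")
plt.show()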
MongoDB query to remove subdocument from document?
To remove a subdocument from a document, use $pull along with update(). Let us first create a collection with documents −

> db.demo538.insertOne(
...    {
...       id:101,
...       "details":
...       {
...          anotherDetails:
...          [
...             {
...                "Name":"Chris",
...                Age:21
...             },
...             {
...                "Name":"David",
...                Age:23
...             },
...             {
...                "Name":"Bob",
...                Age:20
...             }
...          ]
...       }
...    }
... );
{
   "acknowledged" : true,
   "insertedId" : ObjectId("5e8c8f0aef4dcbee04fbbc08")
}

Display all documents from the collection with the help of the find() method −

> db.demo538.find();

This will produce the following output −

{ "_id" : ObjectId("5e8c8f0aef4dcbee04fbbc08"), "id" : 101, "details" : { "anotherDetails" : [ { "Name" : "Chris", "Age" : 21 }, { "Name" : "David", "Age" : 23 }, { "Name" : "Bob", "Age" : 20 } ] } }

Following is the query to remove a subdocument from a document −

> db.demo538.update({ id:101},
... {$pull : { "details.anotherDetails" : {"Age":23} } } )
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })

Display all documents from the collection with the help of the find() method −

> db.demo538.find();

This will produce the following output −

{ "_id" : ObjectId("5e8c8f0aef4dcbee04fbbc08"), "id" : 101, "details" : { "anotherDetails" : [ { "Name" : "Chris", "Age" : 21 }, { "Name" : "Bob", "Age" : 20 } ] } }
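A related variant (an illustrative addition, not part of the original example): if the goal is to remove the entire embedded details document rather than pulling one array element, the $unset operator can be used instead:

> db.demo538.update({ id:101 }, { $unset: { "details": "" } })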
Shortest Path Faster Algorithm - GeeksforGeeks
07 Feb, 2022

Prerequisites: Bellman-Ford Algorithm

Given a directed weighted graph with V vertices, E edges and a source vertex S, the task is to find the shortest path from the source vertex to all other vertices in the given graph.

Example:

Input: V = 5, S = 1, arr = {{1, 2, 1}, {2, 3, 7}, {2, 4, -2}, {1, 3, 8}, {1, 4, 9}, {3, 4, 3}, {2, 5, 3}, {4, 5, -3}}
Output:
1, 0
2, 1
3, 8
4, -1
5, -4
Explanation: For the given input, the shortest path from 1 to 1 is 0, 1 to 2 is 1 and so on.

Input: V = 5, S = 1, arr = {{1, 2, -1}, {1, 3, 4}, {2, 3, 3}, {2, 4, 2}, {2, 5, 2}, {4, 3, 5}, {4, 2, 1}, {5, 4, 3}}
Output:
1, 0
2, -1
3, 2
4, 1
5, 1

Approach: The Shortest Path Faster Algorithm is based on the Bellman-Ford algorithm, where every vertex is used to relax its adjacent vertices, but in the SPF algorithm a queue of vertices is maintained and a vertex is added to the queue only if that vertex is relaxed. This process repeats until no more vertices can be relaxed. The following steps can be performed to compute the result:

- Create an array d[] to store the shortest distance of every vertex from the source vertex. Initialize this array with infinity, except for d[S] = 0 where S is the source vertex.
- Create a queue Q and push the starting source vertex into it.
- While the queue is not empty, pop a vertex u and, for each edge (u, v) in the graph:
  - If d[v] > d[u] + weight of edge(u, v), set d[v] = d[u] + weight of edge(u, v).
  - If vertex v is not already present in the queue, push vertex v into the queue.

Note: The term relaxation means updating the cost of all vertices connected to a vertex v if those costs would be improved by including the path via vertex v. This can be understood better from an analogy between the estimate of the shortest path and the length of a helical tension spring, which is not designed for compression. Initially, the cost of the shortest path is an overestimate, likened to a stretched-out spring. As shorter paths are found, the estimated cost is lowered, and the spring is relaxed.
Eventually, the shortest path, if one exists, is found and the spring has been relaxed to its resting length.Below is the implementation of the above approach: C++ Java C# Python3 Javascript // C++ implementation of SPFA #include <bits/stdc++.h>using namespace std; // Graph is stored as vector of vector of pairs// first element of pair store vertex// second element of pair store weightvector<vector<pair<int, int> >> graph; // Function to add edges in the graph// connecting a pair of vertex(frm) and weight// to another vertex(to) in graphvoid addEdge(int frm, int to, int weight){ graph[frm].push_back({ to, weight });} // Function to print shortest distance from sourcevoid print_distance(int d[], int V){ cout << "Vertex \t\t Distance" << " from source" << endl; for (int i = 1; i <= V; i++) { cout << i << " " << d[i] << '\n'; }} // Function to compute the SPF algorithmvoid shortestPathFaster(int S, int V){ // Create array d to store shortest distance int d[V + 1]; // Boolean array to check if vertex // is present in queue or not bool inQueue[V + 1] = { false }; // Initialize the distance from source to // other vertex as INT_MAX(infinite) for (int i = 0; i <= V; i++) { d[i] = INT_MAX; } d[S] = 0; queue<int> q; q.push(S); inQueue[S] = true; while (!q.empty()) { // Take the front vertex from Queue int u = q.front(); q.pop(); inQueue[u] = false; // Relaxing all the adjacent edges of // vertex taken from the Queue for (int i = 0; i < graph[u].size(); i++) { int v = graph[u][i].first; int weight = graph[u][i].second; if (d[v] > d[u] + weight) { d[v] = d[u] + weight; // Check if vertex v is in Queue or not // if not then push it into the Queue if (!inQueue[v]) { q.push(v); inQueue[v] = true; } } } } // Print the result print_distance(d, V);} // Driver codeint main(){ int V = 5; int S = 1; graph = vector<vector<pair<int,int>>> (V+1); // Connect vertex a to b with weight w // addEdge(a, b, w) addEdge(1, 2, 1); addEdge(2, 3, 7); addEdge(2, 4, -2); addEdge(1, 3, 8); addEdge(1, 4, 9); addEdge(3, 4, 3); addEdge(2, 5, 3); addEdge(4, 5, -3); // Calling shortestPathFaster function shortestPathFaster(S, V); return 0;} // Java implementation of SPFAimport java.util.*; class GFG{ static class pair { int first, second; public pair(int first, int second) { this.first = first; this.second = second; } } // Graph is stored as vector of vector of pairs// first element of pair store vertex// second element of pair store weightstatic Vector<pair > []graph = new Vector[100000]; // Function to add edges in the graph// connecting a pair of vertex(frm) and weight// to another vertex(to) in graphstatic void addEdge(int frm, int to, int weight){ graph[frm].add(new pair( to, weight ));} // Function to print shortest distance from sourcestatic void print_distance(int d[], int V){ System.out.print("Vertex \t\t Distance" + " from source" +"\n"); for (int i = 1; i <= V; i++) { System.out.printf("%d \t\t %d\n", i, d[i]); }} // Function to compute the SPF algorithmstatic void shortestPathFaster(int S, int V){ // Create array d to store shortest distance int []d = new int[V + 1]; // Boolean array to check if vertex // is present in queue or not boolean []inQueue = new boolean[V + 1]; // Initialize the distance from source to // other vertex as Integer.MAX_VALUE(infinite) for (int i = 0; i <= V; i++) { d[i] = Integer.MAX_VALUE; } d[S] = 0; Queue<Integer> q = new LinkedList<>(); q.add(S); inQueue[S] = true; while (!q.isEmpty()) { // Take the front vertex from Queue int u = q.peek(); q.remove(); inQueue[u] = false; // Relaxing all the 
adjacent edges of // vertex taken from the Queue for (int i = 0; i < graph[u].size(); i++) { int v = graph[u].get(i).first; int weight = graph[u].get(i).second; if (d[v] > d[u] + weight) { d[v] = d[u] + weight; // Check if vertex v is in Queue or not // if not then push it into the Queue if (!inQueue[v]) { q.add(v); inQueue[v] = true; } } } } // Print the result print_distance(d, V);} // Driver codepublic static void main(String[] args){ int V = 5; int S = 1; for (int i = 0; i < graph.length; i++) { graph[i] = new Vector<pair>(); } // Connect vertex a to b with weight w // addEdge(a, b, w) addEdge(1, 2, 1); addEdge(2, 3, 7); addEdge(2, 4, -2); addEdge(1, 3, 8); addEdge(1, 4, 9); addEdge(3, 4, 3); addEdge(2, 5, 3); addEdge(4, 5, -3); // Calling shortestPathFaster function shortestPathFaster(S, V);}} // This code is contributed by 29AjayKumar // C# implementation of SPFAusing System;using System.Collections.Generic; class GFG{ class pair { public int first, second; public pair(int first, int second) { this.first = first; this.second = second; } } // Graph is stored as vector of vector of pairs// first element of pair store vertex// second element of pair store weightstatic List<pair> []graph = new List<pair>[100000]; // Function to add edges in the graph// connecting a pair of vertex(frm) and weight// to another vertex(to) in graphstatic void addEdge(int frm, int to, int weight){ graph[frm].Add(new pair( to, weight ));} // Function to print shortest distance from sourcestatic void print_distance(int []d, int V){ Console.Write("Vertex \t\t Distance" + " from source" +"\n"); for (int i = 1; i <= V; i++) { Console.Write("{0} \t\t {1}\n", i, d[i]); }} // Function to compute the SPF algorithmstatic void shortestPathFaster(int S, int V){ // Create array d to store shortest distance int []d = new int[V + 1]; // Boolean array to check if vertex // is present in queue or not bool []inQueue = new bool[V + 1]; // Initialize the distance from source to // other vertex as int.MaxValue(infinite) for (int i = 0; i <= V; i++) { d[i] = int.MaxValue; } d[S] = 0; Queue<int> q = new Queue<int>(); q.Enqueue(S); inQueue[S] = true; while (q.Count!=0) { // Take the front vertex from Queue int u = q.Peek(); q.Dequeue(); inQueue[u] = false; // Relaxing all the adjacent edges of // vertex taken from the Queue for (int i = 0; i < graph[u].Count; i++) { int v = graph[u][i].first; int weight = graph[u][i].second; if (d[v] > d[u] + weight) { d[v] = d[u] + weight; // Check if vertex v is in Queue or not // if not then push it into the Queue if (!inQueue[v]) { q.Enqueue(v); inQueue[v] = true; } } } } // Print the result print_distance(d, V);} // Driver codepublic static void Main(String[] args){ int V = 5; int S = 1; for (int i = 0; i < graph.Length; i++) { graph[i] = new List<pair>(); } // Connect vertex a to b with weight w // addEdge(a, b, w) addEdge(1, 2, 1); addEdge(2, 3, 7); addEdge(2, 4, -2); addEdge(1, 3, 8); addEdge(1, 4, 9); addEdge(3, 4, 3); addEdge(2, 5, 3); addEdge(4, 5, -3); // Calling shortestPathFaster function shortestPathFaster(S, V);}} // This code is contributed by PrinciRaj1992 # Python3 implementation of SPFAfrom collections import deque # Graph is stored as vector of vector of pairs# first element of pair store vertex# second element of pair store weightgraph = [[] for _ in range(100000)] # Function to add edges in the graph# connecting a pair of vertex(frm) and weight# to another vertex(to) in graphdef addEdge(frm, to, weight): graph[frm].append([to, weight]) # Function to print shortest distance 
from sourcedef print_distance(d, V): print("Vertex","\t","Distance from source") for i in range(1, V + 1): print(i,"\t",d[i]) # Function to compute the SPF algorithmdef shortestPathFaster(S, V): # Create array d to store shortest distance d = [10**9]*(V + 1) # Boolean array to check if vertex # is present in queue or not inQueue = [False]*(V + 1) d[S] = 0 q = deque() q.append(S) inQueue[S] = True while (len(q) > 0): # Take the front vertex from Queue u = q.popleft() inQueue[u] = False # Relaxing all the adjacent edges of # vertex taken from the Queue for i in range(len(graph[u])): v = graph[u][i][0] weight = graph[u][i][1] if (d[v] > d[u] + weight): d[v] = d[u] + weight # Check if vertex v is in Queue or not # if not then append it into the Queue if (inQueue[v] == False): q.append(v) inQueue[v] = True # Print the result print_distance(d, V) # Driver codeif __name__ == '__main__': V = 5 S = 1 # Connect vertex a to b with weight w # addEdge(a, b, w) addEdge(1, 2, 1) addEdge(2, 3, 7) addEdge(2, 4, -2) addEdge(1, 3, 8) addEdge(1, 4, 9) addEdge(3, 4, 3) addEdge(2, 5, 3) addEdge(4, 5, -3) # Calling shortestPathFaster function shortestPathFaster(S, V) # This code is contributed by mohit kumar 29 <script> // JavaScript implementation of SPFA // Graph is stored as vector of vector of pairs// first element of pair store vertex// second element of pair store weight let graph=new Array(100000); // Function to add edges in the graph// connecting a pair of vertex(frm) and weight// to another vertex(to) in graph function addEdge(frm,to,weight) { graph[frm].push([to, weight ]); } // Function to print shortest distance from source function print_distance(d,V) { document.write( "Vertex", " ", "Distance" + " from source" +"<br>" ); for (let i = 1; i <= V; i++) { document.write( i+" "+ d[i]+"<br>"); } } // Function to compute the SPF algorithm function shortestPathFaster(S,V) { // Create array d to store shortest distance let d = new Array(V + 1); // Boolean array to check if vertex // is present in queue or not let inQueue = new Array(V + 1); // Initialize the distance from source to // other vertex as Integer.MAX_VALUE(infinite) for (let i = 0; i <= V; i++) { d[i] = Number.MAX_VALUE; } d[S] = 0; let q = []; q.push(S); inQueue[S] = true; while (q.length!=0) { // Take the front vertex from Queue let u = q[0]; q.shift(); inQueue[u] = false; // Relaxing all the adjacent edges of // vertex taken from the Queue for (let i = 0; i < graph[u].length; i++) { let v = graph[u][i][0]; let weight = graph[u][i][1]; if (d[v] > d[u] + weight) { d[v] = d[u] + weight; // Check if vertex v is in Queue or not // if not then push it into the Queue if (!inQueue[v]) { q.push(v); inQueue[v] = true; } } } } // Print the result print_distance(d, V); } // Driver code let V = 5; let S = 1; for (let i = 0; i < graph.length; i++) { graph[i] = []; } // Connect vertex a to b with weight w // addEdge(a, b, w) addEdge(1, 2, 1); addEdge(2, 3, 7); addEdge(2, 4, -2); addEdge(1, 3, 8); addEdge(1, 4, 9); addEdge(3, 4, 3); addEdge(2, 5, 3); addEdge(4, 5, -3); // Calling shortestPathFaster function shortestPathFaster(S, V); // This code is contributed by unknown2108 </script> Vertex Distance from source 1 0 2 1 3 8 4 -1 5 -4 Time Complexity: Average Time Complexity: O(|E|) Worstcase Time Complexity: O(|V|.|E|) Note: Bound on average runtime has not been proved yet.References: Shortest Path Faster Algorithm 29AjayKumar princiraj1992 mohit kumar 29 unknown2108 ashutoshsinghgeeksforgeeks surinderdawra388 Advanced Data Structure Algorithms Graph Graph 
addEdge(3, 4, 3);    addEdge(2, 5, 3);    addEdge(4, 5, -3);     // Calling shortestPathFaster function    shortestPathFaster(S, V);    // This code is contributed by unknown2108 </script>", "e": 39349, "s": 36846, "text": null }, { "code": null, "e": 39447, "s": 39349, "text": "Vertex Distance from source\n1 0\n2 1\n3 8\n4 -1\n5 -4" }, { "code": null, "e": 39635, "s": 39449, "text": "Time Complexity: Average Time Complexity: O(|E|) Worstcase Time Complexity: O(|V|.|E|) Note: Bound on average runtime has not been proved yet.References: Shortest Path Faster Algorithm " } ]
Data Cleaning and Exploratory Analysis in Python and R | by Sagar Uprety | Towards Data Science
This notebook is about using a combination of Python and R to perform data preproccesing and some exploratory statistical analysis on an untidy dataset. Blogs and articles for upcoming data scientists often harp on the need for using real-world data which is messy and the importance of learning data cleaning and preprocessing for this profession. However, we mostly find blogs and tutorials with standard datasets which skip the pre-processing step. Similarly, real-world data analysis may require a combination of EDA and statistical analysis. For that purpose, it may require that data scientists use both Python and R and switch between them depending upon the micro-task at hand. In this tutorial, I share some steps of data processing and exploratory analysis which came across as part of an experiment during my PhD. I have introduced a toy dataset for this purpose but the structure and messyness of the data is similar to what I encountered. Imagine a retail company with five stores in different geographic locations. Each store has two billing counters. The company is trialing a new product which they have only put at the billing counter. The cashiers are supposed to pitch the item to the customers during billing. The cashiers also ask each customer three questions about the product and then ask if they want to buy that product. The questions are about three attributes of the product and the customers have to answer ‘Yes’ — if they like the attribute or ‘No’ — if they don’t like the attribute. Thus we have 4 columns in our dataset for each copy of the product — 3 for the attributes and 1 for recording whether customers end up buying the item or not. All values are categorical — “Yes” or “No”. However, the way the data has been stored — it is stored in a single csv file but it has 40 columns — each of the 5 stores and each of the two billing counters in each store has a separate record. Thus we get multi-level columns as shown in the screenshot below. There is one more problem with the way in which the data is stored. If you look at the data for Store 1, and compare the two tills, you see that the records are mutually exclusive. That is, for a customer who visits Till 1, the corresponding record for Till 2 is left empty. There could have been some valid or lazy reasons why it was stored this way. But right now, as a data scientist, we have got this dataset to work on. So we begin. We first read our csv file and store it as a pandas dataframe. Note that we use header=[2] to use the third row as header and skip the first two rows. import pandas as pdd1 = pd.read_csv('https://raw.githubusercontent.com/sagaruprety/data_science/master/multi_attribute_buying_behaviour.csv', header=[2])d1.head() We see that pandas has labelled the columns with the ‘.’ extension. So the ‘Buy’ variable for {Store_1, Till_1} remains ‘Buy’, but that of {Store_1, Till_2} is ‘Buy.1’. Similarly, ‘Buy’ variable for {Store_2, Till_1} is ‘Buy.2’. Also note that for any store, an item record is registered as NaN if it was purchased from the other till. In total there are 40 columns here — 4 variables x 5 stores x 2 tills per store. Our final aim of data processessing is to merge item records of all tills and all stores. This will result in a dataframe with only four columns — corresponding to the three attributes and the Buy decision. The first step is to merge the data of two tills of a store into one. 
We iterate over the columns of the dataframe, replace the NaNs with a null string (‘’) and then concatenate the columns corresponding to any two tills of a store. The new dataframe is stored in a different variable d2. columns = ['Attribute_1', 'Attribute_2', 'Attribute_3', 'Buy']num_stores = 5d2 = pd.DataFrame()for col in columns: for i in range(num_stores): if i == 0: d2[col+'.'+str(i)] = d1[col].fillna('') + d1[col+'.1'].fillna('') else: d2[col+'.'+str(i)] = d1[col+'.'+str(2*i)].fillna('') + d1[col+'.'+str(2*i+1)].fillna('')d2.info()<class 'pandas.core.frame.DataFrame'>RangeIndex: 202 entries, 0 to 201Data columns (total 20 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Attribute_1.0 202 non-null object 1 Attribute_1.1 202 non-null object 2 Attribute_1.2 202 non-null object 3 Attribute_1.3 202 non-null object 4 Attribute_1.4 202 non-null object 5 Attribute_2.0 202 non-null object 6 Attribute_2.1 202 non-null object 7 Attribute_2.2 202 non-null object 8 Attribute_2.3 202 non-null object 9 Attribute_2.4 202 non-null object 10 Attribute_3.0 202 non-null object 11 Attribute_3.1 202 non-null object 12 Attribute_3.2 202 non-null object 13 Attribute_3.3 202 non-null object 14 Attribute_3.4 202 non-null object 15 Buy.0 202 non-null object 16 Buy.1 202 non-null object 17 Buy.2 202 non-null object 18 Buy.3 202 non-null object 19 Buy.4 202 non-null objectdtypes: object(20)memory usage: 31.7+ KB As we see above, we have merged the Till level information into Store level. In the next step, we merge the records for all stores into one. We create subsets of dataframes corresponding to the five stores and then append them below each other. num_stores = 5store_dfs = [pd.DataFrame() for _ in range(num_stores)]col_ind = 0for col in columns: for store in range(num_stores): store_dfs[store][col] = d2.iloc[:, col_ind] col_ind+=1store_dfs[4].head() Above is the data frame corresponding to Store 5. Similarly we have dataframes corresponding to each store. Note below that each of them has 202 rows. store_dfs[4].info()<class 'pandas.core.frame.DataFrame'>RangeIndex: 202 entries, 0 to 201Data columns (total 4 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Attribute_1 202 non-null object 1 Attribute_2 202 non-null object 2 Attribute_3 202 non-null object 3 Buy 202 non-null objectdtypes: object(4)memory usage: 6.4+ KB Now we append the dataframes corresponding to each store below each other and reset the index, otherwise the index will repeat in a loop from 0 to 201 for the 5 dataframes. df = store_dfs[0]for i in range(1,num_stores): df = df.append(store_dfs[i])df.reset_index(drop=True, inplace=True)df.info()<class 'pandas.core.frame.DataFrame'>RangeIndex: 1010 entries, 0 to 1009Data columns (total 4 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Attribute_1 1010 non-null object 1 Attribute_2 1010 non-null object 2 Attribute_3 1010 non-null object 3 Buy 1010 non-null objectdtypes: object(4)memory usage: 31.7+ KB Note that the final dataframe has 202 x 5 = 1010 rows and 4 columns. It represents the data for the whole company rather than Store wise and Till wise. Let us now proceed to some basic statistical analysis of the data. R is one of the most popular languages for statistical analysis and we will use that here. Jupyter (which stands for JUlia PYThon R) notebooks allow us to embedd R code snippets in a Python notebook. 
We first need to download the following R package in order to do so: %load_ext rpy2.ipython %R is a magic command which helps us switch from Python to R. Beginning any line of code with %R will enable us to write in R. The following line instructs the machine to take the python variable named df and convert it into an R variable of the same name. So we convert our pandas dataframe into an R dataframe. %R -i df Next, we examine the structure of the R dataframe. This shows that internally the values of the different attributes are stored as strings. %R str(df)'data.frame': 1010 obs. of 4 variables: $ Attribute_1: chr "No" "Yes" "Yes" "No" ... $ Attribute_2: chr "Yes" "Yes" "No" "Yes" ... $ Attribute_3: chr "Yes" "No" "No" "Yes" ... $ Buy : chr "No" "Yes" "No" "No" ... The following lines of code stores the values of different attributes as categorical variables (called factors in R). But we see that there are 3 types of categorical variables. There is a null string which is due to some missing values in the dataset. Note that you can also use the magic command %%R which enables the whole cell to be used for R. %%Rdf$Attribute_1 <- as.factor(df$Attribute_1)df$Attribute_2 <- as.factor(df$Attribute_2)df$Attribute_3 <- as.factor(df$Attribute_3)df$Buy <- as.factor(df$Buy)str(df)'data.frame': 1007 obs. of 4 variables: $ Attribute_1: Factor w/ 2 levels "No","Yes": 1 2 2 1 2 1 1 2 2 2 ... $ Attribute_2: Factor w/ 2 levels "No","Yes": 2 2 1 2 2 2 2 2 2 2 ... $ Attribute_3: Factor w/ 2 levels "No","Yes": 2 1 1 2 2 2 2 1 2 2 ... $ Buy : Factor w/ 2 levels "No","Yes": 1 2 1 1 1 1 1 2 2 2 ... This is something we should have checked in the first place. We can easily do that in pandas. Note the empty values are not NaN but rather empty character literals. So we need to first convert them into NaN and then use pandas dropna function to drop rows with NaN values. import numpy as npdf.replace(to_replace='', value=np.NaN, inplace=True)df.dropna(inplace=True) Now run the above piece of R code again, and we get an R dataframe with 2 levels of categorical variables. %R -i df%R df$Attribute_1 <- as.factor(df$Attribute_1)%R df$Attribute_2 <- as.factor(df$Attribute_2)%R df$Attribute_3 <- as.factor(df$Attribute_3)%R df$Buy <- as.factor(df$Buy)%R str(df)'data.frame': 1007 obs. of 4 variables: $ Attribute_1: Factor w/ 2 levels "No","Yes": 1 2 2 1 2 1 1 2 2 2 ... $ Attribute_2: Factor w/ 2 levels "No","Yes": 2 2 1 2 2 2 2 2 2 2 ... $ Attribute_3: Factor w/ 2 levels "No","Yes": 2 1 1 2 2 2 2 1 2 2 ... $ Buy : Factor w/ 2 levels "No","Yes": 1 2 1 1 1 1 1 2 2 2 ... Now we can perform some analysis on the data. The purpose is to find out which of the three attributes influences the buying decision most. We first use xtabs to calculate cross category frequencies. This gives us a glimpse of the influence of each attribute on the Buy decision. %%Rprint(xtabs(~Buy+Attribute_1, data=df))print(xtabs(~Buy+Attribute_2, data=df))print(xtabs(~Buy+Attribute_3, data=df))Attribute_1Buy No Yes No 372 122 Yes 48 465 Attribute_2Buy No Yes No 267 227 Yes 180 333 Attribute_3Buy No Yes No 272 222 Yes 155 358 Note in the above matrices that Attribute_1 appears to be heavily influencing the buying decision. Almost 80% of customers who like attribute 1 of the new product buy it and almost 90% of those who do not like attribute 1, do not end up buying it. The same cannot be said of the other two attributes where the fractions tend towards 50–50. 
Since we are dealing with categorical variables here, we need to use the binomial family of generalised linear models (GLM) in order to analyse the effect of different attributes. So we fit a logistic regression model into the data. In the summary below, we find the significant influence of attribute 1 in the buying decision as the corresponding p-values is very low. The negative intercept means that if all the attributes have value ‘No’, then the customer is highly unlikely to buy the product, which makes sense. Note that in logistic regression, the output variables ‘Buy’ is converted into a log odds scale. So a negative value actually means the odds are stacked against the customer buying the product. If you are not familiar with the concepts and intution behind logistic regression, refer to this excellent series of videos explaining logistic regression. %%Rlogistic <-glm(Buy~Attribute_1+Attribute_2+Attribute_3, data=df, family = "binomial")print(summary(logistic))Call:glm(formula = Buy ~ Attribute_1 + Attribute_2 + Attribute_3, family = "binomial", data = df)Deviance Residuals: Min 1Q Median 3Q Max -2.0684 -0.5250 0.5005 0.6474 2.4538 Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) -2.9599 0.2241 -13.206 < 2e-16 ***Attribute_1Yes 3.3681 0.1912 17.620 < 2e-16 ***Attribute_2Yes 0.5579 0.1757 3.174 0.0015 ** Attribute_3Yes 1.0479 0.1782 5.881 4.07e-09 ***---Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1(Dispersion parameter for binomial family taken to be 1) Null deviance: 1395.6 on 1006 degrees of freedomResidual deviance: 850.7 on 1003 degrees of freedomAIC: 858.7Number of Fisher Scoring iterations: 5 We want to know whether we should consider all three variables in order to model the data or whether some variable, maybe Attribute_2, does not contribute to the buying decision. We can test this using Anova technique as shown below. The deviance of attribute_1 is much higher than the others. Attributes 2 and 3 do not contribute as much to the buying decision. So if you want to keep the model simpler, attribute_1 can be alone used to make prediction about the buying decision. Although when using all three attributes the residual deviance is the lowest, so the model is best fitted using all the three attributes. %R anova(logistic, test='Chisq') We can make predictions using this fitted model on new customers. For example, we see below that if a customer does not like the first attribute but likes other two, the probability of buying the product is 0.2. %%Rnew_customer <- data.frame(Attribute_1=as.factor('No'), Attribute_2=as.factor('Yes'), Attribute_3=as.factor('Yes'))new_customer_buys <-predict(logistic, new_customer, type='response')paste(new_customer_buys)[1] "0.205188948698921" So this was just an introduction to some real-world data science work. We worked from a very untidy dataset to prepare the data in order to carry out some statistical analysis and predictions. Also, we used a combination of Python and R for the purpose. The model is very simple, with just three atrtibutes. Even then, one of them alone is sufficient to predict the decision. For those of you into behavioural aspects of decision-making, it would be interesting to think whether the predictions will change if the cashier asks the three questions to the customers in different orders. Machine Learning models do not take into account the order of features. We know that human answers are affected by the order of questions asked (Order Effects). 
So, when the features of a machine learning model correspond to human decisions/answers/judgements, ML models need to take into account the ordering of features too when predicting a decision. Thank you for going through this post. I am still learning the trade of data science and analysis and would love to get feedback on all aspects of this post.
[ { "code": null, "e": 858, "s": 172, "text": "This notebook is about using a combination of Python and R to perform data preproccesing and some exploratory statistical analysis on an untidy dataset. Blogs and articles for upcoming data scientists often harp on the need for using real-world data which is messy and the importance of learning data cleaning and preprocessing for this profession. However, we mostly find blogs and tutorials with standard datasets which skip the pre-processing step. Similarly, real-world data analysis may require a combination of EDA and statistical analysis. For that purpose, it may require that data scientists use both Python and R and switch between them depending upon the micro-task at hand." }, { "code": null, "e": 1890, "s": 858, "text": "In this tutorial, I share some steps of data processing and exploratory analysis which came across as part of an experiment during my PhD. I have introduced a toy dataset for this purpose but the structure and messyness of the data is similar to what I encountered. Imagine a retail company with five stores in different geographic locations. Each store has two billing counters. The company is trialing a new product which they have only put at the billing counter. The cashiers are supposed to pitch the item to the customers during billing. The cashiers also ask each customer three questions about the product and then ask if they want to buy that product. The questions are about three attributes of the product and the customers have to answer ‘Yes’ — if they like the attribute or ‘No’ — if they don’t like the attribute. Thus we have 4 columns in our dataset for each copy of the product — 3 for the attributes and 1 for recording whether customers end up buying the item or not. All values are categorical — “Yes” or “No”." }, { "code": null, "e": 2153, "s": 1890, "text": "However, the way the data has been stored — it is stored in a single csv file but it has 40 columns — each of the 5 stores and each of the two billing counters in each store has a separate record. Thus we get multi-level columns as shown in the screenshot below." }, { "code": null, "e": 2578, "s": 2153, "text": "There is one more problem with the way in which the data is stored. If you look at the data for Store 1, and compare the two tills, you see that the records are mutually exclusive. That is, for a customer who visits Till 1, the corresponding record for Till 2 is left empty. There could have been some valid or lazy reasons why it was stored this way. But right now, as a data scientist, we have got this dataset to work on." }, { "code": null, "e": 2742, "s": 2578, "text": "So we begin. We first read our csv file and store it as a pandas dataframe. Note that we use header=[2] to use the third row as header and skip the first two rows." }, { "code": null, "e": 2905, "s": 2742, "text": "import pandas as pdd1 = pd.read_csv('https://raw.githubusercontent.com/sagaruprety/data_science/master/multi_attribute_buying_behaviour.csv', header=[2])d1.head()" }, { "code": null, "e": 3241, "s": 2905, "text": "We see that pandas has labelled the columns with the ‘.’ extension. So the ‘Buy’ variable for {Store_1, Till_1} remains ‘Buy’, but that of {Store_1, Till_2} is ‘Buy.1’. Similarly, ‘Buy’ variable for {Store_2, Till_1} is ‘Buy.2’. Also note that for any store, an item record is registered as NaN if it was purchased from the other till." }, { "code": null, "e": 3322, "s": 3241, "text": "In total there are 40 columns here — 4 variables x 5 stores x 2 tills per store." 
}, { "code": null, "e": 3529, "s": 3322, "text": "Our final aim of data processessing is to merge item records of all tills and all stores. This will result in a dataframe with only four columns — corresponding to the three attributes and the Buy decision." }, { "code": null, "e": 3818, "s": 3529, "text": "The first step is to merge the data of two tills of a store into one. We iterate over the columns of the dataframe, replace the NaNs with a null string (‘’) and then concatenate the columns corresponding to any two tills of a store. The new dataframe is stored in a different variable d2." }, { "code": null, "e": 5225, "s": 3818, "text": "columns = ['Attribute_1', 'Attribute_2', 'Attribute_3', 'Buy']num_stores = 5d2 = pd.DataFrame()for col in columns: for i in range(num_stores): if i == 0: d2[col+'.'+str(i)] = d1[col].fillna('') + d1[col+'.1'].fillna('') else: d2[col+'.'+str(i)] = d1[col+'.'+str(2*i)].fillna('') + d1[col+'.'+str(2*i+1)].fillna('')d2.info()<class 'pandas.core.frame.DataFrame'>RangeIndex: 202 entries, 0 to 201Data columns (total 20 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Attribute_1.0 202 non-null object 1 Attribute_1.1 202 non-null object 2 Attribute_1.2 202 non-null object 3 Attribute_1.3 202 non-null object 4 Attribute_1.4 202 non-null object 5 Attribute_2.0 202 non-null object 6 Attribute_2.1 202 non-null object 7 Attribute_2.2 202 non-null object 8 Attribute_2.3 202 non-null object 9 Attribute_2.4 202 non-null object 10 Attribute_3.0 202 non-null object 11 Attribute_3.1 202 non-null object 12 Attribute_3.2 202 non-null object 13 Attribute_3.3 202 non-null object 14 Attribute_3.4 202 non-null object 15 Buy.0 202 non-null object 16 Buy.1 202 non-null object 17 Buy.2 202 non-null object 18 Buy.3 202 non-null object 19 Buy.4 202 non-null objectdtypes: object(20)memory usage: 31.7+ KB" }, { "code": null, "e": 5470, "s": 5225, "text": "As we see above, we have merged the Till level information into Store level. In the next step, we merge the records for all stores into one. We create subsets of dataframes corresponding to the five stores and then append them below each other." }, { "code": null, "e": 5683, "s": 5470, "text": "num_stores = 5store_dfs = [pd.DataFrame() for _ in range(num_stores)]col_ind = 0for col in columns: for store in range(num_stores): store_dfs[store][col] = d2.iloc[:, col_ind] col_ind+=1store_dfs[4].head()" }, { "code": null, "e": 5834, "s": 5683, "text": "Above is the data frame corresponding to Store 5. Similarly we have dataframes corresponding to each store. Note below that each of them has 202 rows." }, { "code": null, "e": 6233, "s": 5834, "text": "store_dfs[4].info()<class 'pandas.core.frame.DataFrame'>RangeIndex: 202 entries, 0 to 201Data columns (total 4 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Attribute_1 202 non-null object 1 Attribute_2 202 non-null object 2 Attribute_3 202 non-null object 3 Buy 202 non-null objectdtypes: object(4)memory usage: 6.4+ KB" }, { "code": null, "e": 6406, "s": 6233, "text": "Now we append the dataframes corresponding to each store below each other and reset the index, otherwise the index will repeat in a loop from 0 to 201 for the 5 dataframes." 
}, { "code": null, "e": 6913, "s": 6406, "text": "df = store_dfs[0]for i in range(1,num_stores): df = df.append(store_dfs[i])df.reset_index(drop=True, inplace=True)df.info()<class 'pandas.core.frame.DataFrame'>RangeIndex: 1010 entries, 0 to 1009Data columns (total 4 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Attribute_1 1010 non-null object 1 Attribute_2 1010 non-null object 2 Attribute_3 1010 non-null object 3 Buy 1010 non-null objectdtypes: object(4)memory usage: 31.7+ KB" }, { "code": null, "e": 7332, "s": 6913, "text": "Note that the final dataframe has 202 x 5 = 1010 rows and 4 columns. It represents the data for the whole company rather than Store wise and Till wise. Let us now proceed to some basic statistical analysis of the data. R is one of the most popular languages for statistical analysis and we will use that here. Jupyter (which stands for JUlia PYThon R) notebooks allow us to embedd R code snippets in a Python notebook." }, { "code": null, "e": 7401, "s": 7332, "text": "We first need to download the following R package in order to do so:" }, { "code": null, "e": 7424, "s": 7401, "text": "%load_ext rpy2.ipython" }, { "code": null, "e": 7551, "s": 7424, "text": "%R is a magic command which helps us switch from Python to R. Beginning any line of code with %R will enable us to write in R." }, { "code": null, "e": 7737, "s": 7551, "text": "The following line instructs the machine to take the python variable named df and convert it into an R variable of the same name. So we convert our pandas dataframe into an R dataframe." }, { "code": null, "e": 7746, "s": 7737, "text": "%R -i df" }, { "code": null, "e": 7886, "s": 7746, "text": "Next, we examine the structure of the R dataframe. This shows that internally the values of the different attributes are stored as strings." }, { "code": null, "e": 8121, "s": 7886, "text": "%R str(df)'data.frame':\t1010 obs. of 4 variables: $ Attribute_1: chr \"No\" \"Yes\" \"Yes\" \"No\" ... $ Attribute_2: chr \"Yes\" \"Yes\" \"No\" \"Yes\" ... $ Attribute_3: chr \"Yes\" \"No\" \"No\" \"Yes\" ... $ Buy : chr \"No\" \"Yes\" \"No\" \"No\" ..." }, { "code": null, "e": 8374, "s": 8121, "text": "The following lines of code stores the values of different attributes as categorical variables (called factors in R). But we see that there are 3 types of categorical variables. There is a null string which is due to some missing values in the dataset." }, { "code": null, "e": 8470, "s": 8374, "text": "Note that you can also use the magic command %%R which enables the whole cell to be used for R." }, { "code": null, "e": 8957, "s": 8470, "text": "%%Rdf$Attribute_1 <- as.factor(df$Attribute_1)df$Attribute_2 <- as.factor(df$Attribute_2)df$Attribute_3 <- as.factor(df$Attribute_3)df$Buy <- as.factor(df$Buy)str(df)'data.frame':\t1007 obs. of 4 variables: $ Attribute_1: Factor w/ 2 levels \"No\",\"Yes\": 1 2 2 1 2 1 1 2 2 2 ... $ Attribute_2: Factor w/ 2 levels \"No\",\"Yes\": 2 2 1 2 2 2 2 2 2 2 ... $ Attribute_3: Factor w/ 2 levels \"No\",\"Yes\": 2 1 1 2 2 2 2 1 2 2 ... $ Buy : Factor w/ 2 levels \"No\",\"Yes\": 1 2 1 1 1 1 1 2 2 2 ..." }, { "code": null, "e": 9230, "s": 8957, "text": "This is something we should have checked in the first place. We can easily do that in pandas. Note the empty values are not NaN but rather empty character literals. So we need to first convert them into NaN and then use pandas dropna function to drop rows with NaN values." 
}, { "code": null, "e": 9325, "s": 9230, "text": "import numpy as npdf.replace(to_replace='', value=np.NaN, inplace=True)df.dropna(inplace=True)" }, { "code": null, "e": 9432, "s": 9325, "text": "Now run the above piece of R code again, and we get an R dataframe with 2 levels of categorical variables." }, { "code": null, "e": 9939, "s": 9432, "text": "%R -i df%R df$Attribute_1 <- as.factor(df$Attribute_1)%R df$Attribute_2 <- as.factor(df$Attribute_2)%R df$Attribute_3 <- as.factor(df$Attribute_3)%R df$Buy <- as.factor(df$Buy)%R str(df)'data.frame':\t1007 obs. of 4 variables: $ Attribute_1: Factor w/ 2 levels \"No\",\"Yes\": 1 2 2 1 2 1 1 2 2 2 ... $ Attribute_2: Factor w/ 2 levels \"No\",\"Yes\": 2 2 1 2 2 2 2 2 2 2 ... $ Attribute_3: Factor w/ 2 levels \"No\",\"Yes\": 2 1 1 2 2 2 2 1 2 2 ... $ Buy : Factor w/ 2 levels \"No\",\"Yes\": 1 2 1 1 1 1 1 2 2 2 ..." }, { "code": null, "e": 10219, "s": 9939, "text": "Now we can perform some analysis on the data. The purpose is to find out which of the three attributes influences the buying decision most. We first use xtabs to calculate cross category frequencies. This gives us a glimpse of the influence of each attribute on the Buy decision." }, { "code": null, "e": 10500, "s": 10219, "text": "%%Rprint(xtabs(~Buy+Attribute_1, data=df))print(xtabs(~Buy+Attribute_2, data=df))print(xtabs(~Buy+Attribute_3, data=df))Attribute_1Buy No Yes No 372 122 Yes 48 465 Attribute_2Buy No Yes No 267 227 Yes 180 333 Attribute_3Buy No Yes No 272 222 Yes 155 358" }, { "code": null, "e": 10840, "s": 10500, "text": "Note in the above matrices that Attribute_1 appears to be heavily influencing the buying decision. Almost 80% of customers who like attribute 1 of the new product buy it and almost 90% of those who do not like attribute 1, do not end up buying it. The same cannot be said of the other two attributes where the fractions tend towards 50–50." }, { "code": null, "e": 11073, "s": 10840, "text": "Since we are dealing with categorical variables here, we need to use the binomial family of generalised linear models (GLM) in order to analyse the effect of different attributes. So we fit a logistic regression model into the data." }, { "code": null, "e": 11210, "s": 11073, "text": "In the summary below, we find the significant influence of attribute 1 in the buying decision as the corresponding p-values is very low." }, { "code": null, "e": 11553, "s": 11210, "text": "The negative intercept means that if all the attributes have value ‘No’, then the customer is highly unlikely to buy the product, which makes sense. Note that in logistic regression, the output variables ‘Buy’ is converted into a log odds scale. So a negative value actually means the odds are stacked against the customer buying the product." }, { "code": null, "e": 11709, "s": 11553, "text": "If you are not familiar with the concepts and intution behind logistic regression, refer to this excellent series of videos explaining logistic regression." }, { "code": null, "e": 12598, "s": 11709, "text": "%%Rlogistic <-glm(Buy~Attribute_1+Attribute_2+Attribute_3, data=df, family = \"binomial\")print(summary(logistic))Call:glm(formula = Buy ~ Attribute_1 + Attribute_2 + Attribute_3, family = \"binomial\", data = df)Deviance Residuals: Min 1Q Median 3Q Max -2.0684 -0.5250 0.5005 0.6474 2.4538 Coefficients: Estimate Std. 
Error z value Pr(>|z|) (Intercept) -2.9599 0.2241 -13.206 < 2e-16 ***Attribute_1Yes 3.3681 0.1912 17.620 < 2e-16 ***Attribute_2Yes 0.5579 0.1757 3.174 0.0015 ** Attribute_3Yes 1.0479 0.1782 5.881 4.07e-09 ***---Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1(Dispersion parameter for binomial family taken to be 1) Null deviance: 1395.6 on 1006 degrees of freedomResidual deviance: 850.7 on 1003 degrees of freedomAIC: 858.7Number of Fisher Scoring iterations: 5" }, { "code": null, "e": 12832, "s": 12598, "text": "We want to know whether we should consider all three variables in order to model the data or whether some variable, maybe Attribute_2, does not contribute to the buying decision. We can test this using Anova technique as shown below." }, { "code": null, "e": 13217, "s": 12832, "text": "The deviance of attribute_1 is much higher than the others. Attributes 2 and 3 do not contribute as much to the buying decision. So if you want to keep the model simpler, attribute_1 can be alone used to make prediction about the buying decision. Although when using all three attributes the residual deviance is the lowest, so the model is best fitted using all the three attributes." }, { "code": null, "e": 13250, "s": 13217, "text": "%R anova(logistic, test='Chisq')" }, { "code": null, "e": 13462, "s": 13250, "text": "We can make predictions using this fitted model on new customers. For example, we see below that if a customer does not like the first attribute but likes other two, the probability of buying the product is 0.2." }, { "code": null, "e": 13696, "s": 13462, "text": "%%Rnew_customer <- data.frame(Attribute_1=as.factor('No'), Attribute_2=as.factor('Yes'), Attribute_3=as.factor('Yes'))new_customer_buys <-predict(logistic, new_customer, type='response')paste(new_customer_buys)[1] \"0.205188948698921\"" }, { "code": null, "e": 14072, "s": 13696, "text": "So this was just an introduction to some real-world data science work. We worked from a very untidy dataset to prepare the data in order to carry out some statistical analysis and predictions. Also, we used a combination of Python and R for the purpose. The model is very simple, with just three atrtibutes. Even then, one of them alone is sufficient to predict the decision." }, { "code": null, "e": 14635, "s": 14072, "text": "For those of you into behavioural aspects of decision-making, it would be interesting to think whether the predictions will change if the cashier asks the three questions to the customers in different orders. Machine Learning models do not take into account the order of features. We know that human answers are affected by the order of questions asked (Order Effects). So, when the features of a machine learning model correspond to human decisions/answers/judgements, ML models need to take into account the ordering of features too when predicting a decision." } ]
Closest value to K from an unsorted array - GeeksforGeeks
05 May, 2021 Given an array arr[] consisting of N integers and an integer K, the task is to find the array element closest to K. If multiple closest values exist, then print the smallest one. Examples: Input: arr[]={4, 2, 8, 11, 7}, K = 6Output: 7Explanation:The absolute difference between 4 and 6 is |4 – 6| = 2The absolute difference between 2 and 6 is |2 – 6| = 4The absolute difference between 8 and 6 is |8– 6| = 2The absolute difference between 11 and 6 is |11 – 6| = 5The absolute difference between 7 and 6 is |7 – 6| = 1Here, the absolute difference between K(=6) and 7 is minimum. Therefore, the closest value of K(=6) is 7 Input: arr[]={100, 200, 400}, K = 300Output: 200 Approach: The idea is to traverse the given array and print an element of the array which gives the minimum absolute difference with the given integer K. Follow the steps below to solve the problem: Initialize a variable, say res, to store the array element the closest value to K.Traverse the array and compare the absolute value of abs(K – res) and the absolute value of abs(K – arr[i]).If the value of abs(K – res) exceeds abs(K – arr[i]), then update res to arr[i].Finally, print res. Initialize a variable, say res, to store the array element the closest value to K. Traverse the array and compare the absolute value of abs(K – res) and the absolute value of abs(K – arr[i]). If the value of abs(K – res) exceeds abs(K – arr[i]), then update res to arr[i]. Finally, print res. Below is the implementation of the above approach: C++ Java Python3 C# Javascript // C++ program to implement// the above approach #include <bits/stdc++.h>using namespace std; // Function to get// the closest valueint clostVal(int arr[], int N, int K){ // Stores the closest // value to K int res = arr[0]; // Traverse the array for (int i = 1; i < N; i++) { // If absolute difference // of K and res exceeds // absolute difference of K // and current element if (abs(K - res) > abs(K - arr[i])) { res = arr[i]; } } // Return the closest // array element return res;} // Driver Codeint main(){ int arr[] = { 100, 200, 400 }; int K = 300; int N = sizeof(arr) / sizeof(arr[0]); cout << clostVal(arr, N, K);} // Java program to implement// the above approachimport java.io.*; class GFG{ // Function to get// the closest valuestatic int clostVal(int arr[], int N, int K){ // Stores the closest // value to K int res = arr[0]; // Traverse the array for(int i = 1; i < N; i++) { // If absolute difference // of K and res exceeds // absolute difference of K // and current element if (Math.abs(K - res) > Math.abs(K - arr[i])) { res = arr[i]; } } // Return the closest // array element return res;} // Driver Codepublic static void main (String[] args){ int arr[] = { 100, 200, 400 }; int K = 300; int N = arr.length; System.out.print(clostVal(arr, N, K));}} // This code is contributed by code_hunt # Python3 program to implement# the above approach # Function to get# the closest valuedef clostVal(arr, N, K): # Stores the closest # value to K res = arr[0] # Traverse the array for i in range(1, N, 1): # If absolute difference # of K and res exceeds # absolute difference of K # and current element if (abs(K - res) > abs(K - arr[i])): res = arr[i] # Return the closest # array element return res # Driver Codearr = [ 100, 200, 400 ]K = 300N = len(arr) print(clostVal(arr, N, K)) # This code is contributed by susmitakundugoaldanga // C# program to implement// the above approach using System; class GFG{ // Function to get// the closest valuestatic int clostVal(int[] arr, int N, int K){ // 
Stores the closest // value to K int res = arr[0]; // Traverse the array for(int i = 1; i < N; i++) { // If absolute difference // of K and res exceeds // absolute difference of K // and current element if (Math.Abs(K - res) > Math.Abs(K - arr[i])) { res = arr[i]; } } // Return the closest // array element return res;} // Driver Codepublic static void Main (){ int[] arr = { 100, 200, 400 }; int K = 300; int N = arr.Length; Console.WriteLine(clostVal(arr, N, K));}} // This code is contributed by sanjoy_62 <script> // Javascript program to implement// the above approach // Function to get// the closest valuefunction clostVal(arr, N, K){ // Stores the closest // value to K let res = arr[0]; // Traverse the array for(let i = 1; i < N; i++) { // If absolute difference // of K and res exceeds // absolute difference of K // and current element if (Math.abs(K - res) > Math.abs(K - arr[i])) { res = arr[i]; } } // Return the closest // array element return res;} // Driver Code let arr = [ 100, 200, 400 ]; let K = 300; let N = arr.length; document.write(clostVal(arr, N, K)); </script> 200 Time Complexity: O(N) Auxiliary Space: O(1)
[ { "code": null, "e": 24820, "s": 24792, "text": "\n05 May, 2021" }, { "code": null, "e": 24999, "s": 24820, "text": "Given an array arr[] consisting of N integers and an integer K, the task is to find the array element closest to K. If multiple closest values exist, then print the smallest one." }, { "code": null, "e": 25009, "s": 24999, "text": "Examples:" }, { "code": null, "e": 25442, "s": 25009, "text": "Input: arr[]={4, 2, 8, 11, 7}, K = 6Output: 7Explanation:The absolute difference between 4 and 6 is |4 – 6| = 2The absolute difference between 2 and 6 is |2 – 6| = 4The absolute difference between 8 and 6 is |8– 6| = 2The absolute difference between 11 and 6 is |11 – 6| = 5The absolute difference between 7 and 6 is |7 – 6| = 1Here, the absolute difference between K(=6) and 7 is minimum. Therefore, the closest value of K(=6) is 7" }, { "code": null, "e": 25491, "s": 25442, "text": "Input: arr[]={100, 200, 400}, K = 300Output: 200" }, { "code": null, "e": 25690, "s": 25491, "text": "Approach: The idea is to traverse the given array and print an element of the array which gives the minimum absolute difference with the given integer K. Follow the steps below to solve the problem:" }, { "code": null, "e": 25980, "s": 25690, "text": "Initialize a variable, say res, to store the array element the closest value to K.Traverse the array and compare the absolute value of abs(K – res) and the absolute value of abs(K – arr[i]).If the value of abs(K – res) exceeds abs(K – arr[i]), then update res to arr[i].Finally, print res." }, { "code": null, "e": 26063, "s": 25980, "text": "Initialize a variable, say res, to store the array element the closest value to K." }, { "code": null, "e": 26172, "s": 26063, "text": "Traverse the array and compare the absolute value of abs(K – res) and the absolute value of abs(K – arr[i])." }, { "code": null, "e": 26253, "s": 26172, "text": "If the value of abs(K – res) exceeds abs(K – arr[i]), then update res to arr[i]." }, { "code": null, "e": 26273, "s": 26253, "text": "Finally, print res." 
}, { "code": null, "e": 26324, "s": 26273, "text": "Below is the implementation of the above approach:" }, { "code": null, "e": 26328, "s": 26324, "text": "C++" }, { "code": null, "e": 26333, "s": 26328, "text": "Java" }, { "code": null, "e": 26341, "s": 26333, "text": "Python3" }, { "code": null, "e": 26344, "s": 26341, "text": "C#" }, { "code": null, "e": 26355, "s": 26344, "text": "Javascript" }, { "code": "// C++ program to implement// the above approach #include <bits/stdc++.h>using namespace std; // Function to get// the closest valueint clostVal(int arr[], int N, int K){ // Stores the closest // value to K int res = arr[0]; // Traverse the array for (int i = 1; i < N; i++) { // If absolute difference // of K and res exceeds // absolute difference of K // and current element if (abs(K - res) > abs(K - arr[i])) { res = arr[i]; } } // Return the closest // array element return res;} // Driver Codeint main(){ int arr[] = { 100, 200, 400 }; int K = 300; int N = sizeof(arr) / sizeof(arr[0]); cout << clostVal(arr, N, K);}", "e": 27107, "s": 26355, "text": null }, { "code": "// Java program to implement// the above approachimport java.io.*; class GFG{ // Function to get// the closest valuestatic int clostVal(int arr[], int N, int K){ // Stores the closest // value to K int res = arr[0]; // Traverse the array for(int i = 1; i < N; i++) { // If absolute difference // of K and res exceeds // absolute difference of K // and current element if (Math.abs(K - res) > Math.abs(K - arr[i])) { res = arr[i]; } } // Return the closest // array element return res;} // Driver Codepublic static void main (String[] args){ int arr[] = { 100, 200, 400 }; int K = 300; int N = arr.length; System.out.print(clostVal(arr, N, K));}} // This code is contributed by code_hunt", "e": 27951, "s": 27107, "text": null }, { "code": "# Python3 program to implement# the above approach # Function to get# the closest valuedef clostVal(arr, N, K): # Stores the closest # value to K res = arr[0] # Traverse the array for i in range(1, N, 1): # If absolute difference # of K and res exceeds # absolute difference of K # and current element if (abs(K - res) > abs(K - arr[i])): res = arr[i] # Return the closest # array element return res # Driver Codearr = [ 100, 200, 400 ]K = 300N = len(arr) print(clostVal(arr, N, K)) # This code is contributed by susmitakundugoaldanga", "e": 28575, "s": 27951, "text": null }, { "code": "// C# program to implement// the above approach using System; class GFG{ // Function to get// the closest valuestatic int clostVal(int[] arr, int N, int K){ // Stores the closest // value to K int res = arr[0]; // Traverse the array for(int i = 1; i < N; i++) { // If absolute difference // of K and res exceeds // absolute difference of K // and current element if (Math.Abs(K - res) > Math.Abs(K - arr[i])) { res = arr[i]; } } // Return the closest // array element return res;} // Driver Codepublic static void Main (){ int[] arr = { 100, 200, 400 }; int K = 300; int N = arr.Length; Console.WriteLine(clostVal(arr, N, K));}} // This code is contributed by sanjoy_62", "e": 29402, "s": 28575, "text": null }, { "code": "<script> // Javascript program to implement// the above approach // Function to get// the closest valuefunction clostVal(arr, N, K){ // Stores the closest // value to K let res = arr[0]; // Traverse the array for(let i = 1; i < N; i++) { // If absolute difference // of K and res exceeds // absolute difference of K // and current element if (Math.abs(K - res) > Math.abs(K - arr[i])) { res = arr[i]; } } // 
Return the closest // array element return res;} // Driver Code let arr = [ 100, 200, 400 ]; let K = 300; let N = arr.length; document.write(clostVal(arr, N, K)); </script>", "e": 30120, "s": 29402, "text": null }, { "code": null, "e": 30124, "s": 30120, "text": "200" }, { "code": null, "e": 30169, "s": 30126, "text": "Time Complexity: O(N)Auxiliary Space: O(1)" } ]
Binary Search a String in C++
In binary search on a string array, we are given a sorted array of strings and we have to search for a given string in that array using the binary search algorithm.
Input : stringArray = {"I", "Love", "Programming", "point", "tutorials"}. Element = "Programming"
Output : string found at index 2
Explanation : "Programming" is stored at index 2. Note that string comparison is case-sensitive, so in the sorted order the uppercase strings come before the lowercase ones.
Input : stringArray = {"I", "Love", "Programming", "point", "tutorials"}. Element = "coding"
Output : -1 'string not found'
Binary search is a searching technique that works by repeatedly examining the middle element of the array. For an array of strings the binary search algorithm remains the same; only the comparisons become string comparisons. A string comparison compares the first characters of the two strings and, if they are equal, moves on to the next characters, and so on.
arrString : sorted array of strings
Lower = 0 ; upper = n - 1 (n is the length of the array)
Element = string that is to be found
Step 1 : while lower <= upper, do :
Step 2 : mid = lower + (upper - lower) / 2
Step 3 : if arrString[mid] = element, return mid and exit
Step 4 : if arrString[mid] < element, lower = mid + 1
Step 5 : if arrString[mid] > element, upper = mid - 1
Step 6 : if upper < lower, return -1 and exit.
#include<bits/stdc++.h>
using namespace std;
int binarySearchString(string arr[], string x, int n) {
   int lower = 0;
   int upper = n - 1;
   while (lower <= upper) {
      int mid = lower + (upper - lower) / 2;
      if (x == arr[mid])   // match found at mid
         return mid;
      if (x > arr[mid])    // search the right half
         lower = mid + 1;
      else                 // search the left half
         upper = mid - 1;
   }
   return -1;
}
int main () {
   string arr[] = {"I", "Love", "Programming", "point", "tutorials"};
   string x = "Programming";
   int n = sizeof(arr) / sizeof(arr[0]);
   int result = binarySearchString(arr, x, n);
   if(result == -1)
      cout<<("Element not present");
   else
      cout<<("Element found at index ")<<result;
}
Element found at index 2
[ { "code": null, "e": 1218, "s": 1062, "text": "In Binary search a string, we are given a sorted array of strings and we have to search for a string in the array of strings using binary search algorithm." }, { "code": null, "e": 1513, "s": 1218, "text": "Input : stringArray = {“I”, “Love”, “Programming”, “tutorials”, “point”}.\nElement = “programming”\nOutput : string found at index 3\nExplanation : The index of string is 3.\nInput : stringArray = {“I”, “Love”, “Programming”, “tutorials”, “point”}.\nElement = “coding”\nOutput : -1 ‘string not found’" }, { "code": null, "e": 1621, "s": 1513, "text": "Binary search is searching technique that works by finding the middle of the array for finding the element." }, { "code": null, "e": 1902, "s": 1621, "text": "For array of strings also the binary search algorithm will remain the same. But the comparisons that are made will be based on string comparison. String comparison check for the first character of the strings and compare it. characters are same then it goes to the next and so on." }, { "code": null, "e": 2307, "s": 1902, "text": "arrString : array of sorted strings\nLower = 0 ; upper = n (length of array)\nElement = string that is to be found\nStep 1 : while element is not found. Do :\nStep 2 : mid = lower + (upper - lower) / 2 ;\nStep 3 : if arrString[mid] = element , return mid and exit\nStep 4 : if arrString[mid] < element, lower = mid+1\nStep 5 : if arrString[mid] > element, upper = mid-1\nStep 6 : upper < lower , return -1, exit." }, { "code": null, "e": 3024, "s": 2307, "text": "#include<bits/stdc++.h>\nusing namespace std;\nint binarySearchString(string arr[], string x, int n) {\n int lower = 0;\n int upper = n - 1;\n while (lower <= upper) {\n int mid = lower + (upper - lower) / 2;\n int res;\n if (x == (arr[mid]))\n res = 0;\n if (res == 0)\n return mid;\n if (x > (arr[mid]))\n lower = mid + 1;\n else\n upper = mid - 1;\n }\n return -1;\n}\nint main () {\n string arr[] = {\"I\", \"Love\", \"Programming\" , \"tutorials\" , \"point\"};\n string x = \"Programming\";\n int n = 4;\n int result = binarySearchString(arr, x, n);\n if(result == -1)\n cout<<(\"Element not present\");\n else\n cout<<(\"Element found at index \")<<result;\n}" }, { "code": null, "e": 3049, "s": 3024, "text": "Element found at index 2" } ]
SAP Payroll - Quick Guide
SAP Payroll is one of the key modules in SAP Human Capital Management. It is used to calculate the remuneration for each employee with respect to the work performed by them. SAP Payroll covers not only the remuneration part, but also the other benefits that the organization has to provide for employee welfare in accordance with the different laws that apply in any country. These commonly include −
Labor Law
Benefits Law
Contribution Law
Tax Law
Information Law
Reporting Law
Statistics Law
A SAP Payroll system manages the gross and net pay, which also includes the payments and deductions calculated while processing payroll for an employee. The system calculates the payment and all deductions while processing remuneration using different wage types. Once the payroll processing is done, the system carries out different subsequent activities. For example − you can generate various lists related to the remuneration and deductions performed in the system.
The SAP Payroll module is easily integrated with −
Personnel Administration
Time Management
Incentive and Wages
Finance and Accounting
Personnel Administration is used to get the master data and other payroll-related information. By using Time Management, you can get the time-related data used to calculate the remuneration and to run payroll. Incentive and Wages data is used to calculate the incentive wages component in the payroll. A wage type defines the daily payroll for each employee, and an incentive defines the other extra benefits that should be paid to an employee. The expense payable for payroll is posted to a cost center using the integration with the SAP Finance and Accounting module. You can assign the cost to cost centers in the Finance and Accounting module. Here you can also manage the expense for payroll processing of third-party vendors.
Payroll is based on a payroll driver that varies with each country and region. The payroll driver considers the administrative and legal regulations of the country while defining the payroll. While running a payroll, the payroll driver refers to its corresponding payroll schema, which contains a number of different functions. Each function imports data from internal tables and payroll-related files.
The steps in payroll processing are as follows. A payroll system gets the payroll-related data from the system. In case of off-cycle payroll, the system deletes the internal table and imports the last payroll result. The gross wage, shift schedule, and compensation along with the valuation bases are calculated in the system, and the master data relevant to this payroll is added to the calculation. Next is to calculate the partial period factors and salary elements, and to calculate the gross results. Finally, in the last step the system calculates the net remuneration and performs the accounting in case there is any change in the master data from a previously processed payroll. Once this payroll run is completed, the results are transferred to Finance Accounting and evaluation. Then the posting is done for the corresponding cost centers.
A payroll driver is used to run the payroll, and its structure is based on that particular country's laws, as each country has a specific payroll driver. Following are a couple of drivers with their technical names −
RPCALCx0 − Here, x represents the country-specific code, like 'D' for Germany and 'F' for France, etc.
HxxCALC0 − Here, xx represents the ISO code for the country, like ID for Indonesia.
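Before turning to the schema, here is a deliberately simplified sketch of the gross-to-net flow described in the processing steps above. It is only an illustration written in Python — it is not the logic of an SAP payroll driver or schema, and the wage types, tax rate and contribution rate used below are invented assumptions for the example.

# Illustrative only: a toy gross-to-net calculation, not SAP payroll driver logic.
# The wage types and the flat tax/contribution rates are made-up assumptions.
def run_toy_payroll(wage_types, tax_rate=0.20, contribution_rate=0.05):
    # Step 1: "import" the employee's payroll data (here just a dict of wage types)
    gross = sum(wage_types.values())           # Step 2: gross remuneration
    tax = gross * tax_rate                     # a statutory deduction (assumed flat)
    contribution = gross * contribution_rate   # a social contribution (assumed flat)
    net = gross - tax - contribution           # Step 3: net remuneration
    return {"gross": gross, "tax": tax, "contribution": contribution, "net": net}

# Example: basic pay plus an overtime wage type for one employee
print(run_toy_payroll({"basic_pay": 3000.0, "overtime": 250.0}))
# {'gross': 3250.0, 'tax': 650.0, 'contribution': 162.5, 'net': 2437.5}

In a real system these amounts come from the employee's master data and time data, and the deduction rules are the country-specific functions contained in the payroll schema, which is described next.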
HxxCALC0 − Here, xx represents the ISO code for country, like ID for Indonesia. This represents the calculation rules used by the payroll driver. In SAP Payroll system, you have country-specific schemas X000 where X represents the country indicator. The Schema structure consists of the following components − Initialization Step 1 − Includes updating the database Importing the Infotypes Calculating gross pay Step 2 − Processing of time data from time management Off cycle payroll run Payroll accounting of last processed payroll Calculating time related data and calculating gross amount for each employee Performing factors Calculating Net pay Step 3 − Calculating the net remuneration Performing the bank transfers It is also possible to automate the payroll run partially or fully and schedule it to run in the background. SAP recommends a few tasks to be run in the background for better performance. For example − Payroll can be run in the night and you can check the results the next morning. Go to System → Service → Jobs → Define Job or SM36 You can define the job here to let the payroll run to process in the background. These background jobs are processed using a Computing Center Management System (CCMS) in the SAP system. The CCMS can be used to perform the following functions. The configuration and monitoring of this background processing system. Managing and scheduling background jobs in the system. To schedule a background job, enter the Job Name. Enter the job class that defines the priority of the job. You can define three types of priorities − Class A - High Class B - Medium Class C - Low You can also define the system for load balancing in the target filed. If you want the system to select the server automatically for load balancing purpose, you can leave this option blank. If you want the spool request generated from this job to be sent to someone using email, you can mention the same in the Spool list recipient. To define a start condition, click on the Start condition tab, there are various options that you can use to define the Start condition. If you want to create a periodic job, check the box at the bottom left side of the screen as shown in the following screenshot. Define the steps of the job by clicking the Step tab. You can specify the ABAP Program, external command or any external program to be used for each step. The next step is to save the job to submit to background processing system. Note − You have to release a job to make it run. No job even those scheduled for immediate processing, can run without first being released. Off-Cycle activities are carried out to process payroll for an employee on any day unlike payroll run that is a periodic activity and schedule to run at specific time interval. In order to perform Off-Cycle activities, you should define an Off-Cycle activity section in customizing for payroll. Off-Cycle consists of the following areas − It provides a uniform user interface for all the Off-Cycle activities. You can perform the following functions in an Off-Cycle workbench − To make a bonus payment to an employee on a special occasion like a marriage gift, new born baby, etc. To make a bonus payment to an employee on a special occasion like a marriage gift, new born baby, etc. To perform an immediate correction run. For example − Consider where employee master data got modified. To perform an immediate correction run. For example − Consider where employee master data got modified. To pay an absence like a leave in advance. 
To pay an absence like a leave in advance. To process the payments that are added to Payroll Results Adjustment under Infotype 0221. To process the payments that are added to Payroll Results Adjustment under Infotype 0221. Consider a case where the payment was made but not received by an employee. To perform a replacement, you can use Off-Cycle workbench. Consider a case where the payment was made but not received by an employee. To perform a replacement, you can use Off-Cycle workbench. If you want to reverse a payroll result. If you want to reverse a payroll result. This is one of the key components that allows you to check the previous payroll run results for an employee within an Off-Cycle workbench. In the Off-Cycle workbench, go to History tab to display an extract from the payroll which contains all the necessary information of an employee payroll. It also shows details of all the payments that are replaced with a check along with any payroll’s which are reversed are also mentioned here. If you want to check any further details on an employee payroll, you can check the remuneration statement for the employee for a specific payroll period. You can also check the following details about the payment made in the History tab under workbench − For reverse payment you can check the reason for reversal and person who has carried out reversal payment. For reverse payment you can check the reason for reversal and person who has carried out reversal payment. To check the replace payment details, you can find which payments are replaced and by which check number. To check the replace payment details, you can find which payments are replaced and by which check number. Details of check number, bank name, etc. Details of check number, bank name, etc. Note that to view the remuneration statement of a payroll → select the result and choose → Remuneration Statement. This is used to further process the Off-Cycle payroll results, a payment reversal or repayment, etc. When a bonus payment is made using a workbench, a replaced or reverse payroll, remuneration statement should be generated and results from payroll run should be posted to Accounting. All the details related to Off-Cycle payroll, reverse payment or repayment is stored in table T52OCG and is available in the report H99LT52OCG and this report is available in Off-Cycle menu. Subsequent processing is performed by running one or more batch reports and to ensure that subsequent processing is performed in the correct sequence. You should schedule the report for a Batch Subsequent Processing in the international standard system as regular background jobs. With scheduling report that subsequent processing is conducted regularly and on time. Process model is used to define a subsequent program and order in which they run. When you select a report for Batch Subsequent Processing, you also have to define the process model report that should be used. Off–Cycle subsequent processing, it is possible to schedule the batch report in background job with process model or you can also call it in a workbench menu and run it from there. According to the function executed in an Off-Cycle workbench, different activities are performed. For example − Consider replacing a payment. Runs Preliminary Program Data Medium Exchange Indicates each payment replacement with a key composed of program run date and the indicator feature CYYYP. 
Enters the details in the indicator table for off-cycle batch processing Runs the batch report for subsequent processing of check replacement as a background job at the time that you have scheduled for the regular processing of the report Reads indicator table Runs the process model that you have specified in the report variant Prints new checks To maintain a master data in SAP system, there are different Infotypes defined in SAP system for Personnel Administration and payroll. Company insurance Group accident insurance Life insurance Supplementary insurance Not liable Risk Risk/pension Nursing care Sick pay Sports club Medical Union, etc. Payroll system consists of Date Specifications and monitoring of tasks Infotypes. Using monitoring of task, you can set automatic monitoring of tasks for HR activities and system suggest a date when you want to be reminded of the stored tasks. This is stored in Infotype 0041 and date type defines the type of information. You can create series of reports on specific date type. You can use this Infotype to run Payroll and also to maintain leave. In a standard payroll system, it contains 12 combinations of date type and date and to add more date specification for an employee at the same time, you can use time constraint 3. You can also create an automatic monitoring of all HR related tasks that includes follow up activities to be performed and it is maintained in Infotype 0019. System suggests a date according to task type on which you will be reminded and this allows you to perform follow up activities as per the required schedule. The reminder date in the system is used to determine when you want to be reminded for a task type. Reminder date can be defined based on this criteria − When you select a task type, if the operator indicator has a blank or negative (-) value then reminder should be set before the task data. If the Operator indicator has a positive (+) value, reminder date shouldn’t be before the task date. Note − Payroll system also suggests a reminder date for each task independent of task type and you can change this at any point of time. Following are a few task types that can be added under monitoring − Temporary contract Expiry of inactive contract Expiry of temporary contract Next appraisal Pay scale jump End of maternity protection End of maternity leave Start of maternity protection Training period Dismissal protection Personal interview Vaccination date Follow-up medical Submit SI statement Work permit Work permit expires End of leave of absence Expiry of probation This contains Infotype related to employee’s previous/other work experience, education and training and qualification. Other/Previous Employers (0023) Education and training (0022) Qualifications (0024) This is used to store other employer contract of an employee. You can store the information where an employee works or has worked before working for your company. To enter multiple employer details, you can add multiple data records and validity period for each employee. Enter the employer’s name and the country for each employer. The following information can be stored in this Infotype − City HQ – where the company is based Industry in which company is active Job role that an employee or applicant carried out or carries out Type of work contract with other employer This is used to store employee/application qualification details in this Infotype. Incase to store information on more than one qualification for an employee, you can also create multiple data records in this. 
Each qualification type is identified by a key and you can also add proficiency level for each qualification. Proficiency level defines the knowledge and skill of an employee on a qualification. Proficiency level can be defined in the following order − Proficiency 0 means non-valuated Proficiency 0 means non-valuated Proficiency 1 means very poor ... Proficiency 1 means very poor ... Proficiency 5 means Average ... Proficiency 5 means Average ... Proficiency 9 means excellent Proficiency 9 means excellent This is used to store education details of an employee/applicant. To store the details about the complete education and training history of an employee/applicant, you have to create as many data records as necessary for the respective subtypes of this Infotype. You can enter the respective dates of the training period as the validity period. The following subtypes can be created for each education establishment type − Institute/Place − This contains institute details like University, college name, etc. Institute/Place − This contains institute details like University, college name, etc. Country Key − It is used to contain the country in which the education/training institution is based. Country Key − It is used to contain the country in which the education/training institution is based. Certificate − This is used to maintain possible leaving certificates in relation to the educational establishment type specified. Certificate − This is used to maintain possible leaving certificates in relation to the educational establishment type specified. Duration of Course − This is used to specify the length of each course of study. Duration of Course − This is used to specify the length of each course of study. Final Marks Final Marks Branch of Study − This includes the specialization of education like ECE, Computers, Mechanical, etc. Branch of Study − This includes the specialization of education like ECE, Computers, Mechanical, etc. This is used to an employee’s communication id for a certain type of communication. You can define various subtypes under this Infotype to maintain communication details of an employee. The following subtypes can be defined − Credit Card number Internet Address Voice Mail Fax, etc. This subtype is used to store the employee’s credit card number for clearing, so the items booked on a credit card should be assigned to a personnel number in the system. This is more helpful incase an employee contains multiple credit cards or credit cards from different credit card companies. You can also maintain different card numbers for different companies – first two positions of the ID/number field have been defined with an ID code that corresponds to the individual credit card companies. This contains Infotype for test procedure and contains the test procedure for your employee. Test procedure includes test procedure key and release date. You can store the following information in Infotype 0130. All this information is defined by a system and cannot be entered − Date Time Releaser User ID Program to implement release When a test procedure is performed for an employee up to a certain release date, then write authorization may no longer be performed which involves changing certain Infotype data with validity start date is before the release date. Personal Data − This is used to maintain personal information for an employee in different Infotypes. This is used to store the address information of an employee. Various subtypes can be maintained under the Address Infotype. 
Permanent Address
Residence Address
Home Address
Mailing Address
This Infotype is used to maintain the bank account details needed to process the net pay and travel expense payments from payroll in the HR module.
This Infotype is used to maintain legal obligations for severely challenged persons. Different subtypes can be defined under this Infotype −
Challenge Group
Degree of Challenge
Credit Factor
Type of Challenge
This Infotype is used to store the information for identifying an employee. For example − Name, Marital status, Nationality, etc.
This Infotype is used to maintain details of an employee's family members and relatives. The following relationship types can be maintained in the standard system −
Spouse
Divorced spouse
Father
Mother
Child
Legal guardian
Guardian and Related persons
Emergency contact
This Infotype is used to store the data related to an employee's medical examinations. Various subtypes can be defined under this −
Blood Group
Habits
Vision
Allergy
Hearing Test
Nervous system, etc.
Using the Actions Infotype you can combine several Infotypes into one group. You can use a Personnel action for the following purposes −
Hiring an Employee
To change the assignment of an employee
To perform a pay change
Employee leaving the organization
This section contains the following Infotypes −
This contains the general instructions that an employee is supposed to follow – data protection, accident prevention, other instructions, etc.
This Infotype is used to maintain an employee's corporate function, such as work council member.
This Infotype is used to maintain data on the company car, employee identification and work center.
This is used to compare three personnel numbers while processing the payroll. When an employee loses a bonus or night work allowance because of his involvement in the work council, this is used to process his bonus by comparing him with similar personnel. This Infotype is maintained only for those employees who are involved in a work council function.
This Infotype is used to maintain details of all the assets that have been provided to an employee on loan. You can define the following subtypes under this −
Key(s)
Clothing
Books
Tool(s)
Plant ID
This Infotype is used to store the data related to an employee's employment contract. While creating a record for the Contract Elements Infotype (0016), the system suggests default values for the following fields −
Contract type
Sick pay
Probation period
Continued pay
Notice period for EE
Notice period for ER
These default values are determined by the company code, personnel area and employee group/subgroup in the Organizational Assignment Infotype (0001).
This Infotype is used to store any special authority/privilege that has been assigned to an employee − Power of Attorney. Different subtypes can be defined under this −
Commercial power of attorney
General commercial power of attorney
Power of attorney to perform banking transactions
Pay scale grouping for allowances is performed to add similar types of employees to a group, and similar characteristics are applied to each group. This is used to determine the compensation structure as per the grouping, the payroll processing procedure, and the value of compensation for an employee. While defining the payroll processing, grouping is the first step that is performed. Wage types cannot be defined until you define the pay scale grouping for allowances.
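As a rough illustration of how an employee ends up in a pay scale grouping for allowances once the pay parameters listed below are maintained, here is a minimal lookup sketch. The grouping codes and parameter values are invented for this example and are not taken from any SAP table.

```python
# Hypothetical sketch: resolving a pay scale grouping for allowances from the
# pay parameters described in this section. In the SAP system this assignment
# happens implicitly once the parameters are maintained in the Organizational
# Assignment (0001) and Basic Pay (0008) Infotypes; the codes below are invented.

ALLOWANCE_GROUPING = {
    # (pay scale area, pay scale type, pay scale group, pay scale level) -> grouping
    ("IN01", "01", "GRD1", "L1"): "GR01",
    ("IN02", "01", "GRD2", "L3"): "GR02",
}

def resolve_grouping(ps_area, ps_type, ps_group, ps_level):
    """Return the allowance grouping an employee falls into (illustrative only)."""
    key = (ps_area, ps_type, ps_group, ps_level)
    if key not in ALLOWANCE_GROUPING:
        raise ValueError(f"No allowance grouping configured for {key}")
    return ALLOWANCE_GROUPING[key]

print(resolve_grouping("IN01", "01", "GRD1", "L1"))  # -> GR01
```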
Pay scale grouping for allowances is defined based on few parameters − Pay scale area Pay scale type Pay scale group Pay scale level As an example, consider a company with offices in Hyderabad, Bangalore, Mumbai, Delhi and Chennai. Now the employee location where he is located affects the compensation to a certain level. In this case, it is possible to assign the different cities to pay scale area and hence pay scale area becomes a key pay parameter to create pay scale grouping for allowances. In a similar way you can define other pay scale parameters depends on various factors. In SAP Easy access menu → SPRO → IMG → Personnel Management → Personnel Administration → Payroll data → Basic Pay → Define EE Subgroup Grouping for PCR and Coll.Agrmt.Prov. It will show you the list of EE group, the EE group name and different fields associated with it. If you want to change it, this can be done here. Pay Scale Grouping for Allowances is not define in any of the Infotypes. You can’t put an employee directly to a pay scale grouping for allowances. When you define five different pay parameters, an employee is directly assigned to a pay scale grouping for allowances. By entering an Employee Group and Employee Subgroup in Organizational Assignment Infotype (0001) and pay scale area, pay scale type, pay scale group, pay scale level in the Basic Pay Infotype (0008), it adds the employee to a pay scale grouping for allowances automatically. So, the pay scale grouping is defined as an assignment of the pay parameters. Go to SPRO → IMG → Payroll → Payroll: India → Assign Pay scale grouping for allowances. In the next window that comes up, you can see the associated pay parameters to pay scale grouping. PS Area PS Type PS Group PS Level Pay Scale grouping for allowances can decide the following objects in Payroll − Wage types Basic salary and increments Dearness Allowance Housing and Car & Conveyance Recurring allowances and deductions Reimbursements, Allowances and Perks Leave Travel Allowance Gratuity Superannuation Long Term Reimbursements Rounding off Recovery Provident Fund Mid-Year Go Live data is used in countries where payroll is implemented in the middle of financial year. This is used for transferring legacy payroll data to the SAP System and also for creating payroll results from the transferred legacy data. For Example − You can consider a case for India where income tax assessment year is performed from 1st April − 31st March. Now to implement SAP Payroll India in the middle of a Financial Year, there is a need to transfer payroll results for those periods of the financial year that lie before that period. This is defined as a period for which the payroll results are available and need to be transferred to the SAP system. This period is defined as the term when you process the first productive payroll run. This is used to rehire an employee by using the same Personnel number as used in the time of last employment or within same financial year. The action type associated with this is – Reentry into the company. In case of rehiring an employee, if previous records are not delimited, you will have to delimit the previous records and there is a need to create new entries. The following Infotype value needs to be updated for this action type − Recurring Payments/Deductions (0014) Recurring Payments/Deductions (0014) Organizational Assignment (0001) Organizational Assignment (0001) Membership Fees (0057), Example: sports club, Union, etc. Membership Fees (0057), Example: sports club, Union, etc. 
Family Member/Dependents (0021) Family Member/Dependents (0021) Other Statutory Deductions (0588) Other Statutory Deductions (0588) Long term reimbursements (0590) Long term reimbursements (0590) Housing (0581), Example − HRA, Company owned, etc. Housing (0581), Example − HRA, Company owned, etc. While running the payroll for a rehired employee, payroll function checks the status of the rehired employee’s employment in the system. If the system is showing the present status as active preceded with withdrawn and active status within same Financial Year, this represents that the employee is rehired. The status of an employee’s employment is maintained in the internal table COCD. To check the previous payroll data for a rehired employee – earning, deductions, and exemptions, this can be checked using the Results Table (RT) and the Cumulative Results Table (CRT). The Payroll function INPET is used to process the previous employment tax details. The following wage types are generated − Wage Type /4V1 to /4V9 − This is created to maintain details of the employee’s employment in the other company in the same Financial Year. Wage Type /4V1 to /4V9 − This is created to maintain details of the employee’s employment in the other company in the same Financial Year. Wage Type /4VA to /4Vg (From internal table 16) − This is created to maintain employee’s previous employment details in the same company in the same Financial Year. Wage Type /4VA to /4Vg (From internal table 16) − This is created to maintain employee’s previous employment details in the same company in the same Financial Year. The following components of the employee’s tax is calculated for a rehired employee − Tax Exemptions on − House Rent Allowance (HRA) (Metro or non-Metro) Leave Travel Allowance (LTA) Child Education Allowance or Tuition fee Child Hostel Allowance (CHA) Tax Exemptions on − House Rent Allowance (HRA) (Metro or non-Metro) House Rent Allowance (HRA) (Metro or non-Metro) Leave Travel Allowance (LTA) Leave Travel Allowance (LTA) Child Education Allowance or Tuition fee Child Education Allowance or Tuition fee Child Hostel Allowance (CHA) Child Hostel Allowance (CHA) The following perquisites are checked before calculating tax − Company Owned Accommodation Company Paid/Leased Accommodation Loans The Payroll system also checks the below deductions for the employee − Deductions under section 80 Section 89 relief Professional Tax Labor Welfare Fund (LWF), etc. Employee State Insurance (ESI) EPF Provident Fund and Pension Fund A Split Payroll is run for the following periods – First of the month to one day before the employee is rehired. And from the date of rehiring to the end of the month. When an employee is rehired on any day other than the first, a split payroll is enabled. Go to SPRO → IMG → Payroll → Payroll India → Basic Settings → Enable Split Payroll Run. In a new window, you will see the list of all split payroll in the system. To create a new entry, click on the New Entries tab at the top left hand side of the screen. Enter the values: Act. 12 stands for re-entry into the company. In a similar way, you can select the other fields as well. Once you enter all the details, click the save icon at the top left hand side of the screen. An example of rehiring and payroll run − An employee left a company on May 17, 2015 and was rehired on Nov 25, 2015. In this case, November payroll will be run twice. For the period between Nov 1, 2015 and Nov 24, 2015. For the period between Nov 1, 2015 and Nov 24, 2015. 
For the period between Nov 25, 2015 to Nov 30, 2015. For the period between Nov 25, 2015 to Nov 30, 2015. Indirect Evaluation is used to calculate payroll for some specific wage types that are defaulted under the Basic Pay Infotype (0008) or Infotype 0014 or 001 (recurring payment/deductions or Additional payments). Note − While using indirect evaluation, it is also possible to calculate INVAL as numbers instead of using value as amount considering the wage type configured correctly. For example − You can configure INVAL for an employee to be eligible for 10 liters of petrol each month. This represents INVAL as number. There are quite a few types of variants for indirect evaluation, which are − Variant A − This is used to calculate the wage type value as a fixed amount. Variant A − This is used to calculate the wage type value as a fixed amount. Variant B − This is used to calculate the amount as percentage of the base wage type added to a fixed amount. In this, multiple amounts with same or different percentage of the base wage type, can be calculated for an INVAL wage type. In this case, the amount that will be Indirectly Evaluated will be the sum of all such calculated amounts added to a fixed amount. Variant B − This is used to calculate the amount as percentage of the base wage type added to a fixed amount. In this, multiple amounts with same or different percentage of the base wage type, can be calculated for an INVAL wage type. In this case, the amount that will be Indirectly Evaluated will be the sum of all such calculated amounts added to a fixed amount. For example − Wage type M230, consider the following different INVAL B amounts. 10% of MB10 30% of M220 Fixed amount of Rs.1000 So in this scenario, wage type M230 will have INVAL amount as sum of a, b and c. Variant C − This is used to calculate the amount as a percentage of a base wage type subject to a maximum limit. More than one such amount, with same or different percentage of the base wage type, can be calculated for an INVAL wage type. In this case, the amount that will be Indirectly Evaluated will be the sum of all such calculated amounts, subject to a maximum limit. Variant C − This is used to calculate the amount as a percentage of a base wage type subject to a maximum limit. More than one such amount, with same or different percentage of the base wage type, can be calculated for an INVAL wage type. In this case, the amount that will be Indirectly Evaluated will be the sum of all such calculated amounts, subject to a maximum limit. For example − Wage type M230, consider the following different INVAL C amounts. 15% of MB10 20% of M220 Limit of Rs.4000 In this scenario, INVAL amount for the wage type M230 will be the sum of a, and b subject to a maximum of c. Variant D − This is used to calculate the amount as one or any combination of the following INVAL Module variants based on Basic salary slabs. Variant D − This is used to calculate the amount as one or any combination of the following INVAL Module variants based on Basic salary slabs. This is used to calculate the fixed amount and the percentage of the basic slab. This is done by first calculating the percentage of a base wage type added to a fixed amount. And then secondly, the percentage of a base wage type which is subject to a maximum limit. The Gross Part of the Payroll is used to determine an employee’s gross pay as per the contractual requirements and consists of payments and deductions. 
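Before turning to the individual gross components, the indirect evaluation variants B and C described above are easiest to follow with numbers. The sketch below reuses the wage types and percentages quoted in the text (MB10, M220, the 10%/30% split with a fixed Rs. 1000, and the 15%/20% split with a Rs. 4000 limit); the base amounts assumed for MB10 and M220 and the functions themselves are illustrative only, not SAP's implementation.

```python
# Illustrative sketch of indirect evaluation (INVAL) variants B and C for wage
# type M230, using the figures quoted in the text. The monthly base amounts for
# MB10 and M220 are assumed values.

base_amounts = {"MB10": 20000.0, "M220": 5000.0}  # assumed monthly base amounts

def inval_variant_b(components, fixed_amount):
    """Variant B: sum of percentages of base wage types plus a fixed amount."""
    total = sum(base_amounts[wt] * pct / 100 for wt, pct in components)
    return total + fixed_amount

def inval_variant_c(components, limit):
    """Variant C: sum of percentages of base wage types, capped at a maximum limit."""
    total = sum(base_amounts[wt] * pct / 100 for wt, pct in components)
    return min(total, limit)

# Variant B example from the text: 10% of MB10 + 30% of M220 + Rs. 1000 fixed.
print(inval_variant_b([("MB10", 10), ("M220", 30)], fixed_amount=1000.0))  # 4500.0

# Variant C example from the text: 15% of MB10 + 20% of M220, limited to Rs. 4000.
print(inval_variant_c([("MB10", 15), ("M220", 20)], limit=4000.0))          # 4000.0
```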
The Gross Pay consists of different components, which include −
Basic Pay
Dearness Allowance
Variation allowance
Bonuses
Provident fund
Gratuity
Then there are different deductions that are made as per the employee enrollment. These deductions include company owned apartment (COA), company sponsored day care, and other deductions. All these factors are based on a country's legal labor rules and determine the gross taxable income of the employee.
Wage type is one of the key components in payroll processing. Based on the way they store information, wage types can be divided into the following two categories −
A Primary wage type is defined as a wage type for which data is entered in an Infotype. The Primary wage types are created by copying the model wage types provided by SAP. There are different types of primary wage types −
Time wage type − Time wage type is used to store the time related information. This wage type is used to combine payroll and time management. A time wage type is generated at the time of time evaluation and is configured through table T510S or using a custom PCR.
Dialogue wage type − This wage type includes basic pay IT0008, recurring payments and deductions IT0014, and additional payments IT0015.
The Secondary wage types are predefined wage types in the SAP system and start with a '/'. These wage types are created during the payroll run. They are system generated and cannot be maintained online. For example − /559 Bank Transfer.
The key elements of a wage type include −
Amount AMT
Rate RTE
Number NUM
As per the processing type, each element can have one, two or all element values. For example − The basic pay can have a Rate and a Number, however a bonus pay can only have an amount.
The payment includes all the payments given to an employee according to the employment contract and any voluntary payment paid. Together, these payments make up the employee's gross remuneration. The gross remuneration forms the basis for calculating social insurance and tax payments, and also for the calculation of the net remuneration. The payment is defined in terms of the following components in the SAP Payroll system −
Basic Pay − This component consists of the fixed wage and other salary elements and is paid to the employee for each payroll period. The details are entered in the Basic Pay (0008) Infotype.
Recurring Payments and Deductions − Recurring payments and deductions include components like overtime, leave or other components. This information is maintained in the Recurring Payments and Deductions Infotype (0014).
Additional Payments − There are many components in the payment section, which are not paid in each payroll period. This information is added to the Additional Payments Infotype (0015).
Time management is one of the key components in Payroll that is used to calculate the gross salary of the employees. Monetary benefits are determined by the work schedule and planned working hours.
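To make the wage type elements described above (Amount, Rate, Number) concrete, here is a minimal sketch of how a time wage type can be valued from rate and number, while a one-off payment carries only an amount. The wage type codes and figures are invented for illustration and do not correspond to any delivered SAP wage types.

```python
# Illustration of the three wage type elements: amount (AMT), rate (RTE) and
# number (NUM). A time wage type such as overtime typically carries RTE and NUM
# and its amount is derived, while a bonus carries only AMT. Codes are invented.

from dataclasses import dataclass
from typing import Optional

@dataclass
class WageType:
    code: str
    amount: Optional[float] = None   # AMT
    rate: Optional[float] = None     # RTE
    number: Optional[float] = None   # NUM

    def value(self) -> float:
        """Return the payable amount for this wage type."""
        if self.amount is not None:
            return self.amount
        if self.rate is not None and self.number is not None:
            return self.rate * self.number
        raise ValueError(f"{self.code}: not enough elements to value")

overtime = WageType("MM10", rate=250.0, number=8)   # 8 overtime hours at Rs. 250/hour
bonus = WageType("M230", amount=5000.0)             # one-off bonus amount

print(overtime.value())  # 2000.0
print(bonus.value())     # 5000.0
```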
The time management integration with payroll is used to determine wage types like bonuses for overtime, night/odd hour work allowance, work on holiday, etc. You can also use Time Data Recording and Administration Component Integration with Time Management component to find out time data information for employees and further to determine the time wage types. When you use this time evaluation component Integration with Time Management component, this is used to find time wage types determined by Time Evaluation. This component is used to ensure that an employee shouldn’t get financial disadvantage, if he/she is working in odd hours or if a shift time is changed for them. For example − An employee’s planned working time is changed and he is facing a financial disadvantage, he or she is paid on the basis of the original working time – like an employee gets his shift changed from a night shift with night shift bonuses to an early shift. If an employee’s shift time is changed and that employee will be benefited financially, he or she is paid on the basis of the changed working time. Consider an employee whose shift changes from Friday to a Sunday with Sunday bonuses. In this case, a shift change compensation will be listed under the remuneration statement. It is also possible to limit the payment of a shift change compensation for a particular category. This is used to process the manually calculated wages, bonus or non-standard wage types. This component provides information on payroll with time and person related time wage types. Time wage type is used to perform the financial evaluation of work performed on a payroll. Wage type is one of the key components in payroll processing. Based on the way they store information, they can be defined as Primary and Secondary Wage types. During the payroll run, the primary wage types are provided with the values and secondary wage types are formed at the time of the payroll run. You can check the characteristics of a wage type by going to the following path − SPRO → IMG → Personnel Management → Personnel Administration → Info Type → Wage types → Wage type catalog → Wage type characteristics With the release of 4.6B, processing of averages has been changed. Processing of averages depends on the country and release and with countries like Argentina, Brazil and a few other, a new processing is released with version 4.5B. At one time, you can only use any of these two versions, if you are using an old version, you can continue to use the same version and there is no need to move to the new version, but the older version is not under development. The technical processing of averages can be configured as shown in the following steps − SPRO → IMG → Payroll → Payroll India → Time wage type valuation → Averages → Bases for valuation of Averages You should check the following perquisites for this − Forming the basis for calculating average values Definition of calculation rules for averages Assignment of calculation rules to wage types To create a new technical processing of average, click on New Entries In a new window, define the different rules as mentioned above and click on the save icon at the top. To maintain incentive wages, different accounting processes are defined in standard SAP system. In this Wage Type, target time for each ticket is calculated using a piecework rate. This is used to calculate the amount for time ticket that the employee is due. 
This amount consists of −
Basic Monthly Pay − This defines the gross amount that is paid to the employees irrespective of their performance. It can be paid as a monthly sum or in terms of hourly pay, as per their contract.
Time Dependent Variable Pay − This is used to define a pay scale rate that is different from the master pay scale rate for an employee. It is possible that an employee is remunerated at a higher rate than the master pay rate for specific activities. You have to enter the higher pay scale into the time ticket.
Performance Dependent Variable Pay − This is credited when an employee completes the work in less time than the target time. The difference between the target time and the actual time is mentioned on the time ticket.
For hourly wage earners, this is similar to the monthly wage calculation, with the only difference that the wage is specified as an hourly wage from the start, so you don't need to convert the monthly basic wage into an hourly wage.
Personnel calculation schemas − There are two schemas for the valuation of time tickets for incentive wages −
German Version DIW0 − This contains special features that are specific to Germany only.
International Version XIW00 − You can use schema XIW00 to set up your own incentive wage accounting rules for different countries. As the valuation of time tickets varies across countries and organizations, there are no country-specific accounting schemas for it.
This component is used to determine the correct remuneration when an employee works for less than the full payroll period. You can use factoring in the following cases −
When an employee leaves, joins or remains absent for a specific period of time.
When there is a change in the basic pay, substitution, work reassignment or change in the personal work schedule.
To find the correct remuneration for an employee, the remuneration amount is multiplied by a partial period factor, which is based on different methods −
Payment method
Deduction method
PWS method
Hybrid method
Each payroll system contains a few factoring rules that are needed to determine the partial period factor. These rules can be customized to meet specific requirements in the company.
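Before the detailed factoring example that follows, here is a simplified sketch of a partial period factor. The proration shown (worked days divided by planned working days, in the spirit of the PWS method) and the sample figures are assumptions for illustration; actual factoring rules are defined in personnel calculation rules and can differ per wage type.

```python
# Simplified sketch of a partial period factor: the fixed monthly remuneration
# is multiplied by the share of the period the employee actually worked.
# Real factoring in SAP is driven by personnel calculation rules; this only
# mirrors the basic idea with sample numbers.

def partial_period_factor(worked_days: int, planned_days: int) -> float:
    if planned_days <= 0:
        raise ValueError("planned_days must be positive")
    return worked_days / planned_days

def prorated_pay(monthly_pay: float, worked_days: int, planned_days: int) -> float:
    return round(monthly_pay * partial_period_factor(worked_days, planned_days), 2)

# Sample values: an employee works 2 of 20 planned workdays in February...
print(prorated_pay(40000.0, worked_days=2, planned_days=20))   # 4000.0
# ...and 2 of 23 planned workdays in March.
print(prorated_pay(40000.0, worked_days=2, planned_days=23))   # 3478.26
```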
The following Infotypes are calculated for partial period remuneration − This factor is used to calculate the partial remuneration. This is defined as a variable value which is calculated using different formulas as per the company and the circumstances. While customizing, partial period factors are defined in a personnel calculation rules for specific situations and assigned to wage types for particular periods. When you multiply the partial period factor by the fixed remuneration amount, this gives you partial period remuneration amount to be paid for a specific period. For example − Consider an employee who was on an unpaid leave from 3rd February to 29th March, this means that the employee has worked for 2 days in February and 2 days in March considering 20 workdays in Feb and 23 workdays in Mar. Now consider reduction using the following different ways − Partial Period Factor Basic Remuneration Now if you use the payment method, the employee receives the same remuneration for both the months. If you use the deduction method, the employee is overpaid in Feb and underpaid in March. If the PWS method is used, the employee receives more salary in Feb as compared to March, however the difference is negligible. This component is used to determine an employee’s gross and net income and various components that effects the net income of an employee. It consists of the following components − Personnel Administration Payroll South Africa Payroll Australia The following Infotypes should be configured before setting up the salary package for an employee. The following are Infotypes are country specific and valid only for a few countries only − Actions (Infotype 0000) Addresses (Infotype 0006) Basic Pay (Infotype 0008) Organizational Assignment (Infotype 0001) Personal Data (Infotype 0002) Bank Details (Infotype 0009) Planned Working Time (Infotype 0007) Social Insurance SA (Infotype 0150) (only for South Africa) Superannuation (Infotype 0220) (only for Australia) Taxes SA (Infotype 0149) (only for South Africa) You can find the Salary Packaging SPRO → IMG → Personnel Management → Personnel Administration → Payroll Data → Salary Packaging You have to define the following components under customizing − Basic Settings − It is used to define compensation area as per guidelines. Basic Settings − It is used to define compensation area as per guidelines. Salary Components − It includes the elements of an employee's compensation package. Salary Components − It includes the elements of an employee's compensation package. For example − Basic Salary and Company Car. This is used to define the default salary components based on an employee's organizational assignment. Using the Eligibility Criteria, you can create checks to determine if an employee will have a specific salary component defaulted into their salary package. For example − An employee is eligible for a certain salary component, once they reach a specific pay scale level. You can set eligible criteria for this rule. This is used to maintain additional features for salary packaging. Various steps can be defined as per different country specifications − Maintain Company Car Regulation Define Receiver Travel Allowance Rates Result T-code: P16B_ADMIN The following is some general information about the subsequent screenshot − The right side of the screen comprises of components that are currently a part of your package. The right side of the screen comprises of components that are currently a part of your package. 
The left side of the screen contains all those additional components for which you are eligible. The left side of the screen contains all those additional components for which you are eligible. The following tasks should be performed to model a Package − First is to click on the salary component text and choose the arrow to move the component between two boxes. Using this you can add/remove the components from the package. First is to click on the salary component text and choose the arrow to move the component between two boxes. Using this you can add/remove the components from the package. If you want to change the component details, click on the amount for the component. If you want to change the component details, click on the amount for the component. Below this you can see the edit section. This section is specific to each component and contains the relevant amount, percentage, and contribution information valid for the component. Below this you can see the edit section. This section is specific to each component and contains the relevant amount, percentage, and contribution information valid for the component. Click Accept to include your new attributes to the package. Click Accept to include your new attributes to the package. You can click on the Reset button to put the last values used. You can click on the Reset button to put the last values used. Once you close the modeling screen, you can select from the following options − You can select simulation that will allow you to preview a sample online pay slip. You can select simulation that will allow you to preview a sample online pay slip. You can select Update that will update the Infotypes accordingly. You can select Update that will update the Infotypes accordingly. This allowance is a part of the monthly remuneration paid to an employee and varies as per the location and other factors. The value of this component depends on the Consumer Price Index (CPI) for that location and this index varies as per government regulation. When an employee is transferred or moved to a different location, this allowance is also changed as per the location. Dearness allowance along with other components like Base salary, Income tax, Gratuity, etc., forms the salary package of an employee for computation. You can calculate Dearness allowance in a standard SAP system by using the following methods − CPI slab based calculation You can also define new CPI in SAP system using New Entries. Incremental CPI slab based calculation Basic slab based calculation Basic slab based calculation, subject to minimum value Non-slab based calculation Incremental basic slab based calculation Note − For a non-managerial category this allowance is called Dearness allowance however for managerial category employee group it is also called Cost of Living Allowance (COLA). To configure DA in SAP system, go to SPRO → IMG → Payroll → Payroll India → Dearness Allowance → Maintain Basic slab details for Dearness allowance. Once you click on this, it shows you the Basic slab details for Dearness allowance, which includes Fixed value, Percentage, CPI % mul. Fac., currency. This component is used to maintain information about an employee accommodation. This is used to calculate tax exemptions and to check perquisite applicable on a housing benefit. While updating or creating a housing record using the Housing (HRA / CLA / COA) under Infotype (0581), the system dynamically updates the Basic Pay Infotype (0008) with the new or changed wage type for Housing. 
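For the rented accommodation type described below, the system calculates a tax exemption on the house rent allowance. As a hedged illustration, the sketch applies the commonly cited Indian income tax rule (the minimum of HRA received, rent paid less 10% of basic salary, and 50% of basic for metro cities or 40% otherwise); the rule and the sample figures are assumptions for illustration and are not taken from this guide or from a specific SAP configuration.

```python
# Hedged sketch of an HRA exemption for rented accommodation: the exempt amount
# is the minimum of (a) HRA actually received, (b) rent paid minus 10% of basic
# salary and (c) 50% of basic for metro cities or 40% otherwise. Figures are
# annual and purely illustrative; the actual SAP calculation follows the
# configured statutory rules.

def hra_exemption(basic: float, hra_received: float, rent_paid: float,
                  metro: bool) -> float:
    pct_of_basic = 0.50 if metro else 0.40
    candidates = [
        hra_received,
        max(rent_paid - 0.10 * basic, 0.0),
        pct_of_basic * basic,
    ]
    return round(min(candidates), 2)

# Sample annual figures for an employee renting in a metro city.
print(hra_exemption(basic=600000, hra_received=240000, rent_paid=216000, metro=True))
# -> 156000.0 (rent minus 10% of basic is the lowest of the three values)
```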
Rented − When an employee uses a Rented Accommodation, he receives a House Rent Allowance (HRA) to meet the expenses incurred by renting a residential accommodation. In this case, the system calculates the tax exemption on the rented accommodation and rented amount paid by an employee. Rented − When an employee uses a Rented Accommodation, he receives a House Rent Allowance (HRA) to meet the expenses incurred by renting a residential accommodation. In this case, the system calculates the tax exemption on the rented accommodation and rented amount paid by an employee. Company Leased Accommodation (CLA) − When an employee uses a Company Leased Accommodation, the company leases an accommodation and provides it as a housing benefit to the employee. The company Leased eligibility depends on the employee Pay Scale Grouping for Allowances. When an employee uses CLA benefit, the system checks the applicable perquisite on the CLA. Company Leased Accommodation (CLA) − When an employee uses a Company Leased Accommodation, the company leases an accommodation and provides it as a housing benefit to the employee. The company Leased eligibility depends on the employee Pay Scale Grouping for Allowances. When an employee uses CLA benefit, the system checks the applicable perquisite on the CLA. Company Owned Accommodation (COA) − When an employee uses COA, in this case company owns the accommodation and provides it as a housing benefit to the employee. Like CLA employee eligibility for COA depends on the employee grouping for pay scale allowance. When an employee opts for COA benefit, the system will compute the perquisite applicable on the COA. Company Owned Accommodation (COA) − When an employee uses COA, in this case company owns the accommodation and provides it as a housing benefit to the employee. Like CLA employee eligibility for COA depends on the employee grouping for pay scale allowance. When an employee opts for COA benefit, the system will compute the perquisite applicable on the COA. Hotel Accommodation − A company can also provide a hotel accommodation to the employee. Their stay in the hotel depends on a fixed period as per the Government rule and if the stay exceeds the time limit, a perquisite is applicable on the cost of accommodation. Hotel Accommodation − A company can also provide a hotel accommodation to the employee. Their stay in the hotel depends on a fixed period as per the Government rule and if the stay exceeds the time limit, a perquisite is applicable on the cost of accommodation. In a SAP standard system, the following accommodation types are configured by default − Rented Accommodation Company Leased (Old) Perkable Hotel Accommodation It is also possible to create a new accommodation type in the system. Go to SPRO → IMG → Payroll → Payroll India → Housing → Define Accommodation type Under the Accommodation type, you can view the already defined Housing types or can create new entries by clicking the New Entries button. In the Tax Code, select the tax code as per the accommodation type. Tax code field determines as per different accommodation type − This component is used to process the exemption on conveyance allowance. The details are maintained in Car and Conveyance Infotype (0583). The standard SAP system provides exemption on conveyance allowance given to the employees. The following configuration has to be configured in the system if you want to give conveyance allowance and exemption to the employees. 
Go to SPRO → IMG → Payroll → Payroll India → Car and Conveyance → Define Conveyance Type. Different Car schemas can be used in the SAP system for exemption under different sections. This defines as the long-term benefits provided to the employees over a fixed period of years. The duration varies from three to five years. In a standard SAP system, long-term benefits can be divided into the following categories − This includes benefits provided to employee for purpose of purchasing movable items like Fridge, TV, Washing machine, computer, etc. This includes benefits provided to employee for purpose of purchasing consumer good items like Sofa, chair, Carpet, etc. This benefit includes maintenance of their car over a period of time, etc. This Infotype is used to maintain Long Term Reimbursement claimed by the employees and under one of the following subtypes − Subtype SHFS − For maintaining hard furnishing schemes information Subtype SHFS − For maintaining hard furnishing schemes information Subtype SSFS − For maintaining soft furnishing schemes information Subtype SSFS − For maintaining soft furnishing schemes information Subtype SCAR − For maintaining car maintenance schemes information Subtype SCAR − For maintaining car maintenance schemes information To configure a long term reimbursement, go to SPRO → IMG → Payroll → Payroll India → Long Term Reimbursement → Maintain block of years for long tern reimbursement. To avail long-term benefits by an employee, there are different perquisites attached with each benefits that should be met − In this, there is a fixed percentage as the perquisite value applicable on the assets that an employee can avail during a financial year. This fixed value is maintained in Calculate Hard Furnishing Perk Value constant (HFPRC) of the table view Payroll Constants (V_T511K). In this, the system calculates perquisite value for the assets that an employee avails in the current financial year and it is based on the perquisite percentage that you maintain in the Long Term Reimbursements Infotype (590) and subtype SSFS. Normally, the system doesn’t contain any perquisite value with the Car Maintenance Scheme or any other similar type of scheme you create in the system. In a company, an employee is eligible to claim some monetary and non-monetary benefits and these claims vary as per the pay scale grouping and many other factors. An employee needs to submit the claim based on the eligibility to get these benefits. Claims submitted can be of the following types − This includes the claims that are available as per the eligibility amount. For example − A conveyance allowance of Rs. 1800 per month or a Medical claim of Rs. 15000 in a given assessment year. These claims are commonly raised by an employee for company work. They are normally placed in units like Stationary request, Calculator, Petrol, etc. Apart from this, there is one more type of claim known as the slab based claim. A few common types of slab based claims are LTA, car maintenance allowance, etc. These type of claims has an eligibility which is normally more than a year. For example − Car maintenance allowance – where the validity period starts from the date of purchase of the car and in the first and second year an employee is eligible for a car maintenance allowance of Rs. 3000 and in the third year, claim eligibility is Rs. 5000 and in the fourth year, the eligibility is Rs. 7500. 
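The car maintenance example above can be expressed as a small year-based lookup. The slab amounts repeat the figures from the example; the function and the assumption that the last slab keeps applying in later years are illustrative only.

```python
# Sketch of the slab-based claim eligibility described above: the eligible car
# maintenance allowance depends on how many years have passed since the
# purchase of the car. The slab amounts repeat the example in the text.

CAR_MAINTENANCE_SLABS = {1: 3000, 2: 3000, 3: 5000, 4: 7500}  # Rs. per year

def claim_eligibility(year_since_purchase: int) -> int:
    """Return the eligible claim amount for the given year of the scheme."""
    if year_since_purchase < 1:
        raise ValueError("The scheme starts from the date of purchase")
    # Assumption: the last configured slab continues to apply in later years.
    max_year = max(CAR_MAINTENANCE_SLABS)
    return CAR_MAINTENANCE_SLABS[min(year_since_purchase, max_year)]

for year in range(1, 6):
    print(year, claim_eligibility(year))
```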
To get into the non-monetary claims section, you should use the following Transaction Code: PC00_M40_REMP as shown in the subsequent screenshot. Once you run the above transaction, the reimbursement claim screen will appear. The claims can be processed via − Regular payroll run − In this reimbursement type, additional payments Infotype 0015 is updated with the information that you enter in this report and claim disbursement is made along with the regular payroll. Regular payroll run − In this reimbursement type, additional payments Infotype 0015 is updated with the information that you enter in this report and claim disbursement is made along with the regular payroll. Off-cycle payroll run − In this method, One-Time Payments Off-Cycle Infotype 0267 is updated with the information that you enter in this report and approved claims can be disbursed through an off-cycle payment process. Off-cycle payroll run − In this method, One-Time Payments Off-Cycle Infotype 0267 is updated with the information that you enter in this report and approved claims can be disbursed through an off-cycle payment process. For example − In this disbursement, claims are disbursed on the same day or claims submitted during the week are disbursed on any day of the week. This component is used to process the employee bonus and can compute both regular and off-cycle bonus. As with claims, there are two types of bonuses that can be paid − Type 1 Additional Payments 0015 − In this, the SAP system updates the Infotype when a regular bonus is processed. Type 1 Additional Payments 0015 − In this, the SAP system updates the Infotype when a regular bonus is processed. Type 2 Additional Off-Cycle Payments for Off-Cycle Bonus 0267 − In this, 0267 Infotype is updated in the system, when an Off-Cycle bonus is computed. Type 2 Additional Off-Cycle Payments for Off-Cycle Bonus 0267 − In this, 0267 Infotype is updated in the system, when an Off-Cycle bonus is computed. It is defined as a statutory benefit provided to an employee by his employer for his association with the company. The Gratuity can be configured based on the following rules − Payment of Gratuity Act, 1972 − As per this, a minimum amount that an employer has to contribute for this component is 4.81% of the base salary of the employee. As per the company policy where the benefits are better as compared to the Gratuity Act. Payment of Gratuity Act, 1972 − As per this, a minimum amount that an employer has to contribute for this component is 4.81% of the base salary of the employee. As per the company policy where the benefits are better as compared to the Gratuity Act. Personal IDs 0185 Gratuity for India subtype 03 − This is used to maintain the employee Personnel id number for Gratuity and the name of the trust to which you are contributing for employee gratuity. Personal IDs 0185 Gratuity for India subtype 03 − This is used to maintain the employee Personnel id number for Gratuity and the name of the trust to which you are contributing for employee gratuity. Gratuity Listing Report (HINCGRY0) to Generate Gratuity List − This report is used to generate a list which shows the employee wise contribution to the trust name on behalf of the employee. Gratuity Listing Report (HINCGRY0) to Generate Gratuity List − This report is used to generate a list which shows the employee wise contribution to the trust name on behalf of the employee. You can configure Gratuity in the SAP system by following this path. 
Go to SPRO → IMG → Payroll → Payroll India → Retirement benefits → Gratuity. The employee record for Gratuity (Personnel Id’s) is maintained in Infotype and Gratuity for India Subtype 03. This is defined as the benefit provided to an employee by the employer for his association with the company. The employer contributes towards Superannuation trust on a monthly or yearly basis to provide this benefit to the employee and it doesn’t include any employee contribution. This component is not presented as part of the monthly pay slip and is not a taxable component. Superannuation report (HINCSAN0) for list − This report can be used to generate Superannuation List which provides employer contribution for this component for a specific time period. Superannuation report (HINCSAN0) for list − This report can be used to generate Superannuation List which provides employer contribution for this component for a specific time period. Superannuation component and configuration − This component consists of the employee record as Personal Id’s Infotype 0185 Superannuation for India Subtype 01. This subtype is used to maintain the trust name and employee identification number for the employee. Superannuation component and configuration − This component consists of the employee record as Personal Id’s Infotype 0185 Superannuation for India Subtype 01. This subtype is used to maintain the trust name and employee identification number for the employee. To define the trust name where the employer maintains the Superannuation account, you need to define trust id and name of trust in the system. This can be done by going to SPRO → IMG → Payroll → Payroll India → Retirement Benefits → Maintain Superannuation Trust ID The Superannuation ID field of Personal IDs Infotype (0185) Superannuation for India Subtype (01), displays options as per the Trust IDs that you have configured in this IMG activity. To configure the criteria under which you want an employee to be eligible for Superannuation, it can be configured in a SAP system with the following method − SPRO → IMG → Payroll → Payroll India → Retirement Benefits → Maintain Eligibility Details for Superannuation. This component of the payroll system deals with the net part of the remuneration paid to an employee after the deductions. There are various deductions applied on the Gross salary like tax, insurance paid, etc. The Net pay is the amount paid to an employee after all these deductions. This component is used to compute tax on the income received by an employee. An employee income consists of the following parts − This consists of regular income components like Basic pay, HRA, conveyance allowance. Regular income can be categorized as monthly regular income or annual regular income. The system projects the annual regular income using either the Actual Basis or Nominal Basis. The system, by default uses Actual Basis to project annual regular income. You can access this from SPRO → IMG → Payroll → Payroll India → Tax → Maintain Annual Taxable Income. Professional tax in a SAP system is defined as the tax calculated on the employee salaries. Professional tax is also defined as the tax applied by the State Government on profession, trades, employment, etc. A SAP system calculates the annual professional tax of an employee and deducts it from the salary as per the Section 16(ii) of the Income Tax Act. 
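Professional tax is deducted as a monthly slab amount on a professional tax basis that is built from the salary components listed next. The sketch below uses an invented slab table purely for illustration; actual slabs differ from state to state and are maintained in the system configuration.

```python
# Hedged sketch of a monthly professional tax deduction. The slab boundaries
# and tax amounts below are invented; real slabs differ by state. The monthly
# PT basis is built from the salary components listed in the following paragraph.

PT_SLABS = [                   # (upper limit of monthly PT basis, monthly tax)
    (10000, 0),
    (15000, 150),
    (float("inf"), 200),
]

def professional_tax(monthly_pt_basis: float) -> int:
    """Return the monthly professional tax for the given PT basis."""
    for upper_limit, tax in PT_SLABS:
        if monthly_pt_basis <= upper_limit:
            return tax
    return 0  # unreachable with the open-ended last slab; kept for completeness

print(professional_tax(9500))    # 0
print(professional_tax(14000))   # 150
print(professional_tax(60000))   # 200
```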
Professional tax is based on the following salary components for an employee −

Basic pay
Dearness allowance
Medical reimbursement
Bonus
Housing
Other remuneration that the employee receives regularly

The professional tax basis also includes the part of the medical reimbursement amount that exceeds the amount exempted under the Income Tax Act. For all the employees who are availing company leased accommodation (CLA) or company owned accommodation (COA), the system calculates the difference between the housing allowance and the rent; when an employee receives this differential amount as a part of the regular income, professional tax is applied on it.

To display and print professional tax returns, the system generates the professional tax returns that your company needs to submit to the state authorities while remitting the professional tax deductions of the employees. When you generate the professional tax report (HINCPTX0), there must be an Other Statutory Deductions Infotype (0588) record with the PTX (Professional Tax Eligibility) subtype (0003), in which you must select the Professional Tax eligibility indicator for the employee, and there should be professional tax results for at least one payroll period.

The Provident Fund component is used to maintain information on the employee's Provident Fund. The Provident Fund is a benefit provided to the employees and contains two parts −

Provident Fund (PF) − As per the government rule, both the employee and the employer contribute a fixed percentage of the PF basis towards the Provident Fund. The minimum percentage that each employee needs to contribute is 12% of the base salary. An employee can also contribute an additional percentage of the fixed basis towards PF, which is known as the Voluntary Provident Fund (VPF).

Pension Fund − As per the authority rule, the employer has to contribute a fixed percentage of the PF basis towards the Pension Fund of the employee.

Apart from these contributions, an employer also has to contribute to the Employees' Deposit Linked Insurance (EDLI). A rough numeric sketch of this contribution split is given after the list of forms below.

In a SAP system, the Provident Fund component allows you to maintain and process the following −

PF
Pension Fund
EDLI
VPF

By using the Employees' Provident Fund report (HINCEPF0), you can generate the following monthly PF forms −

Form 5 − This can be generated for the employees who qualify for the PF, Pension Fund and EDLI membership for the first time.

Form 10 − This can be generated for those employees leaving the service, or leaving the PF trust, in the current payroll period.

Form 12A − This can be generated for wages paid and recoveries made in the current payroll period, as the employee and employer contributions.

By using the PF report (HINCEPF1), you can generate the following annual PF forms −

Form 3A − This is used to get the statement on the PF contributions made towards un-exempted establishments annually.

Form 6A − This report is used to print the consolidated contribution statement for that financial year.
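A rough numeric sketch of the contribution split described above is shown below. The 12% rates are the minimums quoted in this section; the split of the employer contribution between the Provident Fund and the Pension Fund, and any wage ceilings, are assumptions made only for the illustration and are in practice driven by the statutory configuration.

# Illustrative PF contribution split for one month.
# The rates below are assumptions for the sketch; real values come from statutory configuration.
EMPLOYEE_PF_RATE = 0.12      # employee contribution on the PF basis
EMPLOYER_PF_RATE = 0.12      # employer contribution on the PF basis
PENSION_SHARE    = 0.0833    # assumed portion of the basis routed to the Pension Fund

def pf_contributions(pf_basis):
    """Return a breakdown of the monthly PF-related contributions."""
    employee    = pf_basis * EMPLOYEE_PF_RATE
    pension     = pf_basis * PENSION_SHARE
    employer_pf = pf_basis * EMPLOYER_PF_RATE - pension
    return {"employee_pf": round(employee, 2),
            "pension_fund": round(pension, 2),
            "employer_pf": round(employer_pf, 2)}

if __name__ == "__main__":
    print(pf_contributions(15000))   # hypothetical monthly PF basis in Rs.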
If you want to generate the monthly reports on the employee PF and employee Pension Fund contributions, go to SAP Easy Access → Human Resources → Payroll → Asia/Pacific → India → Subsequent Activities → Per Payroll Period → Legal Reports. To generate the annual reports on the employee PF and employee Pension Fund contributions, go to SAP Easy Access → Human Resources → Payroll → Asia/Pacific → India → Subsequent Activities → Annual → Legal Reports.

Employee State Insurance (ESI) is another statutory benefit provided to the employees of a company. The ESI contribution includes a deduction/contribution −

From the employee salary
From the employer side

In case an Other Statutory Deductions Infotype 0588, subtype ESI (0001) record exists for the employee, the employee is considered eligible for ESI.

Note − The ESI Basis for an employee must be less than or equal to the amount stored in the ESI Eligibility Limit.

ESI contribution and benefit period − The employee contribution towards ESI is 1.75% of the ESI Basis, while the employer contribution towards ESI is 4.75% of the ESI Basis.

To change the ESI grouping for an employee, this can be configured in the user exit by following SPRO → IMG → Payroll → Payroll India → Statutory Social Contribution → Employees' State Insurance → User Exit: Determine Personnel Subarea Grp for ESI.

Like Employee State Insurance, the Labor Welfare Fund (LWF) is a statutory contribution towards the welfare of the employees. The LWF contribution and the frequency of contribution are decided by the state authority. The LWF details are maintained under the Other Statutory Deductions Infotype 0588, LWF subtype 0002. In a SAP system, you can define the eligibility for the Labor Welfare Fund, the LWF contribution frequency, the LWF computation rates and the validity date.

The LWF data is available in the legal report Labor Welfare Fund legal reports (HINCLWFI). By using this report, it is possible to generate the LWF form for submission to the authorities. You can configure your SAP system to generate LWF statements in the format prescribed by the concerned state authority.

The Minimum Wages component is used to define the minimum wage for an employee for processing the payroll. All the deductions to be considered for the minimum net processing are defined by following this path − Go to SPRO → IMG → Payroll → Payroll India → Deductions → Arrears and Priorities.

In a SAP system, you can configure the minimum net pay using the following two methods −

Percentage − Using this method, you maintain the percentage in the Minimum Net Pay Percentage constant (MNPPR) of the table view Payroll Constants (V_T511K). Note − By default, the system takes the Total gross amount wage type (/101) as the reference wage component.

Fixed amount − You can also define a fixed amount in the Minimum Net Pay Fixed Amount constant (MNPAM) of the table view Payroll Constants.

Both methods can be configured in the SAP system by the following path − SPRO → IMG → Payroll → Payroll India → Deductions → Minimum Net Pay → Maintain Value for Determination of Minimum Wage.

Note − In case you maintain both of the above methods, the amount in the Minimum Net Pay Fixed Amount constant (MNPAM) is taken as the minimum wage.
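A simplified sketch of the minimum net pay determination and check is shown below. The rule that a deduction is simply reduced when it would push net pay below the minimum is an assumption for illustration only; the actual behaviour is controlled by the arrears and priority settings mentioned above.

# Simplified sketch of the minimum net pay determination and check (illustration only).

def minimum_net_pay(total_gross, percentage=None, fixed_amount=None):
    """If both values are maintained, the fixed amount (MNPAM) wins, as noted above."""
    if fixed_amount is not None:
        return fixed_amount
    if percentage is not None:
        return total_gross * percentage / 100.0
    return 0.0

def apply_deduction(total_gross, deduction, min_net):
    """Reduce the deduction so that net pay does not fall below the minimum."""
    allowed = max(0.0, min(deduction, total_gross - min_net))
    return allowed, total_gross - allowed

if __name__ == "__main__":
    min_net = minimum_net_pay(total_gross=10000.0, percentage=80.0, fixed_amount=8500.0)
    print(apply_deduction(10000.0, 3000.0, min_net))   # (1500.0, 8500.0)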
The Deductions component is used to calculate all the payments that are made to third parties and are deducted from the employee's salary. Different types of deductions can be calculated on the gross remuneration or on the net remuneration. This involves social welfare payments and taxes, and it also includes the payments made by an employee to any savings account or to any voluntary insurance policy that the employer has taken out for the employee. You can consider these as one-time deductions and recurring deductions: one-time deductions are paid by an employee once in a financial year, while recurring deductions are maintained in Infotype 0014 and are paid with a defined periodicity.

The Company Loans component is used to manage the details of a loan that is provided by the company to an employee. This can include a house loan, a car loan, a personal loan, etc. An interest amount is charged which is lower than the normal interest rate in the market, and the employee's salary is considered as security for the loan.

In a SAP system, you can select between different loan categories and different repayment types −

Installment loan
Annuity loan

The loan data is maintained in Infotype 0045 and you can get the following details while processing the payroll −

Loan repayment
Loan interest calculation
Imputed income taxation

Loans Infotype 0045 − As you enter the information on a company loan, it can contain the loan approval date, the loan amount, etc. You maintain the loan type information in subtypes of the Loans Infotype 0045. A sequential number is assigned to each loan; in a SAP system, the combination of a loan type and a sequential number uniquely identifies every loan, and hence this allows you to create multiple loans of the same type for an employee.

There are different categories of repayment types that can be used, differentiated as follows −

A payment is made to the borrower, or a repayment is made to the employer.

A payment is made directly by check, or a bank transfer is made, or it is processed during the employee payroll run.

You can use the payment types that are defined in a SAP system, or you can also define them under SPRO → IMG → Payroll → Payroll India → Company Loans → Master Data → Customer Payment Types.

This section describes the loan enhancement customization available in the SAP system for Payroll India. You can make the following configurations for company loans in India −

To maintain the loan grouping, go to SPRO → IMG → Payroll → Payroll India → Company Loans → Master data → Maintain Loan Grouping.

To define the different salary components that make up the salary for a loan grouping, go to SPRO → IMG → Payroll → Payroll India → Company Loans → Master Data → Maintain Salary Components.

To specify if a loan type is eligible for the Section 24 deduction, go to SPRO → IMG → Payroll → Payroll India → Company Loans → Master Data → Maintain Deduction Details Under Section 24.

Similarly, you can create various other customizations under Payroll India for processing loan enhancements.

The One Day Salary Deduction component is used to process a voluntary salary deduction for employees, applicable for one or multiple days. The component can also calculate an employer contribution for the same amount as the employee's deduction.
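A minimal sketch of this calculation is shown below; it mirrors the worked example that follows, from which the salary and day figures are taken. The helper function itself is only an illustration, not the SAP payroll function.

# Illustrative one-day (or multi-day) salary deduction, mirroring the worked example below.

def salary_deduction(actual_salary, calendar_days, days_deducted=1):
    """Deduct the per-day salary for the given number of days; the employer matches it."""
    per_day = actual_salary / calendar_days
    employee_deduction = per_day * days_deducted
    employer_contribution = employee_deduction
    return round(employee_deduction, 2), round(employer_contribution, 2)

if __name__ == "__main__":
    # Figures from the worked example that follows: Rs. 6000 salary, 30 calendar days.
    print(salary_deduction(6000, 30, days_deducted=2))   # (400.0, 400.0)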
This voluntary deduction normally involves payments to a charitable trust, the Prime Minister's Relief Fund, etc. In the SAP system, this component is maintained in the table V_T7INO1. Go to SPRO → IMG → Payroll → Payroll India → One day Salary deduction → Maintain Details for one day Salary deduction.

For example − Consider an employee's details in the table view One-day salary deduction (V_T7INO1) for Pay Scale Grouping for Allowances MN01 −

Year − 2010
Period − 01
Calculation Indicator − Actual Salary/Calendar Days
Employer Contribution − Set, as the employer also contributes

You run the payroll for an employee who belongs to Pay Scale Grouping for Allowances MN01 in June 2010. Let the actual salary of the employee for June 2010 be Rs. 6000 and the calendar days (KSOLL) for June be 30. The one-day salary deduction payroll function (INDSD) reads the table view One-day salary deduction (V_T7INO1) for the Pay Scale Grouping for Allowances MN01 and generates the following wage types −

For the employee, the one-day salary deduction wage type (/3OE) = Rs. (6000/30) * 2 = Rs. 400.

For the employer, the one-day salary contribution wage type (/3OF) is also equal to Rs. 400.

The Subsequent Activities component includes the activities that should be carried out after processing the gross and net payroll for employees. It is used to post the personnel expenses within a company to Financial Accounting and Cost Accounting. This includes payables to the employees, which are posted against the Wages and Salaries Payable account, as well as payables to other recipients − the deductions withheld from the employees are posted to additional payables accounts, and this varies from country to country. The subsequent activities performed for this include −

Payables against employees are settled by payment.

Receivables against third parties, like tax and insurance, are settled by payments.

For each transaction, the following steps are performed −

Step 1 − The amounts payable are calculated.
Step 2 − The calculated amounts are paid.
Step 3 − A posting from the payables account to the bank clearing account is created.

You can perform Step 2 and Step 3 either automatically or manually, and this varies according to the country and the transaction type.

To create or edit the salary/remuneration statement, you can use the HR Forms Workplace. This allows you to create a new salary statement with the Forms Workplace and also provides multi-functional graphical options for structuring the layout of the form and the print program. A form can be printed from the HR Forms Workplace or by using the SAP Easy Access menu.

The Evaluation component is used for the evaluation of payroll results, and you can generate reports and statistics using it. You have the following options available in the SAP system to perform the evaluation −

InfoSet Query − To check the InfoSet query, follow the path Human Resources → Information System → Reporting Tool → SAP Query. To create a new InfoSet query, click on InfoSet Query.
You can also evaluate payroll results using the following standard reports −

Remuneration statement
Payroll journal
Payroll account
Wage type reporter

In this chapter, we will discuss the reporting pattern in SAP Payroll.

The General Increments report is used to perform the increment update on the base pay wage type in Infotype 0008. It can be run from SAP Easy Access → Human Resources → Payroll → Asia/Pacific → India → Utilities → Basic → General increments.

Enter the personnel numbers and the Pay Scale Grouping for Allowances of the employees to whom you want to give increments in the basic salary, enter the date from which the increment has to be effective and the name of the batch session, and click Execute. The list of employees eligible for the increment appears, and you have the following options on the output screen −

The first option processes the increment for all the eligible employees. The system creates a batch session, which you can execute to update the Basic Pay Infotype 0008.

The second option displays the ambiguous cases. For example − all the employees for whom the effective date entered on the selection screen does not fall in the last split of the Basic Pay Infotype 0008.

The third option selects and displays all the error cases. For example − all the employees for whom the Pay Scale Grouping for Allowances is not the same as the one entered on the selection screen.

The Promotions report is used to perform a batch update of the base salary wage type in Infotype 0008 for the increment posted on an employee's promotion. Go to SAP Easy Access → Human Resources → Payroll → Asia/Pacific → India → Utilities → Promotions. Then you can −

Enter the employee selection criteria.

Enter the Pay Scale Grouping for Allowances of the employees for whom you want an increment update.

Enter the date from which the increment has to be effective.

Enter the name of the batch session and execute the report.

This displays the list of employees eligible for the promotion, and you have the following options on the output screen −

The first option updates the increment for all the eligible employees; a batch session is created, which can be executed to update the Basic Pay Infotype 0008 with the basic salary increment.

The second option displays cases where there is ambiguity. For example − all the employees for whom the entered increment effective date does not fall in the last split of the Basic Pay Infotype 0008.

The third option displays all the error cases. For example − all the employees whose Pay Scale Grouping for Allowances is not the same as the one entered in the selection criteria.
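The way these reports separate employees into processable, ambiguous and error cases can be pictured with the small sketch below. The two checks shown are taken from the option descriptions above; the record layout and field names are hypothetical.

# Illustrative classification of employees into correct / ambiguous / error cases,
# based on the two checks described above. The employee record layout is hypothetical.

def classify(employee, effective_date, selected_grouping):
    if employee["allowance_grouping"] != selected_grouping:
        return "error"        # grouping differs from the one on the selection screen
    if not (employee["last_split_start"] <= effective_date <= employee["last_split_end"]):
        return "ambiguous"    # effective date not in the last split of Basic Pay (0008)
    return "correct"

if __name__ == "__main__":
    emp = {"allowance_grouping": "MN01",
           "last_split_start": "2010-01-01", "last_split_end": "9999-12-31"}
    print(classify(emp, "2010-06-01", "MN01"))   # correct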
Using the Form 16 component, you can print the following sections of Form 16 and Form 16AA for an employee in a financial year −

Salary paid, any other income and tax deducted − This displays the income, deductions and tax details of the employee for that financial year.

Details of the tax deducted and deposited into the Central Government account − This section also includes the following components −

Tax Deducted at Source (TDS)
Surcharge
Education Cess
Total tax deposited
BSR code of the bank branch
Date on which the tax was deposited
Transfer Voucher/Challan Identification Number
Cheque or DD No. (if applicable)

Using the Form 24 component, you can print Form 24 and Form 24Q for the employees with the following sections −

Details of the salary paid and the tax deducted thereon from the employees − This displays the income, deductions and tax details in a particular financial year for the selected employees.

Form 24Q is the e-filing counterpart of Form 24; Form 24 itself needs to be submitted to the IT office in a physical form. The employer has to submit an e-copy of Form 24 to the IT department before 31st May for the preceding financial year. For example − Form 24 has to be submitted before May 31, 2016 for the financial year 2015 − 2016.

The Batch Program for Dearness Allowance creates a batch session that runs and updates the DA wage type in the Basic Pay Infotype 0008 for an employee. To access this report, go to SAP Easy Access → Human Resources → Payroll → Asia Pacific → India → Utilities → Dearness Allowance → Batch Program for DA. Enter the relevant selection criteria, like the personnel number and date, and specify a name for the batch session against Batch group. To execute the report, choose Program → Execute. This opens the Correct Cases screen, which provides information on −

Correct cases
Ambiguous cases
Error cases

To view an information type, select the required option. For example − To view the correct cases, select Display correct cases. Next, select the employee records for which you want to generate the batch session. To generate the batch session, choose User Interface → Create batch input.

The Batch Program for Section 80 is used to check the Consider Actual Contributions for Tax Exemption indicator of the Section 80 and Section 80C Deductions Infotype 0585 records of all or selected employees. You can execute this report for a range of employees based on the payroll area and a range of personnel numbers. You have the option of selecting or not selecting the Consider Actual Contributions for Tax Exemption indicator of these Infotype 0585 records. A session is created when this report is run, and this session should be executed from transaction code SM35 to update the Section 80 Deductions Infotype 0585.

To access the report, go to SAP Easy Access → Human Resources → Payroll → Asia Pacific → India → Utilities → Section 80 → Batch Program for 80. Enter the relevant selection criteria. If you want the actual Section 80 contributions of the selected employees to be considered during the payroll run, select the Consider Actual Contributions indicator. Enter the session name; to keep a record of the session after execution, you can select the Keep session indicator. You can also enter the Lock date − the Infotype records can be updated through transaction code SM35 only after this date. To execute, click the Execute option. Now you can run transaction code SM35 and select the session you want to run; the session can run in the foreground or in the background.
The Claims Status component is used to check the status of the claims made by the employees. Using this component, you can check −

Different reimbursement types claimed by your employees.

The validity period of the reimbursement types.

Balances carried forward from the previous year.

Details of the claim amounts that have already been disbursed and the pending amounts to be disbursed along with a payroll run.

To check the eligibility, go to SPRO → IMG → Payroll → Payroll India → Reimbursements, Allowances and Perks → Calculate Eligibility for RAPs. It shows the different claims made by your employees according to the effective date and the reimbursement type. To access the claim report, go to SAP Easy Access → Human Resources → Payroll → Asia/Pacific → India → Utilities → Reimbursements, Allowances and Perks → Claims Status. Enter the relevant selection criteria and execute the report.

The Gratuity Listing component is used to generate the Gratuity list for a selected employee range within a specified gratuity period. An important prerequisite for creating this report is to maintain the Personal IDs Infotype 0185, Gratuity for India subtype 03, and to have processed the payroll for the required period so that payroll results are available. The following information is displayed using this report −

Name of the employee
Gross salary of the employee
Contribution towards employee gratuity from the employer's side

To access this report, go to SAP Easy Access → Human Resources → Payroll → Asia Pacific → India → Subsequent Activities → Per Payroll Period → Reporting → Gratuity → Gratuity Listing. Enter the relevant selection criteria and mention the Gratuity Trust ID for which you want to generate the report. To get the result in a customized format, select the Customer Layout option and enter the name of the customer layout.

A Roster is used to allow reservations for employees based on specific criteria. The key parameters to be considered for a reservation include −

Caste
Special benefit

By using a Roster, you can perform the following activities −

It helps in maintaining hiring, promotion and transfer of employees as per the reservation policy.

It helps in maintaining staffing details for the government.

To define the reservation type, go to SPRO → IMG → Payroll → Payroll India → India Public Sector → Rosters → Basic Settings → Define Reservation Types. You have to define the Roster group, the recruitment and promotion type, and map the reservation category to an ethnic or challenge group. Next, map the action types to a standard action and the time-independent roster attribute of the model roster.

To provide reservations to employees, the following types of objects can be used −

Model Roster − This is defined as a template used to create a Roster.

Roster − This is defined as an object that has a fixed number of points assigned to it.

Roster Point − These are the objects to which the employees are assigned, and they are identified by an ID. You can assign one employee ID to a Roster ID for a specific period.
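The relationship between these roster objects can be pictured with the small sketch below. This is only an illustrative data structure for the concepts above, not the SAP object model.

# Illustrative relationship between a model roster, a roster and its roster points.
# The identifiers and field names are made up for the sketch.

model_roster = {"id": "MR01", "points": 4}            # template with a fixed number of points

roster = {
    "id": "R01",
    "model": model_roster["id"],
    "points": [                                       # one entry per roster point
        {"point_id": i, "employee_id": None, "valid_from": None, "valid_to": None}
        for i in range(1, model_roster["points"] + 1)
    ],
}

# Assign an employee to a roster point for a specific period.
roster["points"][0].update(employee_id="00001234", valid_from="2016-04-01", valid_to="2017-03-31")
print(roster["points"][0])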
A Roster point has the following attributes −

Sequence number
Reservation category
De-reservation indicator
Obsolete indicator
Remark
Reference date
[ { "code": null, "e": 2377, "s": 1982, "text": "SAP Payroll is one of the key modules in SAP Human Capital Management. This is used to calculate the remuneration for each employee with respect to the work performed by them. SAP Payroll not only consists of remuneration part, but also the other benefits that the organization has to provide for the employee welfare according to different company laws in any country. These commonly include −" }, { "code": null, "e": 2387, "s": 2377, "text": "Labor Law" }, { "code": null, "e": 2400, "s": 2387, "text": "Benefits Law" }, { "code": null, "e": 2417, "s": 2400, "text": "Contribution Law" }, { "code": null, "e": 2425, "s": 2417, "text": "Tax Law" }, { "code": null, "e": 2441, "s": 2425, "text": "Information Law" }, { "code": null, "e": 2455, "s": 2441, "text": "Reporting Law" }, { "code": null, "e": 2470, "s": 2455, "text": "Statistics Law" }, { "code": null, "e": 2734, "s": 2470, "text": "A SAP Payroll System manages the gross and net pay, which also includes the payments and deductions calculated while processing payroll for an employee. The system calculates the payment and all deductions while processing remuneration using different wage types." }, { "code": null, "e": 2827, "s": 2734, "text": "Once the payroll processing is done, the system carries out different subsequent activities." }, { "code": null, "e": 2936, "s": 2827, "text": "For example − You can generate various lists related to remuneration and deductions performed in the system." }, { "code": null, "e": 2983, "s": 2936, "text": "SAP Payroll module is easily integrated with −" }, { "code": null, "e": 3008, "s": 2983, "text": "Personnel Administration" }, { "code": null, "e": 3024, "s": 3008, "text": "Time Management" }, { "code": null, "e": 3044, "s": 3024, "text": "Incentive and Wages" }, { "code": null, "e": 3067, "s": 3044, "text": "Finance and Accounting" }, { "code": null, "e": 3273, "s": 3067, "text": "Personnel Administration is used to get the master data and other payroll related information. By using Time Management, you can get the time related data to calculate the remuneration and for payroll run." }, { "code": null, "e": 3502, "s": 3273, "text": "Incentive and Wages data is used to calculate the incentive wages component in the payroll. Wage type defines the daily payroll for each employee and incentive defines the other extra benefits that should be paid to an employee." }, { "code": null, "e": 3775, "s": 3502, "text": "Expense Payable for payroll is posted to cost center using integration with SAP Finance and Accounting module. You can assign the cost to cost centers in Finance and Accounting module. Here you can also manage the expense for payroll processing of the third party vendors." }, { "code": null, "e": 4200, "s": 3775, "text": "Payroll is based on the payroll driver that varies with each country and region. The payroll driver considers the administrative and legal regulations of the country while defining the payroll. While running a payroll, the payroll driver refers to its corresponding payroll schema, which contains a number of different functions. Each function consists of import data function from internal tables and payroll related files." }, { "code": null, "e": 4234, "s": 4200, "text": "The steps in Payroll processing −" }, { "code": null, "e": 4586, "s": 4234, "text": "A payroll system gets the payroll related data from the system. In case of off-cycle payroll, the system deletes the internal table and imports the last payroll result. 
The gross wage, shift schedule, and compensation along with the valuation bases are calculated in the system and the master data relevant to this payroll is added in the calculation." }, { "code": null, "e": 4868, "s": 4586, "text": "Next is to calculate the partial period factors, salary elements, and to calculate the gross results. Finally, in the last process system calculates the net remuneration and performs the accounting in case there is any change in the master data from a previously processed payroll." }, { "code": null, "e": 5031, "s": 4868, "text": "Once this payroll run is completed, the results are transferred to Finance Accounting and evaluation. Then the posting is done for the corresponding cost centers." }, { "code": null, "e": 5184, "s": 5031, "text": "Payroll driver is used to run the payroll and their structure is based on that particular country’s laws, as each country has a specific payroll driver." }, { "code": null, "e": 5247, "s": 5184, "text": "Following are a couple of drivers with their technical names −" }, { "code": null, "e": 5348, "s": 5247, "text": "RPCALCx0 − Here, x represents the country specific code, like ‘D’ for Germany and F for France, etc." }, { "code": null, "e": 5449, "s": 5348, "text": "RPCALCx0 − Here, x represents the country specific code, like ‘D’ for Germany and F for France, etc." }, { "code": null, "e": 5529, "s": 5449, "text": "HxxCALC0 − Here, xx represents the ISO code for country, like ID for Indonesia." }, { "code": null, "e": 5609, "s": 5529, "text": "HxxCALC0 − Here, xx represents the ISO code for country, like ID for Indonesia." }, { "code": null, "e": 5779, "s": 5609, "text": "This represents the calculation rules used by the payroll driver. In SAP Payroll system, you have country-specific schemas X000 where X represents the country indicator." }, { "code": null, "e": 5839, "s": 5779, "text": "The Schema structure consists of the following components −" }, { "code": null, "e": 5854, "s": 5839, "text": "Initialization" }, { "code": null, "e": 5863, "s": 5854, "text": "Step 1 −" }, { "code": null, "e": 5894, "s": 5863, "text": "Includes updating the database" }, { "code": null, "e": 5918, "s": 5894, "text": "Importing the Infotypes" }, { "code": null, "e": 5940, "s": 5918, "text": "Calculating gross pay" }, { "code": null, "e": 5949, "s": 5940, "text": "Step 2 −" }, { "code": null, "e": 5994, "s": 5949, "text": "Processing of time data from time management" }, { "code": null, "e": 6016, "s": 5994, "text": "Off cycle payroll run" }, { "code": null, "e": 6061, "s": 6016, "text": "Payroll accounting of last processed payroll" }, { "code": null, "e": 6138, "s": 6061, "text": "Calculating time related data and calculating gross amount for each employee" }, { "code": null, "e": 6157, "s": 6138, "text": "Performing factors" }, { "code": null, "e": 6177, "s": 6157, "text": "Calculating Net pay" }, { "code": null, "e": 6186, "s": 6177, "text": "Step 3 −" }, { "code": null, "e": 6219, "s": 6186, "text": "Calculating the net remuneration" }, { "code": null, "e": 6249, "s": 6219, "text": "Performing the bank transfers" }, { "code": null, "e": 6437, "s": 6249, "text": "It is also possible to automate the payroll run partially or fully and schedule it to run in the background. SAP recommends a few tasks to be run in the background for better performance." }, { "code": null, "e": 6531, "s": 6437, "text": "For example − Payroll can be run in the night and you can check the results the next morning." 
}, { "code": null, "e": 6582, "s": 6531, "text": "Go to System → Service → Jobs → Define Job or SM36" }, { "code": null, "e": 6825, "s": 6582, "text": "You can define the job here to let the payroll run to process in the background. These background jobs are processed using a Computing Center Management System (CCMS) in the SAP system. The CCMS can be used to perform the following functions." }, { "code": null, "e": 6896, "s": 6825, "text": "The configuration and monitoring of this background processing system." }, { "code": null, "e": 6951, "s": 6896, "text": "Managing and scheduling background jobs in the system." }, { "code": null, "e": 7059, "s": 6951, "text": "To schedule a background job, enter the Job Name. Enter the job class that defines the priority of the job." }, { "code": null, "e": 7102, "s": 7059, "text": "You can define three types of priorities −" }, { "code": null, "e": 7117, "s": 7102, "text": "Class A - High" }, { "code": null, "e": 7134, "s": 7117, "text": "Class B - Medium" }, { "code": null, "e": 7148, "s": 7134, "text": "Class C - Low" }, { "code": null, "e": 7338, "s": 7148, "text": "You can also define the system for load balancing in the target filed. If you want the system to select the server automatically for load balancing purpose, you can leave this option blank." }, { "code": null, "e": 7481, "s": 7338, "text": "If you want the spool request generated from this job to be sent to someone using email, you can mention the same in the Spool list recipient." }, { "code": null, "e": 7746, "s": 7481, "text": "To define a start condition, click on the Start condition tab, there are various options that you can use to define the Start condition. If you want to create a periodic job, check the box at the bottom left side of the screen as shown in the following screenshot." }, { "code": null, "e": 7977, "s": 7746, "text": "Define the steps of the job by clicking the Step tab. You can specify the ABAP Program, external command or any external program to be used for each step. The next step is to save the job to submit to background processing system." }, { "code": null, "e": 8118, "s": 7977, "text": "Note − You have to release a job to make it run. No job even those scheduled for immediate processing, can run without first being released." }, { "code": null, "e": 8295, "s": 8118, "text": "Off-Cycle activities are carried out to process payroll for an employee on any day unlike payroll run that is a periodic activity and schedule to run at specific time interval." }, { "code": null, "e": 8457, "s": 8295, "text": "In order to perform Off-Cycle activities, you should define an Off-Cycle activity section in customizing for payroll. Off-Cycle consists of the following areas −" }, { "code": null, "e": 8596, "s": 8457, "text": "It provides a uniform user interface for all the Off-Cycle activities. You can perform the following functions in an Off-Cycle workbench −" }, { "code": null, "e": 8699, "s": 8596, "text": "To make a bonus payment to an employee on a special occasion like a marriage gift, new born baby, etc." }, { "code": null, "e": 8802, "s": 8699, "text": "To make a bonus payment to an employee on a special occasion like a marriage gift, new born baby, etc." }, { "code": null, "e": 8906, "s": 8802, "text": "To perform an immediate correction run.\nFor example − Consider where employee master data got modified." }, { "code": null, "e": 8946, "s": 8906, "text": "To perform an immediate correction run." 
}, { "code": null, "e": 9010, "s": 8946, "text": "For example − Consider where employee master data got modified." }, { "code": null, "e": 9053, "s": 9010, "text": "To pay an absence like a leave in advance." }, { "code": null, "e": 9096, "s": 9053, "text": "To pay an absence like a leave in advance." }, { "code": null, "e": 9186, "s": 9096, "text": "To process the payments that are added to Payroll Results Adjustment under Infotype 0221." }, { "code": null, "e": 9276, "s": 9186, "text": "To process the payments that are added to Payroll Results Adjustment under Infotype 0221." }, { "code": null, "e": 9411, "s": 9276, "text": "Consider a case where the payment was made but not received by an employee. To perform a replacement, you can use Off-Cycle workbench." }, { "code": null, "e": 9546, "s": 9411, "text": "Consider a case where the payment was made but not received by an employee. To perform a replacement, you can use Off-Cycle workbench." }, { "code": null, "e": 9587, "s": 9546, "text": "If you want to reverse a payroll result." }, { "code": null, "e": 9628, "s": 9587, "text": "If you want to reverse a payroll result." }, { "code": null, "e": 9767, "s": 9628, "text": "This is one of the key components that allows you to check the previous payroll run results for an employee within an Off-Cycle workbench." }, { "code": null, "e": 10063, "s": 9767, "text": "In the Off-Cycle workbench, go to History tab to display an extract from the payroll which contains all the necessary information of an employee payroll. It also shows details of all the payments that are replaced with a check along with any payroll’s which are reversed are also mentioned here." }, { "code": null, "e": 10217, "s": 10063, "text": "If you want to check any further details on an employee payroll, you can check the remuneration statement for the employee for a specific payroll period." }, { "code": null, "e": 10318, "s": 10217, "text": "You can also check the following details about the payment made in the History tab under workbench −" }, { "code": null, "e": 10425, "s": 10318, "text": "For reverse payment you can check the reason for reversal and person who has carried out reversal payment." }, { "code": null, "e": 10532, "s": 10425, "text": "For reverse payment you can check the reason for reversal and person who has carried out reversal payment." }, { "code": null, "e": 10638, "s": 10532, "text": "To check the replace payment details, you can find which payments are replaced and by which check number." }, { "code": null, "e": 10744, "s": 10638, "text": "To check the replace payment details, you can find which payments are replaced and by which check number." }, { "code": null, "e": 10785, "s": 10744, "text": "Details of check number, bank name, etc." }, { "code": null, "e": 10826, "s": 10785, "text": "Details of check number, bank name, etc." }, { "code": null, "e": 10941, "s": 10826, "text": "Note that to view the remuneration statement of a payroll → select the result and choose → Remuneration Statement." }, { "code": null, "e": 11225, "s": 10941, "text": "This is used to further process the Off-Cycle payroll results, a payment reversal or repayment, etc. When a bonus payment is made using a workbench, a replaced or reverse payroll, remuneration statement should be generated and results from payroll run should be posted to Accounting." 
}, { "code": null, "e": 11416, "s": 11225, "text": "All the details related to Off-Cycle payroll, reverse payment or repayment is stored in table T52OCG and is available in the report H99LT52OCG and this report is available in Off-Cycle menu." }, { "code": null, "e": 11783, "s": 11416, "text": "Subsequent processing is performed by running one or more batch reports and to ensure that subsequent processing is performed in the correct sequence. You should schedule the report for a Batch Subsequent Processing in the international standard system as regular background jobs. With scheduling report that subsequent processing is conducted regularly and on time." }, { "code": null, "e": 11993, "s": 11783, "text": "Process model is used to define a subsequent program and order in which they run. When you select a report for Batch Subsequent Processing, you also have to define the process model report that should be used." }, { "code": null, "e": 12174, "s": 11993, "text": "Off–Cycle subsequent processing, it is possible to schedule the batch report in background job with process model or you can also call it in a workbench menu and run it from there." }, { "code": null, "e": 12272, "s": 12174, "text": "According to the function executed in an Off-Cycle workbench, different activities are performed." }, { "code": null, "e": 12316, "s": 12272, "text": "For example − Consider replacing a payment." }, { "code": null, "e": 12362, "s": 12316, "text": "Runs Preliminary Program Data Medium Exchange" }, { "code": null, "e": 12470, "s": 12362, "text": "Indicates each payment replacement with a key composed of program run date and the indicator feature CYYYP." }, { "code": null, "e": 12543, "s": 12470, "text": "Enters the details in the indicator table for off-cycle batch processing" }, { "code": null, "e": 12709, "s": 12543, "text": "Runs the batch report for subsequent processing of check replacement as a background job at the time that you have scheduled for the regular processing of the report" }, { "code": null, "e": 12731, "s": 12709, "text": "Reads indicator table" }, { "code": null, "e": 12800, "s": 12731, "text": "Runs the process model that you have specified in the report variant" }, { "code": null, "e": 12818, "s": 12800, "text": "Prints new checks" }, { "code": null, "e": 12953, "s": 12818, "text": "To maintain a master data in SAP system, there are different Infotypes defined in SAP system for Personnel Administration and payroll." }, { "code": null, "e": 12971, "s": 12953, "text": "Company insurance" }, { "code": null, "e": 12996, "s": 12971, "text": "Group accident insurance" }, { "code": null, "e": 13011, "s": 12996, "text": "Life insurance" }, { "code": null, "e": 13035, "s": 13011, "text": "Supplementary insurance" }, { "code": null, "e": 13046, "s": 13035, "text": "Not liable" }, { "code": null, "e": 13051, "s": 13046, "text": "Risk" }, { "code": null, "e": 13064, "s": 13051, "text": "Risk/pension" }, { "code": null, "e": 13077, "s": 13064, "text": "Nursing care" }, { "code": null, "e": 13086, "s": 13077, "text": "Sick pay" }, { "code": null, "e": 13098, "s": 13086, "text": "Sports club" }, { "code": null, "e": 13106, "s": 13098, "text": "Medical" }, { "code": null, "e": 13118, "s": 13106, "text": "Union, etc." }, { "code": null, "e": 13362, "s": 13118, "text": "Payroll system consists of Date Specifications and monitoring of tasks Infotypes. 
Using monitoring of task, you can set automatic monitoring of tasks for HR activities and system suggest a date when you want to be reminded of the stored tasks." }, { "code": null, "e": 13566, "s": 13362, "text": "This is stored in Infotype 0041 and date type defines the type of information. You can create series of reports on specific date type. You can use this Infotype to run Payroll and also to maintain leave." }, { "code": null, "e": 13746, "s": 13566, "text": "In a standard payroll system, it contains 12 combinations of date type and date and to add more date specification for an employee at the same time, you can use time constraint 3." }, { "code": null, "e": 14062, "s": 13746, "text": "You can also create an automatic monitoring of all HR related tasks that includes follow up activities to be performed and it is maintained in Infotype 0019. System suggests a date according to task type on which you will be reminded and this allows you to perform follow up activities as per the required schedule." }, { "code": null, "e": 14215, "s": 14062, "text": "The reminder date in the system is used to determine when you want to be reminded for a task type. Reminder date can be defined based on this criteria −" }, { "code": null, "e": 14354, "s": 14215, "text": "When you select a task type, if the operator indicator has a blank or negative (-) value then reminder should be set before the task data." }, { "code": null, "e": 14455, "s": 14354, "text": "If the Operator indicator has a positive (+) value, reminder date shouldn’t be before the task date." }, { "code": null, "e": 14592, "s": 14455, "text": "Note − Payroll system also suggests a reminder date for each task independent of task type and you can change this at any point of time." }, { "code": null, "e": 14660, "s": 14592, "text": "Following are a few task types that can be added under monitoring −" }, { "code": null, "e": 14679, "s": 14660, "text": "Temporary contract" }, { "code": null, "e": 14707, "s": 14679, "text": "Expiry of inactive contract" }, { "code": null, "e": 14736, "s": 14707, "text": "Expiry of temporary contract" }, { "code": null, "e": 14751, "s": 14736, "text": "Next appraisal" }, { "code": null, "e": 14766, "s": 14751, "text": "Pay scale jump" }, { "code": null, "e": 14794, "s": 14766, "text": "End of maternity protection" }, { "code": null, "e": 14817, "s": 14794, "text": "End of maternity leave" }, { "code": null, "e": 14847, "s": 14817, "text": "Start of maternity protection" }, { "code": null, "e": 14863, "s": 14847, "text": "Training period" }, { "code": null, "e": 14884, "s": 14863, "text": "Dismissal protection" }, { "code": null, "e": 14903, "s": 14884, "text": "Personal interview" }, { "code": null, "e": 14920, "s": 14903, "text": "Vaccination date" }, { "code": null, "e": 14938, "s": 14920, "text": "Follow-up medical" }, { "code": null, "e": 14958, "s": 14938, "text": "Submit SI statement" }, { "code": null, "e": 14970, "s": 14958, "text": "Work permit" }, { "code": null, "e": 14990, "s": 14970, "text": "Work permit expires" }, { "code": null, "e": 15014, "s": 14990, "text": "End of leave of absence" }, { "code": null, "e": 15034, "s": 15014, "text": "Expiry of probation" }, { "code": null, "e": 15153, "s": 15034, "text": "This contains Infotype related to employee’s previous/other work experience, education and training and qualification." 
}, { "code": null, "e": 15185, "s": 15153, "text": "Other/Previous Employers (0023)" }, { "code": null, "e": 15215, "s": 15185, "text": "Education and training (0022)" }, { "code": null, "e": 15237, "s": 15215, "text": "Qualifications (0024)" }, { "code": null, "e": 15509, "s": 15237, "text": "This is used to store other employer contract of an employee. You can store the information where an employee works or has worked before working for your company. To enter multiple employer details, you can add multiple data records and validity period for each employee." }, { "code": null, "e": 15629, "s": 15509, "text": "Enter the employer’s name and the country for each employer. The following information can be stored in this Infotype −" }, { "code": null, "e": 15666, "s": 15629, "text": "City HQ – where the company is based" }, { "code": null, "e": 15702, "s": 15666, "text": "Industry in which company is active" }, { "code": null, "e": 15768, "s": 15702, "text": "Job role that an employee or applicant carried out or carries out" }, { "code": null, "e": 15810, "s": 15768, "text": "Type of work contract with other employer" }, { "code": null, "e": 16020, "s": 15810, "text": "This is used to store employee/application qualification details in this Infotype. Incase to store information on more than one qualification for an employee, you can also create multiple data records in this." }, { "code": null, "e": 16215, "s": 16020, "text": "Each qualification type is identified by a key and you can also add proficiency level for each qualification. Proficiency level defines the knowledge and skill of an employee on a qualification." }, { "code": null, "e": 16273, "s": 16215, "text": "Proficiency level can be defined in the following order −" }, { "code": null, "e": 16306, "s": 16273, "text": "Proficiency 0 means non-valuated" }, { "code": null, "e": 16339, "s": 16306, "text": "Proficiency 0 means non-valuated" }, { "code": null, "e": 16373, "s": 16339, "text": "Proficiency 1 means very poor\n..." }, { "code": null, "e": 16403, "s": 16373, "text": "Proficiency 1 means very poor" }, { "code": null, "e": 16407, "s": 16403, "text": "..." }, { "code": null, "e": 16439, "s": 16407, "text": "Proficiency 5 means Average\n..." }, { "code": null, "e": 16467, "s": 16439, "text": "Proficiency 5 means Average" }, { "code": null, "e": 16471, "s": 16467, "text": "..." }, { "code": null, "e": 16501, "s": 16471, "text": "Proficiency 9 means excellent" }, { "code": null, "e": 16531, "s": 16501, "text": "Proficiency 9 means excellent" }, { "code": null, "e": 16875, "s": 16531, "text": "This is used to store education details of an employee/applicant. To store the details about the complete education and training history of an employee/applicant, you have to create as many data records as necessary for the respective subtypes of this Infotype. You can enter the respective dates of the training period as the validity period." }, { "code": null, "e": 16953, "s": 16875, "text": "The following subtypes can be created for each education establishment type −" }, { "code": null, "e": 17039, "s": 16953, "text": "Institute/Place − This contains institute details like University, college name, etc." }, { "code": null, "e": 17125, "s": 17039, "text": "Institute/Place − This contains institute details like University, college name, etc." }, { "code": null, "e": 17227, "s": 17125, "text": "Country Key − It is used to contain the country in which the education/training institution is based." 
}, { "code": null, "e": 17329, "s": 17227, "text": "Country Key − It is used to contain the country in which the education/training institution is based." }, { "code": null, "e": 17459, "s": 17329, "text": "Certificate − This is used to maintain possible leaving certificates in relation to the educational establishment type specified." }, { "code": null, "e": 17589, "s": 17459, "text": "Certificate − This is used to maintain possible leaving certificates in relation to the educational establishment type specified." }, { "code": null, "e": 17670, "s": 17589, "text": "Duration of Course − This is used to specify the length of each course of study." }, { "code": null, "e": 17751, "s": 17670, "text": "Duration of Course − This is used to specify the length of each course of study." }, { "code": null, "e": 17763, "s": 17751, "text": "Final Marks" }, { "code": null, "e": 17775, "s": 17763, "text": "Final Marks" }, { "code": null, "e": 17877, "s": 17775, "text": "Branch of Study − This includes the specialization of education like ECE, Computers, Mechanical, etc." }, { "code": null, "e": 17979, "s": 17877, "text": "Branch of Study − This includes the specialization of education like ECE, Computers, Mechanical, etc." }, { "code": null, "e": 18205, "s": 17979, "text": "This is used to an employee’s communication id for a certain type of communication. You can define various subtypes under this Infotype to maintain communication details of an employee. The following subtypes can be defined −" }, { "code": null, "e": 18224, "s": 18205, "text": "Credit Card number" }, { "code": null, "e": 18241, "s": 18224, "text": "Internet Address" }, { "code": null, "e": 18252, "s": 18241, "text": "Voice Mail" }, { "code": null, "e": 18262, "s": 18252, "text": "Fax, etc." }, { "code": null, "e": 18558, "s": 18262, "text": "This subtype is used to store the employee’s credit card number for clearing, so the items booked on a credit card should be assigned to a personnel number in the system. This is more helpful incase an employee contains multiple credit cards or credit cards from different credit card companies." }, { "code": null, "e": 18764, "s": 18558, "text": "You can also maintain different card numbers for different companies – first two positions of the ID/number field have been defined with an ID code that corresponds to the individual credit card companies." }, { "code": null, "e": 18918, "s": 18764, "text": "This contains Infotype for test procedure and contains the test procedure for your employee. Test procedure includes test procedure key and release date." }, { "code": null, "e": 19044, "s": 18918, "text": "You can store the following information in Infotype 0130. All this information is defined by a system and cannot be entered −" }, { "code": null, "e": 19049, "s": 19044, "text": "Date" }, { "code": null, "e": 19054, "s": 19049, "text": "Time" }, { "code": null, "e": 19071, "s": 19054, "text": "Releaser User ID" }, { "code": null, "e": 19100, "s": 19071, "text": "Program to implement release" }, { "code": null, "e": 19332, "s": 19100, "text": "When a test procedure is performed for an employee up to a certain release date, then write authorization may no longer be performed which involves changing certain Infotype data with validity start date is before the release date." }, { "code": null, "e": 19434, "s": 19332, "text": "Personal Data − This is used to maintain personal information for an employee in different Infotypes." 
}, { "code": null, "e": 19559, "s": 19434, "text": "This is used to store the address information of an employee. Various subtypes can be maintained under the Address Infotype." }, { "code": null, "e": 19577, "s": 19559, "text": "Permanent Address" }, { "code": null, "e": 19595, "s": 19577, "text": "Residence Address" }, { "code": null, "e": 19608, "s": 19595, "text": "Home Address" }, { "code": null, "e": 19624, "s": 19608, "text": "Mailing Address" }, { "code": null, "e": 19749, "s": 19624, "text": "This is used to maintain the bank account details to process the net pay of travel expenses from payroll from the HR module." }, { "code": null, "e": 19890, "s": 19749, "text": "This Infotype is used to maintain legal obligations for severely challenged persons. Different subtypes can be defined under this Infotype −" }, { "code": null, "e": 19906, "s": 19890, "text": "Challenge Group" }, { "code": null, "e": 19926, "s": 19906, "text": "Degree of Challenge" }, { "code": null, "e": 19940, "s": 19926, "text": "Credit Factor" }, { "code": null, "e": 19958, "s": 19940, "text": "Type of Challenge" }, { "code": null, "e": 20034, "s": 19958, "text": "This Infotype is used to store the information for identifying an employee." }, { "code": null, "e": 20048, "s": 20034, "text": "For example −" }, { "code": null, "e": 20053, "s": 20048, "text": "Name" }, { "code": null, "e": 20068, "s": 20053, "text": "Marital status" }, { "code": null, "e": 20086, "s": 20068, "text": "Nationality, etc." }, { "code": null, "e": 20246, "s": 20086, "text": "This Infotype is used to maintain an employee’s family member and relative details. The following relationship types can be maintained in the standard system −" }, { "code": null, "e": 20253, "s": 20246, "text": "Spouse" }, { "code": null, "e": 20269, "s": 20253, "text": "Divorced spouse" }, { "code": null, "e": 20276, "s": 20269, "text": "Father" }, { "code": null, "e": 20283, "s": 20276, "text": "Mother" }, { "code": null, "e": 20289, "s": 20283, "text": "Child" }, { "code": null, "e": 20304, "s": 20289, "text": "Legal guardian" }, { "code": null, "e": 20333, "s": 20304, "text": "Guardian and Related persons" }, { "code": null, "e": 20351, "s": 20333, "text": "Emergency contact" }, { "code": null, "e": 20479, "s": 20351, "text": "This Infotype is used to store the data related to employee’s medical examination. Various subtypes can be defined under this −" }, { "code": null, "e": 20491, "s": 20479, "text": "Blood Group" }, { "code": null, "e": 20498, "s": 20491, "text": "Habits" }, { "code": null, "e": 20505, "s": 20498, "text": "Vision" }, { "code": null, "e": 20513, "s": 20505, "text": "Allergy" }, { "code": null, "e": 20526, "s": 20513, "text": "Hearing Test" }, { "code": null, "e": 20547, "s": 20526, "text": "Nervous system, etc." }, { "code": null, "e": 20676, "s": 20547, "text": "Using Action Infotype you can combine several Infotypes into one group. 
You can use Personnel action for the following purpose −" }, { "code": null, "e": 20695, "s": 20676, "text": "Hiring an Employee" }, { "code": null, "e": 20731, "s": 20695, "text": "To change assignment of an employee" }, { "code": null, "e": 20753, "s": 20731, "text": "To perform pay change" }, { "code": null, "e": 20787, "s": 20753, "text": "Employee leaving the organization" }, { "code": null, "e": 20835, "s": 20787, "text": "This section contains the following Infotypes −" }, { "code": null, "e": 20979, "s": 20835, "text": "This contains the general instructions that an employee is supposed to perform – data protection, accident prevention, other instructions, etc." }, { "code": null, "e": 21077, "s": 20979, "text": "This Infotype is used to maintain an employee’s corporate function like work council member, etc." }, { "code": null, "e": 21173, "s": 21077, "text": "This Infotype is used to maintain data on company car, employee identification and work center." }, { "code": null, "e": 21532, "s": 21173, "text": "This is used to compare three personnel numbers while processing the payroll. When an employee loses his bonus, night work allowance cos of his involvement in work council, this is used to process his bonus by comparing with similar personnel for this purpose. This Infotype is maintained only for those employees which are involved in work council function." }, { "code": null, "e": 21688, "s": 21532, "text": "This Infotype is used to maintain details of all the assets that have been provided to employee as loan. You can define the following subtypes under this −" }, { "code": null, "e": 21695, "s": 21688, "text": "Key(s)" }, { "code": null, "e": 21704, "s": 21695, "text": "Clothing" }, { "code": null, "e": 21710, "s": 21704, "text": "Books" }, { "code": null, "e": 21718, "s": 21710, "text": "Tool(s)" }, { "code": null, "e": 21727, "s": 21718, "text": "Plant ID" }, { "code": null, "e": 21931, "s": 21727, "text": "This Infotype is used to store the data related to employee’s employment contract. While creating a record for Contract Elements Infotype (0016), system suggests default values for the following fields −" }, { "code": null, "e": 21945, "s": 21931, "text": "Contract type" }, { "code": null, "e": 21954, "s": 21945, "text": "Sick pay" }, { "code": null, "e": 21971, "s": 21954, "text": "Probation period" }, { "code": null, "e": 21985, "s": 21971, "text": "Continued pay" }, { "code": null, "e": 22006, "s": 21985, "text": "Notice period for EE" }, { "code": null, "e": 22027, "s": 22006, "text": "Notice period for ER" }, { "code": null, "e": 22173, "s": 22027, "text": "These default values are determined by the company code, personnel area and employee group/subgroup in Organizational Assignment Infotype (0001)." }, { "code": null, "e": 22295, "s": 22173, "text": "This Infotype is used to store any special authority/privilege that has been assigned to an employee − Power of Attorney." }, { "code": null, "e": 22342, "s": 22295, "text": "Different subtypes can be defined under this −" }, { "code": null, "e": 22371, "s": 22342, "text": "Commercial power of attorney" }, { "code": null, "e": 22408, "s": 22371, "text": "General commercial power of attorney" }, { "code": null, "e": 22458, "s": 22408, "text": "Power of attorney to perform banking transactions" }, { "code": null, "e": 22748, "s": 22458, "text": "Pay scale grouping for allowances is performed to add similar type of employees in a group and similar characteristics are applied on each group. 
This is used to determine: compensation structure as per grouping, payroll processing procedure, and the value of compensation for an employee." }, { "code": null, "e": 22915, "s": 22748, "text": "While defining the payroll processing, grouping is the first step that is performed. Wage types can’t be defined till you define the pay scale grouping for allowances." }, { "code": null, "e": 22986, "s": 22915, "text": "Pay scale grouping for allowances is defined based on a few parameters −" }, { "code": null, "e": 23001, "s": 22986, "text": "Pay scale area" }, { "code": null, "e": 23016, "s": 23001, "text": "Pay scale type" }, { "code": null, "e": 23032, "s": 23016, "text": "Pay scale group" }, { "code": null, "e": 23048, "s": 23032, "text": "Pay scale level" }, { "code": null, "e": 23501, "s": 23048, "text": "As an example, consider a company with offices in Hyderabad, Bangalore, Mumbai, Delhi and Chennai. The location where an employee is based affects the compensation to a certain level. In this case, it is possible to assign the different cities to pay scale areas, and hence the pay scale area becomes a key pay parameter to create the pay scale grouping for allowances. In a similar way, you can define the other pay scale parameters depending on various factors." }, { "code": null, "e": 23674, "s": 23501, "text": "In SAP Easy access menu → SPRO → IMG → Personnel Management → Personnel Administration → Payroll data → Basic Pay → Define EE Subgroup Grouping for PCR and Coll.Agrmt.Prov." }, { "code": null, "e": 23821, "s": 23674, "text": "It will show you the list of EE groups, the EE group names and the different fields associated with them. If you want to change them, this can be done here." }, { "code": null, "e": 24089, "s": 23821, "text": "Pay Scale Grouping for Allowances is not defined in any of the Infotypes. You can’t put an employee directly into a pay scale grouping for allowances. When you define the five different pay parameters, an employee is directly assigned to a pay scale grouping for allowances." }, { "code": null, "e": 24442, "s": 24089, "text": "By entering an Employee Group and Employee Subgroup in the Organizational Assignment Infotype (0001), and the pay scale area, pay scale type, pay scale group and pay scale level in the Basic Pay Infotype (0008), the system adds the employee to a pay scale grouping for allowances automatically. So, the pay scale grouping is defined as an assignment of the pay parameters." }, { "code": null, "e": 24530, "s": 24442, "text": "Go to SPRO → IMG → Payroll → Payroll: India → Assign Pay scale grouping for allowances." }, { "code": null, "e": 24629, "s": 24530, "text": "In the next window that comes up, you can see the pay parameters associated with the pay scale grouping."
}, { "code": null, "e": 24637, "s": 24629, "text": "PS Area" }, { "code": null, "e": 24645, "s": 24637, "text": "PS Type" }, { "code": null, "e": 24654, "s": 24645, "text": "PS Group" }, { "code": null, "e": 24663, "s": 24654, "text": "PS Level" }, { "code": null, "e": 24743, "s": 24663, "text": "Pay Scale grouping for allowances can decide the following objects in Payroll −" }, { "code": null, "e": 24754, "s": 24743, "text": "Wage types" }, { "code": null, "e": 24782, "s": 24754, "text": "Basic salary and increments" }, { "code": null, "e": 24801, "s": 24782, "text": "Dearness Allowance" }, { "code": null, "e": 24830, "s": 24801, "text": "Housing and Car & Conveyance" }, { "code": null, "e": 24866, "s": 24830, "text": "Recurring allowances and deductions" }, { "code": null, "e": 24903, "s": 24866, "text": "Reimbursements, Allowances and Perks" }, { "code": null, "e": 24926, "s": 24903, "text": "Leave Travel Allowance" }, { "code": null, "e": 24935, "s": 24926, "text": "Gratuity" }, { "code": null, "e": 24950, "s": 24935, "text": "Superannuation" }, { "code": null, "e": 24975, "s": 24950, "text": "Long Term Reimbursements" }, { "code": null, "e": 24997, "s": 24975, "text": "Rounding off Recovery" }, { "code": null, "e": 25012, "s": 24997, "text": "Provident Fund" }, { "code": null, "e": 25257, "s": 25012, "text": "Mid-Year Go Live data is used in countries where payroll is implemented in the middle of financial year. This is used for transferring legacy payroll data to the SAP System and also for creating payroll results from the transferred legacy data." }, { "code": null, "e": 25563, "s": 25257, "text": "For Example − You can consider a case for India where income tax assessment year is performed from 1st April − 31st March. Now to implement SAP Payroll India in the middle of a Financial Year, there is a need to transfer payroll results for those periods of the financial year that lie before that period." }, { "code": null, "e": 25681, "s": 25563, "text": "This is defined as a period for which the payroll results are available and need to be transferred to the SAP system." }, { "code": null, "e": 25767, "s": 25681, "text": "This period is defined as the term when you process the first productive payroll run." }, { "code": null, "e": 25975, "s": 25767, "text": "This is used to rehire an employee by using the same Personnel number as used in the time of last employment or within same financial year. The action type associated with this is – Reentry into the company." }, { "code": null, "e": 26136, "s": 25975, "text": "In case of rehiring an employee, if previous records are not delimited, you will have to delimit the previous records and there is a need to create new entries." }, { "code": null, "e": 26208, "s": 26136, "text": "The following Infotype value needs to be updated for this action type −" }, { "code": null, "e": 26245, "s": 26208, "text": "Recurring Payments/Deductions (0014)" }, { "code": null, "e": 26282, "s": 26245, "text": "Recurring Payments/Deductions (0014)" }, { "code": null, "e": 26315, "s": 26282, "text": "Organizational Assignment (0001)" }, { "code": null, "e": 26348, "s": 26315, "text": "Organizational Assignment (0001)" }, { "code": null, "e": 26406, "s": 26348, "text": "Membership Fees (0057), Example: sports club, Union, etc." }, { "code": null, "e": 26464, "s": 26406, "text": "Membership Fees (0057), Example: sports club, Union, etc." 
}, { "code": null, "e": 26496, "s": 26464, "text": "Family Member/Dependents (0021)" }, { "code": null, "e": 26528, "s": 26496, "text": "Family Member/Dependents (0021)" }, { "code": null, "e": 26562, "s": 26528, "text": "Other Statutory Deductions (0588)" }, { "code": null, "e": 26596, "s": 26562, "text": "Other Statutory Deductions (0588)" }, { "code": null, "e": 26628, "s": 26596, "text": "Long term reimbursements (0590)" }, { "code": null, "e": 26660, "s": 26628, "text": "Long term reimbursements (0590)" }, { "code": null, "e": 26711, "s": 26660, "text": "Housing (0581), Example − HRA, Company owned, etc." }, { "code": null, "e": 26762, "s": 26711, "text": "Housing (0581), Example − HRA, Company owned, etc." }, { "code": null, "e": 27069, "s": 26762, "text": "While running the payroll for a rehired employee, payroll function checks the status of the rehired employee’s employment in the system. If the system is showing the present status as active preceded with withdrawn and active status within same Financial Year, this represents that the employee is rehired." }, { "code": null, "e": 27150, "s": 27069, "text": "The status of an employee’s employment is maintained in the internal table COCD." }, { "code": null, "e": 27336, "s": 27150, "text": "To check the previous payroll data for a rehired employee – earning, deductions, and exemptions, this can be checked using the Results Table (RT) and the Cumulative Results Table (CRT)." }, { "code": null, "e": 27419, "s": 27336, "text": "The Payroll function INPET is used to process the previous employment tax details." }, { "code": null, "e": 27460, "s": 27419, "text": "The following wage types are generated −" }, { "code": null, "e": 27599, "s": 27460, "text": "Wage Type /4V1 to /4V9 − This is created to maintain details of the employee’s employment in the other company in the same Financial Year." }, { "code": null, "e": 27738, "s": 27599, "text": "Wage Type /4V1 to /4V9 − This is created to maintain details of the employee’s employment in the other company in the same Financial Year." }, { "code": null, "e": 27903, "s": 27738, "text": "Wage Type /4VA to /4Vg (From internal table 16) − This is created to maintain employee’s previous employment details in the same company in the same Financial Year." }, { "code": null, "e": 28068, "s": 27903, "text": "Wage Type /4VA to /4Vg (From internal table 16) − This is created to maintain employee’s previous employment details in the same company in the same Financial Year." 
}, { "code": null, "e": 28154, "s": 28068, "text": "The following components of the employee’s tax is calculated for a rehired employee −" }, { "code": null, "e": 28324, "s": 28154, "text": "Tax Exemptions on −\n\nHouse Rent Allowance (HRA) (Metro or non-Metro)\nLeave Travel Allowance (LTA)\nChild Education Allowance or Tuition fee\nChild Hostel Allowance (CHA)\n\n" }, { "code": null, "e": 28344, "s": 28324, "text": "Tax Exemptions on −" }, { "code": null, "e": 28392, "s": 28344, "text": "House Rent Allowance (HRA) (Metro or non-Metro)" }, { "code": null, "e": 28440, "s": 28392, "text": "House Rent Allowance (HRA) (Metro or non-Metro)" }, { "code": null, "e": 28469, "s": 28440, "text": "Leave Travel Allowance (LTA)" }, { "code": null, "e": 28498, "s": 28469, "text": "Leave Travel Allowance (LTA)" }, { "code": null, "e": 28539, "s": 28498, "text": "Child Education Allowance or Tuition fee" }, { "code": null, "e": 28580, "s": 28539, "text": "Child Education Allowance or Tuition fee" }, { "code": null, "e": 28609, "s": 28580, "text": "Child Hostel Allowance (CHA)" }, { "code": null, "e": 28638, "s": 28609, "text": "Child Hostel Allowance (CHA)" }, { "code": null, "e": 28701, "s": 28638, "text": "The following perquisites are checked before calculating tax −" }, { "code": null, "e": 28729, "s": 28701, "text": "Company Owned Accommodation" }, { "code": null, "e": 28763, "s": 28729, "text": "Company Paid/Leased Accommodation" }, { "code": null, "e": 28769, "s": 28763, "text": "Loans" }, { "code": null, "e": 28840, "s": 28769, "text": "The Payroll system also checks the below deductions for the employee −" }, { "code": null, "e": 28868, "s": 28840, "text": "Deductions under section 80" }, { "code": null, "e": 28886, "s": 28868, "text": "Section 89 relief" }, { "code": null, "e": 28903, "s": 28886, "text": "Professional Tax" }, { "code": null, "e": 28934, "s": 28903, "text": "Labor Welfare Fund (LWF), etc." }, { "code": null, "e": 28965, "s": 28934, "text": "Employee State Insurance (ESI)" }, { "code": null, "e": 29001, "s": 28965, "text": "EPF Provident Fund and Pension Fund" }, { "code": null, "e": 29169, "s": 29001, "text": "A Split Payroll is run for the following periods – First of the month to one day before the employee is rehired. And from the date of rehiring to the end of the month." }, { "code": null, "e": 29346, "s": 29169, "text": "When an employee is rehired on any day other than the first, a split payroll is enabled. Go to SPRO → IMG → Payroll → Payroll India → Basic Settings → Enable Split Payroll Run." }, { "code": null, "e": 29514, "s": 29346, "text": "In a new window, you will see the list of all split payroll in the system. To create a new entry, click on the New Entries tab at the top left hand side of the screen." }, { "code": null, "e": 29637, "s": 29514, "text": "Enter the values: Act. 12 stands for re-entry into the company. In a similar way, you can select the other fields as well." }, { "code": null, "e": 29730, "s": 29637, "text": "Once you enter all the details, click the save icon at the top left hand side of the screen." }, { "code": null, "e": 29771, "s": 29730, "text": "An example of rehiring and payroll run −" }, { "code": null, "e": 29897, "s": 29771, "text": "An employee left a company on May 17, 2015 and was rehired on Nov 25, 2015. In this case, November payroll will be run twice." }, { "code": null, "e": 29950, "s": 29897, "text": "For the period between Nov 1, 2015 and Nov 24, 2015." 
}, { "code": null, "e": 30003, "s": 29950, "text": "For the period between Nov 1, 2015 and Nov 24, 2015." }, { "code": null, "e": 30056, "s": 30003, "text": "For the period between Nov 25, 2015 to Nov 30, 2015." }, { "code": null, "e": 30109, "s": 30056, "text": "For the period between Nov 25, 2015 to Nov 30, 2015." }, { "code": null, "e": 30321, "s": 30109, "text": "Indirect Evaluation is used to calculate payroll for some specific wage types that are defaulted under the Basic Pay Infotype (0008) or Infotype 0014 or 001 (recurring payment/deductions or Additional payments)." }, { "code": null, "e": 30492, "s": 30321, "text": "Note − While using indirect evaluation, it is also possible to calculate INVAL as numbers instead of using value as amount considering the wage type configured correctly." }, { "code": null, "e": 30630, "s": 30492, "text": "For example − You can configure INVAL for an employee to be eligible for 10 liters of petrol each month. This represents INVAL as number." }, { "code": null, "e": 30707, "s": 30630, "text": "There are quite a few types of variants for indirect evaluation, which are −" }, { "code": null, "e": 30784, "s": 30707, "text": "Variant A − This is used to calculate the wage type value as a fixed amount." }, { "code": null, "e": 30861, "s": 30784, "text": "Variant A − This is used to calculate the wage type value as a fixed amount." }, { "code": null, "e": 31227, "s": 30861, "text": "Variant B − This is used to calculate the amount as percentage of the base wage type added to a fixed amount. In this, multiple amounts with same or different percentage of the base wage type, can be calculated for an INVAL wage type. In this case, the amount that will be Indirectly Evaluated will be the sum of all such calculated amounts added to a fixed amount." }, { "code": null, "e": 31593, "s": 31227, "text": "Variant B − This is used to calculate the amount as percentage of the base wage type added to a fixed amount. In this, multiple amounts with same or different percentage of the base wage type, can be calculated for an INVAL wage type. In this case, the amount that will be Indirectly Evaluated will be the sum of all such calculated amounts added to a fixed amount." }, { "code": null, "e": 31673, "s": 31593, "text": "For example − Wage type M230, consider the following different INVAL B amounts." }, { "code": null, "e": 31803, "s": 31673, "text": "10% of MB10\n30% of M220\nFixed amount of Rs.1000\nSo in this scenario, wage type M230 will have INVAL amount as sum of a, b and c.\n" }, { "code": null, "e": 32177, "s": 31803, "text": "Variant C − This is used to calculate the amount as a percentage of a base wage type subject to a maximum limit. More than one such amount, with same or different percentage of the base wage type, can be calculated for an INVAL wage type. In this case, the amount that will be Indirectly Evaluated will be the sum of all such calculated amounts, subject to a maximum limit." }, { "code": null, "e": 32551, "s": 32177, "text": "Variant C − This is used to calculate the amount as a percentage of a base wage type subject to a maximum limit. More than one such amount, with same or different percentage of the base wage type, can be calculated for an INVAL wage type. In this case, the amount that will be Indirectly Evaluated will be the sum of all such calculated amounts, subject to a maximum limit." }, { "code": null, "e": 32631, "s": 32551, "text": "For example − Wage type M230, consider the following different INVAL C amounts." 
}, { "code": null, "e": 32783, "s": 32631, "text": "15% of MB10\n20% of M220\nLimit of Rs.4000\nIn this scenario, the INVAL amount for the wage type M230 will be the sum of a and b,\nsubject to a maximum of c.\n" }, { "code": null, "e": 32926, "s": 32783, "text": "Variant D − This is used to calculate the amount as one or any combination of the following INVAL Module variants based on Basic salary slabs." }, { "code": null, "e": 33069, "s": 32926, "text": "Variant D − This is used to calculate the amount as one or any combination of the following INVAL Module variants based on Basic salary slabs." }, { "code": null, "e": 33335, "s": 33069, "text": "This is used to calculate the fixed amount and the percentage of the basic slab. This is done by first calculating the percentage of a base wage type added to a fixed amount, and then the percentage of a base wage type which is subject to a maximum limit." }, { "code": null, "e": 33552, "s": 33335, "text": "The Gross Part of the Payroll is used to determine an employee’s gross pay as per the contractual requirements and consists of payments and deductions. The Gross Pay consists of different components, which include −" }, { "code": null, "e": 33562, "s": 33552, "text": "Basic Pay" }, { "code": null, "e": 33580, "s": 33562, "text": "Dearness Allowance" }, { "code": null, "e": 33600, "s": 33580, "text": "Variation allowance" }, { "code": null, "e": 33608, "s": 33600, "text": "Bonuses" }, { "code": null, "e": 33623, "s": 33608, "text": "Provident fund" }, { "code": null, "e": 33632, "s": 33623, "text": "Gratuity" }, { "code": null, "e": 33820, "s": 33632, "text": "Then there are different deductions that are made as per the employee enrollment. These deductions include company owned accommodation (COA), company sponsored day care, and other deductions." }, { "code": null, "e": 33938, "s": 33820, "text": "All these factors are based on a country’s legal labor rules and determine the gross taxable income of the employee." }, { "code": null, "e": 34102, "s": 33938, "text": "Wage type is one of the key components in payroll processing. Based on the way they store information, wage types can be divided into the following two categories −" }, { "code": null, "e": 34324, "s": 34102, "text": "Primary wage type is defined as the wage type for which data is entered in an Infotype. The Primary wage types are created by copying the model wage types provided by SAP. There are different types of primary wage types −" }, { "code": null, "e": 34339, "s": 34324, "text": "Time wage type" }, { "code": null, "e": 34573, "s": 34339, "text": "Time wage type is used to store the time related information. This wage type is used to combine payroll and time management. Time wage type is generated at the time of evaluation and is configured through T510S or using a custom PCR." }, { "code": null, "e": 34592, "s": 34573, "text": "Dialogue wage type" }, { "code": null, "e": 34708, "s": 34592, "text": "This wage type includes basic pay IT0008, recurring payments and deductions IT0014, and additional payments IT0015." }, { "code": null, "e": 34853, "s": 34708, "text": "The secondary wage types are predefined wage types in the SAP system and start with a ‘/’. These wage types are created during the payroll run." }, { "code": null, "e": 34923, "s": 34853, "text": "These wage types are system generated and can’t be maintained online."
}, { "code": null, "e": 34956, "s": 34923, "text": "For example − /559 Bank Transfer" }, { "code": null, "e": 34998, "s": 34956, "text": "The key elements of a wage type include −" }, { "code": null, "e": 35009, "s": 34998, "text": "Amount AMT" }, { "code": null, "e": 35018, "s": 35009, "text": "Rate RTE" }, { "code": null, "e": 35029, "s": 35018, "text": "Number NUM" }, { "code": null, "e": 35111, "s": 35029, "text": "As per the processing type, each element can have one, two or all element values." }, { "code": null, "e": 35214, "s": 35111, "text": "For example − The basic pay can have a Rate and a Number, however a bonus pay can only have an amount." }, { "code": null, "e": 35394, "s": 35214, "text": "The payment includes all the payments given to an employee according to the employment contract and any voluntary payment paid. A payment combines the employee gross remuneration." }, { "code": null, "e": 35535, "s": 35394, "text": "This gross remuneration is defined as the calculation of social insurance and tax payments and also for the calculation of net remuneration." }, { "code": null, "e": 35623, "s": 35535, "text": "The payment is defined in terms of the following components in the SAP Payroll system −" }, { "code": null, "e": 35814, "s": 35623, "text": "Basic Pay − This component consists of the fixed wage and other salary elements and is paid to the employee for each payroll period. The details are entered in the Basic Pay (0008) Infotype." }, { "code": null, "e": 36005, "s": 35814, "text": "Basic Pay − This component consists of the fixed wage and other salary elements and is paid to the employee for each payroll period. The details are entered in the Basic Pay (0008) Infotype." }, { "code": null, "e": 36219, "s": 36005, "text": "Recurring Payments and Deductions − Recurring payments and deductions include components like overtime, leave or other components. This information is maintained in Recurring Payment and deduction Infotype (0014)." }, { "code": null, "e": 36433, "s": 36219, "text": "Recurring Payments and Deductions − Recurring payments and deductions include components like overtime, leave or other components. This information is maintained in Recurring Payment and deduction Infotype (0014)." }, { "code": null, "e": 36614, "s": 36433, "text": "Additional Payments − There are many components in the payment section, which are not paid in each payroll period. This information is added to Additional Payments Infotype (0015)." }, { "code": null, "e": 36795, "s": 36614, "text": "Additional Payments − There are many components in the payment section, which are not paid in each payroll period. This information is added to Additional Payments Infotype (0015)." }, { "code": null, "e": 36989, "s": 36795, "text": "Time management is one of the key components in Payroll that is used to calculate the gross salary of the employees. Monetary benefits are determined by work schedule and planned working hours." }, { "code": null, "e": 37146, "s": 36989, "text": "The time management integration with payroll is used to determine wage types like bonuses for overtime, night/odd hour work allowance, work on holiday, etc." }, { "code": null, "e": 37349, "s": 37146, "text": "You can also use Time Data Recording and Administration Component Integration with Time Management component to find out time data information for employees and further to determine the time wage types." 
}, { "code": null, "e": 37505, "s": 37349, "text": "When you use this time evaluation component Integration with Time Management component, this is used to find time wage types determined by Time Evaluation." }, { "code": null, "e": 37667, "s": 37505, "text": "This component is used to ensure that an employee shouldn’t get financial disadvantage, if he/she is working in odd hours or if a shift time is changed for them." }, { "code": null, "e": 37935, "s": 37667, "text": "For example − An employee’s planned working time is changed and he is facing a financial disadvantage, he or she is paid on the basis of the original working time – like an employee gets his shift changed from a night shift with night shift bonuses to an early shift." }, { "code": null, "e": 38083, "s": 37935, "text": "If an employee’s shift time is changed and that employee will be benefited financially, he or she is paid on the basis of the changed working time." }, { "code": null, "e": 38359, "s": 38083, "text": "Consider an employee whose shift changes from Friday to a Sunday with Sunday bonuses. In this case, a shift change compensation will be listed under the remuneration statement. It is also possible to limit the payment of a shift change compensation for a particular category." }, { "code": null, "e": 38448, "s": 38359, "text": "This is used to process the manually calculated wages, bonus or non-standard wage types." }, { "code": null, "e": 38632, "s": 38448, "text": "This component provides information on payroll with time and person related time wage types. Time wage type is used to perform the financial evaluation of work performed on a payroll." }, { "code": null, "e": 38792, "s": 38632, "text": "Wage type is one of the key components in payroll processing. Based on the way they store information, they can be defined as Primary and Secondary Wage types." }, { "code": null, "e": 39018, "s": 38792, "text": "During the payroll run, the primary wage types are provided with the values and secondary wage types are formed at the time of the payroll run. You can check the characteristics of a wage type by going to the following path −" }, { "code": null, "e": 39152, "s": 39018, "text": "SPRO → IMG → Personnel Management → Personnel Administration → Info Type → Wage types → Wage type catalog → Wage type characteristics" }, { "code": null, "e": 39384, "s": 39152, "text": "With the release of 4.6B, processing of averages has been changed. Processing of averages depends on the country and release and with countries like Argentina, Brazil and a few other, a new processing is released with version 4.5B." }, { "code": null, "e": 39612, "s": 39384, "text": "At one time, you can only use any of these two versions, if you are using an old version, you can continue to use the same version and there is no need to move to the new version, but the older version is not under development." 
}, { "code": null, "e": 39701, "s": 39612, "text": "The technical processing of averages can be configured as shown in the following steps −" }, { "code": null, "e": 39810, "s": 39701, "text": "SPRO → IMG → Payroll → Payroll India → Time wage type valuation → Averages → Bases for valuation of Averages" }, { "code": null, "e": 39864, "s": 39810, "text": "You should check the following prerequisites for this −" }, { "code": null, "e": 39913, "s": 39864, "text": "Forming the basis for calculating average values" }, { "code": null, "e": 39958, "s": 39913, "text": "Definition of calculation rules for averages" }, { "code": null, "e": 40004, "s": 39958, "text": "Assignment of calculation rules to wage types" }, { "code": null, "e": 40074, "s": 40004, "text": "To create a new technical processing of averages, click on New Entries" }, { "code": null, "e": 40176, "s": 40074, "text": "In a new window, define the different rules as mentioned above and click on the save icon at the top." }, { "code": null, "e": 40272, "s": 40176, "text": "To maintain incentive wages, different accounting processes are defined in the standard SAP system." }, { "code": null, "e": 40462, "s": 40272, "text": "In this Wage Type, the target time for each ticket is calculated using a piecework rate. This is used to calculate the amount for the time ticket that the employee is due. This amount consists of −" }, { "code": null, "e": 40661, "s": 40462, "text": "Basic Monthly Pay − This defines the gross amount that is paid to the employees irrespective of their performance and it can be paid as a monthly sum or in terms of hourly pay as per their contract." }, { "code": null, "e": 40860, "s": 40661, "text": "Basic Monthly Pay − This defines the gross amount that is paid to the employees irrespective of their performance and it can be paid as a monthly sum or in terms of hourly pay as per their contract." }, { "code": null, "e": 41175, "s": 40860, "text": "Time Dependent Variable Pay − This is used to define a pay scale rate that is different from the master pay scale rate for an employee. It is possible that an employee is remunerated at a higher rate as compared to the master pay rate for specific activities. You have to enter the higher pay scale into the time ticket." }, { "code": null, "e": 41490, "s": 41175, "text": "Time Dependent Variable Pay − This is used to define a pay scale rate that is different from the master pay scale rate for an employee. It is possible that an employee is remunerated at a higher rate as compared to the master pay rate for specific activities. You have to enter the higher pay scale into the time ticket." }, { "code": null, "e": 41700, "s": 41490, "text": "Performance Dependent Variable Pay − This is used to credit an employee who completes the work in less time than the target time. The difference between the target time and the actual time is mentioned on the time ticket." }, { "code": null, "e": 41910, "s": 41700, "text": "Performance Dependent Variable Pay − This is used to credit an employee who completes the work in less time than the target time. The difference between the target time and the actual time is mentioned on the time ticket." }, { "code": null, "e": 42128, "s": 41910, "text": "This is similar to the monthly wage calculation with the only difference that the monthly wage is specified as an hourly wage from the start, so you don’t need to convert the monthly basic wage into an hourly wage."
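}, { "code": null, "e": null, "s": null, "text": "As a purely illustrative sketch of how the three time ticket components described above (basic monthly pay, time-dependent and performance-dependent variable pay) could add up to a time ticket amount, consider the following. The hourly rate, the premium rate and the time figures are assumptions for the example only, and the actual valuation is carried out by the personnel calculation schemas described next." }, { "code": null, "e": null, "s": null, "text": "# Illustrative sketch only - the real valuation is performed by the incentive wage schemas in SAP.\ndef time_ticket_amount(hours_worked, hourly_rate, premium_hours=0.0, premium_rate=0.0, target_time=0.0, actual_time=0.0):\n    basic_pay = hours_worked * hourly_rate                         # basic pay portion, expressed here on an hourly basis\n    time_dependent = premium_hours * (premium_rate - hourly_rate)  # assumed premium for activities paid above the master rate\n    time_saved = max(target_time - actual_time, 0.0)               # credited only when the work is finished early\n    performance_dependent = time_saved * hourly_rate               # assumption: time saved is valued at the hourly rate\n    return basic_pay + time_dependent + performance_dependent\n\nprint(time_ticket_amount(hours_worked=160, hourly_rate=100, premium_hours=20, premium_rate=120, target_time=50, actual_time=42))   # 17200.0"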
}, { "code": null, "e": 42160, "s": 42128, "text": "Personnel calculation schemas −" }, { "code": null, "e": 42239, "s": 42160, "text": "There are two types of schemas for the valuation of time tickets for incentive wages −" }, { "code": null, "e": 42326, "s": 42239, "text": "German Version DIW0 − This contains special features that are specific to Germany only." }, { "code": null, "e": 42413, "s": 42326, "text": "German Version DIW0 − This contains special features that are specific to Germany only." }, { "code": null, "e": 42691, "s": 42413, "text": "International Version XIW00 − You can use schema XIW00 to set up your own incentive wage accounting rules as per different countries. As the valuation of time tickets varies according to different countries and organizations, there are no country-specific accounting schemas in it." }, { "code": null, "e": 42969, "s": 42691, "text": "International Version XIW00 − You can use schema XIW00 to set up your own incentive wage accounting rules as per different countries. As the valuation of time tickets varies according to different countries and organizations, there are no country-specific accounting schemas in it." }, { "code": null, "e": 43117, "s": 42969, "text": "This component is used to check the remuneration when an employee works for a shorter period of time. You can use factoring in the following cases −" }, { "code": null, "e": 43197, "s": 43117, "text": "When an employee leaves, joins or remains absent for a specific period of time." }, { "code": null, "e": 43277, "s": 43197, "text": "When an employee leaves, joins or remains absent for a specific period of time." }, { "code": null, "e": 43387, "s": 43277, "text": "When there is a change in the basic pay, substitution, work reassignment or change in personal work schedule." }, { "code": null, "e": 43497, "s": 43387, "text": "When there is a change in the basic pay, substitution, work reassignment or change in personal work schedule." }, { "code": null, "e": 43650, "s": 43497, "text": "To find the correct remuneration for an employee, the remuneration amount is multiplied by a partial period factor which is based on different methods −" }, { "code": null, "e": 43665, "s": 43650, "text": "Payment method" }, { "code": null, "e": 43682, "s": 43665, "text": "Deduction method" }, { "code": null, "e": 43693, "s": 43682, "text": "PWS method" }, { "code": null, "e": 43707, "s": 43693, "text": "Hybrid method" }, { "code": null, "e": 43888, "s": 43707, "text": "Each Payroll system contains a few factoring rules that are needed to determine the partial period factor. These rules can be customized to meet specific requirements in the company." }, { "code": null, "e": 43961, "s": 43888, "text": "The following Infotypes are calculated for partial period remuneration −" }, { "code": null, "e": 44143, "s": 43961, "text": "This factor is used to calculate the partial remuneration. This is defined as a variable value which is calculated using different formulas as per the company and the circumstances." }, { "code": null, "e": 44305, "s": 44143, "text": "While customizing, partial period factors are defined in personnel calculation rules for specific situations and assigned to wage types for particular periods." }, { "code": null, "e": 44467, "s": 44305, "text": "When you multiply the partial period factor by the fixed remuneration amount, this gives you the partial period remuneration amount to be paid for a specific period."
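}, { "code": null, "e": null, "s": null, "text": "As a minimal illustration of this multiplication, a sketch could look like the following; the day-based factor formula and the figures are assumptions for the example, not the delivered personnel calculation rules, and the worked example that follows uses the same idea." }, { "code": null, "e": null, "s": null, "text": "# Illustrative sketch only - the actual factors come from the personnel calculation rules.\ndef partial_period_amount(fixed_amount, days_worked, working_days_in_period):\n    factor = days_worked / working_days_in_period   # assumed day-based partial period factor\n    return round(fixed_amount * factor, 2)\n\n# e.g. 2 days worked out of 20 working days on a fixed amount of 40,000\nprint(partial_period_amount(40000, 2, 20))   # 4000.0"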
}, { "code": null, "e": 44700, "s": 44467, "text": "For example − Consider an employee who was on an unpaid leave from 3rd February to 29th March. This means that the employee has worked for 2 days in February and 2 days in March, considering 20 workdays in Feb and 23 workdays in Mar." }, { "code": null, "e": 44760, "s": 44700, "text": "Now consider reduction using the following different ways −" }, { "code": null, "e": 44782, "s": 44760, "text": "Partial Period Factor" }, { "code": null, "e": 44801, "s": 44782, "text": "Basic Remuneration" }, { "code": null, "e": 44901, "s": 44801, "text": "Now if you use the payment method, the employee receives the same remuneration for both the months." }, { "code": null, "e": 44990, "s": 44901, "text": "If you use the deduction method, the employee is overpaid in Feb and underpaid in March." }, { "code": null, "e": 45118, "s": 44990, "text": "If the PWS method is used, the employee receives more salary in Feb as compared to March, however the difference is negligible." }, { "code": null, "e": 45256, "s": 45118, "text": "This component is used to determine an employee’s gross and net income and the various components that affect the net income of an employee." }, { "code": null, "e": 45298, "s": 45256, "text": "It consists of the following components −" }, { "code": null, "e": 45323, "s": 45298, "text": "Personnel Administration" }, { "code": null, "e": 45344, "s": 45323, "text": "Payroll South Africa" }, { "code": null, "e": 45362, "s": 45344, "text": "Payroll Australia" }, { "code": null, "e": 45552, "s": 45362, "text": "The following Infotypes should be configured before setting up the salary package for an employee. The following Infotypes are country specific and valid for a few countries only −" }, { "code": null, "e": 45576, "s": 45552, "text": "Actions (Infotype 0000)" }, { "code": null, "e": 45602, "s": 45576, "text": "Addresses (Infotype 0006)" }, { "code": null, "e": 45628, "s": 45602, "text": "Basic Pay (Infotype 0008)" }, { "code": null, "e": 45670, "s": 45628, "text": "Organizational Assignment (Infotype 0001)" }, { "code": null, "e": 45700, "s": 45670, "text": "Personal Data (Infotype 0002)" }, { "code": null, "e": 45729, "s": 45700, "text": "Bank Details (Infotype 0009)" }, { "code": null, "e": 45766, "s": 45729, "text": "Planned Working Time (Infotype 0007)" }, { "code": null, "e": 45826, "s": 45766, "text": "Social Insurance SA (Infotype 0150) (only for South Africa)" }, { "code": null, "e": 45878, "s": 45826, "text": "Superannuation (Infotype 0220) (only for Australia)" }, { "code": null, "e": 45927, "s": 45878, "text": "Taxes SA (Infotype 0149) (only for South Africa)" }, { "code": null, "e": 46056, "s": 45927, "text": "You can find the Salary Packaging at SPRO → IMG → Personnel Management → Personnel Administration → Payroll Data → Salary Packaging" }, { "code": null, "e": 46120, "s": 46056, "text": "You have to define the following components under customizing −" }, { "code": null, "e": 46195, "s": 46120, "text": "Basic Settings − It is used to define compensation area as per guidelines." }, { "code": null, "e": 46270, "s": 46195, "text": "Basic Settings − It is used to define compensation area as per guidelines." }, { "code": null, "e": 46354, "s": 46270, "text": "Salary Components − It includes the elements of an employee’s compensation package." }, { "code": null, "e": 46438, "s": 46354, "text": "Salary Components − It includes the elements of an employee’s compensation package."
}, { "code": null, "e": 46482, "s": 46438, "text": "For example − Basic Salary and Company Car." }, { "code": null, "e": 46585, "s": 46482, "text": "This is used to define the default salary components based on an employee's organizational assignment." }, { "code": null, "e": 46742, "s": 46585, "text": "Using the Eligibility Criteria, you can create checks to determine if an employee will have a specific salary component defaulted into their salary package." }, { "code": null, "e": 46901, "s": 46742, "text": "For example − An employee is eligible for a certain salary component, once they reach a specific pay scale level. You can set eligible criteria for this rule." }, { "code": null, "e": 47039, "s": 46901, "text": "This is used to maintain additional features for salary packaging. Various steps can be defined as per different country specifications −" }, { "code": null, "e": 47071, "s": 47039, "text": "Maintain Company Car Regulation" }, { "code": null, "e": 47110, "s": 47071, "text": "Define Receiver Travel Allowance Rates" }, { "code": null, "e": 47117, "s": 47110, "text": "Result" }, { "code": null, "e": 47136, "s": 47117, "text": "T-code: P16B_ADMIN" }, { "code": null, "e": 47212, "s": 47136, "text": "The following is some general information about the subsequent screenshot −" }, { "code": null, "e": 47308, "s": 47212, "text": "The right side of the screen comprises of components that are currently a part of your package." }, { "code": null, "e": 47404, "s": 47308, "text": "The right side of the screen comprises of components that are currently a part of your package." }, { "code": null, "e": 47501, "s": 47404, "text": "The left side of the screen contains all those additional components for which you are eligible." }, { "code": null, "e": 47598, "s": 47501, "text": "The left side of the screen contains all those additional components for which you are eligible." }, { "code": null, "e": 47659, "s": 47598, "text": "The following tasks should be performed to model a Package −" }, { "code": null, "e": 47831, "s": 47659, "text": "First is to click on the salary component text and choose the arrow to move the component between two boxes. Using this you can add/remove the components from the package." }, { "code": null, "e": 48003, "s": 47831, "text": "First is to click on the salary component text and choose the arrow to move the component between two boxes. Using this you can add/remove the components from the package." }, { "code": null, "e": 48087, "s": 48003, "text": "If you want to change the component details, click on the amount for the component." }, { "code": null, "e": 48171, "s": 48087, "text": "If you want to change the component details, click on the amount for the component." }, { "code": null, "e": 48355, "s": 48171, "text": "Below this you can see the edit section. This section is specific to each component and contains the relevant amount, percentage, and contribution information valid for the component." }, { "code": null, "e": 48539, "s": 48355, "text": "Below this you can see the edit section. This section is specific to each component and contains the relevant amount, percentage, and contribution information valid for the component." }, { "code": null, "e": 48599, "s": 48539, "text": "Click Accept to include your new attributes to the package." }, { "code": null, "e": 48659, "s": 48599, "text": "Click Accept to include your new attributes to the package." 
}, { "code": null, "e": 48722, "s": 48659, "text": "You can click on the Reset button to put the last values used." }, { "code": null, "e": 48785, "s": 48722, "text": "You can click on the Reset button to put the last values used." }, { "code": null, "e": 48865, "s": 48785, "text": "Once you close the modeling screen, you can select from the following options −" }, { "code": null, "e": 48948, "s": 48865, "text": "You can select simulation that will allow you to preview a sample online pay slip." }, { "code": null, "e": 49031, "s": 48948, "text": "You can select simulation that will allow you to preview a sample online pay slip." }, { "code": null, "e": 49097, "s": 49031, "text": "You can select Update that will update the Infotypes accordingly." }, { "code": null, "e": 49163, "s": 49097, "text": "You can select Update that will update the Infotypes accordingly." }, { "code": null, "e": 49544, "s": 49163, "text": "This allowance is a part of the monthly remuneration paid to an employee and varies as per the location and other factors. The value of this component depends on the Consumer Price Index (CPI) for that location and this index varies as per government regulation. When an employee is transferred or moved to a different location, this allowance is also changed as per the location." }, { "code": null, "e": 49694, "s": 49544, "text": "Dearness allowance along with other components like Base salary, Income tax, Gratuity, etc., forms the salary package of an employee for computation." }, { "code": null, "e": 49789, "s": 49694, "text": "You can calculate Dearness allowance in a standard SAP system by using the following methods −" }, { "code": null, "e": 49816, "s": 49789, "text": "CPI slab based calculation" }, { "code": null, "e": 49877, "s": 49816, "text": "You can also define new CPI in SAP system using New Entries." }, { "code": null, "e": 49916, "s": 49877, "text": "Incremental CPI slab based calculation" }, { "code": null, "e": 49945, "s": 49916, "text": "Basic slab based calculation" }, { "code": null, "e": 50000, "s": 49945, "text": "Basic slab based calculation, subject to minimum value" }, { "code": null, "e": 50027, "s": 50000, "text": "Non-slab based calculation" }, { "code": null, "e": 50068, "s": 50027, "text": "Incremental basic slab based calculation" }, { "code": null, "e": 50247, "s": 50068, "text": "Note − For a non-managerial category this allowance is called Dearness allowance however for managerial category employee group it is also called Cost of Living Allowance (COLA)." }, { "code": null, "e": 50396, "s": 50247, "text": "To configure DA in SAP system, go to SPRO → IMG → Payroll → Payroll India → Dearness Allowance → Maintain Basic slab details for Dearness allowance." }, { "code": null, "e": 50547, "s": 50396, "text": "Once you click on this, it shows you the Basic slab details for Dearness allowance, which includes Fixed value, Percentage, CPI % mul. Fac., currency." }, { "code": null, "e": 50725, "s": 50547, "text": "This component is used to maintain information about an employee accommodation. This is used to calculate tax exemptions and to check perquisite applicable on a housing benefit." }, { "code": null, "e": 50936, "s": 50725, "text": "While updating or creating a housing record using the Housing (HRA / CLA / COA) under Infotype (0581), the system dynamically updates the Basic Pay Infotype (0008) with the new or changed wage type for Housing." 
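}, { "code": null, "e": null, "s": null, "text": "For the rented accommodation case described below, the commonly used HRA exemption rule under the Indian Income Tax provisions is the least of three limits. The following sketch only illustrates that rule with assumed figures; it is not the SAP calculation itself, and salary is assumed here to mean basic pay plus dearness allowance." }, { "code": null, "e": null, "s": null, "text": "# Illustrative sketch of the usual HRA exemption rule (least of three limits); not the SAP logic.\ndef hra_exemption(hra_received, rent_paid, salary, metro=True):\n    limit1 = hra_received                                  # actual HRA received\n    limit2 = max(rent_paid - 0.10 * salary, 0)             # rent paid minus 10% of salary\n    limit3 = (0.50 if metro else 0.40) * salary            # 50% of salary for metro cities, 40% otherwise\n    return min(limit1, limit2, limit3)\n\nprint(hra_exemption(hra_received=20000, rent_paid=18000, salary=50000, metro=True))   # 13000.0"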
}, { "code": null, "e": 51223, "s": 50936, "text": "Rented − When an employee uses a Rented Accommodation, he receives a House Rent Allowance (HRA) to meet the expenses incurred by renting a residential accommodation.\nIn this case, the system calculates the tax exemption on the rented accommodation and rented amount paid by an employee." }, { "code": null, "e": 51389, "s": 51223, "text": "Rented − When an employee uses a Rented Accommodation, he receives a House Rent Allowance (HRA) to meet the expenses incurred by renting a residential accommodation." }, { "code": null, "e": 51510, "s": 51389, "text": "In this case, the system calculates the tax exemption on the rented accommodation and rented amount paid by an employee." }, { "code": null, "e": 51872, "s": 51510, "text": "Company Leased Accommodation (CLA) − When an employee uses a Company Leased Accommodation, the company leases an accommodation and provides it as a housing benefit to the employee.\nThe company Leased eligibility depends on the employee Pay Scale Grouping for Allowances. When an employee uses CLA benefit, the system checks the applicable perquisite on the CLA." }, { "code": null, "e": 52053, "s": 51872, "text": "Company Leased Accommodation (CLA) − When an employee uses a Company Leased Accommodation, the company leases an accommodation and provides it as a housing benefit to the employee." }, { "code": null, "e": 52234, "s": 52053, "text": "The company Leased eligibility depends on the employee Pay Scale Grouping for Allowances. When an employee uses CLA benefit, the system checks the applicable perquisite on the CLA." }, { "code": null, "e": 52592, "s": 52234, "text": "Company Owned Accommodation (COA) − When an employee uses COA, in this case company owns the accommodation and provides it as a housing benefit to the employee. Like CLA employee eligibility for COA depends on the employee grouping for pay scale allowance.\nWhen an employee opts for COA benefit, the system will compute the perquisite applicable on the COA." }, { "code": null, "e": 52849, "s": 52592, "text": "Company Owned Accommodation (COA) − When an employee uses COA, in this case company owns the accommodation and provides it as a housing benefit to the employee. Like CLA employee eligibility for COA depends on the employee grouping for pay scale allowance." }, { "code": null, "e": 52950, "s": 52849, "text": "When an employee opts for COA benefit, the system will compute the perquisite applicable on the COA." }, { "code": null, "e": 53212, "s": 52950, "text": "Hotel Accommodation − A company can also provide a hotel accommodation to the employee. Their stay in the hotel depends on a fixed period as per the Government rule and if the stay exceeds the time limit, a perquisite is applicable on the cost of accommodation." }, { "code": null, "e": 53474, "s": 53212, "text": "Hotel Accommodation − A company can also provide a hotel accommodation to the employee. Their stay in the hotel depends on a fixed period as per the Government rule and if the stay exceeds the time limit, a perquisite is applicable on the cost of accommodation." 
}, { "code": null, "e": 53562, "s": 53474, "text": "In a SAP standard system, the following accommodation types are configured by default −" }, { "code": null, "e": 53583, "s": 53562, "text": "Rented Accommodation" }, { "code": null, "e": 53604, "s": 53583, "text": "Company Leased (Old)" }, { "code": null, "e": 53633, "s": 53604, "text": "Perkable Hotel Accommodation" }, { "code": null, "e": 53784, "s": 53633, "text": "It is also possible to create a new accommodation type in the system. Go to SPRO → IMG → Payroll → Payroll India → Housing → Define Accommodation type" }, { "code": null, "e": 53923, "s": 53784, "text": "Under the Accommodation type, you can view the already defined Housing types or can create new entries by clicking the New Entries button." }, { "code": null, "e": 53991, "s": 53923, "text": "In the Tax Code, select the tax code as per the accommodation type." }, { "code": null, "e": 54055, "s": 53991, "text": "Tax code field determines as per different accommodation type −" }, { "code": null, "e": 54194, "s": 54055, "text": "This component is used to process the exemption on conveyance allowance. The details are maintained in Car and Conveyance Infotype (0583)." }, { "code": null, "e": 54421, "s": 54194, "text": "The standard SAP system provides exemption on conveyance allowance given to the employees. The following configuration has to be configured in the system if you want to give conveyance allowance and exemption to the employees." }, { "code": null, "e": 54511, "s": 54421, "text": "Go to SPRO → IMG → Payroll → Payroll India → Car and Conveyance → Define Conveyance Type." }, { "code": null, "e": 54603, "s": 54511, "text": "Different Car schemas can be used in the SAP system for exemption under different sections." }, { "code": null, "e": 54836, "s": 54603, "text": "This defines as the long-term benefits provided to the employees over a fixed period of years. The duration varies from three to five years. In a standard SAP system, long-term benefits can be divided into the following categories −" }, { "code": null, "e": 54969, "s": 54836, "text": "This includes benefits provided to employee for purpose of purchasing movable items like Fridge, TV, Washing machine, computer, etc." }, { "code": null, "e": 55090, "s": 54969, "text": "This includes benefits provided to employee for purpose of purchasing consumer good items like Sofa, chair, Carpet, etc." }, { "code": null, "e": 55165, "s": 55090, "text": "This benefit includes maintenance of their car over a period of time, etc." 
}, { "code": null, "e": 55290, "s": 55165, "text": "This Infotype is used to maintain Long Term Reimbursement claimed by the employees and under one of the following subtypes −" }, { "code": null, "e": 55357, "s": 55290, "text": "Subtype SHFS − For maintaining hard furnishing schemes information" }, { "code": null, "e": 55424, "s": 55357, "text": "Subtype SHFS − For maintaining hard furnishing schemes information" }, { "code": null, "e": 55491, "s": 55424, "text": "Subtype SSFS − For maintaining soft furnishing schemes information" }, { "code": null, "e": 55558, "s": 55491, "text": "Subtype SSFS − For maintaining soft furnishing schemes information" }, { "code": null, "e": 55625, "s": 55558, "text": "Subtype SCAR − For maintaining car maintenance schemes information" }, { "code": null, "e": 55692, "s": 55625, "text": "Subtype SCAR − For maintaining car maintenance schemes information" }, { "code": null, "e": 55856, "s": 55692, "text": "To configure a long term reimbursement, go to SPRO → IMG → Payroll → Payroll India → Long Term Reimbursement → Maintain block of years for long tern reimbursement." }, { "code": null, "e": 55981, "s": 55856, "text": "To avail long-term benefits by an employee, there are different perquisites attached with each benefits that should be met −" }, { "code": null, "e": 56119, "s": 55981, "text": "In this, there is a fixed percentage as the perquisite value applicable on the assets that an employee can avail during a financial year." }, { "code": null, "e": 56254, "s": 56119, "text": "This fixed value is maintained in Calculate Hard Furnishing Perk Value constant (HFPRC) of the table view Payroll Constants (V_T511K)." }, { "code": null, "e": 56499, "s": 56254, "text": "In this, the system calculates perquisite value for the assets that an employee avails in the current financial year and it is based on the perquisite percentage that you maintain in the Long Term Reimbursements Infotype (590) and subtype SSFS." }, { "code": null, "e": 56651, "s": 56499, "text": "Normally, the system doesn’t contain any perquisite value with the Car Maintenance Scheme or any other similar type of scheme you create in the system." }, { "code": null, "e": 56949, "s": 56651, "text": "In a company, an employee is eligible to claim some monetary and non-monetary benefits and these claims vary as per the pay scale grouping and many other factors. An employee needs to submit the claim based on the eligibility to get these benefits. Claims submitted can be of the following types −" }, { "code": null, "e": 57024, "s": 56949, "text": "This includes the claims that are available as per the eligibility amount." }, { "code": null, "e": 57143, "s": 57024, "text": "For example − A conveyance allowance of Rs. 1800 per month or a Medical claim of Rs. 15000 in a given assessment year." }, { "code": null, "e": 57293, "s": 57143, "text": "These claims are commonly raised by an employee for company work. They are normally placed in units like Stationary request, Calculator, Petrol, etc." }, { "code": null, "e": 57530, "s": 57293, "text": "Apart from this, there is one more type of claim known as the slab based claim. A few common types of slab based claims are LTA, car maintenance allowance, etc. These type of claims has an eligibility which is normally more than a year." 
}, { "code": null, "e": 57849, "s": 57530, "text": "For example − Car maintenance allowance – where the validity period starts from the date of purchase of the car and in the first and second year an employee is eligible for a car maintenance allowance of Rs. 3000 and in the third year, claim eligibility is Rs. 5000 and in the fourth year, the eligibility is Rs. 7500." }, { "code": null, "e": 57994, "s": 57849, "text": "To get into the non-monetary claims section, you should use the following Transaction Code: PC00_M40_REMP as shown in the subsequent screenshot." }, { "code": null, "e": 58074, "s": 57994, "text": "Once you run the above transaction, the reimbursement claim screen will appear." }, { "code": null, "e": 58108, "s": 58074, "text": "The claims can be processed via −" }, { "code": null, "e": 58317, "s": 58108, "text": "Regular payroll run − In this reimbursement type, additional payments Infotype 0015 is updated with the information that you enter in this report and claim disbursement is made along with the regular payroll." }, { "code": null, "e": 58526, "s": 58317, "text": "Regular payroll run − In this reimbursement type, additional payments Infotype 0015 is updated with the information that you enter in this report and claim disbursement is made along with the regular payroll." }, { "code": null, "e": 58745, "s": 58526, "text": "Off-cycle payroll run − In this method, One-Time Payments Off-Cycle Infotype 0267 is updated with the information that you enter in this report and approved claims can be disbursed through an off-cycle payment process." }, { "code": null, "e": 58964, "s": 58745, "text": "Off-cycle payroll run − In this method, One-Time Payments Off-Cycle Infotype 0267 is updated with the information that you enter in this report and approved claims can be disbursed through an off-cycle payment process." }, { "code": null, "e": 59111, "s": 58964, "text": "For example − In this disbursement, claims are disbursed on the same day or claims submitted during the week are disbursed on any day of the week." }, { "code": null, "e": 59214, "s": 59111, "text": "This component is used to process the employee bonus and can compute both regular and off-cycle bonus." }, { "code": null, "e": 59280, "s": 59214, "text": "As with claims, there are two types of bonuses that can be paid −" }, { "code": null, "e": 59394, "s": 59280, "text": "Type 1 Additional Payments 0015 − In this, the SAP system updates the Infotype when a regular bonus is processed." }, { "code": null, "e": 59508, "s": 59394, "text": "Type 1 Additional Payments 0015 − In this, the SAP system updates the Infotype when a regular bonus is processed." }, { "code": null, "e": 59658, "s": 59508, "text": "Type 2 Additional Off-Cycle Payments for Off-Cycle Bonus 0267 − In this, 0267 Infotype is updated in the system, when an Off-Cycle bonus is computed." }, { "code": null, "e": 59808, "s": 59658, "text": "Type 2 Additional Off-Cycle Payments for Off-Cycle Bonus 0267 − In this, 0267 Infotype is updated in the system, when an Off-Cycle bonus is computed." }, { "code": null, "e": 59985, "s": 59808, "text": "It is defined as a statutory benefit provided to an employee by his employer for his association with the company. The Gratuity can be configured based on the following rules −" }, { "code": null, "e": 60235, "s": 59985, "text": "Payment of Gratuity Act, 1972 − As per this, a minimum amount that an employer has to contribute for this component is 4.81% of the base salary of the employee. 
As per the company policy where the benefits are better as compared to the Gratuity Act." }, { "code": null, "e": 60485, "s": 60235, "text": "Payment of Gratuity Act, 1972 − As per this, a minimum amount that an employer has to contribute for this component is 4.81% of the base salary of the employee. As per the company policy where the benefits are better as compared to the Gratuity Act." }, { "code": null, "e": 60685, "s": 60485, "text": "Personal IDs 0185 Gratuity for India subtype 03 − This is used to maintain the employee Personnel id number for Gratuity and the name of the trust to which you are contributing for employee gratuity." }, { "code": null, "e": 60885, "s": 60685, "text": "Personal IDs 0185 Gratuity for India subtype 03 − This is used to maintain the employee Personnel id number for Gratuity and the name of the trust to which you are contributing for employee gratuity." }, { "code": null, "e": 61075, "s": 60885, "text": "Gratuity Listing Report (HINCGRY0) to Generate Gratuity List − This report is used to generate a list which shows the employee wise contribution to the trust name on behalf of the employee." }, { "code": null, "e": 61265, "s": 61075, "text": "Gratuity Listing Report (HINCGRY0) to Generate Gratuity List − This report is used to generate a list which shows the employee wise contribution to the trust name on behalf of the employee." }, { "code": null, "e": 61411, "s": 61265, "text": "You can configure Gratuity in the SAP system by following this path. Go to SPRO → IMG → Payroll → Payroll India → Retirement benefits → Gratuity." }, { "code": null, "e": 61522, "s": 61411, "text": "The employee record for Gratuity (Personnel Id’s) is maintained in Infotype and Gratuity for India Subtype 03." }, { "code": null, "e": 61900, "s": 61522, "text": "This is defined as the benefit provided to an employee by the employer for his association with the company. The employer contributes towards Superannuation trust on a monthly or yearly basis to provide this benefit to the employee and it doesn’t include any employee contribution. This component is not presented as part of the monthly pay slip and is not a taxable component." }, { "code": null, "e": 62084, "s": 61900, "text": "Superannuation report (HINCSAN0) for list − This report can be used to generate Superannuation List which provides employer contribution for this component for a specific time period." }, { "code": null, "e": 62268, "s": 62084, "text": "Superannuation report (HINCSAN0) for list − This report can be used to generate Superannuation List which provides employer contribution for this component for a specific time period." }, { "code": null, "e": 62529, "s": 62268, "text": "Superannuation component and configuration − This component consists of the employee record as Personal Id’s Infotype 0185 Superannuation for India Subtype 01. This subtype is used to maintain the trust name and employee identification number for the employee." }, { "code": null, "e": 62790, "s": 62529, "text": "Superannuation component and configuration − This component consists of the employee record as Personal Id’s Infotype 0185 Superannuation for India Subtype 01. This subtype is used to maintain the trust name and employee identification number for the employee." }, { "code": null, "e": 62933, "s": 62790, "text": "To define the trust name where the employer maintains the Superannuation account, you need to define trust id and name of trust in the system." 
}, { "code": null, "e": 63056, "s": 62933, "text": "This can be done by going to SPRO → IMG → Payroll → Payroll India → Retirement Benefits → Maintain Superannuation Trust ID" }, { "code": null, "e": 63240, "s": 63056, "text": "The Superannuation ID field of Personal IDs Infotype (0185) Superannuation for India Subtype (01), displays options as per the Trust IDs that you have configured in this IMG activity." }, { "code": null, "e": 63399, "s": 63240, "text": "To configure the criteria under which you want an employee to be eligible for Superannuation, it can be configured in a SAP system with the following method −" }, { "code": null, "e": 63509, "s": 63399, "text": "SPRO → IMG → Payroll → Payroll India → Retirement Benefits → Maintain Eligibility Details for Superannuation." }, { "code": null, "e": 63794, "s": 63509, "text": "This component of the payroll system deals with the net part of the remuneration paid to an employee after the deductions. There are various deductions applied on the Gross salary like tax, insurance paid, etc. The Net pay is the amount paid to an employee after all these deductions." }, { "code": null, "e": 63924, "s": 63794, "text": "This component is used to compute tax on the income received by an employee. An employee income consists of the following parts −" }, { "code": null, "e": 64096, "s": 63924, "text": "This consists of regular income components like Basic pay, HRA, conveyance allowance. Regular income can be categorized as monthly regular income or annual regular income." }, { "code": null, "e": 64367, "s": 64096, "text": "The system projects the annual regular income using either the Actual Basis or Nominal Basis. The system, by default uses Actual Basis to project annual regular income. You can access this from SPRO → IMG → Payroll → Payroll India → Tax → Maintain Annual Taxable Income." }, { "code": null, "e": 64575, "s": 64367, "text": "Professional tax in a SAP system is defined as the tax calculated on the employee salaries. Professional tax is also defined as the tax applied by the State Government on profession, trades, employment, etc." }, { "code": null, "e": 64801, "s": 64575, "text": "A SAP system calculates the annual professional tax of an employee and deducts it from the salary as per the Section 16(ii) of the Income Tax Act. Professional tax is based on the following salary components for an employee −" }, { "code": null, "e": 64811, "s": 64801, "text": "Basic Pay" }, { "code": null, "e": 64830, "s": 64811, "text": "Dearness Allowance" }, { "code": null, "e": 64852, "s": 64830, "text": "Medical Reimbursement" }, { "code": null, "e": 64858, "s": 64852, "text": "Bonus" }, { "code": null, "e": 64866, "s": 64858, "text": "Housing" }, { "code": null, "e": 64918, "s": 64866, "text": "Other remuneration that employee receives regularly" }, { "code": null, "e": 65081, "s": 64918, "text": "This includes the medical reimbursement amount that is more than the amount, exempted in the Income Tax under the IT Act, as a part of the professional tax basis." }, { "code": null, "e": 65420, "s": 65081, "text": "For all the employees who are availing company leased (CLA) or company owned accommodation (COA), the system calculates the difference in housing allowance and the rent. When an employee gets the amount for the difference in both the components as a part of the regular income, then professional tax is applied on the differential amount." 
}, { "code": null, "e": 65652, "s": 65420, "text": "To display and take prints of professional tax returns, the system generates the professional tax returns that your company needs to submit to the state authorities, while remitting the professional tax deductions of the employees." }, { "code": null, "e": 65928, "s": 65652, "text": "When you generate a professional tax report (HINCPTX0), there must be an Infotype – Other Statutory Deductions Infotype (0588) and PTX (Professional Tax Eligibility) subtype (0003). In this Infotype, you must select the Professional Tax eligibility indicator for an employee." }, { "code": null, "e": 66006, "s": 65928, "text": "And there should be professional tax results for at least one payroll period." }, { "code": null, "e": 66168, "s": 66006, "text": "This component is used to maintain information on the employee Provident Fund. The Provident fund is a benefit provided to the employees and contains two parts −" }, { "code": null, "e": 66387, "s": 66168, "text": "As per the government rule, both employee and employer contributes a fixed percentage of the PF basis towards the Provident Fund. The minimum percentage that each employee needs to contribute is 12% of the base salary." }, { "code": null, "e": 66506, "s": 66387, "text": "An employee can also select some percentage of fixed basis towards PF which is known as Voluntary Provided Fund (VPF)." }, { "code": null, "e": 66639, "s": 66506, "text": "As per the authority rule, an employer has to contribute a fixed percentage of the PF basis towards the Pension Fund of an employee." }, { "code": null, "e": 66774, "s": 66639, "text": "Also note that apart from these contributions, an employer has to contribute to the Employee's Deposit Linked Insurance (EDLI or ESI)." }, { "code": null, "e": 66878, "s": 66774, "text": "In a SAP system, Provident Fund component allows you to maintain and process the following components −" }, { "code": null, "e": 66881, "s": 66878, "text": "PF" }, { "code": null, "e": 66894, "s": 66881, "text": "Pension Fund" }, { "code": null, "e": 66899, "s": 66894, "text": "EDLI" }, { "code": null, "e": 66903, "s": 66899, "text": "VPF" }, { "code": null, "e": 66918, "s": 66903, "text": "Provident Fund" }, { "code": null, "e": 67026, "s": 66918, "text": "By using the employees Provident Fund Reports (HINCEPF0), you can generate the following monthly PF forms −" }, { "code": null, "e": 67152, "s": 67026, "text": "Form 5 − This can be generated for the employees who qualify for the PF, Pension Fund and EDLI membership for the first time." }, { "code": null, "e": 67278, "s": 67152, "text": "Form 5 − This can be generated for the employees who qualify for the PF, Pension Fund and EDLI membership for the first time." }, { "code": null, "e": 67406, "s": 67278, "text": "Form 10 − This can be generated for those employees leaving the service, or leaving the PF trust in the current payroll period." }, { "code": null, "e": 67534, "s": 67406, "text": "Form 10 − This can be generated for those employees leaving the service, or leaving the PF trust in the current payroll period." }, { "code": null, "e": 67678, "s": 67534, "text": "Form 12A − This can be generated for wages paid and recoveries made in the current payroll period, as the Employee and Employer's contribution." }, { "code": null, "e": 67822, "s": 67678, "text": "Form 12A − This can be generated for wages paid and recoveries made in the current payroll period, as the Employee and Employer's contribution." 
}, { "code": null, "e": 67906, "s": 67822, "text": "By using the PF Report (HINCEPF1), you can generate the following annual PF forms −" }, { "code": null, "e": 68024, "s": 67906, "text": "Form 3A − This is used to get the statement on the PF contributions made towards un-exempted establishments annually." }, { "code": null, "e": 68142, "s": 68024, "text": "Form 3A − This is used to get the statement on the PF contributions made towards un-exempted establishments annually." }, { "code": null, "e": 68246, "s": 68142, "text": "Form 6A − This report is used to print the consolidated contribution statement for that financial year." }, { "code": null, "e": 68350, "s": 68246, "text": "Form 6A − This report is used to print the consolidated contribution statement for that financial year." }, { "code": null, "e": 68588, "s": 68350, "text": "If you want to generate the Monthly reports on the Employee PF and Employee Pension Fund contribution, go to SAP Easy access → Human Resources → Payroll → Asia/Pacific → India → Subsequent Activities → Per Payroll Period → Legal Reports." }, { "code": null, "e": 68801, "s": 68588, "text": "To generate the Annual reports on the Employee PF and Employee Pension Fund contribution, go to SAP Easy Access → Human Resources → Payroll → Asia/Pacific → India → Subsequent Activities → Annual → Legal Reports." }, { "code": null, "e": 68970, "s": 68801, "text": "Employee State Insurance is one of the other statutory benefit type that has been provided to employees of a company. ESI contribution includes deduction/contribution −" }, { "code": null, "e": 68995, "s": 68970, "text": "From the employee salary" }, { "code": null, "e": 69018, "s": 68995, "text": "From the employer side" }, { "code": null, "e": 69184, "s": 69018, "text": "In case there are other statutory deductions Infotype 0588 and subtype ESI (0001) record exists for the employee, then an employee is considered as eligible for ESI." }, { "code": null, "e": 69294, "s": 69184, "text": "Note − The ESI Basis for an employee is less than or equal to the amount stored in the ESI Eligibility Limit." }, { "code": null, "e": 69330, "s": 69294, "text": "ESI Contribution and Benefit Period" }, { "code": null, "e": 69466, "s": 69330, "text": "The Employee contribution towards ESI is 1.75% of the ESI Basis. While the Employer contribution towards ESI is 4.75% of the ESI Basis." }, { "code": null, "e": 69713, "s": 69466, "text": "To change the ESI Grouping for an employee, this can be configured in the user exit by following SPRO → IMG → Payroll → Payroll India → Statutory Social Contribution → Employees' State Insurance User Exit: Determine Personnel Subarea Grp for ESI." }, { "code": null, "e": 69903, "s": 69713, "text": "Like Employee State Insurance, LWF is known as the statutory contribution towards welfare of the employees. LWF contribution and frequency of contribution is decided by the state authority." }, { "code": null, "e": 70024, "s": 69903, "text": "The LWF (Labor Welfare Fund) details are maintained under other Statutory Deductions Infotype 0588 and LWF subtype 0002." }, { "code": null, "e": 70264, "s": 70024, "text": "In a SAP system, you can define the eligibility of for Labor Welfare Fund, LWF contribution frequency, LWF computation rates and the Validity date. The LWF data is available in the legal report – Labor Welfare Fund legal reports (HINCLWFI)" }, { "code": null, "e": 70481, "s": 70264, "text": "By using this report, it is possible to generate the LWF form for submission to the authorities. 
You can configure your SAP system to generate LWF statements in the format prescribed by the concerned state authority." }, { "code": null, "e": 70677, "s": 70481, "text": "This component is used to define the minimum wage for an employee for processing the payroll. All the deduction to be considered for the minimum net processing is defined by following this path −" }, { "code": null, "e": 70759, "s": 70677, "text": "Go to SPRO → IMG → Payroll → Payroll India → Deductions → Arrears and Priorities." }, { "code": null, "e": 70848, "s": 70759, "text": "In a SAP system, you can configure the minimum net pay using the following two methods −" }, { "code": null, "e": 70999, "s": 70848, "text": "Using this method, you can maintain the percentage in the Minimum Net Pay under Percentage Constant (MNPPR) of table view Payroll Constants (V_T511K)." }, { "code": null, "e": 71107, "s": 70999, "text": "Note − By default, the system takes a particular wage component as the Total gross amount wage type (/101)." }, { "code": null, "e": 71305, "s": 71107, "text": "You can also define a fixed amount in the minimum Net Pay-Fixed Amount constant (MNPAM) of table view Payroll Constants. Both the methods can be configured in the SAP system by the following path −" }, { "code": null, "e": 71425, "s": 71305, "text": "SPRO → IMG → Payroll → Payroll India → Deductions → Minimum Net Pay → Maintain Value for Determination of Minimum Wage." }, { "code": null, "e": 71583, "s": 71425, "text": "Note − In case you are maintaining both of the above methods, the amount in the Minimum net pay − Fixed Amount constant (MNPAM) is taken as the minimum wage." }, { "code": null, "e": 71806, "s": 71583, "text": "This component is used to calculate all the payments that are made to third parties and are deducted from the employee’s salary. Different types of deductions can be calculated on gross remuneration or on net remuneration." }, { "code": null, "e": 71854, "s": 71806, "text": "This involves social welfare payment and taxes." }, { "code": null, "e": 72075, "s": 71854, "text": "This includes the payment made by an employee to any saving accounts or any voluntary insurance policy that the employer has taken for the employee. You can consider these as one time deductions and recurring deductions." }, { "code": null, "e": 72262, "s": 72075, "text": "One time deductions are those which are paid by an employee once in a Financial Year. Recurring deductions are maintained in the Infotype 0014 and they are paid in a defined periodicity." }, { "code": null, "e": 72582, "s": 72262, "text": "This component is used to manage the details of a loan that is provided by the company to an employee. This can include – house loan, car loan, personal loan, etc. An interest amount is charged which is lower than the normal interest rate in the market and the employee salary is considered as a security for this loan." 
}, { "code": null, "e": 72680, "s": 72582, "text": "In a SAP system, you can select between different loan categories and different repayment types −" }, { "code": null, "e": 72697, "s": 72680, "text": "Installment Loan" }, { "code": null, "e": 72710, "s": 72697, "text": "Annuity Loan" }, { "code": null, "e": 72824, "s": 72710, "text": "The loan data is maintained in Infotype 0045 and you can get the following details while processing the payroll −" }, { "code": null, "e": 72839, "s": 72824, "text": "Loan Repayment" }, { "code": null, "e": 72865, "s": 72839, "text": "Loan Interest Calculation" }, { "code": null, "e": 72889, "s": 72865, "text": "Imputed income taxation" }, { "code": null, "e": 73012, "s": 72889, "text": "Loans Infotype 0045 − as you enter the information on a company loan, it can contain loan approval date, loan amount, etc." }, { "code": null, "e": 73350, "s": 73012, "text": "You maintain loan type’s information in subtypes in the Loans Infotype 0045. There is a sequential number that is assigned to each loan. In a SAP system, you can use the combination of a loan type and a sequential number to uniquely identify every loan and hence this allows you to create multiple loans of the same type for an employee." }, { "code": null, "e": 73449, "s": 73350, "text": "There are different categories of repayment types that can be used and differentiated as follows −" }, { "code": null, "e": 73513, "s": 73449, "text": "Payment is made to the borrower or a repayment to the employer." }, { "code": null, "e": 73577, "s": 73513, "text": "Payment is made to the borrower or a repayment to the employer." }, { "code": null, "e": 73687, "s": 73577, "text": "Payment is made directly by check or a bank transfer is made or is processed during the employee payroll run." }, { "code": null, "e": 73797, "s": 73687, "text": "Payment is made directly by check or a bank transfer is made or is processed during the employee payroll run." }, { "code": null, "e": 73982, "s": 73797, "text": "You can use the payment types that are defined in a SAP system or you can also define under SPRO → IMG → Payroll → Payroll India → Company Loans → Master Data → Customer Payment Types." }, { "code": null, "e": 74167, "s": 73982, "text": "You can use the payment types that are defined in a SAP system or you can also define under SPRO → IMG → Payroll → Payroll India → Company Loans → Master Data → Customer Payment Types." }, { "code": null, "e": 74346, "s": 74167, "text": "This section describes the Loan enhancement customization available in the SAP system for payroll India. You can make the following configuration for the company loans in India −" }, { "code": null, "e": 74472, "s": 74346, "text": "To maintain Loan grouping, go to SPRO → IMG → Payroll → Payroll India → Company Loans → Master data → Maintain Loan Grouping." }, { "code": null, "e": 74657, "s": 74472, "text": "To define different salary components that define the salary for a loan grouping, go to SPRO → IMG → Payroll → Payroll India → Company Loans → Master Data → Maintain Salary Components." }, { "code": null, "e": 74841, "s": 74657, "text": "To specify if a Loan Type is eligible for Section 24 Deduction, go to SPRO → IMG → Payroll → Payroll India → Company Loans → Master Data → Maintain Deduction Details Under Section 24." }, { "code": null, "e": 74943, "s": 74841, "text": "Similarly, you can create various customizations under Payroll India for processing Loan enhancement." 
}, { "code": null, "e": 75275, "s": 74943, "text": "This component is used to process the voluntary salary deduction for employees and is applicable for one or multiple days. This component calculates the employee contribution for the same amount at it was paid by the employer. This voluntary deduction normally involves payment to charitable trust, prime minister Relief fund, etc." }, { "code": null, "e": 75346, "s": 75275, "text": "In the SAP system, this component is maintained in the table V_T7INO1." }, { "code": null, "e": 75465, "s": 75346, "text": "Go to SPRO → IMG → Payroll → Payroll India → One day Salary deduction → Maintain Details for one day Salary deduction." }, { "code": null, "e": 75606, "s": 75465, "text": "For example − Consider an employee’s details in a table view One-day salary deduction (V_T7INO1) for Pay Scale Grouping for Allowances MN01." }, { "code": null, "e": 75618, "s": 75606, "text": "Year - 2010" }, { "code": null, "e": 75630, "s": 75618, "text": "Period - 01" }, { "code": null, "e": 75682, "s": 75630, "text": "Calculation Indicator - Actual Salary/Calendar Days" }, { "code": null, "e": 75740, "s": 75682, "text": "Employer Contribution – As the employer also contributes." }, { "code": null, "e": 75965, "s": 75740, "text": "You run the payroll for an employee, who belongs to a Pay Scale Grouping for Allowances MN01, in June 2010. Let the Actual Salary of the employee for June 2010 be Rs. 6000 and the calendar days KSOLL for the month June = 30." }, { "code": null, "e": 76161, "s": 75965, "text": "One-day salary deduction payroll function (INDSD) reads the table view one-day salary deduction (V_T7INO1) for the Pay Scale Grouping for Allowances MN01, and generates the following wage types −" }, { "code": null, "e": 76243, "s": 76161, "text": "For employee, one-day salary deduction wage type (/3OE) = Rs (6000/30) * 2 → 400." }, { "code": null, "e": 76325, "s": 76243, "text": "For employee, one-day salary deduction wage type (/3OE) = Rs (6000/30) * 2 → 400." }, { "code": null, "e": 76413, "s": 76325, "text": "For employer, one-day salary contribution wage type (/3OF), which is also equal to 400." }, { "code": null, "e": 76501, "s": 76413, "text": "For employer, one-day salary contribution wage type (/3OF), which is also equal to 400." }, { "code": null, "e": 76614, "s": 76501, "text": "This includes the activities that should be carried out after processing of gross and net payroll for employees." }, { "code": null, "e": 76720, "s": 76614, "text": "This is used to post the personnel expenses within a company to financial accounting and Cost Accounting." }, { "code": null, "e": 76994, "s": 76720, "text": "This includes payables to the employees who are posted against the Wages and Salaries Payable account. It also includes payables to the recipients as the deductions received from the employee are posted in the additional payables account and this varies as per the country." }, { "code": null, "e": 77057, "s": 76994, "text": "Subsequent activities are performed for this, which includes −" }, { "code": null, "e": 77108, "s": 77057, "text": "Payables against employees are settled by payment." }, { "code": null, "e": 77159, "s": 77108, "text": "Payables against employees are settled by payment." }, { "code": null, "e": 77236, "s": 77159, "text": "Receivables against third party like tax, insurance are settled by payments." }, { "code": null, "e": 77313, "s": 77236, "text": "Receivables against third party like tax, insurance are settled by payments." 
}, { "code": null, "e": 77371, "s": 77313, "text": "For each transaction, the following steps are performed −" }, { "code": null, "e": 77411, "s": 77371, "text": "Step 1 − Amounts payable are calculated" }, { "code": null, "e": 77451, "s": 77411, "text": "Step 1 − Amounts payable are calculated" }, { "code": null, "e": 77488, "s": 77451, "text": "Step 2 − Amounts calculated are paid" }, { "code": null, "e": 77525, "s": 77488, "text": "Step 2 − Amounts calculated are paid" }, { "code": null, "e": 77589, "s": 77525, "text": "Step 3 − A payable account to bank clearing account is created." }, { "code": null, "e": 77653, "s": 77589, "text": "Step 3 − A payable account to bank clearing account is created." }, { "code": null, "e": 77785, "s": 77653, "text": "You can perform Step 2 and Step 3 either automatically or manually and it varies according to the country and the transaction type." }, { "code": null, "e": 78064, "s": 77785, "text": "To create or edit the salary/remuneration statement, you can use HR Forms Workplace. This allows you to create a new salary statement with the Forms Workplace and also provides you multi-functional graphical options for structuring the layout of the form and then print program." }, { "code": null, "e": 78150, "s": 78064, "text": "A form can be printed from the HR Forms Workplace or by using a SAP Easy access menu." }, { "code": null, "e": 78354, "s": 78150, "text": "This is used for the evaluation of payroll results and you can generate reports and statistics using this component. You have the following options available in the SAP system to perform the evaluation −" }, { "code": null, "e": 78494, "s": 78354, "text": "InfoSet Query − To check the InfoSet query, follow the below path. Go to Human Resources → Information System → Reporting Tool → SAP Query." }, { "code": null, "e": 78549, "s": 78494, "text": "To create a new infoset query, click on Infoset query." }, { "code": null, "e": 78625, "s": 78549, "text": "You can also evaluate payroll results using the following standard reports." }, { "code": null, "e": 78648, "s": 78625, "text": "Remuneration statement" }, { "code": null, "e": 78664, "s": 78648, "text": "Payroll journal" }, { "code": null, "e": 78680, "s": 78664, "text": "Payroll account" }, { "code": null, "e": 78699, "s": 78680, "text": "Wage type reporter" }, { "code": null, "e": 78776, "s": 78699, "text": "In this chapter, we will discuss about the reporting pattern in SAP Payroll." }, { "code": null, "e": 78933, "s": 78776, "text": "This is used to perform the increment update on the base pay wage type in Infotype 0008. This can be maintained under the Human Resource in SAP Easy access." }, { "code": null, "e": 79044, "s": 78933, "text": "In SAP Easy access → Human Resource → Payroll → Asia/Pacific → India → Utilities → Basic → General increments." }, { "code": null, "e": 79183, "s": 79044, "text": "Enter the Personnel number and Pay Scale Grouping for Allowances of the employees to whom you want to give increments in the Basic Salary." }, { "code": null, "e": 79234, "s": 79183, "text": "Enter the Personnel number and pay scale grouping." }, { "code": null, "e": 79345, "s": 79234, "text": "Enter the date from which the increment has to be effective and the name of that batch session. Click Execute." }, { "code": null, "e": 79457, "s": 79345, "text": "The list of employees eligible for the increment appears. 
You have the following options on the output screen −" }, { "code": null, "e": 79651, "s": 79457, "text": "You can select this option to process the increment for all the eligible employees. The system creates a batch session. You can execute this batch session to update the Basic Pay Infotype 0008." }, { "code": null, "e": 79710, "s": 79651, "text": "You can select this option to display the ambiguous cases." }, { "code": null, "e": 79880, "s": 79710, "text": "For example − All the employees for whom the Effective Date that you have entered on the selection screen does not fall in the last split of the Basic Pay Infotype 0008." }, { "code": null, "e": 79943, "s": 79880, "text": "This option is used to select and display all the error cases." }, { "code": null, "e": 80096, "s": 79943, "text": "For example − All the employees for whom the Pay Scale Grouping for Allowances is not the same as the one that you have entered on the selection screen." }, { "code": null, "e": 80240, "s": 80096, "text": "This is used to perform the batch update of a base salary wage type in Infotype 0008 because of the increment posted on the employee promotion." }, { "code": null, "e": 80343, "s": 80240, "text": "In SAP Easy access → Human Resource → Payroll → Asia/Pacific → India → Utilities → Basic → Promotions." }, { "code": null, "e": 80358, "s": 80343, "text": "Then you can −" }, { "code": null, "e": 80397, "s": 80358, "text": "Enter the employee selection criteria." }, { "code": null, "e": 80436, "s": 80397, "text": "Enter the employee selection criteria." }, { "code": null, "e": 80536, "s": 80436, "text": "Enter the Pay Scale Grouping for Allowances of the employees for whom you want an increment update." }, { "code": null, "e": 80636, "s": 80536, "text": "Enter the Pay Scale Grouping for Allowances of the employees for whom you want an increment update." }, { "code": null, "e": 80697, "s": 80636, "text": "Enter the date from which the increment has to be effective." }, { "code": null, "e": 80758, "s": 80697, "text": "Enter the date from which the increment has to be effective." }, { "code": null, "e": 80818, "s": 80758, "text": "Enter the name of the batch session and execute the report." }, { "code": null, "e": 80878, "s": 80818, "text": "Enter the name of the batch session and execute the report." }, { "code": null, "e": 80946, "s": 80878, "text": "This will display the list of employees eligible for the promotion." }, { "code": null, "e": 81005, "s": 80946, "text": "You will have the following options on the output screen −" }, { "code": null, "e": 81209, "s": 81005, "text": "This option allows you to update the increment for all the eligible employees and a batch session is created. This batch can be executed to update Basic Pay Infotype 0008 with the Basic Salary increment." }, { "code": null, "e": 81272, "s": 81209, "text": "This option is used to display cases where there is ambiguity." }, { "code": null, "e": 81426, "s": 81272, "text": "For example − All employees for whom you have entered the increment effective date, which does not fall in the last split of the Basic Pay Infotype 0008." }, { "code": null, "e": 81478, "s": 81426, "text": "This option is used to display all the error cases." }, { "code": null, "e": 81619, "s": 81478, "text": "For example − When all the employees with a Pay Scale Grouping for Allowances is not the same as you have entered in the selection criteria." 
}, { "code": null, "e": 81742, "s": 81619, "text": "Using this component, you can print the following sections of Form 16 and Form 16AA for an employee, in a Financial Year −" }, { "code": null, "e": 81793, "s": 81742, "text": "Salary Paid and any Other Income and Tax Deducted." }, { "code": null, "e": 81844, "s": 81793, "text": "Salary Paid and any Other Income and Tax Deducted." }, { "code": null, "e": 81936, "s": 81844, "text": "It displays the income, deductions and tax details of the employee for that financial year." }, { "code": null, "e": 82028, "s": 81936, "text": "It displays the income, deductions and tax details of the employee for that financial year." }, { "code": null, "e": 82107, "s": 82028, "text": "Details of the Tax Deducted and Deposited into the Central Government Account." }, { "code": null, "e": 82186, "s": 82107, "text": "Details of the Tax Deducted and Deposited into the Central Government Account." }, { "code": null, "e": 82240, "s": 82186, "text": "This section also includes the following components −" }, { "code": null, "e": 82269, "s": 82240, "text": "Tax Deducted at Source (TDS)" }, { "code": null, "e": 82293, "s": 82269, "text": "BSR Code of Bank Branch" }, { "code": null, "e": 82313, "s": 82293, "text": "Total Tax Deposited" }, { "code": null, "e": 82346, "s": 82313, "text": "Cheque or DD No. (If applicable)" }, { "code": null, "e": 82356, "s": 82346, "text": "Surcharge" }, { "code": null, "e": 82384, "s": 82356, "text": "Date on Which Tax Deposited" }, { "code": null, "e": 82431, "s": 82384, "text": "Transfer Voucher/Challan Identification Number" }, { "code": null, "e": 82446, "s": 82431, "text": "Education Cess" }, { "code": null, "e": 82551, "s": 82446, "text": "Using this component, you can print Form 24 and Form 24Q for the employees with the following sections −" }, { "code": null, "e": 82622, "s": 82551, "text": "Details of the Salary Paid and Tax Deducted thereon from the Employee." }, { "code": null, "e": 82693, "s": 82622, "text": "Details of the Salary Paid and Tax Deducted thereon from the Employee." }, { "code": null, "e": 82814, "s": 82693, "text": "This is used to display the income, deductions and tax details in a particular financial year for the selected employee." }, { "code": null, "e": 82925, "s": 82814, "text": "This form is defined as the e-filing of Form 24 and needs to be submitted to the IT office in a physical form." }, { "code": null, "e": 83048, "s": 82925, "text": "The employee has to submit an e-copy of Form 24 to the IT department before the 31st May for the preceding financial year." }, { "code": null, "e": 83150, "s": 83048, "text": "For example − The Form 24 has to be submitted before May 31, 2016 for the financial year 2015 − 2016." }, { "code": null, "e": 83273, "s": 83150, "text": "This component creates a batch program which runs and updates the DA wage type in Basic Pay Infotype 0008 for an employee." }, { "code": null, "e": 83426, "s": 83273, "text": "To access this report, go to SAP Easy access → Human Resources → Payroll → Asia Pacific → India → Utilities → Dearness Allowance → Batch Program for DA." }, { "code": null, "e": 83554, "s": 83426, "text": "Enter the relevant selection criteria like Personnel number and date. Specify a name for the batch session against Batch group." }, { "code": null, "e": 83603, "s": 83554, "text": "To execute the report, choose Program → Execute." }, { "code": null, "e": 83727, "s": 83603, "text": "This will open the Correct Cases screen to review the following types of information. 
This screen provides information on −" }, { "code": null, "e": 83741, "s": 83727, "text": "Correct cases" }, { "code": null, "e": 83757, "s": 83741, "text": "Ambiguous cases" }, { "code": null, "e": 83769, "s": 83757, "text": "Error cases" }, { "code": null, "e": 83826, "s": 83769, "text": "To view an information type, select the required option." }, { "code": null, "e": 83897, "s": 83826, "text": "For example − To view the correct cases, select Display correct cases." }, { "code": null, "e": 84061, "s": 83897, "text": "Next is to select the employee records for which you want to generate the batch session. To generate the batch session, choose User Interface → Create batch input." }, { "code": null, "e": 84238, "s": 84061, "text": "These components are used to check the Actual Contributions for Tax Exemption indicator of the Section 80 and 80C Deductions Infotype 0585 records of all or selected employees." }, { "code": null, "e": 84316, "s": 84238, "text": "You can execute this report for a range of employees and it can be based on −" }, { "code": null, "e": 84333, "s": 84316, "text": "Payroll Area and" }, { "code": null, "e": 84360, "s": 84333, "text": "Range of Personnel numbers" }, { "code": null, "e": 84687, "s": 84360, "text": "You have the option of selecting or not selecting the Consider Actual Contributions for Tax Exemption indicator of the Section 80 and 80C Deductions Infotype 0585 records. A session is created when this report is run and this session should be executed from T-code SM35 for the updation of Section 80 Deductions Infotype 0585." }, { "code": null, "e": 84831, "s": 84687, "text": "To access the report, go to SAP Easy access → Human Resources → Payroll → Asia Pacific → India → Utilities → Section 80 → Batch Program for 80." }, { "code": null, "e": 85161, "s": 84831, "text": "Enter the relevant selection criteria. If you want the actual Section 80 contributions of the selected employees to be considered during the payroll run, select the Consider Actual Contributions indicator. Enter the Session name. To keep a record of that session after execution, you can select the Keep session indicator option." }, { "code": null, "e": 85313, "s": 85161, "text": "You can also enter the Lock date. Use T-code SM35 for the updation of the Infotype records only after this date. To execute this, click Execute option." }, { "code": null, "e": 85445, "s": 85313, "text": "Now you can run T-code SM35 and select the session you want to run. You can run the session in the foreground or in the background." }, { "code": null, "e": 85563, "s": 85445, "text": "This component is used to check the status of the claims made by the employees. Using this component, you can check −" }, { "code": null, "e": 85620, "s": 85563, "text": "Different reimbursement types claimed by your employees." }, { "code": null, "e": 85677, "s": 85620, "text": "Different reimbursement types claimed by your employees." }, { "code": null, "e": 85714, "s": 85677, "text": "Reimbursement Types validity period." }, { "code": null, "e": 85751, "s": 85714, "text": "Reimbursement Types validity period." }, { "code": null, "e": 85798, "s": 85751, "text": "Balances carry forward from the previous year." }, { "code": null, "e": 85845, "s": 85798, "text": "Balances carry forward from the previous year." }, { "code": null, "e": 85968, "s": 85845, "text": "Details of claim amounts that have been already disbursed and the pending amount to be disbursed along with a payroll run." 
}, { "code": null, "e": 86091, "s": 85968, "text": "Details of claim amounts that have been already disbursed and the pending amount to be disbursed along with a payroll run." }, { "code": null, "e": 86233, "s": 86091, "text": "To check the eligibility, go to SPRO → IMG → Payroll → Payroll India → Reimbursements, Allowances and Perks → Calculate Eligibility for RAPs." }, { "code": null, "e": 86346, "s": 86233, "text": "It shows the different claims made by your employees according to the Effective Date and the Reimbursement Type." }, { "code": null, "e": 86515, "s": 86346, "text": "To access the claim report, go to SAP Easy access → Human Resources → Payroll → Asia/Pacific → India → Utilities → Reimbursements, Allowances and Perks → Claims Status." }, { "code": null, "e": 86590, "s": 86515, "text": "Enter the relevant selection criteria and to execute the report → Execute." }, { "code": null, "e": 86706, "s": 86590, "text": "This component is used to generate Gratuity List for a selected employee range, within a specified gratuity period." }, { "code": null, "e": 86907, "s": 86706, "text": "An important perquisite to create a report is to maintain the Personal IDs Infotype 0185 Gratuity for India subtype 03. You have processed the payroll for the required period and have payroll results." }, { "code": null, "e": 86966, "s": 86907, "text": "Following is the information displayed using this report −" }, { "code": null, "e": 86987, "s": 86966, "text": "Name of the employee" }, { "code": null, "e": 87016, "s": 86987, "text": "Gross salary of the employee" }, { "code": null, "e": 87074, "s": 87016, "text": "Contribution towards employee gratuity from employer side" }, { "code": null, "e": 87258, "s": 87074, "text": "To access this report, go to SAP Easy Access → Human Resources → Payroll → Asia Pacific → India → Subsequent Activities → Per Payroll Period → Reporting → Gratuity → Gratuity Listing." }, { "code": null, "e": 87373, "s": 87258, "text": "Enter the relevant selection criteria and mention the Gratuity Trust ID for which you want to generate the report." }, { "code": null, "e": 87492, "s": 87373, "text": "To get the result in a customized format, select the Customer Layout option and enter the name of the Customer Layout." }, { "code": null, "e": 87645, "s": 87492, "text": "A Roster is used to allow the reservation to the employees based on a specific criterion. The key parameters to be considered for reservation includes −" }, { "code": null, "e": 87651, "s": 87645, "text": "Caste" }, { "code": null, "e": 87667, "s": 87651, "text": "Special benefit" }, { "code": null, "e": 87727, "s": 87667, "text": "By using Roster, you can perform the following activities −" }, { "code": null, "e": 87826, "s": 87727, "text": "It helps in maintaining hiring, promotion and transfer of employees as per the reservation policy." }, { "code": null, "e": 87925, "s": 87826, "text": "It helps in maintaining hiring, promotion and transfer of employees as per the reservation policy." }, { "code": null, "e": 87986, "s": 87925, "text": "It helps in maintaining staffing details for the government." }, { "code": null, "e": 88047, "s": 87986, "text": "It helps in maintaining staffing details for the government." }, { "code": null, "e": 88199, "s": 88047, "text": "To define the reservation type, go to SPRO → IMG → Payroll → Payroll India → India Public Sector → Rosters → Basic Settings → Define Reservation Types." 
}, { "code": null, "e": 88433, "s": 88199, "text": "You have to define the Roster group, recruitment and promotion type, map reservation category to ethnic or challenge group. Next is to map the action types to standard action and time independent roster attribute of the model roster." }, { "code": null, "e": 88515, "s": 88433, "text": "To provide reservation to employees, the following types of objects can be used −" }, { "code": null, "e": 88585, "s": 88515, "text": "Model Roster − This is defined as a template used to create a Roster." }, { "code": null, "e": 88655, "s": 88585, "text": "Model Roster − This is defined as a template used to create a Roster." }, { "code": null, "e": 88743, "s": 88655, "text": "Roster − This is defined as an object that has a fixed number of points assigned to it." }, { "code": null, "e": 88831, "s": 88743, "text": "Roster − This is defined as an object that has a fixed number of points assigned to it." }, { "code": null, "e": 89007, "s": 88831, "text": "Roster Point − These are the objects to which the employees are assigned and they are identified by an ID. You can assign one employee ID to a Roster ID for a specific period." }, { "code": null, "e": 89183, "s": 89007, "text": "Roster Point − These are the objects to which the employees are assigned and they are identified by an ID. You can assign one employee ID to a Roster ID for a specific period." }, { "code": null, "e": 89229, "s": 89183, "text": "A Roster point has the following attributes −" }, { "code": null, "e": 89245, "s": 89229, "text": "Sequence number" }, { "code": null, "e": 89266, "s": 89245, "text": "Reservation category" }, { "code": null, "e": 89291, "s": 89266, "text": "De-reservation indicator" }, { "code": null, "e": 89310, "s": 89291, "text": "Obsolete indicator" }, { "code": null, "e": 89317, "s": 89310, "text": "Remark" }, { "code": null, "e": 89332, "s": 89317, "text": "Reference date" }, { "code": null, "e": 89365, "s": 89332, "text": "\n 25 Lectures \n 6 hours \n" }, { "code": null, "e": 89379, "s": 89365, "text": " Sanjo Thomas" }, { "code": null, "e": 89412, "s": 89379, "text": "\n 26 Lectures \n 2 hours \n" }, { "code": null, "e": 89424, "s": 89412, "text": " Neha Gupta" }, { "code": null, "e": 89459, "s": 89424, "text": "\n 30 Lectures \n 2.5 hours \n" }, { "code": null, "e": 89474, "s": 89459, "text": " Sumit Agarwal" }, { "code": null, "e": 89507, "s": 89474, "text": "\n 30 Lectures \n 4 hours \n" }, { "code": null, "e": 89522, "s": 89507, "text": " Sumit Agarwal" }, { "code": null, "e": 89557, "s": 89522, "text": "\n 14 Lectures \n 1.5 hours \n" }, { "code": null, "e": 89569, "s": 89557, "text": " Neha Malik" }, { "code": null, "e": 89604, "s": 89569, "text": "\n 13 Lectures \n 1.5 hours \n" }, { "code": null, "e": 89616, "s": 89604, "text": " Neha Malik" }, { "code": null, "e": 89623, "s": 89616, "text": " Print" }, { "code": null, "e": 89634, "s": 89623, "text": " Add Notes" } ]
Program to implement the fractional knapsack problem in Python
Suppose we have two lists, weights and values, of the same length, and another value capacity. Here weights[i] and values[i] represent the weight and value of the ith element. If we can carry at most capacity weight in total, and we can take a fraction of an item's weight with proportionate value, we have to find the maximum amount of value we can get (rounded down to the nearest integer). So, if the input is like weights = [6, 7, 3], values = [110, 120, 2], capacity = 10, then the output will be 178. To solve this, we will follow these steps −
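To make the expected result concrete, the greedy choice can be traced by hand for this input. The short Python sketch below only mirrors the ratio-ordering steps listed in this article; it introduces no new logic, and the variable names are purely illustrative.

# Hand-trace of the greedy choice for weights [6, 7, 3], values [110, 120, 2], capacity 10.
# Value-per-weight ratios are 110/6 ~ 18.3, 120/7 ~ 17.1 and 2/3 ~ 0.7, so items are taken in that order.
total = 110                    # item (6, 110) fits entirely, leaving capacity 10 - 6 = 4
total += int(120 / (7 / 4))    # item (7, 120) only partly fits: value for 4 of its 7 weight units = 68
print(total)                   # 178, matching the expected output above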
[ { "code": null, "e": 1442, "s": 1062, "text": "Suppose we have two lists, weights and values of same length and another value capacity. The weights[i] and values[i] represent the weight and value of ith element. So if we can take at most capacity weights, and that we can take a fraction of an item's weight with proportionate value, we have to find the maximum amount of value we can get (rounded down to the nearest integer)" }, { "code": null, "e": 1554, "s": 1442, "text": "So, if the input is like weights = [6, 7, 3] values = [110, 120, 2] capacity = 10, then the output will be 178." }, { "code": null, "e": 1598, "s": 1554, "text": "To solve this, we will follow these steps −" }, { "code": null, "e": 1972, "s": 1598, "text": "res := 0make a list of pairs P with weights and values, and sort them based on values per weightfor each pair in P, docif capacity is 0, thencome out from the loopif pair[0] > capacity, thenres := res + quotient of (pair[1] /(pair[0] / capacity)capacity := 0otherwise when pair[0] <= capacity, thenres := res + pair[1]capacity := capacity - pair[0]return floor value of res" }, { "code": null, "e": 1981, "s": 1972, "text": "res := 0" }, { "code": null, "e": 2070, "s": 1981, "text": "make a list of pairs P with weights and values, and sort them based on values per weight" }, { "code": null, "e": 2159, "s": 2070, "text": "make a list of pairs P with weights and values, and sort them based on values per weight" }, { "code": null, "e": 2412, "s": 2159, "text": "for each pair in P, docif capacity is 0, thencome out from the loopif pair[0] > capacity, thenres := res + quotient of (pair[1] /(pair[0] / capacity)capacity := 0otherwise when pair[0] <= capacity, thenres := res + pair[1]capacity := capacity - pair[0]" }, { "code": null, "e": 2435, "s": 2412, "text": "for each pair in P, do" }, { "code": null, "e": 2481, "s": 2435, "text": "cif capacity is 0, thencome out from the loop" }, { "code": null, "e": 2504, "s": 2481, "text": "come out from the loop" }, { "code": null, "e": 2527, "s": 2504, "text": "come out from the loop" }, { "code": null, "e": 2623, "s": 2527, "text": "if pair[0] > capacity, thenres := res + quotient of (pair[1] /(pair[0] / capacity)capacity := 0" }, { "code": null, "e": 2651, "s": 2623, "text": "if pair[0] > capacity, then" }, { "code": null, "e": 2707, "s": 2651, "text": "res := res + quotient of (pair[1] /(pair[0] / capacity)" }, { "code": null, "e": 2763, "s": 2707, "text": "res := res + quotient of (pair[1] /(pair[0] / capacity)" }, { "code": null, "e": 2777, "s": 2763, "text": "capacity := 0" }, { "code": null, "e": 2791, "s": 2777, "text": "capacity := 0" }, { "code": null, "e": 2882, "s": 2791, "text": "otherwise when pair[0] <= capacity, thenres := res + pair[1]capacity := capacity - pair[0]" }, { "code": null, "e": 2923, "s": 2882, "text": "otherwise when pair[0] <= capacity, then" }, { "code": null, "e": 2944, "s": 2923, "text": "res := res + pair[1]" }, { "code": null, "e": 2965, "s": 2944, "text": "res := res + pair[1]" }, { "code": null, "e": 2996, "s": 2965, "text": "capacity := capacity - pair[0]" }, { "code": null, "e": 3027, "s": 2996, "text": "capacity := capacity - pair[0]" }, { "code": null, "e": 3053, "s": 3027, "text": "return floor value of res" }, { "code": null, "e": 3079, "s": 3053, "text": "return floor value of res" }, { "code": null, "e": 3149, "s": 3079, "text": "Let us see the following implementation to get better understanding −" }, { "code": null, "e": 3160, "s": 3149, "text": " Live Demo" }, { "code": null, "e": 3707, 
"s": 3160, "text": "class Solution:\n def solve(self, weights, values, capacity):\n res = 0\n for pair in sorted(zip(weights, values), key=lambda x: - x[1]/x[0]):\n if not bool(capacity):\n break\n if pair[0] > capacity:\n res += int(pair[1] / (pair[0] / capacity))\n capacity = 0\n elif pair[0] <= capacity:\n res += pair[1]\n capacity -= pair[0]\n return int(res)\n\nob = Solution()\nweights = [6, 7, 3]\nvalues = [110, 120, 2]\ncapacity = 10\nprint(ob.solve(weights, values, capacity))" }, { "code": null, "e": 3734, "s": 3707, "text": "[6, 7, 3],[110, 120, 2],10" }, { "code": null, "e": 3738, "s": 3734, "text": "230" } ]
Multithreading in C++
Multithreading is a specialized form of multitasking, and multitasking is the feature that allows your computer to run two or more programs concurrently. In general, there are two types of multitasking: process-based and thread-based.

Process-based multitasking handles the concurrent execution of programs. Thread-based multitasking deals with the concurrent execution of pieces of the same program.

A multithreaded program contains two or more parts that can run concurrently. Each part of such a program is called a thread, and each thread defines a separate path of execution.

C++ does not contain any built-in support for multithreaded applications. Instead, it relies entirely upon the operating system to provide this feature.

This tutorial assumes that you are working on Linux OS and we are going to write a multi-threaded C++ program using POSIX. POSIX Threads, or Pthreads, provides APIs that are available on many Unix-like POSIX systems such as FreeBSD, NetBSD, GNU/Linux, Mac OS X and Solaris.

The following routine is used to create a POSIX thread −

#include <pthread.h>
pthread_create (thread, attr, start_routine, arg)

Here, pthread_create creates a new thread and makes it executable. This routine can be called any number of times from anywhere within your code. Its parameters are the thread object to be filled in, a thread-attributes object (or NULL for the defaults), the routine the new thread starts executing, and the single argument passed to that routine.

The maximum number of threads that may be created by a process is implementation dependent. Once created, threads are peers, and may create other threads. There is no implied hierarchy or dependency between threads.

The following routine is used to terminate a POSIX thread −

#include <pthread.h>
pthread_exit (status)

Here, pthread_exit is used to explicitly exit a thread. Typically, the pthread_exit() routine is called after a thread has completed its work and is no longer required to exist.

If main() finishes before the threads it has created, and exits with pthread_exit(), the other threads will continue to execute. Otherwise, they will be automatically terminated when main() finishes.

#include <iostream>
#include <cstdlib>
#include <pthread.h>
using namespace std;
#define NUM_THREADS 5
void *PrintHello(void *threadid) {
   long tid;
   tid = (long)threadid;
   cout << "Hello World! Thread ID, " << tid << endl;
   pthread_exit(NULL);
}
int main () {
   pthread_t threads[NUM_THREADS];
   int rc;
   int i;
   for( i = 0; i < NUM_THREADS; i++ ) {
      cout << "main() : creating thread, " << i << endl;
      rc = pthread_create(&threads[i], NULL, PrintHello, (void *)i);
      if (rc) {
         cout << "Error:unable to create thread," << rc << endl;
         exit(-1);
      }
   }
   pthread_exit(NULL);
}

$gcc test.cpp -lpthread
$./a.out
main() : creating thread, 0
main() : creating thread, 1
main() : creating thread, 2
main() : creating thread, 3
main() : creating thread, 4
Hello World! Thread ID, 0
Hello World! Thread ID, 1
Hello World! Thread ID, 2
Hello World! Thread ID, 3
Hello World! Thread ID, 4
[ { "code": null, "e": 1298, "s": 1062, "text": "Multithreading is a specialized form of multitasking and a multitasking is the feature\nthat allows your computer to run two or more programs concurrently. In general,\nthere are two types of multitasking: process-based and thread-based." }, { "code": null, "e": 1463, "s": 1298, "text": "Process-based multitasking handles the concurrent execution of programs. Threadbased\nmultitasking deals with the concurrent execution of pieces of the same\nprogram." }, { "code": null, "e": 1643, "s": 1463, "text": "A multithreaded program contains two or more parts that can run concurrently.\nEach part of such a program is called a thread, and each thread defines a separate\npath of execution." }, { "code": null, "e": 1796, "s": 1643, "text": "C++ does not contain any built-in support for multithreaded applications. Instead, it\nrelies entirely upon the operating system to provide this feature." }, { "code": null, "e": 2067, "s": 1796, "text": "This tutorial assumes that you are working on Linux OS and we are going to write\nmulti-threaded C++ program using POSIX. POSIX Threads, or Pthreads provides API\nwhich are available on many Unix-like POSIX systems such as FreeBSD, NetBSD,\nGNU/Linux, Mac OS X and Solaris." }, { "code": null, "e": 2124, "s": 2067, "text": "The following routine is used to create a POSIX thread −" }, { "code": null, "e": 2195, "s": 2124, "text": "#include <pthread.h>\npthread_create (thread, attr, start_routine, arg)" }, { "code": null, "e": 2384, "s": 2195, "text": "Here, pthread_create creates a new thread and makes it executable. This routine\ncan be called any number of times from anywhere within your code. Here is the\ndescription of the parameters." }, { "code": null, "e": 2600, "s": 2384, "text": "The maximum number of threads that may be created by a process is\nimplementation dependent. Once created, threads are peers, and may create other\nthreads. There is no implied hierarchy or dependency between threads." }, { "code": null, "e": 2670, "s": 2600, "text": "There is following routine which we use to terminate a POSIX thread –" }, { "code": null, "e": 2713, "s": 2670, "text": "#include <pthread.h>\npthread_exit (status)" }, { "code": null, "e": 2890, "s": 2713, "text": "Here pthread_exit is used to explicitly exit a thread. Typically, the pthread_exit()\nroutine is called after a thread has completed its work and is no longer required to\nexist." }, { "code": null, "e": 3090, "s": 2890, "text": "If main() finishes before the threads it has created, and exits with pthread_exit(), the\nother threads will continue to execute. Otherwise, they will be automatically\nterminated when main() finishes." }, { "code": null, "e": 3719, "s": 3090, "text": "#include <iostream>\n#include <cstdlib>\n#include <pthread.h>\nusing namespace std;\n#define NUM_THREADS 5\nvoid *PrintHello(void *threadid) {\n long tid;\n tid = (long)threadid;\n cout << \"Hello World! 
Thread ID, \" << tid << endl;\n pthread_exit(NULL);\n}\nint main () {\n pthread_t threads[NUM_THREADS];\n int rc;\n int i;\n for( i = 0; i < NUM_THREADS; i++ ) {\n cout << \"main() : creating thread, \" << i << endl;\n rc = pthread_create(&threads[i], NULL, PrintHello, (void *)i);\n if (rc) {\n cout << \"Error:unable to create thread,\" << rc << endl;\n exit(-1);\n }\n }\n pthread_exit(NULL);\n}" }, { "code": null, "e": 4022, "s": 3719, "text": "$gcc test.cpp -lpthread\n$./a.out\nmain() : creating thread, 0\nmain() : creating thread, 1\nmain() : creating thread, 2\nmain() : creating thread, 3\nmain() : creating thread, 4\nHello World! Thread ID, 0\nHello World! Thread ID, 1\nHello World! Thread ID, 2\nHello World! Thread ID, 3\nHello World! Thread ID, 4" } ]
Go - The goto Statement
A goto statement in the Go programming language provides an unconditional jump from the goto to a labeled statement in the same function.

Note − Use of the goto statement is highly discouraged in any programming language because it becomes difficult to trace the control flow of a program, making the program difficult to understand and hard to modify. Any program that uses a goto can be rewritten using some other construct.

The syntax for a goto statement in Go is as follows −

goto label;
..
.
label: statement;

Here, label can be any plain-text identifier other than a Go keyword, and it can be placed anywhere in the Go program, above or below the goto statement.

package main

import "fmt"

func main() {
   /* local variable definition */
   var a int = 10

   /* do loop execution */
   LOOP: for a < 20 {
      if a == 15 {
         /* skip the iteration */
         a = a + 1
         goto LOOP
      }
      fmt.Printf("value of a: %d\n", a)
      a++
   }
}

When the above code is compiled and executed, it produces the following result −

value of a: 10
value of a: 11
value of a: 12
value of a: 13
value of a: 14
value of a: 16
value of a: 17
value of a: 18
value of a: 19
[ { "code": null, "e": 2071, "s": 1937, "text": "A goto statement in Go programming language provides an unconditional jump from the goto to a labeled statement in the same function." }, { "code": null, "e": 2356, "s": 2071, "text": "Note − Use of goto statement is highly discouraged in any programming language because it becomes difficult to trace the control flow of a program, making the program difficult to understand and hard to modify. Any program that uses a goto can be rewritten using some other construct." }, { "code": null, "e": 2410, "s": 2356, "text": "The syntax for a goto statement in Go is as follows −" }, { "code": null, "e": 2446, "s": 2410, "text": "goto label;\n..\n.\nlabel: statement;\n" }, { "code": null, "e": 2577, "s": 2446, "text": "Here, label can be any plain text except Go keyword and it can be set anywhere in the Go program above or below to goto statement." }, { "code": null, "e": 2885, "s": 2577, "text": "package main\n\nimport \"fmt\"\n\nfunc main() {\n /* local variable definition */\n var a int = 10\n\n /* do loop execution */\n LOOP: for a < 20 {\n if a == 15 {\n /* skip the iteration */\n a = a + 1\n goto LOOP\n }\n fmt.Printf(\"value of a: %d\\n\", a)\n a++ \n } \n}" }, { "code": null, "e": 2966, "s": 2885, "text": "When the above code is compiled and executed, it produces the following result −" }, { "code": null, "e": 3102, "s": 2966, "text": "value of a: 10\nvalue of a: 11\nvalue of a: 12\nvalue of a: 13\nvalue of a: 14\nvalue of a: 16\nvalue of a: 17\nvalue of a: 18\nvalue of a: 19\n" }, { "code": null, "e": 3137, "s": 3102, "text": "\n 64 Lectures \n 6.5 hours \n" }, { "code": null, "e": 3150, "s": 3137, "text": " Ridhi Arora" }, { "code": null, "e": 3185, "s": 3150, "text": "\n 20 Lectures \n 2.5 hours \n" }, { "code": null, "e": 3199, "s": 3185, "text": " Asif Hussain" }, { "code": null, "e": 3232, "s": 3199, "text": "\n 22 Lectures \n 4 hours \n" }, { "code": null, "e": 3251, "s": 3232, "text": " Dilip Padmanabhan" }, { "code": null, "e": 3284, "s": 3251, "text": "\n 48 Lectures \n 6 hours \n" }, { "code": null, "e": 3303, "s": 3284, "text": " Arnab Chakraborty" }, { "code": null, "e": 3335, "s": 3303, "text": "\n 7 Lectures \n 1 hours \n" }, { "code": null, "e": 3352, "s": 3335, "text": " Aditya Kulkarni" }, { "code": null, "e": 3385, "s": 3352, "text": "\n 44 Lectures \n 3 hours \n" }, { "code": null, "e": 3404, "s": 3385, "text": " Arnab Chakraborty" }, { "code": null, "e": 3411, "s": 3404, "text": " Print" }, { "code": null, "e": 3422, "s": 3411, "text": " Add Notes" } ]
A Step-by-Step Guide in detecting causal relationships using Bayesian Structure Learning in Python | by Erdogan Taskesen | Towards Data Science
Determining causality across variables can be a challenging step but it is important for strategic actions. I will summarize the concepts of causal models in terms of Bayesian probabilistic, followed by a hands-on tutorial to detect causal relationships using Bayesian structure learning. I will use the sprinkler dataset to conceptually explain how structures are learned with the use of the Python library bnlearn. The use of machine learning techniques has become a standard toolkit to obtain useful insights and make predictions in many areas such as disease prediction, recommendation systems, natural language processing. Although good performances can be achieved, it is not straightforward to extract causal relationships with, for example, the target variable. In other words: which variables have a direct causal effect on the target variable? Such insights are important to determine the driving factors that reach the conclusion, and as such, strategic actions can be taken. A branch of machine learning is Bayesian probabilistic graphical models, also named Bayesian networks (BN), which can be used to determine such causal factors. Let’s rehash some terminology before we jump into the technical details of causal models. It is common to use the terms “correlation” and “association” interchangeably. But we all know that correlation or association is not causation. Or in other words, observed relationships between two variables do not necessarily mean that one causes the other. Technically, correlation refers to a linear relationship between two variables whereas association refers to any relationship between two (or more) variables. Causation, on the other hand, means that one variable (often called the predictor variable or independent variable) causes the other (often called the outcome variable or dependent variable) [1]. In the next two sections, I will briefly describe correlation and association by example. Pearson correlation is the most commonly used correlation coefficient. It is so common that it is often used synonymously with correlation. The strength is denoted by r and measures the strength of a linear relationship in a sample on a standardized scale from -1 to 1. There are three possible results when using correlation: Positive correlation: a relationship between two variables in which both variables move in the same direction Negative correlation: a relationship between two variables in which an increase in one variable is associated with a decrease in the other, and No correlation: when there is no relationship between two variables. An example of positive correlation is demonstrated in Figure 1 where the relationship is seen between chocolate consumption and the number of Nobel Laureates per country [2]. The figure shows that chocolate consumption could imply an increase in Nobel Laureates. Or the other way around, an increase in Nobel laureates could likewise underlie an increase in chocolate consumption. Despite the strong correlation, it is more plausible that unobserved variables such as socioeconomic status or quality of the education system might cause an increase in both chocolate consumption and Nobel Laureates. Or in other words, it is still unknown whether the relationship is causal [2]. This does not mean that correlation by itself is useless, it simply has a different purpose [3]. Correlation by itself does not imply causation because statistical relations do not uniquely constrain causal relations. 
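As a quick illustration of the correlation coefficient described above, the small Python sketch below computes Pearson's r for two toy variables. The numbers are made up for demonstration purposes only (they are not the chocolate/Nobel data), and scipy is assumed to be available.
# Minimal sketch: Pearson correlation between two toy variables.
# The values below are invented for illustration; a high r only indicates
# a linear relationship and says nothing about causation.
from scipy.stats import pearsonr
chocolate_kg = [1.8, 3.5, 4.5, 6.3, 8.8, 11.9]    # hypothetical per-capita consumption
nobel_rate = [1.5, 5.7, 11.0, 18.9, 25.3, 31.9]   # hypothetical laureates per 10 million
r, p_value = pearsonr(chocolate_kg, nobel_rate)
print(f"Pearson r={r:.2f}, p-value={p_value:.3f}")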
When we talk about association, we mean that certain values of one variable tend to co-occur with certain values of the other variable. From a statistical point of view, there are many measures of association (such as chi-square test, Fisher exact test, hypergeometric test, etc) and are often used where one or both of the variables is either ordinal or nominal. It should be noted that correlation is a technical term, whereas the term association is not, and therefore, there is not always consensus about the meaning in statistics. This means that it’s always a good practice to state the meaning of the terms you’re using. More information about associations can be found in this blog [4] and read this blog on how to explore and understand your data with a network of significant associations [5]. For the sake of example, I will use the Hypergeometric test to demonstrate whether two variables are associated using the Titanic dataset. The Titanic dataset is used in many machine learning examples, and it is readily known that the sex status (female) is a good predictor for survival. Let me demonstrate how to compute the association between survived and female. First, install the bnlearn library and only load the Titanic dataset. pip install bnlearn Q: What is the probability that females survived? Null hypothesis: There is no relation between survived and female. The hypergeometric test uses the hypergeometric distribution to measure the statistical significance of a discrete probability distribution. In this example, N is the population size (891), K is the number of successful states in the population (342), n is the sample size/number of draws (314), x is the number of success in sample (233). We can reject the null hypothesis under alpha=0.05 and therefore, we can speak about a statistically significant association between survived and female. Importantly, association by itself does not imply causation. We need to distinguish between marginal associations and conditional associations. The latter is the key building block of causal inference. Causation means that one (independent) variable causes the other (dependent) variable and is formulated by Reichenbach (1956) as follows: If two random variables X and Y are statistically dependent (X/Y), then either (a) X causes Y, (b) Y causes X, or (c ) there exists a third variable Z that causes both X and Y. Further, X and Y become independent given Z, i.e., X⊥Y∣Z. This definition is incorporated in Bayesian graphical models (a.k.a. Bayesian networks, Bayesian belief networks, Bayes Net, causal probabilistic networks, and Influence diagrams). A lot of names for the same technique. To determine causality, we can use Bayesian networks (BN). Let’s start with the graph and visualize the statistical dependencies between the three variables described by Reichenbach (X, Y, Z) (see figure 2). Nodes correspond to variables (X, Y, Z) and the directed edges (arrows) indicate dependency relationships or conditional distributions. Four graphs can be created; (a, b) Cascade, (c ) Common parent and (d) the V-structure, and these graphs form the basis for Bayesian networks. But how can we tell what causes what? The conceptual idea to determine the direction of causality, thus which node influences which node, is by holding a node constant and then observe the effect. As an example, let’s take DAG (a) in Figure 2, which describes that Z is caused by X, and Y is caused by Z. If we now keep Z constant there should not be a change in Y if this model is true. 
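As a side note, the Titanic association test described earlier can be written out explicitly. Below is a minimal sketch using scipy's hypergeometric distribution with the counts quoted above (N=891, K=342, n=314, x=233); it is an illustrative reconstruction, and the helper that bnlearn ships for this test may differ in its exact call.
# Minimal sketch of the hypergeometric association test between survived and female.
# The counts are the ones quoted in the text; the exact bnlearn helper may differ.
from scipy.stats import hypergeom
N = 891   # population size (all passengers)
K = 342   # number of survivors in the population
n = 314   # sample size / number of draws (females)
x = 233   # number of female survivors
# P(X >= x) under the null hypothesis that survival is independent of sex.
p_value = hypergeom.sf(x - 1, N, K, n)
print(f"P-value: {p_value:.3e}")   # far below alpha=0.05, so the null hypothesis is rejected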
Every Bayesian network can be described by these four graphs, and with probability theory (see the section below) we can glue the parts together. Bayesian network is a happy marriage between probability and graph theory. It should be noted that a Bayesian network is a Directed Acyclic Graph (DAG) and DAGs are causal. This means that the edges in the graph are directed and there is no (feedback) loop (acyclic). Probability theory, or more specific Bayes theorem or Bayes Rule, forms the fundament for Bayesian networks. The Bayes rule is used to update model information, and stated mathematically as the following equation: The equation consists of four parts; the posterior probability is the probability that Z occurs given X. The conditional probability or likelihood is the probability of the evidence given that the hypothesis is true. This can be derived from the data. Our prior belief is the probability of the hypothesis before observing the evidence. This can also be derived from the data or domain knowledge. Finally, the marginal probability describes the probability of the new evidence under all possible hypotheses which needs to be computed. If you want to read more about the (factorized) probability distribution or more details the joint distribution for a Bayesian network, try this blog [6]. With structure learning, we want to determine the structure of the graph that best captures the causal dependencies between the variables in the data set. Or in other words: What is the DAG that best fits the data? A naïve manner to find the best DAG is simply creating all possible combinations of the graph, i.e., by making tens, hundreds, or even thousands of different DAGs until all combinations are exhausted. Each DAG can then be scored on the fit of the data. Finally, the best scoring DAG is returned. In the case of variables X, Y, Z, one can make the graphs as shown in Figure 2 and a few more because it is not only X>Z>Y (Figure 2a), but it can also be like Z>X>Y, etc. The variables X, Y, Z can be boolean values (True or False), but can also have multiple states. The search space of DAGs becomes so-called super-exponential in the number of variables that maximize the score. This means that an exhaustive search is practically infeasible with a large number of nodes, and therefore, various greedy strategies have been proposed to browse DAG space. With optimization-based search approaches, it is possible to browse a larger DAG space. Such approaches require a scoring function and a search strategy. A common scoring function is the posterior probability of the structure given the training data, like the BIC or the BDeu. Structure learning for large DAGs requires a scoring function and search strategy. Before we jump into the examples, it is always good to understand when to use which technique. There are two broad approaches to search throughout the DAG space and find the best fitting graph for the data. Score-based structure learning Constraint-based structure learning Note that a local search strategy makes incremental changes aimed at improving the score of the structure. A global search algorithm like Markov chain Monte Carlo can avoid getting trapped in local minima but I will not discuss that here. Score-based approaches have two main components: The search algorithm to optimize throughout the search space of all possible DAGs; such as ExhaustiveSearch, Hillclimbsearch, Chow-Liu.The scoring function indicates how well the Bayesian network fits the data. 
Commonly used scoring functions are Bayesian Dirichlet scores such as BDeu or K2 and the Bayesian Information Criterion (BIC, also called MDL). Four common score-based methods are depicted below, but more detail about the Bayesian scoring methods can be found here [9]. ExhaustiveSearch, as the name implies, scores every possible DAG and returns the best-scoring DAG. This search approach is only feasible for very small networks, and efficient local optimization algorithms cannot guarantee finding the optimal structure. Thus, identifying the ideal structure is often not tractable. Nevertheless, heuristic search strategies often yield good results if only a few nodes are involved (read: less than 5 or so). Hillclimbsearch is a heuristic search approach that can be used if more nodes are used.
Suppose you have a sprinkler system in your backyard and for the last 1000 days, you measured four variables, each with two states: Rain (yes or no), Cloudy (yes or no), Sprinkler system (on or off), and Wet grass (true or false). Based on these four variables and your conception of the real world, you may have an intuition of how the graph should look, right? If not, it is good that you read this article because with structure learning you will find out! With bnlearn it is easy to determine the causal relationships with only a few lines of code. In the example below, we will import the bnlearn library, load the sprinkler dataset, and determine which DAG best fits the data. Note that the sprinkler dataset is readily cleaned without missing values and all values have the state 1 or 0. That’s it! We have the learned structure as shown in Figure 3. The detected DAG consists of four nodes that are connected through edges; each edge indicates a causal relation. The state of Wet grass depends on two nodes, Rain and Sprinkler. The state of Rain is conditioned by Cloudy, and separately, the state Sprinkler is also conditioned by Cloudy. This DAG represents the (factorized) probability distribution, where S is the random variable for sprinkler, R for the rain, G for the wet grass, and C for cloudy. By examining the graph, you quickly see that the only independent variable in the model is C. The other variables are conditioned on the probability of cloudy, rain, and/or the sprinkler. In general, the joint distribution for a Bayesian Network is the product of the conditional probabilities for every node given its parents: P(X1, ..., Xn) = ∏i P(Xi | parents(Xi)), which for the sprinkler DAG works out to P(C, S, R, G) = P(C) * P(S|C) * P(R|C) * P(G|S, R). The default setting in bnlearn for structure learning is the hillclimbsearch method and BIC scoring. Notably, different methods and scoring types can be specified. See the code sketch at the end of this article for how to specify the search and scoring type. Although the detected DAG for the sprinkler dataset is insightful and shows the causal dependencies for the variables in the dataset, it does not allow you to ask all kinds of questions, such as: How probable is it to have wet grass given the sprinkler is off? How probable is it to have a rainy day given the sprinkler is off and it is cloudy? In the sprinkler dataset, it may be evident what the outcome is, given your knowledge about the world and by logical thinking. But once you have larger, more complex graphs it may not be so evident anymore. With so-called inferences, we can answer “what-if-we-did-x” type questions that would normally require controlled experiments and explicit interventions to answer. To make inferences we need two ingredients: the DAG and Conditional Probabilistic Tables (CPTs). At this point, we have the data stored in the data frame (df) and we readily computed the DAG that describes the structure of the data. The CPTs are needed to quantitatively describe the statistical relationship between each node and its parents. The CPTs can be computed using Parameter learning, so let’s jump into parameter learning first, and then we move back to making inferences. Parameter learning is the task to estimate the values of the Conditional Probability Tables (CPTs). The bnlearn library supports Parameter learning for discrete nodes: Maximum Likelihood Estimation is a natural estimate by using the relative frequencies with which the variable states have occurred. When estimating parameters for Bayesian networks, lack of data is a frequent problem and the ML estimator has the problem of overfitting to the data.
In other words, if the observed data is not representative (or too small) for the underlying distribution, ML estimations can be extremely far off. As an example, if a variable has 3 parents that can each take 10 states, then state counts will be done separately for 10^3 = 1000 parent configurations. This can make MLE very fragile for learning Bayesian Network parameters. A way to mitigate MLE’s overfitting is Bayesian Parameter Estimation. Bayesian Estimation starts with readily existing prior CPTs, which express our beliefs about the variables before the data was observed. Those “priors” are then updated using the state counts from the observed data. One can think of the priors as consisting of pseudo-state counts, that are added to the actual counts before normalization. A very simple prior is the so-called K2 prior, which simply adds “1” to the count of every single state. A somewhat more sensible choice of prior is BDeu (Bayesian Dirichlet equivalent uniform prior). I will continue with the sprinkler dataset to learn its parameters, resulting in the detection of Conditional Probabilistic Tables (CPTs). To learn parameters, we need a Directed Acyclic Graph (DAG) and a dataset with exactly the same variables. The idea is to connect the dataset with the DAG. In the previous example, we readily computed the DAG (Figure 3). You can use it in this example or alternatively, you can create your own DAG based on your knowledge of the world! In the example, I will demonstrate how to create your own DAG which can be based on expert/domain knowledge. If you reached this point, you have computed the CPTs based on the DAG and the input dataset df using Maximum Likelihood Estimation (MLE) (Figure 4). Note that the CPTs are included in Figure 4 for clarity purposes. Computing the CPTs using MLE is straightforward; let me demonstrate this by computing the CPTs manually for the nodes Cloudy and Rain. Note that conditional dependencies can be based on limited data points. As an example, P(Rain=1|Cloudy=0) is based on 91 observations. If Rain had more than two states and/or more dependencies, this number would have been even lower. Is more data the solution? Maybe. Maybe not. Just keep in mind that even if the total sample size is very large, the fact that state counts are done conditionally for each parent’s configuration can also cause fragmentation. Check out the differences with the CPTs compared to the MLE approach. To make inferences, the Bayesian network requires two main components: A Directed Acyclic Graph (DAG) that describes the structure of the data and Conditional Probability Tables (CPT) that describe the statistical relationship between each node and its parents. At this point you have the dataset, you computed the DAG using structure learning and estimated the CPTs using parameter learning. You can now make inferences! With inferences, we marginalize variables in a procedure that is called variable elimination. Variable elimination is an exact inference algorithm. It can also be used to figure out the state of the network that has maximum probability by simply exchanging the sums by max functions. Its downside is that for large BNs it might be computationally intractable. Approximate inference algorithms such as Gibbs sampling or rejection sampling might be used in these cases [7]. With bnlearn we can make such inferences directly (see the code sketch at the end of this article). And now we have the answers to our questions: How probable is it to have wet grass given the sprinkler is off?
P(Wet_grass=1 | Sprinkler=0) = 0.51. How probable is it to have a rainy day given the sprinkler is off and it is cloudy? P(Rain=1 | Sprinkler=0, Cloudy=1) = 0.663. If you solely used data to compute the causal diagram, it is hard to fully verify the validity and completeness of your causal diagram. However, some solutions can help to get more trust in the causal diagram. For example, it may be possible to empirically test certain conditional independence or dependence relationships between sets of variables. If they are not in the data, it is an indication of the correctness of the causal model [8]. Alternatively, prior expert knowledge can be added, such as a DAG or CPTs, to get more trust in the model when making inferences. In this article, I touched on a few concepts about why correlation or association is not causation and how to go from data towards a causal model using structure learning. A summary of the advantages of Bayesian techniques is that: The outcome of posterior probability distributions, or the graph, allows the user to make a judgment on the model predictions instead of having one single value as an outcome. The possibility to incorporate domain/expert knowledge in the DAG and reason with incomplete information and missing data. This is possible because Bayes theorem is built on updating the prior term with evidence. It has a notion of modularity. A complex system is built by combining simpler parts. Graph theory provides intuitively highly interacting sets of variables. Probability theory provides the glue to combine the parts. A weakness of Bayesian networks, on the other hand, is that finding the optimum DAG is computationally expensive since an exhaustive search over all the possible structures must be performed. The limit of nodes for exhaustive search can already be around 15 nodes but also depends on the number of states. If you have more nodes, alternative methods with a scoring function and search algorithm are required. Nevertheless, to deal with problems with hundreds or maybe even thousands of variables, a different approach, such as tree-based or constraint-based approaches with the use of black/whitelisting of variables, is necessary. Such an approach first determines the order and then finds the optimal BN structure for that ordering. This implies working on the search space of the possible orderings, which is convenient as it is smaller than the space of network structures. Determining causality can be a challenging task but the bnlearn library is designed to tackle some of the challenges, such as Structure learning, Parameter learning, and Inferences. But it can also derive the topological ordering of the (entire) graph, or compare two graphs. Documentation can be found here that also contains the examples of the Alarm, Andes, Asia, Pathfinder, Sachs models. Be safe. Stay frosty. Cheers, E.
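As a final addendum, readers who want to reproduce the sprinkler walk-through from start to finish can begin from the following minimal sketch. It is a sketch rather than the exact snippets used for the figures; argument names and the capitalization of the column names may differ slightly between bnlearn versions, so check the documentation if a call does not match your installed release.
# Minimal end-to-end sketch: structure learning, parameter learning and inference
# on the sprinkler dataset. Keyword arguments may differ across bnlearn versions.
import bnlearn as bn
# Load the sprinkler dataset (four binary variables).
df = bn.import_example('sprinkler')
# Structure learning: estimate the DAG (default: hillclimbsearch with BIC scoring).
model = bn.structure_learning.fit(df, methodtype='hc', scoretype='bic')
# Parameter learning: estimate the CPTs given the DAG and the data.
model = bn.parameter_learning.fit(model, df, methodtype='maximumlikelihood')
# Inference: the two questions posed in the article.
q1 = bn.inference.fit(model, variables=['Wet_Grass'], evidence={'Sprinkler': 0})
q2 = bn.inference.fit(model, variables=['Rain'], evidence={'Sprinkler': 0, 'Cloudy': 1})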
If you found this article helpful, help support my content by signing up for a Medium membership using my referral link or follow me to access similar blogs. bnlearn documentation Colab Notebook examples Let’s connect on LinkedIn Follow me on github McLeod, S. A, Correlation definitions, examples & interpretation. Simply Psychology, 2018, January 14F. Dablander, An Introduction to Causal Inference, Department of Psychological Methods, University of Amsterdam, https://psyarxiv.com/b3fkwBrittany Davis, When Correlation is Better than Causation, Medium, 2021Paul Gingrich, Measures of association. Page 766–795Taskesen, E, Explore and understand your data with a network of significant associations. Aug. 2021, MediumBranislav Holländer, Introduction to Probabilistic Graphical Models, Medium, 2020Harini Padmanaban, Comparative Analysis of Naive Analysis of Naive Bayes and Tes and Tree Augmented Naive augmented Naive Bayes Models, San Jose State University, 2014Huszar. F, ML beyond Curve Fitting: An Intro to Causal Inference and do-CalculusE. Perrier et al, Finding Optimal Bayesian Network Given a Super-Structure, Journal of Machine Learning Research 9 (2008) 2251–2286. McLeod, S. A, Correlation definitions, examples & interpretation. Simply Psychology, 2018, January 14 F. Dablander, An Introduction to Causal Inference, Department of Psychological Methods, University of Amsterdam, https://psyarxiv.com/b3fkw Brittany Davis, When Correlation is Better than Causation, Medium, 2021 Paul Gingrich, Measures of association. Page 766–795 Taskesen, E, Explore and understand your data with a network of significant associations. Aug. 2021, Medium Branislav Holländer, Introduction to Probabilistic Graphical Models, Medium, 2020 Harini Padmanaban, Comparative Analysis of Naive Analysis of Naive Bayes and Tes and Tree Augmented Naive augmented Naive Bayes Models, San Jose State University, 2014 Huszar. F, ML beyond Curve Fitting: An Intro to Causal Inference and do-Calculus E. Perrier et al, Finding Optimal Bayesian Network Given a Super-Structure, Journal of Machine Learning Research 9 (2008) 2251–2286.
[ { "code": null, "e": 588, "s": 171, "text": "Determining causality across variables can be a challenging step but it is important for strategic actions. I will summarize the concepts of causal models in terms of Bayesian probabilistic, followed by a hands-on tutorial to detect causal relationships using Bayesian structure learning. I will use the sprinkler dataset to conceptually explain how structures are learned with the use of the Python library bnlearn." }, { "code": null, "e": 1318, "s": 588, "text": "The use of machine learning techniques has become a standard toolkit to obtain useful insights and make predictions in many areas such as disease prediction, recommendation systems, natural language processing. Although good performances can be achieved, it is not straightforward to extract causal relationships with, for example, the target variable. In other words: which variables have a direct causal effect on the target variable? Such insights are important to determine the driving factors that reach the conclusion, and as such, strategic actions can be taken. A branch of machine learning is Bayesian probabilistic graphical models, also named Bayesian networks (BN), which can be used to determine such causal factors." }, { "code": null, "e": 2113, "s": 1318, "text": "Let’s rehash some terminology before we jump into the technical details of causal models. It is common to use the terms “correlation” and “association” interchangeably. But we all know that correlation or association is not causation. Or in other words, observed relationships between two variables do not necessarily mean that one causes the other. Technically, correlation refers to a linear relationship between two variables whereas association refers to any relationship between two (or more) variables. Causation, on the other hand, means that one variable (often called the predictor variable or independent variable) causes the other (often called the outcome variable or dependent variable) [1]. In the next two sections, I will briefly describe correlation and association by example." }, { "code": null, "e": 2440, "s": 2113, "text": "Pearson correlation is the most commonly used correlation coefficient. It is so common that it is often used synonymously with correlation. The strength is denoted by r and measures the strength of a linear relationship in a sample on a standardized scale from -1 to 1. There are three possible results when using correlation:" }, { "code": null, "e": 2550, "s": 2440, "text": "Positive correlation: a relationship between two variables in which both variables move in the same direction" }, { "code": null, "e": 2694, "s": 2550, "text": "Negative correlation: a relationship between two variables in which an increase in one variable is associated with a decrease in the other, and" }, { "code": null, "e": 2763, "s": 2694, "text": "No correlation: when there is no relationship between two variables." }, { "code": null, "e": 2938, "s": 2763, "text": "An example of positive correlation is demonstrated in Figure 1 where the relationship is seen between chocolate consumption and the number of Nobel Laureates per country [2]." }, { "code": null, "e": 3659, "s": 2938, "text": "The figure shows that chocolate consumption could imply an increase in Nobel Laureates. Or the other way around, an increase in Nobel laureates could likewise underlie an increase in chocolate consumption. 
Despite the strong correlation, it is more plausible that unobserved variables such as socioeconomic status or quality of the education system might cause an increase in both chocolate consumption and Nobel Laureates. Or in other words, it is still unknown whether the relationship is causal [2]. This does not mean that correlation by itself is useless, it simply has a different purpose [3]. Correlation by itself does not imply causation because statistical relations do not uniquely constrain causal relations." }, { "code": null, "e": 4463, "s": 3659, "text": "When we talk about association, we mean that certain values of one variable tend to co-occur with certain values of the other variable. From a statistical point of view, there are many measures of association (such as chi-square test, Fisher exact test, hypergeometric test, etc) and are often used where one or both of the variables is either ordinal or nominal. It should be noted that correlation is a technical term, whereas the term association is not, and therefore, there is not always consensus about the meaning in statistics. This means that it’s always a good practice to state the meaning of the terms you’re using. More information about associations can be found in this blog [4] and read this blog on how to explore and understand your data with a network of significant associations [5]." }, { "code": null, "e": 4831, "s": 4463, "text": "For the sake of example, I will use the Hypergeometric test to demonstrate whether two variables are associated using the Titanic dataset. The Titanic dataset is used in many machine learning examples, and it is readily known that the sex status (female) is a good predictor for survival. Let me demonstrate how to compute the association between survived and female." }, { "code": null, "e": 4901, "s": 4831, "text": "First, install the bnlearn library and only load the Titanic dataset." }, { "code": null, "e": 4921, "s": 4901, "text": "pip install bnlearn" }, { "code": null, "e": 4971, "s": 4921, "text": "Q: What is the probability that females survived?" }, { "code": null, "e": 5038, "s": 4971, "text": "Null hypothesis: There is no relation between survived and female." }, { "code": null, "e": 5378, "s": 5038, "text": "The hypergeometric test uses the hypergeometric distribution to measure the statistical significance of a discrete probability distribution. In this example, N is the population size (891), K is the number of successful states in the population (342), n is the sample size/number of draws (314), x is the number of success in sample (233)." }, { "code": null, "e": 5734, "s": 5378, "text": "We can reject the null hypothesis under alpha=0.05 and therefore, we can speak about a statistically significant association between survived and female. Importantly, association by itself does not imply causation. We need to distinguish between marginal associations and conditional associations. The latter is the key building block of causal inference." }, { "code": null, "e": 5872, "s": 5734, "text": "Causation means that one (independent) variable causes the other (dependent) variable and is formulated by Reichenbach (1956) as follows:" }, { "code": null, "e": 6107, "s": 5872, "text": "If two random variables X and Y are statistically dependent (X/Y), then either (a) X causes Y, (b) Y causes X, or (c ) there exists a third variable Z that causes both X and Y. Further, X and Y become independent given Z, i.e., X⊥Y∣Z." 
}, { "code": null, "e": 6671, "s": 6107, "text": "This definition is incorporated in Bayesian graphical models (a.k.a. Bayesian networks, Bayesian belief networks, Bayes Net, causal probabilistic networks, and Influence diagrams). A lot of names for the same technique. To determine causality, we can use Bayesian networks (BN). Let’s start with the graph and visualize the statistical dependencies between the three variables described by Reichenbach (X, Y, Z) (see figure 2). Nodes correspond to variables (X, Y, Z) and the directed edges (arrows) indicate dependency relationships or conditional distributions." }, { "code": null, "e": 6814, "s": 6671, "text": "Four graphs can be created; (a, b) Cascade, (c ) Common parent and (d) the V-structure, and these graphs form the basis for Bayesian networks." }, { "code": null, "e": 6852, "s": 6814, "text": "But how can we tell what causes what?" }, { "code": null, "e": 7348, "s": 6852, "text": "The conceptual idea to determine the direction of causality, thus which node influences which node, is by holding a node constant and then observe the effect. As an example, let’s take DAG (a) in Figure 2, which describes that Z is caused by X, and Y is caused by Z. If we now keep Z constant there should not be a change in Y if this model is true. Every Bayesian network can be described by these four graphs, and with probability theory (see the section below) we can glue the parts together." }, { "code": null, "e": 7423, "s": 7348, "text": "Bayesian network is a happy marriage between probability and graph theory." }, { "code": null, "e": 7616, "s": 7423, "text": "It should be noted that a Bayesian network is a Directed Acyclic Graph (DAG) and DAGs are causal. This means that the edges in the graph are directed and there is no (feedback) loop (acyclic)." }, { "code": null, "e": 7830, "s": 7616, "text": "Probability theory, or more specific Bayes theorem or Bayes Rule, forms the fundament for Bayesian networks. The Bayes rule is used to update model information, and stated mathematically as the following equation:" }, { "code": null, "e": 8520, "s": 7830, "text": "The equation consists of four parts; the posterior probability is the probability that Z occurs given X. The conditional probability or likelihood is the probability of the evidence given that the hypothesis is true. This can be derived from the data. Our prior belief is the probability of the hypothesis before observing the evidence. This can also be derived from the data or domain knowledge. Finally, the marginal probability describes the probability of the new evidence under all possible hypotheses which needs to be computed. If you want to read more about the (factorized) probability distribution or more details the joint distribution for a Bayesian network, try this blog [6]." }, { "code": null, "e": 8694, "s": 8520, "text": "With structure learning, we want to determine the structure of the graph that best captures the causal dependencies between the variables in the data set. Or in other words:" }, { "code": null, "e": 8735, "s": 8694, "text": "What is the DAG that best fits the data?" }, { "code": null, "e": 9864, "s": 8735, "text": "A naïve manner to find the best DAG is simply creating all possible combinations of the graph, i.e., by making tens, hundreds, or even thousands of different DAGs until all combinations are exhausted. Each DAG can then be scored on the fit of the data. Finally, the best scoring DAG is returned. 
In the case of variables X, Y, Z, one can make the graphs as shown in Figure 2 and a few more because it is not only X>Z>Y (Figure 2a), but it can also be like Z>X>Y, etc. The variables X, Y, Z can be boolean values (True or False), but can also have multiple states. The search space of DAGs becomes so-called super-exponential in the number of variables that maximize the score. This means that an exhaustive search is practically infeasible with a large number of nodes, and therefore, various greedy strategies have been proposed to browse DAG space. With optimization-based search approaches, it is possible to browse a larger DAG space. Such approaches require a scoring function and a search strategy. A common scoring function is the posterior probability of the structure given the training data, like the BIC or the BDeu." }, { "code": null, "e": 9947, "s": 9864, "text": "Structure learning for large DAGs requires a scoring function and search strategy." }, { "code": null, "e": 10154, "s": 9947, "text": "Before we jump into the examples, it is always good to understand when to use which technique. There are two broad approaches to search throughout the DAG space and find the best fitting graph for the data." }, { "code": null, "e": 10185, "s": 10154, "text": "Score-based structure learning" }, { "code": null, "e": 10221, "s": 10185, "text": "Constraint-based structure learning" }, { "code": null, "e": 10460, "s": 10221, "text": "Note that a local search strategy makes incremental changes aimed at improving the score of the structure. A global search algorithm like Markov chain Monte Carlo can avoid getting trapped in local minima but I will not discuss that here." }, { "code": null, "e": 10509, "s": 10460, "text": "Score-based approaches have two main components:" }, { "code": null, "e": 10864, "s": 10509, "text": "The search algorithm to optimize throughout the search space of all possible DAGs; such as ExhaustiveSearch, Hillclimbsearch, Chow-Liu.The scoring function indicates how well the Bayesian network fits the data. Commonly used scoring functions are Bayesian Dirichlet scores such as BDeu or K2 and the Bayesian Information Criterion (BIC, also called MDL)." }, { "code": null, "e": 11000, "s": 10864, "text": "The search algorithm to optimize throughout the search space of all possible DAGs; such as ExhaustiveSearch, Hillclimbsearch, Chow-Liu." }, { "code": null, "e": 11220, "s": 11000, "text": "The scoring function indicates how well the Bayesian network fits the data. Commonly used scoring functions are Bayesian Dirichlet scores such as BDeu or K2 and the Bayesian Information Criterion (BIC, also called MDL)." }, { "code": null, "e": 11346, "s": 11220, "text": "Four common score-based methods are depicted below, but more detail about the Bayesian scoring methods can be found here [9]." }, { "code": null, "e": 11791, "s": 11346, "text": "ExhaustiveSearch, as the name implies, scores every possible DAG and returns the best-scoring DAG. This search approach is only attractable for very small networks and prohibits efficient local optimization algorithms to always find the optimal structure. Thus, identifying the ideal structure is often not tractable. Nevertheless, heuristic search strategies often yield good results if only a few nodes are involved (read: less than 5 or so)." }, { "code": null, "e": 12142, "s": 11791, "text": "Hillclimbsearch is a heuristic search approach that can be used if more nodes are used. 
HillClimbSearch implements a greedy local search that starts from the DAG “start” (default: disconnected DAG) and proceeds by iteratively performing single-edge manipulations that maximally increase the score. The search terminates once a local maximum is found." }, { "code": null, "e": 12376, "s": 12142, "text": "Chow-Liu algorithm is a specific type of tree-based approach. The Chow-Liu algorithm finds the maximum-likelihood tree structure where each node has at most one parent. The complexity can be limited by restricting to tree structures." }, { "code": null, "e": 12572, "s": 12376, "text": "Tree-augmented Naive Bayes (TAN) algorithm is also a tree-based approach that can be used to model huge datasets involving lots of uncertainties among its various interdependent feature sets [6]." }, { "code": null, "e": 13353, "s": 12572, "text": "Chi-square test. A different, but quite straightforward approach to construct a DAG by identifying independencies in the data set using hypothesis tests, such as chi2 test statistic. This approach does rely on statistical tests and conditional hypotheses to learn independence among the variables in the model. The P-value of the chi2 test is the probability of observing the computed chi2 statistic, given the null hypothesis that X and Y are independent given Z. This can be used to make independent judgments, at a given level of significance. An example of a constraint-based approach is the PC algorithm which starts with a complete fully connected graph and removes edges based on the results of the tests if the nodes are independent until a stopping criterion is achieved." }, { "code": null, "e": 13510, "s": 13353, "text": "A few words about the bnlearn library that is used for all the analysis in this article. The bnlearn library is designed to tackle a few challenges such as:" }, { "code": null, "e": 13615, "s": 13510, "text": "Structure learning: Given the data: Estimate a DAG that captures the dependencies between the variables." }, { "code": null, "e": 13741, "s": 13615, "text": "Parameter learning: Given the data and DAG: Estimate the (conditional) probability distributions of the individual variables." }, { "code": null, "e": 13834, "s": 13741, "text": "Inference: Given the learned model: Determine the exact probability values for your queries." }, { "code": null, "e": 13913, "s": 13834, "text": "What benefits does bnlearn offer over other bayesian analysis implementations?" }, { "code": null, "e": 13947, "s": 13913, "text": "Build on top of the pgmpy library" }, { "code": null, "e": 13991, "s": 13947, "text": "Contains the most-wanted bayesian pipelines" }, { "code": null, "e": 14012, "s": 13991, "text": "Simple and intuitive" }, { "code": null, "e": 14024, "s": 14012, "text": "Open-source" }, { "code": null, "e": 14043, "s": 14024, "text": "Documentation page" }, { "code": null, "e": 14610, "s": 14043, "text": "Let's start with a simple and intuitive example to demonstrate the working of structure learning. Suppose you have a sprinkler system in your backyard and for the last 1000 days, you measured four variables, each with two states: Rain (yes or no), Cloudy (yes or no), Sprinkler system (on or off), and Wet grass (true or false). Based on these four variables and your conception of the real world, you may have an intuition how the graph should look like. right? right? If not, it is good that you read this article because with structure learning you will find out!" 
}, { "code": null, "e": 14703, "s": 14610, "text": "With bnlearn it is easy to determine the causal relationships with only a few lines of code." }, { "code": null, "e": 14945, "s": 14703, "text": "In the example below, we will import the bnlearn library, load the sprinkler dataset, and determine which DAG fits best the data. Note that the sprinkler dataset is readily cleaned without missing values and all values have the state 1 or 0." }, { "code": null, "e": 15461, "s": 14945, "text": "That's it! We have the learned structure as shown in Figure 3. The detected DAG consists of four nodes that are connected through edges, each edge indicates a causal relation. The state of Wet grass depends on two nodes, Rain and Sprinkler. The state of Rain is conditioned by Cloudy, and separately, the state Sprinkler is also conditioned by Cloudy. This DAG represents the (factorized) probability distribution, where S is the random variable for sprinkler, R for the rain, G for the wet grass, and C for cloudy." }, { "code": null, "e": 15789, "s": 15461, "text": "By examining the graph, you quickly see that the only independent variable in the model is C. The other variables are conditioned on the probability of cloudy, rain, and/or the sprinkler. In general, the joint distribution for a Bayesian Network is the product of the conditional probabilities for every node given its parents:" }, { "code": null, "e": 16009, "s": 15789, "text": "The default setting in bnlearn for structure learning is the hillclimbsearch method and BIC scoring. Notably, different methods and scoring types can be specified. See the example to specify the search and scoring type:" }, { "code": null, "e": 16205, "s": 16009, "text": "Although the detected DAG for the sprinkler dataset is insightful and shows the causal dependencies for the variables in the dataset, it does not allow you to ask all kinds of questions, such as:" }, { "code": null, "e": 16270, "s": 16205, "text": "How probable is it to have wet grass given the sprinkler is off?" }, { "code": null, "e": 16354, "s": 16270, "text": "How probable is it to have a rainy day given the sprinkler is off and it is cloudy?" }, { "code": null, "e": 16725, "s": 16354, "text": "In the sprinkler dataset, it may be evident what the outcome is, given your knowledge about the world and by logical thinking. But once you have larger, more complex graphs it may not be so evident anymore. With so-called inferences, we can answer “what-if-we-did-x” type questions that would normally require controlled experiments and explicit interventions to answer." }, { "code": null, "e": 17209, "s": 16725, "text": "To make inferences we need two ingredients; the DAG and Conditional Probabilistic Tables (CPTs). At this point, we have the data stored in the data frame (df) and we readily computed the DAG that describes the structure of the data. The CPTs are needed to quantitatively describe the statistical relationship between each node and its parents. The CPTs can be computed using Parameter learning, so let’s jump into parameter learning first, and then we move back to making inferences." }, { "code": null, "e": 17377, "s": 17209, "text": "Parameter learning is the task to estimate the values of the Conditional Probability Tables (CPTs). The bnlearn library supports Parameter learning for discrete nodes:" }, { "code": null, "e": 18104, "s": 17377, "text": "Maximum Likelihood Estimation is a natural estimate by using the relative frequencies with which the variable states have occurred. 
When estimating parameters for Bayesian networks, lack of data is a frequent problem and the ML estimator has the problem of overfitting to the data. In other words, if the observed data is not representative (or too small) for the underlying distribution, ML estimations can be extremely far off. As an example, if a variable has 3 parents that can each take 10 states, then state counts will be done separately for 103 = 1000 parents configurations. This can make MLE very fragile for learning Bayesian Network parameters. A way to mitigate MLE’s overfitting is Bayesian Parameter Estimation." }, { "code": null, "e": 18644, "s": 18104, "text": "Bayesian Estimation starts with readily existing prior CPTs, that express our beliefs about the variables before the data was observed. Those “priors” are then updated using the state counts from the observed data. One can think of the priors as consisting in pseudo-state counts, that are added to the actual counts before normalization. A very simple prior is the so-called K2 prior, which simply adds “1” to the count of every single state. A somewhat more sensible choice of prior is BDeu (Bayesian Dirichlet equivalent uniform prior)." }, { "code": null, "e": 19228, "s": 18644, "text": "I will continue with the sprinkler dataset to learn its parameters, resulting in the detection of Conditional Probabilistic Tables (CPTs). To learn parameters, we need a Directed Acyclic Graph (DAG) and a dataset with exactly the same variables. The idea is to connect the dataset with the DAG. In the previous example, we readily computed the DAG (Figure 3). You can use it in this example or alternatively, you can create your own DAG based on your knowledge of the world! In the example, I will demonstrate how to create your own DAG which can be based on expert/domain knowledge." }, { "code": null, "e": 19444, "s": 19228, "text": "If you reached this point, you have computed the CPTs based on the DAG and the input dataset df using Maximum Likelihood Estimation (MLE) (Figure 4). Note that the CPTs are included in Figure 4 for clarity purposes." }, { "code": null, "e": 19590, "s": 19444, "text": "Computing the CPTs using MLE is straightforward, let me demonstrate this by example by computing the CPTs manually for the nodes Cloudy and Rain." }, { "code": null, "e": 20114, "s": 19590, "text": "Note that conditional dependencies can be based on limited data points. Aa an example, P(Rain=1|Cloudy=0) is based on 91 observations. If Rain had more than two states and/or more dependencies, this number would have been even lower. Is more data the solution? Maybe. Maybe not. Just keep in mind that even if the total sample size is very large, the fact that state counts are conditionally for each parent’s configuration can also cause fragmentation. Check out the differences with the CPTs compared to the MLE approach." }, { "code": null, "e": 20547, "s": 20114, "text": "To make inferences, it requires the Bayesian network to have two main components: A Directed Acyclic Graph (DAG) that describes the structure of the data and Conditional Probability Tables (CPT) that describe the statistical relationship between each node and its parents. At this point you have the dataset, you computed the DAG using structure learning and estimated the CPTs using parameter learning. You can now make inferences!" }, { "code": null, "e": 21019, "s": 20547, "text": "With inferences, we marginalize variables in a procedure that is called variable elimination. 
Variable elimination is an exact inference algorithm. It can also be used to figure out the state of the network that has maximum probability by simply exchanging the sums by max functions. Its downside is that for large BNs it might be computationally intractable. Approximate inference algorithms such as Gibbs sampling or rejection sampling might be used in these cases [7]." }, { "code": null, "e": 21066, "s": 21019, "text": "With bnlearn we can make inferences as follow:" }, { "code": null, "e": 21112, "s": 21066, "text": "And now we have the answers to our questions:" }, { "code": null, "e": 21330, "s": 21112, "text": "How probable is it to have wet grass given the sprinkler is off? P(Wet_grass=1 | Sprinkler=0) = 0.51How probable is it have a rainy day given sprinkler is off and it is cloudy?P(Rain=1 | Sprinkler=0, Cloudy=1) = 0.663" }, { "code": null, "e": 21903, "s": 21330, "text": "If you solely used data to compute the causal diagram, it is hard to fully verify the validity and completeness of your causal diagram. However, some solutions can help to get more trust in the causal diagram. For example, it may be possible to empirically test certain conditional independence or dependence relationships between sets of variables. If they are not in the data, it is an indication of the correctness of the causal model [8]. Alternatively, prior expert knowledge can be added, such as a DAG or CPTs, to get more trust in the model when making inferences." }, { "code": null, "e": 22135, "s": 21903, "text": "In this article, I touched on a few concepts about why correlation or association is not causation and how to go from data towards a causal model using structure learning. A summary of the advantages of Bayesian techniques is that:" }, { "code": null, "e": 22734, "s": 22135, "text": "The outcome of posterior probability distributions, or the graph allows the user to make a judgment on the model predictions instead of having one single value as an outcome.The possibility to incorporate domain/expert knowledge in the DAG and reason with incomplete information and missing data. This is possible because Bayes theorem is built on updating the prior term with evidence.It has a notion of modularity.A complex system is built by combining simpler parts.Graph theory provides intuitively highly interacting sets of variables.Probability theory provides the glue to combine the parts." }, { "code": null, "e": 22909, "s": 22734, "text": "The outcome of posterior probability distributions, or the graph allows the user to make a judgment on the model predictions instead of having one single value as an outcome." }, { "code": null, "e": 23122, "s": 22909, "text": "The possibility to incorporate domain/expert knowledge in the DAG and reason with incomplete information and missing data. This is possible because Bayes theorem is built on updating the prior term with evidence." }, { "code": null, "e": 23153, "s": 23122, "text": "It has a notion of modularity." }, { "code": null, "e": 23207, "s": 23153, "text": "A complex system is built by combining simpler parts." }, { "code": null, "e": 23279, "s": 23207, "text": "Graph theory provides intuitively highly interacting sets of variables." }, { "code": null, "e": 23338, "s": 23279, "text": "Probability theory provides the glue to combine the parts." 
}, { "code": null, "e": 24213, "s": 23338, "text": "A weakness on the other hand of Bayesian networks is that finding the optimum DAG is computationally expensive since an exhaustive search over all the possible structures must be performed. The limit of nodes for exhaustive search can already be around 15 nodes but also depends on the number of states. If you have more nodes, alternative methods with a scoring function and search algorithm are required. Nevertheless, to deal with problems with hundreds or maybe even thousands of variables, a different approach, such as tree-based or constraint-based approaches is necessary with the use of black/whitelisting of variables. Such an approach first determines the order and then finds the optimal BN structure for that ordering. This implies working on the search space of the possible orderings, which is convenient as it is smaller than the space of network structures." }, { "code": null, "e": 24606, "s": 24213, "text": "Determining causality can be a challenging task but the bnlearn library is designed to tackle some of the challenges, such as Structure learning, Parameter learning, and Inferences. But it can also derive the topological ordering of the (entire) graph, or compare two graphs. Documentation can be found here that also contains the examples of the Alarm, Andes, Asia, Pathfinder, Sachs models." }, { "code": null, "e": 24628, "s": 24606, "text": "Be safe. Stay frosty." }, { "code": null, "e": 24639, "s": 24628, "text": "Cheers, E." }, { "code": null, "e": 24797, "s": 24639, "text": "If you found this article helpful, help support my content by signing up for a Medium membership using my referral link or follow me to access similar blogs." }, { "code": null, "e": 24819, "s": 24797, "text": "bnlearn documentation" }, { "code": null, "e": 24843, "s": 24819, "text": "Colab Notebook examples" }, { "code": null, "e": 24869, "s": 24843, "text": "Let’s connect on LinkedIn" }, { "code": null, "e": 24889, "s": 24869, "text": "Follow me on github" }, { "code": null, "e": 25821, "s": 24889, "text": "McLeod, S. A, Correlation definitions, examples & interpretation. Simply Psychology, 2018, January 14F. Dablander, An Introduction to Causal Inference, Department of Psychological Methods, University of Amsterdam, https://psyarxiv.com/b3fkwBrittany Davis, When Correlation is Better than Causation, Medium, 2021Paul Gingrich, Measures of association. Page 766–795Taskesen, E, Explore and understand your data with a network of significant associations. Aug. 2021, MediumBranislav Holländer, Introduction to Probabilistic Graphical Models, Medium, 2020Harini Padmanaban, Comparative Analysis of Naive Analysis of Naive Bayes and Tes and Tree Augmented Naive augmented Naive Bayes Models, San Jose State University, 2014Huszar. F, ML beyond Curve Fitting: An Intro to Causal Inference and do-CalculusE. Perrier et al, Finding Optimal Bayesian Network Given a Super-Structure, Journal of Machine Learning Research 9 (2008) 2251–2286." }, { "code": null, "e": 25923, "s": 25821, "text": "McLeod, S. A, Correlation definitions, examples & interpretation. Simply Psychology, 2018, January 14" }, { "code": null, "e": 26063, "s": 25923, "text": "F. 
Dablander, An Introduction to Causal Inference, Department of Psychological Methods, University of Amsterdam, https://psyarxiv.com/b3fkw" }, { "code": null, "e": 26135, "s": 26063, "text": "Brittany Davis, When Correlation is Better than Causation, Medium, 2021" }, { "code": null, "e": 26188, "s": 26135, "text": "Paul Gingrich, Measures of association. Page 766–795" }, { "code": null, "e": 26296, "s": 26188, "text": "Taskesen, E, Explore and understand your data with a network of significant associations. Aug. 2021, Medium" }, { "code": null, "e": 26379, "s": 26296, "text": "Branislav Holländer, Introduction to Probabilistic Graphical Models, Medium, 2020" }, { "code": null, "e": 26547, "s": 26379, "text": "Harini Padmanaban, Comparative Analysis of Naive Analysis of Naive Bayes and Tes and Tree Augmented Naive augmented Naive Bayes Models, San Jose State University, 2014" }, { "code": null, "e": 26628, "s": 26547, "text": "Huszar. F, ML beyond Curve Fitting: An Intro to Causal Inference and do-Calculus" } ]
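The fragments above mention inference queries made with the bnlearn Python package, but the corresponding code is not reproduced in this extract. The sketch below is only a rough illustration of how such a query is typically issued; the dataset name 'sprinkler' and the variable names (Cloudy, Sprinkler, Rain, Wet_Grass) are assumptions taken from bnlearn's bundled example and may differ in other versions.

import bnlearn as bn

# Load the bundled sprinkler DAG together with its CPTs
# (the dataset name is an assumption; check the bnlearn docs for your version).
model = bn.import_DAG('sprinkler')

# How probable is wet grass given the sprinkler is off?  P(Wet_Grass | Sprinkler = 0)
q1 = bn.inference.fit(model, variables=['Wet_Grass'], evidence={'Sprinkler': 0})

# How probable is rain given the sprinkler is off and it is cloudy?
q2 = bn.inference.fit(model, variables=['Rain'], evidence={'Sprinkler': 0, 'Cloudy': 1})

print(q1)
print(q2)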
Ticket Counter | Practice | GeeksforGeeks
N people standing in the queue of a movie ticket counter. It is a weird counter, distributes tickets to first K people and then last K people and again first K people and so on. The task is to find the last person to get the ticket. Example: Let N = 9, K = 3, starting queue will like {1, 2, 3, 4, 5, 6, 7, 8, 9}. After the first distribution queue will look like {4, 5, 6, 7, 8, 9}. And after the second distribution queue will look like {4, 5, 6}. The last person to get the ticket will be 6. Input: 1. The first line of the input contains a single integer T denoting the number of test cases. The description of T test cases follows. 2. The first line of each test case contains two space-separated integers N and K. Output: For each test case, print the last problem which remains Constraints: 1. 1 <= T <= 10 2. 1 <= K<= N <= 105 Example: Input: 2 9 3 25 7 Output: 6 15 0 msbritogabriella2 months ago Easy C++ solution using Deque 0 soumik8673 months ago //////Using only vector #include<iostream>#include<bits/stdc++.h>using namespace std;int main(){int t;cin>>t;while(t--){ int n,k; cin>>n>>k; vector<int> v(n); v[0]=1; if(n==1){ cout<<1<<endl; return 0; } for(int i=1;i<n;i++){ v[i]=v[i-1]+1; } int x=0,y=n-1,check=0,check1=0; while((y-x)>=k){ x+=k; if((y-x)<=k){ check=1; break; } y-=k; if((y-x)<=k){ check1=1; break; } } if(check==1) cout<<v[x]<<endl; if(check1==1) cout<<v[y]<<endl;}return 0;} 0 aroopghosh5533 months ago #include <bits/stdc++.h> using namespace std; int last(int n, int k){ deque<int> dq; for(int i = 1; i <= n; i++){ dq.push_back(i); } int i = 1; while(dq.size() > 1){ if(i % 2 != 0){ for(int j = 0; j < k; j++){ if(dq.size() == 1){ return dq.front(); } dq.pop_front(); } i++; } else{ for(int j = 0; j < k; j++){ if(dq.size() == 1){ return dq.front(); } dq.pop_back(); } i++; } } return dq.front(); } int main() { int t; cin >> t; while(t--){ int n, k; cin >> n >> k; cout << last(n, k) << "\n"; } return 0; } 0 avinav26113 months ago Easy C++ Solution using Set 0 kewatshubham993 months ago solution in 0.2/1.8 /*package whatever //do not write package name here */ import java.util.*; import java.lang.*; import java.io.*; class GFG { public static void main (String[] args) { Scanner scan = new Scanner(System.in); int test = scan.nextInt(); for(int k=0;k<test;k++){ int total = scan.nextInt(); int pass = scan.nextInt(); int i=0,j=total; while(j>i||i<j){ i+=pass; if(i>=j){ System.out.println(j);break; } j-=pass; if(j<=i){ System.out.println(i+1);break; } } }//code } } solution in 0.2/1.8 0 imranwahid6 months ago Easy C++ solution https://ide.geeksforgeeks.org/5Fo5gOqvC5 #include<bits/stdc++.h> using namespace std; int helper(int n,int k) { int start=1,end=n; bool front=true; while(end-start+1>k) { if(end-start+1>k) { start+=k; front=false; } else { break; } if(end-start+1>k) { end-=k; front=true; } else { break; } } return front?end:start; } int main() { int t; cin>>t; while(t--) { int n,k; cin>>n>>k; cout<<helper(n,k)<<endl; } return 0; } 0 syedabdullah2736 months ago #include <iostream>#include <deque>using namespace std; int main() {//code int T,N,K;deque<int>people;cin>>T;while(T--){ cin>>N; cin>>K; for(int i=1;i<=N;i++) people.push_back(i); while(people.size()>1) { for(int i=1;i<=K&&people.size()>1;i++) people.pop_front(); for(int i=1;i<=K&&people.size()>1;i++) people.pop_back(); } cout<<people.front()<<endl; people.pop_front(); } return 0;} 0 abhishekpanwar6976 months ago #include <bits/stdc++.h>using namespace std; int main() {int t;cin>>t;while(t--){int n,k;cin>>n>>k;deque<int>q;for(int 
i=1;i<=n;i++){ q.push_back(i);}int flag=1,last;while(!q.empty()){ if(flag==1) { int x=k; while(x&&!q.empty()) { last=q.front(); q.pop_front(); x--; } } else { int x=k; while(x&&!q.empty()) { last=q.back(); q.pop_back(); x--; } } flag^=1;}cout<<last<<endl;}return 0;} 0 M G8 months ago M G Without using a deque 0 Prasad8 months ago Prasad can we follow another approach aspect deque
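For readers who want to run the distribution scheme outside the judge, here is a small self-contained Python sketch of the same alternating front/back hand-out using collections.deque; the helper name last_ticket is made up for illustration and is not part of the expected submission interface.

from collections import deque

def last_ticket(n, k):
    # People 1..n waiting at the counter.
    q = deque(range(1, n + 1))
    from_front = True
    last = None
    while q:
        # Hand out up to k tickets from the current end, then switch ends.
        for _ in range(min(k, len(q))):
            last = q.popleft() if from_front else q.pop()
        from_front = not from_front
    return last

print(last_ticket(9, 3))    # 6, matching the worked example
print(last_ticket(25, 7))   # 15, matching the sample output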
Implementation of Non-Restoring Division Algorithm for Unsigned Integer - GeeksforGeeks
20 May, 2020 In the previous article, we have already discussed the Non-Restoring Division Algorithm. In this article, we will discuss the implementation of this algorithm. Non-restoring division algorithm is used to divide two unsigned integers. The other form of this algorithm is Restoring Division. This algorithm is different from the other algorithm because here, there is no concept of restoration and this algorithm is less complex than the restoring division algorithm. Let the dividend Q = 0110 and the divisor M = 0100. The following table demonstrates the step by step solution for the given values: Initial Values Step1:Left-Shift Step2:Left-Shift Step3:Left-Shift Step4:Left-Shift Approach: From the above solution, the idea is to observe that the number of steps required to compute the required quotient and remainder is equal to the number of bits in the dividend. Initially, let the dividend be Q and the divisor be M and the accumulator A = 0. Therefore: At each step, left shift the dividend by 1 position.Subtract the divisor from A (A – M).If the result is positive then the step is said to be “successful”. In this case, the quotient bit will be “1” and the restoration is NOT Required. So, the next step will also be subtraction.If the result is negative then the step is said to be “unsuccessful”. In this case, the quotient bit will be “0”. Here, the restoration is NOT performed like the restoration division algorithm. Instead, the next step will be ADDITION in place of subtraction.Repeat steps 1 to 4 for all bits of the Dividend. At each step, left shift the dividend by 1 position. Subtract the divisor from A (A – M). If the result is positive then the step is said to be “successful”. In this case, the quotient bit will be “1” and the restoration is NOT Required. So, the next step will also be subtraction. If the result is negative then the step is said to be “unsuccessful”. In this case, the quotient bit will be “0”. Here, the restoration is NOT performed like the restoration division algorithm. Instead, the next step will be ADDITION in place of subtraction. Repeat steps 1 to 4 for all bits of the Dividend. Below is the implementation of the above approach: # Python program to divide two # unsigned integers using # Non-Restoring Division Algorithm # Function to add two binary numbersdef add(A, M): carry = 0 Sum = '' # Iterating through the number # A. 
Here, it is assumed that # the length of both the numbers # is same for i in range (len(A)-1, -1, -1): # Adding the values at both # the indices along with the # carry temp = int(A[i]) + int(M[i]) + carry # If the binary number exceeds 1 if (temp>1): Sum += str(temp % 2) carry = 1 else: Sum += str(temp) carry = 0 # Returning the sum from # MSB to LSB return Sum[::-1] # Function to find the compliment# of the given binary numberdef compliment(m): M = '' # Iterating through the number for i in range (0, len(m)): # Computing the compliment M += str((int(m[i]) + 1) % 2) # Adding 1 to the computed # value M = add(M, '0001') return M # Function to find the quotient# and remainder using the # Non-Restoring Division Algorithmdef nonRestoringDivision(Q, M, A): # Computing the length of the # number count = len(M) comp_M = compliment(M) # Variable to determine whether # addition or subtraction has # to be computed for the next step flag = 'successful' # Printing the initial values # of the accumulator, dividend # and divisor print ('Initial Values: A:', A, ' Q:', Q, ' M:', M) # The number of steps is equal to the # length of the binary number while (count): # Printing the values at every step print ("\nstep:", len(M)-count + 1, end = '') # Step1: Left Shift, assigning LSB of Q # to MSB of A. print (' Left Shift and ', end = '') A = A[1:] + Q[0] # Choosing the addition # or subtraction based on the # result of the previous step if (flag == 'successful'): A = add(A, comp_M) print ('subtract: ') else: A = add(A, M) print ('Addition: ') print('A:', A, ' Q:', Q[1:]+'_', end ='') if (A[0] == '1'): # Step is unsuccessful and the # quotient bit will be '0' Q = Q[1:] + '0' print (' -Unsuccessful') flag = 'unsuccessful' print ('A:', A, ' Q:', Q, ' -Addition in next Step') else: # Step is successful and the quotient # bit will be '1' Q = Q[1:] + '1' print (' Successful') flag = 'successful' print ('A:', A, ' Q:', Q, ' -Subtraction in next step') count -= 1 print ('\nQuotient(Q):', Q, ' Remainder(A):', A) # Driver codeif __name__ == "__main__": dividend = '0111' divisor = '0101' accumulator = '0' * len(dividend) nonRestoringDivision(dividend, divisor, accumulator) Initial Values: A: 0000 Q: 0111 M: 0101 step: 1 Left Shift and subtract: A: 1011 Q: 111_ -Unsuccessful A: 1011 Q: 1110 -Addition in next Step step: 2 Left Shift and Addition: A: 1100 Q: 110_ -Unsuccessful A: 1100 Q: 1100 -Addition in next Step step: 3 Left Shift and Addition: A: 1110 Q: 100_ -Unsuccessful A: 1110 Q: 1000 -Addition in next Step step: 4 Left Shift and Addition: A: 0010 Q: 000_ Successful A: 0010 Q: 0001 -Subtraction in next step Quotient(Q): 0001 Remainder(A): 0010 Number Divisibility Algorithms Computer Organization & Architecture Mathematical Mathematical Algorithms Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. Comments Old Comments SDE SHEET - A Complete Guide for SDE Preparation DSA Sheet by Love Babbar Introduction to Algorithms Difference between Informed and Uninformed Search in AI How to write a Pseudo Code? Cache Memory in Computer Organization Program for Decimal to Binary Conversion Addressing Modes Architecture of 8086 Logical and Physical Address in Operating System
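As a quick sanity check on the traced run above (dividend 0111 and divisor 0101 giving quotient 0001 and remainder 0010), ordinary integer division produces the same pair; the snippet below is only a cross-check, not part of the non-restoring routine itself.

# Cross-check the bit-level result with ordinary integer arithmetic.
dividend, divisor = 0b0111, 0b0101        # 7 and 5
quotient, remainder = divmod(dividend, divisor)
assert (quotient, remainder) == (1, 2)    # 0001 and 0010 on four bits
print(format(quotient, '04b'), format(remainder, '04b'))   # prints: 0001 0010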
How to get default phone number in android?
This example demonstrate about How to get default phone number in android. Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project. Step 2 − Add the following code to res/layout/activity_main.xml. <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:gravity="center" android:layout_height="match_parent" tools:context=".MainActivity" android:orientation="vertical"> <TextView android:id="@+id/text" android:textSize="30sp" android:layout_width="wrap_content" android:layout_height="wrap_content" /> </LinearLayout> In the above code, we have taken a text view to show the phone number. Step 3 − Add the following code to java/MainActivity.xml package com.example.myapplication; import android.Manifest; import android.content.Context; import android.content.pm.PackageManager; import android.os.Bundle; import android.support.annotation.NonNull; import android.support.v4.app.ActivityCompat; import android.support.v7.app.AppCompatActivity; import android.telephony.TelephonyManager; import android.widget.TextView; import static android.Manifest.permission.READ_PHONE_NUMBERS; import static android.Manifest.permission.READ_PHONE_STATE; import static android.Manifest.permission.READ_SMS; public class MainActivity extends AppCompatActivity { private static final int PERMISSION_REQUEST_CODE = 100; TextView textView; TelephonyManager telephonyManager; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); textView = findViewById(R.id.text); telephonyManager = (TelephonyManager) this.getSystemService(Context.TELEPHONY_SERVICE); if (ActivityCompat.checkSelfPermission(this, READ_SMS) != PackageManager.PERMISSION_GRANTED && ActivityCompat.checkSelfPermission(this, READ_PHONE_NUMBERS) != PackageManager.PERMISSION_GRANTED && ActivityCompat.checkSelfPermission(this, READ_PHONE_STATE) != PackageManager.PERMISSION_GRANTED) { ActivityCompat.requestPermissions(this, new String[]{READ_SMS, READ_PHONE_NUMBERS, READ_PHONE_STATE}, PERMISSION_REQUEST_CODE); } else { textView.setText(telephonyManager.getLine1Number()); } } @Override public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) { super.onRequestPermissionsResult(requestCode, permissions, grantResults); switch (requestCode) { case PERMISSION_REQUEST_CODE: if (ActivityCompat.checkSelfPermission(this, Manifest.permission.READ_SMS) != PackageManager.PERMISSION_GRANTED && ActivityCompat.checkSelfPermission(this, Manifest.permission.READ_PHONE_NUMBERS) != PackageManager.PERMISSION_GRANTED && ActivityCompat.checkSelfPermission(this, Manifest.permission.READ_PHONE_STATE) != PackageManager.PERMISSION_GRANTED) { return; } else { textView.setText(telephonyManager.getLine1Number()); } } } } Step 3 − Add the following code toAndroidManifest.xml <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.example.myapplication"> <uses-permission android:name="android.permission.READ_PHONE_NUMBERS" /> <uses-permission android:name="android.permission.READ_PHONE_STATE" /> <application android:allowBackup="true" android:icon="@mipmap/ic_launcher" android:label="@string/app_name" 
android:roundIcon="@mipmap/ic_launcher_round" android:supportsRtl="true" android:theme="@style/AppTheme"> <activity android:name=".MainActivity"> <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> </application> </manifest> Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen – Click here to download the project code
JavaScript DOM Elements
This page teaches you how to find and access HTML elements in an HTML page. Often, with JavaScript, you want to manipulate HTML elements. To do so, you have to find the elements first. There are several ways to do this: Finding HTML elements by id Finding HTML elements by tag name Finding HTML elements by class name Finding HTML elements by CSS selectors Finding HTML elements by HTML object collections The easiest way to find an HTML element in the DOM, is by using the element id. This example finds the element with id="intro": If the element is found, the method will return the element as an object (in element). If the element is not found, element will contain null. This example finds all <p> elements: This example finds the element with id="main", and then finds all <p> elements inside "main": If you want to find all HTML elements with the same class name, use getElementsByClassName(). This example returns a list of all elements with class="intro". If you want to find all HTML elements that match a specified CSS selector (id, class names, types, attributes, values of attributes, etc), use the querySelectorAll() method. This example returns a list of all <p> elements with class="intro". This example finds the form element with id="frm1", in the forms collection, and displays all element values: The following HTML objects (and object collections) are also accessible: document.anchors document.body document.documentElement document.embeds document.forms document.head document.images document.links document.scripts document.title Use the getElementById method to find the <p> element, and change its text to "Hello". <p id="demo"></p> <script> = "Hello"; </script>
Python | Pandas Series.sort_index() - GeeksforGeeks
05 Feb, 2019 Pandas series is a One-dimensional ndarray with axis labels. The labels need not be unique but must be a hashable type. The object supports both integer- and label-based indexing and provides a host of methods for performing operations involving the index. Pandas Series.sort_index() function is used to sort the index labels of the given series object. Syntax: Series.sort_index(axis=0, level=None, ascending=True, inplace=False, kind=’quicksort’, na_position=’last’, sort_remaining=True) Parameter :axis : Axis to direct sorting. This can only be 0 for Series.level : If not None, sort on values in specified index level(s).ascending : Sort ascending vs. descending.inplace : If True, perform operation in-place.kind : Choice of sorting algorithm.na_position : Argument ‘first’ puts NaNs at the beginning, ‘last’ puts NaNs at the end.sort_remaining : If true and sorting by level and index is multilevel, sort by other levels too (in order) after sorting by specified level. Returns : Series Example #1: Use Series.sort_index() function to sort the index labels of the given series object. # importing pandas as pdimport pandas as pd # Creating the Seriessr = pd.Series(['New York', 'Chicago', 'Toronto', 'Lisbon', 'Rio', 'Moscow']) # Create the Indexindex_ = ['City 5', 'City 6', 'City 4', 'City 2', 'City 3', 'City 1'] # set the indexsr.index = index_ # Print the seriesprint(sr) Output : Now we will use Series.sort_index() function to sort the index labels of the given series object. # sort the index labelssr.sort_index() Output : As we can see in the output, the Series.sort_index() function has successfully sorted the index labels of the given series object. Example #2: Use Series.sort_index() function to sort the index labels of the given series object. # importing pandas as pdimport pandas as pd # Creating the Seriessr = pd.Series([19.5, 16.8, 22.78, 20.124, 18.1002]) # Create the Indexindex_ = [5, 3, 2, 1, 4] # set the indexsr.index = index_ # Print the seriesprint(sr) Output : Now we will use Series.sort_index() function to sort the index labels of the given series object. # sort the index labelssr.sort_index() Output : As we can see in the output, the Series.sort_index() function has successfully sorted the index labels of the given series object.
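The two examples above only exercise the default arguments. As a further illustration (not part of the original article), the short sketch below reuses the made-up Series from Example #2 and shows the ascending and inplace arguments from the documented signature.

import pandas as pd

# Same illustrative data as Example #2
sr = pd.Series([19.5, 16.8, 22.78, 20.124, 18.1002])
sr.index = [5, 3, 2, 1, 4]

# Sort the index labels in descending order
print(sr.sort_index(ascending=False))

# Sort in place instead of returning a new Series
sr.sort_index(inplace=True)
print(sr)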
Ruby - Modules and Mixins
Modules are a way of grouping together methods, classes, and constants. Modules give you two major benefits. Modules provide a namespace and prevent name clashes. Modules provide a namespace and prevent name clashes. Modules implement the mixin facility. Modules implement the mixin facility. Modules define a namespace, a sandbox in which your methods and constants can play without having to worry about being stepped on by other methods and constants. module Identifier statement1 statement2 ........... end Module constants are named just like class constants, with an initial uppercase letter. The method definitions look similar, too: Module methods are defined just like class methods. As with class methods, you call a module method by preceding its name with the module's name and a period, and you reference a constant using the module name and two colons. #!/usr/bin/ruby # Module defined in trig.rb file module Trig PI = 3.141592654 def Trig.sin(x) # .. end def Trig.cos(x) # .. end end We can define one more module with the same function name but different functionality − #!/usr/bin/ruby # Module defined in moral.rb file module Moral VERY_BAD = 0 BAD = 1 def Moral.sin(badness) # ... end end Like class methods, whenever you define a method in a module, you specify the module name followed by a dot and then the method name. The require statement is similar to the include statement of C and C++ and the import statement of Java. If a third program wants to use any defined module, it can simply load the module files using the Ruby require statement − require filename Here, it is not required to give .rb extension along with a file name. $LOAD_PATH << '.' require 'trig.rb' require 'moral' y = Trig.sin(Trig::PI/4) wrongdoing = Moral.sin(Moral::VERY_BAD) Here we are using $LOAD_PATH << '.' to make Ruby aware that included files must be searched in the current directory. If you do not want to use $LOAD_PATH then you can use require_relative to include files from a relative directory. IMPORTANT − Here, both the files contain the same function name. So, this will result in code ambiguity while including in calling program but modules avoid this code ambiguity and we are able to call appropriate function using module name. You can embed a module in a class. To embed a module in a class, you use the include statement in the class − include modulename If a module is defined in a separate file, then it is required to include that file using require statement before embedding module in a class. Consider the following module written in support.rb file. module Week FIRST_DAY = "Sunday" def Week.weeks_in_month puts "You have four weeks in a month" end def Week.weeks_in_year puts "You have 52 weeks in a year" end end Now, you can include this module in a class as follows − #!/usr/bin/ruby $LOAD_PATH << '.' require "support" class Decade include Week no_of_yrs = 10 def no_of_months puts Week::FIRST_DAY number = 10*12 puts number end end d1 = Decade.new puts Week::FIRST_DAY Week.weeks_in_month Week.weeks_in_year d1.no_of_months This will produce the following result − Sunday You have four weeks in a month You have 52 weeks in a year Sunday 120 Before going through this section, we assume you have the knowledge of Object Oriented Concepts. When a class can inherit features from more than one parent class, the class is supposed to show multiple inheritance. Ruby does not support multiple inheritance directly but Ruby Modules have another wonderful use. 
At a stroke, they pretty much eliminate the need for multiple inheritance, providing a facility called a mixin. Mixins give you a wonderfully controlled way of adding functionality to classes. However, their true power comes out when the code in the mixin starts to interact with code in the class that uses it. Let us examine the following sample code to gain an understanding of mixins − module A def a1 end def a2 end end module B def b1 end def b2 end end class Sample include A include B def s1 end end samp = Sample.new samp.a1 samp.a2 samp.b1 samp.b2 samp.s1 Module A consists of the methods a1 and a2. Module B consists of the methods b1 and b2. The class Sample includes both modules A and B. The class Sample can access all four methods, namely, a1, a2, b1, and b2. Therefore, you can see that the class Sample inherits from both the modules. Thus, you can say the class Sample shows multiple inheritance or a mixin.
How to Create Your First Machine Learning Model | by Rebecca Vickery | Towards Data Science
Many machine learning tutorials focus on specific elements of the machine learning workflow such as data cleaning, model training or algorithm optimisation. However, if you are very new to machine learning it can be difficult to fully grasp the basic end to end workflow without a complete simple explanation or walkthrough.

In the following post, I am going to provide a very simple tutorial for developing a supervised machine learning model in python. In this post I am going to assume only a very basic knowledge of python programming and that you have a few of the common data science libraries already installed. If you need an introduction to python for data science first, Codeacademy is a good place to start. I will also provide links to the documentation for all libraries that I include in the tutorial.

Machine learning is the ability of a computer to be able to learn the mapping between some inputs (data features) and some known outputs (data labels) without being explicitly programmed. The goal is that, given new inputs with unknown outputs, the machine can correctly predict the labels.

“Machine learning is a thing-labeler, essentially.”, Cassie Kozyrkov

This mapping of inputs to outputs is performed by mathematical functions, primarily from the areas of linear algebra and calculus, that are performed at a speed and scale that could not be achieved without large amounts of computing power. There are many different algorithms that can accomplish this task from simple regression-based methods to more complex deep learning techniques.

Fortunately, the python programming language has a very active community of open source developers who have built a number of libraries that abstract away the need to directly code these algorithms. One of the core machine learning libraries is scikit-learn which is highly accessible for the beginner. In this tutorial, I am going to be focussing on this library.

For simplicity, I will be using machine learning to solve a classification problem. I will use a toy data set from the scikit-learn API which consists of a number of attributes of wine (the features) such as the alcohol volume and colour, for three types of wine (the labels or target).

To load the data and convert it to a pandas data frame use the code shown below.

from sklearn.datasets import load_wine
import pandas as pd
wine = load_wine()
data = pd.DataFrame(data=wine['data'], columns = wine['feature_names'])
data['target'] = wine['target']
data.head()

As machine learning algorithms are based on mathematics the data for machine learning must be numeric in nature. This data set is entirely numeric, however, if we had categorical features we would need to perform some preprocessing to convert them to numeric first. I have chosen this data set so that I can show the simplest workflow to develop a model without introducing more advanced concepts around data preprocessing.

When developing a machine learning model it is important to be able to evaluate how well it is able to map inputs to outputs and make accurate predictions. However, if you use data that the model has already seen (during training for example) to evaluate the performance then you will not be able to detect such problems as overfitting.

Overfitting is when a model has learned either too much detail or noise in the training data which won’t necessarily exist in unseen data. In this case, the model will appear to perform well on the training data but will perform poorly on unseen data.
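To make the idea concrete, here is a small, self-contained sketch (my illustration, not part of the original article) that deliberately overfits a decision tree on noisy data and then compares training and test accuracy; the large gap between the two scores is the symptom of overfitting:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# small, noisy dataset so an unconstrained tree can memorise the training samples
X, y = make_classification(n_samples=200, n_features=20, flip_y=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print("train accuracy:", round(tree.score(X_tr, y_tr), 2))  # close to 1.0
print("test accuracy:", round(tree.score(X_te, y_te), 2))   # noticeably lower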
This is often referred to as the model not generalising well.

It is a standard in machine learning to first split your training data into a set for training and a set for testing. There is no rule as to the exact size split to make but it is sensible to reserve a larger sample for training — a typical split is 80% training and 20% testing data.

It is important that the data is also randomly split so that you are getting a good representation of the patterns that exist in the data in both sets. Scikit-learn has a tool that performs this process in a single line of code, known as train_test_split. The code below passes the features and the target data to the function and specifies a test size of 20%.

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(data.drop('target', axis=1), data['target'], test_size=0.20, random_state=0)

It is a good idea to next train a dummy classifier to get a baseline score to benchmark further iterations of model development. Scikit-learn has a function that allows you to train a model and make predictions based on simple rules, such as predicting at random. This is useful for evaluating whether your model development is improving as you iterate through the next steps.

In the code below I have trained a dummy classifier that always predicts the most frequently occurring class. The dummy classifier, however, has a number of different methods for this. The accuracy score is 0.44. A perfect accuracy score would be 1.0. Now, moving through the next stages, we will know if we are making an improvement on this baseline.

from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score
baseline = DummyClassifier(strategy='most_frequent', random_state=0).fit(X_train, y_train)
y_pred = baseline.predict(X_test)
print(round(accuracy_score(y_test, y_pred),4))

Now that we have a baseline model trained we need to evaluate if there is another algorithm that may perform better on our data. Scikit-learn has this useful cheat sheet that will give you an idea of the different algorithms available to solve a classification problem.

The below code loops through a selection of classification algorithms and prints the resulting score. The output is shown below the code.

from sklearn.metrics import accuracy_score, log_loss
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC, LinearSVC, NuSVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, GradientBoostingClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
classifiers = [
    KNeighborsClassifier(3),
    SVC(kernel="rbf", C=0.025, probability=True),
    NuSVC(probability=True),
    DecisionTreeClassifier(),
    RandomForestClassifier(),
    AdaBoostClassifier(),
    GradientBoostingClassifier()
    ]
for classifier in classifiers:
    model = classifier.fit(X_train, y_train)
    print(classifier)
    print("model score: %.3f" % model.score(X_test, y_test))

We can see that the RandomForestClassifier performs best for this model so we will select this algorithm to train our final model.

Each machine learning algorithm has a wide number of parameters that are used to control the learning process. These parameters can be changed and depending on the data set can result in an increase in performance for the model. The process of finding the best set of parameters for an algorithm and data set is known as hyperparameter optimisation.
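Before moving on, one aside about the comparison loop above (my addition, not part of the original walkthrough): a single train/test split can give a slightly different ranking of algorithms each time the data is shuffled, so it can be worth confirming the comparison with k-fold cross-validation. A minimal, self-contained sketch on the same wine data:

from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_wine(return_X_y=True)

# 5-fold cross-validation: fit and score the model on 5 different splits
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
print("fold accuracies:", scores.round(3))
print("mean accuracy:", round(scores.mean(), 3))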
We know from the code we ran in the last section that the random forest classifier performed best on the data set we are using. If you look at the documentation for this model you will see that there are many parameters that can be tuned for this algorithm.

A common method to use for hyperparameter optimisation is known as grid search. Scikit-learn provides a function to perform this called GridSearchCV. We need to pass this function a grid in the form of a python dictionary containing the parameter names and the corresponding list of parameters. This then becomes the parameter space in which the function will search.

The function will then build a model for every combination of parameters for the given classifier. Once this has been performed you will be able to access the results in the form of the best model and the best parameter combination.

For simplicity, in the example below I have chosen to tune only four parameters. They are as follows:
n_estimators: the number of trees in the model.
max_depth: the maximum depth of the tree.
min_samples_split: the minimum number of data points in a node before the node is split.
min_samples_leaf: the minimum number of data points in each leaf.

The code below performs hyperparameter tuning and prints both the best model score and parameter combination.

from sklearn.model_selection import GridSearchCV
n_estimators = [100, 300, 500, 800, 1200]
max_depth = [5, 8, 15, 25, 30]
min_samples_split = [2, 5, 10, 15, 100]
min_samples_leaf = [1, 2, 5, 10]
param_grid = dict(n_estimators = n_estimators, max_depth = max_depth, min_samples_split = min_samples_split, min_samples_leaf = min_samples_leaf)
rf = RandomForestClassifier()
grid_search = GridSearchCV(estimator=rf, param_grid=param_grid)
best_model = grid_search.fit(X_train, y_train)
print(round(best_model.score(X_test, y_test),2))
print(best_model.best_params_)

Finally, we use the best model to predict labels on the test set and print a classification report to evaluate its performance in detail. We can see that overall the performance has vastly improved from the baseline model.

from sklearn.metrics import classification_report
y_pred_best = best_model.predict(X_test)
print(classification_report(y_test, y_pred_best))

In this article, I have demonstrated the simplest workflow required to develop a machine learning model. There are many more steps typically involved in developing a model, in particular when you are using a real-world data set. These include data cleaning, feature engineering and cross-validation amongst many other possible steps. Once you have grasped the basic steps in this post you can move on to learning about the other elements involved in machine learning.

In this article I only covered classification but for a brief overview of other types of machine learning see — Beginners Guide to the Three Types of Machine Learning.

If you are interested in learning data science see my list of completely free resources for learning data science here— How to Learn Data Science for Free.

Thanks for reading!
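As an editorial addendum (my addition, not covered in the article above): once a model has been tuned, it is common to persist it to disk so it can be reused without retraining. A minimal, self-contained sketch using joblib, a common choice for scikit-learn estimators:

from joblib import dump, load
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier

X, y = load_wine(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# save the fitted model and load it back later (e.g. in a prediction service)
dump(model, "wine_model.joblib")
restored = load("wine_model.joblib")
print(restored.predict(X[:5]))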
[ { "code": null, "e": 497, "s": 172, "text": "Many machine learning tutorials focus on specific elements of the machine learning workflow such as data cleaning, model training or algorithm optimisation. However, if you are very new to machine learning it can be difficult to fully grasp the basic end to end workflow without a complete simple explanation or walkthrough." }, { "code": null, "e": 988, "s": 497, "text": "In the following post, I am going to provide a very simple tutorial for developing a supervised machine learning model in python. In this post I am going to assume only a very basic knowledge of python programming and that you have a few of the common data science libraries already installed. If you need an introduction to python for data science first, Codeacademy is a good place to start. I will also provide links to the documentation for all libraries that I include in the tutorial." }, { "code": null, "e": 1279, "s": 988, "text": "Machine learning is the ability of a computer to be able to learn the mapping between some inputs (data features) and some known outputs (data labels) without being explicitly programmed. The goal is that, given new inputs with unknown outputs, the machine can correctly predict the labels." }, { "code": null, "e": 1348, "s": 1279, "text": "“Machine learning is a thing-labeler, essentially.”, Cassie Kozyrkov" }, { "code": null, "e": 1733, "s": 1348, "text": "This mapping of inputs to outputs is performed by mathematical functions, primarily from the areas of linear algebra and calculus, that are performed at a speed and scale that could not be achieved without large amounts of computing power. There are many different algorithms that can accomplish this task from simple regression-based methods to more complex deep learning techniques." }, { "code": null, "e": 2098, "s": 1733, "text": "Fortunately, the python programming language has a very active community of open source developers who have built a number of libraries that abstract away the need to directly code these algorithms. One of the core machine learning libraries is scikit-learn which is highly accessible for the beginner. In this tutorial, I am going to be focussing on this library." }, { "code": null, "e": 2385, "s": 2098, "text": "For simplicity, I will be using machine learning to solve a classification problem. I will use a toy data set from the scikit-learn API which consists of a number of attributes of wine (the features) such as the alcohol volume and colour, for three types of wine (the labels or target)." }, { "code": null, "e": 2466, "s": 2385, "text": "To load the data and convert it to a pandas data frame use the code shown below." }, { "code": null, "e": 2655, "s": 2466, "text": "from sklearn.datasets import load_wineimport pandas as pdwine = load_wine()data = pd.DataFrame(data=wine['data'], columns = wine['feature_names'])data['target'] = wine['target']data.head()" }, { "code": null, "e": 3079, "s": 2655, "text": "As machine learning algorithms are based on mathematics the data for machine learning must be numeric in nature. This data set is entirely numeric, however, if we had categorical features we would need to perform some preprocessing to convert them to numeric first. I have chosen this data set so that I can show the simplest workflow to develop a model without introducing more advanced concepts around data preprocessing." 
}, { "code": null, "e": 3416, "s": 3079, "text": "When developing a machine learning model it is important to be able to evaluate how well it is able to map inputs to outputs and make accurate predictions. However, if you use data that the model has already seen (during training for example) to evaluate the performance then you will not be able to detect such problems as overfitting." }, { "code": null, "e": 3730, "s": 3416, "text": "Overfitting is when a model has learned either too much detail or noise in the training data which won’t necessarily exist in unseen data. In this case, the model will appear to perform well on the training data but will perform poorly on unseen data. This is often referred to as the model not generalising well." }, { "code": null, "e": 4015, "s": 3730, "text": "It is a standard in machine learning to first split your training data into a set for training and a set for testing. There is no rule as to the exact size split to make but it is sensible to reserve a larger sample for training — a typical split is 80% training and 20% testing data." }, { "code": null, "e": 4376, "s": 4015, "text": "It is important that the data is also randomly split so that you are getting a good representation of the patterns that exist in the data in both sets. Scikit-learn has a tool that performs this process in a single line of code, known as test_train_split. The code below passes the features and the target data to the function and specifies a test size of 20%." }, { "code": null, "e": 4557, "s": 4376, "text": "from sklearn.model_selection import train_test_splitX_train, X_test, y_train, y_test = train_test_split(data.drop('target', axis=1), data['target'], test_size=0.20, random_state=0)" }, { "code": null, "e": 4928, "s": 4557, "text": "It is a good idea to next train a dummy classifier to get a baseline score to benchmark further iterations of model development. Scikit-learn has a function that allows you to train a model and make predictions based on simple rules, such as predicting at random. This is useful to evaluate that your model development is improving as you iterate through the next steps." }, { "code": null, "e": 5278, "s": 4928, "text": "In the code below I have trained a dummy classifier that always predicts the most frequently occurring class. The dummy classifier, however, has a number of different methods for this. The accuracy score is 0.44. A perfect accuracy score would be 1.0. Now moving through the next stages we will know if we are making an improvement on this baseline." }, { "code": null, "e": 5531, "s": 5278, "text": "from sklearn.dummy import DummyClassifierfrom sklearn.metrics import accuracy_scorebaseline = DummyClassifier(strategy='most_frequent', random_state=0).fit(X_train, y_train)y_pred = baseline.predict(X_test)print(round(accuracy_score(y_test, y_pred),4))" }, { "code": null, "e": 5801, "s": 5531, "text": "Now that we have a baseline model trained we need to evaluate if there is another algorithm that may perform better on our data. Scikit-learn has this useful cheat sheet that will give you an idea of the different algorithms available to solve a classification problem." }, { "code": null, "e": 5939, "s": 5801, "text": "The below code loops through a selection of classification algorithms and prints the resulting score. The output is shown below the code." 
}, { "code": null, "e": 6767, "s": 5939, "text": "from sklearn.metrics import accuracy_score, log_lossfrom sklearn.neighbors import KNeighborsClassifierfrom sklearn.svm import SVC, LinearSVC, NuSVCfrom sklearn.tree import DecisionTreeClassifierfrom sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, GradientBoostingClassifierfrom sklearn.discriminant_analysis import LinearDiscriminantAnalysisfrom sklearn.discriminant_analysis import QuadraticDiscriminantAnalysisclassifiers = [ KNeighborsClassifier(3), SVC(kernel=\"rbf\", C=0.025, probability=True), NuSVC(probability=True), DecisionTreeClassifier(), RandomForestClassifier(), AdaBoostClassifier(), GradientBoostingClassifier() ]for classifier in classifiers: model = classifier.fit(X_train, y_train) print(classifier) print(\"model score: %.3f\" % model.score(X_test, y_test))" }, { "code": null, "e": 6898, "s": 6767, "text": "We can see that the RandomForestClassifier performs best for this model so we will select this algorithm to train our final model." }, { "code": null, "e": 7248, "s": 6898, "text": "Each machine learning algorithm has a wide number of parameters that are used to control the learning process. These parameters can be changed and depending on the data set can result in an increase in performance for the model. The process of finding the best set of parameters for an algorithm and data set is known as hyperparameter optimisation." }, { "code": null, "e": 7506, "s": 7248, "text": "We know from the code we ran in the last section that the random forest classifier performed best on the data set we are using. If you look at the documentation for this model you will see that there are many parameters that can be tuned for this algorithm." }, { "code": null, "e": 7874, "s": 7506, "text": "A common method to use for hyperparameter optimisation is known as grid search. Scikit-learn provides a function to perform this called GridSearchCV. We need to pass this function a grid in the form of a python dictionary containing the parameter names and the corresponding list of parameters. This then becomes the parameter space in which the function will search." }, { "code": null, "e": 8107, "s": 7874, "text": "The function will then build a model for every combination of parameters for the given classifier. Once this has been performed you will be able to access the results in the form of the best model and the best parameter combination." }, { "code": null, "e": 8209, "s": 8107, "text": "For simplicity, in the example below I have chosen to tune only four parameters. They are as follows:" }, { "code": null, "e": 8257, "s": 8209, "text": "n_estimators: the number of trees in the model." }, { "code": null, "e": 8299, "s": 8257, "text": "max_depth: the maximum depth of the tree." }, { "code": null, "e": 8388, "s": 8299, "text": "min_samples_split: the minimum number of data points in a node before the node is split." }, { "code": null, "e": 8454, "s": 8388, "text": "min_samples_leaf: the minimum amount of data points in each leaf." }, { "code": null, "e": 8564, "s": 8454, "text": "The code below performs hyperparameter tuning and prints both the best model score and parameter combination." 
}, { "code": null, "e": 9144, "s": 8564, "text": "from sklearn.model_selection import GridSearchCVn_estimators = [100, 300, 500, 800, 1200]max_depth = [5, 8, 15, 25, 30]min_samples_split = [2, 5, 10, 15, 100]min_samples_leaf = [1, 2, 5, 10]param_grid = dict(n_estimators = n_estimators, max_depth = max_depth, min_samples_split = min_samples_split, min_samples_leaf = min_samples_leaf)rf = RandomForestClassifier()grid_search = GridSearchCV(estimator=rf, param_grid=param_grid)best_model = grid_search.fit(X_train, y_train)print(round(best_model.score(X_test, y_test),2))print(best_model.best_params_)" }, { "code": null, "e": 9367, "s": 9144, "text": "Finally, we use the best model to predict labels on the test set and print a classification report to evaluate its performance in detail. We can see that overall the performance has vastly improved from the baseline model." }, { "code": null, "e": 9506, "s": 9367, "text": "from sklearn.metrics import classification_reporty_pred_best = best_model.predict(X_test)print(classification_report(y_test, y_pred_best))" }, { "code": null, "e": 9973, "s": 9506, "text": "In this article, I have demonstrated the simplest workflow required to develop a machine learning model. There are many more steps typically involved in developing a model, in particular when you are using a real-world data set. These include data cleaning, feature engineering and cross-validation amongst many other possible steps. Once you have grasped the basic steps in this post you can move onto learning about the other elements involved in machine learning." }, { "code": null, "e": 10141, "s": 9973, "text": "In this article I only covered classification but for a brief overview of other types of machine learning see — Beginners Guide to the Three Types of Machine Learning." }, { "code": null, "e": 10297, "s": 10141, "text": "If you are interested in learning data science see my list of completely free resources for learning data science here— How to Learn Data Science for Free." } ]
Automatic Vision Object Tracking. A pan/tilt servo device helping a... | by Marcelo Rovai | Towards Data Science
A pan/tilt servo device helping a camera to automatically track color objects using vision.

In a previous tutorial, we explored how to control a Pan/Tilt Servo device in order to position a PiCam. Now we will use our device to help the camera to automatically track color objects as you can see below:

This is my first experience with OpenCV and I must confess, I am in love with this fantastic “Open Source Computer Vision Library”.

OpenCV is free for both academic and commercial use. It has C++, C, Python and Java interfaces and supports Windows, Linux, Mac OS, iOS and Android. In my series of OpenCV tutorials, we will be focusing on Raspberry Pi (so, Raspbian as OS) and Python. OpenCV was designed for computational efficiency and with a strong focus on real-time applications. So, it’s perfect for physical computing projects!

I am using a Raspberry Pi V3 updated to the latest version of Raspbian (Stretch), so the best way to have OpenCV installed is to follow the excellent tutorial developed by Adrian Rosebrock: Raspbian Stretch: Install OpenCV 3 + Python on your Raspberry Pi.

I tried several different guides to install OpenCV on my Pi. Adrian’s tutorial is the best. I advise you to do the same, following his guideline step-by-step.

Once you have finished Adrian’s tutorial, you should have an OpenCV virtual environment ready to run our experiments on your Pi.

Let’s go to our virtual environment and confirm that OpenCV 3 is correctly installed.

Adrian recommends running the command “source” each time you open up a new terminal to ensure your system variables have been set up correctly.

source ~/.profile

Next, let’s enter our virtual environment:

workon cv

If you see the text (cv) preceding your prompt, then you are in the cv virtual environment:

(cv) pi@raspberry:~$

Adrian points out that the cv Python virtual environment is entirely independent and sequestered from the default Python version included in the download of Raspbian Stretch. So, any Python packages in the global site-packages directory will not be available to the cv virtual environment. Similarly, any Python packages installed in site-packages of cv will not be available to the global install of Python.

Now, enter your Python interpreter:

python

and confirm that you are running the 3.5 (or above) version.

Inside the interpreter (the “>>>” will appear), import the OpenCV library:

import cv2

If no error messages appear, OpenCV is correctly installed ON YOUR PYTHON VIRTUAL ENVIRONMENT. You can also check the OpenCV version installed:

cv2.__version__

Version 3.3.0 (or a later release) should appear. The above Terminal PrintScreen shows the previous steps.

Once you have OpenCV installed on your RPi, let’s test if your camera is working properly.

I am assuming that you have a PiCam already installed on your Raspberry Pi.

Enter the below Python code in your IDE:

import numpy as np
import cv2
cap = cv2.VideoCapture(0)
while(True):
    ret, frame = cap.read()
    frame = cv2.flip(frame, -1) # Flip camera vertically
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow('frame', frame)
    cv2.imshow('gray', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()

The above code will capture the video stream that will be generated by your PiCam, displaying it both in BGR color and in gray mode.

Note that I rotated my camera vertically due to the way it is assembled. If that is not your case, comment or delete the “flip” command line.
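If the window stays black or the script exits immediately, the capture device may not have opened at all. A small defensive variation (my addition, assuming the same PiCam setup as above) checks this before entering the loop:

import cv2

cap = cv2.VideoCapture(0)
if not cap.isOpened():
    # on a Raspberry Pi this usually means the camera is disabled or busy
    raise RuntimeError("Could not open video device 0")

ret, frame = cap.read()
if ret:
    print("Captured a frame of size:", frame.shape)
cap.release()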
You can alternatively download the code from my GitHub: simpleCamTest.py

To execute, enter the command:

python simpleCamTest.py

To finish the program, you must press the key [q] or [Ctrl] + [C] on your keyboard. The picture shows the result.

To know more about OpenCV, you can follow the tutorial: loading-video-python-opencv-tutorial

One thing that we will try to accomplish will be the detection and tracking of a certain color object. For that, we must understand a little bit more about how OpenCV interprets colors.

Henri Dang wrote a great tutorial about Color Detection in Python with OpenCV.

Usually, our camera will work with RGB color mode, which can be understood by thinking of it as all possible colors that can be made from three colored lights for red, green, and blue. We will work here with BGR (Blue, Green, Red) instead.

As described above, with BGR, a pixel is represented by 3 parameters, blue, green, and red. Each parameter usually has a value from 0–255 (or 0 to FF in hexadecimal). For example, a pure blue pixel on your computer screen would have a B value of 255, a G value of 0, and an R value of 0.

OpenCV works with the HSV (Hue, Saturation, Value) color model, which is an alternative representation of the RGB color model, designed in the 1970s by computer graphics researchers to more closely align with the way human vision perceives color-making attributes:

Great. So, if you want to track a certain color using OpenCV, you must define it using the HSV Model.

Let’s say that I must track a yellow object such as the plastic box shown in the above picture. The easy part is to find its BGR elements. You can use any design program to find it (I used PowerPoint).

In my case I found:

Blue: 71
Green: 234
Red: 213

Next, we must convert the BGR (71, 234, 213) model to an HSV model, which will be defined with upper and lower range boundaries. For that, let’s run the below code:

import sys
import numpy as np
import cv2
blue = sys.argv[1]
green = sys.argv[2]
red = sys.argv[3]
color = np.uint8([[[blue, green, red]]])
hsv_color = cv2.cvtColor(color, cv2.COLOR_BGR2HSV)
hue = hsv_color[0][0][0]
print("Lower bound is :"),
print("[" + str(hue-10) + ", 100, 100]\n")
print("Upper bound is :"),
print("[" + str(hue + 10) + ", 255, 255]")

You can alternatively download the code from my GitHub: bgr_hsv_converter.py

To execute, enter the below command having as parameters the BGR values found before:

python bgr_hsv_converter.py 71 234 213

The program will print the upper and lower boundaries of our object color.

In this case:

lower bound: [24, 100, 100]

and

upper bound: [44, 255, 255]

The Terminal PrintScreen shows the result.
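One caveat worth adding here (my note, not from the original text): the hue ± 10 recipe works nicely for yellow because its hue sits comfortably inside the 0–179 range that OpenCV uses for 8-bit images. For colors near red the lower bound can go negative, and a common workaround is to build two masks and combine them, roughly like this sketch:

import cv2
import numpy as np

# replace 'some_image.jpg' with your own test image (hypothetical placeholder)
hsv = cv2.cvtColor(cv2.imread('some_image.jpg'), cv2.COLOR_BGR2HSV)

# a red-ish target whose hue range wraps around 0 needs two ranges
mask_low = cv2.inRange(hsv, np.array([0, 100, 100]), np.array([10, 255, 255]))
mask_high = cv2.inRange(hsv, np.array([170, 100, 100]), np.array([179, 255, 255]))
mask = cv2.bitwise_or(mask_low, mask_high)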
Last, but not least, let’s see how OpenCV can “mask” our object once we have determined its color:

import cv2
import numpy as np
# Read the picture - The 1 means we want the image in BGR
img = cv2.imread('yellow_object.JPG', 1)
# resize image to 20% in each axis
img = cv2.resize(img, (0,0), fx=0.2, fy=0.2)
# convert BGR image to a HSV image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# NumPy to create arrays to hold lower and upper range
# The “dtype = np.uint8” means that data type is an 8 bit integer
lower_range = np.array([24, 100, 100], dtype=np.uint8)
upper_range = np.array([44, 255, 255], dtype=np.uint8)
# create a mask for image
mask = cv2.inRange(hsv, lower_range, upper_range)
# display both the mask and the image side-by-side
cv2.imshow('mask',mask)
cv2.imshow('image', img)
# wait for the user to press [ ESC ]
while(1):
    k = cv2.waitKey(0)
    if(k == 27):
        break
cv2.destroyAllWindows()

You can alternatively download the code from my GitHub: colorDetection.py

To execute, enter the below command, having in your directory a photo with your target object (in my case: yellow_object.JPG):

python colorDetection.py

The above picture will show the original image (“image”) and how the object will appear (“mask”) after the mask is applied.

Now that we know how to “select” our object using a mask, let’s track its movement in real time using the camera. For that, I based my code on Adrian Rosebrock’s Ball Tracking with OpenCV tutorial.

I strongly suggest that you read Adrian’s tutorial in detail.

First, confirm if you have the imutils library installed. It is Adrian’s collection of OpenCV convenience functions to make a few basic tasks (like resizing or flipping the screen) much easier. If not, enter the below command to install the library on your Virtual Python environment:

pip install imutils

Next, download the code ball_tracking.py from my GitHub, and execute it using the command:

python ball_tracking.py

As a result, you will see something similar to the gif below:

Basically, it is the same code as Adrian’s except for the “video vertical flip”, which I got with the line:

frame = imutils.rotate(frame, angle=180)

Also, note that the mask boundaries used were the ones that we got in the previous step.

Now that we have played with the basics of OpenCV, let’s install an LED to our RPi and start to interact with our GPIOs.

Follow the above electrical diagram: The LED’s cathode will be connected to GPIO 21 and its anode to GND via a 220-ohm resistor.

Let’s test our LED inside our Virtual Python Environment.

Remember that it is possible that RPi.GPIO is not installed in your Python virtual environment! To fix this issue, once you are there (remember to confirm that the (cv) is in your terminal), you need to use pip to install it into your virtual environment:

pip install RPi.GPIO

Let’s use a Python script to execute a simple test:

import sys
import time
import RPi.GPIO as GPIO
# initialize GPIO and variables
redLed = int(sys.argv[1])
freq = int(sys.argv[2])
GPIO.setmode(GPIO.BCM)
GPIO.setup(redLed, GPIO.OUT)
GPIO.setwarnings(False)
print("\n [INFO] Blinking LED (5 times) connected at GPIO {0} \
at every {1} second(s)".format(redLed, freq))
for i in range(5):
    GPIO.output(redLed, GPIO.LOW)
    time.sleep(freq)
    GPIO.output(redLed, GPIO.HIGH)
    time.sleep(freq)
# do a bit of cleanup
print("\n [INFO] Exiting Program and cleanup stuff \n")
GPIO.cleanup()

This code will receive as arguments a GPIO number and the frequency in seconds that our LED should blink. The LED will blink 5 times and the program will be terminated. Note that before terminating, we will release the GPIOs.
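A small robustness note from me (not in the original article): if the blink script is interrupted with [Ctrl] + [C] halfway through, GPIO.cleanup() never runs. Wrapping the loop in try/finally guarantees the pins are released no matter how the script exits; the sketch below assumes the same wiring (LED on GPIO 21):

import time
import RPi.GPIO as GPIO

redLed = 21
GPIO.setmode(GPIO.BCM)
GPIO.setwarnings(False)
GPIO.setup(redLed, GPIO.OUT)

try:
    for i in range(5):
        GPIO.output(redLed, GPIO.HIGH)
        time.sleep(1)
        GPIO.output(redLed, GPIO.LOW)
        time.sleep(1)
except KeyboardInterrupt:
    print("\n [INFO] Interrupted by user")
finally:
    # always release the GPIO pins, even after Ctrl+C
    GPIO.cleanup()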
Note that before terminate, we will liberate the GPIOs. So, to execute the script, you must enter as parameters, LED GPIO, and frequency. For example: python LED_simple_test.py 21 1 The above command will blink 5 times the red LED connected to “GPIO 21” every “1” second. The file GPIO_LED_test.py can be downloaded from my GitHub The above Terminal print screen shows the result (and of course you should confirm that the LED is blinking. Now, let’s work with OpenCV and some basic GPIO stuff. Let’s start integrating our OpenCV codes with GPIO interaction. We will start with the last OpenCV code and we will integrate the GPIO-RPI library on it, so we will turn on the red LED anytime that our colored object is found by the camera. The code used in this step was based on Adrian’s great tutorial OpenCV, RPi.GPIO, and GPIO Zero on the Raspberry Pi: The first thing to do is to “create” our LED, connecting it to the specific GPIO: import RPi.GPIO as GPIOredLed = 21GPIO.setmode(GPIO.BCM)GPIO.setwarnings(False)GPIO.setup(redLed, GPIO.OUT) Second, we must initialize our LED (turned off): GPIO.output(redLed, GPIO.LOW)ledOn = False Now, inside the loop, where the “circle” is created when the object is found, we will turn on the LED: GPIO.output(redLed, GPIO.HIGH)ledOn = True Let’s download the complete code from my GitHub: object_detection_LED.py Run the code using the command: python object_detection_LED.py Here the result. Note the LED (left inferior corner) goes on everytime that the object is detected: Try it with different objects (color and format). You will see that once the color match inside the mask boundaries, the LED is turned on. The video below shows some experiences. Note that only yellow objects that stay inside the color range will be detected, turning the LED on. Objects with different colors are ignored. We are only using the LED here as explained in the last step. I had my Pan Tilt already assembled when I did the video, so ignore it. We will handle with PAN/TILT mechanism in next step. Now that we have played with the basics of OpenCV and GPIO, let’s install our Pan/tilt mechanism. For details, please visit my tutorial: Pan-Tilt-Multi-Servo-Control The servos should be connected to an external 5V supply, having their data pin (in my case, their yellow wiring) connect to Raspberry Pi GPIO as below: GPIO 17 ==> Tilt Servo GPIO 27 ==> Pan Servo Do not forget to connect the GNDs together ==> Raspberry Pi — Servos — External Power Supply) You can have as an option, a resistor of 1K ohm in series, between Raspberry Pi GPIO and Server data input pin. This would protect your RPi in case of a servo problem. Let’s also use the opportunity and test our servos inside our Virtual Python Environment. Let’s use Python script to execute some tests with our drivers: from time import sleepimport RPi.GPIO as GPIOGPIO.setmode(GPIO.BCM)GPIO.setwarnings(False)def setServoAngle(servo, angle): pwm = GPIO.PWM(servo, 50) pwm.start(8) dutyCycle = angle / 18. + 3. pwm.ChangeDutyCycle(dutyCycle) sleep(0.3) pwm.stop()if __name__ == '__main__': import sys servo = int(sys.argv[1]) GPIO.setup(servo, GPIO.OUT) setServoAngle(servo, int(sys.argv[2])) GPIO.cleanup() The core of above code is the function setServoAngle(servo, angle). This function receives as arguments, a servo GPIO number, and an angle value to where the servo must be positioned. Once the input of this function is “angle”, we must convert it to an equivalent duty cycle. To execute the script, you must enter as parameters, servo GPIO, and angle. 
For example:

python angleServoCtrl.py 17 45

The above command will position the servo connected to GPIO 17 (“tilt”) at 45 degrees of “elevation”.

The file angleServoCtrl.py can be downloaded from my GitHub

The idea here will be to position the object in the middle of the screen using the Pan/Tilt mechanism. The bad news is that, to start, we must know where the object is located in real time. But the good news is that it is very easy, once we already have the object center’s coordinates.

First, let’s take the “object_detect_LED” code used before and modify it to print the x,y coordinates of the detected object.

Download from my GitHub the code: objectDetectCoord.py

The “core” of the code is the portion where we find the object and draw a circle on it with a red dot in its center.

# only proceed if the radius meets a minimum size
if radius > 10:
    # draw the circle and centroid on the frame,
    # then update the list of tracked points
    cv2.circle(frame, (int(x), int(y)), int(radius),
        (0, 255, 255), 2)
    cv2.circle(frame, center, 5, (0, 0, 255), -1)

    # print center of circle coordinates
    mapObjectPosition(int(x), int(y))

    # if the led is not already on, turn the LED on
    if not ledOn:
        GPIO.output(redLed, GPIO.HIGH)
        ledOn = True

Let’s “export” the center coordinates to the mapObjectPosition(int(x), int(y)) function in order to print its coordinates. Below is the function:

def mapObjectPosition (x, y):
    print ("[INFO] Object Center coordinates at \
X0 = {0} and Y0 = {1}".format(x, y))

Running the program, we will see at our terminal the (x, y) position coordinates, as shown above. Move the object and observe the coordinates. We will realize that x goes from 0 to 500 (left to right) and y goes from 0 to 350 (top to bottom). See the above pictures.

Great! Now we must use those coordinates as a starting point for our Pan/Tilt tracking system. We want our object to stay always centered on the screen. So, let’s define, for example, that we will consider our object “centered” if:

220 < x < 280

160 < y < 210

Outside of those boundaries, we must move our Pan/Tilt mechanism to compensate for the deviation. Based on that, we can build the function mapServoPosition(x, y) as below. Note that the “x” and “y” used as parameters in this function are the same ones that we used before for printing the central position:

# position servos to present object at center of the frame
def mapServoPosition (x, y):
    global panAngle
    global tiltAngle
    if (x < 220):
        panAngle += 10
        if panAngle > 140:
            panAngle = 140
        positionServo (panServo, panAngle)
    if (x > 280):
        panAngle -= 10
        if panAngle < 40:
            panAngle = 40
        positionServo (panServo, panAngle)
    if (y < 160):
        tiltAngle += 10
        if tiltAngle > 140:
            tiltAngle = 140
        positionServo (tiltServo, tiltAngle)
    if (y > 210):
        tiltAngle -= 10
        if tiltAngle < 40:
            tiltAngle = 40
        positionServo (tiltServo, tiltAngle)

Based on the (x, y) coordinates, servo position commands are generated, using the function positionServo(servo, angle). For example, suppose that the y position is “50”, which means that our object is almost at the top of the screen; this can be translated as our “camera sight” being “low” (let’s say a tilt angle of 120 degrees). So we must “decrease” the tilt angle (let’s say to 100 degrees), so the camera sight will move “up” and the object will go “down” on screen (y will increase to, let’s say, 190).

The above diagram shows the example in terms of geometry. Think about how the Pan camera will operate.
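As a side note (my suggestion, not part of the original design): the fixed 10-degree steps can make the camera oscillate around the target. A smoother alternative is to scale the correction by how far the object is from the center of the frame, a simple proportional control. A hypothetical, self-contained sketch of the idea, keeping the same 500×350 frame and the same 40–140 degree limits used above:

# hypothetical proportional variant of the correction step
FRAME_W, FRAME_H = 500, 350   # frame size observed in the article
GAIN = 0.05                   # degrees of correction per pixel of error (tune by hand)

def proportional_angles(x, y, panAngle, tiltAngle):
    # error = how far the object is from the center of the frame
    error_x = (FRAME_W / 2) - x
    error_y = (FRAME_H / 2) - y
    # larger error -> larger correction; clamp to the 40-140 degree range
    panAngle = min(max(panAngle + GAIN * error_x, 40), 140)
    tiltAngle = min(max(tiltAngle + GAIN * error_y, 40), 140)
    return panAngle, tiltAngle

# object far to the left and slightly high: both angles get a proportional nudge
print(proportional_angles(100, 120, panAngle=90, tiltAngle=90))

The returned angles would then be sent to the servos in the same way as in the original mapServoPosition() function.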
Note that the screen is not mirroring, which means that if you move the object to “your left”, it will move on screen to “your right”, since you are positioned opposite the camera.

The function positionServo(servo, angle) can be written as:

def positionServo (servo, angle):
    os.system("python angleServoCtrl.py " + str(servo) + " " + str(angle))
    print("[INFO] Positioning servo at GPIO {0} to {1} \
degrees\n".format(servo, angle))

We will be calling the script shown before for servo positioning.

Note that angleServoCtrl.py must be in the same directory as objectDetectTrack.py

The complete code can be downloaded from my GitHub: objectDetectTrack.py

The gif below shows an example of our project working:

As always, I hope this project can help others find their way into the exciting world of electronics!

For details and final code, please visit my GitHub repository: OpenCV-Object-Face-Tracking

For more projects, please visit my blog: MJRoBot.org

Below is a glimpse of my next tutorial, where we will explore “Face track and detection”:

Saludos from the south of the world!

See you in my next article!

Thank you,

Marcelo

No rights reserved by the author.
[ { "code": null, "e": 264, "s": 172, "text": "A pan/tilt servo device helping a camera to automatically track color objects using vision." }, { "code": null, "e": 477, "s": 264, "text": "On a previous tutorial, we explored how to control a Pan/Tilt Servo device in order to position a PiCam. Now we will use our device to help the camera to automatically tracking color objects as you can see below:" }, { "code": null, "e": 609, "s": 477, "text": "This is my first experience with OpenCV and I must confess, I am in love with this fantastic “Open Source Computer Vision Library”." }, { "code": null, "e": 1012, "s": 609, "text": "OpenCV is free for both academic and commercial use. It has C++, C, Python and Java interfaces and supports Windows, Linux, Mac OS, iOS and, Android. On my series of OpenCV tutorials, we will be focusing on Raspberry Pi (so, Raspbian as OS) and Python. OpenCV was designed for computational efficiency and with a strong focus on real-time applications. So, it’s perfect for Physical computing projects!" }, { "code": null, "e": 1267, "s": 1012, "text": "I am using a Raspberry Pi V3 updated to the last version of Raspbian (Stretch), so the best way to have OpenCV installed, is to follow the excellent tutorial developed by Adrian Rosebrock: Raspbian Stretch: Install OpenCV 3 + Python on your Raspberry Pi." }, { "code": null, "e": 1426, "s": 1267, "text": "I tried several different guides to install OpenCV on my Pi. Adrian’s tutorial is the best. I advise you to do the same, following his guideline step-by-step." }, { "code": null, "e": 1550, "s": 1426, "text": "Once you finished Adrian’s tutorial, you should have an OpenCV virtual environment ready to run our experiments on your Pi." }, { "code": null, "e": 1636, "s": 1550, "text": "Let’s go to our virtual environment and confirm that OpenCV 3 is correctly installed." }, { "code": null, "e": 1776, "s": 1636, "text": "Adrian recommends run the command “source” each time you open up a new terminal to ensure your system variables have been set up correctly." }, { "code": null, "e": 1794, "s": 1776, "text": "source ~/.profile" }, { "code": null, "e": 1840, "s": 1794, "text": "Next, let’s enter on our virtual environment:" }, { "code": null, "e": 1850, "s": 1840, "text": "workon cv" }, { "code": null, "e": 1941, "s": 1850, "text": "If you see the text (cv) preceding your prompt, then you are in the cv virtualenvironment:" }, { "code": null, "e": 1962, "s": 1941, "text": "(cv) pi@raspberry:~$" }, { "code": null, "e": 2380, "s": 1962, "text": "Adrian calls the attention that the cv Python virtual environment is entirely independent and sequestered from the default Python version included in the download of Raspbian Stretch. So, any Python packages in the global site-packages directory will not be available to the cv virtual environment. Similarly, any Python packages installed in site-packages of cv will not be available to the global install of Python." 
}, { "code": null, "e": 2419, "s": 2380, "text": "Now, enter in your Python interpreter:" }, { "code": null, "e": 2426, "s": 2419, "text": "python" }, { "code": null, "e": 2486, "s": 2426, "text": "and confirm that you are running the 3.5 (or above) version" }, { "code": null, "e": 2561, "s": 2486, "text": "Inside the interpreter (the “>>>” will appear), import the OpenCV library:" }, { "code": null, "e": 2572, "s": 2561, "text": "import cv2" }, { "code": null, "e": 2671, "s": 2572, "text": "If no error messages appear, the OpenCV is correctly installed ON YOUR PYTHON VIRTUAL ENVIRONMENT." }, { "code": null, "e": 2720, "s": 2671, "text": "You can also check the OpenCV version installed:" }, { "code": null, "e": 2736, "s": 2720, "text": "cv2.__version__" }, { "code": null, "e": 2873, "s": 2736, "text": "The 3.3.0 should appear (or a superior version that can be released in future). The above Terminal PrintScreen shows the previous steps." }, { "code": null, "e": 2963, "s": 2873, "text": "Once you have OpenCV installed in your RPi let’s test if your camera is working properly." }, { "code": null, "e": 3039, "s": 2963, "text": "I am assuming that you have a PiCam already installed on your Raspberry Pi." }, { "code": null, "e": 3080, "s": 3039, "text": "Enter the below Python code on your IDE:" }, { "code": null, "e": 3431, "s": 3080, "text": "import numpy as npimport cv2cap = cv2.VideoCapture(0)while(True): ret, frame = cap.read() frame = cv2.flip(frame, -1) # Flip camera vertically gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) cv2.imshow('frame', frame) cv2.imshow('gray', gray) if cv2.waitKey(1) & 0xFF == ord('q'): breakcap.release()cv2.destroyAllWindows()" }, { "code": null, "e": 3562, "s": 3431, "text": "The above code will capture the video stream that will be generated by your PiCam, displaying it both, in BGR color and Gray mode." }, { "code": null, "e": 3699, "s": 3562, "text": "Note that I rotated my camera vertically due the way it is assembled. If it is not your case, comment or delete the “flip” command line." }, { "code": null, "e": 3772, "s": 3699, "text": "You can alternatively download the code from my GitHub: simpleCamTest.py" }, { "code": null, "e": 3803, "s": 3772, "text": "To execute, enter the command:" }, { "code": null, "e": 3827, "s": 3803, "text": "python simpleCamTest.py" }, { "code": null, "e": 3911, "s": 3827, "text": "To finish the program, you must press the key [q] tor [Ctrl] + [C] on your keyboard" }, { "code": null, "e": 3941, "s": 3911, "text": "The picture shows the result." }, { "code": null, "e": 4035, "s": 3941, "text": "To know more about OpenCV, you can follow the tutorial: loading -video-python-opencv-tutorial" }, { "code": null, "e": 4221, "s": 4035, "text": "One thing that we will try to accomplish, will be the detection and tracking of a certain color object. For that, we must understand a little bit more about how OpenCV interpret colors." }, { "code": null, "e": 4300, "s": 4221, "text": "Henri Dang wrote a great tutorial about Color Detection in Python with OpenCV." }, { "code": null, "e": 4540, "s": 4300, "text": "Usually, our camera will work with RGB color mode, which can be understood by thinking of it as all possible colors that can be made from three colored lights for red, green, and blue. We will work here with BGR (Blue, Green, Red) instead." }, { "code": null, "e": 4827, "s": 4540, "text": "As described above, with BGR, a pixel is represented by 3 parameters, blue, green, and red. 
Each parameter usually has a value from 0–255 (or O to FF in hexadecimal). For example, a pure blue pixel on your computer screen would have a B value of 255, a G value of 0, and a R value of 0." }, { "code": null, "e": 5090, "s": 4827, "text": "OpenCV works with HSV (Hue, Saturation, Value) color model, that it is an alternative representation of the RGB color model, designed in the 1970s by computer graphics researchers to more closely align with the way human vision perceives color-making attributes:" }, { "code": null, "e": 5192, "s": 5090, "text": "Great. So, if you want to track a certain color using OpenCV, you must define it using the HSV Model." }, { "code": null, "e": 5385, "s": 5192, "text": "Let’s say that I must track a yellow object as the plastic box shown ay above picture. The ease part is to find its BGR elements. You can use any design program to find it (I used PowerPoint)." }, { "code": null, "e": 5405, "s": 5385, "text": "In my case I found:" }, { "code": null, "e": 5414, "s": 5405, "text": "Blue: 71" }, { "code": null, "e": 5425, "s": 5414, "text": "Green: 234" }, { "code": null, "e": 5434, "s": 5425, "text": "Red: 213" }, { "code": null, "e": 5598, "s": 5434, "text": "Next, we must convert the BGR (71, 234, 213) model to an HSV model, that will be defined with upper and lower range boundaries. For that, let’s run the below code:" }, { "code": null, "e": 5943, "s": 5598, "text": "import sysimport numpy as npimport cv2blue = sys.argv[1]green = sys.argv[2]red = sys.argv[3] color = np.uint8([[[blue, green, red]]])hsv_color = cv2.cvtColor(color, cv2.COLOR_BGR2HSV)hue = hsv_color[0][0][0]print(\"Lower bound is :\"),print(\"[\" + str(hue-10) + \", 100, 100]\\n\")print(\"Upper bound is :\"),print(\"[\" + str(hue + 10) + \", 255, 255]\")" }, { "code": null, "e": 6020, "s": 5943, "text": "You can alternatively download the code from my GitHub: bgr_hsv_converter.py" }, { "code": null, "e": 6106, "s": 6020, "text": "To execute, enter the below command having as parameters the BGR values found before:" }, { "code": null, "e": 6145, "s": 6106, "text": "python bgr_hsv_converter.py 71 234 213" }, { "code": null, "e": 6220, "s": 6145, "text": "The program will print the upper and lower boundaries of our object color." }, { "code": null, "e": 6234, "s": 6220, "text": "In this case:" }, { "code": null, "e": 6262, "s": 6234, "text": "lower bound: [24, 100, 100]" }, { "code": null, "e": 6266, "s": 6262, "text": "and" }, { "code": null, "e": 6294, "s": 6266, "text": "upper bound: [44, 255, 255]" }, { "code": null, "e": 6337, "s": 6294, "text": "The Terminal PrintScreen shows the result." 
}, { "code": null, "e": 6436, "s": 6337, "text": "Last, but not least, let’s see how OpenCV can “mask” our object once we have determined its color:" }, { "code": null, "e": 7222, "s": 6436, "text": "import cv2import numpy as np# Read the picure - The 1 means we want the image in BGRimg = cv2.imread('yellow_object.JPG', 1) # resize imag to 20% in each axisimg = cv2.resize(img, (0,0), fx=0.2, fy=0.2)# convert BGR image to a HSV imagehsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV) # NumPy to create arrays to hold lower and upper range # The “dtype = np.uint8” means that data type is an 8 bit integerlower_range = np.array([24, 100, 100], dtype=np.uint8) upper_range = np.array([44, 255, 255], dtype=np.uint8)# create a mask for imagemask = cv2.inRange(hsv, lower_range, upper_range)# display both the mask and the image side-by-sidecv2.imshow('mask',mask)cv2.imshow('image', img)# wait to user to press [ ESC ]while(1): k = cv2.waitKey(0) if(k == 27): breakcv2.destroyAllWindows()" }, { "code": null, "e": 7296, "s": 7222, "text": "You can alternatively download the code from my GitHub: colorDetection.py" }, { "code": null, "e": 7422, "s": 7296, "text": "To execute, enter the below command having in your directory a photo with your target object (in my case: yellow_object.JPG):" }, { "code": null, "e": 7447, "s": 7422, "text": "python colorDetection.py" }, { "code": null, "e": 7576, "s": 7447, "text": "The above picture will show the original image (“image”) and how the object will appear (“mask”) after that the mask is applied." }, { "code": null, "e": 7774, "s": 7576, "text": "Now that we know how to “select” our object using a mask, let’s track its movement in real time using the camera. For that, I based my code on Adrian Rosebrock’s Ball Tracking with OpenCV tutorial." }, { "code": null, "e": 7836, "s": 7774, "text": "I strongly suggest that you read Adrian’s tutorial in detail." }, { "code": null, "e": 8114, "s": 7836, "text": "First, confirm if you have the imutils library installed. it is Adrian’s collection of OpenCV convenience functions to make a few basic tasks (like resizing or flip screen) much easier. If not, enter with below command to install the library on your Virtual Python environment:" }, { "code": null, "e": 8134, "s": 8114, "text": "pip install imutils" }, { "code": null, "e": 8225, "s": 8134, "text": "Next, download the code ball_tracking.py from my GitHub, and execute it using the command:" }, { "code": null, "e": 8248, "s": 8225, "text": "python ball_traking.py" }, { "code": null, "e": 8310, "s": 8248, "text": "As a result, you will see something similar to the gif below:" }, { "code": null, "e": 8413, "s": 8310, "text": "Basically, it is the same code as Adrian’s unless the “video vertical flip”, that I got with the line:" }, { "code": null, "e": 8454, "s": 8413, "text": "frame = imutils.rotate(frame, angle=180)" }, { "code": null, "e": 8542, "s": 8454, "text": "Also, note that the mask boundaries used were the one that we got in the previous step." }, { "code": null, "e": 8662, "s": 8542, "text": "Now that we have played with the basics of OpenCV, let’s install a LED to our RPi and start to interact with our GPIOs." }, { "code": null, "e": 8791, "s": 8662, "text": "Follow the above electrical diagram: The LED’s cathode will be connected to GPIO 21 and its anode to GND via a 220-ohm resistor." }, { "code": null, "e": 8849, "s": 8791, "text": "Let’s test our LED inside our Virtual Python Environment." 
}, { "code": null, "e": 9103, "s": 8849, "text": "Remember that its possible that RPi.GPIO is not installed in your Python virtual environment! To fix this issue, once you are there (remember to confirm that the (cv) is in your terminal), you need to use pip to install it into your virtual environment:" }, { "code": null, "e": 9124, "s": 9103, "text": "pip install RPi.GPIO" }, { "code": null, "e": 9179, "s": 9124, "text": "Let’s use the python script to execute a simple test :" }, { "code": null, "e": 9699, "s": 9179, "text": "import sysimport timeimport RPi.GPIO as GPIO# initialize GPIO and variablesredLed = int(sys.argv[1])freq = int(sys.argv[2])GPIO.setmode(GPIO.BCM)GPIO.setup(redLed, GPIO.OUT)GPIO.setwarnings(False)print(\"\\n [INFO] Blinking LED (5 times) connected at GPIO {0} \\at every {1} second(s)\".format(redLed, freq))for i in range(5): GPIO.output(redLed, GPIO.LOW) time.sleep(freq) GPIO.output(redLed, GPIO.HIGH) time.sleep(freq)# do a bit of cleanupprint(\"\\n [INFO] Exiting Program and cleanup stuff \\n\")GPIO.cleanup()" }, { "code": null, "e": 9924, "s": 9699, "text": "This code will receive as arguments a GPIO number and the frequency in seconds that our LED should blink. The LED will blink 5 times and the program will be terminated. Note that before terminate, we will liberate the GPIOs." }, { "code": null, "e": 10006, "s": 9924, "text": "So, to execute the script, you must enter as parameters, LED GPIO, and frequency." }, { "code": null, "e": 10019, "s": 10006, "text": "For example:" }, { "code": null, "e": 10050, "s": 10019, "text": "python LED_simple_test.py 21 1" }, { "code": null, "e": 10140, "s": 10050, "text": "The above command will blink 5 times the red LED connected to “GPIO 21” every “1” second." }, { "code": null, "e": 10199, "s": 10140, "text": "The file GPIO_LED_test.py can be downloaded from my GitHub" }, { "code": null, "e": 10308, "s": 10199, "text": "The above Terminal print screen shows the result (and of course you should confirm that the LED is blinking." }, { "code": null, "e": 10363, "s": 10308, "text": "Now, let’s work with OpenCV and some basic GPIO stuff." }, { "code": null, "e": 10721, "s": 10363, "text": "Let’s start integrating our OpenCV codes with GPIO interaction. We will start with the last OpenCV code and we will integrate the GPIO-RPI library on it, so we will turn on the red LED anytime that our colored object is found by the camera. 
The code used in this step was based on Adrian’s great tutorial OpenCV, RPi.GPIO, and GPIO Zero on the Raspberry Pi:" }, { "code": null, "e": 10803, "s": 10721, "text": "The first thing to do is to “create” our LED, connecting it to the specific GPIO:" }, { "code": null, "e": 10911, "s": 10803, "text": "import RPi.GPIO as GPIOredLed = 21GPIO.setmode(GPIO.BCM)GPIO.setwarnings(False)GPIO.setup(redLed, GPIO.OUT)" }, { "code": null, "e": 10960, "s": 10911, "text": "Second, we must initialize our LED (turned off):" }, { "code": null, "e": 11003, "s": 10960, "text": "GPIO.output(redLed, GPIO.LOW)ledOn = False" }, { "code": null, "e": 11106, "s": 11003, "text": "Now, inside the loop, where the “circle” is created when the object is found, we will turn on the LED:" }, { "code": null, "e": 11149, "s": 11106, "text": "GPIO.output(redLed, GPIO.HIGH)ledOn = True" }, { "code": null, "e": 11222, "s": 11149, "text": "Let’s download the complete code from my GitHub: object_detection_LED.py" }, { "code": null, "e": 11254, "s": 11222, "text": "Run the code using the command:" }, { "code": null, "e": 11285, "s": 11254, "text": "python object_detection_LED.py" }, { "code": null, "e": 11385, "s": 11285, "text": "Here the result. Note the LED (left inferior corner) goes on everytime that the object is detected:" }, { "code": null, "e": 11524, "s": 11385, "text": "Try it with different objects (color and format). You will see that once the color match inside the mask boundaries, the LED is turned on." }, { "code": null, "e": 11708, "s": 11524, "text": "The video below shows some experiences. Note that only yellow objects that stay inside the color range will be detected, turning the LED on. Objects with different colors are ignored." }, { "code": null, "e": 11895, "s": 11708, "text": "We are only using the LED here as explained in the last step. I had my Pan Tilt already assembled when I did the video, so ignore it. We will handle with PAN/TILT mechanism in next step." }, { "code": null, "e": 11993, "s": 11895, "text": "Now that we have played with the basics of OpenCV and GPIO, let’s install our Pan/tilt mechanism." }, { "code": null, "e": 12061, "s": 11993, "text": "For details, please visit my tutorial: Pan-Tilt-Multi-Servo-Control" }, { "code": null, "e": 12213, "s": 12061, "text": "The servos should be connected to an external 5V supply, having their data pin (in my case, their yellow wiring) connect to Raspberry Pi GPIO as below:" }, { "code": null, "e": 12236, "s": 12213, "text": "GPIO 17 ==> Tilt Servo" }, { "code": null, "e": 12258, "s": 12236, "text": "GPIO 27 ==> Pan Servo" }, { "code": null, "e": 12352, "s": 12258, "text": "Do not forget to connect the GNDs together ==> Raspberry Pi — Servos — External Power Supply)" }, { "code": null, "e": 12520, "s": 12352, "text": "You can have as an option, a resistor of 1K ohm in series, between Raspberry Pi GPIO and Server data input pin. This would protect your RPi in case of a servo problem." }, { "code": null, "e": 12610, "s": 12520, "text": "Let’s also use the opportunity and test our servos inside our Virtual Python Environment." }, { "code": null, "e": 12674, "s": 12610, "text": "Let’s use Python script to execute some tests with our drivers:" }, { "code": null, "e": 13062, "s": 12674, "text": "from time import sleepimport RPi.GPIO as GPIOGPIO.setmode(GPIO.BCM)GPIO.setwarnings(False)def setServoAngle(servo, angle):\tpwm = GPIO.PWM(servo, 50)\tpwm.start(8)\tdutyCycle = angle / 18. 
+ 3.\tpwm.ChangeDutyCycle(dutyCycle)\tsleep(0.3)\tpwm.stop()if __name__ == '__main__':\timport sys\tservo = int(sys.argv[1])\tGPIO.setup(servo, GPIO.OUT)\tsetServoAngle(servo, int(sys.argv[2]))\tGPIO.cleanup()" }, { "code": null, "e": 13338, "s": 13062, "text": "The core of above code is the function setServoAngle(servo, angle). This function receives as arguments, a servo GPIO number, and an angle value to where the servo must be positioned. Once the input of this function is “angle”, we must convert it to an equivalent duty cycle." }, { "code": null, "e": 13414, "s": 13338, "text": "To execute the script, you must enter as parameters, servo GPIO, and angle." }, { "code": null, "e": 13427, "s": 13414, "text": "For example:" }, { "code": null, "e": 13458, "s": 13427, "text": "python angleServoCtrl.py 17 45" }, { "code": null, "e": 13562, "s": 13458, "text": "The above command will position the servo connected on GPIO 17 (“tilt”) with 45 degrees in “elevation”." }, { "code": null, "e": 13622, "s": 13562, "text": "The file angleServoCtrl.py can be downloaded from my GitHub" }, { "code": null, "e": 13911, "s": 13622, "text": "The idea here will be to position the object in the middle of the screen using the Pan/Tilt mechanism. The bad news is that for starting we must know where the object is located in real time. But the good news is that it is very easy, once we already have the object center’s coordinates." }, { "code": null, "e": 14036, "s": 13911, "text": "First, let’s take the “object_detect_LED” code used before and modify it to print the x,y coordinates of the founded object." }, { "code": null, "e": 14091, "s": 14036, "text": "Download from my GitHub the code: objectDetectCoord.py" }, { "code": null, "e": 14208, "s": 14091, "text": "The “core” of the code is the portion where we find the object and draw a circle on it with a red dot in its center." }, { "code": null, "e": 14658, "s": 14208, "text": "# only proceed if the radius meets a minimum sizeif radius > 10:\t# draw the circle and centroid on the frame,\t# then update the list of tracked points\tcv2.circle(frame, (int(x), int(y)), int(radius),\t\t(0, 255, 255), 2)\tcv2.circle(frame, center, 5, (0, 0, 255), -1)\t\t\t\t# print center of circle coordinates\tmapObjectPosition(int(x), int(y))\t\t\t\t# if the led is not already on, turn the LED on\tif not ledOn:\t\tGPIO.output(redLed, GPIO.HIGH)\t\tledOn = True" }, { "code": null, "e": 14797, "s": 14658, "text": "Let’s “export” the center coordinates to mapObjectPosition(int(x), int(y)) function in order to print its coordinates. Below the function:" }, { "code": null, "e": 14917, "s": 14797, "text": "def mapObjectPosition (x, y): print (\"[INFO] Object Center coordinates at \\ X0 = {0} and Y0 = {1}\".format(x, y))" }, { "code": null, "e": 15183, "s": 14917, "text": "Running the program, we will see at our terminal, the (x, y) position coordinates, as shown above. Move the object and observe the coordinates. We will realize that x goes from 0 to 500 (left to right) and y goes from o to 350 (top to down). See the above pictures." }, { "code": null, "e": 15277, "s": 15183, "text": "Great! Now we must use those coordinates as a starting point for our Pan/Tilt tracking system" }, { "code": null, "e": 15416, "s": 15277, "text": "We want that our object stays always centered on the screen. 
So, let’s define for example, that we will consider our object “centered” if:" }, { "code": null, "e": 15430, "s": 15416, "text": "220 < x < 280" }, { "code": null, "e": 15444, "s": 15430, "text": "160 < y < 210" }, { "code": null, "e": 15739, "s": 15444, "text": "Outside of those boundaries, we must move our Pan/Tilt mechanism to compensate deviation. Based on that, we can build the function mapServoPosition(x, y) as below. Note that the “x” and “y” used as parameters in this function are the same that we have used before for printing central position:" }, { "code": null, "e": 16403, "s": 15739, "text": "# position servos to present object at center of the framedef mapServoPosition (x, y): global panAngle global tiltAngle if (x < 220): panAngle += 10 if panAngle > 140: panAngle = 140 positionServo (panServo, panAngle) if (x > 280): panAngle -= 10 if panAngle < 40: panAngle = 40 positionServo (panServo, panAngle) if (y < 160): tiltAngle += 10 if tiltAngle > 140: tiltAngle = 140 positionServo (tiltServo, tiltAngle) if (y > 210): tiltAngle -= 10 if tiltAngle < 40: tiltAngle = 40 positionServo (tiltServo, tiltAngle)" }, { "code": null, "e": 16899, "s": 16403, "text": "Based on the (x, y) coordinates, servo position commands are generated, using the function positionServo(servo, angle). For example, suppose that y position is “50”, what means that our object is almost in the top of the screen, that can be translated that out “camera sight” is “low” (let’s say a tilt angle of 120 degrees) So we must “decrease” Tilt angle (let’s say to 100 degrees), so the camera sight will be “up” and the object will go “down” on screen (y will increase to let’s say, 190)." }, { "code": null, "e": 16957, "s": 16899, "text": "The above diagram shows the example in terms of geometry." }, { "code": null, "e": 17173, "s": 16957, "text": "Think how the Pan camera will operate. note that the screen is not mirroring, what means that if you move the object to “your left”, it will move on screen for “your right”, once you are in opposition to the camera." }, { "code": null, "e": 17233, "s": 17173, "text": "The function positionServo(servo, angle) can be written as:" }, { "code": null, "e": 17447, "s": 17233, "text": "def positionServo (servo, angle): os.system(\"python angleServoCtrl.py \" + str(servo) + \" \" + str(angle)) print(\"[INFO] Positioning servo at GPIO {0} to {1} \\ degrees\\n\".format(servo, angle))" }, { "code": null, "e": 17513, "s": 17447, "text": "We will be calling the script shown before for servo positioning." }, { "code": null, "e": 17594, "s": 17513, "text": "Note that angleServoCtrl.py must be in the same directory as objectDetectTrac.py" }, { "code": null, "e": 17665, "s": 17594, "text": "The complete code can be download from my GitHub: objectDetectTrack.py" }, { "code": null, "e": 17716, "s": 17665, "text": "Below gif shows an example of our project working:" }, { "code": null, "e": 17818, "s": 17716, "text": "As always, I hope this project can help others find their way into the exciting world of electronics!" }, { "code": null, "e": 17909, "s": 17818, "text": "For details and final code, please visit my GitHub depository: OpenCV-Object-Face-Tracking" }, { "code": null, "e": 17962, "s": 17909, "text": "For more projects, please visit my blog: MJRoBot.org" }, { "code": null, "e": 18049, "s": 17962, "text": "Below a glimpse of my next tutorial, where we will explore “Face track and detection”:" }, { "code": null, "e": 18086, "s": 18049, "text": "Saludos from the south of the world!" 
}, { "code": null, "e": 18114, "s": 18086, "text": "See you in my next article!" }, { "code": null, "e": 18125, "s": 18114, "text": "Thank you," }, { "code": null, "e": 18133, "s": 18125, "text": "Marcelo" }, { "code": null, "e": 18152, "s": 18133, "text": "No rights reserved" } ]
How to join points on a scatterplot with smooth lines in R using plot function?
It is difficult to join the points of a scatterplot with smooth lines when the points are highly scattered, but we may still want to visualize a smooth trend that cannot be seen by looking at the points alone. A smooth curve is also helpful for understanding whether the relationship is linear or not. We can do this by fitting a loess model and drawing it over the scatterplot using the plot function. Consider the below data − > set.seed(3) > x<-sample(1:100,10,replace=TRUE) > y<-rpois(10,100) Using loess to create the smooth lines − > Model <- loess(y~x) > summary(Model) Call: loess(formula = y ~ x) Number of Observations: 10 Equivalent Number of Parameters: 4.77 Residual Standard Error: 8.608 Trace of smoother matrix: 5.27 (exact) Control settings: span : 0.75 degree : 2 family : gaussian surface : interpolate cell = 0.2 normalize : TRUE parametric : FALSE drop.square: FALSE > plot(x,y) > lines(Model, col='red', lwd=2)
[ { "code": null, "e": 1401, "s": 1062, "text": "It is very difficult to join points on a scatterplot with smooth lines if the scatteredness is high but we might want to look at the smoothness that cannot be understood by just looking at the points. It is also helpful to understand whether the model is linear or not. We can do this by plotting the model with loess using plot function." }, { "code": null, "e": 1427, "s": 1401, "text": "Consider the below data −" }, { "code": null, "e": 1495, "s": 1427, "text": "> set.seed(3)\n> x<-sample(1:100,10,replace=TRUE)\n> y<-rpois(10,100)" }, { "code": null, "e": 1536, "s": 1495, "text": "Using loess to create the smooth lines −" }, { "code": null, "e": 1916, "s": 1536, "text": "> Model <- loess(y~x)\n> summary(Model)\nCall:\nloess(formula = y ~ x)\nNumber of Observations: 10\nEquivalent Number of Parameters: 4.77\nResidual Standard Error: 8.608\nTrace of smoother matrix: 5.27 (exact)\nControl settings:\nspan : 0.75\ndegree : 2\nfamily : gaussian\nsurface : interpolate cell = 0.2\nnormalize : TRUE\nparametric : FALSE\ndrop.square: FALSE\n> plot(x,y)" }, { "code": null, "e": 1949, "s": 1916, "text": "> lines(Model, col='red', lwd=2)" } ]
Python Pandas - Return numpy array of python datetime.time objects
To return a numpy array of python datetime.time objects, use the datetimeindex.time property in Pandas. At first, import the required libraries − import pandas as pd Create a DatetimeIndex with period 3 and frequency as ns, i.e. nanoseconds. The timezone is Australia/Sydney − datetimeindex = pd.date_range('2021-10-20 02:30:50', periods=3, tz='Australia/Sydney', freq='ns') Display DateTimeIndex − print("DateTimeIndex...\n", datetimeindex) Returns only the time part of Timestamps without timezone information − print("\nThe numpy array (time part)..\n",datetimeindex.time) Following is the code − import pandas as pd # DatetimeIndex with period 3 and frequency as ns i.e. nanoseconds # The timezone is Australia/Sydney datetimeindex = pd.date_range('2021-10-20 02:30:50', periods=3, tz='Australia/Sydney', freq='ns') # display DateTimeIndex print("DateTimeIndex...\n", datetimeindex) # Returns only the date part of Timestamps without timezone information print("\nThe numpy array (date part)..\n",datetimeindex.date) # Returns only the time part of Timestamps without timezone information print("\nThe numpy array (time part)..\n",datetimeindex.time) This will produce the following output − DateTimeIndex... DatetimeIndex([ '2021-10-20 02:30:50+11:00', '2021-10-20 02:30:50.000000001+11:00', '2021-10-20 02:30:50.000000002+11:00'], dtype='datetime64[ns, Australia/Sydney]', freq='N') The numpy array (date part).. [datetime.date(2021, 10, 20) datetime.date(2021, 10, 20) datetime.date(2021, 10, 20)] The numpy array (time part).. [datetime.time(2, 30, 50) datetime.time(2, 30, 50) datetime.time(2, 30, 50)]
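As a quick check, not shown in the original example, you can confirm that the property really returns a plain numpy array holding datetime.time objects:

import pandas as pd
import datetime

datetimeindex = pd.date_range('2021-10-20 02:30:50', periods=3, tz='Australia/Sydney', freq='ns')
times = datetimeindex.time

print(type(times))                             # <class 'numpy.ndarray'>
print(type(times[0]))                          # <class 'datetime.time'>
print(times[0] == datetime.time(2, 30, 50))    # True

Note that the timezone information is dropped: the time objects hold the local wall-clock time (02:30:50) without the +11:00 offset.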
[ { "code": null, "e": 1164, "s": 1062, "text": "To return numpy array of python datetime.time objects, use the datetimeindex.time property in Pandas." }, { "code": null, "e": 1206, "s": 1164, "text": "At first, import the required libraries −" }, { "code": null, "e": 1226, "s": 1206, "text": "import pandas as pd" }, { "code": null, "e": 1336, "s": 1226, "text": "Create a DatetimeIndex with period 3 and frequency as us i.e. nanoseconds. The timezone is Australia/Sydney −" }, { "code": null, "e": 1435, "s": 1336, "text": "datetimeindex = pd.date_range('2021-10-20 02:30:50', periods=3, tz='Australia/Sydney', freq='ns')\n" }, { "code": null, "e": 1459, "s": 1435, "text": "Display DateTimeIndex −" }, { "code": null, "e": 1502, "s": 1459, "text": "print(\"DateTimeIndex...\\n\", datetimeindex)" }, { "code": null, "e": 1574, "s": 1502, "text": "Returns only the time part of Timestamps without timezone information −" }, { "code": null, "e": 1637, "s": 1574, "text": "print(\"\\nThe numpy array (time part)..\\n\",datetimeindex.time)\n" }, { "code": null, "e": 1661, "s": 1637, "text": "Following is the code −" }, { "code": null, "e": 2220, "s": 1661, "text": "import pandas as pd\n\n# DatetimeIndex with period 3 and frequency as us i.e. nanoseconds\n# The timezone is Australia/Sydney\ndatetimeindex = pd.date_range('2021-10-20 02:30:50', periods=3, tz='Australia/Sydney', freq='ns')\n\n# display DateTimeIndex\nprint(\"DateTimeIndex...\\n\", datetimeindex)\n\n# Returns only the date part of Timestamps without timezone information\nprint(\"\\nThe numpy array (date part)..\\n\",datetimeindex.date)\n\n# Returns only the time part of Timestamps without timezone information\nprint(\"\\nThe numpy array (time part)..\\n\",datetimeindex.time)" }, { "code": null, "e": 2259, "s": 2220, "text": "This will produce the following code −" }, { "code": null, "e": 2677, "s": 2259, "text": "DateTimeIndex...\nDatetimeIndex([ '2021-10-20 02:30:50+11:00',\n'2021-10-20 02:30:50.000000001+11:00',\n'2021-10-20 02:30:50.000000002+11:00'],\ndtype='datetime64[ns, Australia/Sydney]', freq='N')\n\nThe numpy array (date part)..\n[datetime.date(2021, 10, 20) datetime.date(2021, 10, 20)\ndatetime.date(2021, 10, 20)]\n\nThe numpy array (time part)..\n[datetime.time(2, 30, 50) datetime.time(2, 30, 50)\ndatetime.time(2, 30, 50)]" } ]
Visualizing and Analyzing Proteins in Python | by Aren Carpenter | Towards Data Science
Human biology is incredibly complex. Even with our ever-growing understanding, our answers only uncover more and more questions. The completion of the Human Genome Project gave many scientists confidence that we could solve pressing issues in biology through genomics. However, as our understanding of biology has grown, we’ve recognized that other factors influence how an organism’s genome is utilized. Thus, new fields of study were born to address these interconnected and flexible domains, including transcriptomics (study of mRNA) and proteomics (study of proteins). As I covered in my previous blog, the Biopython package is quite powerful and can visualize and analyze DNA and RNA sequences simply. And it has protein analysis capabilities, too! So let’s dive in. The Protein Data Bank is a one-stop shop for exploring and downloading protein sequences. PDB developed its own file format for this purpose —the aptly named, .pdb. But as larger, more complex proteins were analyzed another format was developed — CIF and mmCIF. CIF (Crystallographic Information File) was developed to archive small molecule crystallographic experiments studying the arrangement of atoms in crystalline solids. CIF was expanded to larger molecules (macromolecules, hence mm) with mmCIF and has replaced the PDB format. [1] While mmCIF is now the standard, there is still legacy support for PDB files. Let’s look at Phytohemagglutinin-L, a lectin found in certain legumes, like kidney beans. Import the necessary packages: from Bio.PDB import *import nglview as nvimport ipywidgets Now we’ll create an instance of Biopython’s PDBParser, and use the nglview library to create our interactive visualization. We can pan, zoom, and rotate the molecule and even hover for specific atom information. pdb_parser = PDBParser()structure = pdb_parser.get_structure("PHA-L", "Data/1FAT.pdb")view = nv.show_biopython(structure) Our process for CIF files will be very similar, just utilize the MMCIF Parser instead! Here we’re visualizing a larger protein 6EBK, or the voltage-activated Kv1.2–2.1 paddle chimera channel in lipid nanodiscs (a mouthful...) cif_parser = MMCIFParser()structure = cif_parser.get_structure("6EBK", "fa/6ebk.cif")view = nv.show_biopython(structure) The most immediate way to access information about the protein is through the Header, a dictionary of metadata, available in both PDB and CIF file formats. mmcif_dict = MMCIFDict.MMCIFDict("fa/1fat.cif")len(mmcif_dict) # 689 This generates a large dictionary of information about the protein, including the citation that sequenced the protein, structure information, atom by atom location and angles, and chemical compositions. As you can see, there are 689 items in this dictionary. One of the most important pieces of information you’ll want to analyze is the residue — think amino acid — sequence of the protein or polypeptide. Because proteins can be comprised of several polypeptides, we use a framework to know which level of organization we are examining. From the overall structure, down to individual atoms. The Structure object in our file follows the SMCRA architecture in a parent-child relationship: A Structure consists of models A Model consists of chains A Chain consists of residues (amino acids) A Residue consists of Atoms There are many ways to parse the structure metadata to return the protein’s residue sequence. 
Three ways are below: # .get_residues() method in a loopfor model in structure: for residue in model.get_residues(): print(residue)# .get_residues() method as generator objectresidues = structure.get_residues() # returns a generator object[item for item in residues]# .unfold_entities - keyword for each level of the SMCRA structureSelection.unfold_entities(structure, "R") # R is for residues Getting the residue sequence above returns the sequence for the entire protein structure, but proteins are often comprised of several smaller polypeptides that we may want to analyze separately. Biopython enables this through polypeptide builders that can generate these individual polypeptides. polypeptide_builder = CaPPBuilder()counter = 1for polypeptide in polypeptide_builder.build_peptides(structure): seq = polypeptide.get_sequence() print(f"Sequence: {counter}, Length: {len(seq)}") print(seq) counter += 1# Sequence: 1, Length: 36# SNDIYFNFQRFNETNLILQRDASVSSSGQLRLTNLN# Sequence: 2, Length: 196# NGEPRVGSLGRAFYSAPIQIWDNTTGTVASFATSFT...ASKLS# Sequence: 3, Length: 233# SNDIYFNFQRFNETNLILQRDASVSSSGQLRLTNLN...ASKLS# Sequence: 4, Length: 36# SNDIYFNFQRFNETNLILQRDASVSSSGQLRLTNLN# Sequence: 5, Length: 196# NGEPRVGSLGRAFYSAPIQIWDNTTGTVASFATSFT...ASKLS# Sequence: 6, Length: 35# SNDIYFNFQRFNETNLILQRDASVSSSGQLRLTNL# Sequence: 7, Length: 196# NGEPRVGSLGRAFYSAPIQIWDNTTGTVASFATSFT...ASKLS So now we have the residue sequences for these 7 chains, but we also have access to many methods for analyzing these sequences. from Bio.SeqUtils.ProtParam import ProteinAnalysis The only caveat is that calling .get_sequences() above returns a Biopython Seq() object — check out my previous blog to dive deeper into Seq() objects and their functionality — and ProteinAnalysis expects a string. analyzed_seq = ProteinAnalysis(str(seq)) We are now ready to run the following methods to build understanding of our sequence! We can calculate the molecular weight of the polypeptide. analyzed_seq.molecular_weight()# 4176.51669 Protein GRAVY returns the GRAVY (grand average of hydropathy) value for the protein sequences you enter. The GRAVY value is calculated by adding the hydropathy value for each residue and dividing by the length of the sequence (Kyte and Doolittle; 1982). [2] A higher value is more hydrophobic. A lower value is more hydrophilic. We’ll discuss later how to generate residue by residue hydrophobicity later. analyzed_seq.gravy()# -0.5611 We can easily count the number of each type of amino acid. analyzed_seq.count_amino_acids()# {'A': 1, 'C': 0, 'D': 2, 'E': 1, 'F': 3, 'G': 1, 'H': 0, 'I': 2, 'K': 0, 'L': 5, 'M': 0, 'N': 6, 'P': 0, 'Q': 3, 'R': 3, 'S': 5, 'T': 2, 'V': 1, 'W': 0, 'Y': 1} And the percentage of each amino acid in the sequence! analyzed_seq.get_amino_acids_percent()# {'A': 0.027777777777777776, 'C': 0.0, 'D': 0.05555555555555555, 'E': 0.027777777777777776, 'F': 0.08333333333333333, 'G': 0.027777777777777776, 'H': 0.0, 'I': 0.05555555555555555, 'K': 0.0, 'L': 0.1388888888888889, 'M': 0.0, 'N': 0.16666666666666666, 'P': 0.0, 'Q': 0.08333333333333333, 'R': 0.08333333333333333, 'S': 0.1388888888888889, 'T': 0.05555555555555555, 'V': 0.027777777777777776, 'W': 0.0, 'Y': 0.027777777777777776} A very useful method — .secondary_structure_fraction() — returns the fraction of amino acids that tend to be found in the three classical secondary structures. These are beta sheets, alpha helixes, and turns (where the residues change direction). 
analyzed_seq.secondary_structure_fraction() # helix, turn, sheet# (0.3333333333333333, 0.3333333333333333, 0.19444444444444445) Protein scales are a way of measuring certain attributes of residues over the length of the peptide sequence using a sliding window. Scales are comprised of values for each amino acid based on different physical and chemical properties, such as hydrophobicity, secondary structure tendencies, and surface accessibility. As opposed to some chain-level measures like overall molecule behavior, scales allow a more granular understanding of how smaller sections of the sequence will behave. from Bio.SeqUtils.ProtParam import ProtParamData Some common scales include: kd → Kyte & Doolittle Index of Hydrophobicity [Original Article] Flex → Normalized average flexibility parameters (B-values) [Original Article] hw → Hopp & Wood Index of Hydrophilicity [Original Article] em → Emini Surface fractional probability (Surface Accessibility) [Original Book] Documentation on some common scales can be found here. Let’s look at the Index of Hydrophobicity (kd) as an example. Here is the scale, where each residue has an associated value representing its level of hydrophobicity. kd = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2} Positive values are hydrophobic. Isoleucine (I) and Valine (V) are the most hydrophobic, and Arginine (R) and Lysine (K) are the most hydrophilic. Hydrophobic residues are generally located internally in the polypeptide, while hydrophilic residues are external, so this scale also gives a sense of how this polypeptide may be folded. Protein scale analysis requires setting a window size over which an average value is calculated. You can also specify using the “edge” keyword how important the neighboring residues are, basically weighting their importance to the average for the window. analysed_seq.protein_scale(window=7, param_dict=ProtParamData.kd)# [-0.7571428571428572, -0.2428571428571429, -0.24285714285714288, -0.38571428571428573, -0.6285714285714287, -0.942857142857143, -1.842857142857143, -1.442857142857143, -2.3428571428571425, -1.3000000000000003, -0.01428571428571433, 0.1285714285714285, 0.1285714285714285, -0.014285714285714235, -0.4142857142857143, 0.3428571428571428, -0.31428571428571417, -0.35714285714285715, -1.014285714285714, -0.6285714285714284, -0.10000000000000002, 0.3428571428571429, -0.4142857142857142, 0.24285714285714285, -1.0, -0.34285714285714286, -0.32857142857142857, -0.7142857142857143, -0.1142857142857144, -0.11428571428571435] Let’s bring all of our methods together and create a script that can iterate through each chain in our structure and run some routine analysis. We will create an empty container and then populate it with a dictionary of key information for each sequence. Once in this nested structure, we can slice as we would any container in Python to select individual entries. 
# Create empty list for chainsall_seqs = []counter = 1# For each polypeptide in the structure, run protein analysis methods and store in dictfor pp in ppb.build_peptides(structure): seq_info = {} # create an empty dict seq = pp.get_sequence() # get the sequence like above analyzed_seq = ProteinAnalysis(str(seq)) # needs to be a str # Specify dict keys and values seq_info['Sequence Number'] = counter # set sequence id seq_info['Sequence'] = seq # store BioPython Seq() object seq_info['Sequence Length'] = len(seq) # length of seq seq_info['Molecular Weight'] = analyzed_seq.molecular_weight() seq_info['GRAVY'] = analyzed_seq.gravy() # hydrophobicity seq_info['AA Count'] = analyzed_seq.count_amino_acids() seq_info['AA Percent'] = analyzed_seq.get_amino_acids_percent() # tuple of (helix, turn, sheet) seq_info['Secondary Structure'] = \ analyzed_seq.secondary_structure_fraction() # Update all_seqs list and increase counter all_seqs.append(seq_info) counter += 1 Selecting the first sequence returns the dictionary with our analyses and values! all_seqs[0] # select the first sequence# {'Sequence Number': 1, 'Sequence': Seq('SNDIYFNFQRFNETNLILQRDASVSSSGQLRLTNLN'), 'Sequence Length': 36, 'Molecular Weight': 4176.52, 'GRAVY': -0.5611, 'Amino Acid Count': {'A': 1, 'C': 0, 'D': 2, 'E': 1, 'F': 3, 'G': 1, 'H': 0, 'I': 2, 'K': 0, 'L': 5, 'M': 0, 'N': 6, 'P': 0, 'Q': 3, 'R': 3, 'S': 5, 'T': 2, 'V': 1, 'W': 0, 'Y': 1}, 'Amino Acid Percent': {'A': 0.027777777777777776, 'C': 0.0, 'D': 0.05555555555555555, 'E': 0.027777777777777776, 'F': 0.08333333333333333, 'G': 0.027777777777777776, 'H': 0.0, 'I': 0.05555555555555555, 'K': 0.0, 'L': 0.1388888888888889, 'M': 0.0, 'N': 0.16666666666666666, 'P': 0.0, 'Q': 0.08333333333333333, 'R': 0.08333333333333333, 'S': 0.1388888888888889, 'T': 0.05555555555555555, 'V': 0.027777777777777776, 'W': 0.0, 'Y': 0.027777777777777776}, 'Secondary Structure': (0.3333333333333333, 0.3333333333333333, 0.19444444444444445)} We can select specific values easily, too. all_seqs[0]['Sequence']# Seq('SNDIYFNFQRFNETNLILQRDASVSSSGQLRLTNLN')all_seqs[0]['Molecular Weight']# 4176.52 Not only does Biopython make working with DNA sequences easy, it also can be leveraged for proteomics to visualize and analyze proteins. It provides powerful and flexible methods for routine protein analysis that can be used to develop custom pipelines based on your specific needs. I know that as I continue to dive deeper into what Biopython has to offer, I will continue to be impressed, so you can expect more articles covering its capabilities in the future. As always, all of the code and dependencies described in this article can be found in this repo, which I will continue to update as I explore Biopython. I hope this guide shows you how simple it can be to start your own bioinformatics projects with Biopython, and I look forward to seeing what you can create! I’m always looking to connect and explore other projects! You can follow me on GitHub or LinkedIn, and check out my other stories on Medium. I also have a Twitter! [1] R. K. Green, Beginner’s Guide to PDB Structures and the PDBx/mmCIF Format. [2] J. Kyte and R.F. Doolittle, A simple method for displaying the hydropathic character of a protein (1983), J. Mol. Biol. 157 (1): 105–32.
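As a small extension of the analysis loop above, not part of the original post, the list of per-chain dictionaries stored in all_seqs can be loaded into a pandas DataFrame, which makes it easy to compare the chains side by side:

import pandas as pd

# Assumes all_seqs has been built by the loop shown earlier in this post.
df = pd.DataFrame(all_seqs)
print(df[['Sequence Number', 'Sequence Length', 'Molecular Weight', 'GRAVY']])

# For example, rank the chains by hydrophobicity (higher GRAVY = more hydrophobic).
print(df.sort_values('GRAVY', ascending=False)[['Sequence Number', 'GRAVY']])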
[ { "code": null, "e": 745, "s": 172, "text": "Human biology is incredibly complex. Even with our ever-growing understanding, our answers only uncover more and more questions. The completion of the Human Genome Project gave many scientists confidence that we could solve pressing issues in biology through genomics. However, as our understanding of biology has grown, we’ve recognized that other factors influence how an organism’s genome is utilized. Thus, new fields of study were born to address these interconnected and flexible domains, including transcriptomics (study of mRNA) and proteomics (study of proteins)." }, { "code": null, "e": 944, "s": 745, "text": "As I covered in my previous blog, the Biopython package is quite powerful and can visualize and analyze DNA and RNA sequences simply. And it has protein analysis capabilities, too! So let’s dive in." }, { "code": null, "e": 1484, "s": 944, "text": "The Protein Data Bank is a one-stop shop for exploring and downloading protein sequences. PDB developed its own file format for this purpose —the aptly named, .pdb. But as larger, more complex proteins were analyzed another format was developed — CIF and mmCIF. CIF (Crystallographic Information File) was developed to archive small molecule crystallographic experiments studying the arrangement of atoms in crystalline solids. CIF was expanded to larger molecules (macromolecules, hence mm) with mmCIF and has replaced the PDB format. [1]" }, { "code": null, "e": 1562, "s": 1484, "text": "While mmCIF is now the standard, there is still legacy support for PDB files." }, { "code": null, "e": 1652, "s": 1562, "text": "Let’s look at Phytohemagglutinin-L, a lectin found in certain legumes, like kidney beans." }, { "code": null, "e": 1683, "s": 1652, "text": "Import the necessary packages:" }, { "code": null, "e": 1742, "s": 1683, "text": "from Bio.PDB import *import nglview as nvimport ipywidgets" }, { "code": null, "e": 1954, "s": 1742, "text": "Now we’ll create an instance of Biopython’s PDBParser, and use the nglview library to create our interactive visualization. We can pan, zoom, and rotate the molecule and even hover for specific atom information." }, { "code": null, "e": 2076, "s": 1954, "text": "pdb_parser = PDBParser()structure = pdb_parser.get_structure(\"PHA-L\", \"Data/1FAT.pdb\")view = nv.show_biopython(structure)" }, { "code": null, "e": 2302, "s": 2076, "text": "Our process for CIF files will be very similar, just utilize the MMCIF Parser instead! Here we’re visualizing a larger protein 6EBK, or the voltage-activated Kv1.2–2.1 paddle chimera channel in lipid nanodiscs (a mouthful...)" }, { "code": null, "e": 2423, "s": 2302, "text": "cif_parser = MMCIFParser()structure = cif_parser.get_structure(\"6EBK\", \"fa/6ebk.cif\")view = nv.show_biopython(structure)" }, { "code": null, "e": 2579, "s": 2423, "text": "The most immediate way to access information about the protein is through the Header, a dictionary of metadata, available in both PDB and CIF file formats." }, { "code": null, "e": 2648, "s": 2579, "text": "mmcif_dict = MMCIFDict.MMCIFDict(\"fa/1fat.cif\")len(mmcif_dict) # 689" }, { "code": null, "e": 2907, "s": 2648, "text": "This generates a large dictionary of information about the protein, including the citation that sequenced the protein, structure information, atom by atom location and angles, and chemical compositions. As you can see, there are 689 items in this dictionary." 
}, { "code": null, "e": 3240, "s": 2907, "text": "One of the most important pieces of information you’ll want to analyze is the residue — think amino acid — sequence of the protein or polypeptide. Because proteins can be comprised of several polypeptides, we use a framework to know which level of organization we are examining. From the overall structure, down to individual atoms." }, { "code": null, "e": 3336, "s": 3240, "text": "The Structure object in our file follows the SMCRA architecture in a parent-child relationship:" }, { "code": null, "e": 3367, "s": 3336, "text": "A Structure consists of models" }, { "code": null, "e": 3394, "s": 3367, "text": "A Model consists of chains" }, { "code": null, "e": 3437, "s": 3394, "text": "A Chain consists of residues (amino acids)" }, { "code": null, "e": 3465, "s": 3437, "text": "A Residue consists of Atoms" }, { "code": null, "e": 3581, "s": 3465, "text": "There are many ways to parse the structure metadata to return the protein’s residue sequence. Three ways are below:" }, { "code": null, "e": 3963, "s": 3581, "text": "# .get_residues() method in a loopfor model in structure: for residue in model.get_residues(): print(residue)# .get_residues() method as generator objectresidues = structure.get_residues() # returns a generator object[item for item in residues]# .unfold_entities - keyword for each level of the SMCRA structureSelection.unfold_entities(structure, \"R\") # R is for residues" }, { "code": null, "e": 4259, "s": 3963, "text": "Getting the residue sequence above returns the sequence for the entire protein structure, but proteins are often comprised of several smaller polypeptides that we may want to analyze separately. Biopython enables this through polypeptide builders that can generate these individual polypeptides." }, { "code": null, "e": 4966, "s": 4259, "text": "polypeptide_builder = CaPPBuilder()counter = 1for polypeptide in polypeptide_builder.build_peptides(structure): seq = polypeptide.get_sequence() print(f\"Sequence: {counter}, Length: {len(seq)}\") print(seq) counter += 1# Sequence: 1, Length: 36# SNDIYFNFQRFNETNLILQRDASVSSSGQLRLTNLN# Sequence: 2, Length: 196# NGEPRVGSLGRAFYSAPIQIWDNTTGTVASFATSFT...ASKLS# Sequence: 3, Length: 233# SNDIYFNFQRFNETNLILQRDASVSSSGQLRLTNLN...ASKLS# Sequence: 4, Length: 36# SNDIYFNFQRFNETNLILQRDASVSSSGQLRLTNLN# Sequence: 5, Length: 196# NGEPRVGSLGRAFYSAPIQIWDNTTGTVASFATSFT...ASKLS# Sequence: 6, Length: 35# SNDIYFNFQRFNETNLILQRDASVSSSGQLRLTNL# Sequence: 7, Length: 196# NGEPRVGSLGRAFYSAPIQIWDNTTGTVASFATSFT...ASKLS" }, { "code": null, "e": 5094, "s": 4966, "text": "So now we have the residue sequences for these 7 chains, but we also have access to many methods for analyzing these sequences." }, { "code": null, "e": 5145, "s": 5094, "text": "from Bio.SeqUtils.ProtParam import ProteinAnalysis" }, { "code": null, "e": 5360, "s": 5145, "text": "The only caveat is that calling .get_sequences() above returns a Biopython Seq() object — check out my previous blog to dive deeper into Seq() objects and their functionality — and ProteinAnalysis expects a string." }, { "code": null, "e": 5401, "s": 5360, "text": "analyzed_seq = ProteinAnalysis(str(seq))" }, { "code": null, "e": 5487, "s": 5401, "text": "We are now ready to run the following methods to build understanding of our sequence!" }, { "code": null, "e": 5545, "s": 5487, "text": "We can calculate the molecular weight of the polypeptide." 
}, { "code": null, "e": 5589, "s": 5545, "text": "analyzed_seq.molecular_weight()# 4176.51669" }, { "code": null, "e": 5847, "s": 5589, "text": "Protein GRAVY returns the GRAVY (grand average of hydropathy) value for the protein sequences you enter. The GRAVY value is calculated by adding the hydropathy value for each residue and dividing by the length of the sequence (Kyte and Doolittle; 1982). [2]" }, { "code": null, "e": 5995, "s": 5847, "text": "A higher value is more hydrophobic. A lower value is more hydrophilic. We’ll discuss later how to generate residue by residue hydrophobicity later." }, { "code": null, "e": 6025, "s": 5995, "text": "analyzed_seq.gravy()# -0.5611" }, { "code": null, "e": 6084, "s": 6025, "text": "We can easily count the number of each type of amino acid." }, { "code": null, "e": 6279, "s": 6084, "text": "analyzed_seq.count_amino_acids()# {'A': 1, 'C': 0, 'D': 2, 'E': 1, 'F': 3, 'G': 1, 'H': 0, 'I': 2, 'K': 0, 'L': 5, 'M': 0, 'N': 6, 'P': 0, 'Q': 3, 'R': 3, 'S': 5, 'T': 2, 'V': 1, 'W': 0, 'Y': 1}" }, { "code": null, "e": 6334, "s": 6279, "text": "And the percentage of each amino acid in the sequence!" }, { "code": null, "e": 6802, "s": 6334, "text": "analyzed_seq.get_amino_acids_percent()# {'A': 0.027777777777777776, 'C': 0.0, 'D': 0.05555555555555555, 'E': 0.027777777777777776, 'F': 0.08333333333333333, 'G': 0.027777777777777776, 'H': 0.0, 'I': 0.05555555555555555, 'K': 0.0, 'L': 0.1388888888888889, 'M': 0.0, 'N': 0.16666666666666666, 'P': 0.0, 'Q': 0.08333333333333333, 'R': 0.08333333333333333, 'S': 0.1388888888888889, 'T': 0.05555555555555555, 'V': 0.027777777777777776, 'W': 0.0, 'Y': 0.027777777777777776}" }, { "code": null, "e": 7049, "s": 6802, "text": "A very useful method — .secondary_structure_fraction() — returns the fraction of amino acids that tend to be found in the three classical secondary structures. These are beta sheets, alpha helixes, and turns (where the residues change direction)." }, { "code": null, "e": 7177, "s": 7049, "text": "analyzed_seq.secondary_structure_fraction() # helix, turn, sheet# (0.3333333333333333, 0.3333333333333333, 0.19444444444444445)" }, { "code": null, "e": 7665, "s": 7177, "text": "Protein scales are a way of measuring certain attributes of residues over the length of the peptide sequence using a sliding window. Scales are comprised of values for each amino acid based on different physical and chemical properties, such as hydrophobicity, secondary structure tendencies, and surface accessibility. As opposed to some chain-level measures like overall molecule behavior, scales allow a more granular understanding of how smaller sections of the sequence will behave." }, { "code": null, "e": 7714, "s": 7665, "text": "from Bio.SeqUtils.ProtParam import ProtParamData" }, { "code": null, "e": 7742, "s": 7714, "text": "Some common scales include:" }, { "code": null, "e": 7807, "s": 7742, "text": "kd → Kyte & Doolittle Index of Hydrophobicity [Original Article]" }, { "code": null, "e": 7886, "s": 7807, "text": "Flex → Normalized average flexibility parameters (B-values) [Original Article]" }, { "code": null, "e": 7946, "s": 7886, "text": "hw → Hopp & Wood Index of Hydrophilicity [Original Article]" }, { "code": null, "e": 8028, "s": 7946, "text": "em → Emini Surface fractional probability (Surface Accessibility) [Original Book]" }, { "code": null, "e": 8083, "s": 8028, "text": "Documentation on some common scales can be found here." 
}, { "code": null, "e": 8249, "s": 8083, "text": "Let’s look at the Index of Hydrophobicity (kd) as an example. Here is the scale, where each residue has an associated value representing its level of hydrophobicity." }, { "code": null, "e": 8486, "s": 8249, "text": "kd = {\"A\": 1.8, \"R\": -4.5, \"N\": -3.5, \"D\": -3.5, \"C\": 2.5, \"Q\": -3.5, \"E\": -3.5, \"G\": -0.4, \"H\": -3.2, \"I\": 4.5, \"L\": 3.8, \"K\": -3.9, \"M\": 1.9, \"F\": 2.8, \"P\": -1.6, \"S\": -0.8, \"T\": -0.7, \"W\": -0.9, \"Y\": -1.3, \"V\": 4.2}" }, { "code": null, "e": 8820, "s": 8486, "text": "Positive values are hydrophobic. Isoleucine (I) and Valine (V) are the most hydrophobic, and Arginine (R) and Lysine (K) are the most hydrophilic. Hydrophobic residues are generally located internally in the polypeptide, while hydrophilic residues are external, so this scale also gives a sense of how this polypeptide may be folded." }, { "code": null, "e": 9075, "s": 8820, "text": "Protein scale analysis requires setting a window size over which an average value is calculated. You can also specify using the “edge” keyword how important the neighboring residues are, basically weighting their importance to the average for the window." }, { "code": null, "e": 9761, "s": 9075, "text": "analysed_seq.protein_scale(window=7, param_dict=ProtParamData.kd)# [-0.7571428571428572, -0.2428571428571429, -0.24285714285714288, -0.38571428571428573, -0.6285714285714287, -0.942857142857143, -1.842857142857143, -1.442857142857143, -2.3428571428571425, -1.3000000000000003, -0.01428571428571433, 0.1285714285714285, 0.1285714285714285, -0.014285714285714235, -0.4142857142857143, 0.3428571428571428, -0.31428571428571417, -0.35714285714285715, -1.014285714285714, -0.6285714285714284, -0.10000000000000002, 0.3428571428571429, -0.4142857142857142, 0.24285714285714285, -1.0, -0.34285714285714286, -0.32857142857142857, -0.7142857142857143, -0.1142857142857144, -0.11428571428571435]" }, { "code": null, "e": 10126, "s": 9761, "text": "Let’s bring all of our methods together and create a script that can iterate through each chain in our structure and run some routine analysis. We will create an empty container and then populate it with a dictionary of key information for each sequence. Once in this nested structure, we can slice as we would any container in Python to select individual entries." }, { "code": null, "e": 11162, "s": 10126, "text": "# Create empty list for chainsall_seqs = []counter = 1# For each polypeptide in the structure, run protein analysis methods and store in dictfor pp in ppb.build_peptides(structure): seq_info = {} # create an empty dict seq = pp.get_sequence() # get the sequence like above analyzed_seq = ProteinAnalysis(str(seq)) # needs to be a str # Specify dict keys and values seq_info['Sequence Number'] = counter # set sequence id seq_info['Sequence'] = seq # store BioPython Seq() object seq_info['Sequence Length'] = len(seq) # length of seq seq_info['Molecular Weight'] = analyzed_seq.molecular_weight() seq_info['GRAVY'] = analyzed_seq.gravy() # hydrophobicity seq_info['AA Count'] = analyzed_seq.count_amino_acids() seq_info['AA Percent'] = analyzed_seq.get_amino_acids_percent() # tuple of (helix, turn, sheet) seq_info['Secondary Structure'] = \\ analyzed_seq.secondary_structure_fraction() # Update all_seqs list and increase counter all_seqs.append(seq_info) counter += 1" }, { "code": null, "e": 11244, "s": 11162, "text": "Selecting the first sequence returns the dictionary with our analyses and values!" 
}, { "code": null, "e": 12194, "s": 11244, "text": "all_seqs[0] # select the first sequence# {'Sequence Number': 1, 'Sequence': Seq('SNDIYFNFQRFNETNLILQRDASVSSSGQLRLTNLN'), 'Sequence Length': 36, 'Molecular Weight': 4176.52, 'GRAVY': -0.5611, 'Amino Acid Count': {'A': 1, 'C': 0, 'D': 2, 'E': 1, 'F': 3, 'G': 1, 'H': 0, 'I': 2, 'K': 0, 'L': 5, 'M': 0, 'N': 6, 'P': 0, 'Q': 3, 'R': 3, 'S': 5, 'T': 2, 'V': 1, 'W': 0, 'Y': 1}, 'Amino Acid Percent': {'A': 0.027777777777777776, 'C': 0.0, 'D': 0.05555555555555555, 'E': 0.027777777777777776, 'F': 0.08333333333333333, 'G': 0.027777777777777776, 'H': 0.0, 'I': 0.05555555555555555, 'K': 0.0, 'L': 0.1388888888888889, 'M': 0.0, 'N': 0.16666666666666666, 'P': 0.0, 'Q': 0.08333333333333333, 'R': 0.08333333333333333, 'S': 0.1388888888888889, 'T': 0.05555555555555555, 'V': 0.027777777777777776, 'W': 0.0, 'Y': 0.027777777777777776}, 'Secondary Structure': (0.3333333333333333, 0.3333333333333333, 0.19444444444444445)}" }, { "code": null, "e": 12237, "s": 12194, "text": "We can select specific values easily, too." }, { "code": null, "e": 12346, "s": 12237, "text": "all_seqs[0]['Sequence']# Seq('SNDIYFNFQRFNETNLILQRDASVSSSGQLRLTNLN')all_seqs[0]['Molecular Weight']# 4176.52" }, { "code": null, "e": 12810, "s": 12346, "text": "Not only does Biopython make working with DNA sequences easy, it also can be leveraged for proteomics to visualize and analyze proteins. It provides powerful and flexible methods for routine protein analysis that can be used to develop custom pipelines based on your specific needs. I know that as I continue to dive deeper into what Biopython has to offer, I will continue to be impressed, so you can expect more articles covering its capabilities in the future." }, { "code": null, "e": 13120, "s": 12810, "text": "As always, all of the code and dependencies described in this article can be found in this repo, which I will continue to update as I explore Biopython. I hope this guide shows you how simple it can be to start your own bioinformatics projects with Biopython, and I look forward to seeing what you can create!" }, { "code": null, "e": 13284, "s": 13120, "text": "I’m always looking to connect and explore other projects! You can follow me on GitHub or LinkedIn, and check out my other stories on Medium. I also have a Twitter!" }, { "code": null, "e": 13363, "s": 13284, "text": "[1] R. K. Green, Beginner’s Guide to PDB Structures and the PDBx/mmCIF Format." } ]
Policy-Gradient Methods. REINFORCE algorithm | by Jordi TORRES.AI | Towards Data Science
This is a new post devoted to Policy-Gradient Methods, in the “Deep Reinforcement Learning Explained” series. Policy-Gradient methods are a subclass of Policy-Based methods that estimate an optimal policy's weights through gradient ascent. Intuitively, gradient ascent begins with an initial guess for the policy's weights; then the algorithm evaluates the gradient of the expected return at that point, which indicates the direction of its steepest increase, so we can take a small step in that direction. We hope to end up at a new value of the policy's weights for which the value of the expected return function is a little bit larger. The algorithm then repeats this process of evaluating the gradient and taking steps until it considers that it has eventually reached the maximum expected return. A Spanish version of this publication is available on medium.com.

Although we have coded a deterministic policy in the previous post, Policy-Based methods can learn either stochastic or deterministic policies. With a stochastic policy, our neural network's output is an action vector that represents a probability distribution (rather than returning a single deterministic action). The policy we will follow in the new method presented in this post is selecting an action from this probability distribution. This means that if our Agent ends up in the same state twice, we may not end up taking the same action every time. Such a representation of actions as probabilities has many advantages, for instance the advantage of a smooth representation: if we change our network weights a bit, the output of the neural network will change, but probably just a little bit. In the case of a deterministic policy with a discrete output, even a small adjustment of the weights can lead to a jump to a different action. However, if the output is a probability distribution, a small change of weights will usually lead to a small change in the output distribution. This is a very important property, because gradient optimization methods are all about tweaking the parameters of a model a bit to improve the results.

But how can the network's parameters be changed to improve the policy? If you remember from Post 6, we solved a very similar problem using the Cross-Entropy method: our network took observations as inputs and returned a probability distribution over the actions. In fact, the cross-entropy method is, in some sense, a preliminary version of the methods that we will introduce in this post. The key idea underlying policy gradients is reinforcing good actions: to push up the probabilities of actions that lead to higher return, and push down the probabilities of actions that lead to a lower return, until you arrive at the optimal policy. The policy gradient method will iteratively amend the policy network weights (with smooth updates) to make state-action pairs that resulted in positive return more likely, and make state-action pairs that resulted in negative return less likely.

To introduce this idea we will start with a vanilla version (the basic version) of the policy gradient method, the REINFORCE algorithm (original paper). This algorithm is the fundamental policy gradient algorithm on which nearly all the advanced policy gradient algorithms are based. Let's look at a more mathematical definition of the algorithm, since it will help us understand the more advanced algorithms in the following posts.
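Before going into the math, here is a minimal sketch of what a stochastic policy can look like in code. It is not the network built later in this post (that one uses torch.nn.Sequential with different sizes); it is only an illustrative example, with made-up dimensions, of a network whose output is a probability distribution over actions from which the Agent samples:

import torch
import torch.nn as nn

# Illustrative stochastic policy: maps a state vector to action probabilities.
class TinyPolicy(nn.Module):
    def __init__(self, obs_size, n_actions, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_size, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_actions),
            nn.Softmax(dim=-1)  # outputs sum to 1: a probability distribution
        )
    def forward(self, state):
        return self.net(state)

policy = TinyPolicy(obs_size=4, n_actions=2)
state = torch.rand(4)                                      # a fake observation
probs = policy(state)                                      # e.g. tensor([0.48, 0.52])
action = torch.distributions.Categorical(probs).sample()   # stochastic action choice

Sampling twice from the same state can return different actions, which is exactly the smooth, probabilistic behavior described above.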
The first thing we need to define is a trajectory, which is just a state-action sequence (the rewards are not part of it). A trajectory is a little bit more flexible than an episode because there are no restrictions on its length; it can correspond to a full episode or just a part of an episode. We denote the length with a capital H, where H stands for Horizon, and we represent a trajectory with τ: τ = (s0, a0, s1, a1, ..., sH, aH, sH+1).

The method REINFORCE is built upon trajectories instead of episodes because maximizing expected return over trajectories (instead of episodes) lets the method search for optimal policies for both episodic and continuing tasks. That said, for the vast majority of episodic tasks, where a reward is only delivered at the end of the episode, it only makes sense to use the full episode as a trajectory; otherwise, we don't have enough reward information to meaningfully estimate the expected return.

We denote the return for a trajectory τ with R(τ), and it is calculated as the sum of the rewards from that trajectory τ: R(τ) = r1 + r2 + ... + rH + rH+1. The parameter Gk is called the total return, or future return, at time step k for the transition k. It is the return we expect to collect from time step k until the end of the trajectory, and it can be approximated by adding the rewards from some state in the episode until the end of the episode, discounted with gamma γ: Gk = rk + γ·rk+1 + γ^2·rk+2 + ...

Remember that the goal of this algorithm is to find the weights θ of the neural network that maximize the expected return, which we denote by U(θ) and which can be defined as: U(θ) = Σ P(τ;θ)·R(τ), where the sum runs over all possible trajectories τ. To see how it corresponds to the expected return, note that we have expressed the return R(τ) as a function of the trajectory τ. Then, we calculate the weighted average of all possible values that the return R(τ) can take, where the weights are given by P(τ;θ), the probability of each possible trajectory. Note that this probability depends on the weights θ in the neural network, because θ defines the policy used to select the actions in the trajectory, which also plays a role in determining the states that the agent observes.

As we already introduced, one way to determine the value of θ that maximizes the function U(θ) is through gradient ascent. Similarly to the Hill Climbing algorithm presented in the previous post, we can intuitively visualize gradient ascent as a strategy to reach the highest point of a hill, U(θ), by iteratively taking small steps in the direction of the gradient. Mathematically, our update step for gradient ascent can be expressed as θ ← θ + α·∇θU(θ), where α is the step size that is generally allowed to decay over time (equivalent to the learning rate decay in deep learning). Once we know how to calculate or estimate this gradient, we can repeatedly apply this update step, in the hope that θ converges to the value that maximizes U(θ). Gradient ascent is closely related to gradient descent; the difference is that gradient descent is designed to find the minimum of a function (it steps in the direction of the negative gradient), whereas gradient ascent finds the maximum (it steps in the direction of the gradient). We will use this approach in our code in PyTorch.

To apply this method, we need to be able to calculate the gradient ∇U(θ); however, we won't be able to calculate its exact value, since that is computationally too expensive: to calculate the gradient exactly, we would have to consider every possible trajectory, becoming an intractable problem in most cases.
Instead of doing this, the method samples trajectories using the policy and then uses only those trajectories to estimate the gradient. This sampling is equivalent to the Monte Carlo approach presented in Post 13 of this series, and for this reason the REINFORCE method is also known as Monte Carlo Policy Gradients. In summary, the pseudocode that describes in more detail the behavior of this method can be written as:

Let's look a bit more closely at the equation of step 3 in the pseudocode to understand it. We begin by making some simplifying assumptions, for example, assuming that the trajectory τ corresponds to a full episode. Remember that R(τ) is just the cumulative reward of the trajectory τ (the only trajectory we sampled). Assume that the reward signal for the sample play we are working with gives the Agent a reward of positive one (Gt=+1) if we won the game and a reward of negative one (Gt=-1) if we lost.

On the other hand, the term πθ(at|st) is the probability that the Agent selects action at from state st at time step t. Remember that π with the subscript θ refers to the policy, which is parameterized by θ. Then, the full expression takes the gradient of the log of that probability, ∇θ log πθ(at|st). This will tell us how we should change the weights of the policy θ if we want to increase the log probability of selecting action at from state st. Specifically, suppose we nudge the policy weights by taking a small step in the direction of this gradient. In that case, it will increase the log probability of selecting the action from that state, and if we step in the opposite direction it will decrease the log probability.

The following equation will do all of these updates at once, for each state-action pair, at and st, at each time step t in the trajectory: ∇θU(θ) ≈ Σt ∇θ log πθ(at|st) · Gt. To see this behavior, assume that the Agent won the episode. Then, Gt is just a positive one (+1), and what the sum does is add up all the gradient directions we should step in to increase the log probability of selecting each state-action pair. That's equivalent to just taking H+1 simultaneous steps, where each step corresponds to a state-action pair in the trajectory. In the opposite case, if the Agent lost, Gt becomes a negative one, which ensures that instead of stepping in the direction of the steepest increase of the log probabilities, the method steps in the direction of the steepest decrease.

The proof of how to derive the equation that approximates the gradient can be safely skipped; what interests us much more is the meaning of the expression. In gradient methods where we can formulate some probability p which should be maximized, we would actually optimize the log probability log p instead of the probability p for some parameters θ. The reason is that it generally works better to optimize log p(x) than p(x), because the gradient of log p(x) is generally better scaled. Remember that probabilities are bounded by 0 and 1 by definition, so the range of values that the optimizer can operate over is limited and small. In this case, sometimes probabilities may be extremely tiny or very close to 1, and this runs into numerical issues when optimizing on a computer with limited numerical precision. If we instead use a surrogate objective, namely log p (natural logarithm), we have an objective that has a larger “dynamic range” than raw probability space, since the log of probability space ranges from (-∞,0), and this makes the log probability easier to compute.
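As a small numerical illustration of that scaling argument (this snippet is not part of the original post), the gradient of log p with respect to p is 1/p, so it grows as p gets smaller: actions that are currently very unlikely but led to a good return receive a strong push, something that would not happen if we optimized p directly, whose gradient with respect to p is always 1:

import torch

for p0 in (0.9, 0.1, 0.001):
    p = torch.tensor(p0, requires_grad=True)
    torch.log(p).backward()
    print(f"p = {p0:<6}  d(log p)/dp = {p.grad.item():.1f}")

# p = 0.9     d(log p)/dp = 1.1
# p = 0.1     d(log p)/dp = 10.0
# p = 0.001   d(log p)/dp = 1000.0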
Now, we will explore an implementation of REINFORCE to solve OpenAI Gym's Cartpole environment. The entire code of this post can be found on GitHub and can be run as a Google Colab notebook using this link. First, we will import all the necessary packages with the following lines of code:

import numpy as np
import torch
import gym
from matplotlib import pyplot as plt

And also the OpenAI Gym's Cartpole Environment:

env = gym.make('CartPole-v0')

We will build a neural network that serves as a policy network. The policy network will accept a state vector as input, and it will produce a (discrete) probability distribution over the possible actions.

obs_size = env.observation_space.shape[0]
n_actions = env.action_space.n
HIDDEN_SIZE = 256
model = torch.nn.Sequential(
    torch.nn.Linear(obs_size, HIDDEN_SIZE),
    torch.nn.ReLU(),
    torch.nn.Linear(HIDDEN_SIZE, n_actions),
    torch.nn.Softmax(dim=0)
)

The model is only two linear layers, with a ReLU activation function for the first layer, and the Softmax function for the last layer (by default, the initialization is with random weights).

print(model)

With the result of the neural network, the Agent samples from the probability distribution to take an action that will be executed in the Environment.

act_prob = model(torch.from_numpy(curr_state).float())
action = np.random.choice(np.array([0,1]), p=act_prob.data.numpy())
prev_state = curr_state
curr_state, _, done, info = env.step(action)

The second line of this code samples an action from the probability distribution produced by the policy network in the first line. Then, in the last line of this code, the Agent takes the action.

The training loop trains the policy network by updating the parameters θ following the pseudocode steps described in the previous section. First we define the optimizer and initialize some variables:

learning_rate = 0.003
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
Horizon = 500
MAX_TRAJECTORIES = 500
gamma = 0.99
score = []

where learning_rate is the step size α, Horizon is the H, and gamma is the γ of the previous pseudocode. Using these variables, the main loop, whose number of iterations is defined by MAX_TRAJECTORIES, is coded as:

for trajectory in range(MAX_TRAJECTORIES):
    curr_state = env.reset()
    done = False
    transitions = []
    for t in range(Horizon):
        act_prob = model(torch.from_numpy(curr_state).float())
        action = np.random.choice(np.array([0,1]), p=act_prob.data.numpy())
        prev_state = curr_state
        curr_state, _, done, info = env.step(action)
        transitions.append((prev_state, action, t+1))
        if done:
            break
    score.append(len(transitions))
    reward_batch = torch.Tensor([r for (s,a,r) in transitions]).flip(dims=(0,))
    batch_Gvals = []
    for i in range(len(transitions)):
        new_Gval = 0
        power = 0
        for j in range(i, len(transitions)):
            new_Gval = new_Gval + ((gamma**power)*reward_batch[j]).numpy()
            power += 1
        batch_Gvals.append(new_Gval)
    expected_returns_batch = torch.FloatTensor(batch_Gvals)
    expected_returns_batch /= expected_returns_batch.max()
    state_batch = torch.Tensor([s for (s,a,r) in transitions])
    action_batch = torch.Tensor([a for (s,a,r) in transitions])
    pred_batch = model(state_batch)
    prob_batch = pred_batch.gather(dim=1, index=action_batch.long().view(-1,1)).squeeze()
    loss = -torch.sum(torch.log(prob_batch)*expected_returns_batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

With the score list we will keep track of the trajectory length over training time. We keep track of the states and actions of the current trajectory in the list transitions.
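To make the contents of transitions explicit (this is an illustration with made-up numbers, not extra code from the original notebook), each element appended to the list is a (state, action, t+1) tuple, so a three-step episode would produce something like:

# Hypothetical example of what transitions could look like after a 3-step episode
# (CartPole observations shortened to plain lists for readability):
example_transitions = [
    ([0.02, -0.01, 0.03, 0.04], 1, 1),
    ([0.03, 0.18, 0.03, -0.25], 0, 2),
    ([0.03, -0.01, 0.02, 0.04], 1, 3),
]
# The third field is the time index t+1, not the +1 reward that CartPole returns at
# every step; the snippet below converts these values into discounted, normalized returns.

The idea is that longer episodes produce larger values of t+1, so trajectories that keep the pole up for longer still end up with larger returns.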
Next, we compute the expected return for each transition (code snippet from the previous listing):

batch_Gvals = []
for i in range(len(transitions)):
    new_Gval = 0
    power = 0
    for j in range(i, len(transitions)):
        new_Gval = new_Gval + ((gamma**power)*reward_batch[j]).numpy()
        power += 1
    batch_Gvals.append(new_Gval)
expected_returns_batch = torch.FloatTensor(batch_Gvals)
expected_returns_batch /= expected_returns_batch.max()

The list batch_Gvals is used to compute the expected return for each transition, as indicated in the previous pseudocode (a tiny standalone illustration of this discounting idea is given at the end of this section). The tensor expected_returns_batch stores the expected returns for all the transitions of the current trajectory. Finally, this code normalizes the rewards to be within the [0,1] interval to improve numerical stability.

The loss function requires an array of action probabilities, prob_batch, for the actions that were taken, and the discounted rewards:

loss = -torch.sum(torch.log(prob_batch) * expected_returns_batch)

For this purpose, we recompute the action probabilities for all the states in the trajectory and subset the action probabilities associated with the actions that were actually taken, with the following two lines of code:

pred_batch = model(state_batch)
prob_batch = pred_batch.gather(dim=1, index=action_batch.long().view(-1,1)).squeeze()

An important detail is the minus sign in the loss function of this code:

loss = -torch.sum(torch.log(prob_batch)*expected_returns_batch)

Why did we introduce a minus sign in front of the log probability? In general, we prefer to set things up so that we are minimizing an objective function instead of maximizing it, since that plays nicely with PyTorch's built-in optimizers (which use stochastic gradient descent). We should instead tell PyTorch to minimize 1-π. This loss function approaches 0 as π nears 1, so we are encouraging the gradients to maximize π for the action we took.

Also, let's remember that we use a surrogate objective, namely -log π (where log is the natural logarithm), because we have an objective that has a larger dynamic range than raw probability space (bounded by 0 and 1 by definition), since the log of probability space ranges from (-∞,0), and this makes the log probability easier to compute.

Finally, note that we included the following lines of code to monitor the progress of the training loop:

if trajectory % 50 == 0 and trajectory > 0:
    print('Trajectory {}\tAverage Score: {:.2f}'
          .format(trajectory, np.mean(score[-50:-1])))

We can visualize the results by running the following code:

def running_mean(x):
    N = 50
    kernel = np.ones(N)
    conv_len = x.shape[0]-N
    y = np.zeros(conv_len)
    for i in range(conv_len):
        y[i] = kernel @ x[i:i+N]
        y[i] /= N
    return y

score = np.array(score)
avg_score = running_mean(score)

plt.figure(figsize=(15,7))
plt.ylabel("Trajectory Duration", fontsize=12)
plt.xlabel("Training Epochs", fontsize=12)
plt.plot(score, color='gray', linewidth=1)
plt.plot(avg_score, color='blue', linewidth=3)
plt.scatter(np.arange(score.shape[0]), score, color='green', linewidth=0.3)

You should be able to obtain a plot with a nicely increasing trend of the trajectory duration.
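As referenced above, here is a tiny standalone illustration of the discounting idea behind batch_Gvals. It is a simplified sketch, not a drop-in replacement for the listing above (which builds the same kind of quantity with explicit nested loops over the flipped reward batch): the discounted return of a step is its own reward plus gamma times the discounted return of the following step.

import numpy as np

def discounted_returns(rewards, gamma=0.99):
    # G_t = r_t + gamma * r_{t+1} + gamma^2 * r_{t+2} + ...
    returns = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

print(discounted_returns([1.0, 1.0, 1.0]))
# [2.9701 1.99   1.    ]  <- earlier steps accumulate more discounted future reward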
We can also render how the Agent applies the policy with the following code:

from IPython import display   # needed for the notebook rendering below

def watch_agent():
    env = gym.make('CartPole-v0')
    state = env.reset()
    rewards = []
    img = plt.imshow(env.render(mode='rgb_array'))
    for t in range(2000):
        pred = model(torch.from_numpy(state).float())
        action = np.random.choice(np.array([0,1]), p=pred.data.numpy())
        img.set_data(env.render(mode='rgb_array'))
        plt.axis('off')
        display.display(plt.gcf())
        display.clear_output(wait=True)
        state, reward, done, _ = env.step(action)
        rewards.append(reward)
        if done:
            print("Reward:", sum([r for r in rewards]))
            break
    env.close()

watch_agent()

Now that we know the two families of methods, what are the main differences between them?

Policy methods, such as REINFORCE, directly optimize the policy. Value methods, such as DQN, do the same indirectly, learning the value first and obtaining the policy based on this value.

Policy methods are on-policy and require fresh samples from the Environment (obtained with the policy). Instead, Value methods can benefit from old data obtained from the old policy.

Policy methods are usually less sample-efficient, which means they require more interaction with the Environment. Remember that value methods can benefit from large replay buffers.

Policy methods will be the more natural choice in some situations, and in other situations, value methods will be a better option. In any case, and as we will see from the next post, both families of methods can be combined to achieve hybrid methods that take advantage of the properties of each of them.

In this post, we have explained in detail the REINFORCE algorithm, and we have coded it. As a stochastic gradient method, REINFORCE works well in simple problems and has good theoretical convergence properties. As R. Sutton and A. G. Barto indicate in the textbook Reinforcement Learning: An Introduction, by construction, the expected update over a trajectory is in the same direction as the performance gradient. This assures an improvement in expected performance for a sufficiently small step size α, and convergence to a local optimum under standard stochastic approximation conditions for a decreasing α. However, because we are using the full Monte Carlo return to calculate the gradient, the estimate may be of high variance, and this produces slow learning.

Also, there are some limitations associated with the REINFORCE algorithm:

The update process is very inefficient. We run the policy once, update once, and then throw away the trajectory.

The gradient estimate is very noisy. There is a possibility that the collected trajectory may not be representative of the policy.

There is no clear credit assignment. A trajectory may contain many good/bad actions, and whether or not these actions are reinforced depends only on the final total output.

In summary, REINFORCE works well for a small problem like CartPole, but for a more complicated one, for instance the Pong Environment, it will be painfully slow. Can REINFORCE be improved?
Yes, there are many training algorithms that the research community has created: A2C, A3C, DDPG, TD3, SAC, PPO, among others. However, programming these algorithms requires a more complex mathematical treatment, and their programming becomes more convoluted than that of REINFORCE. For this reason, in the next post, we will introduce Reinforcement Learning frameworks that simplify the use of these advanced methods and, above all, of distributed algorithms.

See you in the next post!

by UPC Barcelona Tech and Barcelona Supercomputing Center

A relaxed introductory series that gradually, and with a practical approach, introduces the reader to this exciting technology that is the real enabler of the latest disruptive advances in the field of Artificial Intelligence.

I started to write this series in May, during the period of lockdown in Barcelona. Honestly, writing these posts in my spare time helped me to #StayAtHome because of the lockdown. Thank you for reading this publication in those days; it justifies the effort I made.

Disclaimers: These posts were written during the period of lockdown in Barcelona as a personal distraction and as dissemination of scientific knowledge, in case it could be of help to someone, but without the purpose of being an academic reference document in the DRL area. If the reader needs a more rigorous document, the last post in the series offers an extensive list of academic resources and books that the reader can consult. The author is aware that this series of posts may contain some errors and would benefit from a revision of the English text if the purpose were an academic document. But although the author would like to improve the content in quantity and quality, his professional commitments do not leave him free time to do so. However, the author agrees to refine all those errors that readers report, as soon as he can.
[ { "code": null, "e": 412, "s": 172, "text": "This is a new post devoted to Policy-Gradient Methods, in the “Deep Reinforcement Learning Explained” series. Policy-Gradient methods are a subclass of Policy-Based methods that estimate an optimal policy’s weights through gradient ascent." }, { "code": null, "e": 1034, "s": 412, "text": "Intuitively, gradient ascent begins with an initial guess for the value of policy’s weights that maximizes the expected return, then, the algorithm evaluates the gradient at that point that indicates the direction of the steepest increase of the function of expected return, and so we can make a small step in that direction. We hope that we end up at a new value of policy’s weights for which the value of the expected return function is a little bit larger. The algorithm then repeats this process of evaluating the gradient and taking steps until it considers that it is eventually reached the maximum expected return." }, { "code": null, "e": 1070, "s": 1034, "text": "Spanish version of this publication" }, { "code": null, "e": 1081, "s": 1070, "text": "medium.com" }, { "code": null, "e": 1397, "s": 1081, "text": "Although we have coded a deterministic policy in the previous post, Policy-based methods can learn either stochastic or deterministic policies. With a stochastic policy, our neural network’s output is an action vector that represents a probability distribution (rather than returning a single deterministic action)." }, { "code": null, "e": 1878, "s": 1397, "text": "The policy we will follow in the new method presented in this Post is selecting an action from this probability distribution. This means that if our Agent ends up in the same state twice, we may not end up taking the same action every time. Such representation of actions as probabilities has many advantages, for instance the advantage of smooth representation: if we change our network weights a bit, the output of the neural network will change, but probably just a little bit." }, { "code": null, "e": 2317, "s": 1878, "text": "In the case of a deterministic policy, with a discrete numbers output, even a small adjustment of the weights can lead to a jump to a different action. However, if the output is a probability distribution, a small change of weights will usually lead to a small change in output distribution. This is a very important property due gradient optimization methods are all about tweaking the parameters of a model a bit to improve the results." }, { "code": null, "e": 2697, "s": 2317, "text": "But how can be changed network’s parameters to improve the policy? If you remember from Post 6, we solved a very similar problem using the Cross-Entropy method: our network took observations as inputs and returned the probability distribution of the actions. In fact, the cross-entropy method is, somehow, a preliminary version of the methods that we will introduce in this Post." }, { "code": null, "e": 3193, "s": 2697, "text": "The key idea underlying policy gradients is reinforcing good actions: to push up the probabilities of actions that lead to higher return, and push down the probabilities of actions that lead to a lower return, until you arrive at the optimal policy. The policy gradient method will iteratively amend the policy network weights (with smooth updates) to make state-action pairs that resulted in positive return more likely, and make state-action pairs that resulted in negative return less likely." 
}, { "code": null, "e": 3480, "s": 3193, "text": "To introduce this idea we will start with a vanilla version (the basic version) of the policy gradient method called REINFORCE algorithm ( original paper). This algorithm is the fundamental policy gradient algorithm on which nearly all the advanced policy gradient algorithms are based." }, { "code": null, "e": 3643, "s": 3480, "text": "Let’s look at a more mathematical definition of the algorithm since it will be good for us in order to understand the most advanced algorithms in following Posts." }, { "code": null, "e": 4038, "s": 3643, "text": "The first thing we need to define is a trajectory, just a state-action-rewards sequence (but we ignore the reward). A trajectory is a little bit more flexible than an episode because there are no restrictions on its length; it can correspond to a full episode or just a part of an episode. We denote the length with a capital H, where H stands for Horizon, and we represent a trajectory with τ:" }, { "code": null, "e": 4265, "s": 4038, "text": "The method REINFORCE is built upon trajectories instead of episodes because maximizing expected return over trajectories (instead of episodes) lets the method search for optimal policies for both episodic and continuing tasks." }, { "code": null, "e": 4538, "s": 4265, "text": "Although for the vast majority of episodic tasks, where a reward is only delivered at the end of the episode, it only makes sense just to use the full episode as a trajectory; otherwise, we don’t have enough reward information to meaningfully estimate the expected return." }, { "code": null, "e": 4652, "s": 4538, "text": "We denote the return for a trajectory τ with R(τ), and it is calculated as the sum reward from that trajectory τ:" }, { "code": null, "e": 4751, "s": 4652, "text": "The parameter Gk is called the total return, or future return, at time step k for the transition k" }, { "code": null, "e": 4963, "s": 4751, "text": "It is the return we expect to collect from time step k until the end of the trajectory, and it can be approximated by adding the rewards from some state in the episode until the end of the episode using gamma γ:" }, { "code": null, "e": 5131, "s": 4963, "text": "Remember that the goal of this algorithm is to find the weights θ of the neural network that maximize the expected return that we denote by U(θ) and can be defined as:" }, { "code": null, "e": 5658, "s": 5131, "text": "To see how it corresponds to the expected return, note that we have expressed the return R(τ) as a function of the trajectory τ. Then, we calculate the weighted average, where the weights are given by P(τ;θ), the probability of each possible trajectory, of all possible values that the return R(τ) can take. Note that probability depends on the weights θ in the neural network because θ defines the policy used to select the actions in the trajectory, which also plays a role in determining the states that the agent observes." }, { "code": null, "e": 5777, "s": 5658, "text": "As we already introduced, one way to determine the value of θ that maximizes U(θ) function is through gradient ascent." 
}, { "code": null, "e": 6035, "s": 5777, "text": "Equivalent to Hill Climbing algorithm presented in the previous Post, intuitively we can visualize that the gradient ascent draws up a strategy to reach the highest point of a hill, U(θ), just iteratively taking small steps in the direction of the gradient:" }, { "code": null, "e": 6108, "s": 6035, "text": "Mathematically, our update step for gradient ascent can be expressed as:" }, { "code": null, "e": 6399, "s": 6108, "text": "where α is the step size that is generally allowed to decay over time (equivalent to the learning rate decay in deep learning). Once we know how to calculate or estimate this gradient, we can repeatedly apply this update step, in the hopes that θ converges to the value that maximizes U(θ)." }, { "code": null, "e": 6738, "s": 6399, "text": "Gradient ascent is closely related to gradient descent, where the differences are that gradient descent is designed to find the minimum of a function (steps in the direction of the negative gradient), whereas gradient ascent will find the maximum (steps in the direction of the gradient). We will use this approach in our code in PyTorch." }, { "code": null, "e": 7074, "s": 6738, "text": "To apply this method, we will need to be able to calculate the gradient ∇​U(θ); however, we won’t be able to calculate the exact value of the gradient since that is computationally too expensive because, to calculate the gradient exactly, we’ll have to consider every possible trajectory, becoming an intractable problem in most cases." }, { "code": null, "e": 7390, "s": 7074, "text": "Instead of doing this, the method samples trajectories using the policy and then use those trajectories only to estimate the gradient. This sampling is equivalent to the approach of Monte Carlo presented in Post 13 of this series, and for this reason, method REINFORCE is also known as Monte Carlo Policy Gradients." }, { "code": null, "e": 7494, "s": 7390, "text": "In summary, the pseudocode that describes in more detail the behavior of this method can be written as:" }, { "code": null, "e": 7693, "s": 7494, "text": "Let’s look a bit more closely at the equation of step 3 in the pseudocode to understand it. We begin by making some simplifying assumptions, for example, assuming that corresponds to a full episode." }, { "code": null, "e": 8040, "s": 7693, "text": "Remember that R(τ) is just the cumulative rewards from the trajectory τ (the only one trajectory) at each time step. Assume that the reward signal at time step t and the sample play we are working with gives the Agent a reward of positive one (Gt=+1) if we won the game and a reward of negative one (Gt=-1) if we lost. In the other hand, the term" }, { "code": null, "e": 8294, "s": 8040, "text": "looks at the probability that the Agent selects action at from state st in time step t. Remember that π with the subscript θ refers to the policy which is parameterized by θ. Then, the full expression takes the gradient of the log of that probability is" }, { "code": null, "e": 8718, "s": 8294, "text": "This will tell us how we should change the weights of the policy θ if we want to increase the log probability of selecting action at from state st. Specifically, suppose we nudge the policy weights by taking a small step in the direction of this gradient. In that case, it will increase the log probability of selecting the action from that state, and if we step in the opposite direction will decrease the log probability." 
}, { "code": null, "e": 8860, "s": 8718, "text": "The following equation will do all of these updates all at once for each state-action pair, at and st, at each time step t in the trajectory:" }, { "code": null, "e": 9232, "s": 8860, "text": "To see this behavior, assume that the Agent won the episode. Then, Gt is just a positive one (+1), and what the sum does is add up all the gradient directions we should step in to increase the log probability of selecting each state-action pair. That’s equivalent to just taking H+1 simultaneous steps where each step corresponds to a state-action pair in the trajectory." }, { "code": null, "e": 9462, "s": 9232, "text": "In the opposite, if the Agent lost, Gt becomes a negative one, which ensures that instead of stepping in the direction of the steepest increase of the log probabilities, the method steps in the direction of the steepest decrease." }, { "code": null, "e": 9618, "s": 9462, "text": "The proof of how to derive the equation that approximates the gradient can be safely skipped, what interests us much more is the meaning of the expression." }, { "code": null, "e": 9811, "s": 9618, "text": "In Gradient methods where we can formulate some probability p which should be maximized, we would actually optimize the log probability logp instead of the probability p for some parameters θ." }, { "code": null, "e": 10092, "s": 9811, "text": "The reason is that generally, work better to optimize logp(x) than p(x) due to the gradient of logp(x) is generally more well-scaled. Remember that probabilities are bounded by 0 and 1 by definition, so the range of values that the optimizer can operate over is limited and small." }, { "code": null, "e": 10539, "s": 10092, "text": "In this case, sometimes probabilities may be extremely tiny or very close to 1, and this runs into numerical issues when optimizing on a computer with limited numerical precision. If we instead use a surrogate objective, namely log p (natural logarithm), we have an objective that has a larger “dynamic range” than raw probability space, since the log of probability space ranges from (-∞,0), and this makes the log probability easier to compute." }, { "code": null, "e": 10639, "s": 10539, "text": "Now, we will explore an implementation of the REINFORCE to solve OpenAI Gym’s Cartpole environment." }, { "code": null, "e": 10750, "s": 10639, "text": "The entire code of this post can be found on GitHub and can be run as a Colab google notebook using this link." }, { "code": null, "e": 10829, "s": 10750, "text": "First, we will import all necessary packages with the following lines of code:" }, { "code": null, "e": 10906, "s": 10829, "text": "import numpy as npimport torchimport gymfrom matplotlib import pyplot as plt" }, { "code": null, "e": 10954, "s": 10906, "text": "And also the OpenAI Gym’s Cartpole Environment:" }, { "code": null, "e": 10984, "s": 10954, "text": "env = gym.make('CartPole-v0')" }, { "code": null, "e": 11191, "s": 10984, "text": "We will build a neural network that serves as a policy network. The policy network will accept a state vectors as inputs, and it will produce a (discrete) probability distribution over the possible actions." 
}, { "code": null, "e": 11487, "s": 11191, "text": "obs_size = env.observation_space.shape[0] n_actions = env.action_space.n HIDDEN_SIZE = 256model = torch.nn.Sequential( torch.nn.Linear(obs_size, HIDDEN_SIZE), torch.nn.ReLU(), torch.nn.Linear(HIDDEN_SIZE, n_actions), torch.nn.Softmax(dim=0) )" }, { "code": null, "e": 11678, "s": 11487, "text": "The model is only two linear layers, with a ReLU activation function for the first layer, and the Softmax function for the last layer. By default, the initialization is with random weights)." }, { "code": null, "e": 11692, "s": 11678, "text": "print (model)" }, { "code": null, "e": 11843, "s": 11692, "text": "With the result of the neural network, the Agent samples from the probability distribution to take an action that will be executed in the Environment." }, { "code": null, "e": 12031, "s": 11843, "text": "act_prob = model(torch.from_numpy(curr_state).float())action = np.random.choice(np.array([0,1]),p=act_prob.data.numpy())prev_state = curr_statecurr_state, _, done, info = env.step(action)" }, { "code": null, "e": 12232, "s": 12031, "text": "The second line of this code samples an action from the probability distribuion produced by the policy network obtained in the firt line. Then in the last line of this code the Agent takes the action." }, { "code": null, "e": 12374, "s": 12232, "text": "The training loop trains the policy network by updating the parameters θ to following the pseudocode steps describes in the previous section." }, { "code": null, "e": 12435, "s": 12374, "text": "First we define the optimizer and initialize some variables:" }, { "code": null, "e": 12580, "s": 12435, "text": "learning_rate = 0.003optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)Horizon = 500MAX_TRAJECTORIES = 500gamma = 0.99score = []" }, { "code": null, "e": 12793, "s": 12580, "text": "where is learning_rate is the step size α , Horizon is the H and gammais γ in the previous pseudocode. Using these variables, the main loop with the number of iterations is defined by MAX_TRAJECTORIESis coded as:" }, { "code": null, "e": 14197, "s": 12793, "text": "for trajectory in range(MAX_TRAJECTORIES): curr_state = env.reset() done = False transitions = [] for t in range(Horizon): act_prob = model(torch.from_numpy(curr_state).float()) action = np.random.choice(np.array([0,1]), p=act_prob.data.numpy()) prev_state = curr_state curr_state, _, done, info = env.step(action) transitions.append((prev_state, action, t+1)) if done: break score.append(len(transitions)) reward_batch = torch.Tensor([r for (s,a,r) in transitions]).flip(dims=(0,)) batch_Gvals =[] for i in range(len(transitions)): new_Gval=0 power=0 for j in range(i,len(transitions)): new_Gval=new_Gval+ ((gamma**power)*reward_batch[j]).numpy() power+=1 batch_Gvals.append(new_Gval) expected_returns_batch=torch.FloatTensor(batch_Gvals) expected_returns_batch /= expected_returns_batch.max() state_batch = torch.Tensor([s for (s,a,r) in transitions]) action_batch = torch.Tensor([a for (s,a,r) in transitions]) pred_batch = model(state_batch) prob_batch = pred_batch.gather(dim=1,index=action_batch .long().view(-1,1)).squeeze() loss= -torch.sum(torch.log(prob_batch)*expected_returns_batch) optimizer.zero_grad() loss.backward() optimizer.step()" }, { "code": null, "e": 14390, "s": 14197, "text": "With score list we will keep track of the trajectory length over training time. We keep track of the actions and states in the list transactions for the transactions of the current trajectory." 
}, { "code": null, "e": 14494, "s": 14390, "text": "Following we compute the expected return for each transaction (code snippet from the previous listing):" }, { "code": null, "e": 14822, "s": 14494, "text": "batch_Gvals =[]for i in range(len(transitions)): new_Gval=0 power=0 for j in range(i,len(transitions)): new_Gval=new_Gval+((gamma**power)*reward_batch[j]).numpy() power+=1 batch_Gvals.append(new_Gval)expected_returns_batch=torch.FloatTensor(batch_Gvals)expected_returns_batch /= expected_returns_batch.max()" }, { "code": null, "e": 15162, "s": 14822, "text": "The listbatch_Gvals is used to compute the expected return for each transaction as it is indicated in the previous pseudocode. The list expected_return stores the expected returns for all the transactions of the current trajectory. Finally this code normalizes the rewards to be within in the [0,1] interval to improve numerical stability." }, { "code": null, "e": 15295, "s": 15162, "text": "The loss function requires an array of action probabilities, prob_batch, for the actions that were taken and the discounted rewards:" }, { "code": null, "e": 15362, "s": 15295, "text": "loss = - torch.sum(torch.log(prob_batch) * expected_returns_batch)" }, { "code": null, "e": 15583, "s": 15362, "text": "For this purpose we recomputes the action probabilities for all the states in the trajectory and subsets the action-probabilities associated with the actions that were actually taken with the following two lines of code:" }, { "code": null, "e": 15717, "s": 15583, "text": "pred_batch = model(state_batch) prob_batch = pred_batch.gather(dim=1,index=action_batch .long().view(-1,1)).squeeze()" }, { "code": null, "e": 15790, "s": 15717, "text": "An important detail is the minus sign in the loss function of this code:" }, { "code": null, "e": 15853, "s": 15790, "text": "loss= -torch.sum(torch.log(prob_batch)*expected_returns_batch)" }, { "code": null, "e": 16266, "s": 15853, "text": "Why we introduced a - in the log_prob? In general, we prefer to set things up so that we are minimizing an objective function instead of maximizing, since it plays nicely with PyTorch’s built-in optimizers (using stochastic gradient descend) . We should instead tell PyTorch to minimize 1-π . This loss function approaches 0 as π nears 1, so we are encouraging the gradients to maximize π for the action we took." }, { "code": null, "e": 16607, "s": 16266, "text": "Also, let’s remember that we use a surrogate objective, namely –log π (where log is the natural logarithm), because we have an objective that has a larger dynamic range than raw probability space (bounded by 0 and 1 by definition), since the log of probability space ranges from (–∞,0), and this makes the log probability easier to compute." 
}, { "code": null, "e": 16726, "s": 16607, "text": "Finally, mention that we included in the code the following lines of code to control the progres of the training loop:" }, { "code": null, "e": 16869, "s": 16726, "text": "if trajectory % 50 == 0 and trajectory>0: print('Trajectory {}\\tAverage Score: {:.2f}' .format(trajectory, np.mean(score[-50:-1])))" }, { "code": null, "e": 16939, "s": 16869, "text": "We can visualise the results of this code running the following code:" }, { "code": null, "e": 17476, "s": 16939, "text": "def running_mean(x): N=50 kernel = np.ones(N) conv_len = x.shape[0]-N y = np.zeros(conv_len) for i in range(conv_len): y[i] = kernel @ x[i:i+N] y[i] /= N return yscore = np.array(score)avg_score = running_mean(score)plt.figure(figsize=(15,7))plt.ylabel(\"Trajectory Duration\",fontsize=12)plt.xlabel(\"Training Epochs\",fontsize=12)plt.plot(score, color='gray' , linewidth=1)plt.plot(avg_score, color='blue', linewidth=3)plt.scatter(np.arange(score.shape[0]),score, color='green' , linewidth=0.3)" }, { "code": null, "e": 17571, "s": 17476, "text": "You should be able to obtain a plot with a nicely increasing trend of the trajectory duration." }, { "code": null, "e": 17648, "s": 17571, "text": "We also can render how the Agent applies the policy with the following code:" }, { "code": null, "e": 18238, "s": 17648, "text": "def watch_agent(): env = gym.make('CartPole-v0') state = env.reset() rewards = [] img = plt.imshow(env.render(mode='rgb_array')) for t in range(2000): pred = model(torch.from_numpy(state).float()) action = np.random.choice(np.array([0,1]), p=pred.data.numpy()) img.set_data(env.render(mode='rgb_array')) plt.axis('off') display.display(plt.gcf()) display.clear_output(wait=True) state, reward, done, _ = env.step(action) rewards.append(reward) if done: print(\"Reward:\", sum([r for r in rewards])) break env.close()watch_agent()" }, { "code": null, "e": 18328, "s": 18238, "text": "Now that we know the two families of methods, what are the main differences between them?" }, { "code": null, "e": 18514, "s": 18328, "text": "Policy methods, such REINFORCE, directly optimize the policy. Value methods, such as DQN, do the same indirectly, learning the value first and, obtaining the policy based on this value." }, { "code": null, "e": 18697, "s": 18514, "text": "Policy methods are on-policy and require fresh samples from the Environment (obtained with the policy). Instead, Value methods can benefit from old data obtained from the old policy." }, { "code": null, "e": 18878, "s": 18697, "text": "Policy methods are usually less sample-efficient, which means they require more interaction with the Environment. Remember that value methods can benefit from large replay buffers." }, { "code": null, "e": 19177, "s": 18878, "text": "Policy methods will be the more natural choice in some situations, and in other situations, value methods will be a better option. In any case, and as we will see from the next post, both families of methods can be combined to achieve hybrid methods that take advantage of each of them’ properties." }, { "code": null, "e": 19387, "s": 19177, "text": "In this post, we have explained in detail the REINFORCE algorithm, and we have coded it. As a stochastic gradient method, REINFORCE works well in simple problems, and has good theorical convergence properties." }, { "code": null, "e": 19868, "s": 19387, "text": "As R. Sutton and G. 
Barton indicates in the textbook Reinforcement Learning: An Introduction, by construction, the expected update over a trajectory is in the same direction as the performance gradient. This assures an improvement in expected performance for sufficiently small, and convergence to a local optimum under standard stochastic approximation conditions for decreasing . However, as a Monte Carlo method REINFORCE may be of high variance and thus produce slow learning." }, { "code": null, "e": 20016, "s": 19868, "text": "But because we are using full Monte-Carlo return for calculating the gradient, the method may be of high variance and it is a problem for learning." }, { "code": null, "e": 20086, "s": 20016, "text": "Also, there are some limitations associated with REINFORCE algorithm:" }, { "code": null, "e": 20500, "s": 20086, "text": "The update process is very inefficient. We run the policy once, update once, and then throw away the trajectory.The gradient estimate is very noisy. There is a possibility that the collected trajectory may not be representative of the policy.There is no clear credit assignment. A trajectory may contain many good/bad actions and whether or not these actions are reinforced depends only on the final total output." }, { "code": null, "e": 20613, "s": 20500, "text": "The update process is very inefficient. We run the policy once, update once, and then throw away the trajectory." }, { "code": null, "e": 20744, "s": 20613, "text": "The gradient estimate is very noisy. There is a possibility that the collected trajectory may not be representative of the policy." }, { "code": null, "e": 20916, "s": 20744, "text": "There is no clear credit assignment. A trajectory may contain many good/bad actions and whether or not these actions are reinforced depends only on the final total output." }, { "code": null, "e": 21549, "s": 20916, "text": "In summary, REINFORCE works well for a small problem like CartPole, but for a more complicated, for instance, Pong Environment, it will be painfully slow. Can REINFORCE be improved? Yes, there are many training algorithms that the research community created: A2C, A3C, DDPG, TD3, SAC, PPO, among others. However, programming these algorithms requires a more complex mathematical treatment, and its programming becomes more convoluted than that of REINFORCE. For this reason, in the next post, we will introduce Reinforcement Learning frameworks that simplify the use of these advanced methods, and above, all distributed algorithms." }, { "code": null, "e": 21570, "s": 21549, "text": "See you in the next!" }, { "code": null, "e": 21628, "s": 21570, "text": "by UPC Barcelona Tech and Barcelona Supercomputing Center" }, { "code": null, "e": 21853, "s": 21628, "text": "A relaxed introductory series that gradually and with a practical approach introduces the reader to this exciting technology that is the real enabler of the latest disruptive advances in the field of Artificial Intelligence." }, { "code": null, "e": 22119, "s": 21853, "text": "I started to write this series in May, during the period of lockdown in Barcelona. Honestly, writing these posts in my spare time helped me to #StayAtHome because of the lockdown. Thank you for reading this publication in those days; it justifies the effort I made." } ]
How and where are String literals in Java stored in memory?
Strings are used to store a sequence of characters in Java; they are treated as objects. The String class of the java.lang package represents a String.

You can create a String either by using the new keyword (like any other object) or by assigning a value to a literal (like any other primitive datatype).

public class StringDemo {
   public static void main(String args[]) {
      String stringObject = new String("Hello how are you");
      System.out.println(stringObject);
      String stringLiteral = "Welcome to Tutorialspoint";
      System.out.println(stringLiteral);
   }
}

Hello how are you
Welcome to Tutorialspoint

Strings are stored on the heap area in a separate memory location known as the String constant pool.

String constant pool: It is a separate block of memory where the String literals are held.

When you store a String directly as

String str1 = "Hello";

then the JVM creates a String object with the given value in the String constant pool.

And whenever we try to create another String as

String str2 = "Hello";

the JVM verifies whether any String object with the same value exists in the String constant pool; if so, instead of creating a new object, the JVM assigns the reference of the existing object to the new variable.

And when we store a String as

String str = new String("Hello");

using the new keyword, a new object with the given value is created irrespective of the contents of the String constant pool.
[ { "code": null, "e": 1214, "s": 1062, "text": "Strings are used to store a sequence of characters in Java, they are treated as objects. The String class of the java.lang package represents a String." }, { "code": null, "e": 1369, "s": 1214, "text": "You can create a String either by using the new keyword (like any other object) or, by assigning value to the literal (like any other primitive datatype)." }, { "code": null, "e": 1646, "s": 1369, "text": "public class StringDemo {\n public static void main(String args[]) {\n String stringObject = new String(\"Hello how are you\");\n System.out.println(stringObject);\n String stringLiteral = \"Welcome to Tutorialspoint\";\n System.out.println(stringLiteral);\n }\n}" }, { "code": null, "e": 1690, "s": 1646, "text": "Hello how are you\nWelcome to Tutorialspoint" }, { "code": null, "e": 1883, "s": 1690, "text": "Strings are stored on the heap area in a separate memory location known as String Constant pool. String constant pool: It is a separate block of memory where all the String variables are held." }, { "code": null, "e": 1910, "s": 1883, "text": "When you store a String as" }, { "code": null, "e": 1933, "s": 1910, "text": "String str1 = \"Hello\";" }, { "code": null, "e": 2024, "s": 1933, "text": "directly, then JVM creates a String object with the given value in a String constant pool." }, { "code": null, "e": 2072, "s": 2024, "text": "And whenever we try to create another String as" }, { "code": null, "e": 2095, "s": 2072, "text": "String str2 = \"Hello\";" }, { "code": null, "e": 2296, "s": 2095, "text": "JVM verifies weather any String object with the same value exists in the String constant pool, if so, instead of creating a new object JVM assigns the reference of existing object to the new variable." }, { "code": null, "e": 2324, "s": 2296, "text": "And when we store String as" }, { "code": null, "e": 2358, "s": 2324, "text": "String str = new String(\"Hello\");" }, { "code": null, "e": 2484, "s": 2358, "text": "using the new keyword, a new object with the given value is created irrespective of the contents of the String constant pool." } ]
Hyperparameter Tuning with Grid Search and Random Search | by Idil Ismiguzel | Towards Data Science
Hyperparameter tuning, also known as hyperparameter optimization, is an important step in any machine learning model training that directly affects model performance.

This article covers two very popular hyperparameter tuning techniques, grid search and random search, and shows how to combine these two algorithms with coarse-to-fine tuning. By the end of the article, you will know their working principles and main differences, which will help you to decide which one to use confidently.

While following the article, I encourage you to check out the Jupyter Notebook on my GitHub for the full analysis and code. 🌠

Hyperparameters are parameters that are defined before training to specify how we want model training to happen. We have full control over hyperparameter settings, and by doing that we control the learning process.

For example, in the random forest model, n_estimators (the number of decision trees we want to have) is a hyperparameter. It can be set to any integer value but, of course, setting it to 10 or 1000 changes the learning process significantly.

Parameters, on the other hand, are found during the training. We have no control over parameter values as they are the result of model training. For example, in linear regression, coefficients and intercept are parameters and are found at the end of model training.

To learn model hyperparameters and their values we can simply call get_params in Python. 🔍

from sklearn.ensemble import RandomForestClassifier

# Instantiate the model
rf_model = RandomForestClassifier()

# Print hyperparameters
rf_model.get_params

RandomForestClassifier(bootstrap=True, ccp_alpha=0.0, class_weight=None, criterion='gini', max_depth=None, max_features='auto', max_leaf_nodes=None, max_samples=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=100, n_jobs=None, oob_score=False, random_state=None, verbose=0, warm_start=False)

As you can see, the random forest classifier has many hyperparameters, and if the model is instantiated without defining hyperparameters, then it will have default values. In the random forest, by default n_estimators=100, which results in a moderately small forest (only 100 trees 🌳).

You need to know that in any model (in this case the random forest) some hyperparameters are more important than others, such as:

n_estimators: Number of decision trees
max_features: Maximum number of features considered while splitting
max_depth: Max depth of the tree
min_samples_leaf: Minimum number of data points in a leaf node
bootstrap: Sampling with or without replacement

And some hyperparameters do not affect model performance, such as:

n_jobs: Number of jobs to run in parallel
random_state: Seed
verbose: Printing information while training continues
oob_score: Whether or not to use out-of-bag samples

Finally, hyperparameter tuning is finding the best combination of hyperparameters that gives the best performance according to the defined scoring metric.

In this article, we will be using the Glass Identification data set from UCI, where we have 9 attributes to predict the type of glass (out of 7 discrete values).

Next, I will separate X and y, and generate train and test sets.

# Separate X and y
X = df.drop(columns=['Type'], axis=1)
y = df['Type']

# Generate training and test sets for X and y
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.8, random_state=1)

After that, we simply run a random forest classifier with default values and get the predictions for the test set.
# Instantiate and fit random forest classifier
rf_model = RandomForestClassifier()
rf_model.fit(X_train, y_train)

# Predict on the test set and compute accuracy
y_pred = rf_model.predict(X_test)
accuracy = accuracy_score(y_test, y_pred)
print(accuracy)

0.81

As you can see, accuracy with the default model is 81%. Now we will see how to use grid search to tune selected hyperparameters.

Grid search starts with defining a search space grid. The grid consists of selected hyperparameter names and values, and grid search exhaustively searches for the best combination of these given values. 🚀

Let's say we decided to define the following parameter grid to optimize some hyperparameters for our random forest classifier.

param_grid:
n_estimators = [50, 100, 200, 300]
max_depth = [2, 4, 6, 8, 10]
min_samples_leaf = [1, 5, 10]
max_features = ['auto', 'sqrt']
bootstrap = [True, False]

We will use the GridSearchCV class from the Scikit-Learn library for this optimization. The first thing to mention is that grid search will have to run and compare 240 models (=4*5*3*2*2, the multiplication of the selected numbers of values).

Moreover, the GridSearchCV class has the option to perform cross validation, to resample the training and test data into multiple folds. By applying cross validation we use every record in the data for training and testing, instead of splitting the dataset one single time into training and testing. If we decide to use cross validation (let's say with 5 folds), this means grid search will have to evaluate 1200 (=240*5) model performances.

Let's have a look at all the input parameters of the GridSearchCV class:

class sklearn.model_selection.GridSearchCV(estimator, param_grid, scoring=None, n_jobs=None, refit=True, cv=None, return_train_score=False)

We start with defining a dictionary for the grid, which will be an input for GridSearchCV.

# Define the grid
param_grid = {'n_estimators': [50, 100, 200, 300],
              'min_samples_leaf': [1, 5, 10],
              'max_depth': [2, 4, 6, 8, 10],
              'max_features': ['auto', 'sqrt'],
              'bootstrap': [True, False]}

# Instantiate GridSearchCV
model_gridsearch = GridSearchCV(estimator=rf_model,
                                param_grid=param_grid,
                                scoring='accuracy',
                                n_jobs=4,
                                cv=5,
                                refit=True,
                                return_train_score=True)

As decided previously, estimator is the RandomForestClassifier (rf_model) and param_grid is the parameter grid we defined above. scoring is the desired evaluation metric, such as accuracy for a classification task, and n_jobs executes model evaluation in parallel; but be careful, if you set n_jobs=-1 it uses all the processors! Since grid search is an uninformed tuning method, we can take advantage of running models in parallel, as their outcome does not affect other models' runs.

Setting refit=True refits the estimator with the best-found hyperparameter values at the end, so we do not need to code that in an additional step. cv defines the cross validation strategy, and by setting return_train_score=True we can keep the logs of the model runs for further analysis.

# Record the current time
start = time()

# Fit the selected model
model_gridsearch.fit(X_train, y_train)

# Print the time spent and the number of models run
print("GridSearchCV took %.2f seconds for %d candidate parameter settings."
      % ((time() - start), len(model_gridsearch.cv_results_['params'])))

GridSearchCV took 247.79 seconds for 240 candidate parameter settings.
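Before moving on to the test set, one optional step (not shown in the original walkthrough) is to inspect the full search log. The fitted GridSearchCV object exposes it through cv_results_, which can be loaded into a pandas DataFrame to compare all 240 candidates at a glance:

import pandas as pd

# One row per hyperparameter combination, ranked by mean cross-validation score
results_df = pd.DataFrame(model_gridsearch.cv_results_)
cols = ['params', 'mean_test_score', 'std_test_score', 'rank_test_score']
print(results_df[cols].sort_values('rank_test_score').head())

This makes it easy to see not only the single best combination but also how close the runner-up settings were.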
# Predict on the test set and compute accuracy
y_pred_grid = model_gridsearch.predict(X_test)
accuracy_grid = accuracy_score(y_test, y_pred_grid)

0.88

As you can see, simply tuning some hyperparameters increased the initial accuracy from 81% to 88%, spending 247 seconds on hyperparameter tuning.

Grid search always finds the best-performing model with the hyperparameter values mentioned in the grid. It is also easy to implement and explain. However, with an increasing number of hyperparameters and values to test, it can easily become computationally expensive, because it models all of the combinations of hyperparameters. The drawback of not learning from already-run models makes grid search inefficient and time-consuming. In addition, the parameter grid plays an extremely important role: even though grid search will always find the best combination in the grid, if the parameter grid is selected poorly this best combination will not perform well.

After running GridSearchCV, we can return the following attributes for further investigation:

cv_results_
best_estimator_
best_score_
best_params_

Let's look at some of them:

print(model_gridsearch.best_params_)

{'bootstrap': True, 'max_depth': 10, 'max_features': 'sqrt', 'min_samples_leaf': 1, 'n_estimators': 300}

print(model_gridsearch.best_estimator_)

RandomForestClassifier(bootstrap=True, ccp_alpha=0.0, class_weight=None, criterion='gini', max_depth=10, max_features='sqrt', max_leaf_nodes=None, max_samples=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=300, n_jobs=None, oob_score=False, random_state=None, verbose=0, warm_start=False)

In random search, we define distributions for each hyperparameter, which can be defined uniformly or with a sampling method. The key difference from grid search is that in random search not all the values are tested, and the values tested are selected at random.

For example, if there are 500 values in the distribution and if we input n_iter=50, then random search will randomly sample 50 values to test. By doing that, random search optimizes the time spent, and not defining an absolute grid allows it to explore other values in the given distribution.

Since random search does not try every hyperparameter combination, it does not necessarily return the best-performing values, but it returns a relatively well-performing model in a significantly shorter time. ⏰

param_distributions:
n_estimators = list(range(100, 300, 10))
min_samples_leaf = list(range(1, 50))
max_depth = list(range(2, 20))
max_features = ['auto', 'sqrt']
bootstrap = [True, False]

RandomizedSearchCV from Scikit-Learn has the following input parameters:

class sklearn.model_selection.RandomizedSearchCV(estimator, param_distributions, n_iter=10, scoring=None, n_jobs=None, refit=True, cv=None, verbose=0, pre_dispatch='2*n_jobs', random_state=None, error_score=nan, return_train_score=False)

We start with defining a dictionary for the parameter distributions, which will be an input for RandomizedSearchCV.
# Specify distributions to sample from
param_dist = {'n_estimators': list(range(50, 300, 10)),
              'min_samples_leaf': list(range(1, 50)),
              'max_depth': list(range(2, 20)),
              'max_features': ['auto', 'sqrt'],
              'bootstrap': [True, False]}

# Specify number of search iterations
n_iter = 50

# Instantiate RandomizedSearchCV
model_random_search = RandomizedSearchCV(estimator=rf_model,
                                         param_distributions=param_dist,
                                         n_iter=n_iter)

Most parameters are similar to GridSearchCV; however, in RandomizedSearchCV we have param_distributions for defining the search distributions. n_iter is used to limit the total number of model runs (in other words, the number of parameter combinations that are sampled from the grid). Be careful with the trade-off here, since setting a high n_iter increases search runtime, and setting a low n_iter decreases model quality.

# Record the current time
start = time()

# Fit the selected model
model_random_search.fit(X_train, y_train)

# Print the time spent and the number of models run
print("RandomizedSearchCV took %.2f seconds for %d candidate parameter settings."
      % ((time() - start), len(model_random_search.cv_results_['params'])))

RandomizedSearchCV took 64.17 seconds for 50 candidate parameter settings.

# Predict on the test set and compute accuracy
y_pred_random = model_random_search.predict(X_test)
accuracy_random = accuracy_score(y_test, y_pred_random)

0.86

As you can see, in only 64 seconds we were able to increase the accuracy of the initial model from 81% to 86%. Random search did not reach the 88% accuracy of grid search; however, this is the trade-off between the two tuning methods. Finally, let's have a look at the best parameters found with random search.

print(model_random_search.best_params_)

{'n_estimators': 230, 'min_samples_leaf': 4, 'max_features': 'auto', 'max_depth': 13, 'bootstrap': False}

From the beginning of the article, we have seen how to apply grid search and random search for hyperparameter optimization. Using grid search, we were able to test all of the hyperparameter values given in the grid to find the best among them. However, an increased number of hyperparameters easily becomes a bottleneck. Ideally, we can combine grid search with random search to prevent this inefficiency.

In coarse-to-fine tuning, we start with a random search to find the promising value ranges for each hyperparameter. For example, if the random search returns high performance for n_estimators between 150 and 200, this is the range we want grid search to focus on. After getting a focus area for each hyperparameter using random search, we can define the grid accordingly for grid search to find the best values amongst them (a small illustrative sketch of this two-step procedure is given at the end of the article).

In the illustration above, first we test values at random to find a focus area with random search. Second, we test all values in this focus area with grid search and eventually find the optimal value.

Another useful strategy to fine-tune machine learning models is using ensemble learning techniques. Different from hyperparameter tuning, ensemble learning aims to improve model performance by combining multiple models into a group model. This group model aims to perform better than each model alone. The most used ensemble learning methods are voting, bagging, boosting and stacking, and if you want to learn more or refresh your knowledge, you can read my article on the topic. 🐬

In this article, we used a random forest classifier to predict the "type of glass" using 9 different attributes. The initial random forest classifier with default hyperparameter values reached 81% accuracy on the test set.
Using grid search we were able to tune selected hyperparameters in 247 seconds and increase accuracy to 88%. Next, we did the same job using random search, and in 64 seconds we increased accuracy to 86%. Last but not least, we discussed coarse-to-fine tuning to combine these two methods and get the advantages of both!

I hope you enjoyed reading about hyperparameter tuning and found the article useful for your analyses!

If you liked this article, you can read my other articles here and follow me on Medium. Let me know if you have any questions or suggestions. ✨

Enjoy this article? Become a member for more!
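As a short appendix to the coarse-to-fine idea discussed above, here is an illustrative sketch of how the two searches could be chained. The narrowed windows (±20 estimators, ±2 depth) are hypothetical values chosen only for illustration, not results from this article:

# Step 1: broad random search (coarse), reusing param_dist from above
coarse_search = RandomizedSearchCV(estimator=RandomForestClassifier(),
                                   param_distributions=param_dist,
                                   n_iter=50, cv=5, scoring='accuracy')
coarse_search.fit(X_train, y_train)

# Step 2: narrow grid search (fine) around the values found in step 1
best_n = coarse_search.best_params_['n_estimators']
best_depth = coarse_search.best_params_['max_depth']
fine_grid = {'n_estimators': [max(best_n - 20, 10), best_n, best_n + 20],
             'max_depth': [max(best_depth - 2, 1), best_depth, best_depth + 2]}

fine_search = GridSearchCV(estimator=RandomForestClassifier(),
                           param_grid=fine_grid, cv=5, scoring='accuracy')
fine_search.fit(X_train, y_train)
print(fine_search.best_params_)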
[ { "code": null, "e": 337, "s": 172, "text": "Hyperparameter tuning also known as hyperparameter optimization is an important step in any machine learning model training that directly affects model performance." }, { "code": null, "e": 658, "s": 337, "text": "This article covers two very popular hyperparameter tuning techniques: grid search and random search and shows how to combine these two algorithms with coarse-to-fine tuning. By the end of the article, you will know their working principles and main differences that will help you to decide which one to use confidently." }, { "code": null, "e": 784, "s": 658, "text": "While following the article, I encourage you to check out the Jupyter Notebook on my GitHub for the full analysis and code. 🌠" }, { "code": null, "e": 998, "s": 784, "text": "Hyperparameters are parameters that are defined before training to specify how we want model training to happen. We have full control over hyperparameter settings and by doing that we control the learning process." }, { "code": null, "e": 1233, "s": 998, "text": "For example in the random forest model n_estimators (number of decision trees we want to have) is a hyperparameter. It can be set to any integer value but of course, setting it to 10 or 1000 changes the learning process significantly." }, { "code": null, "e": 1494, "s": 1233, "text": "Parameters, on the other hand, are found during the training. We have no control over parameter values as they are the result of model training. For example, in linear regression coefficients and intercept are parameters and found at the end of model training." }, { "code": null, "e": 1585, "s": 1494, "text": "To learn model hyperparameters and their values we can simply call get_params in Python. 🔍" }, { "code": null, "e": 1737, "s": 1585, "text": "from sklearn.ensemble import RandomForestClassifier# Instantiate the modelrf_model = RandomForestClassifier()# Print hyperparametersrf_model.get_params" }, { "code": null, "e": 2123, "s": 1737, "text": "RandomForestClassifier(bootstrap=True, ccp_alpha=0.0, class_weight=None, criterion=’gini’, max_depth=None, max_features=’auto’, max_leaf_nodes=None, max_samples=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=100, n_jobs=None, oob_score=False, random_state=None, verbose=0, warm_start=False)" }, { "code": null, "e": 2401, "s": 2123, "text": "As you can see random forest classifier has many hyperparameters and if the model is instantiated without defining hyperparameters, then it will have default values. In the random forest by default n_estimators=100 which results in a moderately small forest. 
(only 100 trees 🌳)" }, { "code": null, "e": 2526, "s": 2401, "text": "You need to know that in any model (in this case random forest) some hyperparameters are more important than others such as:" }, { "code": null, "e": 2773, "s": 2526, "text": "n_estimators: Number of decision treesmax_features: Maximum number of features considered while splittingmax_depth: Max depth of the treemin_samples_leaf: Minimum number of data points in a leaf nodebootstrap: Sampling with or without replacement" }, { "code": null, "e": 2839, "s": 2773, "text": "And some hyperparameters do not affect model performance such as:" }, { "code": null, "e": 3004, "s": 2839, "text": "n_jobs: Number of jobs to run in parallelrandom_state: Seedverbose: Printing information while training continuesoob_score: Whether or not to use out-of-bag samples" }, { "code": null, "e": 3159, "s": 3004, "text": "Finally, hyperparameter tuning is finding the best combination of hyperparameters that gives the best performance according to the defined scoring metric." }, { "code": null, "e": 3317, "s": 3159, "text": "In this article, we will be using Glass Identification data set from UCI, where we have 9 attributes to predict the type of glass (out of 7 discrete values)." }, { "code": null, "e": 3382, "s": 3317, "text": "Next, I will separate X and y, and generate train and test sets." }, { "code": null, "e": 3586, "s": 3382, "text": "# Seperate X and yX = df.drop(columns=['Type'], axis=1)y = df['Type']# Generate training and test sets for X and yX_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.8, random_state=1)" }, { "code": null, "e": 3701, "s": 3586, "text": "After that, we simply run a random forest classifier with default values and get the predictions for the test set." }, { "code": null, "e": 3945, "s": 3701, "text": "# Instantiate and fit random forest classifierrf_model = RandomForestClassifier()rf_model.fit(X_train, y_train)# Predict on the test set and call accuracyy_pred = rf_model.predict(X_test)accuracy = accuracy_score(y_test, y_pred)print(accuracy)" }, { "code": null, "e": 3950, "s": 3945, "text": "0.81" }, { "code": null, "e": 4078, "s": 3950, "text": "As you can see accuracy with the default model is 81%. Now we will see how to use grid search to tune selected hyperparameters." }, { "code": null, "e": 4279, "s": 4078, "text": "Grid Search starts with defining a search space grid. The grid consists of selected hyperparameter names and values, and grid search exhaustively searches the best combination of these given values. 🚀" }, { "code": null, "e": 4406, "s": 4279, "text": "Let’s say we decided to define the following parameter grid to optimize some hyperparameters for our random forest classifier." }, { "code": null, "e": 4565, "s": 4406, "text": "param_grid:n_estimators = [50, 100, 200, 300]max_depth = [2, 4, 6, 8, 10]min_samples_leaf = [1, 5, 10]max_features = ['auto', 'sqrt']bootstrap = [True, False]" }, { "code": null, "e": 4790, "s": 4565, "text": "We will use GridSearchCV class from Scikit-Learn library for this optimization. The first thing to mention is grid search will have to run and compare 240 models (=4*5*3*2*2, multiplication of the selected number of values)." }, { "code": null, "e": 5215, "s": 4790, "text": "Moreover, GridSearchCV class has the option to perform cross validation to resample the training and test data into multiple folds. 
By applying cross validation we use every record in data for training and testing instead of splitting the dataset at one time as training and testing. If we decide to use cross validation (let’s say with 5 folds) this means grid search will have to evaluate 1200 (=240*5) model performances." }, { "code": null, "e": 5284, "s": 5215, "text": "Let’s have a look at all the input parameters of GridSearchCV class:" }, { "code": null, "e": 5424, "s": 5284, "text": "class sklearn.model_selection.GridSearchCV(estimator, param_grid, scoring=None, n_jobs=None, refit=True, cv=None, return_train_score=False)" }, { "code": null, "e": 5516, "s": 5424, "text": "We start with defining a dictionary for the grid which we will be an input for GridSeachCv." }, { "code": null, "e": 5872, "s": 5516, "text": "# Define the gridparam_grid = {'n_estimators': [50, 100, 200, 300],'min_samples_leaf': [1, 5, 10],'max_depth': [2, 4, 6, 8, 10],'max_features': ['auto', 'sqrt'],'bootstrap': [True, False]}# Instantiate GridSearchCVmodel_gridsearch = GridSearchCV(estimator=rf_model,param_grid=param_grid,scoring='accuracy',n_jobs=4,cv=5,refit=True,return_train_score=True)" }, { "code": null, "e": 6341, "s": 5872, "text": "As decided previouslyestimator is RandomForestClassifier (rf_model) and param_grid is the parameter grid we defined above. scoring is the desired evaluation metric such as accuracy for a classification task and n_jobs executes model evaluation in parallel, but be careful if you set n_jobs=-1 it uses all the processors! Since grid search is an uninformed tuning, we can take advantage of running models in parallel as their outcome does not affect other models’ runs." }, { "code": null, "e": 6626, "s": 6341, "text": "Settingrefit=True refits the estimator with the best-found hyperparameter values in the end so we do not need to code them in an additional step. cv defines the cross validation strategy and setting return_train_score=True we can print the logs of model runs to make further analysis." }, { "code": null, "e": 6918, "s": 6626, "text": "# Record the current time start = time()# Fit the selected modelmodel_gridsearch.fit(X_train, y_train)# Print the time spend and number of models ranprint(\"GridSearchCV took %.2f seconds for %d candidate parameter settings.\" % ((time() - start), len(model_gridsearch.cv_results_['params'])))" }, { "code": null, "e": 6989, "s": 6918, "text": "GridSearchCV took 247.79 seconds for 240 candidate parameter settings." }, { "code": null, "e": 7130, "s": 6989, "text": "# Predict on the test set and call accuracyy_pred_grid = model_gridsearch.predict(X_test)accuracy_grid = accuracy_score(y_test, y_pred_grid)" }, { "code": null, "e": 7135, "s": 7130, "text": "0.88" }, { "code": null, "e": 7280, "s": 7135, "text": "As you can see, simply tuning some hyperparameters increased the initial accuracy from 81% to 88% spending 247 seconds to hyperparameter tuning." }, { "code": null, "e": 7925, "s": 7280, "text": "Grid search always finds the best-performing model with hyperparameter values mentioned in the grid. It is also easy to implement and explain. However, with the increasing number of hyperparameters and values to test it can easily become computationally expensive because it models all of the combinations of hyperparameters. The drawback of not learning from already ran models makes grid search inefficient and time-consuming. 
In addition, parameter grid plays an extremely important role: even though grid search will always find the best combination if the parameter grid is selected poorly the best combination will not be performing well." }, { "code": null, "e": 8022, "s": 7925, "text": "After running the GridSeachCV, we can return the following attributes for further investigation:" }, { "code": null, "e": 8034, "s": 8022, "text": "cv_results_" }, { "code": null, "e": 8050, "s": 8034, "text": "best_estimator_" }, { "code": null, "e": 8062, "s": 8050, "text": "best_score_" }, { "code": null, "e": 8075, "s": 8062, "text": "best_params_" }, { "code": null, "e": 8103, "s": 8075, "text": "Let’s look at some of them:" }, { "code": null, "e": 8140, "s": 8103, "text": "print(model_gridsearch.best_params_)" }, { "code": null, "e": 8245, "s": 8140, "text": "{‘bootstrap’: True, ‘max_depth’: 10, ‘max_features’: ‘sqrt’, ‘min_samples_leaf’: 1, ‘n_estimators’: 300}" }, { "code": null, "e": 8285, "s": 8245, "text": "print(model_gridsearch.best_estimator_)" }, { "code": null, "e": 8669, "s": 8285, "text": "RandomForestClassifier(bootstrap=True, ccp_alpha=0.0, class_weight=None, criterion=’gini’, max_depth=10, max_features=’sqrt’, max_leaf_nodes=None, max_samples=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=300, n_jobs=None, oob_score=False, random_state=None, verbose=0, warm_start=False)" }, { "code": null, "e": 8922, "s": 8669, "text": "In random search, we define distributions for each hyperparameter which can be defined uniformly or with a sampling method. The key difference from grid search is in random search, not all the values are tested and values tested are selected at random." }, { "code": null, "e": 9208, "s": 8922, "text": "For example, if there are 500 values in the distribution and if we input n_iter=50 then random search will randomly sample 50 values to test. By doing that random search optimizes time spend and not defining an absolute grid allows it to explore other values in the given distribution." }, { "code": null, "e": 9419, "s": 9208, "text": "Since random search does not try every hyperparameter combination, it does not necessarily return the best performing values, but it returns a relatively good performing model in a significantly shorter time. ⏰" }, { "code": null, "e": 9601, "s": 9419, "text": "param_distributionsn_estimators = list(range(100, 300, 10))min_samples_leaf = list(range(1, 50))max_depth = list(range(2, 20)max_features = ['auto', 'sqrt']bootstrap = [True, False]" }, { "code": null, "e": 9674, "s": 9601, "text": "RandomizedSearchCV from Scikit-Learn has the following input parameters:" }, { "code": null, "e": 9912, "s": 9674, "text": "class sklearn.model_selection.RandomizedSearchCV(estimator, param_distributions, n_iter=10, scoring=None, n_jobs=None, refit=True, cv=None, verbose=0, pre_dispatch='2*n_jobs', random_state=None, error_score=nan, return_train_score=False)" }, { "code": null, "e": 10023, "s": 9912, "text": "We start with defining a dictionary for parameter distributions which will be an input for RandomizedSearchCV." 
}, { "code": null, "e": 10429, "s": 10023, "text": "# specify distributions to sample fromparam_dist = {'n_estimators': list(range(50, 300, 10)),'min_samples_leaf': list(range(1, 50)),'max_depth': list(range(2, 20)),'max_features': ['auto', 'sqrt'],'bootstrap': [True, False]}# specify number of search iterationsn_iter = 50# Instantiate RandomSearchCVmodel_random_search = RandomizedSearchCV(estimator=rf_model,param_distributions=param_dist,n_iter=n_iter)" }, { "code": null, "e": 10835, "s": 10429, "text": "Most parameters are similar to GridSearchCV, however, in RandomizedSearchCV we have param_distributions for defining the search distributions. n_iter is used to limit the total number of model runs (in other words parameter combinations that are sampled from the grid). Be careful with the trade-off here, since setting high n_iter increases search runtime, and setting low n_iter decreases model quality." }, { "code": null, "e": 11139, "s": 10835, "text": "# Record the current time start = time()# Fit the selected modelmodel_random_search.fit(X_train, y_train)# Print the time spend and number of models ranprint(\"RandomizedSearchCV took %.2f seconds for %d candidate parameter settings.\" % ((time() - start), len(model_random_search.cv_results_['params'])))" }, { "code": null, "e": 11214, "s": 11139, "text": "RandomizedSearchCV took 64.17 seconds for 50 candidate parameter settings." }, { "code": null, "e": 11364, "s": 11214, "text": "# Predict on the test set and call accuracyy_pred_random = model_random_search.predict(X_test)accuracy_random = accuracy_score(y_test, y_pred_random)" }, { "code": null, "e": 11369, "s": 11364, "text": "0.86" }, { "code": null, "e": 11586, "s": 11369, "text": "As you see only in 64 seconds we were able to increase accuracy of the initial model from 81% to 86%. Random search did not reach 88% accuracy of grid search, however, this is the tradeoff between two tuning methods." }, { "code": null, "e": 11662, "s": 11586, "text": "Finally, let’s have a look at the best parameters found with random search." }, { "code": null, "e": 11702, "s": 11662, "text": "print(model_random_search.best_params_)" }, { "code": null, "e": 11808, "s": 11702, "text": "{‘n_estimators’: 230, ‘min_samples_leaf’: 4, ‘max_features’: ‘auto’, ‘max_depth’: 13, ‘bootstrap’: False}" }, { "code": null, "e": 11932, "s": 11808, "text": "From the beginning of the article, we have seen how to apply grid search and random search for hyperparameter optimization." }, { "code": null, "e": 12214, "s": 11932, "text": "Using grid search, we were able to test all of the hyperparameter values given in the grid to find the best among them. However, an increased number of hyperparameters easily becomes a bottleneck. Ideally, we can combine grid search with random search to prevent this inefficiency." }, { "code": null, "e": 12637, "s": 12214, "text": "In coarse-to-fine tuning, we start with a random search to find the promising value ranges for each hyperparameter. For example, if the random search returns high performance for n_estimators between 150 and 200, this is the range we want grid search to focus on. After getting focus area for each hyperparameter using random search, we can define the grid accordingly for grid search to find the best values amongst them." }, { "code": null, "e": 12852, "s": 12637, "text": "In the illustration above first, we are testing values at random to find a focus area with random search. 
Second, we are testing all values in this focus area with grid search and eventually find the optimal value." }, { "code": null, "e": 13334, "s": 12852, "text": "Another useful strategy to fine-tune machine learning models is using ensemble learning techniques. Different from hyperparameter tuning, ensemble learning aims to improve model performance by combining multiple models into a group model. This group model aims to perform better than each model alone. The most used ensemble learning methods are voting, bagging, boosting and stacking and if you want to learn more or refresh your knowledge, you can read my article on the topic. 🐬" }, { "code": null, "e": 13863, "s": 13334, "text": "In this article, we used a random forest classifier to predict “type of glass” using 9 different attributes. Initial random forest classifier with default hyperparameter values reached 81% accuracy on the test. Using grid search we were able to tune selected hyperparameters in 247 seconds and increased accuracy to 88%. Next, we did the same job using random search and in 64 seconds we increased accuracy to 86%. Last but not least, we discussed coarse-to-fine tuning to combine these two methods and get advantages from both!" }, { "code": null, "e": 13965, "s": 13863, "text": "I hope you enjoyed reading about hyperparameter tuning and find the article useful for your analyses!" }, { "code": null, "e": 14108, "s": 13965, "text": "If you liked this article, you can read my other articles here and follow me on Medium. Let me know if you have any questions or suggestions.✨" } ]
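A minimal sketch of the coarse-to-fine idea described in the article above, assuming the rf_model, X_train and y_train objects defined there are available; the parameter ranges, n_iter and window sizes below are illustrative choices, not the author's exact settings. A wide RandomizedSearchCV pass locates a promising region, and its best_params_ then seed a narrow GridSearchCV pass:

from sklearn.model_selection import RandomizedSearchCV, GridSearchCV

# Coarse pass: sample widely to find a promising region (assumed ranges).
coarse = RandomizedSearchCV(
    estimator=rf_model,
    param_distributions={'n_estimators': list(range(50, 300, 10)),
                         'max_depth': list(range(2, 20))},
    n_iter=20, scoring='accuracy', cv=5)
coarse.fit(X_train, y_train)
best_n = coarse.best_params_['n_estimators']
best_depth = coarse.best_params_['max_depth']

# Fine pass: exhaustive search in a narrow window around the coarse optimum.
fine = GridSearchCV(
    estimator=rf_model,
    param_grid={'n_estimators': [max(10, best_n - 20), best_n, best_n + 20],
                'max_depth': [max(2, best_depth - 2), best_depth, best_depth + 2]},
    scoring='accuracy', cv=5, refit=True)
fine.fit(X_train, y_train)
print(fine.best_params_, fine.best_score_)

Because the fine grid contains only a handful of combinations, the exhaustive step stays cheap while still searching the neighbourhood that the random pass identified.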
How to handle right-to-left and left-to-right swipe gestures on Android using Kotlin?
This example demonstrates how to handle right-to-left and left-to-right swipe gestures on Android using Kotlin. Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project. Step 2 − Add the following code to res/layout/activity_main.xml. <?xml version="1.0" encoding="utf-8"?> <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="match_parent" android:padding="4dp"> <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_centerHorizontal="true" android:layout_marginTop="70dp" android:background="#008080" android:padding="5dp" android:text="TutorialsPoint" android:textColor="#fff" android:textSize="24sp" android:textStyle="bold" /> <TextView android:id="@+id/textView" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_centerInParent="true" android:text="Right-to-Left and Left-to-Right swipe gestures" android:textAlignment="center" android:textColor="@android:color/holo_blue_light" android:textSize="24sp" android:textStyle="bold" /> </RelativeLayout> Step 3 − Add the following code to src/MainActivity.kt import android.os.Bundle import android.view.GestureDetector import android.view.MotionEvent import android.widget.Toast import androidx.appcompat.app.AppCompatActivity import kotlin.math.abs class MainActivity : AppCompatActivity(), GestureDetector.OnGestureListener { lateinit var gestureDetector: GestureDetector private val swipeThreshold = 100 private val swipeVelocityThreshold = 100 override fun onCreate(savedInstanceState: Bundle?) { super.onCreate(savedInstanceState) setContentView(R.layout.activity_main) title = "KotlinApp" gestureDetector = GestureDetector(this) } override fun onTouchEvent(event: MotionEvent): Boolean { return if (gestureDetector.onTouchEvent(event)) { true } else { super.onTouchEvent(event) } } override fun onDown(p0: MotionEvent?): Boolean { return false } override fun onShowPress(p0: MotionEvent?) { return } override fun onSingleTapUp(p0: MotionEvent?): Boolean { return false } override fun onScroll(p0: MotionEvent?, p1: MotionEvent?, p2: Float, p3: Float): Boolean { return false } override fun onLongPress(p0: MotionEvent?) { return } override fun onFling(e1: MotionEvent, e2: MotionEvent, velocityX: Float, velocityY: Float): Boolean { try { val diffY = e2.y - e1.y val diffX = e2.x - e1.x if (abs(diffX) > abs(diffY)) { if (abs(diffX) > swipeThreshold && abs(velocityX) > swipeVelocityThreshold) { if (diffX > 0) { Toast.makeText(applicationContext, "Left to Right swipe gesture", Toast.LENGTH_SHORT).show() } else { Toast.makeText(applicationContext, "Right to Left swipe gesture", Toast.LENGTH_SHORT).show() } } } } catch (exception: Exception) { exception.printStackTrace() } return true } } Step 4 − Add the following code to androidManifest.xml <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.example.q11"> <application android:allowBackup="true" android:icon="@mipmap/ic_launcher" android:label="@string/app_name" android:roundIcon="@mipmap/ic_launcher_round" android:supportsRtl="true" android:theme="@style/AppTheme"> <activity android:name=".MainActivity"> <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> </application> </manifest> Let's try to run your application. 
I assume you have connected your actual Android mobile device with your computer. To run the app from Android Studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen. Click here to download the project code.
[ { "code": null, "e": 1174, "s": 1062, "text": "This example demonstrates how to handle right-to-left and left-to-right swipe gestures on Android using Kotlin." }, { "code": null, "e": 1303, "s": 1174, "text": "Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project." }, { "code": null, "e": 1368, "s": 1303, "text": "Step 2 − Add the following code to res/layout/activity_main.xml." }, { "code": null, "e": 2380, "s": 1368, "text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<RelativeLayout xmlns:android=\"http://schemas.android.com/apk/res/android\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"match_parent\"\n android:padding=\"4dp\">\n <TextView\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:layout_centerHorizontal=\"true\"\n android:layout_marginTop=\"70dp\"\n android:background=\"#008080\"\n android:padding=\"5dp\"\n android:text=\"TutorialsPoint\"\n android:textColor=\"#fff\"\n android:textSize=\"24sp\"\n android:textStyle=\"bold\" />\n <TextView\n android:id=\"@+id/textView\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"wrap_content\"\n android:layout_centerInParent=\"true\"\n android:text=\"Right-to-Left and Left-to-Right swipe gestures\"\n android:textAlignment=\"center\"\n android:textColor=\"@android:color/holo_blue_light\"\n android:textSize=\"24sp\"\n android:textStyle=\"bold\" />\n</RelativeLayout>" }, { "code": null, "e": 2435, "s": 2380, "text": "Step 3 − Add the following code to src/MainActivity.kt" }, { "code": null, "e": 4424, "s": 2435, "text": "import android.os.Bundle\nimport android.view.GestureDetector\nimport android.view.MotionEvent\nimport android.widget.Toast\nimport androidx.appcompat.app.AppCompatActivity\nimport kotlin.math.abs\nclass MainActivity : AppCompatActivity(), GestureDetector.OnGestureListener {\n lateinit var gestureDetector: GestureDetector\n private val swipeThreshold = 100\n private val swipeVelocityThreshold = 100\n override fun onCreate(savedInstanceState: Bundle?) {\n super.onCreate(savedInstanceState)\n setContentView(R.layout.activity_main)\n title = \"KotlinApp\"\n gestureDetector = GestureDetector(this)\n }\n override fun onTouchEvent(event: MotionEvent): Boolean {\n return if (gestureDetector.onTouchEvent(event)) {\n true\n }\n else {\n super.onTouchEvent(event)\n }\n }\n override fun onDown(p0: MotionEvent?): Boolean {\n return false\n }\n override fun onShowPress(p0: MotionEvent?) {\n return\n }\n override fun onSingleTapUp(p0: MotionEvent?): Boolean {\n return false\n }\n override fun onScroll(p0: MotionEvent?, p1: MotionEvent?, p2: Float, p3: Float): Boolean {\n return false\n }\n override fun onLongPress(p0: MotionEvent?) 
{\n return\n }\n override fun onFling(e1: MotionEvent, e2: MotionEvent, velocityX: Float, velocityY: Float): Boolean {\n try {\n val diffY = e2.y - e1.y\n val diffX = e2.x - e1.x\n if (abs(diffX) > abs(diffY)) {\n if (abs(diffX) > swipeThreshold && abs(velocityX) > swipeVelocityThreshold) {\n if (diffX > 0) {\n Toast.makeText(applicationContext, \"Left to Right swipe gesture\", Toast.LENGTH_SHORT).show()\n }\n else {\n Toast.makeText(applicationContext, \"Right to Left swipe gesture\", Toast.LENGTH_SHORT).show()\n }\n }\n }\n }\n catch (exception: Exception) {\n exception.printStackTrace()\n }\n return true\n }\n}" }, { "code": null, "e": 4479, "s": 4424, "text": "Step 4 − Add the following code to androidManifest.xml" }, { "code": null, "e": 5153, "s": 4479, "text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<manifest xmlns:android=\"http://schemas.android.com/apk/res/android\"\n package=\"com.example.q11\">\n <application\n android:allowBackup=\"true\"\n android:icon=\"@mipmap/ic_launcher\"\n android:label=\"@string/app_name\"\n android:roundIcon=\"@mipmap/ic_launcher_round\"\n android:supportsRtl=\"true\"\n android:theme=\"@style/AppTheme\">\n <activity android:name=\".MainActivity\">\n <intent-filter>\n <action android:name=\"android.intent.action.MAIN\" />\n <category android:name=\"android.intent.category.LAUNCHER\" />\n </intent-filter>\n </activity>\n </application>\n</manifest>" }, { "code": null, "e": 5502, "s": 5153, "text": "Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen." }, { "code": null, "e": 5543, "s": 5502, "text": "Click here to download the project code." } ]
How to set Wallpaper Image programmatically in Android?
This example demonstrates how do I set Android Wallpaper image in Android. Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project Step 2 − Add the following code to res/layout/activity_main.xml. <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" tools:context=".MainActivity"> <Button android:layout_width="wrap_content" android:layout_height="wrap_content" android:id="@+id/button" android:text="Set Wallpapaer" android:layout_gravity="center_vertical" android:layout_centerInParent="true" android:layout_marginLeft="135dp"/> </LinearLayout> Step 3 − Add the following code to src/MainActivity.java import android.app.WallpaperManager; import android.graphics.Bitmap; import android.graphics.BitmapFactory; import android.support.v7.app.AppCompatActivity; import android.os.Bundle; import android.view.View; import android.widget.Button; import android.widget.Toast; import java.io.IOException; public class MainActivity extends AppCompatActivity { @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); Button button = (Button) findViewById(R.id.button); button.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { setWallpaper(); } }); } private void setWallpaper() { Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.wallpaper); WallpaperManager manager = WallpaperManager.getInstance(getApplicationContext()); try{ manager.setBitmap(bitmap); Toast.makeText(this, "Wallpaper set!", Toast.LENGTH_SHORT).show(); } catch (IOException e) { Toast.makeText(this, "Error!", Toast.LENGTH_SHORT).show(); } } } Step 4 − Add the following code to androidManifest.xml <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="app.com.sample"> <uses-permission android:name="android.permission.SET_WALLPAPER"/> <application android:allowBackup="true" android:icon="@mipmap/ic_launcher" android:label="@string/app_name" android:roundIcon="@mipmap/ic_launcher_round" android:supportsRtl="true" android:theme="@style/AppTheme"> <activity android:name=".MainActivity"> <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> </application> </manifest> Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen − Click here to download the project code.
[ { "code": null, "e": 1137, "s": 1062, "text": "This example demonstrates how do I set Android Wallpaper image in Android." }, { "code": null, "e": 1265, "s": 1137, "text": "Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project" }, { "code": null, "e": 1330, "s": 1265, "text": "Step 2 − Add the following code to res/layout/activity_main.xml." }, { "code": null, "e": 1877, "s": 1330, "text": "<LinearLayout xmlns:android=\"http://schemas.android.com/apk/res/android\"\n xmlns:tools=\"http://schemas.android.com/tools\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"match_parent\"\n tools:context=\".MainActivity\">\n <Button\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:id=\"@+id/button\"\n android:text=\"Set Wallpapaer\"\n android:layout_gravity=\"center_vertical\"\n android:layout_centerInParent=\"true\"\n android:layout_marginLeft=\"135dp\"/>\n</LinearLayout>" }, { "code": null, "e": 1934, "s": 1877, "text": "Step 3 − Add the following code to src/MainActivity.java" }, { "code": null, "e": 3121, "s": 1934, "text": "import android.app.WallpaperManager;\nimport android.graphics.Bitmap;\nimport android.graphics.BitmapFactory;\nimport android.support.v7.app.AppCompatActivity;\nimport android.os.Bundle;\nimport android.view.View;\nimport android.widget.Button;\nimport android.widget.Toast;\nimport java.io.IOException;\npublic class MainActivity extends AppCompatActivity {\n @Override\n protected void onCreate(Bundle savedInstanceState) {\n super.onCreate(savedInstanceState);\n setContentView(R.layout.activity_main);\n Button button = (Button) findViewById(R.id.button);\n button.setOnClickListener(new View.OnClickListener() {\n @Override\n public void onClick(View v) {\n setWallpaper();\n }\n });\n }\n private void setWallpaper() {\n Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.wallpaper);\n WallpaperManager manager = WallpaperManager.getInstance(getApplicationContext());\n try{\n manager.setBitmap(bitmap);\n Toast.makeText(this, \"Wallpaper set!\", Toast.LENGTH_SHORT).show();\n } catch (IOException e) {\n Toast.makeText(this, \"Error!\", Toast.LENGTH_SHORT).show();\n }\n }\n}" }, { "code": null, "e": 3176, "s": 3121, "text": "Step 4 − Add the following code to androidManifest.xml" }, { "code": null, "e": 3916, "s": 3176, "text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<manifest xmlns:android=\"http://schemas.android.com/apk/res/android\" package=\"app.com.sample\">\n <uses-permission android:name=\"android.permission.SET_WALLPAPER\"/>\n <application\n android:allowBackup=\"true\"\n android:icon=\"@mipmap/ic_launcher\"\n android:label=\"@string/app_name\"\n android:roundIcon=\"@mipmap/ic_launcher_round\"\n android:supportsRtl=\"true\"\n android:theme=\"@style/AppTheme\">\n <activity android:name=\".MainActivity\">\n <intent-filter>\n <action android:name=\"android.intent.action.MAIN\" />\n <category android:name=\"android.intent.category.LAUNCHER\" />\n </intent-filter>\n </activity>\n </application>\n</manifest>" }, { "code": null, "e": 4263, "s": 3916, "text": "Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click Run icon from the toolbar. 
Select your mobile device as an option and then check your mobile device which will display your default screen −" }, { "code": null, "e": 4304, "s": 4263, "text": "Click here to download the project code." } ]
F# - Program Structure
F# is a Functional Programming language. In F#, functions work like data types. You can declare and use a function in the same way like any other variable. In general, an F# application does not have any specific entry point. The compiler executes all top-level statements in the file from top to bottom. However, to follow procedural programming style, many applications keep a single top level statement that calls the main loop. The following code shows a simple F# program − open System (* This is a multi-line comment *) // This is a single-line comment let sign num = if num > 0 then "positive" elif num < 0 then "negative" else "zero" let main() = Console.WriteLine("sign 5: {0}", (sign 5)) main() When you compile and execute the program, it yields the following output − sign 5: positive Please note that − An F# code file might begin with a number of open statements that is used to import namespaces. An F# code file might begin with a number of open statements that is used to import namespaces. The body of the files includes other functions that implement the business logic of the application. The body of the files includes other functions that implement the business logic of the application. The main loop contains the top executable statements. The main loop contains the top executable statements. Print Add Notes Bookmark this page
[ { "code": null, "e": 2202, "s": 2161, "text": "F# is a Functional Programming language." }, { "code": null, "e": 2317, "s": 2202, "text": "In F#, functions work like data types. You can declare and use a function in the same way like any other variable." }, { "code": null, "e": 2466, "s": 2317, "text": "In general, an F# application does not have any specific entry point. The compiler executes all top-level statements in the file from top to bottom." }, { "code": null, "e": 2593, "s": 2466, "text": "However, to follow procedural programming style, many applications keep a single top level statement that calls the main loop." }, { "code": null, "e": 2640, "s": 2593, "text": "The following code shows a simple F# program −" }, { "code": null, "e": 2881, "s": 2640, "text": "open System\n(* This is a multi-line comment *)\n// This is a single-line comment\n\nlet sign num =\n if num > 0 then \"positive\"\n elif num < 0 then \"negative\"\n else \"zero\"\n\nlet main() =\n Console.WriteLine(\"sign 5: {0}\", (sign 5))\n\nmain()" }, { "code": null, "e": 2956, "s": 2881, "text": "When you compile and execute the program, it yields the following output −" }, { "code": null, "e": 2974, "s": 2956, "text": "sign 5: positive\n" }, { "code": null, "e": 2993, "s": 2974, "text": "Please note that −" }, { "code": null, "e": 3089, "s": 2993, "text": "An F# code file might begin with a number of open statements that is used to import namespaces." }, { "code": null, "e": 3185, "s": 3089, "text": "An F# code file might begin with a number of open statements that is used to import namespaces." }, { "code": null, "e": 3286, "s": 3185, "text": "The body of the files includes other functions that implement the business logic of the application." }, { "code": null, "e": 3387, "s": 3286, "text": "The body of the files includes other functions that implement the business logic of the application." }, { "code": null, "e": 3441, "s": 3387, "text": "The main loop contains the top executable statements." }, { "code": null, "e": 3495, "s": 3441, "text": "The main loop contains the top executable statements." }, { "code": null, "e": 3502, "s": 3495, "text": " Print" }, { "code": null, "e": 3513, "s": 3502, "text": " Add Notes" } ]
Math.Ceiling() Method in C#
The Math.Ceiling() method in C# is used to return the smallest integral value greater than or equal to the specified number. Following is the syntax − public static decimal Ceiling (decimal val); public static double Ceiling(double val) For the first syntax above, the value Val is the decimal number, whereas Val in the second syntax is the double number. Let us now see an example to implement Math.Ceiling() method − using System; public class Demo { public static void Main(){ decimal val1 = 9.99M; decimal val2 = -5.10M; Console.WriteLine("Result = " + Math.Ceiling(val1)); Console.WriteLine("Result = " + Math.Ceiling(val2)); } } This will produce the following output − Result = 10 Result = -5 Let us see another example to implement Math.Ceiling() method − using System; public class Demo { public static void Main(){ double val1 = 3.1; double val2 = 55.99; double val3 = -55.99; Console.WriteLine("Result = " + Math.Ceiling(val1)); Console.WriteLine("Result = " + Math.Ceiling(val2)); Console.WriteLine("Result = " + Math.Ceiling(val3)); } } This will produce the following output − Result = 4 Result = 56 Result = -55
[ { "code": null, "e": 1187, "s": 1062, "text": "The Math.Ceiling() method in C# is used to return the smallest integral value greater than or equal to the specified number." }, { "code": null, "e": 1213, "s": 1187, "text": "Following is the syntax −" }, { "code": null, "e": 1299, "s": 1213, "text": "public static decimal Ceiling (decimal val);\npublic static double Ceiling(double val)" }, { "code": null, "e": 1419, "s": 1299, "text": "For the first syntax above, the value Val is the decimal number, whereas Val in the second syntax is the double number." }, { "code": null, "e": 1482, "s": 1419, "text": "Let us now see an example to implement Math.Ceiling() method −" }, { "code": null, "e": 1728, "s": 1482, "text": "using System;\npublic class Demo {\n public static void Main(){\n decimal val1 = 9.99M;\n decimal val2 = -5.10M;\n Console.WriteLine(\"Result = \" + Math.Ceiling(val1));\n Console.WriteLine(\"Result = \" + Math.Ceiling(val2));\n }\n}" }, { "code": null, "e": 1769, "s": 1728, "text": "This will produce the following output −" }, { "code": null, "e": 1793, "s": 1769, "text": "Result = 10\nResult = -5" }, { "code": null, "e": 1857, "s": 1793, "text": "Let us see another example to implement Math.Ceiling() method −" }, { "code": null, "e": 2185, "s": 1857, "text": "using System;\npublic class Demo {\n public static void Main(){\n double val1 = 3.1;\n double val2 = 55.99;\n double val3 = -55.99;\n Console.WriteLine(\"Result = \" + Math.Ceiling(val1));\n Console.WriteLine(\"Result = \" + Math.Ceiling(val2));\n Console.WriteLine(\"Result = \" + Math.Ceiling(val3));\n }\n}" }, { "code": null, "e": 2226, "s": 2185, "text": "This will produce the following output −" }, { "code": null, "e": 2262, "s": 2226, "text": "Result = 4\nResult = 56\nResult = -55" } ]
How to create a simple alert dialog with Ok and cancel buttons using Kotlin?
This example demonstrates how to create a simple alert dialog with Ok and cancel buttons using Kotlin. Step 1 − Create a new project in Android Studio, go to File? New Project and fill all required details to create a new project. Step 2 − Add the following code to res/layout/activity_main.xml. <?xml version="1.0" encoding="utf-8"?> <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" android:id="@+id/rl" android:layout_width="match_parent" android:layout_height="match_parent" android:padding="8dp" tools:context=".MainActivity"> <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_centerHorizontal="true" android:layout_marginTop="50dp" android:padding="8dp" android:text="Tutorials Point" android:textColor="@android:color/holo_green_dark" android:textSize="48sp" android:textStyle="bold" /> <TextView android:id="@+id/button" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_centerInParent="true" android:clickable="true" android:focusable="true" android:text="Show Alert dialog" android:textAlignment="center" /> </RelativeLayout> Step 3 − Add the following code to src/MainActivity.kt import android.os.Bundle import android.widget.TextView import android.widget.Toast import androidx.appcompat.app.AlertDialog import androidx.appcompat.app.AppCompatActivity class MainActivity : AppCompatActivity() { private lateinit var textView: TextView override fun onCreate(savedInstanceState: Bundle?) { super.onCreate(savedInstanceState) setContentView(R.layout.activity_main) title = "KotlinApp" textView = findViewById(R.id.button) textView.setOnClickListener { showAlertDialog() } } private fun showAlertDialog() { val alertDialog: AlertDialog.Builder = AlertDialog.Builder(this@MainActivity) alertDialog.setTitle("AlertDialog") alertDialog.setMessage("Do you wanna close this Dialog?") alertDialog.setPositiveButton( "yes" ) { _, _ -> Toast.makeText(this@MainActivity, "Alert dialog closed.", Toast.LENGTH_LONG).show() } alertDialog.setNegativeButton( "No" ) { _, _ -> } val alert: AlertDialog = alertDialog.create() alert.setCanceledOnTouchOutside(false) alert.show() } } Step 4 − Add the following code to androidManifest.xml <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="app.com.myapplication"> <application android:allowBackup="true" android:icon="@mipmap/ic_launcher" android:label="@string/app_name" android:roundIcon="@mipmap/ic_launcher_round" android:supportsRtl="true" android:theme="@style/AppTheme"> <activity android:name=".MainActivity"> <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> </application> </manifest> Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen
[ { "code": null, "e": 1165, "s": 1062, "text": "This example demonstrates how to create a simple alert dialog with Ok and cancel buttons using Kotlin." }, { "code": null, "e": 1293, "s": 1165, "text": "Step 1 − Create a new project in Android Studio, go to File? New Project and fill all required details to create a new project." }, { "code": null, "e": 1358, "s": 1293, "text": "Step 2 − Add the following code to res/layout/activity_main.xml." }, { "code": null, "e": 2382, "s": 1358, "text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<RelativeLayout xmlns:android=\"http://schemas.android.com/apk/res/android\"\n xmlns:tools=\"http://schemas.android.com/tools\"\n android:id=\"@+id/rl\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"match_parent\"\n android:padding=\"8dp\"\n tools:context=\".MainActivity\">\n <TextView\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:layout_centerHorizontal=\"true\"\n android:layout_marginTop=\"50dp\"\n android:padding=\"8dp\"\n android:text=\"Tutorials Point\"\n android:textColor=\"@android:color/holo_green_dark\"\n android:textSize=\"48sp\"\n android:textStyle=\"bold\" />\n <TextView\n android:id=\"@+id/button\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"wrap_content\"\n android:layout_centerInParent=\"true\"\n android:clickable=\"true\"\n android:focusable=\"true\"\n android:text=\"Show Alert dialog\"\n android:textAlignment=\"center\" />\n</RelativeLayout>" }, { "code": null, "e": 2437, "s": 2382, "text": "Step 3 − Add the following code to src/MainActivity.kt" }, { "code": null, "e": 3559, "s": 2437, "text": "import android.os.Bundle\nimport android.widget.TextView\nimport android.widget.Toast\nimport androidx.appcompat.app.AlertDialog\nimport androidx.appcompat.app.AppCompatActivity\nclass MainActivity : AppCompatActivity() {\n private lateinit var textView: TextView\n override fun onCreate(savedInstanceState: Bundle?) 
{\n super.onCreate(savedInstanceState)\n setContentView(R.layout.activity_main)\n title = \"KotlinApp\"\n textView = findViewById(R.id.button)\n textView.setOnClickListener { showAlertDialog() }\n }\n private fun showAlertDialog() {\n val alertDialog: AlertDialog.Builder = AlertDialog.Builder(this@MainActivity)\n alertDialog.setTitle(\"AlertDialog\")\n alertDialog.setMessage(\"Do you wanna close this Dialog?\")\n alertDialog.setPositiveButton(\n \"yes\"\n ) { _, _ ->\n Toast.makeText(this@MainActivity, \"Alert dialog closed.\", Toast.LENGTH_LONG).show()\n }\n alertDialog.setNegativeButton(\n \"No\"\n ) { _, _ -> }\n val alert: AlertDialog = alertDialog.create()\n alert.setCanceledOnTouchOutside(false)\n alert.show()\n }\n}" }, { "code": null, "e": 3614, "s": 3559, "text": "Step 4 − Add the following code to androidManifest.xml" }, { "code": null, "e": 4291, "s": 3614, "text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<manifest xmlns:android=\"http://schemas.android.com/apk/res/android\" package=\"app.com.myapplication\">\n <application\n android:allowBackup=\"true\"\n android:icon=\"@mipmap/ic_launcher\"\n android:label=\"@string/app_name\"\n android:roundIcon=\"@mipmap/ic_launcher_round\"\n android:supportsRtl=\"true\"\n android:theme=\"@style/AppTheme\">\n <activity android:name=\".MainActivity\">\n <intent-filter>\n <action android:name=\"android.intent.action.MAIN\" />\n <category android:name=\"android.intent.category.LAUNCHER\" />\n </intent-filter>\n </activity>\n </application>\n</manifest>" }, { "code": null, "e": 4639, "s": 4291, "text": "Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen" } ]
How to Identify Potential Customers Among the Crowd? | by Harsh Darji | Towards Data Science
In this project, a mail-order sales company in Germany is interested in identifying segments of the general population to target with their marketing to grow. Demographics information has been provided (by Arvato Finacial Solutions through Udacity) for both the general population at large as well as for prior customers of the mail-order company to build a model of the customer base of the company. The target dataset contains demographics information for targets of a mailout marketing campaign. The objective is to identify which individuals are most likely to respond to the campaign and become customers of the mail-order company. The .csv files for this project are not provided because of a non-disclosure agreement with Arvato. The following are the files that were used: Udacity_AZDIAS_052018.csv: Demographics data for the general population of Germany; 891 211 persons (rows) x 366 features (columns)Udacity_CUSTOMERS_052018.csv: Demographics data for customers of a mail-order company; 191 652 persons (rows) x 369 features (columns).Udacity_MAILOUT_052018_TRAIN.csv: Demographics data for individuals who were targets of a marketing campaign; 42 982 persons (rows) x 367 (columns).Udacity_MAILOUT_052018_TEST.csv: Demographics data for individuals who were targets of a marketing campaign; 42 833 persons (rows) x 366 (columns).DIAS Attributes — Values 2017.xlsx: Detailed information about the columns depicted in the files in alphabetical order. To learn more about the features, please click here. Udacity_AZDIAS_052018.csv: Demographics data for the general population of Germany; 891 211 persons (rows) x 366 features (columns) Udacity_CUSTOMERS_052018.csv: Demographics data for customers of a mail-order company; 191 652 persons (rows) x 369 features (columns). Udacity_MAILOUT_052018_TRAIN.csv: Demographics data for individuals who were targets of a marketing campaign; 42 982 persons (rows) x 367 (columns). Udacity_MAILOUT_052018_TEST.csv: Demographics data for individuals who were targets of a marketing campaign; 42 833 persons (rows) x 366 (columns). DIAS Attributes — Values 2017.xlsx: Detailed information about the columns depicted in the files in alphabetical order. To learn more about the features, please click here. Memory Reduction: The .csv (Udacity_AZDIAS_052018) file holding demographic data of the general population was of the size of 2.5 GB, so I wrote a simple function to change the data types (int64 to int16) and reduce the memory usage by 78%. To learn more about the function and memory reduction, please click here.Data Understanding: All the data files have the same features, so I decided to go with Udacity_AZDIAS_052018.csv file to understand the demographic data. I used DIAS Attributes — Values 2017.xlsx files to understand what each value means for the column. See the image below for an example: Memory Reduction: The .csv (Udacity_AZDIAS_052018) file holding demographic data of the general population was of the size of 2.5 GB, so I wrote a simple function to change the data types (int64 to int16) and reduce the memory usage by 78%. To learn more about the function and memory reduction, please click here. Data Understanding: All the data files have the same features, so I decided to go with Udacity_AZDIAS_052018.csv file to understand the demographic data. I used DIAS Attributes — Values 2017.xlsx files to understand what each value means for the column. See the image below for an example: Here, the attribute AGER_TYP describes the best-ager typology. 
If you see the Value column above, it has values -1, 0, which means the meaning is unknown, and no classification is possible. On further, examination it’s discovered that attributes having meaning ‘unknown’/ ‘no classification’ were missing values. So, I decided to fill such values with nan. Of course, there were many attributes that had values that were not known. To know the detailed analysis, please click here. 3. Handling Missing Values: There were 366 features, so I decided to inspect the columns first after understanding the data(from pt.2) So, I calculated the % of missing values in each column and decided on a threshold 0f 30%, i.e., if columns contained more than 70% of missing values, I would simply drop those columns. On further inspection, I discovered that attributes starting with D19 were dropped. D19 when we look up in the DIAS Attributes — Values 2017.xlsx file shows that it contained transactional data (e.g., transactional activity based on the product group GUIDEBOOKS) The next thing I did was to look missing values by rows, and I decided to remove the rows which contained more than 15 missing values. I here used Q3–1.5IQR to decide on the threshold of 15. And the rest of the missing values were imputed by mean for simplicity reason. 4. Feature Engineering: There were a lot of categorical variables, so I created dummy variables out of them if they had less than 10 levels. Now that all the features were numeric, we can apply any machine learning algorithm. But before that, I applied StandardScaler() to transform all the columns. Here, the main goal was to use unsupervised learning methods to analyze attributes of established customers and the general population to create customer segments. The analysis describes parts of the general population that are more likely to be part of the mail-order company’s main customer base, and which parts of the general population are less so. So, I used the PCA technique to capture the maximum variance in the data and reduce the dimensionality of data. I decided on a threshold of 50%. 36 features out of 366 were able to capture 50% of the variance. Following I the PCA Dimension 0 : Interpreting the 1st PCA component (top 10 features): PLZ8_BAUMAX: most common building-type within the PLZ8 (pos)-mainly >10 family homesPLZ8_ANTG4: number of >10 family houses in the PLZ8 (pos)-high sharePLZ8_ANTG3: number of 6–10 family houses in the PLZ8 (pos)-high sharePLZ8_ANTG1: number of 1–2 family houses in the PLZ8 (neg)-low shareMOBI_REGIO: moving patterns (neg)-high mobility PLZ8_BAUMAX: most common building-type within the PLZ8 (pos)-mainly >10 family homes PLZ8_ANTG4: number of >10 family houses in the PLZ8 (pos)-high share PLZ8_ANTG3: number of 6–10 family houses in the PLZ8 (pos)-high share PLZ8_ANTG1: number of 1–2 family houses in the PLZ8 (neg)-low share MOBI_REGIO: moving patterns (neg)-high mobility So this group is high mobility, large family area, crowed area, low-income. Next, I used data that was scaled using PCA to apply k-means clustering and identify customer segments groups. I used the elbow method to decide on the number of clusters, and I decided to go with 10. I did the same transformation on the demographics data of the customer of the mail-order company Udacity_CUSTOMERS_052018.csv. Here, we can see that people belonging to cluster 1 and 8 are the ones who respond to a marketing campaign from a mail-order company and become customers. So, the marketing team should focus on such groups. 
The good news is that there are many people in Germany(Blue bar of 1 and 8) who fall into these groups. Here, we will use the dataset containing the demographics data for individuals who were targets of a marketing campaign. The training dataset has the response of the customers, and we will use the ML model to learn the parameters and predict the response of customers in the test data. Udacity_MAILOUT_052018_TRAIN.csv: Demographics data for individuals who were targets of a marketing campaign; 42 982 persons (rows) x 367 (columns).Udacity_MAILOUT_052018_TEST.csv: Demographics data for individuals who were targets of a marketing campaign; 42 833 persons (rows) x 366 (columns). Udacity_MAILOUT_052018_TRAIN.csv: Demographics data for individuals who were targets of a marketing campaign; 42 982 persons (rows) x 367 (columns). Udacity_MAILOUT_052018_TEST.csv: Demographics data for individuals who were targets of a marketing campaign; 42 833 persons (rows) x 366 (columns). I have used the same cleaning function, which I used for the segmentation report. I have filled the missing values in numerical columns with mean, and I have created dummy variables for categorical columns. I have also performed a scaler transformation because I wanted to check the performance of different ML algorithms. However, the class in the dataset is an imbalance, the response from 98% in the training data is negative, and 2% is positive, so using recall(identifying potential customer) as the metric won’t be right. Thus I have used ROC AUC as the metric to evaluate the performance. I have also added extra features from the customer dataset. test['CUSTOMER_GROUP'] = test['CUSTOMER_GROUP'].apply(lambda x:1 if x=='SINGLE_BUYER' else 0)test['PRODUCT_GROUP1'] = test['PRODUCT_GROUP'].apply(lambda x:1 if 'FOOD' in x else 0)test['PRODUCT_GROUP2'] = test['PRODUCT_GROUP'].apply(lambda x:1 if 'COSMETIC' in x else 0) The best performing models were: I tried to optimize the hyperparameters of both the models, and Adaboostclasifer came at the top. The following are the parameters I used: I submitted my prediction on Test data to the competition hosted on Kaggle, and I got a score of 0.79459, which is just 0.016 behind the 1st place on the leaderboard. D19_SOZIALES: This is the feature related to a transaction that is not given in the data dictionary was the most important feature for a potential customer to respond to the marketing campaign. We used the demographic data of Germany and historical customer data of the marketing campaign to identify the best demographic group to target and thereby reducing marketing spend. We were successfully able to apply PCA and K-means clustering to identify customer segments. We used the Adaboost classifier with hyperparameter tuning to predict and classify if the customer will respond to a marketing campaign. A detailed analysis can be found here.
[ { "code": null, "e": 671, "s": 172, "text": "In this project, a mail-order sales company in Germany is interested in identifying segments of the general population to target with their marketing to grow. Demographics information has been provided (by Arvato Finacial Solutions through Udacity) for both the general population at large as well as for prior customers of the mail-order company to build a model of the customer base of the company. The target dataset contains demographics information for targets of a mailout marketing campaign." }, { "code": null, "e": 809, "s": 671, "text": "The objective is to identify which individuals are most likely to respond to the campaign and become customers of the mail-order company." }, { "code": null, "e": 953, "s": 809, "text": "The .csv files for this project are not provided because of a non-disclosure agreement with Arvato. The following are the files that were used:" }, { "code": null, "e": 1687, "s": 953, "text": "Udacity_AZDIAS_052018.csv: Demographics data for the general population of Germany; 891 211 persons (rows) x 366 features (columns)Udacity_CUSTOMERS_052018.csv: Demographics data for customers of a mail-order company; 191 652 persons (rows) x 369 features (columns).Udacity_MAILOUT_052018_TRAIN.csv: Demographics data for individuals who were targets of a marketing campaign; 42 982 persons (rows) x 367 (columns).Udacity_MAILOUT_052018_TEST.csv: Demographics data for individuals who were targets of a marketing campaign; 42 833 persons (rows) x 366 (columns).DIAS Attributes — Values 2017.xlsx: Detailed information about the columns depicted in the files in alphabetical order. To learn more about the features, please click here." }, { "code": null, "e": 1819, "s": 1687, "text": "Udacity_AZDIAS_052018.csv: Demographics data for the general population of Germany; 891 211 persons (rows) x 366 features (columns)" }, { "code": null, "e": 1955, "s": 1819, "text": "Udacity_CUSTOMERS_052018.csv: Demographics data for customers of a mail-order company; 191 652 persons (rows) x 369 features (columns)." }, { "code": null, "e": 2104, "s": 1955, "text": "Udacity_MAILOUT_052018_TRAIN.csv: Demographics data for individuals who were targets of a marketing campaign; 42 982 persons (rows) x 367 (columns)." }, { "code": null, "e": 2252, "s": 2104, "text": "Udacity_MAILOUT_052018_TEST.csv: Demographics data for individuals who were targets of a marketing campaign; 42 833 persons (rows) x 366 (columns)." }, { "code": null, "e": 2425, "s": 2252, "text": "DIAS Attributes — Values 2017.xlsx: Detailed information about the columns depicted in the files in alphabetical order. To learn more about the features, please click here." }, { "code": null, "e": 3029, "s": 2425, "text": "Memory Reduction: The .csv (Udacity_AZDIAS_052018) file holding demographic data of the general population was of the size of 2.5 GB, so I wrote a simple function to change the data types (int64 to int16) and reduce the memory usage by 78%. To learn more about the function and memory reduction, please click here.Data Understanding: All the data files have the same features, so I decided to go with Udacity_AZDIAS_052018.csv file to understand the demographic data. I used DIAS Attributes — Values 2017.xlsx files to understand what each value means for the column. 
See the image below for an example:" }, { "code": null, "e": 3344, "s": 3029, "text": "Memory Reduction: The .csv (Udacity_AZDIAS_052018) file holding demographic data of the general population was of the size of 2.5 GB, so I wrote a simple function to change the data types (int64 to int16) and reduce the memory usage by 78%. To learn more about the function and memory reduction, please click here." }, { "code": null, "e": 3634, "s": 3344, "text": "Data Understanding: All the data files have the same features, so I decided to go with Udacity_AZDIAS_052018.csv file to understand the demographic data. I used DIAS Attributes — Values 2017.xlsx files to understand what each value means for the column. See the image below for an example:" }, { "code": null, "e": 4116, "s": 3634, "text": "Here, the attribute AGER_TYP describes the best-ager typology. If you see the Value column above, it has values -1, 0, which means the meaning is unknown, and no classification is possible. On further, examination it’s discovered that attributes having meaning ‘unknown’/ ‘no classification’ were missing values. So, I decided to fill such values with nan. Of course, there were many attributes that had values that were not known. To know the detailed analysis, please click here." }, { "code": null, "e": 4437, "s": 4116, "text": "3. Handling Missing Values: There were 366 features, so I decided to inspect the columns first after understanding the data(from pt.2) So, I calculated the % of missing values in each column and decided on a threshold 0f 30%, i.e., if columns contained more than 70% of missing values, I would simply drop those columns." }, { "code": null, "e": 4700, "s": 4437, "text": "On further inspection, I discovered that attributes starting with D19 were dropped. D19 when we look up in the DIAS Attributes — Values 2017.xlsx file shows that it contained transactional data (e.g., transactional activity based on the product group GUIDEBOOKS)" }, { "code": null, "e": 4891, "s": 4700, "text": "The next thing I did was to look missing values by rows, and I decided to remove the rows which contained more than 15 missing values. I here used Q3–1.5IQR to decide on the threshold of 15." }, { "code": null, "e": 4970, "s": 4891, "text": "And the rest of the missing values were imputed by mean for simplicity reason." }, { "code": null, "e": 5270, "s": 4970, "text": "4. Feature Engineering: There were a lot of categorical variables, so I created dummy variables out of them if they had less than 10 levels. Now that all the features were numeric, we can apply any machine learning algorithm. But before that, I applied StandardScaler() to transform all the columns." }, { "code": null, "e": 5624, "s": 5270, "text": "Here, the main goal was to use unsupervised learning methods to analyze attributes of established customers and the general population to create customer segments. The analysis describes parts of the general population that are more likely to be part of the mail-order company’s main customer base, and which parts of the general population are less so." }, { "code": null, "e": 5769, "s": 5624, "text": "So, I used the PCA technique to capture the maximum variance in the data and reduce the dimensionality of data. I decided on a threshold of 50%." }, { "code": null, "e": 5868, "s": 5769, "text": "36 features out of 366 were able to capture 50% of the variance. 
Following I the PCA Dimension 0 :" }, { "code": null, "e": 5922, "s": 5868, "text": "Interpreting the 1st PCA component (top 10 features):" }, { "code": null, "e": 6258, "s": 5922, "text": "PLZ8_BAUMAX: most common building-type within the PLZ8 (pos)-mainly >10 family homesPLZ8_ANTG4: number of >10 family houses in the PLZ8 (pos)-high sharePLZ8_ANTG3: number of 6–10 family houses in the PLZ8 (pos)-high sharePLZ8_ANTG1: number of 1–2 family houses in the PLZ8 (neg)-low shareMOBI_REGIO: moving patterns (neg)-high mobility" }, { "code": null, "e": 6343, "s": 6258, "text": "PLZ8_BAUMAX: most common building-type within the PLZ8 (pos)-mainly >10 family homes" }, { "code": null, "e": 6412, "s": 6343, "text": "PLZ8_ANTG4: number of >10 family houses in the PLZ8 (pos)-high share" }, { "code": null, "e": 6482, "s": 6412, "text": "PLZ8_ANTG3: number of 6–10 family houses in the PLZ8 (pos)-high share" }, { "code": null, "e": 6550, "s": 6482, "text": "PLZ8_ANTG1: number of 1–2 family houses in the PLZ8 (neg)-low share" }, { "code": null, "e": 6598, "s": 6550, "text": "MOBI_REGIO: moving patterns (neg)-high mobility" }, { "code": null, "e": 6674, "s": 6598, "text": "So this group is high mobility, large family area, crowed area, low-income." }, { "code": null, "e": 6875, "s": 6674, "text": "Next, I used data that was scaled using PCA to apply k-means clustering and identify customer segments groups. I used the elbow method to decide on the number of clusters, and I decided to go with 10." }, { "code": null, "e": 7002, "s": 6875, "text": "I did the same transformation on the demographics data of the customer of the mail-order company Udacity_CUSTOMERS_052018.csv." }, { "code": null, "e": 7313, "s": 7002, "text": "Here, we can see that people belonging to cluster 1 and 8 are the ones who respond to a marketing campaign from a mail-order company and become customers. So, the marketing team should focus on such groups. The good news is that there are many people in Germany(Blue bar of 1 and 8) who fall into these groups." }, { "code": null, "e": 7599, "s": 7313, "text": "Here, we will use the dataset containing the demographics data for individuals who were targets of a marketing campaign. The training dataset has the response of the customers, and we will use the ML model to learn the parameters and predict the response of customers in the test data." }, { "code": null, "e": 7895, "s": 7599, "text": "Udacity_MAILOUT_052018_TRAIN.csv: Demographics data for individuals who were targets of a marketing campaign; 42 982 persons (rows) x 367 (columns).Udacity_MAILOUT_052018_TEST.csv: Demographics data for individuals who were targets of a marketing campaign; 42 833 persons (rows) x 366 (columns)." }, { "code": null, "e": 8044, "s": 7895, "text": "Udacity_MAILOUT_052018_TRAIN.csv: Demographics data for individuals who were targets of a marketing campaign; 42 982 persons (rows) x 367 (columns)." }, { "code": null, "e": 8192, "s": 8044, "text": "Udacity_MAILOUT_052018_TEST.csv: Demographics data for individuals who were targets of a marketing campaign; 42 833 persons (rows) x 366 (columns)." }, { "code": null, "e": 8848, "s": 8192, "text": "I have used the same cleaning function, which I used for the segmentation report. I have filled the missing values in numerical columns with mean, and I have created dummy variables for categorical columns. I have also performed a scaler transformation because I wanted to check the performance of different ML algorithms. 
However, the class in the dataset is an imbalance, the response from 98% in the training data is negative, and 2% is positive, so using recall(identifying potential customer) as the metric won’t be right. Thus I have used ROC AUC as the metric to evaluate the performance. I have also added extra features from the customer dataset." }, { "code": null, "e": 9118, "s": 8848, "text": "test['CUSTOMER_GROUP'] = test['CUSTOMER_GROUP'].apply(lambda x:1 if x=='SINGLE_BUYER' else 0)test['PRODUCT_GROUP1'] = test['PRODUCT_GROUP'].apply(lambda x:1 if 'FOOD' in x else 0)test['PRODUCT_GROUP2'] = test['PRODUCT_GROUP'].apply(lambda x:1 if 'COSMETIC' in x else 0)" }, { "code": null, "e": 9151, "s": 9118, "text": "The best performing models were:" }, { "code": null, "e": 9290, "s": 9151, "text": "I tried to optimize the hyperparameters of both the models, and Adaboostclasifer came at the top. The following are the parameters I used:" }, { "code": null, "e": 9457, "s": 9290, "text": "I submitted my prediction on Test data to the competition hosted on Kaggle, and I got a score of 0.79459, which is just 0.016 behind the 1st place on the leaderboard." }, { "code": null, "e": 9651, "s": 9457, "text": "D19_SOZIALES: This is the feature related to a transaction that is not given in the data dictionary was the most important feature for a potential customer to respond to the marketing campaign." } ]
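The project write-up above mentions "a simple function to change the data types (int64 to int16)" that cut memory usage by roughly 78%, but the function itself is not reproduced here. The following is a minimal, hypothetical sketch of that idea using pandas; the function name, the downcasting rules, and the commented-out usage line are assumptions for illustration, not the author's original code.

import pandas as pd

def reduce_mem_usage(df: pd.DataFrame) -> pd.DataFrame:
    # Downcast 64-bit numeric columns to the smallest dtype that still fits the data.
    for col in df.select_dtypes(include=["int64"]).columns:
        df[col] = pd.to_numeric(df[col], downcast="integer")
    for col in df.select_dtypes(include=["float64"]).columns:
        df[col] = pd.to_numeric(df[col], downcast="float")
    return df

# Hypothetical usage with the demographics file named in the project description:
# azdias = reduce_mem_usage(pd.read_csv("Udacity_AZDIAS_052018.csv"))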
How to insert a row into a table that has only a single autoincrement column?
You can easily insert a row into a table that has only a single auto-increment column. The syntax is as follows −
insert into yourTableName set yourColumnName =NULL;
Alternatively, you can use the below syntax −
insert into yourTableName values(NULL);
To understand the above syntax, let us create a table. The query to create a table is as follows −
mysql> create table singleAutoIncrementColumnDemo
   -> (
   -> UserId int NOT NULL AUTO_INCREMENT PRIMARY KEY
   -> );
Query OK, 0 rows affected (0.62 sec)
Insert some records into the table using the insert command. The query is as follows −
mysql> insert into singleAutoIncrementColumnDemo set UserId =NULL;
Query OK, 1 row affected (0.18 sec)
mysql> insert into singleAutoIncrementColumnDemo values(NULL);
Query OK, 1 row affected (0.11 sec)
Display all records from the table using the select statement. The query is as follows −
mysql> select *from singleAutoIncrementColumnDemo;
Here is the output −
+--------+
| UserId |
+--------+
|      1 |
|      2 |
+--------+
2 rows in set (0.00 sec)
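The statements above are plain MySQL; the same trick works when the insert is issued from a program, because AUTO_INCREMENT fills in the value whenever NULL is supplied. Below is a small illustrative sketch using Python's mysql-connector; the connection parameters are placeholders, and it assumes the singleAutoIncrementColumnDemo table created above already exists.

import mysql.connector

# Placeholder credentials -- adjust for your own server.
conn = mysql.connector.connect(host="localhost", user="root",
                               password="secret", database="test")
cur = conn.cursor()

# Passing NULL lets AUTO_INCREMENT generate the UserId value.
cur.execute("INSERT INTO singleAutoIncrementColumnDemo VALUES (NULL)")
conn.commit()

print("Generated UserId:", cur.lastrowid)
cur.close()
conn.close()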
[ { "code": null, "e": 1176, "s": 1062, "text": "You can easily insert a row into a table that has only a single auto increment column. The syntax is as follows −" }, { "code": null, "e": 1228, "s": 1176, "text": "insert into yourTableName set yourColumnName =NULL;" }, { "code": null, "e": 1259, "s": 1228, "text": "You can use the below syntax −" }, { "code": null, "e": 1299, "s": 1259, "text": "insert into yourTableName values(NULL);" }, { "code": null, "e": 1398, "s": 1299, "text": "To understand the above syntax, let us create a table. The query to create a table is as follows −" }, { "code": null, "e": 1555, "s": 1398, "text": "mysql> create table singleAutoIncrementColumnDemo\n -> (\n -> UserId int NOT NULL AUTO_INCREMENT PRIMARY KEY\n -> );\nQuery OK, 0 rows affected (0.62 sec)" }, { "code": null, "e": 1636, "s": 1555, "text": "Insert some records in the table using insert command. The query is as follows −" }, { "code": null, "e": 1838, "s": 1636, "text": "mysql> insert into singleAutoIncrementColumnDemo set UserId =NULL;\nQuery OK, 1 row affected (0.18 sec)\nmysql> insert into singleAutoIncrementColumnDemo values(NULL);\nQuery OK, 1 row affected (0.11 sec)" }, { "code": null, "e": 1923, "s": 1838, "text": "Display all records from the table using select statement. The query is as follows −" }, { "code": null, "e": 1974, "s": 1923, "text": "mysql> select *from singleAutoIncrementColumnDemo;" }, { "code": null, "e": 1995, "s": 1974, "text": "Here is the output −" }, { "code": null, "e": 2086, "s": 1995, "text": "+--------+\n| UserId |\n+--------+\n| 1 |\n| 2 |\n+--------+\n2 rows in set (0.00 sec)" } ]
final local variable in Java
Local variables are declared in methods, constructors, or blocks.
Local variables are created when the method, constructor or block is entered and the variable will be destroyed once it exits the method, constructor, or block.
Access modifiers cannot be used for local variables.
Local variables are visible only within the declared method, constructor, or block.
Local variables are implemented at stack level internally.
There is no default value for local variables, so local variables should be declared and an initial value should be assigned before the first use.
final is the only allowed access modifier for local variables.
A final local variable is not required to be initialized during declaration.
A final local variable allows the compiler to generate optimized code.
A final local variable can be used by an anonymous inner class or in anonymous methods.
abstract class AnonymousInner {
   public abstract void display();
}

public class Tester {
   public static void main(String args[]) {

      final int value;
      value = 100;
      AnonymousInner inner = new AnonymousInner() {
         public void display() {
            System.out.println("Accessing value: " + value);
         }
      };
      inner.display();
   }
}
Accessing value: 100
[ { "code": null, "e": 1128, "s": 1062, "text": "Local variables are declared in methods, constructors, or blocks." }, { "code": null, "e": 1194, "s": 1128, "text": "Local variables are declared in methods, constructors, or blocks." }, { "code": null, "e": 1355, "s": 1194, "text": "Local variables are created when the method, constructor or block is entered and the variable will be destroyed once it exits the method, constructor, or block." }, { "code": null, "e": 1516, "s": 1355, "text": "Local variables are created when the method, constructor or block is entered and the variable will be destroyed once it exits the method, constructor, or block." }, { "code": null, "e": 1569, "s": 1516, "text": "Access modifiers cannot be used for local variables." }, { "code": null, "e": 1622, "s": 1569, "text": "Access modifiers cannot be used for local variables." }, { "code": null, "e": 1706, "s": 1622, "text": "Local variables are visible only within the declared method, constructor, or block." }, { "code": null, "e": 1790, "s": 1706, "text": "Local variables are visible only within the declared method, constructor, or block." }, { "code": null, "e": 1849, "s": 1790, "text": "Local variables are implemented at stack level internally." }, { "code": null, "e": 1908, "s": 1849, "text": "Local variables are implemented at stack level internally." }, { "code": null, "e": 2055, "s": 1908, "text": "There is no default value for local variables, so local variables should be declared and an initial value should be assigned before the first use." }, { "code": null, "e": 2202, "s": 2055, "text": "There is no default value for local variables, so local variables should be declared and an initial value should be assigned before the first use." }, { "code": null, "e": 2265, "s": 2202, "text": "final is the only allowed access modifier for local variables." }, { "code": null, "e": 2328, "s": 2265, "text": "final is the only allowed access modifier for local variables." }, { "code": null, "e": 2403, "s": 2328, "text": "final local variable is not required to be initialized during declaration." }, { "code": null, "e": 2478, "s": 2403, "text": "final local variable is not required to be initialized during declaration." }, { "code": null, "e": 2546, "s": 2478, "text": "final local variable allows compiler to generate an optimized code." }, { "code": null, "e": 2614, "s": 2546, "text": "final local variable allows compiler to generate an optimized code." }, { "code": null, "e": 2697, "s": 2614, "text": "final local variable can be used by anonymous inner class or in anonymous methods." }, { "code": null, "e": 2780, "s": 2697, "text": "final local variable can be used by anonymous inner class or in anonymous methods." }, { "code": null, "e": 3155, "s": 2780, "text": "abstract class AnonymousInner {\n public abstract void display();\n}\n\npublic class Tester {\n public static void main(String args[]) {\n\n final int value;\n value = 100;\n AnonymousInner inner = new AnonymousInner() {\n public void display() {\n System.out.println(\"Accessing value: \" + value);\n }\n };\n inner.display();\n }\n}" }, { "code": null, "e": 3176, "s": 3155, "text": "Accessing value: 100" } ]
Arithmetic Operations on NumPy Arrays - onlinetutorialspoint
In this tutorial, we will see how to perform basic arithmetic operations and apply trigonometric and logarithmic functions on the elements of a NumPy array. We will also see how to find the sum, mean, maximum and minimum of the elements of a NumPy array, and then we will also see how to perform matrix multiplication using NumPy arrays.
In NumPy, arithmetic operations are element-wise operations, i.e. we can perform an arithmetic operation on the entire array and every element of the array gets updated by the same operation.
For example, suppose we have an array ‘A’ with elements from 1 to 10 and we want to add 4 to each element. We can do this by simply doing ‘A+4’ and we don’t have to iterate the whole array and then add 4 to each element.
import numpy as np

A= np.arange(1,11)
print("Array A is:")
print(A)
C=A+4
print("Array C is:")
print(C)

Output:
Array A is:
[ 1 2 3 4 5 6 7 8 9 10]
Array C is:
[ 5 6 7 8 9 10 11 12 13 14]

Similarly, we can perform subtraction, division, multiplication etc. on NumPy arrays.
import numpy as np
A= np.arange(1,11)
print("Array A is:")
print(A)

D=A-2
print("Array D is:")
print(D)
E=A/2
print("Array E is:")
print(E)
F=A*2
print("Array F is:")
print(F)

Output:
Array A is:
[ 1 2 3 4 5 6 7 8 9 10]
Array D is:
[-1 0 1 2 3 4 5 6 7 8]
Array E is:
[0.5 1. 1.5 2. 2.5 3. 3.5 4. 4.5 5. ]
Array F is:
[ 2 4 6 8 10 12 14 16 18 20]

There are many functions, called universal functions, which operate on a NumPy array in an element-by-element manner. Functions to calculate trigonometric ratios, logarithms and square roots are some of the universal functions.
We can calculate the square root of each element of an array by simply using the np.sqrt() function and passing the array as a parameter. The output is also an array with the same shape as the input array, whose elements are the square roots of the input elements.
import numpy as np

A= np.arange(1,11)
print("Array A is:")
print(A)

B=np.sqrt(A)
print("Array B is :")
print(B)

Output:
Array A is:
[ 1 2 3 4 5 6 7 8 9 10]
Array B is :
[1. 1.41421356 1.73205081 2. 2.23606798 2.44948974
 2.64575131 2.82842712 3. 3.16227766]

To calculate the logarithm of each element of the NumPy array, we can use the np.log() function and pass the input array as a parameter to it. The output is a new array of the same shape as the input array, whose elements are the logarithms of the elements of the input array.
import numpy as np

A= np.arange(1,11)
print("Array A is:")
print(A)

C=np.log(A)
print("Array C is :")
print(C)

Output:
Array A is:
[ 1 2 3 4 5 6 7 8 9 10]
Array C is :
[0. 0.69314718 1.09861229 1.38629436 1.60943791 1.79175947
 1.94591015 2.07944154 2.19722458 2.30258509]

Trigonometric functions can also be applied to a NumPy array in a similar manner to the sqrt and log functions.
import numpy as np
A= np.arange(1,11)
print("Array A is:")
print(A)

D=np.sin(A)
print("Array D is :")
print(D)

Output:
Array A is:
[ 1 2 3 4 5 6 7 8 9 10]
Array D is :
[ 0.84147098 0.90929743 0.14112001 -0.7568025 -0.95892427 -0.2794155
 0.6569866 0.98935825 0.41211849 -0.54402111]

import numpy as np

A= np.arange(1,11)
print("Array A is:")
print(A)

E=np.cos(A)
print("Array E is :")
print(E)

Output:
Array A is:
[ 1 2 3 4 5 6 7 8 9 10]
Array E is :
[ 0.54030231 -0.41614684 -0.9899925 -0.65364362 0.28366219 0.96017029
 0.75390225 -0.14550003 -0.91113026 -0.83907153]

import numpy as np
A= np.arange(1,11)
print("Array A is:")
print(A)

F=np.tan(A)
print("Array F is :")
print(F)

There are also aggregate functions which perform an operation on the whole array and produce a single result. These functions include calculation of the sum, minimum, maximum, mean and standard deviation of all the elements of a NumPy array.
We can calculate the sum of the elements of a given NumPy array using the sum() method. Suppose we have to calculate the sum of the elements of a NumPy array A; then we can simply call A.sum() and it returns the sum of all the elements in A.
import numpy as np
A= np.arange(1,11)
print("Array A is:")
print(A)

b= A.sum()
print("sum of elements of array A is:")
print(b)

Output:
Array A is:
[ 1 2 3 4 5 6 7 8 9 10]
sum of elements of array A is:
55

To find the minimum element in a NumPy array A, we can simply call the A.min() method and it returns the minimum number among the elements of A.
import numpy as np
A= np.arange(1,11)
print("Array A is:")
print(A)

min=A.min()
print("Minimum number among the elements of A is:")
print(min)

Output:
Array A is:
[ 1 2 3 4 5 6 7 8 9 10]
Minimum number among the elements of A is:
1

We can find the maximum element in a NumPy array A by using the A.max() method, and it returns the maximum number among all the elements of the array.
import numpy as np
A= np.arange(1,11)
print("Array A is:")
print(A)

max=A.max()
print("Maximum number among the elements of A is:")
print(max)

Output:
Array A is:
[ 1 2 3 4 5 6 7 8 9 10]
Maximum number among the elements of A is:
10

We can find the average value/mean of the elements of a NumPy array by using the mean() method.
import numpy as np
A= np.arange(1,11)
print("Array A is:")
print(A)

e= A.mean()
print("Average of elements of array A is:")
print(e)

Output:
Array A is:
[ 1 2 3 4 5 6 7 8 9 10]
Average of elements of array A is:
5.5

Introduction to NumPy
NumPy Math functions
Happy Learning 🙂
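As a brief addendum (not part of the original article): the introduction promises a look at matrix multiplication, and the aggregate-function paragraph mentions standard deviation, but neither is demonstrated above. The sketch below fills both gaps; the array names M and N are arbitrary.

import numpy as np

A = np.arange(1, 11)
print("Standard deviation of the elements of A is:")
print(A.std())

M = np.arange(1, 5).reshape(2, 2)
N = np.arange(5, 9).reshape(2, 2)
print("Matrix product of M and N is:")
print(np.dot(M, N))   # equivalently: M @ N or np.matmul(M, N)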
[ { "code": null, "e": 158, "s": 123, "text": "PROGRAMMINGJava ExamplesC Examples" }, { "code": null, "e": 172, "s": 158, "text": "Java Examples" }, { "code": null, "e": 183, "s": 172, "text": "C Examples" }, { "code": null, "e": 195, "s": 183, "text": "C Tutorials" }, { "code": null, "e": 199, "s": 195, "text": "aws" }, { "code": null, "e": 234, "s": 199, "text": "JAVAEXCEPTIONSCOLLECTIONSSWINGJDBC" }, { "code": null, "e": 245, "s": 234, "text": "EXCEPTIONS" }, { "code": null, "e": 257, "s": 245, "text": "COLLECTIONS" }, { "code": null, "e": 263, "s": 257, "text": "SWING" }, { "code": null, "e": 268, "s": 263, "text": "JDBC" }, { "code": null, "e": 275, "s": 268, "text": "JAVA 8" }, { "code": null, "e": 282, "s": 275, "text": "SPRING" }, { "code": null, "e": 294, "s": 282, "text": "SPRING BOOT" }, { "code": null, "e": 304, "s": 294, "text": "HIBERNATE" }, { "code": null, "e": 311, "s": 304, "text": "PYTHON" }, { "code": null, "e": 315, "s": 311, "text": "PHP" }, { "code": null, "e": 322, "s": 315, "text": "JQUERY" }, { "code": null, "e": 357, "s": 322, "text": "PROGRAMMINGJava ExamplesC Examples" }, { "code": null, "e": 371, "s": 357, "text": "Java Examples" }, { "code": null, "e": 382, "s": 371, "text": "C Examples" }, { "code": null, "e": 394, "s": 382, "text": "C Tutorials" }, { "code": null, "e": 398, "s": 394, "text": "aws" }, { "code": null, "e": 730, "s": 398, "text": "In this tutorial, we will see how to perform basic arithmetic operations, apply trigonometric and logarithmic functions on the array elements of a NumPy array. We will also see how to find sum, mean, maximum and minimum of elements of a NumPy array and then we will also see how to perform matrix multiplication using NumPy arrays." }, { "code": null, "e": 920, "s": 730, "text": "In NumPy, Arithmetic operations are element-wise operations. i.e. we can perform arithmetic operations on the entire array and every element of the array gets updated by the same operation." }, { "code": null, "e": 1141, "s": 920, "text": "For example, suppose we have an array ‘A’ with elements from 1 to 10 and we want to add 4 to each element. We can do this by simply doing ‘A+4’ and we don’t have to iterate the whole array and then add 4 to each element." }, { "code": null, "e": 1246, "s": 1141, "text": "import numpy as np\n\nA= np.arange(1,11)\nprint(\"Array A is:\")\nprint(A)\nC=A+4\nprint(\"Array C is:\")\nprint(C)" }, { "code": null, "e": 1254, "s": 1246, "text": "Output:" }, { "code": null, "e": 1342, "s": 1254, "text": "Array A is:\n[ 1 2 3 4 5 6 7 8 9 10]\nArray C is:\n[ 5 6 7 8 9 10 11 12 13 14]" }, { "code": null, "e": 1428, "s": 1342, "text": "Similarly, we can perform subtraction, division, multiplication etc. on NumPy arrays." }, { "code": null, "e": 1605, "s": 1428, "text": "import numpy as np\nA= np.arange(1,11)\nprint(\"Array A is:\")\nprint(A)\n\nD=A-2\nprint(\"Array D is:\")\nprint(D)\nE=A/2\nprint(\"Array E is:\")\nprint(E)\nF=A*2\nprint(\"Array F is:\")\nprint(F)" }, { "code": null, "e": 1613, "s": 1605, "text": "Output:" }, { "code": null, "e": 1799, "s": 1613, "text": "Array A is:\n[ 1 2 3 4 5 6 7 8 9 10]\nArray D is:\n[-1 0 1 2 3 4 5 6 7 8]\nArray E is:\n[0.5 1. 1.5 2. 2.5 3. 3.5 4. 4.5 5. ]\nArray F is:\n[ 2 4 6 8 10 12 14 16 18 20]" }, { "code": null, "e": 2037, "s": 1799, "text": "There are many functions which are called universal functions and which operate on a NumPy array in an element by element manner. Functions to calculate trigonometric ratios, logarithm and square root are some of the universal functions." 
}, { "code": null, "e": 2296, "s": 2037, "text": "We can calculate the square root of each element of an array by simply using np.sqrt() function and passing the array as a parameter. The output is also an array with the same shape that of the input array and elements having square roots of the input array." }, { "code": null, "e": 2410, "s": 2296, "text": "import numpy as np\n\nA= np.arange(1,11)\nprint(\"Array A is:\")\nprint(A)\n\nB=np.sqrt(A)\nprint(\"Array B is :\")\nprint(B)" }, { "code": null, "e": 2418, "s": 2410, "text": "Output:" }, { "code": null, "e": 2588, "s": 2418, "text": "Array A is:\n[ 1 2 3 4 5 6 7 8 9 10]\nArray B is :\n[1. 1.41421356 1.73205081 2. 2.23606798 2.44948974\n 2.64575131 2.82842712 3. 3.16227766]" }, { "code": null, "e": 2856, "s": 2588, "text": "To calculate the logarithm of each element of the NumPy array, we can use np.log() function and pass the input array as a parameter to it. The output is a new array of the same shape of the input array and having elements as logarithms of elements of the input array." }, { "code": null, "e": 2969, "s": 2856, "text": "import numpy as np\n\nA= np.arange(1,11)\nprint(\"Array A is:\")\nprint(A)\n\nC=np.log(A)\nprint(\"Array C is :\")\nprint(C)" }, { "code": null, "e": 2977, "s": 2969, "text": "Output:" }, { "code": null, "e": 3147, "s": 2977, "text": "Array A is:\n[ 1 2 3 4 5 6 7 8 9 10]\nArray C is :\n[0. 0.69314718 1.09861229 1.38629436 1.60943791 1.79175947\n 1.94591015 2.07944154 2.19722458 2.30258509]" }, { "code": null, "e": 3253, "s": 3147, "text": "Trigonometric functions can also be applied on NumPy array in a similar manner to sqrt and log functions." }, { "code": null, "e": 3365, "s": 3253, "text": "import numpy as np\nA= np.arange(1,11)\nprint(\"Array A is:\")\nprint(A)\n\nD=np.sin(A)\nprint(\"Array D is :\")\nprint(D)" }, { "code": null, "e": 3373, "s": 3365, "text": "Output:" }, { "code": null, "e": 3552, "s": 3373, "text": "Array A is:\n[ 1 2 3 4 5 6 7 8 9 10]\nArray D is :\n[ 0.84147098 0.90929743 0.14112001 -0.7568025 -0.95892427 -0.2794155\n 0.6569866 0.98935825 0.41211849 -0.54402111]" }, { "code": null, "e": 3665, "s": 3552, "text": "import numpy as np\n\nA= np.arange(1,11)\nprint(\"Array A is:\")\nprint(A)\n\nE=np.cos(A)\nprint(\"Array E is :\")\nprint(E)" }, { "code": null, "e": 3673, "s": 3665, "text": "Output:" }, { "code": null, "e": 3853, "s": 3673, "text": "Array A is:\n[ 1 2 3 4 5 6 7 8 9 10]\nArray E is :\n[ 0.54030231 -0.41614684 -0.9899925 -0.65364362 0.28366219 0.96017029\n 0.75390225 -0.14550003 -0.91113026 -0.83907153]" }, { "code": null, "e": 3966, "s": 3853, "text": "import numpy as np\nA= np.arange(1,11)\nprint(\"Array A is:\")\nprint(A)\n\nF=np.tan(A)\nprint(\"Array F is :\")\nprint(F)\n" }, { "code": null, "e": 4204, "s": 3966, "text": "There are also aggregate functions which perform an operation on the whole array and produce a single result. These functions include calculation of sum, minimum, maximum, mean and standard deviation of all the elements of a NumPy array." }, { "code": null, "e": 4434, "s": 4204, "text": "We can calculate the sum of elements of a given NumPy array using sum() method. Suppose we have to calculate the sum of elements of a NumPy array A, then we can simply call A.sum() and it returns the sum of all the elements in A." 
}, { "code": null, "e": 4563, "s": 4434, "text": "import numpy as np\nA= np.arange(1,11)\nprint(\"Array A is:\")\nprint(A)\n\nb= A.sum()\nprint(\"sum of elements of array A is:\")\nprint(b)" }, { "code": null, "e": 4571, "s": 4563, "text": "Output:" }, { "code": null, "e": 4649, "s": 4571, "text": "Array A is:\n[ 1 2 3 4 5 6 7 8 9 10]\nsum of elements of array A is:\n55" }, { "code": null, "e": 4790, "s": 4649, "text": "To find the minimum element in a NumPy array A, we can simply call A.min() method and it returns the minimum number among the elements of A." }, { "code": null, "e": 4934, "s": 4790, "text": "import numpy as np\nA= np.arange(1,11)\nprint(\"Array A is:\")\nprint(A)\n\nmin=A.min()\nprint(\"Minimum number among the elements of A is:\")\nprint(min)" }, { "code": null, "e": 4942, "s": 4934, "text": "Output:" }, { "code": null, "e": 5031, "s": 4942, "text": "Array A is:\n[ 1 2 3 4 5 6 7 8 9 10]\nMinimum number among the elements of A is:\n1" }, { "code": null, "e": 5169, "s": 5031, "text": "We can find maximum element in a NumPy array A by using method A.max() and it returns maximum number among all the elements of the array." }, { "code": null, "e": 5313, "s": 5169, "text": "import numpy as np\nA= np.arange(1,11)\nprint(\"Array A is:\")\nprint(A)\n\nmax=A.max()\nprint(\"Maximum number among the elements of A is:\")\nprint(max)" }, { "code": null, "e": 5321, "s": 5313, "text": "Output:" }, { "code": null, "e": 5411, "s": 5321, "text": "Array A is:\n[ 1 2 3 4 5 6 7 8 9 10]\nMaximum number among the elements of A is:\n10" }, { "code": null, "e": 5503, "s": 5411, "text": "We can find the average value/mean of elements of a NumPy array by using the mean() method." }, { "code": null, "e": 5637, "s": 5503, "text": "import numpy as np\nA= np.arange(1,11)\nprint(\"Array A is:\")\nprint(A)\n\ne= A.mean()\nprint(\"Average of elements of array A is:\")\nprint(e)" }, { "code": null, "e": 5645, "s": 5637, "text": "Output:" }, { "code": null, "e": 5728, "s": 5645, "text": "Array A is:\n[ 1 2 3 4 5 6 7 8 9 10]\nAverage of elements of array A is:\n5.5" }, { "code": null, "e": 5750, "s": 5728, "text": "Introduction to NumPy" }, { "code": null, "e": 5771, "s": 5750, "text": "NumPy Math functions" }, { "code": null, "e": 5788, "s": 5771, "text": "Happy Learning 🙂" }, { "code": null, "e": 6324, "s": 5788, "text": "\nWhat is Python NumPy Library\nJava Program to swap two arrays Example\nDifferent ways to use Lambdas in Python\nWhat is Java Arrays and how it works ?\nJava 8 Stream API and Parallelism\nPython Set Data Structure in Depth\nPHP Multidimensional Arrays Example\nPython Decorators – Classes and Functions\nAngularJs Directive Example Tutorials\nC Program – Bubble Sort Program in C\nBinary Search using Java\nPython How to read input from keyboard\nPython Selenium Automate the Login Form\nBubble Sort In Java\njQuery Get Attributes Example Tutorials\n" }, { "code": null, "e": 6353, "s": 6324, "text": "What is Python NumPy Library" }, { "code": null, "e": 6393, "s": 6353, "text": "Java Program to swap two arrays Example" }, { "code": null, "e": 6433, "s": 6393, "text": "Different ways to use Lambdas in Python" }, { "code": null, "e": 6472, "s": 6433, "text": "What is Java Arrays and how it works ?" 
}, { "code": null, "e": 6506, "s": 6472, "text": "Java 8 Stream API and Parallelism" }, { "code": null, "e": 6541, "s": 6506, "text": "Python Set Data Structure in Depth" }, { "code": null, "e": 6577, "s": 6541, "text": "PHP Multidimensional Arrays Example" }, { "code": null, "e": 6619, "s": 6577, "text": "Python Decorators – Classes and Functions" }, { "code": null, "e": 6657, "s": 6619, "text": "AngularJs Directive Example Tutorials" }, { "code": null, "e": 6694, "s": 6657, "text": "C Program – Bubble Sort Program in C" }, { "code": null, "e": 6719, "s": 6694, "text": "Binary Search using Java" }, { "code": null, "e": 6758, "s": 6719, "text": "Python How to read input from keyboard" }, { "code": null, "e": 6798, "s": 6758, "text": "Python Selenium Automate the Login Form" }, { "code": null, "e": 6818, "s": 6798, "text": "Bubble Sort In Java" }, { "code": null, "e": 6858, "s": 6818, "text": "jQuery Get Attributes Example Tutorials" }, { "code": null, "e": 6864, "s": 6862, "text": "Δ" }, { "code": null, "e": 6887, "s": 6864, "text": " Python – Introduction" }, { "code": null, "e": 6906, "s": 6887, "text": " Python – Features" }, { "code": null, "e": 6935, "s": 6906, "text": " Python – Install on Windows" }, { "code": null, "e": 6962, "s": 6935, "text": " Python – Modes of Program" }, { "code": null, "e": 6986, "s": 6962, "text": " Python – Number System" }, { "code": null, "e": 7008, "s": 6986, "text": " Python – Identifiers" }, { "code": null, "e": 7028, "s": 7008, "text": " Python – Operators" }, { "code": null, "e": 7055, "s": 7028, "text": " Python – Ternary Operator" }, { "code": null, "e": 7088, "s": 7055, "text": " Python – Command Line Arguments" }, { "code": null, "e": 7107, "s": 7088, "text": " Python – Keywords" }, { "code": null, "e": 7128, "s": 7107, "text": " Python – Data Types" }, { "code": null, "e": 7157, "s": 7128, "text": " Python – Upgrade Python PIP" }, { "code": null, "e": 7187, "s": 7157, "text": " Python – Virtual Environment" }, { "code": null, "e": 7210, "s": 7187, "text": " Pyhton – Type Casting" }, { "code": null, "e": 7234, "s": 7210, "text": " Python – String to Int" }, { "code": null, "e": 7267, "s": 7234, "text": " Python – Conditional Statements" }, { "code": null, "e": 7290, "s": 7267, "text": " Python – if statement" }, { "code": null, "e": 7319, "s": 7290, "text": " Python – *args and **kwargs" }, { "code": null, "e": 7345, "s": 7319, "text": " Python – Date Formatting" }, { "code": null, "e": 7380, "s": 7345, "text": " Python – Read input from keyboard" }, { "code": null, "e": 7400, "s": 7380, "text": " Python – raw_input" }, { "code": null, "e": 7424, "s": 7400, "text": " Python – List In Depth" }, { "code": null, "e": 7453, "s": 7424, "text": " Python – List Comprehension" }, { "code": null, "e": 7476, "s": 7453, "text": " Python – Set in Depth" }, { "code": null, "e": 7506, "s": 7476, "text": " Python – Dictionary in Depth" }, { "code": null, "e": 7531, "s": 7506, "text": " Python – Tuple in Depth" }, { "code": null, "e": 7561, "s": 7531, "text": " Python – Stack Datastructure" }, { "code": null, "e": 7591, "s": 7561, "text": " Python – Classes and Objects" }, { "code": null, "e": 7614, "s": 7591, "text": " Python – Constructors" }, { "code": null, "e": 7645, "s": 7614, "text": " Python – Object Introspection" }, { "code": null, "e": 7667, "s": 7645, "text": " Python – Inheritance" }, { "code": null, "e": 7688, "s": 7667, "text": " Python – Decorators" }, { "code": null, "e": 7724, "s": 7688, "text": " Python – Serialization with Pickle" }, { 
"code": null, "e": 7754, "s": 7724, "text": " Python – Exceptions Handling" }, { "code": null, "e": 7788, "s": 7754, "text": " Python – User defined Exceptions" }, { "code": null, "e": 7814, "s": 7788, "text": " Python – Multiprocessing" }, { "code": null, "e": 7852, "s": 7814, "text": " Python – Default function parameters" }, { "code": null, "e": 7880, "s": 7852, "text": " Python – Lambdas Functions" }, { "code": null, "e": 7904, "s": 7880, "text": " Python – NumPy Library" }, { "code": null, "e": 7930, "s": 7904, "text": " Python – MySQL Connector" }, { "code": null, "e": 7962, "s": 7930, "text": " Python – MySQL Create Database" }, { "code": null, "e": 7988, "s": 7962, "text": " Python – MySQL Read Data" }, { "code": null, "e": 8016, "s": 7988, "text": " Python – MySQL Insert Data" }, { "code": null, "e": 8047, "s": 8016, "text": " Python – MySQL Update Records" }, { "code": null, "e": 8078, "s": 8047, "text": " Python – MySQL Delete Records" }, { "code": null, "e": 8111, "s": 8078, "text": " Python – String Case Conversion" }, { "code": null, "e": 8146, "s": 8111, "text": " Howto – Find biggest of 2 numbers" }, { "code": null, "e": 8183, "s": 8146, "text": " Howto – Remove duplicates from List" }, { "code": null, "e": 8221, "s": 8183, "text": " Howto – Convert any Number to Binary" }, { "code": null, "e": 8247, "s": 8221, "text": " Howto – Merge two Lists" }, { "code": null, "e": 8272, "s": 8247, "text": " Howto – Merge two dicts" }, { "code": null, "e": 8312, "s": 8272, "text": " Howto – Get Characters Count in a File" }, { "code": null, "e": 8347, "s": 8312, "text": " Howto – Get Words Count in a File" }, { "code": null, "e": 8382, "s": 8347, "text": " Howto – Remove Spaces from String" }, { "code": null, "e": 8411, "s": 8382, "text": " Howto – Read Env variables" }, { "code": null, "e": 8437, "s": 8411, "text": " Howto – Read a text File" }, { "code": null, "e": 8463, "s": 8437, "text": " Howto – Read a JSON File" }, { "code": null, "e": 8495, "s": 8463, "text": " Howto – Read Config.ini files" }, { "code": null, "e": 8523, "s": 8495, "text": " Howto – Iterate Dictionary" }, { "code": null, "e": 8563, "s": 8523, "text": " Howto – Convert List Of Objects to CSV" }, { "code": null, "e": 8597, "s": 8563, "text": " Howto – Merge two dict in Python" }, { "code": null, "e": 8622, "s": 8597, "text": " Howto – create Zip File" }, { "code": null, "e": 8643, "s": 8622, "text": " Howto – Get OS info" }, { "code": null, "e": 8674, "s": 8643, "text": " Howto – Get size of Directory" }, { "code": null, "e": 8711, "s": 8674, "text": " Howto – Check whether a file exists" }, { "code": null, "e": 8748, "s": 8711, "text": " Howto – Remove key from dictionary" }, { "code": null, "e": 8770, "s": 8748, "text": " Howto – Sort Objects" }, { "code": null, "e": 8808, "s": 8770, "text": " Howto – Create or Delete Directories" }, { "code": null, "e": 8831, "s": 8808, "text": " Howto – Read CSV File" }, { "code": null, "e": 8869, "s": 8831, "text": " Howto – Create Python Iterable class" }, { "code": null, "e": 8900, "s": 8869, "text": " Howto – Access for loop index" }, { "code": null, "e": 8938, "s": 8900, "text": " Howto – Clear all elements from List" }, { "code": null, "e": 8978, "s": 8938, "text": " Howto – Remove empty lists from a List" }, { "code": null, "e": 9025, "s": 8978, "text": " Howto – Remove special characters from String" }, { "code": null, "e": 9057, "s": 9025, "text": " Howto – Sort dictionary by key" } ]
How to programmatically turn on Wifi on Android device?
This example demonstrates how to programmatically turn on Wifi on an Android device.
Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project.
Step 2 − Add the following code to res/layout/activity_main.xml.
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
   xmlns:tools="http://schemas.android.com/tools"
   android:layout_width="match_parent"
   android:layout_height="match_parent"
   android:gravity="center"
   android:orientation="vertical"
   tools:context=".MainActivity">
   <Button
      android:layout_width="wrap_content"
      android:layout_height="wrap_content"
      android:text="Wifi On!"
      android:onClick="enableWifi"/>
   <Button
      android:layout_width="wrap_content"
      android:layout_height="wrap_content"
      android:text="Wifi Off!"
      android:onClick="disableWifi" />
</LinearLayout>
Step 3 − Add the following code to src/MainActivity.java
import android.content.Context;
import android.net.wifi.WifiManager;
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.view.View;
import android.widget.Toast;
public class MainActivity extends AppCompatActivity {
   private WifiManager wifiManager;
   @Override
   protected void onCreate(Bundle savedInstanceState) {
      super.onCreate(savedInstanceState);
      setContentView(R.layout.activity_main);
      wifiManager = (WifiManager) getApplicationContext().getSystemService(Context.WIFI_SERVICE);
   }
   public void enableWifi(View view){
      wifiManager.setWifiEnabled(true);
      Toast.makeText(this, "Wifi enabled", Toast.LENGTH_SHORT).show();
   }
   public void disableWifi(View view){
      wifiManager.setWifiEnabled(false);
      Toast.makeText(this, "Wifi Disabled", Toast.LENGTH_SHORT).show();
   }
}
Step 4 − Add the following code to AndroidManifest.xml
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android" package="app.com.sample">
   <uses-permission android:name="android.permission.ACCESS_WIFI_STATE"/>
   <uses-permission android:name="android.permission.CHANGE_WIFI_STATE"/>
   <application
      android:allowBackup="true"
      android:icon="@mipmap/ic_launcher"
      android:label="@string/app_name"
      android:roundIcon="@mipmap/ic_launcher_round"
      android:supportsRtl="true"
      android:theme="@style/AppTheme">
      <activity android:name=".MainActivity">
         <intent-filter>
            <action android:name="android.intent.action.MAIN" />
            <category android:name="android.intent.category.LAUNCHER" />
         </intent-filter>
      </activity>
   </application>
</manifest>
Let's try to run your application. I assume you have connected your actual Android mobile device with your computer. To run the app from Android Studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option and then check your mobile device, which will display your default screen.
[ { "code": null, "e": 1146, "s": 1062, "text": "This example demonstrates how do I programmatically turn on wifi in android device." }, { "code": null, "e": 1275, "s": 1146, "text": "Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project." }, { "code": null, "e": 1340, "s": 1275, "text": "Step 2 − Add the following code to res/layout/activity_main.xml." }, { "code": null, "e": 2022, "s": 1340, "text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<LinearLayout xmlns:android=\"http://schemas.android.com/apk/res/android\"\n xmlns:tools=\"http://schemas.android.com/tools\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"match_parent\"\n android:gravity=\"center\"\n android:orientation=\"vertical\"\n tools:context=\".MainActivity\">\n <Button\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:text=\"Wifi On!\"\n android:onClick=\"enableWifi\"/>\n <Button\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:text=\"Wifi Off!\"\n android:onClick=\"disableWifi\" />\n</LinearLayout>" }, { "code": null, "e": 2079, "s": 2022, "text": "Step 3 − Add the following code to src/MainActivity.java" }, { "code": null, "e": 2941, "s": 2079, "text": "import android.content.Context;\nimport android.net.wifi.WifiManager;\nimport android.support.v7.app.AppCompatActivity;\nimport android.os.Bundle;\nimport android.view.View;\nimport android.widget.Toast;\npublic class MainActivity extends AppCompatActivity {\n private WifiManager wifiManager;\n @Override\n protected void onCreate(Bundle savedInstanceState) {\n super.onCreate(savedInstanceState);\n setContentView(R.layout.activity_main);\n wifiManager = (WifiManager) getApplicationContext().getSystemService(Context.WIFI_SERVICE);\n }\n public void enableWifi(View view){\n wifiManager.setWifiEnabled(true);\n Toast.makeText(this, \"Wifi enabled\", Toast.LENGTH_SHORT).show();\n }\n public void disableWifi(View view){\n wifiManager.setWifiEnabled(false);\n Toast.makeText(this, \"Wifi Disabled\", Toast.LENGTH_SHORT).show();\n }\n}" }, { "code": null, "e": 2996, "s": 2941, "text": "Step 4 - Add the following code to androidManifest.xml" }, { "code": null, "e": 3814, "s": 2996, "text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<manifest xmlns:android=\"http://schemas.android.com/apk/res/android\" package=\"app.com.sample\">\n <uses-permission android:name=\"android.permission.ACCESS_WIFI_STATE\"/>\n <uses-permission android:name=\"android.permission.CHANGE_WIFI_STATE\"/>\n <application\n android:allowBackup=\"true\"\n android:icon=\"@mipmap/ic_launcher\"\n android:label=\"@string/app_name\"\n android:roundIcon=\"@mipmap/ic_launcher_round\"\n android:supportsRtl=\"true\"\n android:theme=\"@style/AppTheme\">\n <activity android:name=\".MainActivity\">\n <intent-filter>\n <action android:name=\"android.intent.action.MAIN\" />\n <category android:name=\"android.intent.category.LAUNCHER\" />\n </intent-filter>\n </activity>\n </application>\n</manifest>" }, { "code": null, "e": 4161, "s": 3814, "text": "Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click Run Icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen –" } ]
Python program to calculate Date, Month and Year from Seconds - GeeksforGeeks
15 Mar, 2021 Given the number of seconds, the task is to write a Python program to calculate the date, month, and year in the format MM-DD-YYYY that have been passed from 1 January 1947. Examples: Input: 0 Output: 01-01-1947 Input: 123456789 Output: 11-29-1950 Input: 9876543210 Output: 12-22-2259 Step-by-step Approach: Create a function to get the number of days in a year. Python # function to get number of # days in the year # if leap year then 366 # else 365 def dayInYear(year): if (year % 4) == 0: if (year % 100) == 0: if (year % 400) == 0: return 366 else: return 365 else: return 366 else: return 365 Create a function to count the years after 1947. Python3 # counting the years after 1947 def getYear(days): year = 1946 while True: year += 1 dcnt = dayInYear(year) if days >= dcnt: days -= dcnt else: break return year, days Create a function to count the number of months. Python3 # counting the number of months def monthCnt(days, year): if days == 0: return 1, 0 else: month_num = 1 months = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31] if dayInYear(year) == 366: months[1] = 29 for day in months: if day < days: month_num += 1 days -= day else: break return month_num, days Create a function to get a date using the number of seconds. Python3 # getting date using number of seconds def getDate(num_sec): # converting seconds into days days_sec = 24*60*60 days = num_sec//days_sec day_started = False # if some seconds are more if days % days_sec != 0: day_started = True # getting year year, days = getYear(days) # getting month month, days = monthCnt(days, year) if day_started or num_sec == 0: days += 1 # preparing date_format date = "" if month < 10: date = date+"0"+str(month) else: date = date+str(month) date = date+"-" if days < 10: date = date+"0"+str(days) else: date = date+str(days) date = date+"-" date = date+str(year) return date Create the driver code and call the required function. 
Python3
# Driver Code

# returns 01-01-1947
date_format = getDate(0)
print(date_format)

# returns 11-29-1950
date_format = getDate(123456789)
print(date_format)

# returns 12-22-2259
date_format = getDate(9876543210)
print(date_format)

Below is the complete program based on the above stepwise approach:
Python3
# function to get number of
# days in the year
# if leap year then 366
# else 365
def dayInYear(year):
    if (year % 4) == 0:
        if (year % 100) == 0:
            if (year % 400) == 0:
                return 366
            else:
                return 365
        else:
            return 366
    else:
        return 365


# counting the years after 1947
def getYear(days):
    year = 1946
    while True:
        year += 1
        dcnt = dayInYear(year)
        if days >= dcnt:
            days -= dcnt
        else:
            break
    return year, days


# counting the number of months
def monthCnt(days, year):
    if days == 0:
        return 1, 0
    else:
        month_num = 1
        months = [31, 28, 31, 30, 31,
                  30, 31, 31, 30, 31,
                  30, 31]
        if dayInYear(year) == 366:
            months[1] = 29
        for day in months:
            if day < days:
                month_num += 1
                days -= day
            else:
                break
        return month_num, days


# getting date using number of seconds
def getDate(num_sec):

    # converting seconds into days
    days_sec = 24*60*60
    days = num_sec//days_sec
    day_started = False

    # if some seconds are more
    if days % days_sec != 0:
        day_started = True

    # getting year
    year, days = getYear(days)

    # getting month
    month, days = monthCnt(days, year)

    if day_started or num_sec == 0:
        days += 1

    # preparing date_format
    date = ""
    if month < 10:
        date = date+"0"+str(month)
    else:
        date = date+str(month)

    date = date+"-"

    if days < 10:
        date = date+"0"+str(days)
    else:
        date = date+str(days)

    date = date+"-"

    date = date+str(year)

    return date


# Driver Code

# returns 01-01-1947
date_format = getDate(0)
print(date_format)

# returns 11-29-1950
date_format = getDate(123456789)
print(date_format)

# returns 12-22-2259
date_format = getDate(9876543210)
print(date_format)

Output:
01-01-1947
11-29-1950
12-22-2259
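As a cross-check on the hand-rolled calendar logic above (not part of the original article), the same conversion can also be done with Python's standard datetime module. The function name below is arbitrary; it reproduces the three sample outputs shown above.

from datetime import datetime, timedelta

def get_date(num_sec):
    # 1 January 1947 is treated as the starting day, matching the convention above.
    d = datetime(1947, 1, 1) + timedelta(seconds=num_sec)
    return d.strftime("%m-%d-%Y")

print(get_date(0))            # 01-01-1947
print(get_date(123456789))    # 11-29-1950
print(get_date(9876543210))   # 12-22-2259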
[ { "code": null, "e": 23973, "s": 23942, "text": " \n15 Mar, 2021\n" }, { "code": null, "e": 24147, "s": 23973, "text": "Given the number of seconds, the task is to write a Python program to calculate the date, month, and year in the format MM-DD-YYYY that have been passed from 1 January 1947." }, { "code": null, "e": 24157, "s": 24147, "text": "Examples:" }, { "code": null, "e": 24260, "s": 24157, "text": "Input: 0\nOutput: 01-01-1947\n\nInput: 123456789\nOutput: 11-29-1950\n\nInput: 9876543210\nOutput: 12-22-2259" }, { "code": null, "e": 24283, "s": 24260, "text": "Step-by-step Approach:" }, { "code": null, "e": 24338, "s": 24283, "text": "Create a function to get the number of days in a year." }, { "code": null, "e": 24345, "s": 24338, "text": "Python" }, { "code": "\n\n\n\n\n\n\n# function to get number of \n# days in the year \n# if leap year then 366 \n# else 365 \ndef dayInYear(year): \n \n if (year % 4) == 0: \n \n if (year % 100) == 0: \n \n if (year % 400) == 0: \n return 366\n else: \n return 365\n \n else: \n return 366\n \n else: \n return 365\n\n\n\n\n\n", "e": 24765, "s": 24355, "text": null }, { "code": null, "e": 24814, "s": 24765, "text": "Create a function to count the years after 1947." }, { "code": null, "e": 24822, "s": 24814, "text": "Python3" }, { "code": "\n\n\n\n\n\n\n# counting the years after 1947 \ndef getYear(days): \n year = 1946\n \n while True: \n year += 1\n dcnt = dayInYear(year) \n \n if days >= dcnt: \n days -= dcnt \n else: \n break\n return year, days \n\n\n\n\n\n", "e": 25108, "s": 24832, "text": null }, { "code": null, "e": 25157, "s": 25108, "text": "Create a function to count the number of months." }, { "code": null, "e": 25165, "s": 25157, "text": "Python3" }, { "code": "\n\n\n\n\n\n\n# counting the number of months \ndef monthCnt(days, year): \n \n if days == 0: \n return 1, 0\n else: \n month_num = 1\n months = [31, 28, 31, 30, 31, \n 30, 31, 31, 30, 31, \n 30, 31] \n \n if dayInYear(year) == 366: \n months[1] = 29\n \n for day in months: \n \n if day < days: \n month_num += 1\n days -= day \n else: \n break\n \n return month_num, days \n\n\n\n\n\n", "e": 25740, "s": 25175, "text": null }, { "code": null, "e": 25801, "s": 25740, "text": "Create a function to get a date using the number of seconds." }, { "code": null, "e": 25809, "s": 25801, "text": "Python3" }, { "code": "\n\n\n\n\n\n\n# getting date using number of seconds \ndef getDate(num_sec): \n \n # converting seconds into days \n days_sec = 24*60*60\n days = num_sec//days_sec \n day_started = False\n \n # if some seconds are more \n if days % days_sec != 0: \n day_started = True\n \n # getting year \n year, days = getYear(days) \n \n # getting month \n month, days = monthCnt(days, year) \n \n if day_started or num_sec == 0: \n days += 1\n \n # preparing date_format \n date = \"\" \n if month < 10: \n date = date+\"0\"+str(month) \n else: \n date = date+str(month) \n \n date = date+\"-\"\n \n if days < 10: \n date = date+\"0\"+str(days) \n else: \n date = date+str(days) \n \n date = date+\"-\"\n \n date = date+str(year) \n \n return date \n\n\n\n\n\n", "e": 26652, "s": 25819, "text": null }, { "code": null, "e": 26707, "s": 26652, "text": "Create the driver code and call the required function." 
}, { "code": null, "e": 26715, "s": 26707, "text": "Python3" }, { "code": "\n\n\n\n\n\n\n# Driver Code \n \n# returns 01-01-1970 \ndate_format = getDate(0) \nprint(date_format) \n \n# returns 11-29-1973 \ndate_format = getDate(123456789) \nprint(date_format) \n \n# returns 12-22-2282 \ndate_format = getDate(9876543210) \nprint(date_format)\n\n\n\n\n\n", "e": 26982, "s": 26725, "text": null }, { "code": null, "e": 27050, "s": 26982, "text": "Below is the complete program based on the above stepwise approach:" }, { "code": null, "e": 27058, "s": 27050, "text": "Python3" }, { "code": "\n\n\n\n\n\n\n# function to get num of \n# days in the year \n# if leap year then 366 \n# else 365 \ndef dayInYear(year): \n if (year % 4) == 0: \n if (year % 100) == 0: \n if (year % 400) == 0: \n return 366\n else: \n return 365\n else: \n return 366\n else: \n return 365\n \n \n# counting the years after 1947 \ndef getYear(days): \n year = 1946\n while True: \n year += 1\n dcnt = dayInYear(year) \n if days >= dcnt: \n days -= dcnt \n else: \n break\n return year, days \n \n \n# counting the number of months \ndef monthCnt(days, year): \n if days == 0: \n return 1, 0\n else: \n month_num = 1\n months = [31, 28, 31, 30, 31, \n 30, 31, 31, 30, 31, \n 30, 31] \n if dayInYear(year) == 366: \n months[1] = 29\n for day in months: \n if day < days: \n month_num += 1\n days -= day \n else: \n break\n return month_num, days \n \n \n# getting date using number of seconds \ndef getDate(num_sec): \n \n # converting seconds into days \n days_sec = 24*60*60\n days = num_sec//days_sec \n day_started = False\n \n # if some seconds are more \n if days % days_sec != 0: \n day_started = True\n \n # getting year \n year, days = getYear(days) \n \n # getting month \n month, days = monthCnt(days, year) \n \n if day_started or num_sec == 0: \n days += 1\n \n # preparing date_format \n date = \"\" \n if month < 10: \n date = date+\"0\"+str(month) \n else: \n date = date+str(month) \n \n date = date+\"-\"\n \n if days < 10: \n date = date+\"0\"+str(days) \n else: \n date = date+str(days) \n \n date = date+\"-\"\n \n date = date+str(year) \n \n return date \n \n \n# Driver Code \n \n# returns 01-01-1970 \ndate_format = getDate(0) \nprint(date_format) \n \n# returns 11-29-1973 \ndate_format = getDate(123456789) \nprint(date_format) \n \n# returns 12-22-2282 \ndate_format = getDate(9876543210) \nprint(date_format) \n\n\n\n\n\n", "e": 29249, "s": 27068, "text": null }, { "code": null, "e": 29257, "s": 29249, "text": "Output:" }, { "code": null, "e": 29290, "s": 29257, "text": "01-01-1947\n11-29-1950\n12-22-2259" }, { "code": null, "e": 29316, "s": 29290, "text": "\nPython datetime-program\n" }, { "code": null, "e": 29325, "s": 29316, "text": "\nPython\n" }, { "code": null, "e": 29343, "s": 29325, "text": "\nPython Programs\n" }, { "code": null, "e": 29548, "s": 29343, "text": "Writing code in comment? \n Please use ide.geeksforgeeks.org, \n generate link and share the link here.\n " }, { "code": null, "e": 29580, "s": 29548, "text": "How to Install PIP on Windows ?" }, { "code": null, "e": 29636, "s": 29580, "text": "How to drop one or multiple columns in Pandas Dataframe" }, { "code": null, "e": 29678, "s": 29636, "text": "How To Convert Python Dictionary To JSON?" 
}, { "code": null, "e": 29720, "s": 29678, "text": "Check if element exists in list in Python" }, { "code": null, "e": 29756, "s": 29720, "text": "Python | Pandas dataframe.groupby()" }, { "code": null, "e": 29778, "s": 29756, "text": "Defaultdict in Python" }, { "code": null, "e": 29817, "s": 29778, "text": "Python | Get dictionary keys as a list" }, { "code": null, "e": 29863, "s": 29817, "text": "Python | Split string into list of characters" }, { "code": null, "e": 29901, "s": 29863, "text": "Python | Convert a list to dictionary" } ]
Increment and Decrement Operators in Python?
Python does not have unary increment/decrement operators (++/--). Instead, to increment a value, use
a += 1
to decrement a value, use −
a -= 1
>>> a = 0
>>>
>>> #Increment
>>> a +=1
>>>
>>> #Decrement
>>> a -= 1
>>>
>>> #value of a
>>> a
0
Python does not provide multiple ways to do the same thing.
However, be careful if you are coming from a language like C: Python doesn't have "variables" in the sense that C does; instead, Python uses names and objects, and in Python integers (ints) are immutable.
Let's understand it with an example −
>>> a =1
>>> print(id(a))
1919375088
>>> print(hex(id(a)))
0x726756f0
So what the above statement means in Python is: create an object of type int having value 1 and give the name a to it. The object is an instance of int having value 1 and the name a refers to it. The assigned name a and the object to which it refers are distinct.
Now let's increment a −
>>> a +=1
>>> print(id(a))
1919375104
>>> print(hex(id(a)))
0x72675700
As ints are immutable, Python understands the above statement as follows −
Look up the object that a refers to (it is an int with id 0x726756f0).
Look up the value of object 0x726756f0 (it is 1).
Add 1 to that value (1+1 = 2).
Create a new int object with value 2 (object with id 0x72675700).
Rebind the name a to this new object (0x72675700).
Now a refers to object 0x72675700 and the previous object (0x726756f0) is no longer referred to by the name a. If there aren't any other names referring to the original object, it will be garbage collected later.
So from the above, you can understand that when we do:
a += 1
this reassigns a to a+1. That's not an increment operator: it does not modify the object in place, it rebinds the name a to a new object.
Let's understand the above increment/decrement behaviour with one more example −
>>> a = b = c =1
>>> id(a)
1919375088
>>> id(b)
1919375088
>>> id(c)
1919375088
>>> #Above all have the same id
>>>
>>> # Now increment a
>>> a +=1
>>> id(a)
1919375104
>>> id(b)
1919375088
>>> id(c)
1919375088
From the above you can understand that we have a single object that a, b and c refer to (an int with id 1919375088).
On incrementing the value of a, the name a is rebound to a+1 (id: 1919375104), while b and c still refer to the same object (1919375088).
Note that ++a and --a are still accepted by Python, but they are parsed as two stacked unary plus/minus operators, not as increment/decrement, so the value is left unchanged −
>>> a =1
>>> ++a
1
>>> --a
1
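One way to see why ++a leaves the value unchanged (an illustrative addition, not part of the original text) is to ask Python's ast module how the expression is parsed: it reports two nested unary plus nodes rather than any increment operation.

import ast

# ++a parses as +(+a): two nested UnaryOp/UAdd nodes, not an increment.
print(ast.dump(ast.parse("++a", mode="eval")))

a = 1
print(+(+a))   # still 1 -- the value bound to a is unchanged
print(-(-a))   # also 1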
[ { "code": null, "e": 1162, "s": 1062, "text": "Python does not have unary increment/decrement operator( ++/--). Instead to increament a value, use" }, { "code": null, "e": 1169, "s": 1162, "text": "a += 1" }, { "code": null, "e": 1196, "s": 1169, "text": "to decrement a value, use−" }, { "code": null, "e": 1203, "s": 1196, "text": "a -= 1" }, { "code": null, "e": 1300, "s": 1203, "text": ">>> a = 0\n>>>\n>>> #Increment\n>>> a +=1\n>>>\n>>> #Decrement\n>>> a -= 1\n>>>\n>>> #value of a\n>>> a\n0" }, { "code": null, "e": 1361, "s": 1300, "text": "Python does not provide multiple ways to do the same thing ." }, { "code": null, "e": 1563, "s": 1361, "text": "However, be careful if you are coming from a languae like C, Python doesn’t have “variables” in the sense that C does, instead python uses names and objects and in python integers(int’s) are immutable." }, { "code": null, "e": 1600, "s": 1563, "text": "Let’s understand it with an example−" }, { "code": null, "e": 1670, "s": 1600, "text": ">>> a =1\n>>> print(id(a))\n1919375088\n>>> print(hex(id(a)))\n0x726756f0" }, { "code": null, "e": 1930, "s": 1670, "text": "So what above statement means in python is: create an object of type int having value 1 and give the name a to it. The object is an instance of int having value 1 and the name a refers to it. The assigned name a and the object to which it refers are distinct." }, { "code": null, "e": 1951, "s": 1930, "text": "Now lets increment a" }, { "code": null, "e": 2022, "s": 1951, "text": ">>> a +=1\n>>> print(id(a))\n1919375104\n>>> print(hex(id(a)))\n0x72675700" }, { "code": null, "e": 2081, "s": 2022, "text": "As int are immutable, python understand above statement as" }, { "code": null, "e": 2150, "s": 2081, "text": "Look up the object that a refers to (it is an int and id 0x726756f0)" }, { "code": null, "e": 2219, "s": 2150, "text": "Look up the object that a refers to (it is an int and id 0x726756f0)" }, { "code": null, "e": 2269, "s": 2219, "text": "Look up the value of object 0x726756f0 (it is 1)." }, { "code": null, "e": 2319, "s": 2269, "text": "Look up the value of object 0x726756f0 (it is 1)." }, { "code": null, "e": 2348, "s": 2319, "text": "Add 1 to that value (1+1 =2)" }, { "code": null, "e": 2377, "s": 2348, "text": "Add 1 to that value (1+1 =2)" }, { "code": null, "e": 2443, "s": 2377, "text": "Create a new int object with value 2 (object with id 0x72675700)." }, { "code": null, "e": 2509, "s": 2443, "text": "Create a new int object with value 2 (object with id 0x72675700)." }, { "code": null, "e": 2559, "s": 2509, "text": "Rebind the name a to this new object (0x72675700)" }, { "code": null, "e": 2609, "s": 2559, "text": "Rebind the name a to this new object (0x72675700)" }, { "code": null, "e": 2820, "s": 2609, "text": "Now a refers to object 0x72675700 and the previous object (0x726756f0) is no longer refered to by the name a. If there aren’t any other names referring to the original object it will be garbage collected later." }, { "code": null, "e": 3031, "s": 2820, "text": "Now a refers to object 0x72675700 and the previous object (0x726756f0) is no longer refered to by the name a. If there aren’t any other names referring to the original object it will be garbage collected later." }, { "code": null, "e": 3084, "s": 3031, "text": "So from above, you can understand when we do:\na += 1" }, { "code": null, "e": 3200, "s": 3084, "text": "This will reassign a to a+1. That’s not an increment operator, because it does not increment a, but it reassign it." 
}, { "code": null, "e": 3267, "s": 3200, "text": "Let’s understand above increment/decrement with some more example−" }, { "code": null, "e": 3478, "s": 3267, "text": ">>> a = b = c =1\n>>> id(a)\n1919375088\n>>> id(b)\n1919375088\n>>> id(c)\n1919375088\n>>> #Above all have the same id\n>>>\n>>> # Now increment a\n>>> a +=1\n>>> id(a)\n1919375104\n>>> id(b)\n1919375088\n>>> id(c)\n1919375088" }, { "code": null, "e": 3586, "s": 3478, "text": "From above you can understand we have a single object that a, b and c refers to (an int with id 1919375088)" }, { "code": null, "e": 3713, "s": 3586, "text": "On incrementing the value of a, now a is reasign to a+1 (id: 1919375104) and other b and c refers to same object (1919375088)." }, { "code": null, "e": 3759, "s": 3713, "text": "Also python does come up with ++/-- operator." }, { "code": null, "e": 3788, "s": 3759, "text": ">>> a =1\n>>> ++a\n1\n>>> --a\n1" } ]
A Flask API for serving scikit-learn models | by Amir Ziai | Towards Data Science
Scikit-learn is an intuitive and powerful Python machine learning library that makes training and validating many models fairly easy. Scikit-learn models can be persisted (pickled) to avoid retraining the model every time they are used. You can use Flask to create an API that can provide predictions based on a set of input variables using a pickled model. Before we get into Flask, it's important to point out that scikit-learn does not handle categorical variables and missing values. Categorical variables need to be encoded as numeric values. Typically categorical variables are transformed using OneHotEncoder (OHE) or LabelEncoder. LabelEncoder assigns an integer to each categorical value and transforms the original variable to a new variable with the corresponding integers substituted for the categorical values. The problem with this approach is that a nominal variable is effectively transformed into an ordinal variable, which may fool a model into thinking that the order is meaningful. OHE, on the other hand, does not suffer from this issue; however, it tends to explode the number of transformed variables, since a new variable is created for every value of a categorical variable. One thing to know about LabelEncoder is that the transformation will change based on the number of categorical values in a variable. Let's say you have a “subscription” variable with “gold” and “platinum” values. LabelEncoder will map these to 0 and 1 respectively. Now if you add the value “free” to the mix, the assignment changes (free is encoded as 0, gold as 1, and platinum as 2). For this reason it's important to keep your original LabelEncoder around for transformation at prediction time. For this example I am going to use the titanic dataset. To simplify things further I will only use four variables: age, sex, embarked, and survived. import pandas as pddf = pd.read_csv('titanic.csv')include = ['Age', 'Sex', 'Embarked', 'Survived']df_ = df[include] # only using 4 variables Sex and Embarked are categorical variables and need to be transformed. “Age” has missing values, which are typically imputed, meaning they are replaced by a summary statistic such as the median or mean. Missing values can be quite meaningful and it's worth investigating what they represent in real-world applications. Here I'm simply going to replace NaNs with 0. categoricals = []for col, col_type in df_.dtypes.iteritems(): if col_type == 'O': categoricals.append(col) else: df_[col].fillna(0, inplace=True) The above snippet will iterate over all columns in df_ and append categorical variables (with data type “O”) to the categoricals list. For non-categorical variables (integers and floats), which is only age in this case, I'm replacing NaNs with zeros. Filling NaNs with a single value may have unintended consequences, especially if the value that you're replacing NaNs with is within the observed range for the numeric variable. Since zero is not an observed and legitimate age value, I'm not introducing bias; I would have been if I had used 40! Now we're ready to OHE our categorical variables. Pandas provides a simple method get_dummies for creating OHE variables for a given dataframe. df_ohe = pd.get_dummies(df, columns=categoricals, dummy_na=True) The nice thing about OHE is that it's deterministic. A new column is created for every column/value combination, in the following column_value format. For instance for the “Embarked” variable we're going to get “Embarked_C”, “Embarked_Q”, “Embarked_S”, and “Embarked_nan”.
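As a side note (this snippet is an editorial addition, not part of the original post), the LabelEncoder remapping issue described above is easy to reproduce. The exact integer codes shown assume scikit-learn's LabelEncoder, which sorts the class labels before assigning codes:

# Reproducing the remapping pitfall with LabelEncoder
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
le.fit(["gold", "platinum"])
print(list(le.classes_), le.transform(["gold", "platinum"]))
# ['gold', 'platinum'] [0 1]

le.fit(["free", "gold", "platinum"])
print(list(le.classes_), le.transform(["gold", "platinum"]))
# ['free', 'gold', 'platinum'] [1 2]  <- the codes for gold and platinum shifted

This is why the fitted encoder has to be persisted along with the model if LabelEncoder is used.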
Now that we've successfully transformed our dataset, we're ready to train our model. # using a random forest classifier (can be any classifier)from sklearn.ensemble import RandomForestClassifier as rfdependent_variable = 'Survived'x = df_ohe[df_ohe.columns.difference([dependent_variable])]y = df_ohe[dependent_variable]clf = rf()clf.fit(x, y) The trained model is ready to be pickled. I'm going to use sklearn's joblib. from sklearn.externals import joblibjoblib.dump(clf, 'model.pkl') That's it! We have persisted our model. We can load this model into memory in a single line. clf = joblib.load('model.pkl') We're now ready to use Flask to serve our persisted model. Flask is pretty minimalistic. Here's what you need to start a bare bones Flask application (on port 8080 in this case). from flask import Flaskapp = Flask(__name__)if __name__ == '__main__': app.run(port=8080) We have to do two things: (1) load our persisted model into memory when the application starts, and (2) create an endpoint that takes input variables, transforms them into the appropriate format, and returns predictions. from flask import Flask, request, jsonifyfrom sklearn.externals import joblibimport pandas as pdapp = Flask(__name__)@app.route('/predict', methods=['POST'])def predict(): json_ = request.json query_df = pd.DataFrame(json_) query = pd.get_dummies(query_df) prediction = clf.predict(query) return jsonify({'prediction': list(prediction)})if __name__ == '__main__': clf = joblib.load('model.pkl') app.run(port=8080) This would only work under ideal circumstances where the incoming request contains all possible values for the categorical variables. If that's not the case, get_dummies would generate a dataframe that has fewer columns than the classifier expects, which would result in a runtime error. Also, missing values in numerical variables need to be replaced using the same methodology that we trained the model with. A solution to the fewer-than-expected number of columns is to persist the list of columns from training. Remember that Python objects (including lists and dictionaries) can be pickled. To do this I'm going to use joblib, as I did previously, to dump the list of columns into a pkl file. model_columns = list(x.columns)joblib.dump(model_columns, 'model_columns.pkl') Since we have this list persisted, we can just replace the missing values with zeros at the time of prediction. Also, we have to load the model columns when the application starts. @app.route('/predict', methods=['POST'])def predict(): json_ = request.json query_df = pd.DataFrame(json_) query = pd.get_dummies(query_df) for col in model_columns: if col not in query.columns: query[col] = 0 prediction = clf.predict(query) return jsonify({'prediction': list(prediction)})if __name__ == '__main__': clf = joblib.load('model.pkl') model_columns = joblib.load('model_columns.pkl') app.run(port=8080) This solution is still not foolproof. If you happen to send values that were not seen as a part of the training set, get_dummies will produce extra columns and you'll run into an error. For this solution to work we need to remove the extra columns that are not a part of model_columns from the query dataframe. A working solution is available on GitHub.
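One compact way to handle both the missing and the extra columns in a single step is pandas' reindex. The snippet below is only a sketch of that idea (an editorial addition using the query and model_columns names from the post; it is not necessarily the solution in the GitHub repository mentioned above):

# Force the incoming dataframe into exactly the training-time columns:
# columns missing from the request are added and filled with 0, and columns
# the model has never seen are dropped.
query = query.reindex(columns=model_columns, fill_value=0)
prediction = clf.predict(query)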
[ { "code": null, "e": 529, "s": 171, "text": "Scikit-learn is an intuitive and powerful Python machine learning library that makes training and validating many models fairly easy. Scikit-learn models can be persisted (pickled) to avoid retraining the model every time they are used. You can use Flask to create an API that can provide predictions based on a set of input variables using a pickled model." }, { "code": null, "e": 1358, "s": 529, "text": "Before we get into Flask it’s important to point out that scikit-learn does not handle categorical variables and missing values. Categorical variables need to be encoded as numeric values. Typically categorical variables are transformed using OneHotEncoder (OHE) or LabelEncoder. LabelEncoder assigns an integer to each categorical value and transforms the original variable to a new variable with corresponding integers replaced for categorical variables. The problem with this approach is that a nominal variable is effectively transformed to an ordinal variable which may fool a model into thinking that the order is meaningful. OHE, on the other hand, does not suffer from this issue, however it tends to explode the number of transformed variables since a new variable is created for every value of a categorical variables." }, { "code": null, "e": 1863, "s": 1358, "text": "One thing to know about LabelEncoder is that the transformation will change based on the number of categorical values in a variable. Let’s say you have a “subscription” variable with “gold” and “platinum” values. LabelEncoder will map these to 0 and 1 respectively. Now if you add the value “free” to the mix the assignment is changed (free is encoded as 0, gold to 1, and platinum to 2). For this reason it’s important to keep your original LabelEncoder around for transformation at the prediction time." }, { "code": null, "e": 2012, "s": 1863, "text": "For this example I am going to use the titanic dataset. To simplify things further I will only use four variables: age, sex, embarked, and survived." }, { "code": null, "e": 2154, "s": 2012, "text": "import pandas as pddf = pd.read_csv('titanic.csv')include = ['Age', 'Sex', 'Embarked', 'Survived']df_ = df[include] # only using 4 variables" }, { "code": null, "e": 2509, "s": 2154, "text": "Sex and Embarked are categorical variables and need to be transformed. “Age” has missing values which is typically imputed, meaning it’s replaced by a summary statistic such as median or mean. Missing values can be quite meaningful and it’s worth investigating what they represent in real-world applications. Here I’m simply going to replace NaNs with 0." }, { "code": null, "e": 2681, "s": 2509, "text": "categoricals = []for col, col_type in df_.dtypes.iteritems(): if col_type == 'O': categoricals.append(col) else: df_[col].fillna(0, inplace=True)" }, { "code": null, "e": 3218, "s": 2681, "text": "The above snippet will iterate over all columns in df_ and append categorical variables (with data type “O”) to the categoricals list. For non-categorical variables (integers and floats), which is only age in this case, I’m replacing NaNs with zeros. Filling NaNs with a single value may have unintended consequences, especially if the value that you’re replacing NaNs with is within the observed range for the numeric variable. Since zero is not an observed and legitimate age value I’m not introducing bias, I would have if I used 40!" }, { "code": null, "e": 3362, "s": 3218, "text": "Now we’re ready to OHE our categorical variables. 
Pandas provides a simple method get_dummies for creating OHE variables for a given dataframe." }, { "code": null, "e": 3427, "s": 3362, "text": "df_ohe = pd.get_dummies(df, columns=categoricals, dummy_na=True)" }, { "code": null, "e": 3700, "s": 3427, "text": "The nice thing about OHE is that it’s deterministic. A new column is created for every column/value combination, in the following column_value format. For instance for the “Embarked” variable we’re going to get “Embarked_C”, “Embarked_Q”, “Embarked_S”, and “Embarked_nan”." }, { "code": null, "e": 3784, "s": 3700, "text": "Now that we’ve successfully transformed our dataset we’re ready to train our model." }, { "code": null, "e": 4042, "s": 3784, "text": "# using a random forest classifier (can be any classifier)from sklearn.ensemble import RandomForestClassifier as rfdependent_variable = 'Survived'x = df_ohe[df_ohe.columns.difference([dependent_variable])y = df_ohe[dependent_variable]clf = rf()clf.fit(x, y)" }, { "code": null, "e": 4119, "s": 4042, "text": "The trained model is ready to be pickled. I’m going to use sklearn’s joblib." }, { "code": null, "e": 4185, "s": 4119, "text": "from sklearn.externals import joblibjoblib.dump(clf, 'model.pkl')" }, { "code": null, "e": 4278, "s": 4185, "text": "That’s it! We have persisted our model. We can load this model into memory in a single line." }, { "code": null, "e": 4309, "s": 4278, "text": "clf = joblib.load('model.pkl')" }, { "code": null, "e": 4368, "s": 4309, "text": "We’re now ready to use Flask to serve our persisted model." }, { "code": null, "e": 4488, "s": 4368, "text": "Flask is pretty minimalistic. Here’s what you need to start a bare bones Flask application (on port 8080 in this case)." }, { "code": null, "e": 4582, "s": 4488, "text": "from flask import Flaskapp = Flask(__name__)if __name__ == '__main__': app.run(port=8080)" }, { "code": null, "e": 4803, "s": 4582, "text": "We have to do two things: (1) load our persisted model into memory when the application starts, and (2) create an endpoint that takes input variables, transforms them into the appropriate format, and returns predictions." }, { "code": null, "e": 5236, "s": 4803, "text": "from flask import Flask, jsonifyfrom sklearn.externals import joblibimport pandas as pdapp = Flask(__name__)@app.route('/predict', methods=['POST'])def predict(): json_ = request.json query_df = pd.DataFrame(json_) query = pd.get_dummies(query_df) prediction = clf.predict(query) return jsonify({'prediction': list(prediction)})if __name__ == '__main__': clf = joblib.load('model.pkl') app.run(port=8080)" }, { "code": null, "e": 5627, "s": 5236, "text": "This would only work under ideal circumstances where the incoming request contains all possible values for the categorical variables. If that’s not the case, get_dummies would generate a dataframe that has less columns than the classifier excepts, which would result in a runtime error. Also numerical variables need to be replaced using the same methodology that we trained the model with." }, { "code": null, "e": 5913, "s": 5627, "text": "A solution to the less than expected number of columns is to persist the list of columns from training. Remember that Python objects (including lists and dictionaries) can be pickled. To do this I’m going to use joblib, as I did previously, to dump the list of columns into a pkl file." 
}, { "code": null, "e": 5993, "s": 5913, "text": "model_columns = list(x.columns)joblib.dumps(model_columns, 'model_columns.pkl')" }, { "code": null, "e": 6168, "s": 5993, "text": "Since we have this list persisted we can just replace the missing values with zeros at the time of prediction. Also we have to load model columns when the application starts." }, { "code": null, "e": 6643, "s": 6168, "text": "@app.route('/predict', methods=['POST'])def predict(): json_ = request.json query_df = pd.DataFrame(json_) query = pd.get_dummies(query_df) for col in model_columns: if col not in query.columns: query[col] = 0 prediction = clf.predict(query) return jsonify({'prediction': list(prediction)})if __name__ == '__main__': clf = joblib.load('model.pkl') model_columns = joblib.load('model_columns.pkl') app.run(port=8080)" }, { "code": null, "e": 6954, "s": 6643, "text": "This solution is still not foolproof. If you happen to send values that were not seen as a part of the training set, get_dummies will produce extra columns and you’ll run into an error. For this solution to work we need to remove the extra columns that are not a part of model_columns from the query dataframe." } ]
How to parse JSON object in JavaScript?
To parse a JSON object in JavaScript, implement the following code − <!DOCTYPE html> <html> <body> <script> myData = JSON.parse('{"event1":{"title":"Employment period","start":"12\/29\/2011 10:20 ","end":"12\/15\/2013 00:00 "},"event2":{"title":"Employment period","start":"12\/14\/2011 10:20 ","end":"12\/18\/2013 00:00 "}}') myArray = [] for(var e in myData){ var dataCopy = myData[e] for(key in dataCopy){ if(key == "start" || key == "end"){ dataCopy[key] = new Date(dataCopy[key]) } } myArray.push(dataCopy) } document.write(JSON.stringify(myArray)); </script> </body> </html>
[ { "code": null, "e": 1129, "s": 1062, "text": "To parse JSON object in JavaScript, implement the following code −" }, { "code": null, "e": 1800, "s": 1129, "text": "<!DOCTYPE html>\n<html>\n <body>\n <script>\n myData = JSON.parse('{\"event1\":{\"title\":\"Employment period\",\"start\":\"12\\/29\\/2011 10:20 \",\"end\":\"12\\/15\\/2013 00:00 \"},\"event2\":{\"title\":\"Employment period\",\"start\":\"12\\/14\\/2011 10:20 \",\"end\":\"12\\/18\\/2013 00:00 \"}}')\n myArray = []\n for(var e in myData){\n var dataCopy = myData[e]\n for(key in dataCopy){\n if(key == \"start\" || key == \"end\"){\n dataCopy[key] = new Date(dataCopy[key])\n }\n }\n myArray.push(dataCopy)\n }\n document.write(JSON.stringify(myArray));\n </script>\n </body>\n</html>" } ]
MD2 Hash In Java - GeeksforGeeks
27 Sep, 2018 MD2 is a Message-Digest Algorithm. It is a cryptographic hash function developed by Ronald Rivest in 1989 and optimized for 8-bit computers. The MD2 algorithm is used in public key infrastructures as part of certificates generated with MD2 and RSA. Since 2014, this algorithm is no longer considered secure. To calculate a cryptographic hash value in Java, the MessageDigest class is used, under the package java.security. The MessageDigest class provides the following cryptographic hash functions to find the hash value of a text: MD2 MD5 SHA-1 SHA-224 SHA-256 SHA-384 SHA-512 These algorithms are initialized via the static method getInstance(). After selecting the algorithm, the message digest value is calculated and the result is returned as a byte array. The BigInteger class is used to convert the resultant byte array into its signum representation. This representation is then converted into hexadecimal format to get the expected message digest. Examples: Input : hello world Output : d9cce882ee690a5c1ce70beff3a78c77 Input : GeeksForGeeks Output : 787df774a3d25dca997b1f1c8bfee4af The below program shows the implementation of MD2 hash in Java. Output: HashCode Generated by MD2 for: GeeksForGeeks : 787df774a3d25dca997b1f1c8bfee4af hello world : d9cce882ee690a5c1ce70beff3a78c77 Application: Cryptography, Data Integrity
[ { "code": null, "e": 24119, "s": 24091, "text": "\n27 Sep, 2018" }, { "code": null, "e": 24446, "s": 24119, "text": "The MD2 is a Message-Digest Algorithm. It is a cryptographic hash function developed by Ronald Rivest in 1989. It is optimized for 8-bit computers. The MD2 algorithm is used in public key infrastructures as part of certificates generated with MD2 and RSA. From 2014, this algorithm is now not considered as a secure algorithm." }, { "code": null, "e": 24558, "s": 24446, "text": "To calculate cryptographic hashing value in Java, MessageDigest Class is used, under the package java.security." }, { "code": null, "e": 24665, "s": 24558, "text": "MessagDigest Class provides following cryptographic hash function to find hash value of a text as follows:" }, { "code": null, "e": 24669, "s": 24665, "text": "MD2" }, { "code": null, "e": 24673, "s": 24669, "text": "MD5" }, { "code": null, "e": 24679, "s": 24673, "text": "SHA-1" }, { "code": null, "e": 24687, "s": 24679, "text": "SHA-224" }, { "code": null, "e": 24695, "s": 24687, "text": "SHA-256" }, { "code": null, "e": 24703, "s": 24695, "text": "SHA-384" }, { "code": null, "e": 24711, "s": 24703, "text": "SHA-512" }, { "code": null, "e": 25091, "s": 24711, "text": "These algorithms are initialized in static method called getInstance(). After selecting the algorithm the message digest value is calculated and the results are returned as a byte array. BigInteger class is used, to convert the resultant byte array into its signum representation. This representation is then converted into a hexadecimal format to get the expected MessageDigest." }, { "code": null, "e": 25101, "s": 25091, "text": "Examples:" }, { "code": null, "e": 25162, "s": 25101, "text": "Input : hello worldOutput : d9cce882ee690a5c1ce70beff3a78c77" }, { "code": null, "e": 25225, "s": 25162, "text": "Input : GeeksForGeeksOutput : 787df774a3d25dca997b1f1c8bfee4af" }, { "code": null, "e": 25285, "s": 25225, "text": "Below program shows the implementation of MD2 hash in Java." 
}, { "code": "// Java program to calculate MD2 hash value import java.math.BigInteger;import java.security.MessageDigest;import java.security.NoSuchAlgorithmException; public class GFG { public static String encryptThisString(String input) { try { // getInstance() method is called with algorithm MD2 MessageDigest md = MessageDigest.getInstance(\"MD2\"); // digest() method is called // to calculate message digest of the input string // returned as array of byte byte[] messageDigest = md.digest(input.getBytes()); // Convert byte array into signum representation BigInteger no = new BigInteger(1, messageDigest); // Convert message digest into hex value String hashtext = no.toString(16); // Add preceding 0s to make it 32 bit while (hashtext.length() < 32) { hashtext = \"0\" + hashtext; } // return the HashText return hashtext; } // For specifying wrong message digest algorithms catch (NoSuchAlgorithmException e) { throw new RuntimeException(e); } } // Driver code public static void main(String args[]) throws NoSuchAlgorithmException { System.out.println(\"HashCode Generated by MD2 for: \"); String s1 = \"GeeksForGeeks\"; System.out.println(\"\\n\" + s1 + \" : \" + encryptThisString(s1)); String s2 = \"hello world\"; System.out.println(\"\\n\" + s2 + \" : \" + encryptThisString(s2)); }}", "e": 26893, "s": 25285, "text": null }, { "code": null, "e": 26901, "s": 26893, "text": "Output:" }, { "code": null, "e": 27032, "s": 26901, "text": "HashCode Generated by MD2 for: \n\nGeeksForGeeks : 787df774a3d25dca997b1f1c8bfee4af\n\nhello world : d9cce882ee690a5c1ce70beff3a78c77\n" }, { "code": null, "e": 27045, "s": 27032, "text": "Application:" }, { "code": null, "e": 27058, "s": 27045, "text": "Cryptography" }, { "code": null, "e": 27073, "s": 27058, "text": "Data Integrity" }, { "code": null, "e": 27078, "s": 27073, "text": "Hash" }, { "code": null, "e": 27096, "s": 27078, "text": "Computer Networks" }, { "code": null, "e": 27101, "s": 27096, "text": "Java" }, { "code": null, "e": 27115, "s": 27101, "text": "Java Programs" }, { "code": null, "e": 27120, "s": 27115, "text": "Hash" }, { "code": null, "e": 27125, "s": 27120, "text": "Java" }, { "code": null, "e": 27143, "s": 27125, "text": "Computer Networks" }, { "code": null, "e": 27241, "s": 27143, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 27250, "s": 27241, "text": "Comments" }, { "code": null, "e": 27263, "s": 27250, "text": "Old Comments" }, { "code": null, "e": 27298, "s": 27263, "text": "Advanced Encryption Standard (AES)" }, { "code": null, "e": 27349, "s": 27298, "text": "Active and Passive attacks in Information Security" }, { "code": null, "e": 27376, "s": 27349, "text": "Cryptography and its Types" }, { "code": null, "e": 27422, "s": 27376, "text": "Multiple Access Protocols in Computer Network" }, { "code": null, "e": 27463, "s": 27422, "text": "Architecture of Internet of Things (IoT)" }, { "code": null, "e": 27478, "s": 27463, "text": "Arrays in Java" }, { "code": null, "e": 27522, "s": 27478, "text": "Split() String method in Java with examples" }, { "code": null, "e": 27544, "s": 27522, "text": "For-each loop in Java" }, { "code": null, "e": 27569, "s": 27544, "text": "Reverse a string in Java" } ]
Different Servers in Node.js - GeeksforGeeks
04 Jan, 2022 Node.js is an open-source and cross-platform runtime environment for executing JavaScript code outside a browser. Keep in mind that Node.js is neither a framework nor a programming language, although it is often mistaken for one. We often use Node.js for building back-end services such as APIs for web or mobile apps. There are many ways to create a server, and Node.js even ships with its own built-in server module, 'http'. Some of them are mentioned below: 1. Creating Server using 'http' Module: Import http module: Import the http module and store the returned HTTP instance into a variable. Syntax: var http = require("http"); Creating and Binding Server: Create a server instance using the createServer() method and bind it to some port using the listen() method. Syntax: const server = http.createServer().listen(port) Parameter: This method (listen()) accepts a single parameter as mentioned above and described below: port <Number>: Ports are in the range 1024 to 65535, containing both registered and dynamic ports. The below example illustrates the use of the http module in Node.js. Example: Filename: index.js javascript // Node.js program to create // http server // Using require to access http module const http = require("http"); // Port numberconst PORT = process.env.PORT || 2020; // Creating serverconst server = http.createServer( // Server listening on port 2020 function (req, res) { res.write('Hello geeksforgeeks!'); // Write a response to the client res.end(); }).listen(PORT, error => { // Prints in console console.log(`Server listening on port ${PORT}`)}); Run index.js file using the following command: node index.js Output: Server listening on port 2020 Now type http://127.0.0.1:2020/ OR http://localhost:2020 in a web browser to see the output. 2. Creating Server using 'https' Module: Note: In order to create an HTTPS server, we need an SSL key and certificate, and the built-in https Node.js module. Import https module: Import the https module and store the returned HTTPS instance into a variable. Syntax: var https = require("https"); Creating and Binding Server: Create a server instance using the createServer() method and bind it to some port using the listen() method. Syntax: const server = https.createServer(options, onResponseCallback).listen(port) Parameter: This method accepts three parameters as mentioned above and described below: options <key, cert>: It includes the key and certificate passed. onResponseCallback <Callback>: It is a callback function which is invoked for each incoming request. port <Number>: Ports are in the range 1024 to 65535, containing both registered and dynamic ports. The below example illustrates the use of the https module in Node.js. Example: Filename: index.js javascript // Node.js program to create // https server // Using require to access https module var https = require('https');var fs = require('fs');const port= 8000; var options = { // Note: We require SSL and certificate // to create https servers key: fs.readFileSync('key.pem'), cert: fs.readFileSync('cert.pem')}; https.createServer(options, function (req, res) { // Returns the status res.writeHead(200); res.end("Hello GeeksforGeeks");}).listen(port); Run index.js file using the following command: node index.js Output: Hello GeeksforGeeks Now that the server is set up and started, we can fetch the page with: curl -k https://localhost:8000 Now type https://127.0.0.1:8000/ OR https://localhost:8000 in a web browser to see the output.
Example: Filename: index.js javascript // Node.js program to get the response // from https server // Using require to access http module const https = require('https'); // https://www.google.com/https.get('https://www.geeksforgeeks.org/', (res) => { // Printing status code console.log('statusCode:', res.statusCode); // Printing headers console.log('headers:', res.headers); }).on('error', (e) => { console.log(e);}); Run index.js file using the following command: node index.js Output: >> statusCode: 200 >> headers: { server: ‘Apache’, ......... ‘server-timing’: ‘cdn-cache; desc=HIT, edge; dur=1’} 3. Creating Server using ‘Express’ Module: In order to use the express module, we need to install the NPM (Node Package Manager) and the following modules (on cmd). // Creates package.json file >> npm init // Installs express module >> npm install express --save OR >> npm i express -s Import express module: Import express module and store returned instance into a variable. Syntax: var express = require("express"); Creating Server: The above syntax calls the “express()” function and creates a new express application which gets stored inside the app variable. Syntax: const app = express(); // OR by Importing and creating express application var express = require("express")(); Sending and listening to the response: It communicates the request and response with the client and the server. It requires PORT <number> and IP <number> to communicate. app.listen(PORT, IP, Callback); Parameter: This method accepts three parameters as mentioned above and described below. PORT <Number>: Ports are the endpoints of communication which helps to communicate with the client and the server. IP <Number>: IPs represent IPv4 or IPv6 address of a host or a device. Callback <Function>: It accepts a function. The below example illustrates the Express.js module in Node.js. Example: Filename: index.js javascript // Node.js program to create server // with help of Express module // Importing express const express = require('express'); // Creating new express app const app = express(); // PORT configurationconst PORT = process.env.PORT || 2020; // IP configurationconst IP = process.env.IP || 2021; // Create a route for the appapp.get('/', (req, res) => { res.send('Hello Vikas_g from geeksforgeeks!');}); // Create a route for the appapp.get('*', (req, res) => { res.send('OOPS!! The link is broken...');}); // Server listening to requestsapp.listen(PORT, IP, () => { console.log(`The Server is running at: http://localhost:${PORT}/`);}); Run index.js file using the following command: node index.js Output: The Server is running at: http://localhost:2020 Now type http://127.0.0.1:2020/ OR http://localhost:2020/ in a web browser to see the output. 4. Creating Server using ‘Hapi’ Module: In order to use the hapi module, we need to install the NPM (Node Package Manager) and the following modules (on cmd). // creates package.json file >> npm init // Installs hapi module >> npm install @hapi/hapi --save Import hapi module: Import hapi module and store returned instance into a variable. Syntax: var Hapi = require("@hapi/hapi"); Creating Server: The above syntax imports the “express()” module and now it creates a server. It communicates the request and response with the client and the server. It requires PORT <number> and host <string> to communicate. Syntax: const server = Hapi.server({port: 2020, host: 'localhost'}); Parameter: This method accepts three parameters as mentioned above and described below. 
PORT <Number>: Ports are the endpoints of communication which help to communicate with the client and the server. IP <Number>: IPs represent the IPv4 or IPv6 address of a host or a device. Callback <Function>: It accepts a function. The below example illustrates the Express.js module in Node.js. Example: Filename: index.js javascript // Node.js program to create server // with help of Express module // Importing express const express = require('express'); // Creating new express app const app = express(); // PORT configurationconst PORT = process.env.PORT || 2020; // IP configurationconst IP = process.env.IP || 2021; // Create a route for the appapp.get('/', (req, res) => { res.send('Hello Vikas_g from geeksforgeeks!');}); // Create a route for the appapp.get('*', (req, res) => { res.send('OOPS!! The link is broken...');}); // Server listening to requestsapp.listen(PORT, IP, () => { console.log(`The Server is running at: http://localhost:${PORT}/`);}); Run index.js file using the following command: node index.js Output: The Server is running at: http://localhost:2020 Now type http://127.0.0.1:2020/ OR http://localhost:2020/ in a web browser to see the output. 4. Creating Server using 'Hapi' Module: In order to use the hapi module, we need to install the NPM (Node Package Manager) and the following modules (on cmd). // Creates package.json file >> npm init // Installs hapi module >> npm install @hapi/hapi --save Import hapi module: Import the hapi module and store the returned instance into a variable. Syntax: var Hapi = require("@hapi/hapi"); Creating Server: The above syntax imports the hapi module; Hapi.server() then creates a server. It communicates the request and response with the client and the server. It requires PORT <number> and host <string> to communicate. Syntax: const server = Hapi.server({port: 2020, host: 'localhost'}); Parameter: This method accepts two parameters as mentioned above and described below. PORT <Number>: Ports are the endpoints of communication which help to communicate with the client and the server. HOST <String>: It is the name of the host. The below example illustrates the Hapi module in Node.js. Example: Filename: index.js javascript // Node.js program to create server// using hapi module // Importing hapi moduleconst Hapi = require('@hapi/hapi'); // Creating Server const server = Hapi.server({ port: 2020, host: 'localhost' }); // Creating route server.route({ method: 'GET', path: '/', handler: (request, hnd) => { return 'Hello GeeksForGeeks!'; } }); const start = async () => { await server.start(); console.log('Server running at', server.info.uri);}; process.on('unhandledRejection', (err) => { console.log(err); process.exit(1);}); start(); Run index.js file using the following command: node index.js Output: Server running at: http://localhost:2020 Now type http://localhost:2020/ in a web browser to see the output. 5. Creating Server using 'Koa' Module: In order to use the Koa module, we need to install the NPM (Node Package Manager) and the following modules (on cmd). // Creates package.json file >> npm init // Installs koa module >> npm install koa --save OR >> npm i koa -s Import koa module: Import the koa module and store the returned instance into a variable. Syntax: // Importing koa module var koa = require("koa"); Creating Server: The above syntax imports the koa module and creates a new koa application which gets stored inside the app variable. Syntax: // Creating koa application const app = new koa(); Sending and listening to the response: It communicates the request and response with the client and the server. It requires PORT <number> and IP <number> to communicate. app.listen(PORT, IP, Callback); Parameter: This method accepts three parameters as mentioned above and described below. PORT <Number>: Ports are the endpoints of communication which help to communicate with the client and the server. IP <Number>: IPs represent the IPv4 or IPv6 address of a host or a device. Callback <Function>: It accepts a function. The below example illustrates the Koa module in Node.js. Example: Filename: index.js javascript // Node.js program to create server // with help of Koa module // Importing koa const koa = require('koa'); // Creating new koa appconst app = new koa(); // PORT configurationconst PORT = process.env.PORT || 2020; // IP configurationconst IP = process.env.IP || 2021; // Middleware that sets the response body (modern Koa syntax)app.use(ctx => { ctx.body = "Hello GeeksForGeeks!";}); // Server listening to requestsapp.listen(PORT, IP, ()=>{ console.log("Server started at port", PORT);}); Run index.js file using the following command: node index.js Output: The Server is running at port 2020 Now type http://127.0.0.1:2020/ OR http://localhost:2020/ in a web browser to see the output.
[ { "code": null, "e": 25002, "s": 24974, "text": "\n04 Jan, 2022" }, { "code": null, "e": 25386, "s": 25002, "text": "Node.js is an open-source and cross-platform runtime environment for executing JavaScript code outside a browser. You need to remember that NodeJS is not a framework and it’s not a programming language. Most of the people are confused and understand it’s a framework or a programming language. We often use Node.js for building back-end services like APIs like Web App or Mobile App." }, { "code": null, "e": 25511, "s": 25386, "text": "There are many ways to create a server and even node.js has its own inbuilt server ‘http’. Some of them are mentioned below:" }, { "code": null, "e": 25551, "s": 25511, "text": "1. Creating Server using ‘http‘ Module:" }, { "code": null, "e": 25640, "s": 25551, "text": "Import http module: Import http module and store returned HTTP instance into a variable." }, { "code": null, "e": 25648, "s": 25640, "text": "Syntax:" }, { "code": null, "e": 25676, "s": 25648, "text": "var http = require(\"http\");" }, { "code": null, "e": 25806, "s": 25676, "text": "Creating and Binding Server: Create a server instance using createServer() method and bind it to some port using listen() method." }, { "code": null, "e": 25814, "s": 25806, "text": "Syntax:" }, { "code": null, "e": 25862, "s": 25814, "text": "const server = http.createServer().listen(port)" }, { "code": null, "e": 25963, "s": 25862, "text": "Parameter: This method (listen()) accepts a single parameter as mentioned above and described below:" }, { "code": null, "e": 26061, "s": 25963, "text": "port <Number>: Ports are in the range 1024 to 65535 containing both registered and Dynamic ports." }, { "code": null, "e": 26121, "s": 26061, "text": "Below example illustrate the use of http module in Node.js." }, { "code": null, "e": 26149, "s": 26121, "text": "Example: Filename: index.js" }, { "code": null, "e": 26160, "s": 26149, "text": "javascript" }, { "code": "// Node.js program to create // http server // Using require to access http module const http = require(\"http\"); // Port numberconst PORT = process.env.PORT || 2020; // Creating serverconst server = http.createServer( // Server listening on port 2020 function (req, res) { res.write('Hello geeksforgeeks!'); // Write a response to the client res.end(); }).listen(PORT, error => { // Prints in console console.log(`Server listening on port ${PORT}`)});", "e": 26641, "s": 26160, "text": null }, { "code": null, "e": 26684, "s": 26641, "text": "Run index.js file using following command:" }, { "code": null, "e": 26699, "s": 26684, "text": "node index.js\n" }, { "code": null, "e": 26707, "s": 26699, "text": "Output:" }, { "code": null, "e": 26737, "s": 26707, "text": "Server listening on port 2020" }, { "code": null, "e": 26830, "s": 26737, "text": "Now type http://127.0.0.1:2020/ OR http://localhost:2020 in a web browser to see the output." }, { "code": null, "e": 26871, "s": 26830, "text": "2. Creating Server using ‘https‘ Module:" }, { "code": null, "e": 26981, "s": 26871, "text": "Note: In order to create an HTTPS server, we need SSL key and certificate, and built-in https Node.js module." }, { "code": null, "e": 27072, "s": 26981, "text": "Import https module: Import https module and store returned HTTP instance into a variable." 
}, { "code": null, "e": 27080, "s": 27072, "text": "Syntax:" }, { "code": null, "e": 27110, "s": 27080, "text": "var https = require(\"https\");" }, { "code": null, "e": 27240, "s": 27110, "text": "Creating and Binding Server: Create a server instance using createServer() method and bind it to some port using listen() method." }, { "code": null, "e": 27248, "s": 27240, "text": "Syntax:" }, { "code": null, "e": 27338, "s": 27248, "text": "const server = https.createServer(options, \n onResponseCallback).listen(port)\n" }, { "code": null, "e": 27426, "s": 27338, "text": "Parameter: This method accepts three parameters as mentioned above and described below:" }, { "code": null, "e": 27488, "s": 27426, "text": "options <key, certi>: It includes key and certificate passed." }, { "code": null, "e": 27590, "s": 27488, "text": "onResponseCallback <Callback>: It is a callback function which is called in response of createServer." }, { "code": null, "e": 27688, "s": 27590, "text": "port <Number>: Ports are in the range 1024 to 65535 containing both registered and Dynamic ports." }, { "code": null, "e": 27749, "s": 27688, "text": "Below example illustrate the use of https module in Node.js." }, { "code": null, "e": 27777, "s": 27749, "text": "Example: Filename: index.js" }, { "code": null, "e": 27788, "s": 27777, "text": "javascript" }, { "code": "// Node.js program to create // https server // Using require to access https module var https = require('https');var fs = require('fs');const port= 8000; var options = { // Note: We require SSL and certificate // to create https servers key: fs.readFileSync('key.pem'), cert: fs.readFileSync('cert.pem')}; https.createServer(options, function (req, res) { // Returns the status res.writeHead(200); res.end(\"Hello GeeksforGeeks\");}).listen(port);", "e": 28252, "s": 27788, "text": null }, { "code": null, "e": 28299, "s": 28252, "text": "Run index.js file using the following command:" }, { "code": null, "e": 28313, "s": 28299, "text": "node index.js" }, { "code": null, "e": 28321, "s": 28313, "text": "Output:" }, { "code": null, "e": 28341, "s": 28321, "text": "Hello GeeksforGeeks" }, { "code": null, "e": 28403, "s": 28341, "text": "Now the server is set up and started, we can get the file by:" }, { "code": null, "e": 28434, "s": 28403, "text": "curl -k https://localhost:8000" }, { "code": null, "e": 28529, "s": 28434, "text": "Now type https://127.0.0.1:8000/ OR https://localhost:8000 in a web browser to see the output." }, { "code": null, "e": 28557, "s": 28529, "text": "Example: Filename: index.js" }, { "code": null, "e": 28568, "s": 28557, "text": "javascript" }, { "code": "// Node.js program to get the response // from https server // Using require to access http module const https = require('https'); // https://www.google.com/https.get('https://www.geeksforgeeks.org/', (res) => { // Printing status code console.log('statusCode:', res.statusCode); // Printing headers console.log('headers:', res.headers); }).on('error', (e) => { console.log(e);});", "e": 28963, "s": 28568, "text": null }, { "code": null, "e": 29010, "s": 28963, "text": "Run index.js file using the following command:" }, { "code": null, "e": 29025, "s": 29010, "text": "node index.js\n" }, { "code": null, "e": 29033, "s": 29025, "text": "Output:" }, { "code": null, "e": 29052, "s": 29033, "text": ">> statusCode: 200" }, { "code": null, "e": 29147, "s": 29052, "text": ">> headers: { server: ‘Apache’, ......... 
‘server-timing’: ‘cdn-cache; desc=HIT, edge; dur=1’}" }, { "code": null, "e": 29312, "s": 29147, "text": "3. Creating Server using ‘Express’ Module: In order to use the express module, we need to install the NPM (Node Package Manager) and the following modules (on cmd)." }, { "code": null, "e": 29438, "s": 29312, "text": "// Creates package.json file\n>> npm init\n\n// Installs express module\n>> npm install express --save OR\n>> npm i express -s \n" }, { "code": null, "e": 29528, "s": 29438, "text": "Import express module: Import express module and store returned instance into a variable." }, { "code": null, "e": 29536, "s": 29528, "text": "Syntax:" }, { "code": null, "e": 29571, "s": 29536, "text": "var express = require(\"express\");\n" }, { "code": null, "e": 29717, "s": 29571, "text": "Creating Server: The above syntax calls the “express()” function and creates a new express application which gets stored inside the app variable." }, { "code": null, "e": 29725, "s": 29717, "text": "Syntax:" }, { "code": null, "e": 29841, "s": 29725, "text": "const app = express(); \n// OR by Importing and creating express application\nvar express = require(\"express\")(); \n" }, { "code": null, "e": 30012, "s": 29841, "text": "Sending and listening to the response: It communicates the request and response with the client and the server. It requires PORT <number> and IP <number> to communicate. " }, { "code": null, "e": 30045, "s": 30012, "text": "app.listen(PORT, IP, Callback);\n" }, { "code": null, "e": 30133, "s": 30045, "text": "Parameter: This method accepts three parameters as mentioned above and described below." }, { "code": null, "e": 30248, "s": 30133, "text": "PORT <Number>: Ports are the endpoints of communication which helps to communicate with the client and the server." }, { "code": null, "e": 30319, "s": 30248, "text": "IP <Number>: IPs represent IPv4 or IPv6 address of a host or a device." }, { "code": null, "e": 30363, "s": 30319, "text": "Callback <Function>: It accepts a function." }, { "code": null, "e": 30427, "s": 30363, "text": "The below example illustrates the Express.js module in Node.js." }, { "code": null, "e": 30455, "s": 30427, "text": "Example: Filename: index.js" }, { "code": null, "e": 30466, "s": 30455, "text": "javascript" }, { "code": "// Node.js program to create server // with help of Express module // Importing express const express = require('express'); // Creating new express app const app = express(); // PORT configurationconst PORT = process.env.PORT || 2020; // IP configurationconst IP = process.env.IP || 2021; // Create a route for the appapp.get('/', (req, res) => { res.send('Hello Vikas_g from geeksforgeeks!');}); // Create a route for the appapp.get('*', (req, res) => { res.send('OOPS!! The link is broken...');}); // Server listening to requestsapp.listen(PORT, IP, () => { console.log(`The Server is running at: http://localhost:${PORT}/`);});", "e": 31117, "s": 30466, "text": null }, { "code": null, "e": 31164, "s": 31117, "text": "Run index.js file using the following command:" }, { "code": null, "e": 31179, "s": 31164, "text": "node index.js\n" }, { "code": null, "e": 31187, "s": 31179, "text": "Output:" }, { "code": null, "e": 31235, "s": 31187, "text": "The Server is running at: http://localhost:2020" }, { "code": null, "e": 31329, "s": 31235, "text": "Now type http://127.0.0.1:2020/ OR http://localhost:2020/ in a web browser to see the output." }, { "code": null, "e": 31488, "s": 31329, "text": "4. 
Creating Server using ‘Hapi’ Module: In order to use the hapi module, we need to install the NPM (Node Package Manager) and the following modules (on cmd)." }, { "code": null, "e": 31590, "s": 31488, "text": "// creates package.json file\n>> npm init \n\n// Installs hapi module\n>> npm install @hapi/hapi --save \n" }, { "code": null, "e": 31674, "s": 31590, "text": "Import hapi module: Import hapi module and store returned instance into a variable." }, { "code": null, "e": 31682, "s": 31674, "text": "Syntax:" }, { "code": null, "e": 31716, "s": 31682, "text": "var Hapi = require(\"@hapi/hapi\");" }, { "code": null, "e": 31943, "s": 31716, "text": "Creating Server: The above syntax imports the “express()” module and now it creates a server. It communicates the request and response with the client and the server. It requires PORT <number> and host <string> to communicate." }, { "code": null, "e": 31951, "s": 31943, "text": "Syntax:" }, { "code": null, "e": 32013, "s": 31951, "text": "const server = Hapi.server({port: 2020, host: 'localhost'});\n" }, { "code": null, "e": 32101, "s": 32013, "text": "Parameter: This method accepts three parameters as mentioned above and described below." }, { "code": null, "e": 32216, "s": 32101, "text": "PORT <Number>: Ports are the endpoints of communication which helps to communicate with the client and the server." }, { "code": null, "e": 32259, "s": 32216, "text": "HOST <String>: It it the name of the host." }, { "code": null, "e": 32317, "s": 32259, "text": "The below example illustrates the Hapi module in Node.js." }, { "code": null, "e": 32345, "s": 32317, "text": "Example: Filename: index.js" }, { "code": null, "e": 32356, "s": 32345, "text": "javascript" }, { "code": "// Node.js program to create server// using hapi module // Importing hapi moduleconst Hapi = require('@hapi/hapi'); // Creating Server const server = Hapi.server({ port: 2020, host: 'localhost' }); // Creating route server.route({ method: 'GET', path: '/', handler: (request, hnd) => { return 'Hello GeeksForGeeks!'; } }); const start = async () => { await server.start(); console.log('Server running at', server.info.uri);}; process.on('unhandledRejection', (err) => { console.log(err); process.exit(1);}); start();", "e": 32962, "s": 32356, "text": null }, { "code": null, "e": 33009, "s": 32962, "text": "Run index.js file using the following command:" }, { "code": null, "e": 33023, "s": 33009, "text": "node index.js" }, { "code": null, "e": 33031, "s": 33023, "text": "Output:" }, { "code": null, "e": 33072, "s": 33031, "text": "Server running at: http://localhost:2020" }, { "code": null, "e": 33140, "s": 33072, "text": "Now type http://localhost:2020/ in a web browser to see the output." }, { "code": null, "e": 33297, "s": 33140, "text": "5. Creating Server using ‘Koa’ Module: In order to use the Koa module, we need to install the NPM (Node Package Manager) and the following modules (on cmd)." }, { "code": null, "e": 33415, "s": 33297, "text": "// Creates package.json file\n>> npm init \n\n// Installs express module\n>> npm install koa --save OR\n>> npm i koa -s \n" }, { "code": null, "e": 33501, "s": 33415, "text": "Import express module: Import koa module and store returned instance into a variable." 
}, { "code": null, "e": 33509, "s": 33501, "text": "Syntax:" }, { "code": null, "e": 33561, "s": 33509, "text": "// Importing koa module\nvar koa = require(\"koa\"); \n" }, { "code": null, "e": 33695, "s": 33561, "text": "Creating Server: The above syntax imports the koa module and creates a new koa application which gets stored inside the app variable." }, { "code": null, "e": 33703, "s": 33695, "text": "Syntax:" }, { "code": null, "e": 33757, "s": 33703, "text": "// Creating koa application\nconst app = new koa(); \n" }, { "code": null, "e": 33927, "s": 33757, "text": "Sending and listening to the response: It communicates the request and response with the client and the server. It requires PORT <number> and IP <number> to communicate." }, { "code": null, "e": 33959, "s": 33927, "text": "app.listen(PORT, IP, Callback);" }, { "code": null, "e": 34047, "s": 33959, "text": "Parameter: This method accepts three parameters as mentioned above and described below." }, { "code": null, "e": 34162, "s": 34047, "text": "PORT <Number>: Ports are the endpoints of communication which helps to communicate with the client and the server." }, { "code": null, "e": 34233, "s": 34162, "text": "IP <Number>: IPs represent IPv4 or IPv6 address of a host or a device." }, { "code": null, "e": 34277, "s": 34233, "text": "Callback <Function>: It accepts a function." }, { "code": null, "e": 34334, "s": 34277, "text": "The below example illustrates the Koa module in Node.js." }, { "code": null, "e": 34362, "s": 34334, "text": "Example: Filename: index.js" }, { "code": null, "e": 34373, "s": 34362, "text": "javascript" }, { "code": "// Node.js program to create server // with help of Koa module // Importing koa const koa = require('koa'); // Creating new koa appconst app = new koa(); // PORT configurationconst PORT = process.env.PORT || 2020; // IP configurationconst IP = process.env.IP || 2021; app.use(function *(){ this.body = \"Hello GeeksForGeeks!\";}); // Server listening to requestsapp.listen(PORT, IP, ()=>{ console.log(\"Server started at port\", PORT);});", "e": 34818, "s": 34373, "text": null }, { "code": null, "e": 34865, "s": 34818, "text": "Run index.js file using the following command:" }, { "code": null, "e": 34880, "s": 34865, "text": "node index.js\n" }, { "code": null, "e": 34888, "s": 34880, "text": "Output:" }, { "code": null, "e": 34923, "s": 34888, "text": "The Server is running at port 2020" }, { "code": null, "e": 35017, "s": 34923, "text": "Now type http://127.0.0.1:2020/ OR http://localhost:2020/ in a web browser to see the output." }, { "code": null, "e": 35032, "s": 35017, "text": "sagartomar9927" }, { "code": null, "e": 35045, "s": 35032, "text": "Node.js-Misc" }, { "code": null, "e": 35053, "s": 35045, "text": "Node.js" }, { "code": null, "e": 35070, "s": 35053, "text": "Web Technologies" }, { "code": null, "e": 35168, "s": 35070, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 35205, "s": 35168, "text": "Express.js express.Router() Function" }, { "code": null, "e": 35237, "s": 35205, "text": "JWT Authentication with Node.js" }, { "code": null, "e": 35268, "s": 35237, "text": "Express.js req.params Property" }, { "code": null, "e": 35295, "s": 35268, "text": "Mongoose Populate() Method" }, { "code": null, "e": 35342, "s": 35295, "text": "Difference between npm i and npm ci in Node.js" }, { "code": null, "e": 35384, "s": 35342, "text": "Roadmap to Become a Web Developer in 2022" }, { "code": null, "e": 35427, "s": 35384, "text": "How to fetch data from an API in ReactJS ?" }, { "code": null, "e": 35477, "s": 35427, "text": "How to insert spaces/tabs in text using HTML/CSS?" }, { "code": null, "e": 35539, "s": 35477, "text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills" } ]
Removing leading zeros from a String using apache commons library in Java
The stripStart() method of the org.apache.commons.lang.StringUtils class accepts two strings and removes the set of characters represented by the second string from the start of the first string. To remove leading zeros from a string using the Apache Commons library − Add the following dependency to your pom.xml file <dependency> <groupId>org.apache.commons</groupId> <artifactId>commons-lang3</artifactId> <version>3.9</version> </dependency> Get the string. Pass the obtained string as the first parameter and a string holding 0 as the second parameter to the stripStart() method of the StringUtils class. The following Java program reads a string value from the user and removes the leading zeroes from it using the stripStart() method of the StringUtils class. import java.util.Scanner; import org.apache.commons.lang3.StringUtils; public class LeadingZeroesCommons { public static void main(String args[]) { Scanner sc = new Scanner(System.in); System.out.println("Enter a String: "); String str = sc.nextLine(); String result = StringUtils.stripStart(str, "0"); System.out.println(result); } } Enter a String: 000Hello how are you Hello how are you
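If pulling in Commons Lang only for this is undesirable, the same result can be obtained with the JDK alone. The class below is a minimal sketch (not part of the original example) using String.replaceFirst() with a regular expression; the class name is illustrative.
import java.util.Scanner;
public class LeadingZeroesRegex {
   public static void main(String args[]) {
      Scanner sc = new Scanner(System.in);
      System.out.println("Enter a String: ");
      String str = sc.nextLine();
      // "^0+" matches one or more zeros at the start of the string only
      String result = str.replaceFirst("^0+", "");
      // An all-zero input such as "0000" becomes an empty string here,
      // exactly as it does with StringUtils.stripStart(str, "0")
      System.out.println(result);
   }
}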
[ { "code": null, "e": 1259, "s": 1062, "text": "The stripStart() method of the org.apache.commons.lang.StringUtils class accepts two strings and removes the set of characters represented by the second string from the string of the first string." }, { "code": null, "e": 1329, "s": 1259, "text": "To remove leading zeros from a string using apache communal library −" }, { "code": null, "e": 1379, "s": 1329, "text": "Add the following dependency to your pom.xml file" }, { "code": null, "e": 1429, "s": 1379, "text": "Add the following dependency to your pom.xml file" }, { "code": null, "e": 1565, "s": 1429, "text": "<dependency>\n <groupId>org.apache.commons</groupId>\n <artifactId>commons-lang3</artifactId>\n <version>3.9</version>\n</dependency>" }, { "code": null, "e": 1581, "s": 1565, "text": "Get the string." }, { "code": null, "e": 1597, "s": 1581, "text": "Get the string." }, { "code": null, "e": 1737, "s": 1597, "text": "Pass the obtained string as first parameter and a string holding 0 as second parameter to the stripStart() method of the StringUtils class." }, { "code": null, "e": 1877, "s": 1737, "text": "Pass the obtained string as first parameter and a string holding 0 as second parameter to the stripStart() method of the StringUtils class." }, { "code": null, "e": 2050, "s": 1877, "text": "The following Java program reads an integer value from the user into a String and removes the leading zeroes from it using the stripStart() method of the StringUtils class." }, { "code": null, "e": 2421, "s": 2050, "text": "import java.util.Scanner;\nimport org.apache.commons.lang3.StringUtils;\npublic class LeadingZeroesCommons {\n public static void main(String args[]) {\n Scanner sc = new Scanner(System.in);\n System.out.println(\"Enter a String: \");\n String str = sc.nextLine();\n String result = StringUtils.stripStart(str, \"0\");\n System.out.println(result);\n }\n}" }, { "code": null, "e": 2476, "s": 2421, "text": "Enter a String:\n000Hello how are you\nHello how are you" } ]
Python program to convert seconds into hours, minutes and seconds
In this article, we will learn about the solution to the problem statement given below. Problem statement: We are given a duration in seconds, and we need to convert it into hours, minutes and seconds. There are three approaches, as discussed below − def convert(seconds): seconds = seconds % (24 * 3600) hour = seconds // 3600 seconds %= 3600 minutes = seconds // 60 seconds %= 60 return "%02d:%02d:%02d" % (hour, minutes, seconds) #formatting n = 23451 print(convert(n)) 06:30:51 #using date-time module import datetime def convert(n): return str(datetime.timedelta(seconds = n)) n = 23451 print(convert(n)) 6:30:51 #using time module import time def convert(seconds): return time.strftime("%H:%M:%S", time.gmtime(seconds)) n = 23451 print(convert(n)) 06:30:51 In this article, we have learned how we can convert seconds into hours, minutes and seconds.
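Beyond the three approaches above, a fourth sketch (not from the original article) uses Python's built-in divmod(), which returns quotient and remainder in one step; the variable names simply follow the examples above.
#using divmod
def convert(seconds):
   minutes, seconds = divmod(seconds, 60)
   hours, minutes = divmod(minutes, 60)
   # note: unlike the first approach, this does not wrap around at 24 hours
   return "%02d:%02d:%02d" % (hours, minutes, seconds)
n = 23451
print(convert(n))
06:30:51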
[ { "code": null, "e": 1150, "s": 1062, "text": "In this article, we will learn about the solution to the problem statement given below." }, { "code": null, "e": 1248, "s": 1150, "text": "Problem statement: We are given time, we need to convert seconds into hours & minutes to seconds." }, { "code": null, "e": 1295, "s": 1248, "text": "There are three approaches as discussed below−" }, { "code": null, "e": 1306, "s": 1295, "text": " Live Demo" }, { "code": null, "e": 1546, "s": 1306, "text": "def convert(seconds):\n seconds = seconds % (24 * 3600)\n hour = seconds // 3600\n seconds %= 3600\n minutes = seconds // 60\n seconds %= 60\n return \"%02d:%02d:%02d\" % (hour, minutes, seconds) #formatting\nn = 23451\nprint(convert(n))" }, { "code": null, "e": 1555, "s": 1546, "text": "06:30:51" }, { "code": null, "e": 1566, "s": 1555, "text": " Live Demo" }, { "code": null, "e": 1697, "s": 1566, "text": "#using date-time module\nimport datetime\ndef convert(n):\n return str(datetime.timedelta(seconds = n))\nn = 23451\nprint(convert(n))" }, { "code": null, "e": 1705, "s": 1697, "text": "6:30:51" }, { "code": null, "e": 1716, "s": 1705, "text": " Live Demo" }, { "code": null, "e": 1849, "s": 1716, "text": "#using time module\nimport time\ndef convert(seconds):\n return time.strftime(\"%H:%M:%S\", time.gmtime(n))\nn = 23451\nprint(convert(n))" }, { "code": null, "e": 1858, "s": 1849, "text": "06:30:51" }, { "code": null, "e": 1957, "s": 1858, "text": "In this article, we have learned about how we can convert seconds into hours, minutes and seconds." } ]
Bootstrap nav-pills class
To turn the tabs into pills, use the class .nav-pills. You can try to run the following code to implement Bootstrap pills − <!DOCTYPE html> <html> <head> <title>Bootstrap Example</title> <link href = "/bootstrap/css/bootstrap.min.css" rel = "stylesheet"> <script src = "/scripts/jquery.min.js"></script> <script src = "/bootstrap/js/bootstrap.min.js"></script> </head> <body> <h2>Subjects</h2> <ul class = "nav nav-pills"> <li class = "active"><a href = "#">Java</a></li> <li><a href = "#">WordPress</a></li> <li><a href = "#">JavaScript</a></li> <li><a href = "#">AngularJS</a></li> </ul> </body> </html>
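If vertical pills are needed instead, Bootstrap 3 (which the markup above appears to target) also offers the .nav-stacked class alongside .nav-pills. The fragment below is a sketch reusing the same list, not part of the original example; the surrounding page (the CSS and JS includes) stays the same.
<ul class = "nav nav-pills nav-stacked">
   <li class = "active"><a href = "#">Java</a></li>
   <li><a href = "#">WordPress</a></li>
   <li><a href = "#">JavaScript</a></li>
   <li><a href = "#">AngularJS</a></li>
</ul>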
[ { "code": null, "e": 1184, "s": 1062, "text": "To turn the tabs into pills, use the class .nav-pills. You can try to run the following code to implement Bootstrap Pills" }, { "code": null, "e": 1194, "s": 1184, "text": "Live Demo" }, { "code": null, "e": 1766, "s": 1194, "text": "<!DOCTYPE html>\n<html>\n <head>\n <title>Bootstrap Example</title>\n <link href = \"/bootstrap/css/bootstrap.min.css\" rel = \"stylesheet\">\n <script src = \"/scripts/jquery.min.js\"></script>\n <script src = \"/bootstrap/js/bootstrap.min.js\"></script>\n </head>\n <body>\n <h2>Subjects</h2>\n <ul class = \"nav nav-pills\">\n <li class = \"active\"><a href = \"#\">Java</a></li>\n <li><a href = \"#\">WordPress</a></li>\n <li><a href = \"#\">JavaScript</a></li>\n <li><a href = \"#\">AngularJS</a></li>\n </ul>\n </body>\n</html>" } ]
Basics of BASH for Beginners.. Learn about some of the most useful... | by Parul Pandey | Towards Data Science
Most computers today are not powered by electricity. They instead seem to be powered by the “pumping” motion of the mouse: William Shotts Have you ever noticed how a super nerdy hacker in movies, can easily infiltrate the most secure banks and rob them all off by merely typing some commands ferociously and staring at the green screen with a black background? How the person consistently gets access to all the passwords and takes control of the hidden cameras, anywhere, with just a few strokes on the keyboard. Well, I am not sure how the movie makers got this, but I guess it is their way of telling us that the command line is a powerful tool, albeit without all these hacking and ACCESS GRANTED!! absurdity. A lot of times, beginners are so used to working with the GUI based interface, that they tend to overlook the capabilities of the command line interface(CLI). A mouse comes in real handy when we need to copy about a hundred thousand files into a folder, but what if were to rename all those thousand files or were to separate them based on their extensions? Since GUIs are not programmable, it would take forever for us to rename or separate them With the command line, however, we could quickly achieve this in a few lines of code. The Unix shell is a pretty powerful tool for developers of all sorts. This article intends to give a quick introduction to the very basics starting from the UNIX operating system. Most operating systems today except the WINDOWS based, are built on top of UNIX. These include a lot of Linux distributions, macOS, iOS, Android, among others. A mere glance at the family tree of UNIX-based operating systems is sufficient to highlight the importance of UNIX, and this is the reason why it has been so widely adopted in the industry. In fact, the back end of many data and computing systems, including industry giants like Facebook and Google, heavily utilize UNIX. Shell is a command line interface for running programs on a computer. The user types a bunch of commands at the prompt, the shell runs the programs for the user and then displays the output. The commands can be either directly entered by the user or read from a file called the shell script or shell program. The UNIX system usually offers a variety of shell types. Some of the common ones are: We shall, however, limit ourselves to the Bash shell in this article. However, you are encouraged to read and try the other shells also especially the zsh shell since, in the latest MacOS called Catalina, zsh will replace the bash shell. So it’ll be a good idea to get to know it, now. The terminal is a program that is used to interact with a shell. It is just an interface to the Shell and to the other command line programs that run inside it. This is akin to how a web browser is an interface to websites. Here is how a typical terminal on Mac looks like: Mac and Linux have their respective versions of the terminal. Windows also has a built-in command shell, but that is based on the MS-DOS command line and not on UNIX. So let’s see how we can install a shell and a terminal program on Windows that works the same as the ones on Mac and Linux. Windows Subsystem for Linux (WSL) It is a new Linux compatibility system in Windows 10. The WSL lets developers run GNU/Linux environment — including most command-line tools, utilities, and applications — directly on Windows, unmodified, without the overhead of a virtual machine. You can read more about its installation and features here. 
Git Bash Git Bash is something that we will use for this article. Download the Git on your Windows computer from here and install it with all the default settings. What you get in the end is a terminal window, something like the one below. Whenever we open a terminal window, we see our last login credentials and a Shell prompt. The Shell prompt appears whenever the shell is ready to accept the input. It might vary a little in appearance depending upon the distribution, but mostly it appears as username@machinename followed by a $ sign. In case you do not want this whole bunch of information, you can use PS1 to customize your shell prompt. The terminal will now only show the $ at the prompt. However, this is only temporary and will reset to its original settings, once the terminal is closed. To get a little hang of the bash, let’s try a few simple commands: echo: returns whatever you type at the shell prompt similar to Print in Python. date: displays the current time and date. cal: displays a calendar of the current month. Clearing the Terminal : Ctrl-L or clear clears the terminal A bash command is the smallest unit of code that bash can independently execute. These commands tell bash what we need it to do. Bash generally takes in a single command from a user and returns to the user once the command has been executed. pwd stands for print working directory and it points to the current working directory, that is, the directory that the shell is currently looking at. It’s also the default place where the shell commands will look for data files. A directory is similar to a folder, but in the Shell, we shall stick to the name, directory. The UNIX file hierarchy has a tree structure. To reach a particular folder or file, we need to traverse certain paths within this tree structure. Paths separate every node of the above structure with the help of a slash( / ) character. Commands like ls and cd are used to navigate and organize the files. ls stands for a list and it lists the contents of a directory. ls usually starts out looking at our home directory. This means if we print ls by itself, it will always print the contents of the current directory which in my case is /Users/parul. Parameters The parameters and options turn on some special features when used with the ls command. ls <folder> : to see the contents of a particular folder. ls -a: For listing all the hidden files in a folder ls -l: Prints out a longer and more detailed listing of the files. ls -l can also be used with the name of the Directory to list the files of that particular directory. ls ~: tilde(~) is a short cut which denotes the home directory. So, regardless of what directory we are into, ls ~ will always list the home directory. The shell also lets us match filenames with patterns, denoted by an asterisk(*). It serves as a wildcard to replace any other character within a given pattern. For example, if we list *.txt, it will list all the files with a .txt extension. Let’s try and list out all the .py files in our Demo folder: cd stands for Change Directory and changes the active directory to the path specified. After we cd into a directory, ls command can be used to see the contents of that directory. Let’s see some of the ways in which this command can be used: cd <Directory>: changes the current directory to the desired Directory. Let’s navigate to the test directory which lies within the Demo directory and see the contents of it with the ls command. 
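Since the terminal screenshots from the original article do not survive in this text, the lines below are a rough transcript of the commands described above, assuming the Demo folder with the babynames.txt and fruits.txt files mentioned in the text.
$ ls -l *.py                               # long listing of only the .py files
$ mkdir PythonFiles; mv *.py PythonFiles   # make a directory and move the .py files into it
$ grep "Tom" babynames.txt | wc -l         # count the lines containing the word Tom
$ cat fruits.txt | sort | uniq             # sort first so uniq can drop adjacent duplicates
$ sort -u fruits.txt                       # single-command equivalent of the sort | uniq pipeline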
Note that we can also use a semicolon(;) to write two commands on the same line. cd .. : To go back to the parent directory. cd : To go back to the home directory There are certain commands which let us move, remove, create, and copy files from within the shell itself. mkdir stands for Make directory and is used to make a new directory or a folder. mv stands for Move and it moves one or more files or directories from one place to another. We need to specify what we want to move, i.e., the source and where we want to move them, i.e., the destination. Let’s create a new directory in the Demo Folder called PythonFiles and move all the .py files from the Demo folder into it using the above two commands. The touch command is used to create new, empty files. It is also used to change the timestamps on existing files and directories. Here is how we can create a file called foo.txt in the Demo folder. rm stands for Remove and it removes files or directories. By default, it does not remove directories, but if used as rm -r * within a directory, then every directory and file inside that directory is deleted. Let’s now remove the previously created foo.txt file. rmdir stands for remove directory and is used to remove empty directories from the filesystem. Let’s delete the PythonFiles folder that we created a while ago. Note that ../ denotes the parent directory. This is another aspect of the shell, which is super useful. There are commands which help us to view the contents of a file so that we can then manipulate them. cat stands for concatenate and it reads a file and outputs its content. It can read any number of files, and hence the name concatenate. There are some text files in our Demo folder and let’s use cat to view their content. To view more than one file, mention both the filenames after the cat command: $ cat Names.txt fruits.txt The cat command displays the contents of a file on the screen. This is fine when the contents are less but becomes a problem when the file is big. As can be seen in the example below, the command pops out everything at the terminal at a very high speed, and we cannot make sense of all the contents of the file. Fortunately, there is a command called less which lets us view the contents, one screen at a time. $ less babynames.txt There are certain options with less that can be used: Spacebar : To go to the next screen b: to go to the previous screen /: to search for a specific word q: quit The man command displays the man pages which are a user manual built default into many Linux and most Unix operating systems. man bash : To display the entire manual man <keyword> eg man ls gives information about the ls command. The pipe operator ‘|’ (vertical bar), is a way to send the output of one command as an input to another command. command1 | command2 When a command sends its output to a pipe, the receiving end for that output is another command, not a file. The figure below shows how the wc command count the contents of a file which have been displayed by the cat command. in a way wc is a command that takes in inputs and transforms those inputs in some way. Such commands are called filters and are placed after the Unix pipe. Let’s now look at some of the commonly used filter commands. We shall be working with a file called babynames.txt that contains around 1000 baby names and a fruits.txt file that contains names of few fruits. grep or global regular expression print searches for lines with a given string or looks for a pattern in a given input stream. 
The following command will read all the files and output all the lines that contain either the word ‘Tom.’ But this is a vast list, and we cannot possibly make sense of all these data just blasted at the terminal. Let’s see how we can use the pipe operator to make sense out of it. wc is short for word count. It reads a list of files and generates one or more of the following statistics: newline count, word count, and byte count. Let's input the output of the above grep command to wc to count the number of lines that contain the word ‘Tom.’ sort filter sorts lines alphabetically or numerically The cat command first reads the contents of the file fruits.txt and then sorts it. uniq stands for unique and gives us the number of unique lines in the input stream. It is important to note that uniq cannot detect duplicate entries unless they are adjacent. Hence we have used sorted the file before using the sort command. Alternatively, you can also use sort -u instead of uniq. Pipelines come in very handy for performing some complex tasks as several of the commands can be put together into a pipeline. Command Line tools can be a great addition to the toolkit. Initially, it can be intimidating for beginners, but once they get the hang of it, its real advantages and benefits can be realized. This article is to make one started and only scratches the surface. There are a plethora of useful resources available online to explore and learn about the shell in details. Here are some of the recommended resources: The Linux Command Line Fifth Internet Edition William Shotts Bash Beginners Guide The Bash Academy
[ { "code": null, "e": 185, "s": 47, "text": "Most computers today are not powered by electricity. They instead seem to be powered by the “pumping” motion of the mouse: William Shotts" }, { "code": null, "e": 761, "s": 185, "text": "Have you ever noticed how a super nerdy hacker in movies, can easily infiltrate the most secure banks and rob them all off by merely typing some commands ferociously and staring at the green screen with a black background? How the person consistently gets access to all the passwords and takes control of the hidden cameras, anywhere, with just a few strokes on the keyboard. Well, I am not sure how the movie makers got this, but I guess it is their way of telling us that the command line is a powerful tool, albeit without all these hacking and ACCESS GRANTED!! absurdity." }, { "code": null, "e": 1294, "s": 761, "text": "A lot of times, beginners are so used to working with the GUI based interface, that they tend to overlook the capabilities of the command line interface(CLI). A mouse comes in real handy when we need to copy about a hundred thousand files into a folder, but what if were to rename all those thousand files or were to separate them based on their extensions? Since GUIs are not programmable, it would take forever for us to rename or separate them With the command line, however, we could quickly achieve this in a few lines of code." }, { "code": null, "e": 1474, "s": 1294, "text": "The Unix shell is a pretty powerful tool for developers of all sorts. This article intends to give a quick introduction to the very basics starting from the UNIX operating system." }, { "code": null, "e": 1956, "s": 1474, "text": "Most operating systems today except the WINDOWS based, are built on top of UNIX. These include a lot of Linux distributions, macOS, iOS, Android, among others. A mere glance at the family tree of UNIX-based operating systems is sufficient to highlight the importance of UNIX, and this is the reason why it has been so widely adopted in the industry. In fact, the back end of many data and computing systems, including industry giants like Facebook and Google, heavily utilize UNIX." }, { "code": null, "e": 2265, "s": 1956, "text": "Shell is a command line interface for running programs on a computer. The user types a bunch of commands at the prompt, the shell runs the programs for the user and then displays the output. The commands can be either directly entered by the user or read from a file called the shell script or shell program." }, { "code": null, "e": 2351, "s": 2265, "text": "The UNIX system usually offers a variety of shell types. Some of the common ones are:" }, { "code": null, "e": 2637, "s": 2351, "text": "We shall, however, limit ourselves to the Bash shell in this article. However, you are encouraged to read and try the other shells also especially the zsh shell since, in the latest MacOS called Catalina, zsh will replace the bash shell. So it’ll be a good idea to get to know it, now." }, { "code": null, "e": 2911, "s": 2637, "text": "The terminal is a program that is used to interact with a shell. It is just an interface to the Shell and to the other command line programs that run inside it. This is akin to how a web browser is an interface to websites. Here is how a typical terminal on Mac looks like:" }, { "code": null, "e": 3202, "s": 2911, "text": "Mac and Linux have their respective versions of the terminal. Windows also has a built-in command shell, but that is based on the MS-DOS command line and not on UNIX. 
So let’s see how we can install a shell and a terminal program on Windows that works the same as the ones on Mac and Linux." }, { "code": null, "e": 3236, "s": 3202, "text": "Windows Subsystem for Linux (WSL)" }, { "code": null, "e": 3543, "s": 3236, "text": "It is a new Linux compatibility system in Windows 10. The WSL lets developers run GNU/Linux environment — including most command-line tools, utilities, and applications — directly on Windows, unmodified, without the overhead of a virtual machine. You can read more about its installation and features here." }, { "code": null, "e": 3552, "s": 3543, "text": "Git Bash" }, { "code": null, "e": 3783, "s": 3552, "text": "Git Bash is something that we will use for this article. Download the Git on your Windows computer from here and install it with all the default settings. What you get in the end is a terminal window, something like the one below." }, { "code": null, "e": 4085, "s": 3783, "text": "Whenever we open a terminal window, we see our last login credentials and a Shell prompt. The Shell prompt appears whenever the shell is ready to accept the input. It might vary a little in appearance depending upon the distribution, but mostly it appears as username@machinename followed by a $ sign." }, { "code": null, "e": 4190, "s": 4085, "text": "In case you do not want this whole bunch of information, you can use PS1 to customize your shell prompt." }, { "code": null, "e": 4345, "s": 4190, "text": "The terminal will now only show the $ at the prompt. However, this is only temporary and will reset to its original settings, once the terminal is closed." }, { "code": null, "e": 4412, "s": 4345, "text": "To get a little hang of the bash, let’s try a few simple commands:" }, { "code": null, "e": 4492, "s": 4412, "text": "echo: returns whatever you type at the shell prompt similar to Print in Python." }, { "code": null, "e": 4534, "s": 4492, "text": "date: displays the current time and date." }, { "code": null, "e": 4581, "s": 4534, "text": "cal: displays a calendar of the current month." }, { "code": null, "e": 4641, "s": 4581, "text": "Clearing the Terminal : Ctrl-L or clear clears the terminal" }, { "code": null, "e": 4883, "s": 4641, "text": "A bash command is the smallest unit of code that bash can independently execute. These commands tell bash what we need it to do. Bash generally takes in a single command from a user and returns to the user once the command has been executed." }, { "code": null, "e": 5112, "s": 4883, "text": "pwd stands for print working directory and it points to the current working directory, that is, the directory that the shell is currently looking at. It’s also the default place where the shell commands will look for data files." }, { "code": null, "e": 5441, "s": 5112, "text": "A directory is similar to a folder, but in the Shell, we shall stick to the name, directory. The UNIX file hierarchy has a tree structure. To reach a particular folder or file, we need to traverse certain paths within this tree structure. Paths separate every node of the above structure with the help of a slash( / ) character." }, { "code": null, "e": 5510, "s": 5441, "text": "Commands like ls and cd are used to navigate and organize the files." }, { "code": null, "e": 5756, "s": 5510, "text": "ls stands for a list and it lists the contents of a directory. ls usually starts out looking at our home directory. This means if we print ls by itself, it will always print the contents of the current directory which in my case is /Users/parul." 
}, { "code": null, "e": 5767, "s": 5756, "text": "Parameters" }, { "code": null, "e": 5855, "s": 5767, "text": "The parameters and options turn on some special features when used with the ls command." }, { "code": null, "e": 5913, "s": 5855, "text": "ls <folder> : to see the contents of a particular folder." }, { "code": null, "e": 5965, "s": 5913, "text": "ls -a: For listing all the hidden files in a folder" }, { "code": null, "e": 6134, "s": 5965, "text": "ls -l: Prints out a longer and more detailed listing of the files. ls -l can also be used with the name of the Directory to list the files of that particular directory." }, { "code": null, "e": 6286, "s": 6134, "text": "ls ~: tilde(~) is a short cut which denotes the home directory. So, regardless of what directory we are into, ls ~ will always list the home directory." }, { "code": null, "e": 6588, "s": 6286, "text": "The shell also lets us match filenames with patterns, denoted by an asterisk(*). It serves as a wildcard to replace any other character within a given pattern. For example, if we list *.txt, it will list all the files with a .txt extension. Let’s try and list out all the .py files in our Demo folder:" }, { "code": null, "e": 6829, "s": 6588, "text": "cd stands for Change Directory and changes the active directory to the path specified. After we cd into a directory, ls command can be used to see the contents of that directory. Let’s see some of the ways in which this command can be used:" }, { "code": null, "e": 7104, "s": 6829, "text": "cd <Directory>: changes the current directory to the desired Directory. Let’s navigate to the test directory which lies within the Demo directory and see the contents of it with the ls command. Note that we can also use a semicolon(;) to write two commands on the same line." }, { "code": null, "e": 7148, "s": 7104, "text": "cd .. : To go back to the parent directory." }, { "code": null, "e": 7186, "s": 7148, "text": "cd : To go back to the home directory" }, { "code": null, "e": 7293, "s": 7186, "text": "There are certain commands which let us move, remove, create, and copy files from within the shell itself." }, { "code": null, "e": 7374, "s": 7293, "text": "mkdir stands for Make directory and is used to make a new directory or a folder." }, { "code": null, "e": 7579, "s": 7374, "text": "mv stands for Move and it moves one or more files or directories from one place to another. We need to specify what we want to move, i.e., the source and where we want to move them, i.e., the destination." }, { "code": null, "e": 7732, "s": 7579, "text": "Let’s create a new directory in the Demo Folder called PythonFiles and move all the .py files from the Demo folder into it using the above two commands." }, { "code": null, "e": 7930, "s": 7732, "text": "The touch command is used to create new, empty files. It is also used to change the timestamps on existing files and directories. Here is how we can create a file called foo.txt in the Demo folder." }, { "code": null, "e": 8139, "s": 7930, "text": "rm stands for Remove and it removes files or directories. By default, it does not remove directories, but if used as rm -r * within a directory, then every directory and file inside that directory is deleted." }, { "code": null, "e": 8193, "s": 8139, "text": "Let’s now remove the previously created foo.txt file." }, { "code": null, "e": 8353, "s": 8193, "text": "rmdir stands for remove directory and is used to remove empty directories from the filesystem. 
Let’s delete the PythonFiles folder that we created a while ago." }, { "code": null, "e": 8397, "s": 8353, "text": "Note that ../ denotes the parent directory." }, { "code": null, "e": 8558, "s": 8397, "text": "This is another aspect of the shell, which is super useful. There are commands which help us to view the contents of a file so that we can then manipulate them." }, { "code": null, "e": 8781, "s": 8558, "text": "cat stands for concatenate and it reads a file and outputs its content. It can read any number of files, and hence the name concatenate. There are some text files in our Demo folder and let’s use cat to view their content." }, { "code": null, "e": 8859, "s": 8781, "text": "To view more than one file, mention both the filenames after the cat command:" }, { "code": null, "e": 8886, "s": 8859, "text": "$ cat Names.txt fruits.txt" }, { "code": null, "e": 9297, "s": 8886, "text": "The cat command displays the contents of a file on the screen. This is fine when the contents are less but becomes a problem when the file is big. As can be seen in the example below, the command pops out everything at the terminal at a very high speed, and we cannot make sense of all the contents of the file. Fortunately, there is a command called less which lets us view the contents, one screen at a time." }, { "code": null, "e": 9318, "s": 9297, "text": "$ less babynames.txt" }, { "code": null, "e": 9372, "s": 9318, "text": "There are certain options with less that can be used:" }, { "code": null, "e": 9408, "s": 9372, "text": "Spacebar : To go to the next screen" }, { "code": null, "e": 9440, "s": 9408, "text": "b: to go to the previous screen" }, { "code": null, "e": 9473, "s": 9440, "text": "/: to search for a specific word" }, { "code": null, "e": 9481, "s": 9473, "text": "q: quit" }, { "code": null, "e": 9607, "s": 9481, "text": "The man command displays the man pages which are a user manual built default into many Linux and most Unix operating systems." }, { "code": null, "e": 9647, "s": 9607, "text": "man bash : To display the entire manual" }, { "code": null, "e": 9711, "s": 9647, "text": "man <keyword> eg man ls gives information about the ls command." }, { "code": null, "e": 9824, "s": 9711, "text": "The pipe operator ‘|’ (vertical bar), is a way to send the output of one command as an input to another command." }, { "code": null, "e": 9844, "s": 9824, "text": "command1 | command2" }, { "code": null, "e": 10070, "s": 9844, "text": "When a command sends its output to a pipe, the receiving end for that output is another command, not a file. The figure below shows how the wc command count the contents of a file which have been displayed by the cat command." }, { "code": null, "e": 10226, "s": 10070, "text": "in a way wc is a command that takes in inputs and transforms those inputs in some way. Such commands are called filters and are placed after the Unix pipe." }, { "code": null, "e": 10434, "s": 10226, "text": "Let’s now look at some of the commonly used filter commands. We shall be working with a file called babynames.txt that contains around 1000 baby names and a fruits.txt file that contains names of few fruits." }, { "code": null, "e": 10668, "s": 10434, "text": "grep or global regular expression print searches for lines with a given string or looks for a pattern in a given input stream. 
The following command will read all the files and output all the lines that contain either the word ‘Tom.’" }, { "code": null, "e": 10843, "s": 10668, "text": "But this is a vast list, and we cannot possibly make sense of all these data just blasted at the terminal. Let’s see how we can use the pipe operator to make sense out of it." }, { "code": null, "e": 11107, "s": 10843, "text": "wc is short for word count. It reads a list of files and generates one or more of the following statistics: newline count, word count, and byte count. Let's input the output of the above grep command to wc to count the number of lines that contain the word ‘Tom.’" }, { "code": null, "e": 11161, "s": 11107, "text": "sort filter sorts lines alphabetically or numerically" }, { "code": null, "e": 11244, "s": 11161, "text": "The cat command first reads the contents of the file fruits.txt and then sorts it." }, { "code": null, "e": 11328, "s": 11244, "text": "uniq stands for unique and gives us the number of unique lines in the input stream." }, { "code": null, "e": 11543, "s": 11328, "text": "It is important to note that uniq cannot detect duplicate entries unless they are adjacent. Hence we have used sorted the file before using the sort command. Alternatively, you can also use sort -u instead of uniq." }, { "code": null, "e": 11670, "s": 11543, "text": "Pipelines come in very handy for performing some complex tasks as several of the commands can be put together into a pipeline." }, { "code": null, "e": 12081, "s": 11670, "text": "Command Line tools can be a great addition to the toolkit. Initially, it can be intimidating for beginners, but once they get the hang of it, its real advantages and benefits can be realized. This article is to make one started and only scratches the surface. There are a plethora of useful resources available online to explore and learn about the shell in details. Here are some of the recommended resources:" }, { "code": null, "e": 12142, "s": 12081, "text": "The Linux Command Line Fifth Internet Edition William Shotts" }, { "code": null, "e": 12163, "s": 12142, "text": "Bash Beginners Guide" } ]
Ashok - Osint Recon Tool in Kali Linux - GeeksforGeeks
28 Apr, 2021 Ashok is a free and open-source tool available on GitHub. Ashok is used for information gathering. Ashok is used to scan websites for information gathering and finding vulnerabilities in websites and webapps. Ashok is one of the easiest and useful tools for performing reconnaissance on websites and web apps. The Ashok tool is also available for Linux. That is coded in python language. Ashok interface is very similar to Metasploit 1 and Metasploit. Ashok provides a command-line interface that you can run on Kali Linux. This tool can be used to get information about our target(domain). We can target any domain using Ashok. The interactive console provides a number of helpful features, such as command completion and contextual help. A tool for every beginner/pentester in their penetration testing tasks. It contains several features like http-headers extractor, dns-lookup, whois-lookup, nslookup, subdomain-finder, nmap scanning, github,githubrecon, cms-detector, linkextractor, banner-grabbing, subnet-lookup, geoip-lookup. This tool is written in python language. You must have python language installed in your Kali Linux to use this tool. Ashok can detect WordPress, Drupal, Joomla, and Magento CMS, WordPress sensitive files, and WordPress version-related vulnerabilities. Ashok uses different modules for doing all the scannings. The whois data collection gives us information about Geoip lookup, Banner grabbing, DNS lookup, port scanning, sub-domain information, reverse IP, and MX records lookup. Overall Ashok is a vulnerability Scanner. Ashok has the following modules DNS Lookup, WHOIS lookup, GEO-Lookup, Subnet lookup, port scanner, Links extractor, etc. Ashok can detect closed and open ports of networks. Ashok also called a complete package of Information gathering tools. Features of Ashok: Ashok is free and open source tool. Ashok is a complete package of information gathering modules. Ashok works and acts as a web application/website scanner. Ashok is one of the easiest and useful tools for performing reconnaissance. Ashok is written in python language. Ashok interface is very similar to metasploitable 1 and metasploitable 2 that makes it easy to use. Ashok’s interactive console provides a number of helpful features. Ashok is used for information gathering and vulnerability assessment of web applications. Ashok can easily find loopholes in the code of web applications and websites. Ashok has the following modules Geoip lookup, Banner grabbing, DNS lookup, port scanning,–headers extractor, dns-lookup, whois-lookup, nslookup, subdomain-finder, nmap scanning, Github, githubrecon, cms-detector, linkextractor, banner-grabbing, subnet-lookup, geoip-lookup. These modules make this tool so powerful. Ashok can target a single domain and can found all the subdomains of that domain which makes work easy for pentesters. Step 1: Open your kali Linux operating system. Move to desktop. Here you have to create a directory called Ashok. In this directory, you have to install the tool. To move to desktop use the following command. cd Desktop Step 2: Now you are on the desktop. Here you have to create a directory called Ashok. To create an Ashok directory using the following command. mkdir Ashok Step 3. You have created a directory. Now use the following command to move into that directory. cd Ashok Step 4: Now you are in the Ashok directory. In this directory you have to download the tool means you have to clone the tool from GitHub. Use the following command to clone the tool from GitHub. 
git clone https://github.com/ankitdobhal/Ashok Step 5: The tool has been downloaded in the directory Ashok. Now to list out the contents of the tool that has been downloaded use following command. ls Step 6: When you listed out the contents of the tool you can see that a new directory has been generated by the tool that is Ashok. You have to move to this directory to view the contents of the tool. To move in this directory using the following command. cd Ashok Step 7. To list out the contents of this directory using the following command. ls Step 8: Now you can run the tool using the following command. This command will open the help menu of the tool. python3 Ashok.py -h The tool is running successfully now we will see the example of how to use the tool. Example 1: Use the Ashok tool to find headers of a website. python3 Ashok.py --headers https://hackthebox.eu These details of the header we got after scanning for the header. Similarly, you can scan for your domain. Example 2: Use Ashok tools to find subdomains of a target. python3 Ashok.py --subdomain hackthebox.eu We scanned for subdomains and we got these subdomains. Along with IP address of the domain Conclusion: Similarly, you can find the subdomains of your target also. This is simple as that. This tool can be used to get information about our target(domain). We can target any domain using Ashok. The interactive console provides a number of helpful features, such as command completion and contextual help. A tool for every beginner/pentester in their penetration testing tasks. It contains several features like: http-headers extractor, dns-lookup, whois-lookup, nslookup, subdomain-finder, nmap scanning, GitHub, githubrecon, cms-detector, linkextractor, banner-grabbing, subnet-lookup, geoip-lookup. This tool is written in python language. You must have python language installed in your kali linux to use this tool. Ashok can detect WordPress, Drupal, Joomla, and Magento CMS, WordPress sensitive files, and WordPress version-related vulnerabilities. Ashok uses different modules for doing all the scannings. The whois data collection gives us information about Geoip lookup, Banner grabbing, DNS lookup, port scanning, sub-domain information, reverse IP, and MX records lookup. Cyber-security Kali-Linux Linux-Tools Linux-Unix Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. TCP Server-Client implementation in C ZIP command in Linux with examples tar command in Linux with examples SORT command in Linux/Unix with examples curl command in Linux with Examples UDP Server-Client implementation in C 'crontab' in Linux with Examples diff command in Linux with examples Conditional Statements | Shell Script Cat command in Linux with examples
[ { "code": null, "e": 24638, "s": 24610, "text": "\n28 Apr, 2021" }, { "code": null, "e": 25164, "s": 24638, "text": "Ashok is a free and open-source tool available on GitHub. Ashok is used for information gathering. Ashok is used to scan websites for information gathering and finding vulnerabilities in websites and webapps. Ashok is one of the easiest and useful tools for performing reconnaissance on websites and web apps. The Ashok tool is also available for Linux. That is coded in python language. Ashok interface is very similar to Metasploit 1 and Metasploit. Ashok provides a command-line interface that you can run on Kali Linux. " }, { "code": null, "e": 25676, "s": 25164, "text": "This tool can be used to get information about our target(domain). We can target any domain using Ashok. The interactive console provides a number of helpful features, such as command completion and contextual help. A tool for every beginner/pentester in their penetration testing tasks. It contains several features like http-headers extractor, dns-lookup, whois-lookup, nslookup, subdomain-finder, nmap scanning, github,githubrecon, cms-detector, linkextractor, banner-grabbing, subnet-lookup, geoip-lookup. " }, { "code": null, "e": 26442, "s": 25676, "text": "This tool is written in python language. You must have python language installed in your Kali Linux to use this tool. Ashok can detect WordPress, Drupal, Joomla, and Magento CMS, WordPress sensitive files, and WordPress version-related vulnerabilities. Ashok uses different modules for doing all the scannings. The whois data collection gives us information about Geoip lookup, Banner grabbing, DNS lookup, port scanning, sub-domain information, reverse IP, and MX records lookup. Overall Ashok is a vulnerability Scanner. Ashok has the following modules DNS Lookup, WHOIS lookup, GEO-Lookup, Subnet lookup, port scanner, Links extractor, etc. Ashok can detect closed and open ports of networks. Ashok also called a complete package of Information gathering tools. " }, { "code": null, "e": 26461, "s": 26442, "text": "Features of Ashok:" }, { "code": null, "e": 26497, "s": 26461, "text": "Ashok is free and open source tool." }, { "code": null, "e": 26559, "s": 26497, "text": "Ashok is a complete package of information gathering modules." }, { "code": null, "e": 26618, "s": 26559, "text": "Ashok works and acts as a web application/website scanner." }, { "code": null, "e": 26694, "s": 26618, "text": "Ashok is one of the easiest and useful tools for performing reconnaissance." }, { "code": null, "e": 26731, "s": 26694, "text": "Ashok is written in python language." }, { "code": null, "e": 26831, "s": 26731, "text": "Ashok interface is very similar to metasploitable 1 and metasploitable 2 that makes it easy to use." }, { "code": null, "e": 26898, "s": 26831, "text": "Ashok’s interactive console provides a number of helpful features." }, { "code": null, "e": 26988, "s": 26898, "text": "Ashok is used for information gathering and vulnerability assessment of web applications." }, { "code": null, "e": 27066, "s": 26988, "text": "Ashok can easily find loopholes in the code of web applications and websites." }, { "code": null, "e": 27382, "s": 27066, "text": "Ashok has the following modules Geoip lookup, Banner grabbing, DNS lookup, port scanning,–headers extractor, dns-lookup, whois-lookup, nslookup, subdomain-finder, nmap scanning, Github, githubrecon, cms-detector, linkextractor, banner-grabbing, subnet-lookup, geoip-lookup. These modules make this tool so powerful." 
}, { "code": null, "e": 27501, "s": 27382, "text": "Ashok can target a single domain and can found all the subdomains of that domain which makes work easy for pentesters." }, { "code": null, "e": 27664, "s": 27501, "text": "Step 1: Open your kali Linux operating system. Move to desktop. Here you have to create a directory called Ashok. In this directory, you have to install the tool." }, { "code": null, "e": 27710, "s": 27664, "text": "To move to desktop use the following command." }, { "code": null, "e": 27721, "s": 27710, "text": "cd Desktop" }, { "code": null, "e": 27865, "s": 27721, "text": "Step 2: Now you are on the desktop. Here you have to create a directory called Ashok. To create an Ashok directory using the following command." }, { "code": null, "e": 27877, "s": 27865, "text": "mkdir Ashok" }, { "code": null, "e": 27974, "s": 27877, "text": "Step 3. You have created a directory. Now use the following command to move into that directory." }, { "code": null, "e": 27983, "s": 27974, "text": "cd Ashok" }, { "code": null, "e": 28178, "s": 27983, "text": "Step 4: Now you are in the Ashok directory. In this directory you have to download the tool means you have to clone the tool from GitHub. Use the following command to clone the tool from GitHub." }, { "code": null, "e": 28225, "s": 28178, "text": "git clone https://github.com/ankitdobhal/Ashok" }, { "code": null, "e": 28375, "s": 28225, "text": "Step 5: The tool has been downloaded in the directory Ashok. Now to list out the contents of the tool that has been downloaded use following command." }, { "code": null, "e": 28378, "s": 28375, "text": "ls" }, { "code": null, "e": 28634, "s": 28378, "text": "Step 6: When you listed out the contents of the tool you can see that a new directory has been generated by the tool that is Ashok. You have to move to this directory to view the contents of the tool. To move in this directory using the following command." }, { "code": null, "e": 28643, "s": 28634, "text": "cd Ashok" }, { "code": null, "e": 28723, "s": 28643, "text": "Step 7. To list out the contents of this directory using the following command." }, { "code": null, "e": 28726, "s": 28723, "text": "ls" }, { "code": null, "e": 28839, "s": 28726, "text": "Step 8: Now you can run the tool using the following command. This command will open the help menu of the tool." }, { "code": null, "e": 28859, "s": 28839, "text": "python3 Ashok.py -h" }, { "code": null, "e": 28944, "s": 28859, "text": "The tool is running successfully now we will see the example of how to use the tool." }, { "code": null, "e": 29004, "s": 28944, "text": "Example 1: Use the Ashok tool to find headers of a website." }, { "code": null, "e": 29053, "s": 29004, "text": "python3 Ashok.py --headers https://hackthebox.eu" }, { "code": null, "e": 29160, "s": 29053, "text": "These details of the header we got after scanning for the header. Similarly, you can scan for your domain." }, { "code": null, "e": 29219, "s": 29160, "text": "Example 2: Use Ashok tools to find subdomains of a target." }, { "code": null, "e": 29262, "s": 29219, "text": "python3 Ashok.py --subdomain hackthebox.eu" }, { "code": null, "e": 29353, "s": 29262, "text": "We scanned for subdomains and we got these subdomains. Along with IP address of the domain" }, { "code": null, "e": 29365, "s": 29353, "text": "Conclusion:" }, { "code": null, "e": 30444, "s": 29365, "text": "Similarly, you can find the subdomains of your target also. This is simple as that. 
This tool can be used to get information about our target(domain). We can target any domain using Ashok. The interactive console provides a number of helpful features, such as command completion and contextual help. A tool for every beginner/pentester in their penetration testing tasks. It contains several features like: http-headers extractor, dns-lookup, whois-lookup, nslookup, subdomain-finder, nmap scanning, GitHub, githubrecon, cms-detector, linkextractor, banner-grabbing, subnet-lookup, geoip-lookup. This tool is written in python language. You must have python language installed in your kali linux to use this tool. Ashok can detect WordPress, Drupal, Joomla, and Magento CMS, WordPress sensitive files, and WordPress version-related vulnerabilities. Ashok uses different modules for doing all the scannings. The whois data collection gives us information about Geoip lookup, Banner grabbing, DNS lookup, port scanning, sub-domain information, reverse IP, and MX records lookup." }, { "code": null, "e": 30459, "s": 30444, "text": "Cyber-security" }, { "code": null, "e": 30470, "s": 30459, "text": "Kali-Linux" }, { "code": null, "e": 30482, "s": 30470, "text": "Linux-Tools" }, { "code": null, "e": 30493, "s": 30482, "text": "Linux-Unix" }, { "code": null, "e": 30591, "s": 30493, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 30629, "s": 30591, "text": "TCP Server-Client implementation in C" }, { "code": null, "e": 30664, "s": 30629, "text": "ZIP command in Linux with examples" }, { "code": null, "e": 30699, "s": 30664, "text": "tar command in Linux with examples" }, { "code": null, "e": 30740, "s": 30699, "text": "SORT command in Linux/Unix with examples" }, { "code": null, "e": 30776, "s": 30740, "text": "curl command in Linux with Examples" }, { "code": null, "e": 30814, "s": 30776, "text": "UDP Server-Client implementation in C" }, { "code": null, "e": 30847, "s": 30814, "text": "'crontab' in Linux with Examples" }, { "code": null, "e": 30883, "s": 30847, "text": "diff command in Linux with examples" }, { "code": null, "e": 30921, "s": 30883, "text": "Conditional Statements | Shell Script" } ]
How to send a SMS using SMSmanager in Dual SIM mobile in Android using Kotlin?
This example demonstrates how to send a SMS using SMSmanager in Dual SIM mobile in Android using Kotlin. Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project. Step 2 − Add the following code to res/layout/activity_main.xml. <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" android:orientation="vertical" android:padding="4dp" tools:context=".MainActivity"> <EditText android:id="@+id/editTextNum" android:layout_width="match_parent" android:layout_height="wrap_content" android:hint="Enter mobile number" android:textColor="@android:color/black" /> <EditText android:id="@+id/editTextMsg" android:layout_width="match_parent" android:layout_height="wrap_content" android:hint="Enter your message" android:textColor="@android:color/black" /> <Button android:id="@+id/btnSendMsg" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="center" android:layout_marginTop="5dp" android:onClick="sendMessage" android:text="Send Message" /> </LinearLayout> Step 3 − Add the following code to src/MainActivity.kt import android.Manifest import android.content.pm.PackageManager import android.os.Bundle import android.telephony.SmsManager import android.text.TextUtils import android.view.View import android.widget.Button import android.widget.EditText import android.widget.Toast import androidx.appcompat.app.AppCompatActivity import androidx.core.app.ActivityCompat import androidx.core.content.ContextCompat class MainActivity : AppCompatActivity() { lateinit var button: Button lateinit var editTextNumber: EditText lateinit var editTextMessage: EditText private val permissionRequest = 101 override fun onCreate(savedInstanceState: Bundle?) 
{ super.onCreate(savedInstanceState) setContentView(R.layout.activity_main) title = "KotlinApp" editTextNumber = findViewById(R.id.editTextNum) editTextMessage = findViewById(R.id.editTextMsg) button = findViewById(R.id.btnSendMsg) } fun sendMessage(view: View) { val permissionCheck = ContextCompat.checkSelfPermission(this, Manifest.permission.SEND_SMS) if (permissionCheck == PackageManager.PERMISSION_GRANTED) { myMessage() } else { ActivityCompat.requestPermissions(this, arrayOf(Manifest.permission.SEND_SMS), permissionRequest) } } private fun myMessage() { val myNumber: String = editTextNumber.text.toString().trim() val myMsg: String = editTextMessage.text.toString().trim() if (myNumber == "" || myMsg == "") { Toast.makeText(this, "Field cannot be empty", Toast.LENGTH_SHORT).show() } else { if (TextUtils.isDigitsOnly(myNumber)) { val smsManager: SmsManager = SmsManager.getDefault() smsManager.sendTextMessage(myNumber, null, myMsg, null, null) Toast.makeText(this, "Message Sent", Toast.LENGTH_SHORT).show() } else { Toast.makeText(this, "Please enter the correct number", Toast.LENGTH_SHORT).show() } } } override fun onRequestPermissionsResult(requestCode: Int, permissions: Array<out String>, grantResults: IntArray) { super.onRequestPermissionsResult(requestCode, permissions, grantResults) if (requestCode == permissionRequest) { if (grantResults[0] == PackageManager.PERMISSION_GRANTED) { myMessage(); } else { Toast.makeText(this, "You don't have required permission to send a message", Toast.LENGTH_SHORT).show(); } } } } Step 4 − Add the following code to androidManifest.xml <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.example.q11"> <uses-permission android:name="android.permission.SEND_SMS" /> <application android:allowBackup="true" android:icon="@mipmap/ic_launcher" android:label="@string/app_name" android:roundIcon="@mipmap/ic_launcher_round" android:supportsRtl="true" android:theme="@style/AppTheme"> <activity android:name=".MainActivity"> <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> </application> </manifest> Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen
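One caveat: although the title mentions dual-SIM phones, the activity above always sends through SmsManager.getDefault(), i.e. the system's default messaging SIM. To let the user pick a SIM, the usual approach on API level 22 and above is to look up the active subscriptions and request the SmsManager bound to one of them. The function below is only a sketch, not part of the original example; it assumes it lives inside MainActivity, that the READ_PHONE_STATE permission has been granted in addition to SEND_SMS, and the simSlot parameter is illustrative.
// Sketch: send through a specific SIM slot instead of the default SmsManager.
// Needs import android.telephony.SubscriptionManager and API level 22+.
private fun sendMessageViaSim(simSlot: Int, number: String, msg: String) {
   val subscriptionManager = SubscriptionManager.from(this)
   // One SubscriptionInfo per active SIM; may be null if READ_PHONE_STATE is missing
   val subscriptions = subscriptionManager.activeSubscriptionInfoList
   if (subscriptions != null && simSlot < subscriptions.size) {
      val subId = subscriptions[simSlot].subscriptionId
      val smsManager = SmsManager.getSmsManagerForSubscriptionId(subId)
      smsManager.sendTextMessage(number, null, msg, null, null)
   } else {
      Toast.makeText(this, "Requested SIM is not available", Toast.LENGTH_SHORT).show()
   }
}
myMessage() above could then call sendMessageViaSim(0, myNumber, myMsg) for the first SIM or sendMessageViaSim(1, myNumber, myMsg) for the second, instead of using SmsManager.getDefault().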
[ { "code": null, "e": 1167, "s": 1062, "text": "This example demonstrates how to send a SMS using SMSmanager in Dual SIM mobile in Android using Kotlin." }, { "code": null, "e": 1296, "s": 1167, "text": "Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project." }, { "code": null, "e": 1361, "s": 1296, "text": "Step 2 − Add the following code to res/layout/activity_main.xml." }, { "code": null, "e": 2439, "s": 1361, "text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<LinearLayout xmlns:android=\"http://schemas.android.com/apk/res/android\"\n xmlns:tools=\"http://schemas.android.com/tools\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"match_parent\"\n android:orientation=\"vertical\"\n android:padding=\"4dp\"\n tools:context=\".MainActivity\">\n <EditText\n android:id=\"@+id/editTextNum\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"wrap_content\"\n android:hint=\"Enter mobile number\"\n android:textColor=\"@android:color/black\" />\n <EditText\n android:id=\"@+id/editTextMsg\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"wrap_content\"\n android:hint=\"Enter your message\"\n android:textColor=\"@android:color/black\" />\n <Button\n android:id=\"@+id/btnSendMsg\"\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:layout_gravity=\"center\"\n android:layout_marginTop=\"5dp\"\n android:onClick=\"sendMessage\"\n android:text=\"Send Message\" />\n</LinearLayout>" }, { "code": null, "e": 2494, "s": 2439, "text": "Step 3 − Add the following code to src/MainActivity.kt" }, { "code": null, "e": 4995, "s": 2494, "text": "import android.Manifest\nimport android.content.pm.PackageManager\nimport android.os.Bundle\nimport android.telephony.SmsManager\nimport android.text.TextUtils\nimport android.view.View\nimport android.widget.Button\nimport android.widget.EditText\nimport android.widget.Toast\nimport androidx.appcompat.app.AppCompatActivity\nimport androidx.core.app.ActivityCompat\nimport androidx.core.content.ContextCompat\nclass MainActivity : AppCompatActivity() {\n lateinit var button: Button\n lateinit var editTextNumber: EditText\n lateinit var editTextMessage: EditText\n private val permissionRequest = 101\n override fun onCreate(savedInstanceState: Bundle?) 
{\n super.onCreate(savedInstanceState)\n setContentView(R.layout.activity_main)\n title = \"KotlinApp\"\n editTextNumber = findViewById(R.id.editTextNum)\n editTextMessage = findViewById(R.id.editTextMsg)\n button = findViewById(R.id.btnSendMsg)\n }\n fun sendMessage(view: View) {\n val permissionCheck = ContextCompat.checkSelfPermission(this, Manifest.permission.SEND_SMS)\n if (permissionCheck == PackageManager.PERMISSION_GRANTED) {\n myMessage()\n } else {\n ActivityCompat.requestPermissions(this, arrayOf(Manifest.permission.SEND_SMS),\n permissionRequest)\n }\n }\n private fun myMessage() {\n val myNumber: String = editTextNumber.text.toString().trim()\n val myMsg: String = editTextMessage.text.toString().trim()\n if (myNumber == \"\" || myMsg == \"\") {\n Toast.makeText(this, \"Field cannot be empty\", Toast.LENGTH_SHORT).show()\n } else {\n if (TextUtils.isDigitsOnly(myNumber)) {\n val smsManager: SmsManager = SmsManager.getDefault()\n smsManager.sendTextMessage(myNumber, null, myMsg, null, null)\n Toast.makeText(this, \"Message Sent\", Toast.LENGTH_SHORT).show()\n } else {\n Toast.makeText(this, \"Please enter the correct number\", Toast.LENGTH_SHORT).show()\n }\n }\n }\n override fun onRequestPermissionsResult(requestCode: Int, permissions: Array<out String>, grantResults:\n IntArray) {\n super.onRequestPermissionsResult(requestCode, permissions, grantResults)\n if (requestCode == permissionRequest) {\n if (grantResults[0] == PackageManager.PERMISSION_GRANTED) {\n myMessage();\n } else {\n Toast.makeText(this, \"You don't have required permission to send a message\",\n Toast.LENGTH_SHORT).show();\n }\n }\n }\n}" }, { "code": null, "e": 5050, "s": 4995, "text": "Step 4 − Add the following code to androidManifest.xml" }, { "code": null, "e": 5787, "s": 5050, "text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<manifest xmlns:android=\"http://schemas.android.com/apk/res/android\" package=\"com.example.q11\">\n <uses-permission android:name=\"android.permission.SEND_SMS\" />\n <application\n android:allowBackup=\"true\"\n android:icon=\"@mipmap/ic_launcher\"\n android:label=\"@string/app_name\"\n android:roundIcon=\"@mipmap/ic_launcher_round\"\n android:supportsRtl=\"true\"\n android:theme=\"@style/AppTheme\">\n <activity android:name=\".MainActivity\">\n <intent-filter>\n <action android:name=\"android.intent.action.MAIN\" />\n <category android:name=\"android.intent.category.LAUNCHER\" />\n </intent-filter>\n </activity>\n </application>\n</manifest>" }, { "code": null, "e": 6135, "s": 5787, "text": "Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen" } ]
Next Greater Element | Practice | GeeksforGeeks
Given an array arr[ ] of size N having distinct elements, the task is to find the next greater element for each element of the array in order of their appearance in the array.
Next greater element of an element in the array is the nearest element on the right which is greater than the current element.
If there does not exist a next greater element for the current element, then the next greater element for the current element is -1. For example, the next greater element of the last element is always -1.
Example 1:
Input: 
N = 4, arr[] = [1 3 2 4]
Output:
3 4 4 -1
Explanation:
In the array, the next larger element 
to 1 is 3, to 3 it is 4, to 2 it is 4, and for 4, 
since it doesn't exist, it is -1.
Example 2:
Input: 
N = 5, arr[] = [6 8 0 1 3]
Output:
8 -1 1 3 -1
Explanation:
In the array, the next larger element to 
6 is 8, for 8 there is no larger element, 
hence it is -1, for 0 it is 1, for 1 it 
is 3 and then for 3 there is no larger 
element on the right and hence -1.
Your Task:
This is a function problem. You only need to complete the function nextLargerElement() that takes a list of integers arr[ ] and N as input parameters and returns a list of integers of length N denoting the next greater elements for all the corresponding elements in the input array.
Expected Time Complexity : O(N)
Expected Auxiliary Space : O(N)
Constraints:
1 ≤ N ≤ 10^6
1 ≤ Ai ≤ 10^18
+1
bhadoriyashiva7354in 7 hours
class Solution:
    def nextLargerElement(self,arr,n):
        #code here
        stack, lst = [], []
        for i in range(n-1, -1, -1):
            if len(stack) == 0:
                lst.append(-1)
            elif (len(stack) > 0 and stack[-1] > arr[i]):
                lst.append(stack[-1])
            elif (len(stack) > 0 and stack[-1] <= arr[i]):
                while (len(stack) > 0 and stack[-1] <= arr[i]):
                    stack.pop()
                if len(stack) == 0:
                    lst.append(-1)
                else:
                    lst.append(stack[-1])
            stack.append(arr[i])
        return lst[::-1]
+1
putyavka6 days ago
vector<long long> nextLargerElement(vector<long long> arr, int n){
    vector<long long> res(n);
    stack<long long> S;
    for (int i = n - 1; i >= 0; i--) {
        while (S.size() && S.top() < arr[i])
            S.pop();
        res[i] = S.size() ? S.top() : -1;
        S.push(arr[i]);
    }
    return res;
}
0
cse19bcs60561 week ago
JAVA SOLUTION
public static long[] nextLargerElement(long[] arr, int n) {
    long ans[]=new long[n];
    Stack<Long> st= new Stack<Long>();
    for(int i=arr.length-1;i>=0;i--) {
        while(st.size()!=0&&st.peek()<=arr[i]) {
            st.pop();
        }
        if(st.size()==0)
            ans[i]=(long)-1;
        else {
            ans[i]=st.peek();
        }
        st.push(arr[i]);
    }
    return ans;
}
0
garjeaniket71 week ago
// Your code here
Stack<Long>st = new Stack<>();
int i =n-1;
while(i>=0){
    if(st.empty()){
        st.push(arr[i]);
        arr[i]=-1;
        i--;
    }
    else{
        if(st.peek()>arr[i]){
            long temp=arr[i];
            arr[i]=st.peek();
            st.push(temp);
            i--;
        }
        else{
            st.pop();
        }
    }
}
return arr;
+1
19bcs13141 week ago
O(n) Solution using stack
class Solution
{
    //Function to find the next greater element for each element of the array.
    public static long[] nextLargerElement(long[] arr, int n)
    {
        // Your code here
        long[] ans=new long[n];
        Stack<Long> s=new Stack<>();
        for(int i=n-1;i>=0;i--){
            if(s.size()==0){
                s.push((long)-1);
                ans[i]=s.peek();
            }else if(s.size()!=0 && s.peek()> arr[i]){
                ans[i]=s.peek();
            }else if(s.size()!=0 && s.peek()<=arr[i]){
                while(s.size()!=0 && s.peek()<=arr[i]) s.pop();
                if(s.size()==0){
                    ans[i]=(long)-1;
                    s.push((long)-1);
                }else{
                    ans[i]=s.peek();
                }
            }
            s.push(arr[i]);
        }
        return ans ;
    }
}
-1
nevilvas2 weeks ago
class Solution:
    def nextLargerElement(self,arr,n):
        #code here
        s = []
        ans = []
        for i in range(n-1,-1,-1):
            while len(s) != 0 and arr[i] > s[-1]:
                s.pop()
            if len(s) == 0:
                ans.append(-1)
            else:
                ans.append(s[-1])
            s.append(arr[i])
        ans = ans[::-1]
        return ans
0
jotarokujo2 weeks ago
// pattern: wall-hit
public static long[] nextLargerElement(long[] arr, int n) {
    Stack<Long> stack = new Stack<Long>();
    long[] ans = new long[n + 1];
    for (int i = n - 1; i >= 0; i -= 1) {
        while (!stack.isEmpty() && stack.peek() <= arr[i]) {
            stack.pop();
        }
        if (stack.isEmpty()) {
            ans[i] = -1;
        } else {
            ans[i] = stack.peek();
        }
        stack.push(arr[i]);
    }
    return ans;
}
-1
manish14092 weeks ago
very short code and easy in c++
vector<long long> nextLargerElement(vector<long long> arr, int n){
    // manish kumar
    stack<long long>st;
    st.push(-1);
    vector<long long>ans(n,0);
    for(int i=n-1;i>=0;i--){
        while(st.top()!=-1 and arr[i]>=st.top())st.pop();
        ans[i]=st.top();
        st.push(arr[i]);
    }
    return ans;
}
0
abhinavbisht0252 weeks ago
using C++
vector<long long> nextLargerElement(vector<long long> arr, int n){
    // Your code here
    vector<long long> result;
    stack<long long> st;
    for(int i = n-1; i>=0; i--)
    {
        if(st.size() == 0)
        {
            result.push_back(-1);
        }
        else if(st.size() > 0 && st.top() > arr[i])
        {
            result.push_back(st.top());
        }
        else if(st.size() > 0 && st.top() <= arr[i])
        {
            while(st.size() > 0 && st.top() <= arr[i])
            {
                st.pop();
            }
            if(st.size() == 0)
            {
                result.push_back(-1);
            }
            else{
                result.push_back(st.top());
            }
        }
        st.push(arr[i]);
    }
    reverse(result.begin(),result.end());
    return result;
}
0
hharshit81182 weeks ago
void StackQueue :: push(int x){
    s1.push(x);
}
//Function to pop an element from queue by using 2 stacks.
int StackQueue :: pop(){
    if(s1.empty() == false){
        while(s1.size() != 1){
            s2.push(s1.top());
            s1.pop();
        }
        int x = s1.top();
        s1.pop();
        while(s2.empty() == false){
            s1.push(s2.top());
            s2.pop();
        }
        return x;
    }
    else{
        return -1;
    }
}
[ { "code": null, "e": 711, "s": 238, "text": "Given an array arr[ ] of size N having distinct elements, the task is to find the next greater element for each element of the array in order of their appearance in the array.\nNext greater element of an element in the array is the nearest element on the right which is greater than the current element.\nIf there does not exist next greater of current element, then next greater element for current element is -1. For example, next greater of the last element is always -1." }, { "code": null, "e": 722, "s": 711, "text": "Example 1:" }, { "code": null, "e": 900, "s": 722, "text": "Input: \nN = 4, arr[] = [1 3 2 4]\nOutput:\n3 4 4 -1\nExplanation:\nIn the array, the next larger element \nto 1 is 3 , 3 is 4 , 2 is 4 and for 4 ? \nsince it doesn't exist, it is -1.\n" }, { "code": null, "e": 911, "s": 900, "text": "Example 2:" }, { "code": null, "e": 1175, "s": 911, "text": "Input: \nN = 5, arr[] [6 8 0 1 3]\nOutput:\n8 -1 1 3 -1\nExplanation:\nIn the array, the next larger element to \n6 is 8, for 8 there is no larger elements \nhence it is -1, for 0 it is 1 , for 1 it \nis 3 and then for 3 there is no larger \nelement on right and hence -1." }, { "code": null, "e": 1465, "s": 1175, "text": "Your Task:\nThis is a function problem. You only need to complete the function nextLargerElement() that takes list of integers arr[ ] and N as input parameters and returns list of integers of length N denoting the next greater elements for all the corresponding elements in the input array." }, { "code": null, "e": 1529, "s": 1465, "text": "Expected Time Complexity : O(N)\nExpected Auxiliary Space : O(N)" }, { "code": null, "e": 1568, "s": 1529, "text": "Constraints:\n1 ≤ N ≤ 106\n1 ≤ Ai ≤ 1018" }, { "code": null, "e": 1571, "s": 1568, "text": "+1" }, { "code": null, "e": 1600, "s": 1571, "text": "bhadoriyashiva7354in 7 hours" }, { "code": null, "e": 2255, "s": 1600, "text": "class Solution:\n def nextLargerElement(self,arr,n):\n #code here\n stack, lst = [], []\n \n for i in range(n-1, -1, -1):\n if len(stack) == 0:\n lst.append(-1)\n elif (len(stack) > 0 and stack[-1] > arr[i]):\n lst.append(stack[-1])\n elif (len(stack) > 0 and stack[-1] <= arr[i]):\n while (len(stack) > 0 and stack[-1] <= arr[i]):\n stack.pop()\n if len(stack) == 0:\n lst.append(-1)\n else:\n lst.append(stack[-1])\n stack.append(arr[i])\n return lst[::-1]" }, { "code": null, "e": 2258, "s": 2255, "text": "+1" }, { "code": null, "e": 2277, "s": 2258, "text": "putyavka6 days ago" }, { "code": null, "e": 2594, "s": 2277, "text": "vector<long long> nextLargerElement(vector<long long> arr, int n){\n vector<long long> res(n);\n stack<long long> S;\n for (int i = n - 1; i >= 0; i--) {\n while (S.size() && S.top() < arr[i])\n S.pop();\n res[i] = S.size() ? 
S.top() : -1;\n S.push(arr[i]);\n }\n return res;\n}\n" }, { "code": null, "e": 2596, "s": 2594, "text": "0" }, { "code": null, "e": 2619, "s": 2596, "text": "cse19bcs60561 week ago" }, { "code": null, "e": 2633, "s": 2619, "text": "JAVA SOLUTION" }, { "code": null, "e": 3236, "s": 2633, "text": " public static long[] nextLargerElement(long[] arr, int n) { long ans[]=new long[n]; Stack<Long> st= new Stack<Long>(); for(int i=arr.length-1;i>=0;i--) { while(st.size()!=0&&st.peek()<=arr[i]) { st.pop(); } if(st.size()==0) ans[i]=(long)-1; else { ans[i]=st.peek(); } st.push(arr[i]); } return ans; } " }, { "code": null, "e": 3238, "s": 3236, "text": "0" }, { "code": null, "e": 3261, "s": 3238, "text": "garjeaniket71 week ago" }, { "code": null, "e": 3785, "s": 3261, "text": " // Your code here Stack<Long>st = new Stack<>(); int i =n-1; while(i>=0){ if(st.empty()){ st.push(arr[i]); arr[i]=-1; i--; } else{ if(st.peek()>arr[i]){ long temp=arr[i]; arr[i]=st.peek(); st.push(temp); i--; } else{ st.pop(); } } } return arr;" }, { "code": null, "e": 3788, "s": 3785, "text": "+1" }, { "code": null, "e": 3808, "s": 3788, "text": "19bcs13141 week ago" }, { "code": null, "e": 3834, "s": 3808, "text": "O(n) Solution using stack" }, { "code": null, "e": 4748, "s": 3836, "text": "class Solution\n{\n //Function to find the next greater element for each element of the array.\n public static long[] nextLargerElement(long[] arr, int n)\n { \n // Your code here\n long[] ans=new long[n];\n Stack<Long> s=new Stack<>();\n \n \n for(int i=n-1;i>=0;i--){\n if(s.size()==0){\n s.push((long)-1);\n ans[i]=s.peek();\n }else if(s.size()!=0 && s.peek()> arr[i]){\n ans[i]=s.peek();\n }else if(s.size()!=0 && s.peek()<=arr[i]){\n while(s.size()!=0 && s.peek()<=arr[i]) s.pop();\n \n if(s.size()==0){\n ans[i]=(long)-1;\n s.push((long)-1);\n }else{\n ans[i]=s.peek();\n }\n }\n \n s.push(arr[i]);\n }\n return ans ;\n } \n}" }, { "code": null, "e": 4751, "s": 4748, "text": "-1" }, { "code": null, "e": 4771, "s": 4751, "text": "nevilvas2 weeks ago" }, { "code": null, "e": 5263, "s": 4771, "text": "class Solution:\n def nextLargerElement(self,arr,n):\n #code here\n s = []\n ans = []\n for i in range(n-1,-1,-1):\n while len(s) != 0 and arr[i] > s[-1]: \n s.pop()\n \n if len(s) == 0:\n ans.append(-1)\n else:\n ans.append(s[-1])\n \n s.append(arr[i])\n \n \n \n \n ans = ans[::-1] \n return ans" }, { "code": null, "e": 5265, "s": 5263, "text": "0" }, { "code": null, "e": 5287, "s": 5265, "text": "jotarokujo2 weeks ago" }, { "code": null, "e": 5812, "s": 5287, "text": "// pattern: wall-hit\npublic static long[] nextLargerElement(long[] arr, int n) {\n Stack<Long> stack = new Stack<Long>();\n long[] ans = new long[n + 1];\n for (int i = n - 1; i >= 0; i -= 1) {\n while (!stack.isEmpty()\n \t\t\t&& stack.peek() <= arr[i]) {\n stack.pop();\n }\n if (stack.isEmpty()) {\n ans[i] = -1;\n } else {\n ans[i] = stack.peek();\n }\n stack.push(arr[i]);\n }\n return ans;\n }" }, { "code": null, "e": 5815, "s": 5812, "text": "-1" }, { "code": null, "e": 5837, "s": 5815, "text": "manish14092 weeks ago" }, { "code": null, "e": 5869, "s": 5837, "text": "very short code and easy in c++" }, { "code": null, "e": 6211, "s": 5869, "text": "vector<long long> nextLargerElement(vector<long long> arr, int n){ // manish kumar stack<long long>st; st.push(-1); vector<long long>ans(n,0); for(int i=n-1;i>=0;i--){ while(st.top()!=-1 and arr[i]>=st.top())st.pop(); ans[i]=st.top(); st.push(arr[i]); } return ans; }" }, { "code": null, "e": 6213, "s": 6211, "text": "0" }, { 
"code": null, "e": 6240, "s": 6213, "text": "abhinavbisht0252 weeks ago" }, { "code": null, "e": 6250, "s": 6240, "text": "using C++" }, { "code": null, "e": 7211, "s": 6250, "text": "vector<long long> nextLargerElement(vector<long long> arr, int n){\n // Your code here\n vector<long long> result;\n stack<long long> st;\n \n for(int i = n-1; i>=0; i--)\n {\n if(st.size() == 0)\n {\n result.push_back(-1);\n }\n else if(st.size() > 0 && st.top() > arr[i])\n {\n result.push_back(st.top());\n }\n else if(st.size() > 0 && st.top() <= arr[i])\n {\n while(st.size() > 0 && st.top() <= arr[i])\n {\n st.pop();\n }\n if(st.size() == 0)\n {\n result.push_back(-1);\n }\n else{\n result.push_back(st.top());\n }\n }\n st.push(arr[i]);\n }\n reverse(result.begin(),result.end());\n return result;\n }" }, { "code": null, "e": 7213, "s": 7211, "text": "0" }, { "code": null, "e": 7237, "s": 7213, "text": "hharshit81182 weeks ago" }, { "code": null, "e": 7284, "s": 7237, "text": "void StackQueue :: push(int x){ s1.push(x);}" }, { "code": null, "e": 7661, "s": 7284, "text": "//Function to pop an element from queue by using 2 stacks.int StackQueue :: pop(){ if(s1.empty() == false){ while(s1.size() != 1){ s2.push(s1.top()); s1.pop(); } int x = s1.top(); s1.pop(); while(s2.empty() == false){ s1.push(s2.top()); s2.pop(); } return x; } else{ return -1; }}" }, { "code": null, "e": 7807, "s": 7661, "text": "We strongly recommend solving this problem on your own before viewing its editorial. Do you still\n want to view the editorial?" }, { "code": null, "e": 7843, "s": 7807, "text": " Login to access your submissions. " }, { "code": null, "e": 7853, "s": 7843, "text": "\nProblem\n" }, { "code": null, "e": 7863, "s": 7853, "text": "\nContest\n" }, { "code": null, "e": 7926, "s": 7863, "text": "Reset the IDE using the second button on the top right corner." }, { "code": null, "e": 8074, "s": 7926, "text": "Avoid using static/global variables in your code as your code is tested against multiple test cases and these tend to retain their previous values." }, { "code": null, "e": 8282, "s": 8074, "text": "Passing the Sample/Custom Test cases does not guarantee the correctness of code. On submission, your code is tested against multiple test cases consisting of all possible corner cases and stress constraints." }, { "code": null, "e": 8388, "s": 8282, "text": "You can access the hints to get an idea about what is expected of you as well as the final solution code." } ]
__name__ (A Special variable) in Python
Unlike some other programming languages, Python is not designed to start execution of the code from an explicit main function. A special variable called __name__ provides the functionality of the main function. As it is a built-in variable in Python, we can write a program just to see the value of this variable, as below.
print type(__name__)
print __name__
Running the above code gives us the following result −
<type 'str'>
__main__
As you can see above, the value of the __name__ variable is of string data type and equals __main__.
Below are the two key features of the __name__ variable.
1. When you run any well-written stand-alone Python script which is not referring to any other script, the value of the __name__ variable is equal to __main__.
Let's write a program named standalone.py to illustrate this feature.
def func1():
   print 'The value of __name__ is ' + __name__
if __name__ == '__main__':
   func1()
Running the above code gives us the following result −
The value of __name__ is __main__
As expected, the __name__ variable evaluates to __main__, hence the output.
2. The second feature is about importing one Python script into another. In such a scenario, there seem to be two different scopes which can be considered as the main() function. The first is the scope of the currently running program, and the second is the scope of the imported script used in the current program.
So which __main__ variable will be used?
When you run a function from the imported script, the __name__ variable evaluates to the actual name of that script and not to __main__.
But when you find out the value of the __name__ variable from the current program, without referring to the imported script, it evaluates to __main__ as expected, as that is the scope of the program at level 0 indentation.
The below program illustrates this example.
import standalone as sa

print('Running the imported script')
sa.func1()

print('\n')
print('Running the current script')
print 'The value of __name__ is ' + __name__
Running the above code gives us the following result −
Running the imported script
The value of __name__ is standalone

Running the current script
The value of __name__ is __main__
The usefulness of this approach is that when the check if __name__ == "__main__": is used, the Python interpreter can tell whether it is parsing the currently executed script or temporarily parsing another, imported script. This way the programmer has the ability to control the behaviors of different parts of the program by choosing to run lines of code from the external script as well as the currently executed script, depending on the scenario.
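The snippets above use Python 2 style print statements, which is why the first output shows <type 'str'>. Under Python 3, print is a function, so the same standalone.py check would be written as below; this small variant is added here only for convenience and is not part of the original article.
# Python 3 version of standalone.py (print is a function in Python 3)
def func1():
    print('The value of __name__ is ' + __name__)

if __name__ == '__main__':
    func1()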
[ { "code": null, "e": 1391, "s": 1062, "text": "Unlike other programming languages, python is not designed to start execution of the code from a main function explicitly. A special variable called __name__ provides the functionality of the main function. As it is an in-built variable in python language, we can write a program just to see the value of this variable as below." }, { "code": null, "e": 1427, "s": 1391, "text": "print type(__name__)\nprint __name__" }, { "code": null, "e": 1482, "s": 1427, "text": "Running the above code gives us the following result −" }, { "code": null, "e": 1504, "s": 1482, "text": "<type 'str'>\n__main__" }, { "code": null, "e": 1608, "s": 1504, "text": "As you can see above the value of the __name__ variable is of string data type and equals to ___main__." }, { "code": null, "e": 1661, "s": 1608, "text": "Below are the two key features of __name__ variable." }, { "code": null, "e": 1817, "s": 1661, "text": "1. When you run any well-written stand-alone python script which is not referring to any other script, the value of __name__ variable is equal to __main__." }, { "code": null, "e": 1887, "s": 1817, "text": "Let's write a program named standalone.py to illustrate this feature." }, { "code": null, "e": 1986, "s": 1887, "text": "def func1():\n print 'The value of __name__ is ' + __name__\nif __name__ == '__main__':\n func1()" }, { "code": null, "e": 2041, "s": 1986, "text": "Running the above code gives us the following result −" }, { "code": null, "e": 2075, "s": 2041, "text": "The value of __name__ is __main__" }, { "code": null, "e": 2149, "s": 2075, "text": "As expected the __name__ variable evaluates to __main__ hence the output." }, { "code": null, "e": 2508, "s": 2149, "text": "2. The second feature is about importing one python script into another. In such a scenario, there seem to be two different scopes which can be considered as the main() function. The first scope can be the __main__ variable of the currently running program and the second the scope of the __main__ variable of the imported script used in the current program." }, { "code": null, "e": 2549, "s": 2508, "text": "So which __main__ variable will be used " }, { "code": null, "e": 2695, "s": 2549, "text": "when you run a function from the imported script, the __name__ variable will evaluate to the actual name of the script and not equal to __main__." }, { "code": null, "e": 2916, "s": 2695, "text": "But when you find out the value of the name variable from the current program, without referring to the imported script, it will evaluate to __main__ as expected, as that is the scope of the program at level 0 indention." }, { "code": null, "e": 2960, "s": 2916, "text": "The below program illustrates this example." }, { "code": null, "e": 3127, "s": 2960, "text": "import standalone as sa\n\nprint('Running the imported script')\nsa.func1()\n\nprint('\\n')\nprint('Running the current script')\nprint 'The value of __name__ is ' + __name__" }, { "code": null, "e": 3182, "s": 3127, "text": "Running the above code gives us the following result −" }, { "code": null, "e": 3308, "s": 3182, "text": "Running the imported script\nThe value of __name__ is standalone\n\nRunning the current script\nThe value of __name__ is __main__" }, { "code": null, "e": 3752, "s": 3308, "text": "The usefulness of this approach is when the code if __name__ == \"__main__\": is used, the python interpreter checks if it's parsing the currently executed script, or if it's temporarily parsing another external script. 
This way the programmer has the ability to control the behaviors of different parts of the program by choosing to run lines of code from the external script as well as the currently executed script depending on the scenarios." } ]
K'th Smallest/Largest Element in Unsorted Array | Set 3 (Worst Case Linear Time) - GeeksforGeeks
31 Oct, 2021
We recommend reading the following posts as a prerequisite of this post:
K'th Smallest/Largest Element in Unsorted Array | Set 1
K'th Smallest/Largest Element in Unsorted Array | Set 2 (Expected Linear Time)
Given an array and a number k where k is smaller than the size of the array, we need to find the k'th smallest element in the given array. It is given that all array elements are distinct.
Examples: 
Input: arr[] = {7, 10, 4, 3, 20, 15}
       k = 3
Output: 7

Input: arr[] = {7, 10, 4, 3, 20, 15}
       k = 4
Output: 10
In the previous post, we discussed an expected linear time algorithm. In this post, a worst-case linear time method is discussed. The idea in this new method is similar to quickSelect(): we get worst-case linear time by selecting a pivot that divides the array in a balanced way (there are not very few elements on one side and many on the other side). After the array is divided in a balanced way, we apply the same steps as used in quickSelect() to decide whether to go left or right of the pivot. Following is the complete algorithm.
kthSmallest(arr[0..n-1], k)
1) Divide arr[] into ⌈n/5⌉ groups where size of each group is 5
   except possibly the last group which may have less than 5 elements.
2) Sort the above created ⌈n/5⌉ groups and find median of all groups.
   Create an auxiliary array 'median[]' and store medians of all ⌈n/5⌉
   groups in this median array.
   // Recursively call this method to find median of median[0..⌈n/5⌉-1]
3) medOfMed = kthSmallest(median[0..⌈n/5⌉-1], ⌈n/10⌉)
4) Partition arr[] around medOfMed and obtain its position.
   pos = partition(arr, n, medOfMed)
5) If pos == k return medOfMed
6) If pos > k return kthSmallest(arr[l..pos-1], k)
7) If pos < k return kthSmallest(arr[pos+1..r], k-pos+l-1)
In the above algorithm, the last 3 steps are the same as in the algorithm of the previous post. The first four steps are used to obtain a good point for partitioning the array (to make sure that there are not too many elements on either side of the pivot). Following is the implementation of the above algorithm. 
C++
Java
Python3
C#
Javascript
// C++ implementation of worst case linear time algorithm
// to find k'th smallest element
#include<iostream>
#include<algorithm>
#include<climits>

using namespace std;

int partition(int arr[], int l, int r, int k);

// A simple function to find median of arr[]. This is called
// only for an array of size 5 in this program.
int findMedian(int arr[], int n)
{
    sort(arr, arr+n);  // Sort the array
    return arr[n/2];   // Return middle element
}

// Returns k'th smallest element in arr[l..r] in worst case
// linear time. ASSUMPTION: ALL ELEMENTS IN ARR[] ARE DISTINCT
int kthSmallest(int arr[], int l, int r, int k)
{
    // If k is smaller than number of elements in array
    if (k > 0 && k <= r - l + 1)
    {
        int n = r-l+1; // Number of elements in arr[l..r]

        // Divide arr[] in groups of size 5, calculate median
        // of every group and store it in median[] array.
        int i, median[(n+4)/5]; // There will be floor((n+4)/5) groups;
        for (i=0; i<n/5; i++)
            median[i] = findMedian(arr+l+i*5, 5);
        if (i*5 < n) //For last group with less than 5 elements
        {
            median[i] = findMedian(arr+l+i*5, n%5);
            i++;
        }

        // Find median of all medians using recursive call.
        // If median[] has only one element, then no need
        // of recursive call
        int medOfMed = (i == 1)? 
median[i-1]: kthSmallest(median, 0, i-1, i/2); // Partition the array around a random element and // get position of pivot element in sorted array int pos = partition(arr, l, r, medOfMed); // If position is same as k if (pos-l == k-1) return arr[pos]; if (pos-l > k-1) // If position is more, recur for left return kthSmallest(arr, l, pos-1, k); // Else recur for right subarray return kthSmallest(arr, pos+1, r, k-pos+l-1); } // If k is more than number of elements in array return INT_MAX;} void swap(int *a, int *b){ int temp = *a; *a = *b; *b = temp;} // It searches for x in arr[l..r], and partitions the array// around x.int partition(int arr[], int l, int r, int x){ // Search for x in arr[l..r] and move it to end int i; for (i=l; i<r; i++) if (arr[i] == x) break; swap(&arr[i], &arr[r]); // Standard partition algorithm i = l; for (int j = l; j <= r - 1; j++) { if (arr[j] <= x) { swap(&arr[i], &arr[j]); i++; } } swap(&arr[i], &arr[r]); return i;} // Driver program to test above methodsint main(){ int arr[] = {12, 3, 5, 7, 4, 19, 26}; int n = sizeof(arr)/sizeof(arr[0]), k = 3; cout << "K'th smallest element is " << kthSmallest(arr, 0, n-1, k); return 0;} // Java implementation of worst// case linear time algorithm// to find k'th smallest elementimport java.util.*; class GFG{ // int partition(int arr[], int l, int r, int k); // A simple function to find median of arr[]. This is called// only for an array of size 5 in this program.static int findMedian(int arr[], int i,int n){ Arrays.sort(arr, i, n); return arr[i+(n-i)/2]; // sort the array and return middle element} // Returns k'th smallest element// in arr[l..r] in worst case// linear time. ASSUMPTION: ALL// ELEMENTS IN ARR[] ARE DISTINCTstatic int kthSmallest(int arr[], int l, int r, int k){ // If k is smaller than // number of elements in array if (k > 0 && k <= r - l + 1) { int n = r - l + 1 ; // Number of elements in arr[l..r] // Divide arr[] in groups of size 5, // calculate median of every group // and store it in median[] array. int i; // There will be floor((n+4)/5) groups; int []median = new int[(n + 4) / 5]; for (i = 0; i < n/5; i++) median[i] = findMedian(arr, l+i*5, l+i*5+5); // For last group with less than 5 elements if (i*5 < n) { median[i] = findMedian(arr, l+i*5, l+i*5+n%5); i++; } // Find median of all medians using recursive call. // If median[] has only one element, then no need // of recursive call int medOfMed = (i == 1)? 
median[i - 1]: kthSmallest(median, 0, i - 1, i / 2); // Partition the array around a random element and // get position of pivot element in sorted array int pos = partition(arr, l, r, medOfMed); // If position is same as k if (pos-l == k - 1) return arr[pos]; if (pos-l > k - 1) // If position is more, recur for left return kthSmallest(arr, l, pos - 1, k); // Else recur for right subarray return kthSmallest(arr, pos + 1, r, k - pos + l - 1); } // If k is more than number of elements in array return Integer.MAX_VALUE;} static int[] swap(int []arr, int i, int j){ int temp = arr[i]; arr[i] = arr[j]; arr[j] = temp; return arr;} // It searches for x in arr[l..r], and// partitions the array around x.static int partition(int arr[], int l, int r, int x){ // Search for x in arr[l..r] and move it to end int i; for (i = l; i < r; i++) if (arr[i] == x) break; swap(arr, i, r); // Standard partition algorithm i = l; for (int j = l; j <= r - 1; j++) { if (arr[j] <= x) { swap(arr, i, j); i++; } } swap(arr, i, r); return i;} // Driver codepublic static void main(String[] args){ int arr[] = {12, 3, 5, 7, 4, 19, 26}; int n = arr.length, k = 3; System.out.println("K'th smallest element is " + kthSmallest(arr, 0, n - 1, k));}} // This code has been contributed by 29AjayKumar and updated by ajayhajare # Python3 implementation of worst case # linear time algorithm to find# k'th smallest element # Returns k'th smallest element in arr[l..r]# in worst case linear time.# ASSUMPTION: ALL ELEMENTS IN ARR[] ARE DISTINCTdef kthSmallest(arr, l, r, k): # If k is smaller than number of # elements in array if (k > 0 and k <= r - l + 1): # Number of elements in arr[l..r] n = r - l + 1 # Divide arr[] in groups of size 5, # calculate median of every group # and store it in median[] array. median = [] i = 0 while (i < n // 5): median.append(findMedian(arr, l + i * 5, 5)) i += 1 # For last group with less than 5 elements if (i * 5 < n): median.append(findMedian(arr, l + i * 5, n % 5)) i += 1 # Find median of all medians using recursive call. 
# If median[] has only one element, then no need # of recursive call if i == 1: medOfMed = median[i - 1] else: medOfMed = kthSmallest(median, 0, i - 1, i // 2) # Partition the array around a medOfMed # element and get position of pivot # element in sorted array pos = partition(arr, l, r, medOfMed) # If position is same as k if (pos - l == k - 1): return arr[pos] if (pos - l > k - 1): # If position is more, # recur for left subarray return kthSmallest(arr, l, pos - 1, k) # Else recur for right subarray return kthSmallest(arr, pos + 1, r, k - pos + l - 1) # If k is more than the number of # elements in the array return 999999999999 def swap(arr, a, b): temp = arr[a] arr[a] = arr[b] arr[b] = temp # It searches for x in arr[l..r], # and partitions the array around x.def partition(arr, l, r, x): for i in range(l, r): if arr[i] == x: swap(arr, r, i) break x = arr[r] i = l for j in range(l, r): if (arr[j] <= x): swap(arr, i, j) i += 1 swap(arr, i, r) return i # A simple function to find# median of arr[] from index l to l+ndef findMedian(arr, l, n): lis = [] for i in range(l, l + n): lis.append(arr[i]) # Sort the array lis.sort() # Return the middle element return lis[n // 2] # Driver Codeif __name__ == '__main__': arr = [12, 3, 5, 7, 4, 19, 26] n = len(arr) k = 3 print("K'th smallest element is", kthSmallest(arr, 0, n - 1, k)) # This code is contributed by Ashutosh450 // C# implementation of worst// case linear time algorithm// to find k'th smallest elementusing System; class GFG{ // int partition(int arr[], int l, int r, int k); // A simple function to find median of arr[]. This is called// only for an array of size 5 in this program.static int findMedian(int []arr, int i, int n){ if(i <= n) Array.Sort(arr, i, n); // Sort the array else Array.Sort(arr, n, i); return arr[n/2]; // Return middle element} // Returns k'th smallest element// in arr[l..r] in worst case// linear time. ASSUMPTION: ALL// ELEMENTS IN ARR[] ARE DISTINCTstatic int kthSmallest(int []arr, int l, int r, int k){ // If k is smaller than // number of elements in array if (k > 0 && k <= r - l + 1) { int n = r - l + 1 ; // Number of elements in arr[l..r] // Divide arr[] in groups of size 5, // calculate median of every group // and store it in median[] array. int i; // There will be floor((n+4)/5) groups; int []median = new int[(n + 4) / 5]; for (i = 0; i < n/5; i++) median[i] = findMedian(arr, l + i * 5, 5); // For last group with less than 5 elements if (i*5 < n) { median[i] = findMedian(arr,l + i * 5, n % 5); i++; } // Find median of all medians using recursive call. // If median[] has only one element, then no need // of recursive call int medOfMed = (i == 1)? 
median[i - 1]: kthSmallest(median, 0, i - 1, i / 2); // Partition the array around a random element and // get position of pivot element in sorted array int pos = partition(arr, l, r, medOfMed); // If position is same as k if (pos-l == k - 1) return arr[pos]; if (pos-l > k - 1) // If position is more, recur for left return kthSmallest(arr, l, pos - 1, k); // Else recur for right subarray return kthSmallest(arr, pos + 1, r, k - pos + l - 1); } // If k is more than number of elements in array return int.MaxValue;} static int[] swap(int []arr, int i, int j){ int temp = arr[i]; arr[i] = arr[j]; arr[j] = temp; return arr;} // It searches for x in arr[l..r], and// partitions the array around x.static int partition(int []arr, int l, int r, int x){ // Search for x in arr[l..r] and move it to end int i; for (i = l; i < r; i++) if (arr[i] == x) break; swap(arr, i, r); // Standard partition algorithm i = l; for (int j = l; j <= r - 1; j++) { if (arr[j] <= x) { swap(arr, i, j); i++; } } swap(arr, i, r); return i;} // Driver codepublic static void Main(String[] args){ int []arr = {12, 3, 5, 7, 4, 19, 26}; int n = arr.Length, k = 3; Console.WriteLine("K'th smallest element is " + kthSmallest(arr, 0, n - 1, k));}} // This code contributed by Rajput-Ji <script>// Javascript implementation of worst// case linear time algorithm// to find k'th smallest element // int partition(int arr[], int l, int r, int k); // A simple function to find median of arr[]. This is called// only for an array of size 5 in this program.function findMedian(arr, i, n) { if (i <= n) arr.sort((a, b) => a - b); // Sort the array else arr.sort((a, b) => a - b); return arr[Math.floor(n / 2)]; // Return middle element} // Returns k'th smallest element// in arr[l..r] in worst case// linear time. ASSUMPTION: ALL// ELEMENTS IN ARR[] ARE DISTINCTfunction kthSmallest(arr, l, r, k){ // If k is smaller than // number of elements in array if (k > 0 && k <= r - l + 1) { let n = r - l + 1; // Number of elements in arr[l..r] // Divide arr[] in groups of size 5, // calculate median of every group // and store it in median[] array. let i; // There will be floor((n+4)/5) groups; let median = new Array(Math.floor((n + 4) / 5)); for (i = 0; i < n / 5; i++) median[i] = findMedian(arr, l + i * 5, 5); // For last group with less than 5 elements if (i * 5 < n) { median[i] = findMedian(arr, l + i * 5, n % 5); i++; } // Find median of all medians using recursive call. // If median[] has only one element, then no need // of recursive call let medOfMed = (i == 1) ? 
                          median[i - 1] : kthSmallest(median, 0, i - 1, Math.floor(i / 2));

        // Partition the array around a random element and
        // get position of pivot element in sorted array
        let pos = partition(arr, l, r, medOfMed);

        // If position is same as k
        if (pos - l == k - 1)
            return arr[pos];
        if (pos - l > k - 1) // If position is more, recur for left
            return kthSmallest(arr, l, pos - 1, k);

        // Else recur for right subarray
        return kthSmallest(arr, pos + 1, r, k - pos + l - 1);
    }

    // If k is more than number of elements in array
    return Number.MAX_VALUE;
}

function swap(arr, i, j) {
    let temp = arr[i];
    arr[i] = arr[j];
    arr[j] = temp;
    return arr;
}

// It searches for x in arr[l..r], and
// partitions the array around x.
function partition(arr, l, r, x) {

    // Search for x in arr[l..r] and move it to end
    let i;
    for (i = l; i < r; i++)
        if (arr[i] == x)
            break;
    swap(arr, i, r);

    // Standard partition algorithm
    i = l;
    for (let j = l; j <= r - 1; j++) {
        if (arr[j] <= x) {
            swap(arr, i, j);
            i++;
        }
    }
    swap(arr, i, r);
    return i;
}

// Driver code
let arr = [12, 3, 5, 7, 4, 19, 26];
let n = arr.length, k = 3;
document.write("K'th smallest element is " + kthSmallest(arr, 0, n - 1, k));

// This code has been contributed by Saurabh Jaiswal
</script>
Output: 
K'th smallest element is 5
Time Complexity: The worst case time complexity of the above algorithm is O(n). Let us analyze all steps. Steps 1) and 2) take O(n) time, as finding the median of an array of size 5 takes O(1) time and there are n/5 arrays of size 5. Step 3) takes T(n/5) time. Step 4 is a standard partition and takes O(n) time. The interesting steps are 6) and 7). At most one of them is executed. These are recursive steps. What is the worst case size of these recursive calls? The answer is the maximum number of elements greater than medOfMed (obtained in step 3) or the maximum number of elements smaller than medOfMed. How many elements are greater than medOfMed and how many are smaller? At least half of the medians found in step 2 are greater than or equal to medOfMed. Thus, at least half of the n/5 groups contribute 3 elements that are greater than medOfMed, except for the one group that has fewer than 5 elements. Therefore, the number of elements greater than medOfMed is at least 3n/10 – 6. Similarly, the number of elements that are less than medOfMed is at least 3n/10 – 6. In the worst case, the function recurs for at most n – (3n/10 – 6), which is 7n/10 + 6 elements. Note that 7n/10 + 6 < n for n > 20 and that any input of 80 or fewer elements requires O(1) time. We can therefore obtain the recurrence 
T(n) <= T(n/5) + T(7n/10 + 6) + O(n)
We show that the running time is linear by substitution. Assume that T(n) <= cn for some constant c and all n > 80. Substituting this inductive hypothesis into the right-hand side of the recurrence yields 
T(n) <= cn/5 + c(7n/10 + 6) + O(n)
     <= cn/5 + c + 7cn/10 + 6c + O(n)
     <= 9cn/10 + 7c + O(n)
     <= cn, 
since we can pick c large enough so that c(n/10 – 7) is larger than the function described by the O(n) term for all n > 80. The worst-case running time is therefore linear (Source: http://staff.ustc.edu.cn/~csli/graduate/algorithms/book6/chap10.htm ). Note that the above algorithm is linear in the worst case, but the constants are very high for this algorithm. Therefore, this algorithm doesn't work well in practical situations; randomized quickSelect works much better and is preferred.
Sources: 
MIT Video Lecture on Order Statistics, Median 
Introduction to Algorithms by Clifford Stein, Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest 
[ { "code": null, "e": 24860, "s": 24832, "text": "\n31 Oct, 2021" }, { "code": null, "e": 25262, "s": 24860, "text": "We recommend reading following posts as a prerequisite of this post.K’th Smallest/Largest Element in Unsorted Array | Set 1 K’th Smallest/Largest Element in Unsorted Array | Set 2 (Expected Linear Time)Given an array and a number k where k is smaller than the size of the array, we need to find the k’th smallest element in the given array. It is given that all array elements are distinct.Examples: " }, { "code": null, "e": 25384, "s": 25262, "text": "Input: arr[] = {7, 10, 4, 3, 20, 15}\n k = 3\nOutput: 7\n\nInput: arr[] = {7, 10, 4, 3, 20, 15}\n k = 4\nOutput: 10" }, { "code": null, "e": 25909, "s": 25386, "text": "In previous post, we discussed an expected linear time algorithm. In this post, a worst-case linear time method is discussed. The idea in this new method is similar to quickSelect(), we get worst-case linear time by selecting a pivot that divides array in a balanced way (there are not very few elements on one side and many on another side). After the array is divided in a balanced way, we apply the same steps as used in quickSelect() to decide whether to go left or right of the pivot.Following is complete algorithm. " }, { "code": null, "e": 26591, "s": 25909, "text": "kthSmallest(arr[0..n-1], k) 1) Divide arr[] into ⌈n/5⌉ groups where size of each group is 5 except possibly the last group which may have less than 5 elements. 2) Sort the above created ⌈n/5⌉ groups and find median of all groups. Create an auxiliary array ‘median[]’ and store medians of all ⌈n/5⌉ groups in this median array.// Recursively call this method to find median of median[0..⌈n/5⌉-1] 3) medOfMed = kthSmallest(median[0..⌈n/5⌉-1], ⌈n/10⌉)4) Partition arr[] around medOfMed and obtain its position. pos = partition(arr, n, medOfMed)5) If pos == k return medOfMed 6) If pos > k return kthSmallest(arr[l..pos-1], k) 7) If pos < k return kthSmallest(arr[pos+1..r], k-pos+l-1)" }, { "code": null, "e": 26871, "s": 26591, "text": "In above algorithm, last 3 steps are same as algorithm in previous post. The first four steps are used to obtain a good point for partitioning the array (to make sure that there are not too many elements either side of pivot).Following is the implementation of above algorithm. " }, { "code": null, "e": 26875, "s": 26871, "text": "C++" }, { "code": null, "e": 26880, "s": 26875, "text": "Java" }, { "code": null, "e": 26888, "s": 26880, "text": "Python3" }, { "code": null, "e": 26891, "s": 26888, "text": "C#" }, { "code": null, "e": 26902, "s": 26891, "text": "Javascript" }, { "code": "// C++ implementation of worst case linear time algorithm// to find k'th smallest element#include<iostream>#include<algorithm>#include<climits> using namespace std; int partition(int arr[], int l, int r, int k); // A simple function to find median of arr[]. This is called// only for an array of size 5 in this program.int findMedian(int arr[], int n){ sort(arr, arr+n); // Sort the array return arr[n/2]; // Return middle element} // Returns k'th smallest element in arr[l..r] in worst case// linear time. ASSUMPTION: ALL ELEMENTS IN ARR[] ARE DISTINCTint kthSmallest(int arr[], int l, int r, int k){ // If k is smaller than number of elements in array if (k > 0 && k <= r - l + 1) { int n = r-l+1; // Number of elements in arr[l..r] // Divide arr[] in groups of size 5, calculate median // of every group and store it in median[] array. 
int i, median[(n+4)/5]; // There will be floor((n+4)/5) groups; for (i=0; i<n/5; i++) median[i] = findMedian(arr+l+i*5, 5); if (i*5 < n) //For last group with less than 5 elements { median[i] = findMedian(arr+l+i*5, n%5); i++; } // Find median of all medians using recursive call. // If median[] has only one element, then no need // of recursive call int medOfMed = (i == 1)? median[i-1]: kthSmallest(median, 0, i-1, i/2); // Partition the array around a random element and // get position of pivot element in sorted array int pos = partition(arr, l, r, medOfMed); // If position is same as k if (pos-l == k-1) return arr[pos]; if (pos-l > k-1) // If position is more, recur for left return kthSmallest(arr, l, pos-1, k); // Else recur for right subarray return kthSmallest(arr, pos+1, r, k-pos+l-1); } // If k is more than number of elements in array return INT_MAX;} void swap(int *a, int *b){ int temp = *a; *a = *b; *b = temp;} // It searches for x in arr[l..r], and partitions the array// around x.int partition(int arr[], int l, int r, int x){ // Search for x in arr[l..r] and move it to end int i; for (i=l; i<r; i++) if (arr[i] == x) break; swap(&arr[i], &arr[r]); // Standard partition algorithm i = l; for (int j = l; j <= r - 1; j++) { if (arr[j] <= x) { swap(&arr[i], &arr[j]); i++; } } swap(&arr[i], &arr[r]); return i;} // Driver program to test above methodsint main(){ int arr[] = {12, 3, 5, 7, 4, 19, 26}; int n = sizeof(arr)/sizeof(arr[0]), k = 3; cout << \"K'th smallest element is \" << kthSmallest(arr, 0, n-1, k); return 0;}", "e": 29675, "s": 26902, "text": null }, { "code": "// Java implementation of worst// case linear time algorithm// to find k'th smallest elementimport java.util.*; class GFG{ // int partition(int arr[], int l, int r, int k); // A simple function to find median of arr[]. This is called// only for an array of size 5 in this program.static int findMedian(int arr[], int i,int n){ Arrays.sort(arr, i, n); return arr[i+(n-i)/2]; // sort the array and return middle element} // Returns k'th smallest element// in arr[l..r] in worst case// linear time. ASSUMPTION: ALL// ELEMENTS IN ARR[] ARE DISTINCTstatic int kthSmallest(int arr[], int l, int r, int k){ // If k is smaller than // number of elements in array if (k > 0 && k <= r - l + 1) { int n = r - l + 1 ; // Number of elements in arr[l..r] // Divide arr[] in groups of size 5, // calculate median of every group // and store it in median[] array. int i; // There will be floor((n+4)/5) groups; int []median = new int[(n + 4) / 5]; for (i = 0; i < n/5; i++) median[i] = findMedian(arr, l+i*5, l+i*5+5); // For last group with less than 5 elements if (i*5 < n) { median[i] = findMedian(arr, l+i*5, l+i*5+n%5); i++; } // Find median of all medians using recursive call. // If median[] has only one element, then no need // of recursive call int medOfMed = (i == 1)? 
median[i - 1]: kthSmallest(median, 0, i - 1, i / 2); // Partition the array around a random element and // get position of pivot element in sorted array int pos = partition(arr, l, r, medOfMed); // If position is same as k if (pos-l == k - 1) return arr[pos]; if (pos-l > k - 1) // If position is more, recur for left return kthSmallest(arr, l, pos - 1, k); // Else recur for right subarray return kthSmallest(arr, pos + 1, r, k - pos + l - 1); } // If k is more than number of elements in array return Integer.MAX_VALUE;} static int[] swap(int []arr, int i, int j){ int temp = arr[i]; arr[i] = arr[j]; arr[j] = temp; return arr;} // It searches for x in arr[l..r], and// partitions the array around x.static int partition(int arr[], int l, int r, int x){ // Search for x in arr[l..r] and move it to end int i; for (i = l; i < r; i++) if (arr[i] == x) break; swap(arr, i, r); // Standard partition algorithm i = l; for (int j = l; j <= r - 1; j++) { if (arr[j] <= x) { swap(arr, i, j); i++; } } swap(arr, i, r); return i;} // Driver codepublic static void main(String[] args){ int arr[] = {12, 3, 5, 7, 4, 19, 26}; int n = arr.length, k = 3; System.out.println(\"K'th smallest element is \" + kthSmallest(arr, 0, n - 1, k));}} // This code has been contributed by 29AjayKumar and updated by ajayhajare", "e": 32720, "s": 29675, "text": null }, { "code": "# Python3 implementation of worst case # linear time algorithm to find# k'th smallest element # Returns k'th smallest element in arr[l..r]# in worst case linear time.# ASSUMPTION: ALL ELEMENTS IN ARR[] ARE DISTINCTdef kthSmallest(arr, l, r, k): # If k is smaller than number of # elements in array if (k > 0 and k <= r - l + 1): # Number of elements in arr[l..r] n = r - l + 1 # Divide arr[] in groups of size 5, # calculate median of every group # and store it in median[] array. median = [] i = 0 while (i < n // 5): median.append(findMedian(arr, l + i * 5, 5)) i += 1 # For last group with less than 5 elements if (i * 5 < n): median.append(findMedian(arr, l + i * 5, n % 5)) i += 1 # Find median of all medians using recursive call. 
# If median[] has only one element, then no need # of recursive call if i == 1: medOfMed = median[i - 1] else: medOfMed = kthSmallest(median, 0, i - 1, i // 2) # Partition the array around a medOfMed # element and get position of pivot # element in sorted array pos = partition(arr, l, r, medOfMed) # If position is same as k if (pos - l == k - 1): return arr[pos] if (pos - l > k - 1): # If position is more, # recur for left subarray return kthSmallest(arr, l, pos - 1, k) # Else recur for right subarray return kthSmallest(arr, pos + 1, r, k - pos + l - 1) # If k is more than the number of # elements in the array return 999999999999 def swap(arr, a, b): temp = arr[a] arr[a] = arr[b] arr[b] = temp # It searches for x in arr[l..r], # and partitions the array around x.def partition(arr, l, r, x): for i in range(l, r): if arr[i] == x: swap(arr, r, i) break x = arr[r] i = l for j in range(l, r): if (arr[j] <= x): swap(arr, i, j) i += 1 swap(arr, i, r) return i # A simple function to find# median of arr[] from index l to l+ndef findMedian(arr, l, n): lis = [] for i in range(l, l + n): lis.append(arr[i]) # Sort the array lis.sort() # Return the middle element return lis[n // 2] # Driver Codeif __name__ == '__main__': arr = [12, 3, 5, 7, 4, 19, 26] n = len(arr) k = 3 print(\"K'th smallest element is\", kthSmallest(arr, 0, n - 1, k)) # This code is contributed by Ashutosh450", "e": 35415, "s": 32720, "text": null }, { "code": "// C# implementation of worst// case linear time algorithm// to find k'th smallest elementusing System; class GFG{ // int partition(int arr[], int l, int r, int k); // A simple function to find median of arr[]. This is called// only for an array of size 5 in this program.static int findMedian(int []arr, int i, int n){ if(i <= n) Array.Sort(arr, i, n); // Sort the array else Array.Sort(arr, n, i); return arr[n/2]; // Return middle element} // Returns k'th smallest element// in arr[l..r] in worst case// linear time. ASSUMPTION: ALL// ELEMENTS IN ARR[] ARE DISTINCTstatic int kthSmallest(int []arr, int l, int r, int k){ // If k is smaller than // number of elements in array if (k > 0 && k <= r - l + 1) { int n = r - l + 1 ; // Number of elements in arr[l..r] // Divide arr[] in groups of size 5, // calculate median of every group // and store it in median[] array. int i; // There will be floor((n+4)/5) groups; int []median = new int[(n + 4) / 5]; for (i = 0; i < n/5; i++) median[i] = findMedian(arr, l + i * 5, 5); // For last group with less than 5 elements if (i*5 < n) { median[i] = findMedian(arr,l + i * 5, n % 5); i++; } // Find median of all medians using recursive call. // If median[] has only one element, then no need // of recursive call int medOfMed = (i == 1)? 
median[i - 1]: kthSmallest(median, 0, i - 1, i / 2); // Partition the array around a random element and // get position of pivot element in sorted array int pos = partition(arr, l, r, medOfMed); // If position is same as k if (pos-l == k - 1) return arr[pos]; if (pos-l > k - 1) // If position is more, recur for left return kthSmallest(arr, l, pos - 1, k); // Else recur for right subarray return kthSmallest(arr, pos + 1, r, k - pos + l - 1); } // If k is more than number of elements in array return int.MaxValue;} static int[] swap(int []arr, int i, int j){ int temp = arr[i]; arr[i] = arr[j]; arr[j] = temp; return arr;} // It searches for x in arr[l..r], and// partitions the array around x.static int partition(int []arr, int l, int r, int x){ // Search for x in arr[l..r] and move it to end int i; for (i = l; i < r; i++) if (arr[i] == x) break; swap(arr, i, r); // Standard partition algorithm i = l; for (int j = l; j <= r - 1; j++) { if (arr[j] <= x) { swap(arr, i, j); i++; } } swap(arr, i, r); return i;} // Driver codepublic static void Main(String[] args){ int []arr = {12, 3, 5, 7, 4, 19, 26}; int n = arr.Length, k = 3; Console.WriteLine(\"K'th smallest element is \" + kthSmallest(arr, 0, n - 1, k));}} // This code contributed by Rajput-Ji", "e": 38452, "s": 35415, "text": null }, { "code": "<script>// Javascript implementation of worst// case linear time algorithm// to find k'th smallest element // int partition(int arr[], int l, int r, int k); // A simple function to find median of arr[]. This is called// only for an array of size 5 in this program.function findMedian(arr, i, n) { if (i <= n) arr.sort((a, b) => a - b); // Sort the array else arr.sort((a, b) => a - b); return arr[Math.floor(n / 2)]; // Return middle element} // Returns k'th smallest element// in arr[l..r] in worst case// linear time. ASSUMPTION: ALL// ELEMENTS IN ARR[] ARE DISTINCTfunction kthSmallest(arr, l, r, k){ // If k is smaller than // number of elements in array if (k > 0 && k <= r - l + 1) { let n = r - l + 1; // Number of elements in arr[l..r] // Divide arr[] in groups of size 5, // calculate median of every group // and store it in median[] array. let i; // There will be floor((n+4)/5) groups; let median = new Array(Math.floor((n + 4) / 5)); for (i = 0; i < n / 5; i++) median[i] = findMedian(arr, l + i * 5, 5); // For last group with less than 5 elements if (i * 5 < n) { median[i] = findMedian(arr, l + i * 5, n % 5); i++; } // Find median of all medians using recursive call. // If median[] has only one element, then no need // of recursive call let medOfMed = (i == 1) ? 
median[i - 1] : kthSmallest(median, 0, i - 1, Math.floor(i / 2)); // Partition the array around a random element and // get position of pivot element in sorted array let pos = partition(arr, l, r, medOfMed); // If position is same as k if (pos - l == k - 1) return arr[pos]; if (pos - l > k - 1) // If position is more, recur for left return kthSmallest(arr, l, pos - 1, k); // Else recur for right subarray return kthSmallest(arr, pos + 1, r, k - pos + l - 1); } // If k is more than number of elements in array return Integer.MAX_VALUE;} function swap(arr, i, j) { let temp = arr[i]; arr[i] = arr[j]; arr[j] = temp; return arr;} // It searches for x in arr[l..r], and// partitions the array around x.function partition(arr, l, r, x) { // Search for x in arr[l..r] and move it to end let i; for (i = l; i < r; i++) if (arr[i] == x) break; swap(arr, i, r); // Standard partition algorithm i = l; for (let j = l; j <= r - 1; j++) { if (arr[j] <= x) { swap(arr, i, j); i++; } } swap(arr, i, r); return i;} // Driver code let arr = [12, 3, 5, 7, 4, 19, 26];let n = arr.length, k = 3;document.write(\"K'th smallest element is \" + kthSmallest(arr, 0, n - 1, k)); // This code has been contributed by Saurabh Jaiswal</script>", "e": 41340, "s": 38452, "text": null }, { "code": null, "e": 41350, "s": 41340, "text": "Output: " }, { "code": null, "e": 41377, "s": 41350, "text": "K'th smallest element is 5" }, { "code": null, "e": 42867, "s": 41377, "text": "Time Complexity: The worst case time complexity of the above algorithm is O(n). Let us analyze all steps. The steps 1) and 2) take O(n) time as finding median of an array of size 5 takes O(1) time and there are n/5 arrays of size 5. The step 3) takes T(n/5) time. The step 4 is standard partition and takes O(n) time. The interesting steps are 6) and 7). At most, one of them is executed. These are recursive steps. What is the worst case size of these recursive calls. The answer is maximum number of elements greater than medOfMed (obtained in step 3) or maximum number of elements smaller than medOfMed. How many elements are greater than medOfMed and how many are smaller? At least half of the medians found in step 2 are greater than or equal to medOfMed. Thus, at least half of the n/5 groups contribute 3 elements that are greater than medOfMed, except for the one group that has fewer than 5 elements. Therefore, the number of elements greater than medOfMed is at least. Similarly, the number of elements that are less than medOfMed is at least 3n/10 – 6. In the worst case, the function recurs for at most n – (3n/10 – 6) which is 7n/10 + 6 elements.Note that 7n/10 + 6 20 20 and that any input of 80 or fewer elements requires O(1) time. We can therefore obtain the recurrence We show that the running time is linear by substitution. Assume that T(n) cn for some constant c and all n > 80. Substituting this inductive hypothesis into the right-hand side of the recurrence yields " }, { "code": null, "e": 42979, "s": 42867, "text": "T(n) <= cn/5 + c(7n/10 + 6) + O(n)\n <= cn/5 + c + 7cn/10 + 6c + O(n)\n <= 9cn/10 + 7c + O(n)\n <= cn, " }, { "code": null, "e": 43846, "s": 42979, "text": "since we can pick c large enough so that c(n/10 – 7) is larger than the function described by the O(n) term for all n > 80. The worst-case running time of is therefore linear (Source: http://staff.ustc.edu.cn/~csli/graduate/algorithms/book6/chap10.htm ).Note that the above algorithm is linear in worst case, but the constants are very high for this algorithm. 
Therefore, this algorithm doesn’t work well in practical situations, randomized quickSelect works much better and preferred.Sources: MIT Video Lecture on Order Statistics, Median Introduction to Algorithms by Clifford Stein, Thomas H. Cormen, Charles E. Leiserson, Ronald L. http://staff.ustc.edu.cn/~csli/graduate/algorithms/book6/chap10.htmThis article is contributed by Shivam. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above " }, { "code": null, "e": 43855, "s": 43846, "text": "falcon95" }, { "code": null, "e": 43867, "s": 43855, "text": "29AjayKumar" }, { "code": null, "e": 43877, "s": 43867, "text": "Rajput-Ji" }, { "code": null, "e": 43889, "s": 43877, "text": "ashutosh450" }, { "code": null, "e": 43897, "s": 43889, "text": "gfgking" }, { "code": null, "e": 43908, "s": 43897, "text": "ajayhajare" }, { "code": null, "e": 43913, "s": 43908, "text": "ABCO" }, { "code": null, "e": 43919, "s": 43913, "text": "Cisco" }, { "code": null, "e": 43929, "s": 43919, "text": "Microsoft" }, { "code": null, "e": 43946, "s": 43929, "text": "Order-Statistics" }, { "code": null, "e": 43953, "s": 43946, "text": "VMWare" }, { "code": null, "e": 43960, "s": 43953, "text": "Arrays" }, { "code": null, "e": 43970, "s": 43960, "text": "Searching" }, { "code": null, "e": 43977, "s": 43970, "text": "VMWare" }, { "code": null, "e": 43987, "s": 43977, "text": "Microsoft" }, { "code": null, "e": 43992, "s": 43987, "text": "ABCO" }, { "code": null, "e": 43998, "s": 43992, "text": "Cisco" }, { "code": null, "e": 44005, "s": 43998, "text": "Arrays" }, { "code": null, "e": 44015, "s": 44005, "text": "Searching" }, { "code": null, "e": 44113, "s": 44015, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 44145, "s": 44113, "text": "Multidimensional Arrays in Java" }, { "code": null, "e": 44168, "s": 44145, "text": "Introduction to Arrays" }, { "code": null, "e": 44213, "s": 44168, "text": "Python | Using 2D arrays/lists the right way" }, { "code": null, "e": 44234, "s": 44213, "text": "Linked List vs Array" }, { "code": null, "e": 44288, "s": 44234, "text": "Queue | Set 1 (Introduction and Array Implementation)" }, { "code": null, "e": 44302, "s": 44288, "text": "Binary Search" }, { "code": null, "e": 44326, "s": 44302, "text": "Find the Missing Number" }, { "code": null, "e": 44370, "s": 44326, "text": "Program to find largest element in an array" }, { "code": null, "e": 44418, "s": 44370, "text": "Search an element in a sorted and rotated array" } ]
How to pass pointers as parameters to methods in C#?
To pass pointers as parameters to methods, follow the steps below −
First, create a swap function with the unsafe modifier.
public unsafe void swap(int* p, int *q) {
   int temp = *p;
   *p = *q;
   *q = temp;
}
Now, inside the static Main method, assign values to the first and second variables and set pointers to both of them.
Display the values of the variables and then call the swap() method shown above. The method swaps the values and displays the result −
public unsafe static void Main() {
   Program p = new Program();
   int var1 = 10;
   int var2 = 20;
   int* x = &var1;
   int* y = &var2;

   Console.WriteLine("Before Swap: var1:{0}, var2: {1}", var1, var2);
   p.swap(x, y);

   Console.WriteLine("After Swap: var1:{0}, var2: {1}", var1, var2);
   Console.ReadKey();
}
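For reference, the two snippets above can be combined into one compilable program. The version below is only a minimal sketch (it reuses the Program class name from the snippet and omits Console.ReadKey()); it must be built with unsafe code enabled, for example with the /unsafe compiler option or the AllowUnsafeBlocks project property.
using System;

public class Program {
   // Swaps the values that the two pointers point to
   public unsafe void swap(int* p, int* q) {
      int temp = *p;
      *p = *q;
      *q = temp;
   }

   public unsafe static void Main() {
      Program p = new Program();
      int var1 = 10;
      int var2 = 20;

      // Take the addresses of the two local variables
      int* x = &var1;
      int* y = &var2;

      Console.WriteLine("Before Swap: var1:{0}, var2: {1}", var1, var2);
      p.swap(x, y);
      Console.WriteLine("After Swap: var1:{0}, var2: {1}", var1, var2);
   }
}
This would produce the following output −
Before Swap: var1:10, var2: 20
After Swap: var1:20, var2: 10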
[ { "code": null, "e": 1129, "s": 1062, "text": "To pass pointers as parameters to methods, refer the below steps −" }, { "code": null, "e": 1182, "s": 1129, "text": "Firstly, crate a function swap with unsafe modifier." }, { "code": null, "e": 1270, "s": 1182, "text": "public unsafe void swap(int* p, int *q) {\n int temp = *p;\n *p = *q;\n *q = temp;\n}" }, { "code": null, "e": 1378, "s": 1270, "text": "Now under static void main, add the value for the first and second variable, set pointers for both of them." }, { "code": null, "e": 1513, "s": 1378, "text": "Display the values of the variables and then call the swap() method shown above. The method swaps the values and displays the result −" }, { "code": null, "e": 1834, "s": 1513, "text": "public unsafe static void Main() {\n Program p = new Program();\n int var1 = 10;\n int var2 = 20;\n int* x = &var1;\n int* y = &var2;\n\n Console.WriteLine(\"Before Swap: var1:{0}, var2: {1}\", var1, var2);\n p.swap(x, y);\n\n Console.WriteLine(\"After Swap: var1:{0}, var2: {1}\", var1, var2);\n Console.ReadKey();\n}" } ]
Terraform + SageMaker Part 1: Terraform Initialization | by David Hundley | Towards Data Science
Hello there, folks! Today, we’re starting a new series on using Terraform to create resources on AWS SageMaker. I expect you’re likely familiar with what SageMaker is, even if only on a very general level. In a nutshell, SageMaker is AWS’s machine learning service. AWS has poured a lot of energy into this particular service over the years, obviously in response to the explosive growth that the data science / machine learning practice has become across all industries. It’s sort of difficult to keep up with the growth! Even I will readily admit I’m not that familiar with some of the newer stuff like SageMaker Studio, but I’m excited to learn more about it alongside you all in this new series of posts. Even if you have a general understanding of what SageMaker is, you might not be aware of what Terraform is. Truth be told, I didn’t know about Terraform myself until the company I work for adopted it. Terraform is an “infrastructure as code” product created by HashiCorp to deploy infrastructure resources onto various platforms. It is not an AWS product in and of itself, but it is designed to interact with AWS very well. Naturally, I’d expect you to ask yourself the question, “Why Terraform?” We’ll cover that in next section, and in the following sections, we’ll start delving into creating your first resource on AWS using Terraform. Transparently, we won’t be touching SageMaker at all in this initial post, but we’re laying the foundation for what will come in subsequent posts. Trust me, you won’t want to miss this post! With that, let’s jump into why we’re going to use Terraform. When you interact with the AWS console to do things like create an S3 bucket, provision an EC2 instance, or pretty much everything else, you’re basically issuing software commands to create a resource. The AWS console is great for guiding you through understanding how to properly provision a given resource. But the console isn’t great for creating or destroying resources at mass in one fell swoop. Because the AWS console is basically executing software commands to create these infrastructure resources, we can create software scripts that will automate the creation of these resources just like we would automate anything else we can with software. So if you’re a learner like me, you might not want to keep your AWS resources running when you’re not using them. Terraform can help you quickly set up or blow away your resources in a matter of seconds if set up correctly. Think about how long it takes just to set up one S3 bucket in the console interface. With Terraform, I can create a whole arsenal of resources in a faster amount of time than a person can create a single S3 bucket in the console. Nice! If you’ve taken any of the AWS certifications, you might recall that AWS has its own service called Cloud Formation that allows you to do exactly this. You might reasonably ask then why we’d want to use Terraform over Cloud Formation. Don’t get me wrong, Cloud Formation is a great service, but Terraform offers many other benefits over Cloud Formation. A few include the following: Platform agnostic: Even though we will only be working out of AWS in this series, Terraform can also provision resources on other platforms including Google Cloud Platform, Microsoft Azure, and more. You won’t necessarily be able to use your AWS Terraform scripts on those other platforms, but because Terraform is its own scripting language, that knowledge carries over very well to the other platforms. 
State management: Terraform offers some pretty robust options for maintaining the state of your resources. We’ll talk about this more in an upcoming section, but if you were to use one of HashiCorp’s more premium offerings like Terraform Cloud or Terraform Enterprise, you can really do some very advanced stuff with this state management. Speaking of advanced stuff... Sentinel policies: This is one of these advanced topics and premium features we won’t be covering in this series, but sentinel policies basically enforce certain rules that all Terraform provisioners must abide by. Imagine that you work for a large corporation that requires a specific set of tags for every resource deployed. Creating a sentinel policy can help facilitate that in a very streamlined way! There are too many benefits to list all here. Though those premium features mentioned above aren’t free, we can use the standard Terraform command line actions at no cost. Granted, the resources you provision onto AWS may not be free, but there is no cost associated with issuing the Terraform commands we’ll be learning about in this series. Before we get into initializing Terraform, let’s briefly talk about state management. When we create resources on our platform of choice — AWS in our case — Terraform manages what it has provisioned in the form of a Terraform state file. The Terraform state file is generally a file that ends in the .tfstate suffix and maintains the information of everything that you’ve provisioned to date with your Terraform scripts. If you make an adjustment to any of those scripts, the Terraform CLI commands will analyze the tfstate file to see what action it needs to take. For example, if the resource is not present in the tfstate file, then Terraform will create that new resource for you. If the resource is present but has had some slight configuration changes, Terraform will update the resource appropriately as well as update the tfstate file with that new configuration information. As you might be able to guess, there are some “catches” with this tfstate file. First, if you are collaborating on a team that is working together to provision various resources using the same tfstate file, you run the risk of basically stepping on each others toes and corrupting your tfstate file. This is a very important consideration, but since this series is going to focus more on single user usage, I’m not going to cover that aspect of state management in this series. This is still an extremely important topic to consider when working amongst multiple people, so I’d advise you to check out Terraform’s website to learn more about state locking to avoid corruption. The more important “catch” that we will be covering in this post has to do with the nature of the tfstate file itself. As you might be able to guess, the tfstate file may contain very sensitive information as it basically tells the world what you have provisioned out to whatever platform. It may even contain sensitive metadata you would not want exposed to the public. Fortunately, Terraform offers many solutions to keep your tfstate file safe. If there’s one takeaway I hope you get from this paragraph, it’s this: DO NOT post your tfstate file to any public place, especially GitHub. Bad actors are out there and WILL exploit this if they can. But don’t worry! One of the actions we’ll be taking in this post is setting up an S3 bucket to house your tfstate file and be able to remotely interact with it from your local machine. 
Alrighty, now we’re ready to jump into the meat of the post! Let’s start by setting up what we need in order to be able to use Terraform properly. In order to use Terraform, we will need to have the following things: Installing the Terraform CLI Setting up credentials on your local machine to interact with AWS Setting up the S3 bucket for state management Let’s rapid fire how to do each of these things in the subsections below. This first point is pretty simple and straightforward. There are lots of ways to install the Terraform CLI, and I’m going to point you to Terraform’s official documentation for that. If you’ve been following along with my other recent posts, I’m actually doing this whole post on an iPad with a Raspberry Pi “computational accessory.” Not only am I writing this post on my iPad, but I used the Textastic app to write the Terraform scripts and then transferred them to my Raspberry Pi to actually execute the Terraform commands. It’s so cool! Because we will be interacting with our tfstate file located in our remote S3 bucket, you will need to have your local credentials set up properly to interact with your AWS account. If you’ve set up the AWS CLI before, chances are that you’re already good to go. If you haven’t done that before, here’s the official AWS documentation on how to set up those credentials. As with the tfstate file, please be mindful NOT to share these credentials out in public spaces. Again, bad actors WILL exploit them, and they WILL run up your AWS bill. Please treat these credentials as you would your Social Security number. As for setting up the S3 bucket for the state management, there’s really nothing special we have to do here. Because we’re using your AWS credentials to interact with AWS, you do NOT have to make this a public bucket. In fact, I would highly advise against making the bucket public. (You’re also welcome to use a pre-existing bucket.) Using the AWS console to create a bucket is very straightforward, but here’s the documentation on how to do that just in case. Once your S3 bucket is created, we’re ready to move on. You’ll see how Terraform will make use of this bucket in a forthcoming section. Okay, hopefully that pre-work process was pretty painless! We’re now ready to start making use of Terraform to create our first resource. To keep things super simple, the only thing we’ll be creating is a super rudimentary IAM role that only has access to list out S3 buckets. The reason I’m selecting to teach this is because a) it is very simple and b) IAM roles are free on AWS. So if you forget to delete the role, no big deal. (Side note: I PROMISE what we’re doing in this post will NOT expose you to any vulnerability. This isn’t some secret “Trojan horse” thing that would open your AWS account to hackers. If done properly, this IAM role we are creating will basically be able to do nothing. I’ll also teach you how to delete resources with Terraform so that you can guarantee nothing is left lingering in your account that you wouldn’t want there.) If you would like to see the precise Terraform files I used in this post, check out my GitHub repository here. Even though this does contain the plain text name of the S3 bucket I’m using, you won’t be able to do anything with it because you don’t have my AWS credentials. If you do choose to clone my repo, you will need to update the Terraform backend piece to match your own respective bucket. 
We’re not going to delve too much in this post on how Terraform files are structured, but we will cover a few basics here. First, Terraform looks to provision resources within files that end with the .tf suffix. You can technically jam everything you want into a single .tf file. If you’re familiar with Kubernetes YAMLs, the concept is the same here. Even though you technically can jam everything in a single .tf file, it always makes sense to logically separate them out into separate files that make sense as you see fit. (I personally like to bundle individual AWS resources into its own file along with the directly associated IAM policies / roles.) When initializing Terraform for usage on a given platform, Terraform will always look for a provider block. In this case, we’re using AWS and will be deploying resources into the regionus-east-1. (You can change the region to whatever you’d like, that does not matter.) The other thing it will look for is to see if you would like to make use of a remote backend for state management. Given that we’ve already set up our AWS credentials and bucket, the actual Terraform file to get the ball rolling is very simple. Here’s what my script looks like: terraform { backend "s3" { bucket = "dkhundley-terraform-test" key = "terraform-sagemaker-tutorial.tfstate" region = "us-east-1" }}provider "aws" { region = "us-east-1"} What this script is basically saying is the following: I want to deploy Terraform resources to AWS (region US East 1), and I want to use a remote tfstate file within this given bucket with this key. That key there can be whatever you want to call it. So essentially if you want to emulate what I did, the only thing you should really have to change is the S3 bucket name. Okay, that’s all we need to do for backend set up! Now, let’s quickly show the script of that IAM role we’re going to be creating with Terraform: ## DATA BLOCKS## ----------------------------------------------------------------# Creating the data that will be used by the IAM policydata "aws_iam_policy_document" "s3_ls_policy_data" { statement { actions = [ "s3:ListAllMyBuckets" ]resources = [ "arn:aws:s3:::*" ] }}# Creating the assume role policy datadata "aws_iam_policy_document" "s3_ls_assume_role_data" { statement { actions = ["sts:AssumeRole"]principals { type = "Service" identifiers = ["s3.amazonaws.com"] } }}## RESOURCE BLOCKS## ----------------------------------------------------------------# Creating the IAM policy using the data block from aboveresource "aws_iam_role_policy" "s3_ls_policy" { name = "s3_ls_iam_policy" policy = data.aws_iam_policy_document.s3_ls_policy_data.json role = aws_iam_role.s3_ls_role.id}# Creating the IAM role associated to the resources aboveresource "aws_iam_role" "s3_ls_role" { name = "s3_ls_iam_role" description = "This role allows for all S3 buckets to be listed." assume_role_policy = data.aws_iam_policy_document.s3_ls_assume_role_data.json} Again, we’re not going to delve super deeply into what all this syntax means. At a high level, I want to call out a few things: Resource blocks: These are the “rubber meets the road” blocks of code that actually provision a desired resource. The string in the first set of quotes defines the resource type as defined by Terraform, so you must use their precise syntax for that. The string in the second set of quotes is the arbitrary name you want to give the resource. 
Because Terraform allows you to reference that resource in other bits of your Terraform code, it’s always a good idea to give your resource a name you can remember. Data blocks: These are generally used to provide additional configuration for a resource you plan to provision. In this case, the data blocks provide the configuration for how the IAM policy should be set up. Comments: Just like with Python, beginning a piece of code with the # symbol will allow you to annotate your scripts with appropriate comments. You can see that I have used them above to descriptive note what each block of code is doing. Terraform provides an extremely robust library of documentation on how to provision pretty much any resource on AWS. (Because AWS is constantly rolling out new stuff, Terraform might not be able to support a new AWS service on day 1, but they are very good about updating their stuff in new Terraform versions.) To see an example of this, here is the documentation I referenced to create the IAM role itself. If you’re structuring your code the same way I am, you should now have two Terraform files: backend.tf iam.tf We’ll get into reusable Terraform variables and more in later posts. For now, this simple set up will work just fine to get us started. Let’s now actually see how the Terraform CLI works to provision these resources. Now it’s time to get to the fun part! Fun because we’ll get to see the fruits of labor come to life very quickly in this last section. To quickly summarize what we’ll be doing in these subsections, let’s rattle off those topics in these bullets below: Performing a terraform init Validating and formatting your tf files Creating your resources on AWS with Terraform Destroying your resources on AWS with Terraform As mentioned above, Terraform will first look for the provider and backend to know how it should be properly interacting with your AWS account. In order to get that “communication” started, you’ll first need to perform the terraform init command. It will also download the proper plugins required to deploy the resources out to AWS. If the command has run successfully, this is what you should see: That’s it for initialization! Let’s move on to validating and formatting your files. These steps are actually not required, but they help to make your life easier when trying to read / debug your code. By running the terraform validate command, Terraform will make sure that everything is squared away with your Terraform files without actually provisioning anything. If you’ve messed something up, Terraform will tell you what’s wrong. Just so you can see what this looks like, I’m going to comment out a required piece in my Terraform. Let’s see what happens when I run my terraform validate command... As you can see, the first time I run the validate command outputs very plainly spelled out where I messed up and how to fix it. The second time I run the terraform validate command after I’ve fixed things, Terraform gives me the greenlight that my stuff looks good! The latter command isn’t at all required, but it’s still nice. By running the terraform fmt command, Terraform will look at your tf files and do what it can to make everything all nice and pretty. This includes lining up all your resource parameters to have the = signs all line up. Again, not required at all, but I personally find it helpful. With your Terraform files validated, we’re finally ready to use them to provision our resources on AWS! 
Now if you want to get an understanding of how your resources will change prior to formally creating them, it’s a good idea to run the terraform plan command. This command will plainly spell out how things will change. Here’s an example of what this means with our little IAM role creation here: If everything looks good to you, you’re clear to create these resources by running the terraform apply command. By running this command without any flags, Terraform will show you a similar output as terraform plan, except it will also prompt you to confirm the changes by entering yes. Upon entering yes, Terraform will appropriately create the resources and note successful creation appropriately. And there you go! Your first resources created on AWS with Terraform! If you go into the IAM console, you should now be able to see this IAM role we created from within there. Very cool!!! Of course, I can understand if you would not want to keep this resource around, so let’s finish off this post with destroying what we just created. As you might be able to guess, destroying resources with Terraform is just as easy as creating them. Instead of running terraform apply, we instead run terraform destroy. Again, you’ll be prompted by Terraform to confirm the destruction of these resources by entering yes, and then Terraform will go ahead and destroy the IAM role we just created. Poof! Those resources are now gone. And that wraps up this post for today! I hope you’re not too disappointed we didn’t actually get to anything SageMaker-related today, but I promise you this foundational post is going to serve us very well as we continue along in subsequent posts. Thanks for sticking through this long post! See you in the next one. 😃
[ { "code": null, "e": 880, "s": 171, "text": "Hello there, folks! Today, we’re starting a new series on using Terraform to create resources on AWS SageMaker. I expect you’re likely familiar with what SageMaker is, even if only on a very general level. In a nutshell, SageMaker is AWS’s machine learning service. AWS has poured a lot of energy into this particular service over the years, obviously in response to the explosive growth that the data science / machine learning practice has become across all industries. It’s sort of difficult to keep up with the growth! Even I will readily admit I’m not that familiar with some of the newer stuff like SageMaker Studio, but I’m excited to learn more about it alongside you all in this new series of posts." }, { "code": null, "e": 1304, "s": 880, "text": "Even if you have a general understanding of what SageMaker is, you might not be aware of what Terraform is. Truth be told, I didn’t know about Terraform myself until the company I work for adopted it. Terraform is an “infrastructure as code” product created by HashiCorp to deploy infrastructure resources onto various platforms. It is not an AWS product in and of itself, but it is designed to interact with AWS very well." }, { "code": null, "e": 1711, "s": 1304, "text": "Naturally, I’d expect you to ask yourself the question, “Why Terraform?” We’ll cover that in next section, and in the following sections, we’ll start delving into creating your first resource on AWS using Terraform. Transparently, we won’t be touching SageMaker at all in this initial post, but we’re laying the foundation for what will come in subsequent posts. Trust me, you won’t want to miss this post!" }, { "code": null, "e": 1772, "s": 1711, "text": "With that, let’s jump into why we’re going to use Terraform." }, { "code": null, "e": 2426, "s": 1772, "text": "When you interact with the AWS console to do things like create an S3 bucket, provision an EC2 instance, or pretty much everything else, you’re basically issuing software commands to create a resource. The AWS console is great for guiding you through understanding how to properly provision a given resource. But the console isn’t great for creating or destroying resources at mass in one fell swoop. Because the AWS console is basically executing software commands to create these infrastructure resources, we can create software scripts that will automate the creation of these resources just like we would automate anything else we can with software." }, { "code": null, "e": 2886, "s": 2426, "text": "So if you’re a learner like me, you might not want to keep your AWS resources running when you’re not using them. Terraform can help you quickly set up or blow away your resources in a matter of seconds if set up correctly. Think about how long it takes just to set up one S3 bucket in the console interface. With Terraform, I can create a whole arsenal of resources in a faster amount of time than a person can create a single S3 bucket in the console. Nice!" }, { "code": null, "e": 3269, "s": 2886, "text": "If you’ve taken any of the AWS certifications, you might recall that AWS has its own service called Cloud Formation that allows you to do exactly this. You might reasonably ask then why we’d want to use Terraform over Cloud Formation. Don’t get me wrong, Cloud Formation is a great service, but Terraform offers many other benefits over Cloud Formation. 
A few include the following:" }, { "code": null, "e": 3674, "s": 3269, "text": "Platform agnostic: Even though we will only be working out of AWS in this series, Terraform can also provision resources on other platforms including Google Cloud Platform, Microsoft Azure, and more. You won’t necessarily be able to use your AWS Terraform scripts on those other platforms, but because Terraform is its own scripting language, that knowledge carries over very well to the other platforms." }, { "code": null, "e": 4044, "s": 3674, "text": "State management: Terraform offers some pretty robust options for maintaining the state of your resources. We’ll talk about this more in an upcoming section, but if you were to use one of HashiCorp’s more premium offerings like Terraform Cloud or Terraform Enterprise, you can really do some very advanced stuff with this state management. Speaking of advanced stuff..." }, { "code": null, "e": 4450, "s": 4044, "text": "Sentinel policies: This is one of these advanced topics and premium features we won’t be covering in this series, but sentinel policies basically enforce certain rules that all Terraform provisioners must abide by. Imagine that you work for a large corporation that requires a specific set of tags for every resource deployed. Creating a sentinel policy can help facilitate that in a very streamlined way!" }, { "code": null, "e": 4793, "s": 4450, "text": "There are too many benefits to list all here. Though those premium features mentioned above aren’t free, we can use the standard Terraform command line actions at no cost. Granted, the resources you provision onto AWS may not be free, but there is no cost associated with issuing the Terraform commands we’ll be learning about in this series." }, { "code": null, "e": 4879, "s": 4793, "text": "Before we get into initializing Terraform, let’s briefly talk about state management." }, { "code": null, "e": 5677, "s": 4879, "text": "When we create resources on our platform of choice — AWS in our case — Terraform manages what it has provisioned in the form of a Terraform state file. The Terraform state file is generally a file that ends in the .tfstate suffix and maintains the information of everything that you’ve provisioned to date with your Terraform scripts. If you make an adjustment to any of those scripts, the Terraform CLI commands will analyze the tfstate file to see what action it needs to take. For example, if the resource is not present in the tfstate file, then Terraform will create that new resource for you. If the resource is present but has had some slight configuration changes, Terraform will update the resource appropriately as well as update the tfstate file with that new configuration information." }, { "code": null, "e": 6354, "s": 5677, "text": "As you might be able to guess, there are some “catches” with this tfstate file. First, if you are collaborating on a team that is working together to provision various resources using the same tfstate file, you run the risk of basically stepping on each others toes and corrupting your tfstate file. This is a very important consideration, but since this series is going to focus more on single user usage, I’m not going to cover that aspect of state management in this series. This is still an extremely important topic to consider when working amongst multiple people, so I’d advise you to check out Terraform’s website to learn more about state locking to avoid corruption." 
}, { "code": null, "e": 7188, "s": 6354, "text": "The more important “catch” that we will be covering in this post has to do with the nature of the tfstate file itself. As you might be able to guess, the tfstate file may contain very sensitive information as it basically tells the world what you have provisioned out to whatever platform. It may even contain sensitive metadata you would not want exposed to the public. Fortunately, Terraform offers many solutions to keep your tfstate file safe. If there’s one takeaway I hope you get from this paragraph, it’s this: DO NOT post your tfstate file to any public place, especially GitHub. Bad actors are out there and WILL exploit this if they can. But don’t worry! One of the actions we’ll be taking in this post is setting up an S3 bucket to house your tfstate file and be able to remotely interact with it from your local machine." }, { "code": null, "e": 7335, "s": 7188, "text": "Alrighty, now we’re ready to jump into the meat of the post! Let’s start by setting up what we need in order to be able to use Terraform properly." }, { "code": null, "e": 7405, "s": 7335, "text": "In order to use Terraform, we will need to have the following things:" }, { "code": null, "e": 7434, "s": 7405, "text": "Installing the Terraform CLI" }, { "code": null, "e": 7500, "s": 7434, "text": "Setting up credentials on your local machine to interact with AWS" }, { "code": null, "e": 7546, "s": 7500, "text": "Setting up the S3 bucket for state management" }, { "code": null, "e": 7620, "s": 7546, "text": "Let’s rapid fire how to do each of these things in the subsections below." }, { "code": null, "e": 8162, "s": 7620, "text": "This first point is pretty simple and straightforward. There are lots of ways to install the Terraform CLI, and I’m going to point you to Terraform’s official documentation for that. If you’ve been following along with my other recent posts, I’m actually doing this whole post on an iPad with a Raspberry Pi “computational accessory.” Not only am I writing this post on my iPad, but I used the Textastic app to write the Terraform scripts and then transferred them to my Raspberry Pi to actually execute the Terraform commands. It’s so cool!" }, { "code": null, "e": 8775, "s": 8162, "text": "Because we will be interacting with our tfstate file located in our remote S3 bucket, you will need to have your local credentials set up properly to interact with your AWS account. If you’ve set up the AWS CLI before, chances are that you’re already good to go. If you haven’t done that before, here’s the official AWS documentation on how to set up those credentials. As with the tfstate file, please be mindful NOT to share these credentials out in public spaces. Again, bad actors WILL exploit them, and they WILL run up your AWS bill. Please treat these credentials as you would your Social Security number." }, { "code": null, "e": 9373, "s": 8775, "text": "As for setting up the S3 bucket for the state management, there’s really nothing special we have to do here. Because we’re using your AWS credentials to interact with AWS, you do NOT have to make this a public bucket. In fact, I would highly advise against making the bucket public. (You’re also welcome to use a pre-existing bucket.) Using the AWS console to create a bucket is very straightforward, but here’s the documentation on how to do that just in case. Once your S3 bucket is created, we’re ready to move on. You’ll see how Terraform will make use of this bucket in a forthcoming section." 
}, { "code": null, "e": 9805, "s": 9373, "text": "Okay, hopefully that pre-work process was pretty painless! We’re now ready to start making use of Terraform to create our first resource. To keep things super simple, the only thing we’ll be creating is a super rudimentary IAM role that only has access to list out S3 buckets. The reason I’m selecting to teach this is because a) it is very simple and b) IAM roles are free on AWS. So if you forget to delete the role, no big deal." }, { "code": null, "e": 10232, "s": 9805, "text": "(Side note: I PROMISE what we’re doing in this post will NOT expose you to any vulnerability. This isn’t some secret “Trojan horse” thing that would open your AWS account to hackers. If done properly, this IAM role we are creating will basically be able to do nothing. I’ll also teach you how to delete resources with Terraform so that you can guarantee nothing is left lingering in your account that you wouldn’t want there.)" }, { "code": null, "e": 10629, "s": 10232, "text": "If you would like to see the precise Terraform files I used in this post, check out my GitHub repository here. Even though this does contain the plain text name of the S3 bucket I’m using, you won’t be able to do anything with it because you don’t have my AWS credentials. If you do choose to clone my repo, you will need to update the Terraform backend piece to match your own respective bucket." }, { "code": null, "e": 11285, "s": 10629, "text": "We’re not going to delve too much in this post on how Terraform files are structured, but we will cover a few basics here. First, Terraform looks to provision resources within files that end with the .tf suffix. You can technically jam everything you want into a single .tf file. If you’re familiar with Kubernetes YAMLs, the concept is the same here. Even though you technically can jam everything in a single .tf file, it always makes sense to logically separate them out into separate files that make sense as you see fit. (I personally like to bundle individual AWS resources into its own file along with the directly associated IAM policies / roles.)" }, { "code": null, "e": 11834, "s": 11285, "text": "When initializing Terraform for usage on a given platform, Terraform will always look for a provider block. In this case, we’re using AWS and will be deploying resources into the regionus-east-1. (You can change the region to whatever you’d like, that does not matter.) The other thing it will look for is to see if you would like to make use of a remote backend for state management. Given that we’ve already set up our AWS credentials and bucket, the actual Terraform file to get the ball rolling is very simple. Here’s what my script looks like:" }, { "code": null, "e": 12019, "s": 11834, "text": "terraform { backend \"s3\" { bucket = \"dkhundley-terraform-test\" key = \"terraform-sagemaker-tutorial.tfstate\" region = \"us-east-1\" }}provider \"aws\" { region = \"us-east-1\"}" }, { "code": null, "e": 12391, "s": 12019, "text": "What this script is basically saying is the following: I want to deploy Terraform resources to AWS (region US East 1), and I want to use a remote tfstate file within this given bucket with this key. That key there can be whatever you want to call it. So essentially if you want to emulate what I did, the only thing you should really have to change is the S3 bucket name." }, { "code": null, "e": 12537, "s": 12391, "text": "Okay, that’s all we need to do for backend set up! 
Now, let’s quickly show the script of that IAM role we’re going to be creating with Terraform:" }, { "code": null, "e": 13666, "s": 12537, "text": "## DATA BLOCKS## ----------------------------------------------------------------# Creating the data that will be used by the IAM policydata \"aws_iam_policy_document\" \"s3_ls_policy_data\" { statement { actions = [ \"s3:ListAllMyBuckets\" ]resources = [ \"arn:aws:s3:::*\" ] }}# Creating the assume role policy datadata \"aws_iam_policy_document\" \"s3_ls_assume_role_data\" { statement { actions = [\"sts:AssumeRole\"]principals { type = \"Service\" identifiers = [\"s3.amazonaws.com\"] } }}## RESOURCE BLOCKS## ----------------------------------------------------------------# Creating the IAM policy using the data block from aboveresource \"aws_iam_role_policy\" \"s3_ls_policy\" { name = \"s3_ls_iam_policy\" policy = data.aws_iam_policy_document.s3_ls_policy_data.json role = aws_iam_role.s3_ls_role.id}# Creating the IAM role associated to the resources aboveresource \"aws_iam_role\" \"s3_ls_role\" { name = \"s3_ls_iam_role\" description = \"This role allows for all S3 buckets to be listed.\" assume_role_policy = data.aws_iam_policy_document.s3_ls_assume_role_data.json}" }, { "code": null, "e": 13794, "s": 13666, "text": "Again, we’re not going to delve super deeply into what all this syntax means. At a high level, I want to call out a few things:" }, { "code": null, "e": 14301, "s": 13794, "text": "Resource blocks: These are the “rubber meets the road” blocks of code that actually provision a desired resource. The string in the first set of quotes defines the resource type as defined by Terraform, so you must use their precise syntax for that. The string in the second set of quotes is the arbitrary name you want to give the resource. Because Terraform allows you to reference that resource in other bits of your Terraform code, it’s always a good idea to give your resource a name you can remember." }, { "code": null, "e": 14510, "s": 14301, "text": "Data blocks: These are generally used to provide additional configuration for a resource you plan to provision. In this case, the data blocks provide the configuration for how the IAM policy should be set up." }, { "code": null, "e": 14748, "s": 14510, "text": "Comments: Just like with Python, beginning a piece of code with the # symbol will allow you to annotate your scripts with appropriate comments. You can see that I have used them above to descriptive note what each block of code is doing." }, { "code": null, "e": 15157, "s": 14748, "text": "Terraform provides an extremely robust library of documentation on how to provision pretty much any resource on AWS. (Because AWS is constantly rolling out new stuff, Terraform might not be able to support a new AWS service on day 1, but they are very good about updating their stuff in new Terraform versions.) To see an example of this, here is the documentation I referenced to create the IAM role itself." }, { "code": null, "e": 15249, "s": 15157, "text": "If you’re structuring your code the same way I am, you should now have two Terraform files:" }, { "code": null, "e": 15260, "s": 15249, "text": "backend.tf" }, { "code": null, "e": 15267, "s": 15260, "text": "iam.tf" }, { "code": null, "e": 15484, "s": 15267, "text": "We’ll get into reusable Terraform variables and more in later posts. For now, this simple set up will work just fine to get us started. Let’s now actually see how the Terraform CLI works to provision these resources." 
}, { "code": null, "e": 15736, "s": 15484, "text": "Now it’s time to get to the fun part! Fun because we’ll get to see the fruits of labor come to life very quickly in this last section. To quickly summarize what we’ll be doing in these subsections, let’s rattle off those topics in these bullets below:" }, { "code": null, "e": 15764, "s": 15736, "text": "Performing a terraform init" }, { "code": null, "e": 15804, "s": 15764, "text": "Validating and formatting your tf files" }, { "code": null, "e": 15850, "s": 15804, "text": "Creating your resources on AWS with Terraform" }, { "code": null, "e": 15898, "s": 15850, "text": "Destroying your resources on AWS with Terraform" }, { "code": null, "e": 16297, "s": 15898, "text": "As mentioned above, Terraform will first look for the provider and backend to know how it should be properly interacting with your AWS account. In order to get that “communication” started, you’ll first need to perform the terraform init command. It will also download the proper plugins required to deploy the resources out to AWS. If the command has run successfully, this is what you should see:" }, { "code": null, "e": 16382, "s": 16297, "text": "That’s it for initialization! Let’s move on to validating and formatting your files." }, { "code": null, "e": 16902, "s": 16382, "text": "These steps are actually not required, but they help to make your life easier when trying to read / debug your code. By running the terraform validate command, Terraform will make sure that everything is squared away with your Terraform files without actually provisioning anything. If you’ve messed something up, Terraform will tell you what’s wrong. Just so you can see what this looks like, I’m going to comment out a required piece in my Terraform. Let’s see what happens when I run my terraform validate command..." }, { "code": null, "e": 17168, "s": 16902, "text": "As you can see, the first time I run the validate command outputs very plainly spelled out where I messed up and how to fix it. The second time I run the terraform validate command after I’ve fixed things, Terraform gives me the greenlight that my stuff looks good!" }, { "code": null, "e": 17513, "s": 17168, "text": "The latter command isn’t at all required, but it’s still nice. By running the terraform fmt command, Terraform will look at your tf files and do what it can to make everything all nice and pretty. This includes lining up all your resource parameters to have the = signs all line up. Again, not required at all, but I personally find it helpful." }, { "code": null, "e": 17913, "s": 17513, "text": "With your Terraform files validated, we’re finally ready to use them to provision our resources on AWS! Now if you want to get an understanding of how your resources will change prior to formally creating them, it’s a good idea to run the terraform plan command. This command will plainly spell out how things will change. Here’s an example of what this means with our little IAM role creation here:" }, { "code": null, "e": 18312, "s": 17913, "text": "If everything looks good to you, you’re clear to create these resources by running the terraform apply command. By running this command without any flags, Terraform will show you a similar output as terraform plan, except it will also prompt you to confirm the changes by entering yes. Upon entering yes, Terraform will appropriately create the resources and note successful creation appropriately." }, { "code": null, "e": 18501, "s": 18312, "text": "And there you go! 
Your first resources created on AWS with Terraform! If you go into the IAM console, you should now be able to see this IAM role we created from within there. Very cool!!!" }, { "code": null, "e": 18649, "s": 18501, "text": "Of course, I can understand if you would not want to keep this resource around, so let’s finish off this post with destroying what we just created." }, { "code": null, "e": 18997, "s": 18649, "text": "As you might be able to guess, destroying resources with Terraform is just as easy as creating them. Instead of running terraform apply, we instead run terraform destroy. Again, you’ll be prompted by Terraform to confirm the destruction of these resources by entering yes, and then Terraform will go ahead and destroy the IAM role we just created." }, { "code": null, "e": 19033, "s": 18997, "text": "Poof! Those resources are now gone." } ]
Class or Static Variables in Python - GeeksforGeeks
03 Aug, 2021
All objects share class or static variables. Instance (non-static) variables are different for different objects (every object has its own copy). For example, let a Computer Science student be represented by the class CSStudent. The class may have a static variable whose value is “cse” for all objects, and the class may also have non-static members like name and roll. In C++ and Java, we can use the static keyword to make a variable a class variable. The variables which don’t have a preceding static keyword are instance variables. See this for the Java example and this for the C++ example. The Python approach is simple; it doesn’t require a static keyword.
All variables which are assigned a value in the class declaration are class variables. And variables that are assigned values inside methods are instance variables.
Python

# Python program to show that the variables with a value
# assigned in class declaration, are class variables

# Class for Computer Science Student
class CSStudent:
    stream = 'cse'  # Class Variable

    def __init__(self, name, roll):
        self.name = name  # Instance Variable
        self.roll = roll  # Instance Variable


# Objects of CSStudent class
a = CSStudent('Geek', 1)
b = CSStudent('Nerd', 2)

print(a.stream)  # prints "cse"
print(b.stream)  # prints "cse"
print(a.name)    # prints "Geek"
print(b.name)    # prints "Nerd"
print(a.roll)    # prints "1"
print(b.roll)    # prints "2"

# Class variables can be accessed using class
# name also
print(CSStudent.stream)  # prints "cse"

# Now if we change the stream for just a it won't be changed for b
a.stream = 'ece'
print(a.stream)  # prints 'ece'
print(b.stream)  # prints 'cse'

# To change the stream for all instances of the class we can change it
# directly from the class
CSStudent.stream = 'mech'

print(a.stream)  # prints 'ece'
print(b.stream)  # prints 'mech'

Output:
cse
cse
Geek
Nerd
1
2
cse
ece
cse
ece
mech
This article is contributed by Harshit Gupta. If you like GeeksforGeeks and would like to contribute, you can also write an article and mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.
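A further point worth noting: since a class variable is shared by every object, a mutable class variable (for example, a list) behaves a little differently from the immutable string used above. Mutating it through one object is visible through all objects, while assigning to it through an object creates a new instance variable that shadows the class variable. The snippet below uses a hypothetical subjects field only to illustrate this:
Python

# A mutable class variable (a list) is shared across all instances
class CSStudent:
    subjects = []          # Class Variable (shared list)

    def __init__(self, name):
        self.name = name   # Instance Variable

x = CSStudent('Geek')
y = CSStudent('Nerd')

# Appending through one object mutates the shared class-level list
x.subjects.append('os')
print(y.subjects)          # prints "['os']" - y sees the change too

# Assigning through an object creates an instance attribute that
# shadows the class variable for that object only
y.subjects = ['dbms']
print(x.subjects)          # prints "['os']"
print(CSStudent.subjects)  # prints "['os']"
print(y.subjects)          # prints "['dbms']"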
[ { "code": null, "e": 41552, "s": 41524, "text": "\n03 Aug, 2021" }, { "code": null, "e": 42204, "s": 41552, "text": "All objects share class or static variables. An instance or non-static variables are different for different objects (every object has a copy). For example, let a Computer Science Student be represented by class CSStudent. The class may have a static variable whose value is “cse” for all objects. And class may also have non-static members like name and roll. In C++ and Java, we can use static keywords to make a variable a class variable. The variables which don’t have a preceding static keyword are instance variables. See this for the Java example and this for the C++ example.The Python approach is simple; it doesn’t require a static keyword. " }, { "code": null, "e": 42370, "s": 42204, "text": "All variables which are assigned a value in the class declaration are class variables. And variables that are assigned values inside methods are instance variables. " }, { "code": null, "e": 42377, "s": 42370, "text": "Python" }, { "code": "# Python program to show that the variables with a value# assigned in class declaration, are class variables # Class for Computer Science Studentclass CSStudent: stream = 'cse' # Class Variable def __init__(self,name,roll): self.name = name # Instance Variable self.roll = roll # Instance Variable # Objects of CSStudent classa = CSStudent('Geek', 1)b = CSStudent('Nerd', 2) print(a.stream) # prints \"cse\"print(b.stream) # prints \"cse\"print(a.name) # prints \"Geek\"print(b.name) # prints \"Nerd\"print(a.roll) # prints \"1\"print(b.roll) # prints \"2\" # Class variables can be accessed using class# name alsoprint(CSStudent.stream) # prints \"cse\" # Now if we change the stream for just a it won't be changed for ba.stream = 'ece'print(a.stream) # prints 'ece'print(b.stream) # prints 'cse' # To change the stream for all instances of the class we can change it# directly from the classCSStudent.stream = 'mech' print(a.stream) # prints 'ece'print(b.stream) # prints 'mech'", "e": 43417, "s": 42377, "text": null }, { "code": null, "e": 43426, "s": 43417, "text": "Output: " }, { "code": null, "e": 43469, "s": 43426, "text": "cse\ncse\nGeek\nNerd\n1\n2\ncse\nece\ncse\nece\nmech" }, { "code": null, "e": 44333, "s": 43469, "text": "YouTubeGeeksforGeeks501K subscribersPython Programming Tutorial | Class or Static Variables in Python | GeeksforGeeksWatch laterShareCopy linkInfoShoppingTap to unmuteIf playback doesn't begin shortly, try restarting your device.You're signed outVideos you watch may be added to the TV's watch history and influence TV recommendations. To avoid this, cancel and sign in to YouTube on your computer.CancelConfirmMore videosMore videosSwitch cameraShareInclude playlistAn error occurred while retrieving sharing information. Please try again later.Watch on0:000:000:00 / 2:13•Live•<div class=\"player-unavailable\"><h1 class=\"message\">An error occurred.</h1><div class=\"submessage\"><a href=\"https://www.youtube.com/watch?v=FlGiKthOFbU\" target=\"_blank\">Try watching this video on www.youtube.com</a>, or enable JavaScript if it is disabled in your browser.</div></div>" }, { "code": null, "e": 44601, "s": 44333, "text": "This article is contributed by Harshit Gupta. If you like GeeksforGeeks and would like to contribute, you can also write an article and mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks." 
}, { "code": null, "e": 44725, "s": 44601, "text": "Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above" }, { "code": null, "e": 44733, "s": 44725, "text": "rahulpy" }, { "code": null, "e": 44743, "s": 44733, "text": "adarshb20" }, { "code": null, "e": 44752, "s": 44743, "text": "neeruto1" }, { "code": null, "e": 44759, "s": 44752, "text": "Python" }, { "code": null, "e": 44857, "s": 44759, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 44866, "s": 44857, "text": "Comments" }, { "code": null, "e": 44879, "s": 44866, "text": "Old Comments" }, { "code": null, "e": 44907, "s": 44879, "text": "Read JSON file using Python" }, { "code": null, "e": 44957, "s": 44907, "text": "Adding new column to existing DataFrame in Pandas" }, { "code": null, "e": 44979, "s": 44957, "text": "Python map() function" }, { "code": null, "e": 45023, "s": 44979, "text": "How to get column names in Pandas dataframe" }, { "code": null, "e": 45058, "s": 45023, "text": "Read a file line by line in Python" }, { "code": null, "e": 45090, "s": 45058, "text": "How to Install PIP on Windows ?" }, { "code": null, "e": 45112, "s": 45090, "text": "Enumerate() in Python" }, { "code": null, "e": 45142, "s": 45112, "text": "Iterate over a list in Python" }, { "code": null, "e": 45184, "s": 45142, "text": "Different ways to create Pandas Dataframe" } ]
What happens when we exceed the valid range of built-in data types in C++? - GeeksforGeeks
12 Feb, 2021 Consider the below programs. 1) Program to show what happens when we cross range of ‘char’ : CPP // C++ program to demonstrate// the problem with 'char'#include <iostream> using namespace std; int main(){ for (char a = 0; a <= 225; a++) cout << a; return 0;} a is declared as char. Here the loop is working from 0 to 225. So, it should print from 0 to 225, then stop. But it will generate a infinite loop. The reason for this is the valid range of character datatype is -128 to 127. When ‘a’ become 128 through a++, the range is exceeded and as a result the first number from negative side of the range (i.e. -128) gets assigned to a. As a result of this ‘a’ will never reach at point 225. so it will print the infinite series of character.2) Program to show what happens when we cross range of ‘bool’ : CPP // C++ program to demonstrate// the problem with 'bool'#include <iostream> using namespace std; int main(){ // declaring Boolean // variable with true value bool a = true; for (a = 1; a <= 5; a++) cout << a; return 0;} This code will print ‘1’ infinite time because here ‘a’ is declared as ‘bool’ and it’s valid range is 0 to 1. And for a Boolean variable anything else than 0 is 1 (or true). When ‘a’ tries to become 2 (through a++), 1 gets assigned to ‘a’. The condition a<=5 is satisfied and the control remains with in the loop. See this for Bool data type.3) Program to show what happens when we cross range of ‘short’ : Note that short is short for short int. They are synonymous. short, short int, signed short, and signed short int are all the same data-type. CPP // C++ program to demonstrate// the problem with 'short'#include <iostream> using namespace std; int main(){ // declaring short variable short a; for (a = 32767; a < 32770; a++) cout << a << "\n"; return 0;} Will this code print ‘a’ till it becomes 32770? Well the answer is indefinite loop, because here ‘a’ is declared as a short and its valid range is -32768 to +32767. When ‘a’ tries to become 32768 through a++, the range is exceeded and as a result the first number from negative side of the range(i.e. -32768) gets assigned to a. Hence the condition “a < 32770” is satisfied and control remains within the loop.4) Program to show what happens when we cross range of ‘unsigned short’ : CPP // C++ program to demonstrate// the problem with 'unsigned short'#include <iostream> using namespace std; int main(){ unsigned short a; for (a = 65532; a < 65536; a++) cout << a << "\n"; return 0;} Will this code print ‘a’ till it becomes 65536? Well the answer is indefinite loop, because here ‘a’ is declared as a short and its valid range is 0 to +65535. When ‘a’ tries to become 65536 through a++, the range is exceeded and as a result the first number from the range(i.e. 0) gets assigned to a. Hence the condition “a < 65536” is satisfied and control remains within the loop.Explanation – We know that computer uses 2’s complement to represent data. For example if we have 1 byte (We can use char and use %d as format specifier to view it as decimal), we can represent -128 to 127. If we add 1 to 127 we will get -128. Thats because 127 is 01111111 in binary. And if we add 1 into 01111111 we will get 10000000. 10000000 is -128 in 2’s complement form.Same will happen if we use unsigned integers. 255 is 11111111 when we add 1 to 11111111 we will get 100000000. But we are using only first 8 bits, so that’s 0. 
Hence we get 0 after adding 1 in 255. This article is contributed by Aditya Rakhecha and improved by Sakshi Tiwari.
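To make the ranges discussed above easy to verify on your own machine, here is a small illustrative C++ sketch (an addition to this article, not part of the original) that prints the limits from the <limits> header and reproduces the wrap-around behaviour. It assumes a typical platform with an 8-bit signed char, a 16-bit short and a two's-complement representation, which is also what the examples above rely on.

#include <iostream>
#include <limits>
using namespace std;

int main()
{
    // Print the valid ranges used throughout the article
    cout << "char           : " << (int)numeric_limits<char>::min() << " to " << (int)numeric_limits<char>::max() << "\n";
    cout << "short          : " << numeric_limits<short>::min() << " to " << numeric_limits<short>::max() << "\n";
    cout << "unsigned short : 0 to " << numeric_limits<unsigned short>::max() << "\n";

    // Wrap-around for unsigned types is fully defined: max + 1 becomes 0
    unsigned short u = numeric_limits<unsigned short>::max();
    u = u + 1; // 65535 + 1 -> 0
    cout << "65535 + 1 as unsigned short = " << u << "\n";

    // For signed char the out-of-range result is implementation-defined;
    // two's-complement machines give -128, which is why the loops above never terminate
    char c = numeric_limits<char>::max();
    c = c + 1; // 127 + 1 -> typically -128
    cout << "127 + 1 as char = " << (int)c << "\n";
    return 0;
}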
[ { "code": null, "e": 24514, "s": 24486, "text": "\n12 Feb, 2021" }, { "code": null, "e": 24608, "s": 24514, "text": "Consider the below programs. 1) Program to show what happens when we cross range of ‘char’ : " }, { "code": null, "e": 24612, "s": 24608, "text": "CPP" }, { "code": "// C++ program to demonstrate// the problem with 'char'#include <iostream> using namespace std; int main(){ for (char a = 0; a <= 225; a++) cout << a; return 0;}", "e": 24787, "s": 24612, "text": null }, { "code": null, "e": 25334, "s": 24787, "text": "a is declared as char. Here the loop is working from 0 to 225. So, it should print from 0 to 225, then stop. But it will generate a infinite loop. The reason for this is the valid range of character datatype is -128 to 127. When ‘a’ become 128 through a++, the range is exceeded and as a result the first number from negative side of the range (i.e. -128) gets assigned to a. As a result of this ‘a’ will never reach at point 225. so it will print the infinite series of character.2) Program to show what happens when we cross range of ‘bool’ : " }, { "code": null, "e": 25338, "s": 25334, "text": "CPP" }, { "code": "// C++ program to demonstrate// the problem with 'bool'#include <iostream> using namespace std; int main(){ // declaring Boolean // variable with true value bool a = true; for (a = 1; a <= 5; a++) cout << a; return 0;}", "e": 25581, "s": 25338, "text": null }, { "code": null, "e": 26132, "s": 25581, "text": "This code will print ‘1’ infinite time because here ‘a’ is declared as ‘bool’ and it’s valid range is 0 to 1. And for a Boolean variable anything else than 0 is 1 (or true). When ‘a’ tries to become 2 (through a++), 1 gets assigned to ‘a’. The condition a<=5 is satisfied and the control remains with in the loop. See this for Bool data type.3) Program to show what happens when we cross range of ‘short’ : Note that short is short for short int. They are synonymous. short, short int, signed short, and signed short int are all the same data-type. " }, { "code": null, "e": 26136, "s": 26132, "text": "CPP" }, { "code": "// C++ program to demonstrate// the problem with 'short'#include <iostream> using namespace std; int main(){ // declaring short variable short a; for (a = 32767; a < 32770; a++) cout << a << \"\\n\"; return 0;}", "e": 26365, "s": 26136, "text": null }, { "code": null, "e": 26851, "s": 26365, "text": "Will this code print ‘a’ till it becomes 32770? Well the answer is indefinite loop, because here ‘a’ is declared as a short and its valid range is -32768 to +32767. When ‘a’ tries to become 32768 through a++, the range is exceeded and as a result the first number from negative side of the range(i.e. -32768) gets assigned to a. Hence the condition “a < 32770” is satisfied and control remains within the loop.4) Program to show what happens when we cross range of ‘unsigned short’ : " }, { "code": null, "e": 26855, "s": 26851, "text": "CPP" }, { "code": "// C++ program to demonstrate// the problem with 'unsigned short'#include <iostream> using namespace std; int main(){ unsigned short a; for (a = 65532; a < 65536; a++) cout << a << \"\\n\"; return 0;}", "e": 27071, "s": 26855, "text": null }, { "code": null, "e": 28485, "s": 27071, "text": "Will this code print ‘a’ till it becomes 65536? Well the answer is indefinite loop, because here ‘a’ is declared as a short and its valid range is 0 to +65535. When ‘a’ tries to become 65536 through a++, the range is exceeded and as a result the first number from the range(i.e. 0) gets assigned to a. 
Hence the condition “a < 65536” is satisfied and control remains within the loop.Explanation – We know that computer uses 2’s complement to represent data. For example if we have 1 byte (We can use char and use %d as format specifier to view it as decimal), we can represent -128 to 127. If we add 1 to 127 we will get -128. Thats because 127 is 01111111 in binary. And if we add 1 into 01111111 we will get 10000000. 10000000 is -128 in 2’s complement form.Same will happen if we use unsigned integers. 255 is 11111111 when we add 1 to 11111111 we will get 100000000. But we are using only first 8 bits, so that’s 0. Hence we get 0 after adding 1 in 255.This article is contributed by Aditya Rakhecha and improved by Sakshi Tiwari If you like GeeksforGeeks and would like to contribute, you can also write an article using contribute.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. " }, { "code": null, "e": 28503, "s": 28485, "text": "abhinavaggarwal36" }, { "code": null, "e": 28518, "s": 28503, "text": "SayanBanerjee4" }, { "code": null, "e": 28533, "s": 28518, "text": "SumeetMathpati" }, { "code": null, "e": 28550, "s": 28533, "text": "tiwarisakshi1302" }, { "code": null, "e": 28565, "s": 28550, "text": "cpp-data-types" }, { "code": null, "e": 28576, "s": 28565, "text": "C Language" }, { "code": null, "e": 28580, "s": 28576, "text": "C++" }, { "code": null, "e": 28584, "s": 28580, "text": "CPP" }, { "code": null, "e": 28682, "s": 28584, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 28691, "s": 28682, "text": "Comments" }, { "code": null, "e": 28704, "s": 28691, "text": "Old Comments" }, { "code": null, "e": 28716, "s": 28704, "text": "fork() in C" }, { "code": null, "e": 28748, "s": 28716, "text": "Command line arguments in C/C++" }, { "code": null, "e": 28795, "s": 28748, "text": "Different methods to reverse a string in C/C++" }, { "code": null, "e": 28812, "s": 28795, "text": "Substring in C++" }, { "code": null, "e": 28834, "s": 28812, "text": "Function Pointer in C" }, { "code": null, "e": 28852, "s": 28834, "text": "Vector in C++ STL" }, { "code": null, "e": 28898, "s": 28852, "text": "Initialize a vector in C++ (6 different ways)" }, { "code": null, "e": 28917, "s": 28898, "text": "Inheritance in C++" }, { "code": null, "e": 28960, "s": 28917, "text": "Map in C++ Standard Template Library (STL)" } ]
2D Shapes PathElement Vertical Line
The path element Vertical Line is used to draw a vertical line to a point in the specified coordinates from the current position. It is represented by a class named VLineTo. This class belongs to the package javafx.scene.shape. This class has a property of the double datatype namely − Y − The y coordinate of the point to which a vertical is to be drawn from the current position. Y − The y coordinate of the point to which a vertical is to be drawn from the current position. To draw the path element vertical line, you need to pass a value to this property. This can be done either by passing it to the constructor of this class at the time of instantiation as follows − LineTO line = new LineTo(x) Or, by using its respective setter methods as follows − setY(value); To draw a vertical line to a specified point from the current position in JavaFX, follow the steps given below. Create a Java class and inherit the Application class of the package javafx.application and implement the start() method of this class as follows. public class ClassName extends Application { @Override public void start(Stage primaryStage) throws Exception { } } Create the path class object as follows − //Creating a Path object Path path = new Path() Create the MoveTo path element and set XY coordinates to the starting point of the line to the coordinates (100, 150). This can be done by using the methods setX() and setY() of the class MoveTo as shown below. //Moving to the starting point MoveTo moveTo = new MoveTo(); moveTo.setX(100.0f); moveTo.setY(150.0f) Create the path element vertical line by instantiating the class named VLineTo, which belongs to the package javafx.scene.shape as follows. //Creating an object of the class VLineTo VLineTo vLineTo = new VLineTo(); Specify the coordinates of the point to which a vertical line is to be drawn from the current position. This can be done by setting the properties x and y using their respective setter methods as shown in the following code block. //Setting the Properties of the vertical line element lineTo.setX(500.0f); lineTo.setY(150.0f); Add the path elements MoveTo and VlineTo created in the previous steps to the observable list of the Path class as follows − //Adding the path elements to Observable list of the Path class path.getElements().add(moveTo); path.getElements().add(VlineTo); Create a group object by instantiating the class named Group, which belongs to the package javafx.scene. Pass the Line (node) object created in the previous step as a parameter to the constructor of the Group class. This can be done in order to add it to the group as shown below − Group root = new Group(line); Create a Scene by instantiating the class named Scene which belongs to the package javafx.scene. To this class, pass the Group object (root) created in the previous step. In addition to the root object, you can also pass two double parameters representing height and width of the screen along with the object of the Group class as follows − Scene scene = new Scene(group ,600, 300); You can set the title to the stage using the setTitle() method of the Stage class. The primaryStage is a Stage object which is passed to the start method of the scene class as a parameter. Using the primaryStage object, set the title of the scene as Sample Application as follows. primaryStage.setTitle("Sample Application"); You can add a Scene object to the stage using the method setScene() of the class named Stage. 
Add the Scene object prepared in the previous steps using this method as shown below − primaryStage.setScene(scene); Display the contents of the scene using the method named show() of the Stage class as follows. primaryStage.show(); Launch the JavaFX application by calling the static method launch() of the Application class from the main method as follows. public static void main(String args[]){ launch(args); } Following is a program which draws a vertical line from the current point to a specified position using the class Path of JavaFX. Save this code in a file with the name − VLineToExample.java. import javafx.application.Application; import javafx.scene.Group; import javafx.scene.Scene; import javafx.stage.Stage; import javafx.scene.shape.VLineTo; import javafx.scene.shape.MoveTo; import javafx.scene.shape.Path; public class VLineToExample extends Application { @Override public void start(Stage stage) { //Creating an object of the Path class Path path = new Path(); //Moving to the starting point MoveTo moveTo = new MoveTo(); moveTo.setX(100.0); moveTo.setY(150.0); //Instantiating the VLineTo class VLineTo vLineTo = new VLineTo(); //Setting the properties of the path element vertical line vLineTo.setY(10.0); //Adding the path elements to Observable list of the Path class path.getElements().add(moveTo); path.getElements().add(vLineTo); //Creating a Group object Group root = new Group(path); //Creating a scene object Scene scene = new Scene(root, 600, 300); //Setting title to the Stage stage.setTitle("Drawing a vertical line"); //Adding scene to the stage stage.setScene(scene); //Displaying the contents of the stage stage.show(); } public static void main(String args[]){ launch(args); } } Compile and execute the saved java file from the command prompt using the following commands. javac VLineToExample.java java VLineToExample On executing, the above program generates a JavaFX window displaying a vertical line, which is drawn from the current position to the specified point, as shown below. 33 Lectures 7.5 hours Syed Raza 64 Lectures 12.5 hours Emenwa Global, Ejike IfeanyiChukwu 20 Lectures 4 hours Emenwa Global, Ejike IfeanyiChukwu Print Add Notes Bookmark this page
[ { "code": null, "e": 2030, "s": 1900, "text": "The path element Vertical Line is used to draw a vertical line to a point in the specified coordinates from the current position." }, { "code": null, "e": 2128, "s": 2030, "text": "It is represented by a class named VLineTo. This class belongs to the package javafx.scene.shape." }, { "code": null, "e": 2186, "s": 2128, "text": "This class has a property of the double datatype namely −" }, { "code": null, "e": 2282, "s": 2186, "text": "Y − The y coordinate of the point to which a vertical is to be drawn from the current position." }, { "code": null, "e": 2378, "s": 2282, "text": "Y − The y coordinate of the point to which a vertical is to be drawn from the current position." }, { "code": null, "e": 2574, "s": 2378, "text": "To draw the path element vertical line, you need to pass a value to this property. This can be done either by passing it to the constructor of this class at the time of instantiation as follows −" }, { "code": null, "e": 2603, "s": 2574, "text": "LineTO line = new LineTo(x)\n" }, { "code": null, "e": 2659, "s": 2603, "text": "Or, by using its respective setter methods as follows −" }, { "code": null, "e": 2675, "s": 2659, "text": "setY(value); \n" }, { "code": null, "e": 2787, "s": 2675, "text": "To draw a vertical line to a specified point from the current position in JavaFX, follow the steps given below." }, { "code": null, "e": 2934, "s": 2787, "text": "Create a Java class and inherit the Application class of the package javafx.application and implement the start() method of this class as follows." }, { "code": null, "e": 3079, "s": 2934, "text": "public class ClassName extends Application { \n @Override \n public void start(Stage primaryStage) throws Exception { \n } \n} " }, { "code": null, "e": 3121, "s": 3079, "text": "Create the path class object as follows −" }, { "code": null, "e": 3171, "s": 3121, "text": "//Creating a Path object \nPath path = new Path()\n" }, { "code": null, "e": 3382, "s": 3171, "text": "Create the MoveTo path element and set XY coordinates to the starting point of the line to the coordinates (100, 150). This can be done by using the methods setX() and setY() of the class MoveTo as shown below." }, { "code": null, "e": 3489, "s": 3382, "text": "//Moving to the starting point \nMoveTo moveTo = new MoveTo(); \nmoveTo.setX(100.0f); \nmoveTo.setY(150.0f) \n" }, { "code": null, "e": 3629, "s": 3489, "text": "Create the path element vertical line by instantiating the class named VLineTo, which belongs to the package javafx.scene.shape as follows." }, { "code": null, "e": 3707, "s": 3629, "text": "//Creating an object of the class VLineTo \nVLineTo vLineTo = new VLineTo();\n" }, { "code": null, "e": 3938, "s": 3707, "text": "Specify the coordinates of the point to which a vertical line is to be drawn from the current position. This can be done by setting the properties x and y using their respective setter methods as shown in the following code block." 
}, { "code": null, "e": 4037, "s": 3938, "text": "//Setting the Properties of the vertical line element \nlineTo.setX(500.0f); \nlineTo.setY(150.0f);\n" }, { "code": null, "e": 4162, "s": 4037, "text": "Add the path elements MoveTo and VlineTo created in the previous steps to the observable list of the Path class as follows −" }, { "code": null, "e": 4297, "s": 4162, "text": "//Adding the path elements to Observable list of the Path class \npath.getElements().add(moveTo); \npath.getElements().add(VlineTo); \n" }, { "code": null, "e": 4402, "s": 4297, "text": "Create a group object by instantiating the class named Group, which belongs to the package javafx.scene." }, { "code": null, "e": 4579, "s": 4402, "text": "Pass the Line (node) object created in the previous step as a parameter to the constructor of the Group class. This can be done in order to add it to the group as shown below −" }, { "code": null, "e": 4611, "s": 4579, "text": "Group root = new Group(line); \n" }, { "code": null, "e": 4782, "s": 4611, "text": "Create a Scene by instantiating the class named Scene which belongs to the package javafx.scene. To this class, pass the Group object (root) created in the previous step." }, { "code": null, "e": 4952, "s": 4782, "text": "In addition to the root object, you can also pass two double parameters representing height and width of the screen along with the object of the Group class as follows −" }, { "code": null, "e": 4995, "s": 4952, "text": "Scene scene = new Scene(group ,600, 300);\n" }, { "code": null, "e": 5184, "s": 4995, "text": "You can set the title to the stage using the setTitle() method of the Stage class. The primaryStage is a Stage object which is passed to the start method of the scene class as a parameter." }, { "code": null, "e": 5276, "s": 5184, "text": "Using the primaryStage object, set the title of the scene as Sample Application as follows." }, { "code": null, "e": 5323, "s": 5276, "text": "primaryStage.setTitle(\"Sample Application\"); \n" }, { "code": null, "e": 5504, "s": 5323, "text": "You can add a Scene object to the stage using the method setScene() of the class named Stage. Add the Scene object prepared in the previous steps using this method as shown below −" }, { "code": null, "e": 5535, "s": 5504, "text": "primaryStage.setScene(scene);\n" }, { "code": null, "e": 5630, "s": 5535, "text": "Display the contents of the scene using the method named show() of the Stage class as follows." }, { "code": null, "e": 5652, "s": 5630, "text": "primaryStage.show();\n" }, { "code": null, "e": 5778, "s": 5652, "text": "Launch the JavaFX application by calling the static method launch() of the Application class from the main method as follows." }, { "code": null, "e": 5846, "s": 5778, "text": "public static void main(String args[]){ \n launch(args); \n}" }, { "code": null, "e": 6038, "s": 5846, "text": "Following is a program which draws a vertical line from the current point to a specified position using the class Path of JavaFX. Save this code in a file with the name − VLineToExample.java." 
}, { "code": null, "e": 7448, "s": 6038, "text": "import javafx.application.Application; \nimport javafx.scene.Group; \nimport javafx.scene.Scene; \nimport javafx.stage.Stage;\nimport javafx.scene.shape.VLineTo; \nimport javafx.scene.shape.MoveTo; \nimport javafx.scene.shape.Path; \n\npublic class VLineToExample extends Application { \n @Override \n public void start(Stage stage) { \n //Creating an object of the Path class \n Path path = new Path(); \n \n //Moving to the starting point \n MoveTo moveTo = new MoveTo(); \n moveTo.setX(100.0); \n moveTo.setY(150.0); \n \n //Instantiating the VLineTo class \n VLineTo vLineTo = new VLineTo(); \n \n //Setting the properties of the path element vertical line \n vLineTo.setY(10.0); \n \n //Adding the path elements to Observable list of the Path class \n path.getElements().add(moveTo); \n path.getElements().add(vLineTo); \n \n //Creating a Group object \n Group root = new Group(path); \n \n //Creating a scene object \n Scene scene = new Scene(root, 600, 300); \n \n //Setting title to the Stage \n stage.setTitle(\"Drawing a vertical line\"); \n \n //Adding scene to the stage \n stage.setScene(scene);\n \n //Displaying the contents of the stage \n stage.show(); \n } \n public static void main(String args[]){ \n launch(args); \n } \n} " }, { "code": null, "e": 7542, "s": 7448, "text": "Compile and execute the saved java file from the command prompt using the following commands." }, { "code": null, "e": 7590, "s": 7542, "text": "javac VLineToExample.java \njava VLineToExample\n" }, { "code": null, "e": 7757, "s": 7590, "text": "On executing, the above program generates a JavaFX window displaying a vertical line, which is drawn from the current position to the specified point, as shown below." }, { "code": null, "e": 7792, "s": 7757, "text": "\n 33 Lectures \n 7.5 hours \n" }, { "code": null, "e": 7803, "s": 7792, "text": " Syed Raza" }, { "code": null, "e": 7839, "s": 7803, "text": "\n 64 Lectures \n 12.5 hours \n" }, { "code": null, "e": 7875, "s": 7839, "text": " Emenwa Global, Ejike IfeanyiChukwu" }, { "code": null, "e": 7908, "s": 7875, "text": "\n 20 Lectures \n 4 hours \n" }, { "code": null, "e": 7944, "s": 7908, "text": " Emenwa Global, Ejike IfeanyiChukwu" }, { "code": null, "e": 7951, "s": 7944, "text": " Print" }, { "code": null, "e": 7962, "s": 7951, "text": " Add Notes" } ]
Arithmetic Progression - Common difference and Nth term | Class 10 Maths - GeeksforGeeks
27 Oct, 2020 Arithmetic Progression is a sequence of numbers where the difference between any two successive numbers is constant. For example 1, 3, 5, 7, 9....... is in a series which has a common difference (3 – 1) between two successive terms is equal to 2. If we take natural numbers as an example of series 1, 2, 3, 4... then the common difference (2 – 1) between the two successive terms is equal to 1. In other words, arithmetic progression can be defined as “A mathematical sequence in which the difference between two consecutive terms is always a constant“. We come across the different words like sequence, series, and progression in AP, now let us see what does each word define – Sequence is a finite or infinite list of numbers that follows a certain pattern. For example 0, 1, 2, 3, 4, 5... is the sequence, which is infinite sequence of whole numbers. Series is the sum of the elements in which the sequence is corresponding For example 1 + 2 + 3 + 4 + 5....is the series of natural numbers. Each number in a sequence or a series is called a term. Here 1 is a term, 2 is a term, 3 is a term ....... Progression is a sequence in which the general term can be expressed using a mathematical formula or the Sequence which uses a mathematical formula that can be defined as the progression. The common difference in the arithmetic progression is denoted by d. The difference between the successive term and its preceding term. It is always constant or the same for arithmetic progression. In other words, we can say that, in a given sequence if the common difference is constant or the same then we can say that the given sequence is in Arithmetic Progression. The formula to find common difference is d = (an + 1 – an ) or d = (an – an-1). If the common difference is positive, then AP increases. For Example 4, 8, 12, 16..... in these series, AP increases If the common difference is negative then AP decreases. For Example -4, -6, -8......., here AP decreases. If the common difference is zero then AP will be constant. For Example 1, 2, 3, 4, 5........., here AP is constant. The sequence of Arithmetic Progression will be like a1, a2, a3, a4,... Example 1: 0, 5, 10, 15, 20..... here, a1 = 0, a2 = 5, so a2 - a1 = d = 5 - 0 = 5. a3 = 10, a2 = 5, so a3 - a2 = 10 - 5 = 5. a4 = 15, a3 = 10, so a4 - a3 = 15 - 10 =5. a5 =20, a4 =15, so a5 -a4 = 20 - 15 = 5. From the above example, we can say that the common difference is “5”. Example 2: 0, 7, 14, 21, 28....... here, a1 = 0, a2 = 7, so a2 - a1 = 7 - 0 = 7 a3 = 14, a2 = 7, so a3 - a2 = 14 - 7 = 7 a4 = 21, a3 = 14, so a4 - a3 = 21 - 14 = 7 a5 =28, a4 = 21, so a5 -a4 =28 - 21 = 7 From the above example, we can say that the common difference is “7”. To find the middle term of an arithmetic progression we need the total number of terms in a sequence. We have two cases: Even: If the number of terms in the sequence is even then we will be having two middle terms i.e (n/2) and (n/2 + 1).n Odd: If the number of terms in the sequence is odd then we will be having only one middle terms i.e (n/2). Example 1: If n = 9 then, Middle term = n/2 = 9/2 = 4. Example 2: If n = 16 then, First middle term = n/2 = 16/2 = 8. Second middle term = (n/2) + 1 = (16/2) + 1 = 8 + 1 = 9. To find the nth term of an arithmetic progression, We know that the A.P series is in the form of a, a + d, a + 2d, a + 3d, a + 4d.......... The nth term is denoted by Tn. Thus to find the nth term of an A.P series will be : Example: Find the 9th term of the given A.P sequence: 3, 6, 9, 12, 15...........? 
Step 1: Write the given series. Given series = 3, 6, 9, 12, 15........... Step 2: Now write down the values of a and n from the given series. a = 3, n = 9 Step 3: Find the common difference d by using the formula d = (an+1 – an). Here a2 = 6 and a1 = 3, so d = a2 - a1 = (6 - 3) = 3. Step 4: Substitute the values of a, d and n in the formula Tn = a + (n – 1)d. Given n = 9: T9 = 3 + (9 - 1)3 = 3 + (8)3 = 3 + 24 = 27. Therefore, the 9th term of the given A.P. series 3, 6, 9, 12, 15.......... is 27.
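The same calculation is easy to check with a short program. The following illustrative C++ sketch (an addition, not part of the original article) computes the common difference and the nth term with the formulas used above, d = a2 − a1 and Tn = a + (n − 1)d, for the sample series 3, 6, 9, 12, 15:

#include <iostream>
#include <vector>
using namespace std;

int main()
{
    // Sample A.P. from the worked example above
    vector<int> ap = {3, 6, 9, 12, 15};

    int a = ap[0];            // first term
    int d = ap[1] - ap[0];    // common difference d = a2 - a1
    int n = 9;                // term to find

    int tn = a + (n - 1) * d; // Tn = a + (n - 1)d
    cout << "Common difference d = " << d << "\n";
    cout << n << "th term = " << tn << "\n"; // prints 27, matching the article
    return 0;
}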
[ { "code": null, "e": 24442, "s": 24414, "text": "\n27 Oct, 2020" }, { "code": null, "e": 24837, "s": 24442, "text": "Arithmetic Progression is a sequence of numbers where the difference between any two successive numbers is constant. For example 1, 3, 5, 7, 9....... is in a series which has a common difference (3 – 1) between two successive terms is equal to 2. If we take natural numbers as an example of series 1, 2, 3, 4... then the common difference (2 – 1) between the two successive terms is equal to 1." }, { "code": null, "e": 24996, "s": 24837, "text": "In other words, arithmetic progression can be defined as “A mathematical sequence in which the difference between two consecutive terms is always a constant“." }, { "code": null, "e": 25121, "s": 24996, "text": "We come across the different words like sequence, series, and progression in AP, now let us see what does each word define –" }, { "code": null, "e": 25297, "s": 25121, "text": "Sequence is a finite or infinite list of numbers that follows a certain pattern. For example 0, 1, 2, 3, 4, 5... is the sequence, which is infinite sequence of whole numbers. " }, { "code": null, "e": 25548, "s": 25299, "text": "Series is the sum of the elements in which the sequence is corresponding For example 1 + 2 + 3 + 4 + 5....is the series of natural numbers. Each number in a sequence or a series is called a term. Here 1 is a term, 2 is a term, 3 is a term ....... " }, { "code": null, "e": 25738, "s": 25550, "text": "Progression is a sequence in which the general term can be expressed using a mathematical formula or the Sequence which uses a mathematical formula that can be defined as the progression." }, { "code": null, "e": 26108, "s": 25738, "text": "The common difference in the arithmetic progression is denoted by d. The difference between the successive term and its preceding term. It is always constant or the same for arithmetic progression. In other words, we can say that, in a given sequence if the common difference is constant or the same then we can say that the given sequence is in Arithmetic Progression." }, { "code": null, "e": 26188, "s": 26108, "text": "The formula to find common difference is d = (an + 1 – an ) or d = (an – an-1)." }, { "code": null, "e": 26305, "s": 26188, "text": "If the common difference is positive, then AP increases. For Example 4, 8, 12, 16..... in these series, AP increases" }, { "code": null, "e": 26412, "s": 26305, "text": "If the common difference is negative then AP decreases. For Example -4, -6, -8......., here AP decreases." }, { "code": null, "e": 26528, "s": 26412, "text": "If the common difference is zero then AP will be constant. For Example 1, 2, 3, 4, 5........., here AP is constant." }, { "code": null, "e": 26599, "s": 26528, "text": "The sequence of Arithmetic Progression will be like a1, a2, a3, a4,..." }, { "code": null, "e": 26632, "s": 26599, "text": "Example 1: 0, 5, 10, 15, 20....." }, { "code": null, "e": 26812, "s": 26632, "text": "here, \na1 = 0, a2 = 5, so a2 - a1 = d = 5 - 0 = 5. \na3 = 10, a2 = 5, so a3 - a2 = 10 - 5 = 5.\na4 = 15, a3 = 10, so a4 - a3 = 15 - 10 =5.\na5 =20, a4 =15, so a5 -a4 = 20 - 15 = 5.\n\n" }, { "code": null, "e": 26882, "s": 26812, "text": "From the above example, we can say that the common difference is “5”." }, { "code": null, "e": 26917, "s": 26882, "text": "Example 2: 0, 7, 14, 21, 28......." 
}, { "code": null, "e": 27089, "s": 26917, "text": "here, \na1 = 0, a2 = 7, so a2 - a1 = 7 - 0 = 7\na3 = 14, a2 = 7, so a3 - a2 = 14 - 7 = 7\na4 = 21, a3 = 14, so a4 - a3 = 21 - 14 = 7\na5 =28, a4 = 21, so a5 -a4 =28 - 21 = 7\n\n" }, { "code": null, "e": 27159, "s": 27089, "text": "From the above example, we can say that the common difference is “7”." }, { "code": null, "e": 27281, "s": 27159, "text": "To find the middle term of an arithmetic progression we need the total number of terms in a sequence. We have two cases:" }, { "code": null, "e": 27401, "s": 27281, "text": "Even: If the number of terms in the sequence is even then we will be having two middle terms i.e (n/2) and (n/2 + 1).n" }, { "code": null, "e": 27508, "s": 27401, "text": "Odd: If the number of terms in the sequence is odd then we will be having only one middle terms i.e (n/2)." }, { "code": null, "e": 27519, "s": 27508, "text": "Example 1:" }, { "code": null, "e": 27564, "s": 27519, "text": "If n = 9 then,\nMiddle term = n/2 = 9/2 = 4.\n" }, { "code": null, "e": 27575, "s": 27564, "text": "Example 2:" }, { "code": null, "e": 27686, "s": 27575, "text": "If n = 16 then,\nFirst middle term = n/2 = 16/2 = 8.\nSecond middle term = (n/2) + 1 = (16/2) + 1 = 8 + 1 = 9.\n\n" }, { "code": null, "e": 27826, "s": 27686, "text": "To find the nth term of an arithmetic progression, We know that the A.P series is in the form of a, a + d, a + 2d, a + 3d, a + 4d.........." }, { "code": null, "e": 27913, "s": 27826, "text": "The nth term is denoted by Tn. Thus to find the nth term of an A.P series will be : " }, { "code": null, "e": 27995, "s": 27913, "text": "Example: Find the 9th term of the given A.P sequence: 3, 6, 9, 12, 15...........?" }, { "code": null, "e": 28027, "s": 27995, "text": "Step 1: Write the given series." }, { "code": null, "e": 28071, "s": 28027, "text": "Given series = 3, 6, 9, 12, 15...........\n\n" }, { "code": null, "e": 28139, "s": 28071, "text": "Step 2: Now write down the value of a and n from the given series." }, { "code": null, "e": 28154, "s": 28139, "text": "a = 3, n = 9\n\n" }, { "code": null, "e": 28227, "s": 28154, "text": "Step 3: Find the common difference d by using the formula (an+1 – an)." }, { "code": null, "e": 28290, "s": 28227, "text": "d = a2 - a1 , \nhere a2 = 6 and a1 = 3 \nso d = (6 - 3) = 3.\n\n" }, { "code": null, "e": 28375, "s": 28290, "text": "Step 4: We need to substitute values of a, d, n in the formula (Tn = a + (n – 1)d)." }, { "code": null, "e": 28456, "s": 28375, "text": "Tn = a + (n - 1)d \ngiven n = 9.\n\nT9 = 3 + (9 - 1)3\n\n= 3 + (8)3\n\n= 3 + 24 = 27\n\n" }, { "code": null, "e": 28536, "s": 28456, "text": "Therefore the 9th term of given A.P series 3, 6, 9, 12, 15.......... is “27”." }, { "code": null, "e": 28560, "s": 28536, "text": "Arithmetic Progressions" }, { "code": null, "e": 28569, "s": 28560, "text": "Class 10" }, { "code": null, "e": 28585, "s": 28569, "text": "School Learning" }, { "code": null, "e": 28604, "s": 28585, "text": "School Mathematics" }, { "code": null, "e": 28702, "s": 28604, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 28760, "s": 28702, "text": "Mobile Technologies - Definition, Types, Uses, Advantages" }, { "code": null, "e": 28785, "s": 28760, "text": "Introduction to Internet" }, { "code": null, "e": 28835, "s": 28785, "text": "Chemical Indicators - Definition, Types, Examples" }, { "code": null, "e": 28896, "s": 28835, "text": "Rusting of Iron - Explanation, Chemical Reaction, Prevention" }, { "code": null, "e": 28943, "s": 28896, "text": "Magnetic Field due to Current in Straight Wire" }, { "code": null, "e": 28970, "s": 28943, "text": "How to Align Text in HTML?" }, { "code": null, "e": 28994, "s": 28970, "text": "Cloud Deployment Models" }, { "code": null, "e": 29048, "s": 28994, "text": "What is a Storage Device? Definition, Types, Examples" }, { "code": null, "e": 29068, "s": 29048, "text": "Libraries in Python" } ]
Robot Bounded In Circle C++
Suppose we have an infinite plane, a robot initially stands at position (0, 0) and faces north. The robot can receive one of three instructions − G − go straight 1 unit; L − turn 90 degrees to the left direction; R − turn 90 degrees to the right direction. The robot performs the instructions in the given order, and the instruction list is repeated forever. We have to check whether there exists a circle in the plane such that the robot never leaves the circle. So if the input is like [GGLLGG], then the answer will be true: the robot walks from (0, 0) to (0, 2), turns around, and walks back to (0, 0); repeating this keeps it on a bounded closed path. To solve this, we will follow these steps − make an array dir := [[0,1], [1,0], [0,-1], [-1,0]]; make a pair temp, initially (0, 0), and set k := 0; for i in range 0 to size of s: if s[i] is G, then add (dir[k, 0], dir[k, 1]) to temp, otherwise when s[i] is L set k := (k + 1) mod 4, otherwise set k := (k - 1) mod 4; finally return true if temp is (0, 0) or k > 0, otherwise return false. Let us see the following implementation to get a better understanding − #include <bits/stdc++.h> using namespace std; int dir[4][2] = {{0, 1}, {1, 0}, {0, -1}, {-1, 0}}; class Solution { public: bool isRobotBounded(string s) { pair <int, int> temp({0,0}); int k = 0; for(int i = 0; i < (int)s.size(); i++){ if(s[i] == 'G'){ temp.first += dir[k][0]; temp.second += dir[k][1]; }else if(s[i] == 'L'){ k = (k + 1) % 4; }else{ k = ((k - 1) + 4) % 4; } } return (temp.first == 0 && temp.second == 0) || k > 0; } }; int main(){ Solution ob; cout << (ob.isRobotBounded("GGLLGG")); } "GGLLGG" 1
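A common way to convince yourself of the rule used in the return statement is to simulate four repetitions of the instruction string: after four passes the net rotation is a multiple of 360 degrees, so the robot is bounded exactly when it is back at the origin. The sketch below is an alternative illustrative check, not the article's code, and the helper name is made up:

#include <bits/stdc++.h>
using namespace std;

// Hypothetical helper: run the instructions four times and
// test whether the robot returns to the origin
bool boundedByFourPasses(const string& s)
{
    int x = 0, y = 0, k = 0; // position and direction index (N, E, S, W)
    int dir[4][2] = {{0, 1}, {1, 0}, {0, -1}, {-1, 0}};
    for (int pass = 0; pass < 4; pass++) {
        for (char c : s) {
            if (c == 'G') { x += dir[k][0]; y += dir[k][1]; }
            else if (c == 'L') k = (k + 3) % 4; // counter-clockwise turn
            else k = (k + 1) % 4;               // clockwise turn
        }
    }
    return x == 0 && y == 0;
}

int main()
{
    cout << boundedByFourPasses("GGLLGG") << "\n"; // 1 : returns to the origin
    cout << boundedByFourPasses("GG") << "\n";     // 0 : walks north forever
    cout << boundedByFourPasses("GL") << "\n";     // 1 : ends up facing west, so the path loops
    return 0;
}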
[ { "code": null, "e": 1208, "s": 1062, "text": "Suppose we have an infinite plane, a robot initially stands at position (0, 0) and faces north. The robot can receive one of three instructions −" }, { "code": null, "e": 1232, "s": 1208, "text": "G − go straight 1 unit;" }, { "code": null, "e": 1256, "s": 1232, "text": "G − go straight 1 unit;" }, { "code": null, "e": 1299, "s": 1256, "text": "L − turn 90 degrees to the left direction;" }, { "code": null, "e": 1342, "s": 1299, "text": "L − turn 90 degrees to the left direction;" }, { "code": null, "e": 1386, "s": 1342, "text": "R − turn 90 degrees to the right direction." }, { "code": null, "e": 1430, "s": 1386, "text": "R − turn 90 degrees to the right direction." }, { "code": null, "e": 1779, "s": 1430, "text": "The robot performs the instructions given in order, Instructions are repeated forever. We have to check whether there exists a circle in the plane such that the robot never leaves the circle. So if the input is like [GGLLGG], then the answer will be true. from (0,0) to (0,2), it will loop forever, so this is a closed path, and the answer is true." }, { "code": null, "e": 1823, "s": 1779, "text": "To solve this, we will follow these steps −" }, { "code": null, "e": 1875, "s": 1823, "text": "make an array dir := [[0,1], [1,0], [0,-1], [-1,0]]" }, { "code": null, "e": 1927, "s": 1875, "text": "make an array dir := [[0,1], [1,0], [0,-1], [-1,0]]" }, { "code": null, "e": 1985, "s": 1927, "text": "make a pair temp, and initially this is (0, 0) and k := 0" }, { "code": null, "e": 2043, "s": 1985, "text": "make a pair temp, and initially this is (0, 0) and k := 0" }, { "code": null, "e": 2200, "s": 2043, "text": "for i in range 0 to size of sif s[i] is G, thentemp := (dir[k, 0], dir[k, 1])otherwise when s[i] is L, then k := (k + 1) mod 4, otherwise k := (k - 1) mod 4" }, { "code": null, "e": 2230, "s": 2200, "text": "for i in range 0 to size of s" }, { "code": null, "e": 2279, "s": 2230, "text": "if s[i] is G, thentemp := (dir[k, 0], dir[k, 1])" }, { "code": null, "e": 2298, "s": 2279, "text": "if s[i] is G, then" }, { "code": null, "e": 2329, "s": 2298, "text": "temp := (dir[k, 0], dir[k, 1])" }, { "code": null, "e": 2360, "s": 2329, "text": "temp := (dir[k, 0], dir[k, 1])" }, { "code": null, "e": 2440, "s": 2360, "text": "otherwise when s[i] is L, then k := (k + 1) mod 4, otherwise k := (k - 1) mod 4" }, { "code": null, "e": 2520, "s": 2440, "text": "otherwise when s[i] is L, then k := (k + 1) mod 4, otherwise k := (k - 1) mod 4" }, { "code": null, "e": 2579, "s": 2520, "text": "if false when temp is not (0, 0) and k > 0, otherwise true" }, { "code": null, "e": 2638, "s": 2579, "text": "if false when temp is not (0, 0) and k > 0, otherwise true" }, { "code": null, "e": 2708, "s": 2638, "text": "Let us see the following implementation to get better understanding −" }, { "code": null, "e": 2719, "s": 2708, "text": " Live Demo" }, { "code": null, "e": 3340, "s": 2719, "text": "#include <bits/stdc++.h>\nusing namespace std;\nint dir[4][2] = {{0, 1}, {1, 0}, {0, -1}, {-1, 0}};\nclass Solution {\n public:\n bool isRobotBounded(string s) {\n pair <int, int> temp({0,0});\n int k = 0;\n for(int i = 0; i < s.size(); i++){\n if(s[i] == 'G'){\n temp.first += dir[k][0];\n temp.second += dir[k][1];\n }else if(s[i] == 'L'){\n k = (k + 1) % 4;\n }else{\n k = ((k - 1) + 4) % 4;\n }\n }\n return temp.first == 0 && temp.second == 0 || k > 0;\n }\n};\nmain(){\n Solution ob;\n cout << (ob.isRobotBounded(\"GGLLGG\"));\n}" }, { "code": null, "e": 3349, "s": 
3340, "text": "\"GGLLGG\"" }, { "code": null, "e": 3351, "s": 3349, "text": "1" } ]
How to open a JSON file?
31 Oct, 2021 A JSON file stores data and objects in the JSON format. JSON (JavaScript Object Notation) is a standard format to store and exchange data. Initially, JSON files were used only to exchange data between a web application and a server. Now, they are used for many other purposes, such as taking and restoring data backups. Users can create a JSON file with the .json extension. It is a simple text-based, human-readable file that we can edit and read in any compatible text editor. A JSON file doesn't take much space to store data, as it is a plain text file. Before we learn to open JSON files, we need to create one. To create a sample JSON file, follow the basic steps below. Open the text editor on your computer. Create a new file and save it. Users need to save the file with the .json extension. Copy the sample JSON code below, paste it into the file, and save it again. Sample JSON code: /* sample json code */{ "firstName": "fname", "lastName": "lname", "gender": "male", "age": 25, "address": { "streetAddress": "26 colony", "city": "Ahemdabad", "state": "Gujarat", "postalCode": "354323" }, "phoneNumbers": [ { "type": "business", "number": "9323227323" } ]} We have created a sample JSON file successfully. In this section, we will discuss tools to open JSON files. Cross-platform ways to open JSON files: Generally, users can open a JSON file in any text editor, as it is a plain text-based file. The Google Chrome and Mozilla Firefox web browsers can also open JSON files and are available for every operating system (OS). Users can follow the steps below to open JSON files in the Chrome or Firefox browsers. Right-click on the JSON file. Choose the "Open with" option from the menu. From the drop-down menu choose either Chrome or Firefox. If you are not able to find Chrome in the app menu, click on "Choose another app"; Chrome and Firefox will be listed there. The user will see the following output when the JSON file opens in the browser. Alternatively, users can copy the location path of the JSON file and paste it into the browser's address bar to read the file. Every operating system supports different text editors. Here, we have provided a list of the best text editors for every operating system. A user can open and edit JSON files with any text editor from the above table. If users do not want to download any tools or applications to open JSON files, they can edit them with online tools. Users should have a working internet connection to edit JSON files online. Edit JSON files online: Open any browser and search for 'Online JSON editor'. Click on the first link in the result. The JSON editor opens and you can read files from the local computer. You can also edit JSON files and save them locally or to the cloud. Online tools like these save you from having to download offline tools.
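Besides editors and browsers, a JSON file can also be read programmatically. The following C++ sketch is only an illustration and is not covered in the article; it assumes the third-party single-header library nlohmann/json is installed and that the sample code above was saved as sample.json (both the library choice and the file name are assumptions):

#include <fstream>
#include <iostream>
#include <nlohmann/json.hpp> // third-party library, assumed to be installed

int main()
{
    std::ifstream in("sample.json"); // assumed file name for the sample above
    if (!in) {
        std::cerr << "could not open sample.json\n";
        return 1;
    }
    nlohmann::json j;
    in >> j; // parse the whole file
    std::cout << j["firstName"] << " " << j["lastName"] << "\n";
    std::cout << "city: " << j["address"]["city"] << "\n";
    std::cout << j.dump(4) << "\n"; // pretty-print with 4-space indentation
    return 0;
}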
[ { "code": null, "e": 28, "s": 0, "text": "\n31 Oct, 2021" }, { "code": null, "e": 354, "s": 28, "text": "A JSON file stores the data and object in the JSON format. A JSON ( JavaScript object notation )format is a standard format to store and exchange data. Initially, JSON file is only used to exchange the data between the web application and server. Now, it is used for many purposes such as to take and restore the data backup." }, { "code": null, "e": 600, "s": 354, "text": "Users can create a JSON file with the .json extension. It is a simple text-based and human-readable file that we can edit and read in the compatible text editor. JSON file doesn’t take too much space to store the data as it is a plain text file." }, { "code": null, "e": 723, "s": 600, "text": "Before we learn to open the JSON files, we need to create them. To create a sample JSON file follow the below basic steps." }, { "code": null, "e": 914, "s": 723, "text": "Open the text editor on your computer.Create a new file and save it.Users need to save files with the .json extension.Copy the below sample JSON code and paste into a file and save it again." }, { "code": null, "e": 953, "s": 914, "text": "Open the text editor on your computer." }, { "code": null, "e": 984, "s": 953, "text": "Create a new file and save it." }, { "code": null, "e": 1035, "s": 984, "text": "Users need to save files with the .json extension." }, { "code": null, "e": 1108, "s": 1035, "text": "Copy the below sample JSON code and paste into a file and save it again." }, { "code": null, "e": 1128, "s": 1110, "text": "Sample JSON code:" }, { "code": null, "e": 1139, "s": 1128, "text": "Javascript" }, { "code": "/* sample json code */{ \"firstName\": \"fname\", \"lastName\": \"lname\", \"gender\": \"male\", \"age\": 25, \"address\": { \"streetAddress\": \"26 colony\", \"city\": \"Ahemdabad\", \"state\": \"Gujarat\", \"postalCode\": \"354323\" }, \"phoneNumbers\": [ { \"type\": \"business\", \"number\": \"9323227323\" } ]}", "e": 1472, "s": 1139, "text": null }, { "code": null, "e": 1580, "s": 1472, "text": "We have created a sample JSON file successfully. In this section, we will discuss tools to open JSON files." }, { "code": null, "e": 1934, "s": 1580, "text": "Cross-platform to open JSON files: Generally, users can open the JSON file in any text editor as it is a plain text-based file. The Google Chrome and Mozilla Firefox web browsers are cross-platform to open JSON files that are compatible with every operating system (OS). Users can follow the below steps to open JSON files in Chrome or Firefox browsers." }, { "code": null, "e": 1964, "s": 1934, "text": "Right-click on the JSON file." }, { "code": null, "e": 2003, "s": 1964, "text": "Choose open with option from the menu." }, { "code": null, "e": 2060, "s": 2003, "text": "From the drop-down menu either choose Chrome or Firefox." }, { "code": null, "e": 2188, "s": 2060, "text": "If you are not able to find the chrome in-app menu, click on Choose another app. Here, you will find chrome and firefox listed." }, { "code": null, "e": 2272, "s": 2188, "text": "The user will see the following output when the JSON file will open in the browser." }, { "code": null, "e": 2522, "s": 2272, "text": "However, users can directly copy the location path of the JSON file and copy it to the browser to read the file. Every operating system supports different text editors. Here, we have provided the list of best text editors for every operating system." 
}, { "code": null, "e": 2793, "s": 2522, "text": "A user can open and edit JSON files with any text editor from the above table. If users do not want to download any tools or applications to open JSON files, they can edit them with online tools. Users should have a working internet connection to edit JSON files online." }, { "code": null, "e": 2816, "s": 2793, "text": "Edit JSON files online" }, { "code": null, "e": 2871, "s": 2816, "text": "Open any browser and search for ‘Online JSON editor ‘." }, { "code": null, "e": 2910, "s": 2871, "text": "Click on the first link in the result." }, { "code": null, "e": 3050, "s": 2910, "text": "The JSON editor is open and you can read files from the local computer. You can also edit JSON files and save them locally or to the cloud." }, { "code": null, "e": 3122, "s": 3050, "text": "Online tools like these save you from having to download offline tools." }, { "code": null, "e": 3143, "s": 3122, "text": "JavaScript-Questions" }, { "code": null, "e": 3148, "s": 3143, "text": "JSON" }, { "code": null, "e": 3155, "s": 3148, "text": "Picked" }, { "code": null, "e": 3166, "s": 3155, "text": "JavaScript" }, { "code": null, "e": 3183, "s": 3166, "text": "Web Technologies" } ]
Cyclic Redundancy Check and Modulo-2 Division
01 Jul, 2022 CRC or Cyclic Redundancy Check is a method of detecting accidental changes/errors in the communication channel. CRC uses Generator Polynomial which is available on both sender and receiver side. An example generator polynomial is of the form like x3 + x + 1. This generator polynomial represents key 1011. Another example is x2 + 1 that represents key 101. n : Number of bits in data to be sent from sender side. k : Number of bits in the key obtained from generator polynomial. Sender Side (Generation of Encoded Data from Data and Generator Polynomial (or Key)): The binary data is first augmented by adding k-1 zeros in the end of the dataUse modulo-2 binary division to divide binary data by the key and store remainder of division.Append the remainder at the end of the data to form the encoded data and send the same The binary data is first augmented by adding k-1 zeros in the end of the data Use modulo-2 binary division to divide binary data by the key and store remainder of division. Append the remainder at the end of the data to form the encoded data and send the same Receiver Side (Check if there are errors introduced in transmission)Perform modulo-2 division again and if the remainder is 0, then there are no errors. In this article we will focus only on finding the remainder i.e. check word and the code word. Modulo 2 Division:The process of modulo-2 binary division is the same as the familiar division process we use for decimal numbers. Just that instead of subtraction, we use XOR here. In each step, a copy of the divisor (or data) is XORed with the k bits of the dividend (or key). The result of the XOR operation (remainder) is (n-1) bits, which is used for the next step after 1 extra bit is pulled down to make it n bits long. When there are no bits left to pull down, we have a result. The (n-1)-bit remainder which is appended at the sender side. Illustration:Example 1 (No error in transmission): Data word to be sent - 100100 Key - 1101 [ Or generator polynomial x3 + x2 + 1] Sender Side: Therefore, the remainder is 001 and hence the encoded data sent is 100100001. Receiver Side: Code word received at the receiver side 100100001 Therefore, the remainder is all zeros. Hence, the data received has no error. Example 2: (Error in transmission) Data word to be sent - 100100 Key - 1101 Sender Side: Therefore, the remainder is 001 and hence the code word sent is 100100001. Receiver Side Let there be an error in transmission media Code word received at the receiver side - 100000001 Since the remainder is not all zeroes, the erroris detected at the receiver side. ImplementationBelow implementation for generating code word from given binary data and key. C++ Python3 Javascript #include<bits/stdc++.h>using namespace std; // Returns XOR of 'a' and 'b'// (both of same length)string xor1(string a, string b){ // Initialize result string result = ""; int n = b.length(); // Traverse all bits, if bits are // same, then XOR is 0, else 1 for(int i = 1; i < n; i++) { if (a[i] == b[i]) result += "0"; else result += "1"; } return result;} // Performs Modulo-2 divisionstring mod2div(string divident, string divisor){ // Number of bits to be XORed at a time. int pick = divisor.length(); // Slicing the divident to appropriate // length for particular step string tmp = divident.substr(0, pick); int n = divident.length(); while (pick < n) { if (tmp[0] == '1') // Replace the divident by the result // of XOR and pull 1 bit down tmp = xor1(divisor, tmp) + divident[pick]; else // If leftmost bit is '0'. 
// If the leftmost bit of the dividend (or the // part used in each step) is 0, the step cannot // use the regular divisor; we need to use an // all-0s divisor. tmp = xor1(std::string(pick, '0'), tmp) + divident[pick]; // Increment pick to move further pick += 1; } // For the last n bits, we have to carry it out // normally as increased value of pick will cause // Index Out of Bounds. if (tmp[0] == '1') tmp = xor1(divisor, tmp); else tmp = xor1(std::string(pick, '0'), tmp); return tmp;} // Function used at the sender side to encode// data by appending remainder of modular division// at the end of data.void encodeData(string data, string key){ int l_key = key.length(); // Appends n-1 zeroes at end of data string appended_data = (data + std::string( l_key - 1, '0')); string remainder = mod2div(appended_data, key); // Append remainder in the original data string codeword = data + remainder; cout << "Remainder : " << remainder << "\n"; cout << "Encoded Data (Data + Remainder) :" << codeword << "\n";} // Driver codeint main(){ string data = "100100"; string key = "1101"; encodeData(data, key); return 0;} // This code is contributed by MuskanKalra1 # Returns XOR of 'a' and 'b'# (both of same length)def xor(a, b): # initialize result result = [] # Traverse all bits, if bits are # same, then XOR is 0, else 1 for i in range(1, len(b)): if a[i] == b[i]: result.append('0') else: result.append('1') return ''.join(result) # Performs Modulo-2 divisiondef mod2div(dividend, divisor): # Number of bits to be XORed at a time. pick = len(divisor) # Slicing the dividend to appropriate # length for particular step tmp = dividend[0 : pick] while pick < len(dividend): if tmp[0] == '1': # replace the dividend by the result # of XOR and pull 1 bit down tmp = xor(divisor, tmp) + dividend[pick] else: # If leftmost bit is '0' # If the leftmost bit of the dividend (or the # part used in each step) is 0, the step cannot # use the regular divisor; we need to use an # all-0s divisor. tmp = xor('0'*pick, tmp) + dividend[pick] # increment pick to move further pick += 1 # For the last n bits, we have to carry it out # normally as increased value of pick will cause # Index Out of Bounds. if tmp[0] == '1': tmp = xor(divisor, tmp) else: tmp = xor('0'*pick, tmp) checkword = tmp return checkword # Function used at the sender side to encode# data by appending remainder of modular division# at the end of data.def encodeData(data, key): l_key = len(key) # Appends n-1 zeroes at end of data appended_data = data + '0'*(l_key-1) remainder = mod2div(appended_data, key) # Append remainder in the original data codeword = data + remainder print("Remainder : ", remainder) print("Encoded Data (Data + Remainder) : ", codeword) # Driver codedata = "100100"key = "1101"encodeData(data, key) <script>// A JavaScript program for generating code// word from given binary data and key. // Returns XOR of 'a' and 'b'// (both of same length)function xor1(a, b){ // Initialize result let result = ""; let n = b.length; // Traverse all bits, if bits are // same, then XOR is 0, else 1 for (let i = 1; i < n; i++) { if (a[i] == b[i]) { result += "0"; } else { result += "1"; } } return result;} // Performs Modulo-2 divisionfunction mod2div(divident, divisor) { // Number of bits to be XORed at a time. 
let pick = divisor.length; // Slicing the divident to appropriate // length for particular step let tmp = divident.substr(0, pick); let n = divident.length; while (pick < n) { if (tmp[0] == '1') { // Replace the divident by the result // of XOR and pull 1 bit down tmp = xor1(divisor, tmp) + divident[pick]; } else { // If leftmost bit is '0'. // If the leftmost bit of the dividend (or the // part used in each step) is 0, the step cannot // use the regular divisor; we need to use an // all-0s divisor. let str = ""; for (let i = 0; i < pick; i++) { str = str.concat('0'); } tmp = xor1(str, tmp) + divident[pick]; } // Increment pick to move further pick += 1; } // For the last n bits, we have to carry it out // normally as increased value of pick will cause // Index Out of Bounds. if (tmp[0] == '1') { tmp = xor1(divisor, tmp); } else { tmp = xor1(string(pick, '0'), tmp); } return tmp;} // Function used at the sender side to encode// data by appending remainder of modular division// at the end of data.function encodeData(data, key) { let l_key = key.length; // Appends n-1 zeroes at end of data let str = ""; for (let i = 0; i < l_key - 1; i++) { str = str.concat('0'); } console.log(str); let appended_data = data.concat(str); let remainder = mod2div(appended_data, key); // Append remainder in the original data let codeword = data + remainder; // Adding the print statements document.write("Remainder : ", remainder); document.write("Encoded Data (Data + Remainder) :", codeword);} // Driver code{ let data = "100100"; let key = "1101"; encodeData(data, key);} // This code is contributed by Gautam goel (gautamgoel962)</script> Output: Remainder : 001 Encoded Data (Data + Remainder) : 100100001 Time Complexity: O(n) Auxiliary Space: O(n) Note that CRC is mainly designed and used to protect against common of errors on communication channels and NOT suitable protection against intentional alteration of data (See reasons here) Implementation using Bit Manipulation:CRC codeword generation can also be done using bit manipulation methods as follows: C++ Java Python3 C# Javascript // C++ Program to generate CRC codeword#include<stdio.h>#include<iostream>#include<math.h> using namespace std; // function to convert integer to binary stringstring toBin(long long int num){ string bin = ""; while (num){ if (num & 1) bin = "1" + bin; else bin = "0" + bin; num = num>>1; } return bin;} // function to convert binary string to decimallong long int toDec(string bin){ long long int num = 0; for (int i=0; i<bin.length(); i++){ if (bin.at(i)=='1') num += 1 << (bin.length() - i - 1); } return num;} // function to compute CRC and codewordvoid CRC(string dataword, string generator){ int l_gen = generator.length(); long long int gen = toDec(generator); long long int dword = toDec(dataword); // append 0s to dividend long long int dividend = dword << (l_gen-1); // shft specifies the no. 
of least // significant bits not being XORed int shft = (int) ceill(log2l(dividend+1)) - l_gen; long long int rem; while ((dividend >= gen) || (shft >= 0)){ // bitwise XOR the MSBs of dividend with generator // replace the operated MSBs from the dividend with // remainder generated rem = (dividend >> shft) ^ gen; dividend = (dividend & ((1 << shft) - 1)) | (rem << shft); // change shft variable shft = (int) ceill(log2l(dividend + 1)) - l_gen; } // finally, AND the initial dividend with the remainder (=dividend) long long int codeword = (dword << (l_gen - 1)) | dividend; cout << "Remainder: " << toBin(dividend) << endl; cout << "Codeword : " << toBin(codeword) << endl;} int main(){ string dataword, generator; dataword = "10011101"; generator = "1001"; CRC(dataword, generator); return 0;} // Java Program to generate CRC codewordclass GFG { // function to convert integer to binary string static String toBin(int num) { String bin = ""; while (num > 0) { if ((num & 1) != 0) bin = "1" + bin; else bin = "0" + bin; num = num >> 1; } return bin; } // function to convert binary string to decimal static int toDec(String bin) { int num = 0; for (int i = 0; i < bin.length(); i++) { if (bin.charAt(i) == '1') num += 1 << (bin.length() - i - 1); } return num; } // function to compute CRC and codeword static void CRC(String dataword, String generator) { int l_gen = generator.length(); int gen = toDec(generator); int dword = toDec(dataword); // append 0s to dividend int dividend = dword << (l_gen - 1); // shft specifies the no. of least // significant bits not being XORed int shft = (int)Math.ceil(Math.log(dividend + 1) / Math.log(2)) - l_gen; int rem; while ((dividend >= gen) || (shft >= 0)) { // bitwise XOR the MSBs of dividend with // generator replace the operated MSBs from the // dividend with remainder generated rem = (dividend >> shft) ^ gen; dividend = (dividend & ((1 << shft) - 1)) | (rem << shft); // change shft variable shft = (int)Math.ceil(Math.log(dividend + 1) / Math.log(2)) - l_gen; } // finally, AND the initial dividend with the // remainder (=dividend) int codeword = (dword << (l_gen - 1)) | dividend; System.out.println("Remainder: " + toBin(dividend)); System.out.println("Codeword : " + toBin(codeword)); } // Driver Code public static void main(String[] args) { String dataword, generator; dataword = "10011101"; generator = "1001"; CRC(dataword, generator); }} // This code is contributed by phasing17 # Python3 program to generate CRC codewordfrom math import log, ceil def CRC(dataword, generator): dword = int(dataword, 2) l_gen = len(generator) # append 0s to dividend dividend = dword << (l_gen - 1) # shft specifies the no. of least significant # bits not being XORed shft = ceil(log(dividend + 1, 2)) - l_gen # ceil(log(dividend+1 , 2)) is the no. 
of binary # digits in dividend generator = int(generator, 2) while dividend >= generator or shft >= 0: # bitwise XOR the MSBs of dividend with generator # replace the operated MSBs from the dividend with # remainder generated rem = (dividend >> shft) ^ generator dividend = (dividend & ((1 << shft) - 1)) | (rem << shft) # change shft variable shft = ceil(log(dividend+1, 2)) - l_gen # finally, AND the initial dividend with the remainder (=dividend) codeword = dword << (l_gen-1)|dividend print("Remainder:", bin(dividend).lstrip("-0b")) print("Codeword :", bin(codeword).lstrip("-0b")) # Driver codedataword = "10011101"generator = "1001"CRC(dataword, generator) // C# Program to generate CRC codewordusing System; class GFG { // function to convert integer to binary string static string toBin(int num) { string bin = ""; while (num > 0) { if ((num & 1) != 0) bin = "1" + bin; else bin = "0" + bin; num = num >> 1; } return bin; } // function to convert binary string to decimal static int toDec(string bin) { int num = 0; for (int i = 0; i < bin.Length; i++) { if (bin[i] == '1') num += 1 << (bin.Length - i - 1); } return num; } // function to compute CRC and codeword static void CRC(string dataword, string generator) { int l_gen = generator.Length; int gen = toDec(generator); int dword = toDec(dataword); // append 0s to dividend int dividend = dword << (l_gen - 1); // shft specifies the no. of least // significant bits not being XORed int shft = (int)Math.Ceiling(Math.Log(dividend + 1) / Math.Log(2)) - l_gen; int rem = (dividend >> shft) ^ gen; while ((dividend >= gen) || (shft >= 0)) { // bitwise XOR the MSBs of dividend with // generator replace the operated MSBs from the // dividend with remainder generated rem = (dividend >> shft) ^ gen; dividend = (dividend & ((1 << shft) - 1)) | (rem << shft); // change shft variable shft = (int)Math.Ceiling(Math.Log(dividend + 1) / Math.Log(2)) - l_gen; } // finally, AND the initial dividend with the // remainder (=dividend) int codeword = (dword << (l_gen - 1)) | dividend; Console.WriteLine("Remainder: " + toBin(dividend)); Console.WriteLine("Codeword : " + toBin(codeword)); } // Driver Code public static void Main(string[] args) { string dataword, generator; dataword = "10011101"; generator = "1001"; CRC(dataword, generator); }} // This code is contributed by phasing17 // JavaScript Program to generate CRC codeword // function to convert integer to binary stringfunction toBin(num){ var bin = ""; while (num){ if (num & 1) bin = "1" + bin; else bin = "0" + bin; num = num>>1; } return bin;} // function to convert binary string to decimalfunction toDec(bin){ var num = 0; for (var i=0; i<bin.length; i++){ if (bin[i]=='1') num += 1 << (bin.length - i - 1); } return num;} // function to compute CRC and codewordfunction CRC(dataword, generator){ var l_gen = generator.length; var gen = toDec(generator); var dword = toDec(dataword); // append 0s to dividend var dividend = dword << (l_gen-1); // shft specifies the no. 
of least // significant bits not being XORed var shft = Math.ceil(Math.log2(dividend+1)) - l_gen; var rem; while ((dividend >= gen) || (shft >= 0)){ // bitwise XOR the MSBs of dividend with generator // replace the operated MSBs from the dividend with // remainder generated rem = (dividend >> shft) ^ gen; dividend = (dividend & ((1 << shft) - 1)) | (rem << shft); // change shft variable shft = Math.ceil(Math.log2(dividend + 1)) - l_gen; } // finally, AND the initial dividend with the remainder (=dividend) var codeword = (dword << (l_gen - 1)) | dividend; console.log( "Remainder:", toBin(dividend)); console.log("Codeword :", toBin(codeword));} //Driver codevar dataword = "10011101";var generator = "1001";CRC(dataword, generator); //This code is contributed by phasing17 Remainder: 100 Codeword : 10011101100 Time Complexity: O(n) Auxiliary Space: O(n) References:https://en.wikipedia.org/wiki/Cyclic_redundancy_check This article is contributed by Jay Patel. If you like GeeksforGeeks and would like to contribute, you can also write an article and mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above AndrewFurman VarunSharma12 Akanksha_Rai MuskanKalra1 surinderdawra388 gautamgoel962 ranjanrohit840 phasing17 Modular Arithmetic Bit Magic Bit Magic Modular Arithmetic Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. Bitwise Operators in C/C++ Left Shift and Right Shift Operators in C/C++ Travelling Salesman Problem | Set 1 (Naive and Dynamic Programming) Count set bits in an integer How to swap two numbers without using a temporary variable? Program to find whether a given number is power of 2 Little and Big Endian Mystery Bits manipulation (Important tactics) Binary representation of a given number Josephus problem | Set 1 (A O(n) Solution)
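The listings above cover only the sender side. As a minimal sketch of the receiver-side check described earlier (divide the received codeword by the same key and accept it only when the remainder is all zeroes), the following C++ fragment reuses the xor1() and mod2div() helpers from the C++ listing above; the function name checkReceivedData() and the sample codewords are illustrative assumptions, not part of the original article.

// Receiver-side check (illustrative sketch): assumes xor1() and
// mod2div() from the C++ listing above are in scope.
// Returns true when the remainder of codeword / key is all zeroes,
// i.e. no error was detected in transmission.
bool checkReceivedData(string codeword, string key)
{
    string remainder = mod2div(codeword, key);

    // Accept the data only if every remainder bit is '0'
    for (char c : remainder)
        if (c != '0')
            return false;
    return true;
}

// Expected behaviour on the article's examples:
// checkReceivedData("100100001", "1101") -> true  (no error)
// checkReceivedData("100000001", "1101") -> false (error detected)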
[ { "code": null, "e": 54, "s": 26, "text": "\n01 Jul, 2022" }, { "code": null, "e": 412, "s": 54, "text": "CRC or Cyclic Redundancy Check is a method of detecting accidental changes/errors in the communication channel. CRC uses Generator Polynomial which is available on both sender and receiver side. An example generator polynomial is of the form like x3 + x + 1. This generator polynomial represents key 1011. Another example is x2 + 1 that represents key 101. " }, { "code": null, "e": 546, "s": 412, "text": "n : Number of bits in data to be sent \n from sender side. \nk : Number of bits in the key obtained \n from generator polynomial." }, { "code": null, "e": 633, "s": 546, "text": "Sender Side (Generation of Encoded Data from Data and Generator Polynomial (or Key)): " }, { "code": null, "e": 891, "s": 633, "text": "The binary data is first augmented by adding k-1 zeros in the end of the dataUse modulo-2 binary division to divide binary data by the key and store remainder of division.Append the remainder at the end of the data to form the encoded data and send the same" }, { "code": null, "e": 969, "s": 891, "text": "The binary data is first augmented by adding k-1 zeros in the end of the data" }, { "code": null, "e": 1064, "s": 969, "text": "Use modulo-2 binary division to divide binary data by the key and store remainder of division." }, { "code": null, "e": 1151, "s": 1064, "text": "Append the remainder at the end of the data to form the encoded data and send the same" }, { "code": null, "e": 1306, "s": 1151, "text": " Receiver Side (Check if there are errors introduced in transmission)Perform modulo-2 division again and if the remainder is 0, then there are no errors. " }, { "code": null, "e": 1401, "s": 1306, "text": "In this article we will focus only on finding the remainder i.e. check word and the code word." }, { "code": null, "e": 1583, "s": 1401, "text": "Modulo 2 Division:The process of modulo-2 binary division is the same as the familiar division process we use for decimal numbers. Just that instead of subtraction, we use XOR here." }, { "code": null, "e": 1680, "s": 1583, "text": "In each step, a copy of the divisor (or data) is XORed with the k bits of the dividend (or key)." }, { "code": null, "e": 1828, "s": 1680, "text": "The result of the XOR operation (remainder) is (n-1) bits, which is used for the next step after 1 extra bit is pulled down to make it n bits long." }, { "code": null, "e": 1950, "s": 1828, "text": "When there are no bits left to pull down, we have a result. The (n-1)-bit remainder which is appended at the sender side." }, { "code": null, "e": 2002, "s": 1950, "text": "Illustration:Example 1 (No error in transmission): " }, { "code": null, "e": 2096, "s": 2002, "text": "Data word to be sent - 100100\nKey - 1101 [ Or generator polynomial x3 + x2 + 1]\n\nSender Side:" }, { "code": null, "e": 2242, "s": 2096, "text": "Therefore, the remainder is 001 and hence the encoded \ndata sent is 100100001.\n\nReceiver Side:\nCode word received at the receiver side 100100001" }, { "code": null, "e": 2320, "s": 2242, "text": "Therefore, the remainder is all zeros. Hence, the\ndata received has no error." 
}, { "code": null, "e": 2357, "s": 2320, "text": " Example 2: (Error in transmission) " }, { "code": null, "e": 2412, "s": 2357, "text": "Data word to be sent - 100100\nKey - 1101\n\nSender Side:" }, { "code": null, "e": 2599, "s": 2412, "text": "Therefore, the remainder is 001 and hence the \ncode word sent is 100100001.\n\nReceiver Side\nLet there be an error in transmission media\nCode word received at the receiver side - 100000001" }, { "code": null, "e": 2682, "s": 2599, "text": "Since the remainder is not all zeroes, the erroris detected at the receiver side. " }, { "code": null, "e": 2775, "s": 2682, "text": " ImplementationBelow implementation for generating code word from given binary data and key." }, { "code": null, "e": 2779, "s": 2775, "text": "C++" }, { "code": null, "e": 2787, "s": 2779, "text": "Python3" }, { "code": null, "e": 2798, "s": 2787, "text": "Javascript" }, { "code": "#include<bits/stdc++.h>using namespace std; // Returns XOR of 'a' and 'b'// (both of same length)string xor1(string a, string b){ // Initialize result string result = \"\"; int n = b.length(); // Traverse all bits, if bits are // same, then XOR is 0, else 1 for(int i = 1; i < n; i++) { if (a[i] == b[i]) result += \"0\"; else result += \"1\"; } return result;} // Performs Modulo-2 divisionstring mod2div(string divident, string divisor){ // Number of bits to be XORed at a time. int pick = divisor.length(); // Slicing the divident to appropriate // length for particular step string tmp = divident.substr(0, pick); int n = divident.length(); while (pick < n) { if (tmp[0] == '1') // Replace the divident by the result // of XOR and pull 1 bit down tmp = xor1(divisor, tmp) + divident[pick]; else // If leftmost bit is '0'. // If the leftmost bit of the dividend (or the // part used in each step) is 0, the step cannot // use the regular divisor; we need to use an // all-0s divisor. tmp = xor1(std::string(pick, '0'), tmp) + divident[pick]; // Increment pick to move further pick += 1; } // For the last n bits, we have to carry it out // normally as increased value of pick will cause // Index Out of Bounds. if (tmp[0] == '1') tmp = xor1(divisor, tmp); else tmp = xor1(std::string(pick, '0'), tmp); return tmp;} // Function used at the sender side to encode// data by appending remainder of modular division// at the end of data.void encodeData(string data, string key){ int l_key = key.length(); // Appends n-1 zeroes at end of data string appended_data = (data + std::string( l_key - 1, '0')); string remainder = mod2div(appended_data, key); // Append remainder in the original data string codeword = data + remainder; cout << \"Remainder : \" << remainder << \"\\n\"; cout << \"Encoded Data (Data + Remainder) :\" << codeword << \"\\n\";} // Driver codeint main(){ string data = \"100100\"; string key = \"1101\"; encodeData(data, key); return 0;} // This code is contributed by MuskanKalra1", "e": 5268, "s": 2798, "text": null }, { "code": "# Returns XOR of 'a' and 'b'# (both of same length)def xor(a, b): # initialize result result = [] # Traverse all bits, if bits are # same, then XOR is 0, else 1 for i in range(1, len(b)): if a[i] == b[i]: result.append('0') else: result.append('1') return ''.join(result) # Performs Modulo-2 divisiondef mod2div(dividend, divisor): # Number of bits to be XORed at a time. 
pick = len(divisor) # Slicing the dividend to appropriate # length for particular step tmp = dividend[0 : pick] while pick < len(dividend): if tmp[0] == '1': # replace the dividend by the result # of XOR and pull 1 bit down tmp = xor(divisor, tmp) + dividend[pick] else: # If leftmost bit is '0' # If the leftmost bit of the dividend (or the # part used in each step) is 0, the step cannot # use the regular divisor; we need to use an # all-0s divisor. tmp = xor('0'*pick, tmp) + dividend[pick] # increment pick to move further pick += 1 # For the last n bits, we have to carry it out # normally as increased value of pick will cause # Index Out of Bounds. if tmp[0] == '1': tmp = xor(divisor, tmp) else: tmp = xor('0'*pick, tmp) checkword = tmp return checkword # Function used at the sender side to encode# data by appending remainder of modular division# at the end of data.def encodeData(data, key): l_key = len(key) # Appends n-1 zeroes at end of data appended_data = data + '0'*(l_key-1) remainder = mod2div(appended_data, key) # Append remainder in the original data codeword = data + remainder print(\"Remainder : \", remainder) print(\"Encoded Data (Data + Remainder) : \", codeword) # Driver codedata = \"100100\"key = \"1101\"encodeData(data, key)", "e": 7165, "s": 5268, "text": null }, { "code": "<script>// A JavaScript program for generating code// word from given binary data and key. // Returns XOR of 'a' and 'b'// (both of same length)function xor1(a, b){ // Initialize result let result = \"\"; let n = b.length; // Traverse all bits, if bits are // same, then XOR is 0, else 1 for (let i = 1; i < n; i++) { if (a[i] == b[i]) { result += \"0\"; } else { result += \"1\"; } } return result;} // Performs Modulo-2 divisionfunction mod2div(divident, divisor) { // Number of bits to be XORed at a time. let pick = divisor.length; // Slicing the divident to appropriate // length for particular step let tmp = divident.substr(0, pick); let n = divident.length; while (pick < n) { if (tmp[0] == '1') { // Replace the divident by the result // of XOR and pull 1 bit down tmp = xor1(divisor, tmp) + divident[pick]; } else { // If leftmost bit is '0'. // If the leftmost bit of the dividend (or the // part used in each step) is 0, the step cannot // use the regular divisor; we need to use an // all-0s divisor. let str = \"\"; for (let i = 0; i < pick; i++) { str = str.concat('0'); } tmp = xor1(str, tmp) + divident[pick]; } // Increment pick to move further pick += 1; } // For the last n bits, we have to carry it out // normally as increased value of pick will cause // Index Out of Bounds. 
if (tmp[0] == '1') { tmp = xor1(divisor, tmp); } else { tmp = xor1(string(pick, '0'), tmp); } return tmp;} // Function used at the sender side to encode// data by appending remainder of modular division// at the end of data.function encodeData(data, key) { let l_key = key.length; // Appends n-1 zeroes at end of data let str = \"\"; for (let i = 0; i < l_key - 1; i++) { str = str.concat('0'); } console.log(str); let appended_data = data.concat(str); let remainder = mod2div(appended_data, key); // Append remainder in the original data let codeword = data + remainder; // Adding the print statements document.write(\"Remainder : \", remainder); document.write(\"Encoded Data (Data + Remainder) :\", codeword);} // Driver code{ let data = \"100100\"; let key = \"1101\"; encodeData(data, key);} // This code is contributed by Gautam goel (gautamgoel962)</script>", "e": 10222, "s": 7165, "text": null }, { "code": null, "e": 10230, "s": 10222, "text": "Output:" }, { "code": null, "e": 10292, "s": 10230, "text": "Remainder : 001\nEncoded Data (Data + Remainder) : 100100001" }, { "code": null, "e": 10314, "s": 10292, "text": "Time Complexity: O(n)" }, { "code": null, "e": 10336, "s": 10314, "text": "Auxiliary Space: O(n)" }, { "code": null, "e": 10526, "s": 10336, "text": "Note that CRC is mainly designed and used to protect against common of errors on communication channels and NOT suitable protection against intentional alteration of data (See reasons here)" }, { "code": null, "e": 10648, "s": 10526, "text": "Implementation using Bit Manipulation:CRC codeword generation can also be done using bit manipulation methods as follows:" }, { "code": null, "e": 10652, "s": 10648, "text": "C++" }, { "code": null, "e": 10657, "s": 10652, "text": "Java" }, { "code": null, "e": 10665, "s": 10657, "text": "Python3" }, { "code": null, "e": 10668, "s": 10665, "text": "C#" }, { "code": null, "e": 10679, "s": 10668, "text": "Javascript" }, { "code": "// C++ Program to generate CRC codeword#include<stdio.h>#include<iostream>#include<math.h> using namespace std; // function to convert integer to binary stringstring toBin(long long int num){ string bin = \"\"; while (num){ if (num & 1) bin = \"1\" + bin; else bin = \"0\" + bin; num = num>>1; } return bin;} // function to convert binary string to decimallong long int toDec(string bin){ long long int num = 0; for (int i=0; i<bin.length(); i++){ if (bin.at(i)=='1') num += 1 << (bin.length() - i - 1); } return num;} // function to compute CRC and codewordvoid CRC(string dataword, string generator){ int l_gen = generator.length(); long long int gen = toDec(generator); long long int dword = toDec(dataword); // append 0s to dividend long long int dividend = dword << (l_gen-1); // shft specifies the no. 
of least // significant bits not being XORed int shft = (int) ceill(log2l(dividend+1)) - l_gen; long long int rem; while ((dividend >= gen) || (shft >= 0)){ // bitwise XOR the MSBs of dividend with generator // replace the operated MSBs from the dividend with // remainder generated rem = (dividend >> shft) ^ gen; dividend = (dividend & ((1 << shft) - 1)) | (rem << shft); // change shft variable shft = (int) ceill(log2l(dividend + 1)) - l_gen; } // finally, AND the initial dividend with the remainder (=dividend) long long int codeword = (dword << (l_gen - 1)) | dividend; cout << \"Remainder: \" << toBin(dividend) << endl; cout << \"Codeword : \" << toBin(codeword) << endl;} int main(){ string dataword, generator; dataword = \"10011101\"; generator = \"1001\"; CRC(dataword, generator); return 0;}", "e": 12503, "s": 10679, "text": null }, { "code": "// Java Program to generate CRC codewordclass GFG { // function to convert integer to binary string static String toBin(int num) { String bin = \"\"; while (num > 0) { if ((num & 1) != 0) bin = \"1\" + bin; else bin = \"0\" + bin; num = num >> 1; } return bin; } // function to convert binary string to decimal static int toDec(String bin) { int num = 0; for (int i = 0; i < bin.length(); i++) { if (bin.charAt(i) == '1') num += 1 << (bin.length() - i - 1); } return num; } // function to compute CRC and codeword static void CRC(String dataword, String generator) { int l_gen = generator.length(); int gen = toDec(generator); int dword = toDec(dataword); // append 0s to dividend int dividend = dword << (l_gen - 1); // shft specifies the no. of least // significant bits not being XORed int shft = (int)Math.ceil(Math.log(dividend + 1) / Math.log(2)) - l_gen; int rem; while ((dividend >= gen) || (shft >= 0)) { // bitwise XOR the MSBs of dividend with // generator replace the operated MSBs from the // dividend with remainder generated rem = (dividend >> shft) ^ gen; dividend = (dividend & ((1 << shft) - 1)) | (rem << shft); // change shft variable shft = (int)Math.ceil(Math.log(dividend + 1) / Math.log(2)) - l_gen; } // finally, AND the initial dividend with the // remainder (=dividend) int codeword = (dword << (l_gen - 1)) | dividend; System.out.println(\"Remainder: \" + toBin(dividend)); System.out.println(\"Codeword : \" + toBin(codeword)); } // Driver Code public static void main(String[] args) { String dataword, generator; dataword = \"10011101\"; generator = \"1001\"; CRC(dataword, generator); }} // This code is contributed by phasing17", "e": 14411, "s": 12503, "text": null }, { "code": "# Python3 program to generate CRC codewordfrom math import log, ceil def CRC(dataword, generator): dword = int(dataword, 2) l_gen = len(generator) # append 0s to dividend dividend = dword << (l_gen - 1) # shft specifies the no. of least significant # bits not being XORed shft = ceil(log(dividend + 1, 2)) - l_gen # ceil(log(dividend+1 , 2)) is the no. 
of binary # digits in dividend generator = int(generator, 2) while dividend >= generator or shft >= 0: # bitwise XOR the MSBs of dividend with generator # replace the operated MSBs from the dividend with # remainder generated rem = (dividend >> shft) ^ generator dividend = (dividend & ((1 << shft) - 1)) | (rem << shft) # change shft variable shft = ceil(log(dividend+1, 2)) - l_gen # finally, AND the initial dividend with the remainder (=dividend) codeword = dword << (l_gen-1)|dividend print(\"Remainder:\", bin(dividend).lstrip(\"-0b\")) print(\"Codeword :\", bin(codeword).lstrip(\"-0b\")) # Driver codedataword = \"10011101\"generator = \"1001\"CRC(dataword, generator)", "e": 15548, "s": 14411, "text": null }, { "code": "// C# Program to generate CRC codewordusing System; class GFG { // function to convert integer to binary string static string toBin(int num) { string bin = \"\"; while (num > 0) { if ((num & 1) != 0) bin = \"1\" + bin; else bin = \"0\" + bin; num = num >> 1; } return bin; } // function to convert binary string to decimal static int toDec(string bin) { int num = 0; for (int i = 0; i < bin.Length; i++) { if (bin[i] == '1') num += 1 << (bin.Length - i - 1); } return num; } // function to compute CRC and codeword static void CRC(string dataword, string generator) { int l_gen = generator.Length; int gen = toDec(generator); int dword = toDec(dataword); // append 0s to dividend int dividend = dword << (l_gen - 1); // shft specifies the no. of least // significant bits not being XORed int shft = (int)Math.Ceiling(Math.Log(dividend + 1) / Math.Log(2)) - l_gen; int rem = (dividend >> shft) ^ gen; while ((dividend >= gen) || (shft >= 0)) { // bitwise XOR the MSBs of dividend with // generator replace the operated MSBs from the // dividend with remainder generated rem = (dividend >> shft) ^ gen; dividend = (dividend & ((1 << shft) - 1)) | (rem << shft); // change shft variable shft = (int)Math.Ceiling(Math.Log(dividend + 1) / Math.Log(2)) - l_gen; } // finally, AND the initial dividend with the // remainder (=dividend) int codeword = (dword << (l_gen - 1)) | dividend; Console.WriteLine(\"Remainder: \" + toBin(dividend)); Console.WriteLine(\"Codeword : \" + toBin(codeword)); } // Driver Code public static void Main(string[] args) { string dataword, generator; dataword = \"10011101\"; generator = \"1001\"; CRC(dataword, generator); }} // This code is contributed by phasing17", "e": 17775, "s": 15548, "text": null }, { "code": "// JavaScript Program to generate CRC codeword // function to convert integer to binary stringfunction toBin(num){ var bin = \"\"; while (num){ if (num & 1) bin = \"1\" + bin; else bin = \"0\" + bin; num = num>>1; } return bin;} // function to convert binary string to decimalfunction toDec(bin){ var num = 0; for (var i=0; i<bin.length; i++){ if (bin[i]=='1') num += 1 << (bin.length - i - 1); } return num;} // function to compute CRC and codewordfunction CRC(dataword, generator){ var l_gen = generator.length; var gen = toDec(generator); var dword = toDec(dataword); // append 0s to dividend var dividend = dword << (l_gen-1); // shft specifies the no. 
of least // significant bits not being XORed var shft = Math.ceil(Math.log2(dividend+1)) - l_gen; var rem; while ((dividend >= gen) || (shft >= 0)){ // bitwise XOR the MSBs of dividend with generator // replace the operated MSBs from the dividend with // remainder generated rem = (dividend >> shft) ^ gen; dividend = (dividend & ((1 << shft) - 1)) | (rem << shft); // change shft variable shft = Math.ceil(Math.log2(dividend + 1)) - l_gen; } // finally, AND the initial dividend with the remainder (=dividend) var codeword = (dword << (l_gen - 1)) | dividend; console.log( \"Remainder:\", toBin(dividend)); console.log(\"Codeword :\", toBin(codeword));} //Driver codevar dataword = \"10011101\";var generator = \"1001\";CRC(dataword, generator); //This code is contributed by phasing17", "e": 19415, "s": 17775, "text": null }, { "code": null, "e": 19453, "s": 19415, "text": "Remainder: 100\nCodeword : 10011101100" }, { "code": null, "e": 19477, "s": 19455, "text": "Time Complexity: O(n)" }, { "code": null, "e": 19499, "s": 19477, "text": "Auxiliary Space: O(n)" }, { "code": null, "e": 19564, "s": 19499, "text": "References:https://en.wikipedia.org/wiki/Cyclic_redundancy_check" }, { "code": null, "e": 19828, "s": 19564, "text": "This article is contributed by Jay Patel. If you like GeeksforGeeks and would like to contribute, you can also write an article and mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks." }, { "code": null, "e": 19952, "s": 19828, "text": "Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above" }, { "code": null, "e": 19967, "s": 19954, "text": "AndrewFurman" }, { "code": null, "e": 19981, "s": 19967, "text": "VarunSharma12" }, { "code": null, "e": 19994, "s": 19981, "text": "Akanksha_Rai" }, { "code": null, "e": 20007, "s": 19994, "text": "MuskanKalra1" }, { "code": null, "e": 20024, "s": 20007, "text": "surinderdawra388" }, { "code": null, "e": 20038, "s": 20024, "text": "gautamgoel962" }, { "code": null, "e": 20053, "s": 20038, "text": "ranjanrohit840" }, { "code": null, "e": 20063, "s": 20053, "text": "phasing17" }, { "code": null, "e": 20082, "s": 20063, "text": "Modular Arithmetic" }, { "code": null, "e": 20092, "s": 20082, "text": "Bit Magic" }, { "code": null, "e": 20102, "s": 20092, "text": "Bit Magic" }, { "code": null, "e": 20121, "s": 20102, "text": "Modular Arithmetic" }, { "code": null, "e": 20219, "s": 20121, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 20246, "s": 20219, "text": "Bitwise Operators in C/C++" }, { "code": null, "e": 20292, "s": 20246, "text": "Left Shift and Right Shift Operators in C/C++" }, { "code": null, "e": 20360, "s": 20292, "text": "Travelling Salesman Problem | Set 1 (Naive and Dynamic Programming)" }, { "code": null, "e": 20389, "s": 20360, "text": "Count set bits in an integer" }, { "code": null, "e": 20449, "s": 20389, "text": "How to swap two numbers without using a temporary variable?" }, { "code": null, "e": 20502, "s": 20449, "text": "Program to find whether a given number is power of 2" }, { "code": null, "e": 20532, "s": 20502, "text": "Little and Big Endian Mystery" }, { "code": null, "e": 20570, "s": 20532, "text": "Bits manipulation (Important tactics)" }, { "code": null, "e": 20610, "s": 20570, "text": "Binary representation of a given number" } ]
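As a small companion to the generator-polynomial examples quoted above (x^3 + x + 1 corresponds to the key 1011, and x^3 + x^2 + 1 to 1101), the sketch below derives the key string from the exponents of the polynomial; the helper name polynomialToKey() and its inputs are illustrative assumptions, not part of the original article.

// Illustrative sketch: build the CRC key string from the exponents
// of a generator polynomial.
#include <algorithm>
#include <string>
#include <vector>
using namespace std;

string polynomialToKey(const vector<int>& exponents)
{
    int degree = *max_element(exponents.begin(), exponents.end());

    // One character per power of x, from x^degree down to x^0
    string key(degree + 1, '0');
    for (int e : exponents)
        key[degree - e] = '1';
    return key;
}

// polynomialToKey({3, 1, 0}) -> "1011" (x^3 + x + 1)
// polynomialToKey({3, 2, 0}) -> "1101" (x^3 + x^2 + 1)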
Matplotlib.axes.Axes.get_xticklabels() in Python
19 Apr, 2020 Matplotlib is a library in Python and it is numerical – mathematical extension for NumPy library. The Axes Class contains most of the figure elements: Axis, Tick, Line2D, Text, Polygon, etc., and sets the coordinate system. And the instances of Axes supports callbacks through a callbacks attribute. The Axes.get_xticklabels() function in axes module of matplotlib library is used to return the x ticks labels as a list of Text instances. Syntax:Axes.get_xticklabels(self, minor=False, which=None) Parameters: This method accepts the following parameters. minor : This parameter is used whether set major ticklabels or to set minor ticklabels which : This parameter is used to selects which ticklabels to return. Return value: This method returns a list of Text instances. Below examples illustrate the matplotlib.axes.Axes.get_xticklabels() function in matplotlib.axes: Example 1: # Implementation of matplotlib functionimport numpy as npimport matplotlib.pyplot as pltfrom matplotlib.patches import Polygon def func(x): return (x - 4) * (x - 6) * (x - 5) + 100 a, b = 2, 9 # integral limitsx = np.linspace(0, 10)y = func(x) fig, ax = plt.subplots()ax.plot(x, y, "k", linewidth = 2)ax.set_ylim(bottom = 0) # Make the shaded regionix = np.linspace(a, b)iy = func(ix)verts = [(a, 0), *zip(ix, iy), (b, 0)]poly = Polygon(verts, facecolor ='green', edgecolor ='0.5', alpha = 0.4)ax.add_patch(poly) ax.text(0.5 * (a + b), 30, r"$\int_a ^ b f(x)\mathrm{d}x$", horizontalalignment ='center', fontsize = 20) fig.text(0.9, 0.05, '$x$')fig.text(0.1, 0.9, '$y$') ax.spines['right'].set_visible(False)ax.spines['top'].set_visible(False) ax.set_xticks((a, b-a, b))ax.set_xticklabels(('$a$', '$valx$', '$b$')) w = ax.get_xticklabels()strr = str(list(w))ax.text(3.4, 200, "xticklabels values : ", fontweight ="bold")ax.text(1, 185, strr, fontweight ="bold") fig.suptitle('matplotlib.axes.Axes.get_xticklabels() \function Example\n\n', fontweight ="bold")fig.canvas.draw()plt.show() Output: Example 2: # Implementation of matplotlib functionimport numpy as npimport matplotlib.pyplot as plt # Fixing random state for reproducibilitynp.random.seed(19680801) x = np.linspace(0, 2 * np.pi, 100)y = np.sin(x)y2 = y + 0.2 * np.random.normal(size = x.shape) fig, ax = plt.subplots()ax.plot(x, y)ax.plot(x, y2) ax.set_xticks([0, np.pi, 2 * np.pi])ax.set_xticklabels(['0', r'$pi$', r'2$pi$']) ax.spines['left'].set_bounds(-1, 1)ax.spines['right'].set_visible(False)ax.spines['top'].set_visible(False) w = ax.get_xticklabels()strr = str(list(w))ax.text(2.5, 0, "xticklabels values : ", fontweight ="bold")ax.text(1, -0.2, strr, fontweight ="bold") fig.suptitle('matplotlib.axes.Axes.get_xticklabels()\ function Example\n\n', fontweight ="bold")fig.canvas.draw()plt.show() Output: Python-matplotlib Python Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. How to Install PIP on Windows ? Python Classes and Objects Python OOPs Concepts Python | os.path.join() method How to drop one or multiple columns in Pandas Dataframe Introduction To PYTHON How To Convert Python Dictionary To JSON? Check if element exists in list in Python Python | datetime.timedelta() function Python | Get unique values from a list
[ { "code": null, "e": 28, "s": 0, "text": "\n19 Apr, 2020" }, { "code": null, "e": 328, "s": 28, "text": "Matplotlib is a library in Python and it is numerical – mathematical extension for NumPy library. The Axes Class contains most of the figure elements: Axis, Tick, Line2D, Text, Polygon, etc., and sets the coordinate system. And the instances of Axes supports callbacks through a callbacks attribute." }, { "code": null, "e": 467, "s": 328, "text": "The Axes.get_xticklabels() function in axes module of matplotlib library is used to return the x ticks labels as a list of Text instances." }, { "code": null, "e": 526, "s": 467, "text": "Syntax:Axes.get_xticklabels(self, minor=False, which=None)" }, { "code": null, "e": 584, "s": 526, "text": "Parameters: This method accepts the following parameters." }, { "code": null, "e": 671, "s": 584, "text": "minor : This parameter is used whether set major ticklabels or to set minor ticklabels" }, { "code": null, "e": 741, "s": 671, "text": "which : This parameter is used to selects which ticklabels to return." }, { "code": null, "e": 801, "s": 741, "text": "Return value: This method returns a list of Text instances." }, { "code": null, "e": 899, "s": 801, "text": "Below examples illustrate the matplotlib.axes.Axes.get_xticklabels() function in matplotlib.axes:" }, { "code": null, "e": 910, "s": 899, "text": "Example 1:" }, { "code": "# Implementation of matplotlib functionimport numpy as npimport matplotlib.pyplot as pltfrom matplotlib.patches import Polygon def func(x): return (x - 4) * (x - 6) * (x - 5) + 100 a, b = 2, 9 # integral limitsx = np.linspace(0, 10)y = func(x) fig, ax = plt.subplots()ax.plot(x, y, \"k\", linewidth = 2)ax.set_ylim(bottom = 0) # Make the shaded regionix = np.linspace(a, b)iy = func(ix)verts = [(a, 0), *zip(ix, iy), (b, 0)]poly = Polygon(verts, facecolor ='green', edgecolor ='0.5', alpha = 0.4)ax.add_patch(poly) ax.text(0.5 * (a + b), 30, r\"$\\int_a ^ b f(x)\\mathrm{d}x$\", horizontalalignment ='center', fontsize = 20) fig.text(0.9, 0.05, '$x$')fig.text(0.1, 0.9, '$y$') ax.spines['right'].set_visible(False)ax.spines['top'].set_visible(False) ax.set_xticks((a, b-a, b))ax.set_xticklabels(('$a$', '$valx$', '$b$')) w = ax.get_xticklabels()strr = str(list(w))ax.text(3.4, 200, \"xticklabels values : \", fontweight =\"bold\")ax.text(1, 185, strr, fontweight =\"bold\") fig.suptitle('matplotlib.axes.Axes.get_xticklabels() \\function Example\\n\\n', fontweight =\"bold\")fig.canvas.draw()plt.show()", "e": 2086, "s": 910, "text": null }, { "code": null, "e": 2094, "s": 2086, "text": "Output:" }, { "code": null, "e": 2105, "s": 2094, "text": "Example 2:" }, { "code": "# Implementation of matplotlib functionimport numpy as npimport matplotlib.pyplot as plt # Fixing random state for reproducibilitynp.random.seed(19680801) x = np.linspace(0, 2 * np.pi, 100)y = np.sin(x)y2 = y + 0.2 * np.random.normal(size = x.shape) fig, ax = plt.subplots()ax.plot(x, y)ax.plot(x, y2) ax.set_xticks([0, np.pi, 2 * np.pi])ax.set_xticklabels(['0', r'$pi$', r'2$pi$']) ax.spines['left'].set_bounds(-1, 1)ax.spines['right'].set_visible(False)ax.spines['top'].set_visible(False) w = ax.get_xticklabels()strr = str(list(w))ax.text(2.5, 0, \"xticklabels values : \", fontweight =\"bold\")ax.text(1, -0.2, strr, fontweight =\"bold\") fig.suptitle('matplotlib.axes.Axes.get_xticklabels()\\ function Example\\n\\n', fontweight =\"bold\")fig.canvas.draw()plt.show()", "e": 2897, "s": 2105, "text": null }, { "code": null, "e": 2905, "s": 2897, "text": "Output:" }, { "code": 
null, "e": 2923, "s": 2905, "text": "Python-matplotlib" }, { "code": null, "e": 2930, "s": 2923, "text": "Python" }, { "code": null, "e": 3028, "s": 2930, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 3060, "s": 3028, "text": "How to Install PIP on Windows ?" }, { "code": null, "e": 3087, "s": 3060, "text": "Python Classes and Objects" }, { "code": null, "e": 3108, "s": 3087, "text": "Python OOPs Concepts" }, { "code": null, "e": 3139, "s": 3108, "text": "Python | os.path.join() method" }, { "code": null, "e": 3195, "s": 3139, "text": "How to drop one or multiple columns in Pandas Dataframe" }, { "code": null, "e": 3218, "s": 3195, "text": "Introduction To PYTHON" }, { "code": null, "e": 3260, "s": 3218, "text": "How To Convert Python Dictionary To JSON?" }, { "code": null, "e": 3302, "s": 3260, "text": "Check if element exists in list in Python" }, { "code": null, "e": 3341, "s": 3302, "text": "Python | datetime.timedelta() function" } ]
Minimum number of basic logic gates required to realize given Boolean expression
13 Sep, 2021 Given a string S of length N representing a boolean expression, the task is to find the minimum number of AND, OR, and NOT gates required to realize the given expression. Examples: Input: S = “A+B.C”Output: 2Explanation: Realizing the expression requires 1 AND gate represented by ‘.’ and 1 OR gate represented by ‘+’. Input: S = “(1 – A). B+C”Output: 3Explanation: Realizing the expression requires 1 AND gate represented by ‘.’ and 1 OR gate represented by ‘+’ and 1 NOT gate represented by ‘-‘. Approach: Follow the steps below to solve the problem: Iterate over the characters of the string.Initialize, count of gates to 0.If the current character is either ‘.’ or ‘+’, or ‘1’, then increment the count of gates by 1Print the count of gates required. Iterate over the characters of the string. Initialize, count of gates to 0. If the current character is either ‘.’ or ‘+’, or ‘1’, then increment the count of gates by 1 Print the count of gates required. Below is the implementation of the above approach: C++ Java Python3 C# Javascript // C++ implementation of// the above approach#include <bits/stdc++.h>using namespace std; // Function to count the total// number of gates required to// realize the boolean expression Svoid numberOfGates(string s){ // Length of the string int N = s.size(); // Stores the count // of total gates int ans = 0; // Traverse the string for (int i = 0; i < (int)s.size(); i++) { // AND, OR and NOT Gate if (s[i] == '.' || s[i] == '+' || s[i] == '1') { ans++; } } // Print the count // of gates required cout << ans;} // Driver Codeint main(){ // Input string S = "(1-A).B+C"; // Function call to count the // total number of gates required numberOfGates(S);} // Java implementation of// the above approachclass GFG{ // Function to count the total// number of gates required to// realize the boolean expression Sstatic void numberOfGates(String s){ // Length of the string int N = s.length(); // Stores the count // of total gates int ans = 0; // Traverse the string for(int i = 0; i < (int)s.length(); i++) { // AND, OR and NOT Gate if (s.charAt(i) == '.' || s.charAt(i) == '+' || s.charAt(i) == '1') { ans++; } } // Print the count // of gates required System.out.println(ans);} // Driver Codepublic static void main(String[] args){ // Input String S = "(1-A).B+C"; // Function call to count the // total number of gates required numberOfGates(S);}} // This code is contributed by user_qa7r # Python3 implementation of# the above approach # Function to count the total# number of gates required to# realize the boolean expression Sdef numberOfGates(s): # Length of the string N = len(s) # Stores the count # of total gates ans = 0 # Traverse the string for i in range(len(s)): # AND, OR and NOT Gate if (s[i] == '.' or s[i] == '+' or s[i] == '1'): ans += 1 # Print the count # of gates required print(ans, end = "") # Driver Codeif __name__ == "__main__": # Input S = "(1-A).B+C" # Function call to count the # total number of gates required numberOfGates(S) # This code is contributed by AnkThon // C# implementation of// the above approachusing System;public class GFG{ // Function to count the total// number of gates required to// realize the boolean expression Sstatic void numberOfGates(string s){ // Length of the string int N = s.Length; // Stores the count // of total gates int ans = 0; // Traverse the string for(int i = 0; i < s.Length; i++) { // AND, OR and NOT Gate if (s[i] == '.' 
|| s[i] == '+' || s[i] == '1') { ans++; } } // Print the count // of gates required Console.WriteLine(ans);} // Driver Codepublic static void Main(string[] args){ // Input string S = "(1-A).B+C"; // Function call to count the // total number of gates required numberOfGates(S);}} // This code is contributed by AnkThon <script>// JavaScript program for the above approach // Function to count the total// number of gates required to// realize the boolean expression Sfunction numberOfGates(s){ // Length of the string let N = s.length; // Stores the count // of total gates let ans = 0; // Traverse the string for(let i = 0; i < s.length; i++) { // AND, OR and NOT Gate if (s[i] == '.' || s[i] == '+' || s[i] == '1') { ans++; } } // Print the count // of gates required document.write(ans);} // Driver Code // Input let S = "(1-A).B+C"; // Function call to count the // total number of gates required numberOfGates(S); </script> 3 Time Complexity: O(N)Auxiliary Space: O(1) hritikrommie ankthon chinmoy1997pal arorakashish0911 logical-thinking Technical Scripter 2020 Mathematical Searching Strings Technical Scripter Searching Strings Mathematical logical-thinking Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. Merge two sorted arrays Operators in C / C++ Prime Numbers Find minimum number of coins that make a given value Minimum number of jumps to reach end Binary Search Maximum and minimum of an array using minimum number of comparisons Linear Search K'th Smallest/Largest Element in Unsorted Array | Set 1 Search an element in a sorted and rotated array
[ { "code": null, "e": 52, "s": 24, "text": "\n13 Sep, 2021" }, { "code": null, "e": 223, "s": 52, "text": "Given a string S of length N representing a boolean expression, the task is to find the minimum number of AND, OR, and NOT gates required to realize the given expression." }, { "code": null, "e": 233, "s": 223, "text": "Examples:" }, { "code": null, "e": 372, "s": 233, "text": "Input: S = “A+B.C”Output: 2Explanation: Realizing the expression requires 1 AND gate represented by ‘.’ and 1 OR gate represented by ‘+’. " }, { "code": null, "e": 552, "s": 372, "text": "Input: S = “(1 – A). B+C”Output: 3Explanation: Realizing the expression requires 1 AND gate represented by ‘.’ and 1 OR gate represented by ‘+’ and 1 NOT gate represented by ‘-‘. " }, { "code": null, "e": 607, "s": 552, "text": "Approach: Follow the steps below to solve the problem:" }, { "code": null, "e": 809, "s": 607, "text": "Iterate over the characters of the string.Initialize, count of gates to 0.If the current character is either ‘.’ or ‘+’, or ‘1’, then increment the count of gates by 1Print the count of gates required." }, { "code": null, "e": 852, "s": 809, "text": "Iterate over the characters of the string." }, { "code": null, "e": 885, "s": 852, "text": "Initialize, count of gates to 0." }, { "code": null, "e": 979, "s": 885, "text": "If the current character is either ‘.’ or ‘+’, or ‘1’, then increment the count of gates by 1" }, { "code": null, "e": 1014, "s": 979, "text": "Print the count of gates required." }, { "code": null, "e": 1065, "s": 1014, "text": "Below is the implementation of the above approach:" }, { "code": null, "e": 1069, "s": 1065, "text": "C++" }, { "code": null, "e": 1074, "s": 1069, "text": "Java" }, { "code": null, "e": 1082, "s": 1074, "text": "Python3" }, { "code": null, "e": 1085, "s": 1082, "text": "C#" }, { "code": null, "e": 1096, "s": 1085, "text": "Javascript" }, { "code": "// C++ implementation of// the above approach#include <bits/stdc++.h>using namespace std; // Function to count the total// number of gates required to// realize the boolean expression Svoid numberOfGates(string s){ // Length of the string int N = s.size(); // Stores the count // of total gates int ans = 0; // Traverse the string for (int i = 0; i < (int)s.size(); i++) { // AND, OR and NOT Gate if (s[i] == '.' || s[i] == '+' || s[i] == '1') { ans++; } } // Print the count // of gates required cout << ans;} // Driver Codeint main(){ // Input string S = \"(1-A).B+C\"; // Function call to count the // total number of gates required numberOfGates(S);}", "e": 1845, "s": 1096, "text": null }, { "code": "// Java implementation of// the above approachclass GFG{ // Function to count the total// number of gates required to// realize the boolean expression Sstatic void numberOfGates(String s){ // Length of the string int N = s.length(); // Stores the count // of total gates int ans = 0; // Traverse the string for(int i = 0; i < (int)s.length(); i++) { // AND, OR and NOT Gate if (s.charAt(i) == '.' 
|| s.charAt(i) == '+' || s.charAt(i) == '1') { ans++; } } // Print the count // of gates required System.out.println(ans);} // Driver Codepublic static void main(String[] args){ // Input String S = \"(1-A).B+C\"; // Function call to count the // total number of gates required numberOfGates(S);}} // This code is contributed by user_qa7r", "e": 2713, "s": 1845, "text": null }, { "code": "# Python3 implementation of# the above approach # Function to count the total# number of gates required to# realize the boolean expression Sdef numberOfGates(s): # Length of the string N = len(s) # Stores the count # of total gates ans = 0 # Traverse the string for i in range(len(s)): # AND, OR and NOT Gate if (s[i] == '.' or s[i] == '+' or s[i] == '1'): ans += 1 # Print the count # of gates required print(ans, end = \"\") # Driver Codeif __name__ == \"__main__\": # Input S = \"(1-A).B+C\" # Function call to count the # total number of gates required numberOfGates(S) # This code is contributed by AnkThon", "e": 3407, "s": 2713, "text": null }, { "code": "// C# implementation of// the above approachusing System;public class GFG{ // Function to count the total// number of gates required to// realize the boolean expression Sstatic void numberOfGates(string s){ // Length of the string int N = s.Length; // Stores the count // of total gates int ans = 0; // Traverse the string for(int i = 0; i < s.Length; i++) { // AND, OR and NOT Gate if (s[i] == '.' || s[i] == '+' || s[i] == '1') { ans++; } } // Print the count // of gates required Console.WriteLine(ans);} // Driver Codepublic static void Main(string[] args){ // Input string S = \"(1-A).B+C\"; // Function call to count the // total number of gates required numberOfGates(S);}} // This code is contributed by AnkThon", "e": 4260, "s": 3407, "text": null }, { "code": "<script>// JavaScript program for the above approach // Function to count the total// number of gates required to// realize the boolean expression Sfunction numberOfGates(s){ // Length of the string let N = s.length; // Stores the count // of total gates let ans = 0; // Traverse the string for(let i = 0; i < s.length; i++) { // AND, OR and NOT Gate if (s[i] == '.' 
|| s[i] == '+' || s[i] == '1') { ans++; } } // Print the count // of gates required document.write(ans);} // Driver Code // Input let S = \"(1-A).B+C\"; // Function call to count the // total number of gates required numberOfGates(S); </script>", "e": 5011, "s": 4260, "text": null }, { "code": null, "e": 5013, "s": 5011, "text": "3" }, { "code": null, "e": 5058, "s": 5015, "text": "Time Complexity: O(N)Auxiliary Space: O(1)" }, { "code": null, "e": 5073, "s": 5060, "text": "hritikrommie" }, { "code": null, "e": 5081, "s": 5073, "text": "ankthon" }, { "code": null, "e": 5096, "s": 5081, "text": "chinmoy1997pal" }, { "code": null, "e": 5113, "s": 5096, "text": "arorakashish0911" }, { "code": null, "e": 5130, "s": 5113, "text": "logical-thinking" }, { "code": null, "e": 5154, "s": 5130, "text": "Technical Scripter 2020" }, { "code": null, "e": 5167, "s": 5154, "text": "Mathematical" }, { "code": null, "e": 5177, "s": 5167, "text": "Searching" }, { "code": null, "e": 5185, "s": 5177, "text": "Strings" }, { "code": null, "e": 5204, "s": 5185, "text": "Technical Scripter" }, { "code": null, "e": 5214, "s": 5204, "text": "Searching" }, { "code": null, "e": 5222, "s": 5214, "text": "Strings" }, { "code": null, "e": 5235, "s": 5222, "text": "Mathematical" }, { "code": null, "e": 5252, "s": 5235, "text": "logical-thinking" }, { "code": null, "e": 5350, "s": 5252, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 5374, "s": 5350, "text": "Merge two sorted arrays" }, { "code": null, "e": 5395, "s": 5374, "text": "Operators in C / C++" }, { "code": null, "e": 5409, "s": 5395, "text": "Prime Numbers" }, { "code": null, "e": 5462, "s": 5409, "text": "Find minimum number of coins that make a given value" }, { "code": null, "e": 5499, "s": 5462, "text": "Minimum number of jumps to reach end" }, { "code": null, "e": 5513, "s": 5499, "text": "Binary Search" }, { "code": null, "e": 5581, "s": 5513, "text": "Maximum and minimum of an array using minimum number of comparisons" }, { "code": null, "e": 5595, "s": 5581, "text": "Linear Search" }, { "code": null, "e": 5651, "s": 5595, "text": "K'th Smallest/Largest Element in Unsorted Array | Set 1" } ]
std::search in C++
27 Dec, 2021 std::search is defined in the header file <algorithm> and used to find out the presence of a subsequence satisfying a condition (equality if no such predicate is defined) with respect to another sequence. It searches the sequence [first1, last1) for the first occurrence of the subsequence defined by [first2, last2), and returns an iterator to its first element of the occurrence, or last1 if no occurrences are found. It compares the elements in both ranges sequentially using operator== (version 1) or based on any given predicate (version 2). A subsequence of [first1, last1) is considered a match only when this is true for all the elements of [first2, last2). Finally, std::search returns the first of such occurrences. It can be used in either of the two versions, as depicted below : For comparing elements using == :ForwardIterator1 search (ForwardIterator1 first1, ForwardIterator1 last1,ForwardIterator2 first2, ForwardIterator2 last2);first1:Forward iterator to beginning of first container to be searched into.last1:Forward iterator to end of first container to be searched into.first2:Forward iterator to the beginning of the subsequence of second container to be searched for.last2:Forward iterator to the ending of the subsequence of second container to be searched for.Returns: an iterator to the first element of the first occurrence of [first2, last2) in [first1, last1), or last1if no occurrences are found.CPPCPP// C++ program to demonstrate the use of std::search #include <iostream>#include <vector>#include <algorithm>using namespace std;int main(){ int i, j; // Declaring the sequence to be searched into vector<int> v1 = { 1, 2, 3, 4, 5, 6, 7 }; // Declaring the subsequence to be searched for vector<int> v2 = { 3, 4, 5 }; // Declaring an iterator for storing the returning pointer vector<int>::iterator i1; // Using std::search and storing the result in // iterator i1 i1 = std::search(v1.begin(), v1.end(), v2.begin(), v2.end()); // checking if iterator i1 contains end pointer of v1 or not if (i1 != v1.end()) { cout << "vector2 is present at index " << (i1 - v1.begin()); } else { cout << "vector2 is not present in vector1"; } return 0;}Output:vector2 is present at index 2 For comparison based on a predicate (or condition) :ForwardIterator1 search (ForwardIterator1 first1, ForwardIterator1 last1,ForwardIterator2 first2, ForwardIterator2 last2,BinaryPredicate pred);All the arguments are same as previous template, just one more argument is addedpred: Binary function that accepts two elements as arguments (one of each of the two containers, in the same order), and returns a value convertible to bool. The returned value indicates whether the elements are considered to match in the context of this function. The function shall not modify any of its arguments. 
This can either be a function pointer or a function object. Returns: an iterator to the first element of the first occurrence of [first2, last2) satisfying the predicate in [first1, last1), or last1 if no occurrences are found.

CPP

// C++ program to demonstrate the use of std::search
// with binary predicate
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

// Defining the BinaryPredicate function
bool pred(int i, int j)
{
    if (i > j) {
        return 1;
    }
    else {
        return 0;
    }
}

int main()
{
    // Declaring the sequence to be searched into
    vector<int> v1 = { 1, 2, 3, 4, 5, 6, 7 };

    // Declaring the subsequence to be compared to based
    // on predicate
    vector<int> v2 = { 3, 4, 5 };

    // Declaring an iterator for storing the returning pointer
    vector<int>::iterator i1;

    // Using std::search and storing the result in
    // iterator i1 based on predicate pred
    i1 = std::search(v1.begin(), v1.end(), v2.begin(), v2.end(), pred);

    // checking if iterator i1 contains end pointer of v1 or not
    if (i1 != v1.end()) {
        cout << "vector1 elements are greater than vector2 starting "
             << "from position " << (i1 - v1.begin());
    }
    else {
        cout << "vector1 elements are not greater than vector2 "
             << "elements consecutively.";
    }
    return 0;
}

Output:

vector1 elements are greater than vector2 starting from position 3

Related Articles:

std::search_n
std::find
std::find_if, std::find_if_not
std::nth_element
std::find_end

This article is contributed by Mrigendra Singh. If you like GeeksforGeeks and would like to contribute, you can also write an article using contribute.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks.

Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.
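As a further illustration of the iterator-pair interface used above, std::search is not limited to vectors: any pair of forward iterators works, so the same calls apply to plain arrays and to std::string. The sketch below is only illustrative; the values and variable names are made up and are not part of the original examples.

CPP

// Illustrative sketch: std::search over a raw array and over a std::string
#include <iostream>
#include <algorithm>
#include <iterator>
#include <string>
using namespace std;

int main()
{
    int haystack[] = { 10, 20, 30, 40, 50 };
    int needle[] = { 30, 40 };

    // std::begin / std::end provide iterator pairs for raw arrays
    int* it = std::search(std::begin(haystack), std::end(haystack),
                          std::begin(needle), std::end(needle));
    if (it != std::end(haystack))
        cout << "needle found at index " << (it - haystack) << '\n';

    string text = "geeksforgeeks";
    string word = "for";

    // The same algorithm works over std::string iterators
    auto pos = std::search(text.begin(), text.end(),
                           word.begin(), word.end());
    if (pos != text.end())
        cout << "\"for\" found at index " << (pos - text.begin()) << '\n';

    return 0;
}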
[ { "code": null, "e": 54, "s": 26, "text": "\n27 Dec, 2021" }, { "code": null, "e": 259, "s": 54, "text": "std::search is defined in the header file <algorithm> and used to find out the presence of a subsequence satisfying a condition (equality if no such predicate is defined) with respect to another sequence." }, { "code": null, "e": 474, "s": 259, "text": "It searches the sequence [first1, last1) for the first occurrence of the subsequence defined by [first2, last2), and returns an iterator to its first element of the occurrence, or last1 if no occurrences are found." }, { "code": null, "e": 780, "s": 474, "text": "It compares the elements in both ranges sequentially using operator== (version 1) or based on any given predicate (version 2). A subsequence of [first1, last1) is considered a match only when this is true for all the elements of [first2, last2). Finally, std::search returns the first of such occurrences." }, { "code": null, "e": 846, "s": 780, "text": "It can be used in either of the two versions, as depicted below :" }, { "code": null, "e": 4360, "s": 846, "text": "For comparing elements using == :ForwardIterator1 search (ForwardIterator1 first1, ForwardIterator1 last1,ForwardIterator2 first2, ForwardIterator2 last2);first1:Forward iterator to beginning of first container to be searched into.last1:Forward iterator to end of first container to be searched into.first2:Forward iterator to the beginning of the subsequence of second container to be searched for.last2:Forward iterator to the ending of the subsequence of second container to be searched for.Returns: an iterator to the first element of the first occurrence of [first2, last2) in [first1, last1), or last1if no occurrences are found.CPPCPP// C++ program to demonstrate the use of std::search #include <iostream>#include <vector>#include <algorithm>using namespace std;int main(){ int i, j; // Declaring the sequence to be searched into vector<int> v1 = { 1, 2, 3, 4, 5, 6, 7 }; // Declaring the subsequence to be searched for vector<int> v2 = { 3, 4, 5 }; // Declaring an iterator for storing the returning pointer vector<int>::iterator i1; // Using std::search and storing the result in // iterator i1 i1 = std::search(v1.begin(), v1.end(), v2.begin(), v2.end()); // checking if iterator i1 contains end pointer of v1 or not if (i1 != v1.end()) { cout << \"vector2 is present at index \" << (i1 - v1.begin()); } else { cout << \"vector2 is not present in vector1\"; } return 0;}Output:vector2 is present at index 2\nFor comparison based on a predicate (or condition) :ForwardIterator1 search (ForwardIterator1 first1, ForwardIterator1 last1,ForwardIterator2 first2, ForwardIterator2 last2,BinaryPredicate pred);All the arguments are same as previous template, just one more argument is addedpred: Binary function that accepts two elements as arguments (one of each of the two containers, in the same order), and returns a value convertible to bool. The returned value indicates whether the elements are considered to match in the context of this function. The function shall not modify any of its arguments. 
This can either be a function pointer or a function object.Returns: an iterator, to the first element of the first occurrence of [first2, last2) satisfying a predicate, in [first1, last1), or last1 if no occurrences are found.CPPCPP// C++ program to demonstrate the use of std::search// with binary predicate#include <iostream>#include <vector>#include <algorithm>using namespace std; // Defining the BinaryPredicate functionbool pred(int i, int j){ if (i > j) { return 1; } else { return 0; }} int main(){ int i, j; // Declaring the sequence to be searched into vector<int> v1 = { 1, 2, 3, 4, 5, 6, 7 }; // Declaring the subsequence to be compared to based // on predicate vector<int> v2 = { 3, 4, 5 }; // Declaring an iterator for storing the returning pointer vector<int>::iterator i1; // Using std::search and storing the result in // iterator i1 based on predicate pred i1 = std::search(v1.begin(), v1.end(), v2.begin(), v2.end(), pred); // checking if iterator i1 contains end pointer of v1 or not if (i1 != v1.end()) { cout << \"vector1 elements are greater than vector2 starting \" << \"from position \" << (i1 - v1.begin()); } else { cout << \"vector1 elements are not greater than vector2 \" << \"elements consecutively.\"; } return 0;}Output:vector1 elements are greater than vector2 starting from position 3\n" }, { "code": null, "e": 5847, "s": 4360, "text": "For comparing elements using == :ForwardIterator1 search (ForwardIterator1 first1, ForwardIterator1 last1,ForwardIterator2 first2, ForwardIterator2 last2);first1:Forward iterator to beginning of first container to be searched into.last1:Forward iterator to end of first container to be searched into.first2:Forward iterator to the beginning of the subsequence of second container to be searched for.last2:Forward iterator to the ending of the subsequence of second container to be searched for.Returns: an iterator to the first element of the first occurrence of [first2, last2) in [first1, last1), or last1if no occurrences are found.CPPCPP// C++ program to demonstrate the use of std::search #include <iostream>#include <vector>#include <algorithm>using namespace std;int main(){ int i, j; // Declaring the sequence to be searched into vector<int> v1 = { 1, 2, 3, 4, 5, 6, 7 }; // Declaring the subsequence to be searched for vector<int> v2 = { 3, 4, 5 }; // Declaring an iterator for storing the returning pointer vector<int>::iterator i1; // Using std::search and storing the result in // iterator i1 i1 = std::search(v1.begin(), v1.end(), v2.begin(), v2.end()); // checking if iterator i1 contains end pointer of v1 or not if (i1 != v1.end()) { cout << \"vector2 is present at index \" << (i1 - v1.begin()); } else { cout << \"vector2 is not present in vector1\"; } return 0;}Output:vector2 is present at index 2\n" }, { "code": null, "e": 6309, "s": 5847, "text": "ForwardIterator1 search (ForwardIterator1 first1, ForwardIterator1 last1,ForwardIterator2 first2, ForwardIterator2 last2);first1:Forward iterator to beginning of first container to be searched into.last1:Forward iterator to end of first container to be searched into.first2:Forward iterator to the beginning of the subsequence of second container to be searched for.last2:Forward iterator to the ending of the subsequence of second container to be searched for." }, { "code": null, "e": 6451, "s": 6309, "text": "Returns: an iterator to the first element of the first occurrence of [first2, last2) in [first1, last1), or last1if no occurrences are found." 
}, { "code": null, "e": 6455, "s": 6451, "text": "CPP" }, { "code": "// C++ program to demonstrate the use of std::search #include <iostream>#include <vector>#include <algorithm>using namespace std;int main(){ int i, j; // Declaring the sequence to be searched into vector<int> v1 = { 1, 2, 3, 4, 5, 6, 7 }; // Declaring the subsequence to be searched for vector<int> v2 = { 3, 4, 5 }; // Declaring an iterator for storing the returning pointer vector<int>::iterator i1; // Using std::search and storing the result in // iterator i1 i1 = std::search(v1.begin(), v1.end(), v2.begin(), v2.end()); // checking if iterator i1 contains end pointer of v1 or not if (i1 != v1.end()) { cout << \"vector2 is present at index \" << (i1 - v1.begin()); } else { cout << \"vector2 is not present in vector1\"; } return 0;}", "e": 7264, "s": 6455, "text": null }, { "code": null, "e": 7295, "s": 7264, "text": "vector2 is present at index 2\n" }, { "code": null, "e": 9323, "s": 7295, "text": "For comparison based on a predicate (or condition) :ForwardIterator1 search (ForwardIterator1 first1, ForwardIterator1 last1,ForwardIterator2 first2, ForwardIterator2 last2,BinaryPredicate pred);All the arguments are same as previous template, just one more argument is addedpred: Binary function that accepts two elements as arguments (one of each of the two containers, in the same order), and returns a value convertible to bool. The returned value indicates whether the elements are considered to match in the context of this function. The function shall not modify any of its arguments. This can either be a function pointer or a function object.Returns: an iterator, to the first element of the first occurrence of [first2, last2) satisfying a predicate, in [first1, last1), or last1 if no occurrences are found.CPPCPP// C++ program to demonstrate the use of std::search// with binary predicate#include <iostream>#include <vector>#include <algorithm>using namespace std; // Defining the BinaryPredicate functionbool pred(int i, int j){ if (i > j) { return 1; } else { return 0; }} int main(){ int i, j; // Declaring the sequence to be searched into vector<int> v1 = { 1, 2, 3, 4, 5, 6, 7 }; // Declaring the subsequence to be compared to based // on predicate vector<int> v2 = { 3, 4, 5 }; // Declaring an iterator for storing the returning pointer vector<int>::iterator i1; // Using std::search and storing the result in // iterator i1 based on predicate pred i1 = std::search(v1.begin(), v1.end(), v2.begin(), v2.end(), pred); // checking if iterator i1 contains end pointer of v1 or not if (i1 != v1.end()) { cout << \"vector1 elements are greater than vector2 starting \" << \"from position \" << (i1 - v1.begin()); } else { cout << \"vector1 elements are not greater than vector2 \" << \"elements consecutively.\"; } return 0;}Output:vector1 elements are greater than vector2 starting from position 3\n" }, { "code": null, "e": 9467, "s": 9323, "text": "ForwardIterator1 search (ForwardIterator1 first1, ForwardIterator1 last1,ForwardIterator2 first2, ForwardIterator2 last2,BinaryPredicate pred);" }, { "code": null, "e": 9548, "s": 9467, "text": "All the arguments are same as previous template, just one more argument is added" }, { "code": null, "e": 9925, "s": 9548, "text": "pred: Binary function that accepts two elements as arguments (one of each of the two containers, in the same order), and returns a value convertible to bool. The returned value indicates whether the elements are considered to match in the context of this function. 
The function shall not modify any of its arguments. This can either be a function pointer or a function object." }, { "code": null, "e": 10093, "s": 9925, "text": "Returns: an iterator, to the first element of the first occurrence of [first2, last2) satisfying a predicate, in [first1, last1), or last1 if no occurrences are found." }, { "code": null, "e": 10097, "s": 10093, "text": "CPP" }, { "code": "// C++ program to demonstrate the use of std::search// with binary predicate#include <iostream>#include <vector>#include <algorithm>using namespace std; // Defining the BinaryPredicate functionbool pred(int i, int j){ if (i > j) { return 1; } else { return 0; }} int main(){ int i, j; // Declaring the sequence to be searched into vector<int> v1 = { 1, 2, 3, 4, 5, 6, 7 }; // Declaring the subsequence to be compared to based // on predicate vector<int> v2 = { 3, 4, 5 }; // Declaring an iterator for storing the returning pointer vector<int>::iterator i1; // Using std::search and storing the result in // iterator i1 based on predicate pred i1 = std::search(v1.begin(), v1.end(), v2.begin(), v2.end(), pred); // checking if iterator i1 contains end pointer of v1 or not if (i1 != v1.end()) { cout << \"vector1 elements are greater than vector2 starting \" << \"from position \" << (i1 - v1.begin()); } else { cout << \"vector1 elements are not greater than vector2 \" << \"elements consecutively.\"; } return 0;}", "e": 11227, "s": 10097, "text": null }, { "code": null, "e": 11295, "s": 11227, "text": "vector1 elements are greater than vector2 starting from position 3\n" }, { "code": null, "e": 11313, "s": 11295, "text": "Related Articles:" }, { "code": null, "e": 11327, "s": 11313, "text": "std::search_n" }, { "code": null, "e": 11337, "s": 11327, "text": "std::find" }, { "code": null, "e": 11368, "s": 11337, "text": "std::find_if, std::find_if_not" }, { "code": null, "e": 11385, "s": 11368, "text": "std::nth_element" }, { "code": null, "e": 11399, "s": 11385, "text": "std::find_end" }, { "code": null, "e": 11702, "s": 11399, "text": "This article is contributed by Mrigendra Singh. If you like GeeksforGeeks and would like to contribute, you can also write an article using contribute.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks." }, { "code": null, "e": 11827, "s": 11702, "text": "Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above." }, { "code": null, "e": 11841, "s": 11827, "text": "sumitgumber28" }, { "code": null, "e": 11863, "s": 11841, "text": "cpp-algorithm-library" }, { "code": null, "e": 11867, "s": 11863, "text": "STL" }, { "code": null, "e": 11871, "s": 11867, "text": "C++" }, { "code": null, "e": 11875, "s": 11871, "text": "STL" }, { "code": null, "e": 11879, "s": 11875, "text": "CPP" }, { "code": null, "e": 11977, "s": 11879, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 12004, "s": 11977, "text": "Bitwise Operators in C/C++" }, { "code": null, "e": 12047, "s": 12004, "text": "Set in C++ Standard Template Library (STL)" }, { "code": null, "e": 12081, "s": 12047, "text": "vector erase() and clear() in C++" }, { "code": null, "e": 12106, "s": 12081, "text": "unordered_map in C++ STL" }, { "code": null, "e": 12125, "s": 12106, "text": "Inheritance in C++" }, { "code": null, "e": 12179, "s": 12125, "text": "Priority Queue in C++ Standard Template Library (STL)" }, { "code": null, "e": 12219, "s": 12179, "text": "The C++ Standard Template Library (STL)" }, { "code": null, "e": 12254, "s": 12219, "text": "Object Oriented Programming in C++" }, { "code": null, "e": 12278, "s": 12254, "text": "C++ Classes and Objects" } ]
Understanding the /etc/passwd File
28 Jul, 2021

The /etc/passwd file is the most important file in the Linux operating system. This file stores essential information about the users on the system. The file is owned by the root user, and to edit this file we must have root privileges; even then, try to avoid editing it directly. Now let's see what this file actually looks like.

This file contains one entry per line, that is, it stores one user's information on each line. The user information consists of seven fields, and each field is separated by the colon ( : ) symbol. Each entry in the /etc/passwd file looks like this:

Username: This field stores the username which is used while logging into the system. The length of this field is between 1 and 32 characters.
Password: This field stores the password of the user. The x character indicates that the password is stored in the /etc/shadow file in encrypted format. We can use the passwd command to update this field.
User ID (UID): The user identifier is the number assigned to each user by the operating system to refer to the user. UID 0 is reserved for the root user, UIDs 1-99 are reserved for other predefined accounts, and UIDs 100-999 are reserved by the system for administrative and system accounts/groups.
Group ID (GID): The group identifier is the number indicating the primary group of the user. Most of the time it is the same as the UID.
User ID Info (GECOS): This is a comment field. It contains information like the user's phone number, address, or full name. This field is used by the finger command to get information about the user.
Home directory: This field contains the absolute path of the user's home directory. By default, users are created under the /home directory. If this field is empty, the home directory of that user will be /.
Login shell: This field stores the absolute path of the user's shell. This shell is started when the user logs in to the system.

Now that we have understood the file structure of the /etc/passwd file, let's see an example of this file. You can view the content of the file using the cat command like:

cat /etc/passwd

We can see that there are many users with all their information. To search for a specific user, we can use the grep command. Now, for example, to get information about the user Nishant we can use the following command:

grep nishant /etc/passwd

Normal users have only read permission on the /etc/passwd file. Only the root user can write into this file. To see the permissions of the /etc/passwd file, we can use the ls command as follows:

ls -l /etc/passwd

The output will be

We can see that the permissions of the file /etc/passwd are rw-r--r--. This means the root user has read and write access, and other groups and users have read-only access to the file.

To get more details, like the size and modification time of this file, we can use the stat command:

stat /etc/passwd

We can read the /etc/passwd file in a more user-friendly way by using a while loop and the IFS separator. A while loop is used to iterate through the file, and IFS is a special variable that is used to split the string on a specific character.

#!/bin/bash

# using a while loop to iterate through the file
while IFS=: read -r f1 f2 f3 f4 f5 f6 f7
do
    echo "User $f1 uses the $f7 shell and stores files in the $f6 directory."
done < /etc/passwd

After running this script, we get the following output:
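To connect the seven fields described earlier to a concrete line, here is a hypothetical /etc/passwd entry (the username, IDs, and paths are made-up values for illustration only), together with a small awk one-liner that prints just the username and login shell of every account:

# A made-up example entry -- seven colon-separated fields:
# username : password placeholder : UID : GID : GECOS comment : home directory : login shell
#
#   nishant:x:1001:1001:Nishant Kumar:/home/nishant:/bin/bash

# Print the username (field 1) and login shell (field 7) of every account
awk -F: '{ print $1, $7 }' /etc/passwd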
[ { "code": null, "e": 28, "s": 0, "text": "\n28 Jul, 2021" }, { "code": null, "e": 336, "s": 28, "text": "The /etc/passwd file is the most important file in Linux operating system. This file stores essential information about the users on the system. This file is owned by the root user and to edit this file we must have root privileges. But try to avoid edit this file. Now let’s see actually how this file look" }, { "code": null, "e": 581, "s": 336, "text": "This file contains one entry per line. That means it stores one user’s information on one line. The user information contains seven fields and each field is separated by the colon ( : )symbol. Each entry in the /etc/passwd file looks like this:" }, { "code": null, "e": 1901, "s": 581, "text": "Username: This field stores the usernames which are used while login into the system. The length of this field is between 1 and 32 characters.Password: This field store the password of the user. The x character indicates the password is stored in /etc/shadow file in the encrypted format. We can use the passwd command to update this field.User ID(UID): User identifier is the number assigned to each user by the operating system to refer the users. The 0 UID is reserved for the root user. And 1-99 UID are reserved for other predefined accounts. And 100-999 are reserved by the system for administrative and system accounts/groups.Group ID(GID): Group identifier is the number indicating the primary group of users. Most of the time it is the same as the UID.User ID Info (GECOS): This is a comment field. This field contains information like the user phone number, address, or full name of the user. This field is used by the finger command to get information about the user.Home directory: This field contains the absolute path of the user’s home directory. By default, the users are created under the /home directory. If this file is empty, then the home directory of that user will be /Login shell: This field store the absolute path of the user shell. This shell is started when the user is log in to the system." }, { "code": null, "e": 2044, "s": 1901, "text": "Username: This field stores the usernames which are used while login into the system. The length of this field is between 1 and 32 characters." }, { "code": null, "e": 2243, "s": 2044, "text": "Password: This field store the password of the user. The x character indicates the password is stored in /etc/shadow file in the encrypted format. We can use the passwd command to update this field." }, { "code": null, "e": 2537, "s": 2243, "text": "User ID(UID): User identifier is the number assigned to each user by the operating system to refer the users. The 0 UID is reserved for the root user. And 1-99 UID are reserved for other predefined accounts. And 100-999 are reserved by the system for administrative and system accounts/groups." }, { "code": null, "e": 2666, "s": 2537, "text": "Group ID(GID): Group identifier is the number indicating the primary group of users. Most of the time it is the same as the UID." }, { "code": null, "e": 2884, "s": 2666, "text": "User ID Info (GECOS): This is a comment field. This field contains information like the user phone number, address, or full name of the user. This field is used by the finger command to get information about the user." }, { "code": null, "e": 3099, "s": 2884, "text": "Home directory: This field contains the absolute path of the user’s home directory. By default, the users are created under the /home directory. 
If this file is empty, then the home directory of that user will be /" }, { "code": null, "e": 3227, "s": 3099, "text": "Login shell: This field store the absolute path of the user shell. This shell is started when the user is log in to the system." }, { "code": null, "e": 3391, "s": 3227, "text": "Now we have understood the file structure of the /etc/passwd file now let’s see one example of this file. You can view the content of file using the cat file like:" }, { "code": null, "e": 3407, "s": 3391, "text": "cat /etc/passwd" }, { "code": null, "e": 3466, "s": 3407, "text": "We can see that there are many users with all information." }, { "code": null, "e": 3618, "s": 3466, "text": "To search for a specific user, we can use the grep command. Now for example to get information about the user Nishant we can use the following command:" }, { "code": null, "e": 3643, "s": 3618, "text": "grep nishant /etc/passwd" }, { "code": null, "e": 3839, "s": 3643, "text": "The normal users have only read permissions to the /etc/passwd file. The only root user can write into this file. To see the permissions of /etc/passwd file, we can use the ls command as follows:" }, { "code": null, "e": 3857, "s": 3839, "text": "ls -l /etc/passwd" }, { "code": null, "e": 3877, "s": 3857, "text": "The output will be " }, { "code": null, "e": 4058, "s": 3877, "text": "We can see that the permissions of the file /etc/passwd are rw-r–r–. This means the root user has read and write access and other groups and user have read-only access to the file." }, { "code": null, "e": 4147, "s": 4058, "text": "To get more details like size, modify the time of this file we can use the stat command:" }, { "code": null, "e": 4164, "s": 4147, "text": "stat /etc/passwd" }, { "code": null, "e": 4395, "s": 4164, "text": "We can read the /etc/passwd file more user-friendly by using the while loop and IFS separator. A while loop is used to iterate through the file, and IFS is a special variable is used to separate the string by a specific character." }, { "code": null, "e": 4616, "s": 4395, "text": "#!/bin/bash\n\n# using while loop to iterate through file\nwhile IFS=: read -r f1 f2 f3 f4 f5 f6 f7 \ndo\necho \"User $f1 use $f7 shell and stores files in $f6 directory.\"\ndone < /etc/passwd " }, { "code": null, "e": 4670, "s": 4616, "text": "After using this script, we get the following output:" }, { "code": null, "e": 4681, "s": 4670, "text": "Linux-Unix" } ]
How to Create Custom Shape Button using SVG ?
27 Apr, 2020

To design the shape of an HTML button we can use SVG elements (Scalable Vector Graphics). SVG defines vector-based graphics in XML format, and every element and every attribute in an SVG file can be animated. We can use SVG to create 2-D graphics of any custom shape.

Example 1: This example creates a circle-shaped button using SVG.

<!DOCTYPE html>
<html>
<head>
    <title>
        Create custom shape button
    </title>
</head>
<body>
    <h1 style="color:green;">GeeksforGeeks</h1>
    <h3>Circle Shape Button</h3>
    <svg width="500" height="500">
        <a href="#">
            <circle cx="60" cy="60" r="50" stroke="black"
                    fill="green" stroke-width="3"/>
        </a>
    </svg>
</body>
</html>

Output:

There are many more shapes available as SVG elements, such as boxes, text, rectangles, etc.

Example 2: This example creates a rectangle-shaped button using SVG.

<!DOCTYPE html>
<html>
<head>
    <title>
        Rectangle Shape Button
    </title>
</head>
<body>
    <h1 style="color:green;">GeeksforGeeks</h1>
    <h3>Rectangle Shape Button</h3>
    <svg width="300" height="200">
        <a href="#">
            <rect width="250" height="150"
                  style="fill:rgb(0, 255, 0);
                  stroke-width:5;stroke:rgb(0, 0, 0)" />
        </a>
    </svg>
</body>
</html>

Output:

Example 3: This example creates a star-shaped button using SVG.

<!DOCTYPE html>
<html>
<head>
    <title>
        Star Shape Button
    </title>
</head>
<body>
    <h1 style="color:green;">GeeksforGeeks</h1>
    <h3>Star Shape Button</h3>
    <a href="#">
        <svg width="300" height="200">
            <polygon points="100, 10 40, 198 190, 78 10, 78 160, 198"
                     style="fill:green; stroke:black;
                     stroke-width:5; fill-rule:evenodd;" />
        </svg>
    </a>
</body>
</html>

Output:

Example 4: This example creates a flag-shaped button using SVG.

<!DOCTYPE html>
<html>
<head>
    <title>
        Flag Shape Button
    </title>
</head>
<body>
    <h1 style="color:green;">GeeksforGeeks</h1>
    <h3>Flag Shape Button</h3>
    <svg width="240" height="240">
        <a href="#">
            <path d="M 0 0 L 120 0 L 120 120 L 60 80 L 0 120 Z"
                  fill="green"/>
            <text x="60" y="50" fill="#FFFFFF" text-anchor="middle"
                  alignment-baseline="middle">
                GeeksforGeeks.
            </text>
        </a>
    </svg>
</body>
</html>

Output:
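The anchor-wrapped shapes above act as links. If a custom-shaped button should instead run a script, an onclick handler can be attached to the SVG shape directly. The triangle below is only an illustrative sketch: the points, colors, and the handleClick function name are made up and are not part of the original examples.

<!DOCTYPE html>
<html>
<head>
    <title>Triangle Shape Button</title>
</head>
<body>
    <h1 style="color:green;">GeeksforGeeks</h1>
    <h3>Triangle Shape Button</h3>
    <svg width="200" height="200">
        <!-- A clickable triangle; handleClick is a hypothetical handler -->
        <polygon points="100,10 190,190 10,190"
                 fill="green" stroke="black" stroke-width="3"
                 style="cursor:pointer;"
                 onclick="handleClick()" />
    </svg>
    <script>
        function handleClick() {
            alert("Triangle button clicked!");
        }
    </script>
</body>
</html>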
[ { "code": null, "e": 28, "s": 0, "text": "\n27 Apr, 2020" }, { "code": null, "e": 303, "s": 28, "text": "To design the shape of an HTML button we can use SVG elements (Scalable Vector Graphics). It basically defines the vector-based graphics in XML format. Every element and every attribute in SVG files can be animated. We can use SVG to create 2-D graphics of any custom shape." }, { "code": null, "e": 369, "s": 303, "text": "Example 1: This example creating a circle shape button using SVG." }, { "code": "<!DOCTYPE html> <html> <head> <title> Create custom shape button </title> </head> <body> <h1 style=\"color:green;\">GeeksforGeeks</h1> <h3>Circle Shape Button</h3> <svg width=\"500\" height=\"500\"> <a href=\"#\"> <Circle cx=\"60\" cy=\"60\" r=\"50\" stroke=\"black\" fill=\"green\" stroke-width=\"3\"/> </a> </svg></body> </html>", "e": 856, "s": 369, "text": null }, { "code": null, "e": 864, "s": 856, "text": "Output:" }, { "code": null, "e": 955, "s": 864, "text": "There are many more shapes available in SVG elements such as boxes, text, rectangles, etc." }, { "code": null, "e": 1024, "s": 955, "text": "Example 2: This example creating a rectangle shape button using SVG." }, { "code": "<!DOCTYPE html> <html> <head> <title> Rectangle Shape Button </title> </head> <body> <h1 style=\"color:green;\">GeeksforGeeks</h1> <h3>Rectangle Shape Button</h3> <svg width=\"300\" height=\"200\"> <a href=\"#\"> <rect width=\"250\" height=\"150\" style=\"fill:rgb(0, 255, 0); stroke-width:5;stroke:rgb(0, 0, 0)\" /> </a> </svg></body> </html>", "e": 1476, "s": 1024, "text": null }, { "code": null, "e": 1484, "s": 1476, "text": "Output:" }, { "code": null, "e": 1548, "s": 1484, "text": "Example 3: This example creating a star shape button using SVG." }, { "code": "<!DOCTYPE html> <html> <head> <title> Star Shape Button </title> </head> <body> <h1 style=\"color:green;\">GeeksforGeeks</h1> <h3>Star Shape Button</h3> <a href=\"#\"> <svg width=\"300\" height=\"200\"> <polygon points=\"100, 10 40, 198 190, 78 10, 78 160, 198\" style=\"fill:green; stroke:black; stroke-width:5; fill-rule:evenodd;\" /> </svg> </a></body> </html>", "e": 2122, "s": 1548, "text": null }, { "code": null, "e": 2130, "s": 2122, "text": "Output:" }, { "code": null, "e": 2194, "s": 2130, "text": "Example 4: This example creating a flag shape button using SVG." }, { "code": "<!DOCTYPE html> <html> <head> <title> Flag Shape Button </title> </head> <body> <h1 style=\"color:green;\">GeeksforGeeks</h1> <h3>Flag Shape Button</h3> <svg width=\"240\" height=\"240\"> <a href=\"#\"> <path d=\"M 0 0 L 120 0 L 120 120 L 60 80 L 0 120 Z\" fill=\"green\"/> <text x=\"60\" y=\"50\" fill=\"#FFFFFF\" text-anchor=\"middle\" alignment-baseline=\"middle\"> GeeksforGeeks. </text> </a> </svg></body> </html>", "e": 2897, "s": 2194, "text": null }, { "code": null, "e": 2905, "s": 2897, "text": "Output:" }, { "code": null, "e": 2915, "s": 2905, "text": "HTML-Misc" }, { "code": null, "e": 2922, "s": 2915, "text": "Picked" }, { "code": null, "e": 2927, "s": 2922, "text": "HTML" }, { "code": null, "e": 2944, "s": 2927, "text": "Web Technologies" }, { "code": null, "e": 2949, "s": 2944, "text": "HTML" }, { "code": null, "e": 3047, "s": 2949, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 3071, "s": 3047, "text": "REST API (Introduction)" }, { "code": null, "e": 3121, "s": 3071, "text": "CSS to put icon inside an input element in a form" }, { "code": null, "e": 3158, "s": 3121, "text": "Types of CSS (Cascading Style Sheet)" }, { "code": null, "e": 3197, "s": 3158, "text": "Design a Tribute Page using HTML & CSS" }, { "code": null, "e": 3225, "s": 3197, "text": "HTTP headers | Content-Type" }, { "code": null, "e": 3258, "s": 3225, "text": "Installation of Node.js on Linux" }, { "code": null, "e": 3319, "s": 3258, "text": "Difference between var, let and const keywords in JavaScript" }, { "code": null, "e": 3362, "s": 3319, "text": "How to fetch data from an API in ReactJS ?" }, { "code": null, "e": 3434, "s": 3362, "text": "Differences between Functional Components and Class Components in React" } ]
Flutter – SliverAppBar Widget
22 Feb, 2022

SliverAppBar is a Material Design widget in Flutter which gives a scrollable or collapsible app bar. The word Sliver is given to scrollable areas here. SliverAppBar basically gives us a means to create an app bar that can change appearance, blend into the background, or even disappear as we scroll. We already had the AppBar widget in Flutter, which places the app bar at a fixed height. But, looking around us, we can see that the scrollable app bar user interface is widely used; even the GeeksforGeeks app uses a collapsible app bar. In order to achieve the same functionality, Flutter gives us the SliverAppBar widget, which is usually taken as a child widget of CustomScrollView (a Flutter widget), which provides it the power to interact with scroll.

const SliverAppBar(
{Key key,
Widget leading,
bool automaticallyImplyLeading: true,
Widget title,
List<Widget> actions,
Widget flexibleSpace,
PreferredSizeWidget bottom,
double elevation,
Color shadowColor,
bool forceElevated: false,
Color backgroundColor,
Brightness brightness,
IconThemeData iconTheme,
IconThemeData actionsIconTheme,
TextTheme textTheme,
bool primary: true,
bool centerTitle,
bool excludeHeaderSemantics: false,
double titleSpacing: NavigationToolbar.kMiddleSpacing,
double collapsedHeight,
double expandedHeight,
bool floating: false,
bool pinned: false,
bool snap: false,
bool stretch: false,
double stretchTriggerOffset: 100.0,
Future<void> onStretchTrigger(),
ShapeBorder shape,
double toolbarHeight: kToolbarHeight,
double leadingWidth}
)

actions: This property takes in a list of widgets as a parameter, to be displayed after the title if the SliverAppBar is a row.
actionsIconTheme: This property determines the color, opacity, and size of the trailing app bar icons.
automaticallyImplyLeading: This property takes in a boolean as a parameter and controls whether to imply the leading widget if it is null.
backgroundColor: This property is used to add color to the background of the SliverAppBar.
bottom: This property takes in a PreferredSizeWidget as a parameter. It determines the widget to be shown across the bottom of the SliverAppBar, usually a TabBar similar to the GeeksforGeeks app.
brightness: This property controls the brightness of the SliverAppBar.
centerTitle: This property determines whether the title widget should be in the center of the SliverAppBar or not, by taking a boolean as a parameter.
collapsedHeight: This property controls the height at which the SliverAppBar should collapse.
elevation: This property is used to set the z-coordinate at which to place this app bar relative to its parent.
excludeHeaderSemantics: This property takes a boolean as a parameter and controls whether the title widget should be wrapped in header semantics, which describe the widget's use in the app.
expandedHeight: Similar to the collapsedHeight property, it also takes a double as a parameter and determines the height at which the SliverAppBar should be fully expanded.
flexibleSpace: This property takes in a widget as a parameter and stacks it behind the toolbar when it collapses.
floating: This property takes in a boolean as a parameter and controls the animation related to the visibility of the SliverAppBar. It determines whether the SliverAppBar should be made visible as soon as the user scrolls towards it (top or bottom) or not.
forceElevated: This property controls whether to show the shadow for the elevation or not if the content is not scrolled under the SliverAppBar.
iconTheme: This property is similar to actionsIconTheme. It controls the color, size, opacity, etc. of the icons used in the SliverAppBar.
leading: This property sets the widget that should be displayed before the title.
leadingWidth: This property takes a double as a parameter and controls the width of the leading widget.
onStretchTrigger: This property takes in an AsyncCallback as a parameter, which gets triggered when the user over-scrolls.
pinned: This property sets whether the SliverAppBar should remain visible at the start of the scroll view. It takes a boolean as a parameter.
primary: This property takes a boolean as a parameter and controls whether the SliverAppBar is being displayed at the top of the screen or not.
shadowColor: This property determines the color of the shadow which gets displayed below the SliverAppBar.
shape: This property is used to give shape to the SliverAppBar and manage its shadow.
snap: This property takes a boolean as a parameter, and if set to true it makes the SliverAppBar snap into view when a user scrolls near it instead of animating smoothly. There is one constraint on the snap property: it can only be set to true when floating is also set to true.
stretch: Again, this property also takes a boolean as a parameter to determine whether the SliverAppBar should stretch to fill the over-scroll area.
stretchTriggerOffset: This property determines the offset of over-scroll which activates the onStretchTrigger property.
textTheme: This property takes a TextTheme widget as a parameter to determine the typography style used in the SliverAppBar.
title: This property usually takes in the main widget as a parameter, to be displayed in the SliverAppBar.
titleSpacing: This property determines the amount of spacing around the title widget in a horizontal fashion.
toolbarHeight: This property controls the height given to the toolbar portion of the SliverAppBar.

Example:

src/lib/main.dart

Dart

import 'package:flutter/material.dart';

void main() => runApp(MyApp());

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    final title = 'GeeksforGeeks';
    return MaterialApp(
      home: Scaffold(
        body: CustomScrollView(
          slivers: <Widget>[
            SliverAppBar(
              snap: false,
              pinned: false,
              floating: false,
              flexibleSpace: FlexibleSpaceBar(
                centerTitle: true,
                title: Text("$title",
                    style: TextStyle(
                      color: Colors.white,
                      fontSize: 16.0,
                    ) //TextStyle
                    ), //Text
                background: Image.network(
                  "https://i.ibb.co/QpWGK5j/Geeksfor-Geeks.png",
                  fit: BoxFit.cover,
                ) //Image.network
              ), //FlexibleSpaceBar
              expandedHeight: 230,
              backgroundColor: Colors.greenAccent[400],
              leading: IconButton(
                icon: Icon(Icons.menu),
                tooltip: 'Menu',
                onPressed: () {},
              ), //IconButton
              actions: <Widget>[
                IconButton(
                  icon: Icon(Icons.comment),
                  tooltip: 'Comment Icon',
                  onPressed: () {},
                ), //IconButton
                IconButton(
                  icon: Icon(Icons.settings),
                  tooltip: 'Setting Icon',
                  onPressed: () {},
                ), //IconButton
              ], //<Widget>[]
            ), //SliverAppBar
            SliverList(
              delegate: SliverChildBuilderDelegate(
                (context, index) => ListTile(
                  tileColor: (index % 2 == 0) ? Colors.white : Colors.green[50],
                  title: Center(
                    child: Text('$index',
                        style: TextStyle(
                            fontWeight: FontWeight.normal,
                            fontSize: 50,
                            color: Colors.greenAccent[400]) //TextStyle
                        ), //Text
                  ), //Center
                ), //ListTile
                childCount: 51,
              ), //SliverChildBuilderDelegate
            ) //SliverList
          ], //<Widget>[]
        ) //CustomScrollView
      ), //Scaffold
      debugShowCheckedModeBanner: false, // Remove debug banner for proper
                                         // view of the settings icon
    ); //MaterialApp
  }
}

Explanation:

At first, we have imported the material library. Then we have our main function, which calls the MyApp class through the runApp method. We have set the MyApp class to be a stateless widget. Then, with Widget build(BuildContext context), we have started describing the UI of the app.

Our MaterialApp starts with the Scaffold widget. Then, in the CustomScrollView widget, we have the slivers property that takes a list of widgets and makes them scrollable. We have passed SliverAppBar as the first child in the slivers. The first three properties snap, pinned and floating have been set to false for the first case. We are going to use five combinations of these three properties to give different effects to our SliverAppBar. In the FlexibleSpaceBar widget we have passed the title and the cover image with their respective properties. In the leading widget, we have the menu icon button, and in the actions widget, we have the comment and settings icon buttons.

All this is followed by the SliverList widget, which forms the body of our app here. It contains 51 list tiles indexed from 0 to 50.

Below we have five different outputs for five different combinations of the snap, pinned, and floating properties.

Output: If the properties are defined as follows:

snap: false;
pinned: false;
floating: false;

Output: If the properties are defined as follows:

snap: false;
pinned: false;
floating: true;

Output: If the properties are defined as follows:

snap: false;
pinned: true;
floating: false;

Output: If the properties are defined as follows:

snap: true;
pinned: false;
floating: true;

Output: If the properties are defined as follows:

snap: true;
pinned: true;
floating: true;
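Since the bottom property described above usually holds a TabBar, it may help to see that wiring on its own. The snippet below is a minimal sketch rather than part of the original example: the tab labels, icons, and colors are made up, and it assumes a DefaultTabController supplies the tab controller.

Dart

import 'package:flutter/material.dart';

void main() => runApp(TabDemo());

class TabDemo extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: DefaultTabController(
        // number of tabs managed by the implicit controller
        length: 2,
        child: Scaffold(
          body: CustomScrollView(
            slivers: <Widget>[
              SliverAppBar(
                pinned: true,
                expandedHeight: 200,
                backgroundColor: Colors.greenAccent[400],
                title: Text('GeeksforGeeks'),
                // bottom expects a PreferredSizeWidget, usually a TabBar
                bottom: TabBar(
                  tabs: <Widget>[
                    Tab(icon: Icon(Icons.code), text: 'Practice'),
                    Tab(icon: Icon(Icons.article), text: 'Articles'),
                  ],
                ),
              ),
              // SliverFillRemaining gives the TabBarView a bounded height
              SliverFillRemaining(
                child: TabBarView(
                  children: <Widget>[
                    Center(child: Text('Practice tab')),
                    Center(child: Text('Articles tab')),
                  ],
                ),
              ),
            ],
          ),
        ),
      ),
    );
  }
}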
[ { "code": null, "e": 52, "s": 24, "text": "\n22 Feb, 2022" }, { "code": null, "e": 817, "s": 52, "text": "SliverAppBar is a Material Design widget in flutter which gives scrollable or collapsible app-bar. The word Sliver is given to scrollable areas here. SliverAppBar basically gives us means to create an app-bar that can change appearance, blend in the background, or even disappear as we scroll. We already had AppBar widget in flutter which places the app bar at a fixed height. But, looking around us we can see that the scrollable app bar user interface is widely used. We can that even the GeeksforGeeks app uses the app bar which is collapsible. In order to achieve the same functionality, flutter gives us SliverAppBar widget, which is usually taken as a child widget to CustomScrollView (flutter widget), which provided it the power to interact with scroll. " }, { "code": null, "e": 1581, "s": 817, "text": "const SliverAppBar(\n{Key key,\nWidget leading,\nbool automaticallyImplyLeading: true,\nWidget title,\nList<Widget> actions,\nWidget flexibleSpace,\nPreferredSizeWidget bottom,\ndouble elevation,\nColor shadowColor,\nbool forceElevated: false,\nColor backgroundColor,\nBrightness brightness,\nIconThemeData iconTheme,\nIconThemeData actionsIconTheme,\nTextTheme textTheme,\nbool primary: true,\nbool centerTitle,\nbool excludeHeaderSemantics: false,\ndouble titleSpacing: NavigationToolbar.kMiddleSpacing,\ndouble collapsedHeight,\ndouble expandedHeight,\nbool floating: false,\nbool pinned: false,\nbool snap: false,\nbool stretch: false,\ndouble stretchTriggerOffset: 100.0,\nFuture<void> onStretchTrigger(),\nShapeBorder shape,\ndouble toolbarHeight: kToolbarHeight,\ndouble leadingWidth}\n)" }, { "code": null, "e": 1707, "s": 1581, "text": "action: This property takes in a list of widgets as a parameter to be displayed after the title if the SliverAppBar is a row." }, { "code": null, "e": 1805, "s": 1707, "text": "actionIconTheme: This property determines the color, opacity, and size of trailing app bar icons." }, { "code": null, "e": 1953, "s": 1805, "text": "automaticallyImplyLeading: This property takes in a boolean as a parameter and controls whether to imply the leading widget if the boolean is null." }, { "code": null, "e": 2045, "s": 1953, "text": "backgroundColor: This property is used to add colors to the background of the SliverAppbar." }, { "code": null, "e": 2245, "s": 2045, "text": "bottom: This property takes in PrefferedSizeWidget as a parameter. And it determines the widget to be shown across the bottom of the SliverAppBar. It is usually a TabBar similar to GeeksforGeeks app." }, { "code": null, "e": 2316, "s": 2245, "text": "brightness: This property controls the brightness of the SliverAppBar." }, { "code": null, "e": 2467, "s": 2316, "text": "centerTitle: This property determines whether the title widget should be in the center of the SliverAppBar or not. by taking a boolean as a parameter." }, { "code": null, "e": 2557, "s": 2467, "text": "collapsedHeight: This property controls at which height the SliverAppBar should collapse." }, { "code": null, "e": 2669, "s": 2557, "text": "elevation: This property is used to set the z-coordinate at which to place this app bar relative to its parent." }, { "code": null, "e": 2856, "s": 2669, "text": "excludeHeaderSemantics: This property takes boolean as a parameter and controls whether the title widget should be wrapped in header Semantics which describe the widgets uses in the app." 
}, { "code": null, "e": 3028, "s": 2856, "text": "expandedHeight: Similar to the collapsedHeight property it also takes a double as a parameter and determines the height at which the SliverAppBar should be fully expanded." }, { "code": null, "e": 3141, "s": 3028, "text": "flexibleSpace: This property takes in widget as a parameter and stacks it behind the took bar when it collapses." }, { "code": null, "e": 3396, "s": 3141, "text": "floating: This property takes in boolean as a parameter and controls the animation related to the visibility of the SliverAppBar. It determines whether the SliverAppBar should be made visible as soon as the user scrolls towards it (top or bottom) or not." }, { "code": null, "e": 3534, "s": 3396, "text": "forceElevated: This property controls whether to show shadow for the elevation or not if the content is not scrolled under SliverAppBar." }, { "code": null, "e": 3674, "s": 3534, "text": "iconTheme: This property is similar to the actionIconTheme. It controls the color, size, opacity, etc of the icon used in the SliverAppBar." }, { "code": null, "e": 3756, "s": 3674, "text": "leading: This property sets the widget that should be displayed before the title." }, { "code": null, "e": 3858, "s": 3756, "text": "leadingWidth: This property takes double as a parameter and controls the width of the leading widget." }, { "code": null, "e": 3978, "s": 3858, "text": "onStretchTrigger: This property takes in AsyncCallback as a parameter, which gets triggered when the user over-scrolls." }, { "code": null, "e": 4116, "s": 3978, "text": "pinned; This property sets whether the SliverAppBar should remain visible at the start of scroll view. It takes a boolean as a parameter." }, { "code": null, "e": 4258, "s": 4116, "text": "primary: This property takes boolean as a parameter and controls whether the SliverAppBar is being displayed at the top of the screen or not." }, { "code": null, "e": 4361, "s": 4258, "text": "shadowColor: This property determines the color of the shadow which gets displayed below SliverAppBar." }, { "code": null, "e": 4447, "s": 4361, "text": "shape: This property is used to give shape to the SliverAppbar and manage its shadow." }, { "code": null, "e": 4720, "s": 4447, "text": "snap: This property takes boolean as a parameter and if set true it makes the SliverAppBar snap in the view when a user scrolls near it instead of smoothly animated. There is one constrain to snap property that it can only be set to true when floating is also set to true." }, { "code": null, "e": 4882, "s": 4720, "text": "stretch: Again, this property also takes a boolean as a parameter to determine whether the SliverAppBar should stretch to the full space of the over-scroll area." }, { "code": null, "e": 4992, "s": 4882, "text": "stretchTriggerOffset: This property determines the offset of over-scroll which activates onStretch property." }, { "code": null, "e": 5115, "s": 4992, "text": "textTheme: This property takes TextTheme widget as a parameter to determine the typography style used in the SliverAppBar." }, { "code": null, "e": 5221, "s": 5115, "text": "title: This property usually takes in the main widget as a parameter to be displayed in the SliverAppBar." }, { "code": null, "e": 5331, "s": 5221, "text": "titleSpacing: This property determines the amount of spacing around the title widget in a horizontal fashion." }, { "code": null, "e": 5430, "s": 5331, "text": "toolbarHeight: This property controls the height given to the toolbar portion of the SliverAppBar." 
}, { "code": null, "e": 5439, "s": 5430, "text": "Example:" }, { "code": null, "e": 5457, "s": 5439, "text": "src/lib/main.dart" }, { "code": null, "e": 5462, "s": 5457, "text": "Dart" }, { "code": "import 'package:flutter/material.dart'; void main() => runApp(MyApp()); class MyApp extends StatelessWidget { @override Widget build(BuildContext context) { final title = 'GeeksforGeeks'; return MaterialApp( home: Scaffold( body: CustomScrollView( slivers: <Widget>[ SliverAppBar( snap: false, pinned: false, floating: false, flexibleSpace: FlexibleSpaceBar( centerTitle: true, title: Text(\"$title\", style: TextStyle( color: Colors.white, fontSize: 16.0, ) //TextStyle ), //Text background: Image.network( \"https://i.ibb.co/QpWGK5j/Geeksfor-Geeks.png\", fit: BoxFit.cover, ) //Images.network ), //FlexibleSpaceBar expandedHeight: 230, backgroundColor: Colors.greenAccent[400], leading: IconButton( icon: Icon(Icons.menu), tooltip: 'Menu', onPressed: () {}, ), //IconButton actions: <Widget>[ IconButton( icon: Icon(Icons.comment), tooltip: 'Comment Icon', onPressed: () {}, ), //IconButton IconButton( icon: Icon(Icons.settings), tooltip: 'Setting Icon', onPressed: () {}, ), //IconButton ], //<Widget>[] ), //SliverAppBar SliverList( delegate: SliverChildBuilderDelegate( (context, index) => ListTile( tileColor: (index % 2 == 0) ? Colors.white : Colors.green[50], title: Center( child: Text('$index', style: TextStyle( fontWeight: FontWeight.normal, fontSize: 50, color: Colors.greenAccent[400]) //TextStyle ), //Text ), //Center ), //ListTile childCount: 51, ), //SliverChildBuildDelegate ) //SliverList ], //<Widget>[] ) //CustonScrollView ), //Scaffold debugShowCheckedModeBanner:false, // Remove debug banner for proper // view of setting icon ); //MaterialApp }}", "e": 7885, "s": 5462, "text": null }, { "code": null, "e": 7898, "s": 7885, "text": "Explanation:" }, { "code": null, "e": 8180, "s": 7898, "text": "At first, we have imported the material library. Then we have our main function which calls the MyApp class through runApp method. We have set the MyApp class to be a stateless widget. Then with the Widget build(BuildContext context) we have started describing the UI of the app. " }, { "code": null, "e": 8841, "s": 8180, "text": " Our MateriaApp starts with the Scaffold widget. Then in the CustomScrollView widget, we have slivers property that takes a list of widgets and makes them scrollable. We have passed SliverAppBar as the first child in the sliver. The first three properties snap, pinned and floating have been made false for the first case. We are going to use five combinations of these three properties to give different effects to our SliverAppBar. In FLexibleSpaceBar widget we have passes the title and the cover image with their respective properties. In the leading widget, we have the menu icon button and in the action widget, we have comment and setting icon buttons. " }, { "code": null, "e": 8973, "s": 8841, "text": "All this is followed by the SliverList widget which forms the body of our app here. It contains 51 list tiles indexed from 0 to 51." }, { "code": null, "e": 9086, "s": 8973, "text": "Below we have five different outputs for five different combinations of the snap, pinned, and floating property." 
}, { "code": null, "e": 9094, "s": 9086, "text": "Output:" }, { "code": null, "e": 9136, "s": 9094, "text": "If the properties are defines as follows:" }, { "code": null, "e": 9181, "s": 9136, "text": "snap: false;\npinned: false;\nfloating: false;" }, { "code": null, "e": 9189, "s": 9181, "text": "Output:" }, { "code": null, "e": 9231, "s": 9189, "text": "If the properties are defines as follows:" }, { "code": null, "e": 9275, "s": 9231, "text": "snap: false;\npinned: false;\nfloating: true;" }, { "code": null, "e": 9283, "s": 9275, "text": "Output:" }, { "code": null, "e": 9325, "s": 9283, "text": "If the properties are defines as follows:" }, { "code": null, "e": 9369, "s": 9325, "text": "snap: false;\npinned: true;\nfloating: false;" }, { "code": null, "e": 9418, "s": 9369, "text": "Output:If the properties are defines as follows:" }, { "code": null, "e": 9461, "s": 9418, "text": "snap: true;\npinned: false;\nfloating: true;" }, { "code": null, "e": 9469, "s": 9461, "text": "Output:" }, { "code": null, "e": 9511, "s": 9469, "text": "If the properties are defines as follows:" }, { "code": null, "e": 9553, "s": 9511, "text": "snap: true;\npinned: true;\nfloating: true;" }, { "code": null, "e": 9572, "s": 9553, "text": "surindertarika1234" }, { "code": null, "e": 9588, "s": 9572, "text": "simranarora5sos" }, { "code": null, "e": 9605, "s": 9588, "text": "arorakashish0911" }, { "code": null, "e": 9613, "s": 9605, "text": "android" }, { "code": null, "e": 9621, "s": 9613, "text": "Flutter" }, { "code": null, "e": 9637, "s": 9621, "text": "Flutter-widgets" }, { "code": null, "e": 9645, "s": 9637, "text": "Android" }, { "code": null, "e": 9650, "s": 9645, "text": "Dart" }, { "code": null, "e": 9658, "s": 9650, "text": "Flutter" }, { "code": null, "e": 9666, "s": 9658, "text": "Android" } ]
SQL Query to select Data from Tables Using Join and Where
27 Apr, 2021

The aim of this article is to make a simple program to join two tables using the JOIN and WHERE clauses in MySQL. Below is the method to do the same using MySQL. The prerequisites for this article are that MySQL and Apache Server are installed on your computer.

A SQL query is a request for data/information from a table in a database. This data can be used for various purposes, like training a model, finding patterns in the data, etc.

A JOIN query is used to combine rows from two or more tables, based on a single column which can be used to store the same data from both tables. So we join over that column and join rows.

The WHERE keyword in SQL is used for retrieving data in a result that satisfies a certain condition. It can also be used to retrieve data by matching patterns, like "select all the students whose marks are greater than 90" or "select all the data from tables where the employee's salary is greater than 6 lakhs and less than 12 lakhs".

So we will start by creating a database –

Step 1: Create a Database

CREATE DATABASE geeksforgeeks;

Step 2: Enter this database to use it –

USE geeksforgeeks;

Step 3: Create a table1 as employee in the database where we will perform our operations –

CREATE TABLE employee ( ID int(10),
                        Name varchar(55),
                        Email varchar(100),
                        Department int(10)
                      );

Step 4: Create another table2 as dept where we will store the data of employees of the second company –

CREATE TABLE dept ( ID int(10),
                    Name varchar(55),
                    hodId int(10),
                    profit int(20)
                  );

Step 5: View the schema of the table to ensure the table is correct –

> DESC employee;
> DESC dept;

Step 6: Insert the data into the employee table –

INSERT INTO employee VALUES(1, "Devesh", "[email protected]", 1);
INSERT INTO employee VALUES(2, "Mayank", "[email protected]", 1);
INSERT INTO employee VALUES(3, "Aditya", "[email protected]", 2);
INSERT INTO employee VALUES(4, "Divyanshi", "[email protected]", 2);
INSERT INTO employee VALUES(5, "Megha", "[email protected]", 3);
INSERT INTO employee VALUES(6, "Himanshi", "[email protected]", 3);
INSERT INTO employee VALUES(7, "Tanishka", "[email protected]", 4);
INSERT INTO employee VALUES(8, "Jatin", "[email protected]", 4);

Step 7: Insert data into the dept table –

INSERT INTO dept VALUES(1, "Computer Science", 1, 100000);
INSERT INTO dept VALUES(2, "Electrical", 2, 45000);
INSERT INTO dept VALUES(3, "Biotechnology", 3, 30000);
INSERT INTO dept VALUES(4, "Architecture", 4, 15000);

Step 8: Query the data using WHERE and JOIN –

Example 1: Select all the data of employees who are the HODs of the departments –

SELECT employee.ID, employee.Name, employee.Email
FROM employee
JOIN dept
WHERE
employee.ID = dept.hodId;

Output:

Example 2: Select all the data where the department's profit is greater than 45000 –

SELECT *
FROM employee
LEFT JOIN dept
ON
employee.Department = dept.ID
WHERE
employee.Name IN
(SELECT Name FROM employee WHERE dept.profit > 45000);

Output:

Example 3: Select all the data from both the tables using JOIN (cross join) –

SELECT *
FROM employee
FULL JOIN dept
WHERE
dept.id > 0;

Example 4: Select all the employees from a department whose total profit is greater than 5000 –

SELECT DISTINCT dept.ID, dept.Name, dept.hodId
FROM dept
JOIN employee
ON
dept.ID = employee.Department
WHERE
hodId IN
(SELECT hodId FROM dept WHERE hodId > 0);

Output:
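One note on the join syntax used above: Example 1 pairs JOIN with a WHERE filter rather than an explicit ON clause, and MySQL does not support FULL [OUTER] JOIN, so Example 3 may not run as written on a MySQL server. A minimal sketch of the equivalent explicit forms, reusing the same employee and dept tables, could look like this:

-- Example 1 rewritten with an explicit INNER JOIN ... ON clause
SELECT employee.ID, employee.Name, employee.Email
FROM employee
INNER JOIN dept
    ON employee.ID = dept.hodId;

-- MySQL has no FULL OUTER JOIN; it is commonly emulated by
-- combining a LEFT JOIN and a RIGHT JOIN with UNION
SELECT * FROM employee LEFT JOIN dept ON employee.Department = dept.ID
UNION
SELECT * FROM employee RIGHT JOIN dept ON employee.Department = dept.ID;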
[ { "code": null, "e": 54, "s": 26, "text": "\n27 Apr, 2021" }, { "code": null, "e": 308, "s": 54, "text": "The aim of this article is to make a simple program to Join two tables using Join and Where clause using MySQL. Below is the method to do the same using MySQL. The prerequisites of this article are MySQL and Apache Server on your computer are installed." }, { "code": null, "e": 494, "s": 308, "text": "A SQL query is a request passed for data/information from a table in a database. This data can be used for various purposes like Training a model, finding the patterns in the data, etc." }, { "code": null, "e": 682, "s": 494, "text": "A JOIN query is used to combine rows from two or more tables, based on a single column which can be used to store the same data from both tables. So we join over that point and join rows." }, { "code": null, "e": 991, "s": 682, "text": "WHERE keyword in SQL is used for retrieving data in a result under a certain query. It can also be used to retrieve data by matching patterns like Select all the students whose marks are greater than 90 or select all the data from tables where employees salary is greater than 6 lakhs and less than 12 lakhs." }, { "code": null, "e": 1034, "s": 991, "text": "So we will start by creating a database – " }, { "code": null, "e": 1060, "s": 1034, "text": "Step 1: Create a Database" }, { "code": null, "e": 1091, "s": 1060, "text": "CREATE DATABASE geeksforgeeks;" }, { "code": null, "e": 1132, "s": 1091, "text": "Step 2: Enter this database to use it – " }, { "code": null, "e": 1151, "s": 1132, "text": "USE geeksforgeeks;" }, { "code": null, "e": 1243, "s": 1151, "text": "Step 3: Create a table1 as employee in the database where we will perform our operations – " }, { "code": null, "e": 1387, "s": 1243, "text": "CREATE TABLE employee ( ID int(10),\n Name varchar(55),\n Email varchar(100),\n Department int(10)\n );" }, { "code": null, "e": 1491, "s": 1387, "text": "Step 4: Create another table2 as dept where we will store the data of employees of the second company- " }, { "code": null, "e": 1626, "s": 1491, "text": "CREATE TABLE dept ( ID int(10),\n Name varchar(55),\n hodId int(10),\n profit int(20)\n );" }, { "code": null, "e": 1697, "s": 1626, "text": "Step 5: View the schema of the table to ensure the table is correct – " }, { "code": null, "e": 1727, "s": 1697, "text": "> DESC employee;\n> DESC dept;" }, { "code": null, "e": 1778, "s": 1727, "text": "Step 6: Insert the data into the employee table – " }, { "code": null, "e": 2292, "s": 1778, "text": "INSERT INTO employee VALUES(1, \"Devesh\", \"[email protected]\", 1);\nINSERT INTO employee VALUES(2, \"Mayank\", \"[email protected]\", 1);\nINSERT INTO employee VALUES(3, \"Aditya\", \"[email protected]\", 2);\nINSERT INTO employee VALUES(4, \"Divyanshi\", \"[email protected]\", 2);\nINSERT INTO employee VALUES(5, \"Megha\", \"[email protected]\", 3);\nINSERT INTO employee VALUES(6, \"Himanshi\", \"[email protected]\", 3);\nINSERT INTO employee VALUES(7, \"Tanishka\", \"[email protected]\", 4);\nINSERT INTO employee VALUES(8, \"Jatin\", \"[email protected]\", 4);" }, { "code": null, "e": 2330, "s": 2292, "text": "Step 7: Insert data into dept table –" }, { "code": null, "e": 2550, "s": 2330, "text": "INSERT INTO dept VALUES(1, \"Computer Science\", 1, 100000);\nINSERT INTO dept VALUES(2, \"Electrical\", 2, 45000);\nINSERT INTO dept VALUES(3, \"Biotechnology\", 3, 30000);\nINSERT INTO dept VALUES(4, \"Architecture\", 4, 15000);" }, { "code": null, "e": 2597, "s": 2550, "text": 
"Step 8: Query the data using where and Join – " }, { "code": null, "e": 2679, "s": 2597, "text": "Example 1: Select all the data of employees who are the HODs of the departments –" }, { "code": null, "e": 2786, "s": 2679, "text": "SELECT employee.ID, employee.Name, employee.Email\nFROM employee \nJOIN dept\nWHERE\nemployee.ID = dept.hodId;" }, { "code": null, "e": 2794, "s": 2786, "text": "Output:" }, { "code": null, "e": 2880, "s": 2794, "text": "Example 2: Select all the data where the department’s profit is greater than 45000 – " }, { "code": null, "e": 3031, "s": 2880, "text": "SELECT * \nFROM employee\nLEFT JOIN dept\nON\nemployee.Department = dept.ID\nWHERE \nemployee.Name IN\n(SELECT Name FROM employee WHERE dept.profit > 45000);" }, { "code": null, "e": 3039, "s": 3031, "text": "Output:" }, { "code": null, "e": 3117, "s": 3039, "text": "Example 3: Select all the data from both the tables using JOIN (cross join) –" }, { "code": null, "e": 3175, "s": 3117, "text": "SELECT *\nFROM employee \nFULL JOIN dept\nWHERE\ndept.id > 0;" }, { "code": null, "e": 3267, "s": 3175, "text": "Example 4: Select all the employees from a department whose sum profit is greater than 5000" }, { "code": null, "e": 3428, "s": 3267, "text": "SELECT DISTINCT dept.ID, dept.Name, dept.hodId\nFROM dept\nJOIN employee\nON\ndept.ID = employee.Department\nWHERE\nhodId IN\n(SELECT hodId FROM dept WHERE hodId > 0);" }, { "code": null, "e": 3436, "s": 3428, "text": "Output:" }, { "code": null, "e": 3445, "s": 3436, "text": "DBMS-SQL" }, { "code": null, "e": 3451, "s": 3445, "text": "mysql" }, { "code": null, "e": 3458, "s": 3451, "text": "Picked" }, { "code": null, "e": 3462, "s": 3458, "text": "SQL" }, { "code": null, "e": 3466, "s": 3462, "text": "SQL" } ]
Rust – Traits
20 Sep, 2021 A trait tells the Rust compiler about functionality a particular type has and can share with other types. Traits are an abstract definition of shared behavior amongst different types. So, we can say that traits are to Rust what interfaces are to Java or abstract classes are to C++. A trait method is able to access other methods within that trait. Traits are defined by providing the name of the Trait followed by the keyword “trait”. While defining any Trait we have to provide method declaration inside the Trait. Defining a trait: pub trait Detail { fn Description(&self) -> i32; fn years_since_launched(&self) -> i32; } In the above example, we have defined a Trait called Detail and declare two methods (Description(), years_since_launched()) with &self as a parameter and set a return type to i32. So, whenever any Type will implement this Trait, will have to override both methods and will have to define their custom body. Trait implementation is similar to implementing an interface in other languages like java or c++. Here we use the keyword “impl” to implement a Trait then write Trait’s Name which we have to implement, and then we use the keyword “for” to define for whom we are going to implement a Trait. We will define method bodies for all the methods which have been declared in the Trait’s Definition inside the impl block. Implementation of Trait: impl trait_Name for type_Name { /// method definitions /// } Let us understand the implementation of traits examples: Example 1: Let’s implement a built-in trait called Detail on a Car struct: Rust // Defining a Detail trait by defining the// functionality it should includepub trait Detail{ fn description(&self) -> String; fn years_since_launched(&self) -> i32; } struct Car { brand_name : String, color: String, launched_year : i32} // Implementing an in-built trait Detail// on the Car structimpl Detail for Car { // Method returns an overview of the car fn description(&self) -> String{ return format!("I have a {} which is {} in color.", self.brand_name, self.color); } // Method returns the number of years between // the launched year of this car i.e. // 2020 and the release year of the movie fn years_since_launched(&self) -> i32{ return 2021 - self.launched_year; }} fn main(){ let car = Car{ brand_name: "WagonR".to_string(), color: "Red".to_string(), launched_year:1992 }; let car2 = Car{ brand_name: "Venue".to_string(), color: "White".to_string(), launched_year:1997 }; println!("{}", car.description()); println!("The car was released {} years ago.\n", car.years_since_launched()); println!("{}", car.description()); println!("The car was released {} years ago.", car.years_since_launched());} Output: I have a WagonR which is Red in color. The car was released 29 years ago. I have a WagonR which is Red in color. The car was released 29 years ago. 
Example 2: Let’s implement a built-in trait called Maths on a Parameter struct: Rust // Defining a Maths trait by defining// the functionality it should includepub trait Maths{ fn area_of_rectangle(&self) -> i32; fn perimeter_of_rectangle(&self) -> i32;} struct Parameter { l:i32, b:i32} // Implementing an in-built trait Detail// on the Parameter structimpl Maths for Parameter { // Method returns area of rectangle fn area_of_rectangle(&self) -> i32{ return self.l*self.b ; } // Method returns the perimeter of rectangle fn perimeter_of_rectangle(&self) -> i32{ return 2*(self.l+self.b); } } fn main(){ let para =Parameter{ l: 5, b: 6 }; println!("The area of rectangle is {}.", para.area_of_rectangle()); println!("The perimeter of the rectangle is {}.", para.perimeter_of_rectangle());} Output: The area of rectangle is 30. The perimeter of the rectangle is 22. Drop trait is important to the smart pointer pattern. Drop trait lets us customize what happens when a value is about to go out of scope. Drop trait functionality is almost always used when implementing a smart pointer. For example, Box<T> customizes Drop to deallocate the space on the heap that the box points to. Example: Rust struct SmartPointer { data: String,} // implementing Drop traitimpl Drop for SmartPointer { fn drop(&mut self) { println!("Dropping SmartPointer with data `{}`!", self.data); }}fn main() { let _c = SmartPointer { data: String::from("my stuff") }; let _d = SmartPointer { data: String::from("other stuff") }; println!("SmartPointers created."); } In the above example, there is a SmartPointer struct whose custom functionality is to print Dropping SmartPointer when the instance goes out of scope. Output: SmartPointers created. Dropping SmartPointer with data `other stuff`! Dropping SmartPointer with data `my stuff`! The Iterator trait implements the iterators over collections such as arrays. Iterator trait relates each iterator type with the type of value it produces. The trait requires only a method to be defined for the next element, which may be manually defined in an impl block (as in arrays and ranges), which returns one item of the iterator at a time( Option<Item>), next will return Some(Item) as long as there are elements and when the iteration is over, returns None to indicate that the iteration is finished. Clone trait is for types that can make copies of themselves. Clone is defined as follows: Rust trait Clone: Sized {fn clone(&self) -> Self;fn clone_from(&mut self, source: &Self) {*self = source.clone()}} The clone method should construct an independent copy of self and return it. Since the method’s return type is self and functions may not return unsized values, the Clone trait itself extends the Sized trait (self types to be sized) Rust does not automatically clone the values but it let you make an explicit method call. The reference-counted pointer types like Rc<T> and Arc<T> are exceptions: cloning one of these simply increments the reference count and hands you a new pointer. If we might need one trait to use another trait’s functionality. In this situation, we need to rely on the dependent trait which is also being implemented. Rust has a way to specify that a trait is an extension of another trait, giving us something similar to subclassing in other languages. 
To create a subtrait, indicate that it implements the supertrait in the same way you would with a type:

Rust

trait MyDebug : std::fmt::Debug {
    fn my_subtrait_function(&self);
}

Returning Traits with dyn:

A trait object in Rust is similar to an object in Java or C++. A trait object is always passed by a pointer and has a vtable so that methods can be dispatched dynamically. A vtable is essentially an array of function pointers that contains the addresses of all the virtual functions of the type.

The type of a trait object is written dyn Trait, e.g. &dyn Bar or Box<dyn Bar>.

Function using trait objects:

fn f(b: &dyn Bar) -> usize

use std::fmt::Debug;

fn dyn_trait(n: u8) -> Box<dyn Debug> {
    todo!()
}

The dyn_trait function can return any number of types that implement the Debug trait and can even return a different type depending on the input argument. A single variable, argument, or return value can therefore take values of multiple different types. The trade-offs are that virtual dispatch tends to make method calls slower, and the objects must be passed by pointer.

Example:

Rust

struct Breakfast{}
struct Dinner{}

trait Food {
    fn dish(&self) -> &'static str;
}

// Implement the `Food` trait for `breakfast`.
impl Food for Breakfast {
    fn dish(&self) -> &'static str {
        "bread-butter!"
    }
}

// Implement the `Food` trait for `dinner`.
impl Food for Dinner {
    fn dish(&self) -> &'static str {
        "paneer-butter-masala!"
    }
}

// Returns some struct that implements Food,
// but we don't know which one at compile time.
fn eat(n: i32) -> Box<dyn Food> {
    if n < 8 {
        Box::new(Breakfast {})
    } else {
        Box::new(Dinner {})
    }
}

fn main() {
    let n = 3;
    let food = eat(n);
    println!("You have chosen a random dish for today {}", food.dish());
}

Output:

You have chosen a random dish for today bread-butter!

The #[derive] attribute can be used to have the compiler provide a basic implementation for some traits; if more complex behavior is required, the traits can be implemented manually. Commonly derivable standard traits include Clone, Copy, Debug, Default, Hash, PartialEq, Eq, PartialOrd and Ord.

Take a look at the below code:

Rust

#[derive(HelloWorld)]
struct Hello;

fn main() {
    Hello::hello_world();
}

The derive attribute allows new items to be automatically generated for data structures. It uses the MetaListPaths syntax to specify a list of traits to implement or paths to derive macros to process.

Operator overloading is customizing the behavior of an operator in particular situations. We cannot create our own operators, but we can overload the operations and corresponding traits listed in std::ops by implementing those traits.

Rust

use std::ops::Add;

struct Value1(u32);
struct Value2(u32);

impl Add<Value2> for Value1 {
    type Output = Value1;
    fn add(self, other: Value2) -> Value1 {
        Value1(self.0 + (other.0 * 1000))
    }
}
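The examples above use dynamic dispatch through Box<dyn Food>. As a brief illustrative sketch that is not part of the original article, the same kind of abstraction can also be expressed with generics and trait bounds, which gives static dispatch (assuming the Detail trait and the Car struct defined earlier in this article):

Rust

// Sketch: a generic function bounded by the Detail trait from the first example.
fn print_report<T: Detail>(item: &T) {
    println!("{}", item.description());
    println!("Released {} years ago.", item.years_since_launched());
}

// Equivalent shorthand using `impl Trait` in argument position.
fn print_report_impl(item: &impl Detail) {
    println!("{}", item.description());
}

// Usage (assuming the Car struct from the first example):
// let car = Car { brand_name: "WagonR".to_string(), color: "Red".to_string(), launched_year: 1992 };
// print_report(&car);

Because the compiler monomorphizes print_report for every concrete type it is called with, there is no vtable lookup at runtime; the trade-off compared to dyn Trait is larger compiled code rather than pointer indirection.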
[ { "code": null, "e": 28, "s": 0, "text": "\n20 Sep, 2021" }, { "code": null, "e": 377, "s": 28, "text": "A trait tells the Rust compiler about functionality a particular type has and can share with other types. Traits are an abstract definition of shared behavior amongst different types. So, we can say that traits are to Rust what interfaces are to Java or abstract classes are to C++. A trait method is able to access other methods within that trait." }, { "code": null, "e": 545, "s": 377, "text": "Traits are defined by providing the name of the Trait followed by the keyword “trait”. While defining any Trait we have to provide method declaration inside the Trait." }, { "code": null, "e": 563, "s": 545, "text": "Defining a trait:" }, { "code": null, "e": 664, "s": 563, "text": "pub trait Detail\n{\n fn Description(&self) -> i32;\n fn years_since_launched(&self) -> i32;\n \n}" }, { "code": null, "e": 971, "s": 664, "text": "In the above example, we have defined a Trait called Detail and declare two methods (Description(), years_since_launched()) with &self as a parameter and set a return type to i32. So, whenever any Type will implement this Trait, will have to override both methods and will have to define their custom body." }, { "code": null, "e": 1070, "s": 971, "text": "Trait implementation is similar to implementing an interface in other languages like java or c++. " }, { "code": null, "e": 1385, "s": 1070, "text": "Here we use the keyword “impl” to implement a Trait then write Trait’s Name which we have to implement, and then we use the keyword “for” to define for whom we are going to implement a Trait. We will define method bodies for all the methods which have been declared in the Trait’s Definition inside the impl block." }, { "code": null, "e": 1410, "s": 1385, "text": "Implementation of Trait:" }, { "code": null, "e": 1478, "s": 1410, "text": "impl trait_Name for type_Name {\n ///\n method definitions\n /// \n}" }, { "code": null, "e": 1535, "s": 1478, "text": "Let us understand the implementation of traits examples:" }, { "code": null, "e": 1546, "s": 1535, "text": "Example 1:" }, { "code": null, "e": 1610, "s": 1546, "text": "Let’s implement a built-in trait called Detail on a Car struct:" }, { "code": null, "e": 1615, "s": 1610, "text": "Rust" }, { "code": "// Defining a Detail trait by defining the// functionality it should includepub trait Detail{ fn description(&self) -> String; fn years_since_launched(&self) -> i32; } struct Car { brand_name : String, color: String, launched_year : i32} // Implementing an in-built trait Detail// on the Car structimpl Detail for Car { // Method returns an overview of the car fn description(&self) -> String{ return format!(\"I have a {} which is {} in color.\", self.brand_name, self.color); } // Method returns the number of years between // the launched year of this car i.e. 
// 2020 and the release year of the movie fn years_since_launched(&self) -> i32{ return 2021 - self.launched_year; }} fn main(){ let car = Car{ brand_name: \"WagonR\".to_string(), color: \"Red\".to_string(), launched_year:1992 }; let car2 = Car{ brand_name: \"Venue\".to_string(), color: \"White\".to_string(), launched_year:1997 }; println!(\"{}\", car.description()); println!(\"The car was released {} years ago.\\n\", car.years_since_launched()); println!(\"{}\", car.description()); println!(\"The car was released {} years ago.\", car.years_since_launched());}", "e": 2791, "s": 1615, "text": null }, { "code": null, "e": 2799, "s": 2791, "text": "Output:" }, { "code": null, "e": 2948, "s": 2799, "text": "I have a WagonR which is Red in color.\nThe car was released 29 years ago.\n\nI have a WagonR which is Red in color.\nThe car was released 29 years ago." }, { "code": null, "e": 2959, "s": 2948, "text": "Example 2:" }, { "code": null, "e": 3028, "s": 2959, "text": "Let’s implement a built-in trait called Maths on a Parameter struct:" }, { "code": null, "e": 3033, "s": 3028, "text": "Rust" }, { "code": "// Defining a Maths trait by defining// the functionality it should includepub trait Maths{ fn area_of_rectangle(&self) -> i32; fn perimeter_of_rectangle(&self) -> i32;} struct Parameter { l:i32, b:i32} // Implementing an in-built trait Detail// on the Parameter structimpl Maths for Parameter { // Method returns area of rectangle fn area_of_rectangle(&self) -> i32{ return self.l*self.b ; } // Method returns the perimeter of rectangle fn perimeter_of_rectangle(&self) -> i32{ return 2*(self.l+self.b); } } fn main(){ let para =Parameter{ l: 5, b: 6 }; println!(\"The area of rectangle is {}.\", para.area_of_rectangle()); println!(\"The perimeter of the rectangle is {}.\", para.perimeter_of_rectangle());}", "e": 3781, "s": 3033, "text": null }, { "code": null, "e": 3789, "s": 3781, "text": "Output:" }, { "code": null, "e": 3856, "s": 3789, "text": "The area of rectangle is 30.\nThe perimeter of the rectangle is 22." }, { "code": null, "e": 4172, "s": 3856, "text": "Drop trait is important to the smart pointer pattern. Drop trait lets us customize what happens when a value is about to go out of scope. Drop trait functionality is almost always used when implementing a smart pointer. For example, Box<T> customizes Drop to deallocate the space on the heap that the box points to." }, { "code": null, "e": 4181, "s": 4172, "text": "Example:" }, { "code": null, "e": 4186, "s": 4181, "text": "Rust" }, { "code": "struct SmartPointer { data: String,} // implementing Drop traitimpl Drop for SmartPointer { fn drop(&mut self) { println!(\"Dropping SmartPointer with data `{}`!\", self.data); }}fn main() { let _c = SmartPointer { data: String::from(\"my stuff\") }; let _d = SmartPointer { data: String::from(\"other stuff\") }; println!(\"SmartPointers created.\"); }", "e": 4532, "s": 4186, "text": null }, { "code": null, "e": 4683, "s": 4532, "text": "In the above example, there is a SmartPointer struct whose custom functionality is to print Dropping SmartPointer when the instance goes out of scope." }, { "code": null, "e": 4691, "s": 4683, "text": "Output:" }, { "code": null, "e": 4805, "s": 4691, "text": "SmartPointers created.\nDropping SmartPointer with data `other stuff`!\nDropping SmartPointer with data `my stuff`!" }, { "code": null, "e": 4960, "s": 4805, "text": "The Iterator trait implements the iterators over collections such as arrays. 
Iterator trait relates each iterator type with the type of value it produces." }, { "code": null, "e": 5315, "s": 4960, "text": "The trait requires only a method to be defined for the next element, which may be manually defined in an impl block (as in arrays and ranges), which returns one item of the iterator at a time( Option<Item>), next will return Some(Item) as long as there are elements and when the iteration is over, returns None to indicate that the iteration is finished." }, { "code": null, "e": 5406, "s": 5315, "text": "Clone trait is for types that can make copies of themselves. Clone is defined as follows: " }, { "code": null, "e": 5411, "s": 5406, "text": "Rust" }, { "code": "trait Clone: Sized {fn clone(&self) -> Self;fn clone_from(&mut self, source: &Self) {*self = source.clone()}}", "e": 5521, "s": 5411, "text": null }, { "code": null, "e": 5754, "s": 5521, "text": "The clone method should construct an independent copy of self and return it. Since the method’s return type is self and functions may not return unsized values, the Clone trait itself extends the Sized trait (self types to be sized)" }, { "code": null, "e": 6006, "s": 5754, "text": "Rust does not automatically clone the values but it let you make an explicit method call. The reference-counted pointer types like Rc<T> and Arc<T> are exceptions: cloning one of these simply increments the reference count and hands you a new pointer." }, { "code": null, "e": 6298, "s": 6006, "text": "If we might need one trait to use another trait’s functionality. In this situation, we need to rely on the dependent trait which is also being implemented. Rust has a way to specify that a trait is an extension of another trait, giving us something similar to subclassing in other languages." }, { "code": null, "e": 6403, "s": 6298, "text": "To create a subtrait, indicate that it implements the supertrait in the same way you would with a type: " }, { "code": null, "e": 6408, "s": 6403, "text": "Rust" }, { "code": "trait MyDebug : std::fmt::Debug { fn my_subtrait_function(&self);}", "e": 6478, "s": 6408, "text": null }, { "code": null, "e": 6506, "s": 6478, "text": "Returning Traits with dyn: " }, { "code": null, "e": 6789, "s": 6506, "text": "A trait object in Rust is similar to an object in Java or C++. A trait object is always passed by a pointer and has a vtable so that methods can be dispatched dynamically. VTable is a kind of function pointer array that contains the addresses of all virtual functions of this class." }, { "code": null, "e": 6827, "s": 6789, "text": "Type of trait objects uses dyn Trait:" }, { "code": null, "e": 6858, "s": 6827, "text": "e.g; &dyn Bar or Box<dyn Bar> " }, { "code": null, "e": 6888, "s": 6858, "text": "Function using trait objects:" }, { "code": null, "e": 6992, "s": 6888, "text": "fn f(b: &dyn Bar) -> usize\n\nuse std::fmt::Debug;\n\nfn dyn_trait(n: u8) -> Box<dyn Debug> {\n todo!()\n}" }, { "code": null, "e": 7147, "s": 6992, "text": "The dyn_trait function can return any number of types that implement the Debug trait and can even return a different type depending on the input argument." }, { "code": null, "e": 7325, "s": 7147, "text": "Single variable, Argument, or Return value can take values of multiple different types. But Virtual dispatch tends to slower method calls. And Objects must be passed by pointer." 
}, { "code": null, "e": 7334, "s": 7325, "text": "Example:" }, { "code": null, "e": 7339, "s": 7334, "text": "Rust" }, { "code": "struct Breakfast{}struct Dinner{} trait Food { fn dish(&self) -> &'static str;} // Implement the `Food` trait for `breakfast`.impl Food for Breakfast { fn dish(&self) -> &'static str { \"bread-butter!\" }} // Implement the `Food` trait for `dinner`.impl Food for Dinner { fn dish(&self) -> &'static str { \"paneer-butter-masala!\" }} // Returns some struct that implements Food,// but we don't know which one at compile time.fn eat(n: i32) -> Box<dyn Food> { if n < 8 { Box::new(Breakfast {}) } else { Box::new(Dinner {}) }} fn main() { let n = 3; let food = eat(n); println!(\"You have chosen a random dish for today {}\", food.dish());}", "e": 8038, "s": 7339, "text": null }, { "code": null, "e": 8047, "s": 8038, "text": "Output: " }, { "code": null, "e": 8101, "s": 8047, "text": "You have chosen a random dish for today bread-butter!" }, { "code": null, "e": 8275, "s": 8101, "text": "#[derive] attribute can be used by the compiler to provide the basic implementation for some traits.if more complex behavior is required, traits can be implemented manually." }, { "code": null, "e": 8320, "s": 8275, "text": "The following is a list of derivable traits:" }, { "code": null, "e": 8351, "s": 8320, "text": "Take a look at the below code:" }, { "code": null, "e": 8356, "s": 8351, "text": "Rust" }, { "code": "#[derive(HelloWorld)]struct Hello; fn main() { Hello::hello_world();}", "e": 8429, "s": 8356, "text": null }, { "code": null, "e": 8630, "s": 8429, "text": "The derive attribute allows new items to be automatically generated for data structures. It uses the MetaListPaths syntax to specify a list of traits to implement or paths to derive macros to process." }, { "code": null, "e": 8863, "s": 8630, "text": "Operator overloading is customizing the behavior of an operator in particular situations. We cannot create our own operator, but we can overload the operations and corresponding traits listed in std::ops by implementing the traits. " }, { "code": null, "e": 8868, "s": 8863, "text": "Rust" }, { "code": "use std::ops::Add;struct Value1(u32);struct Value2(u32);impl Add<Value2> for Value1 { type Output = Value1; fn add(self, other: Value2) ->Value1 { Value1(self.0 + (other.0 * 1000)) }}", "e": 9052, "s": 8868, "text": null }, { "code": null, "e": 9061, "s": 9052, "text": "sweetyty" }, { "code": null, "e": 9078, "s": 9061, "text": "akshaysingh98088" }, { "code": null, "e": 9085, "s": 9078, "text": "Picked" }, { "code": null, "e": 9097, "s": 9085, "text": "Rust traits" }, { "code": null, "e": 9102, "s": 9097, "text": "Rust" } ]
Python | os.DirEntry.path attribute
28 Aug, 2019

The OS module in Python provides functions for interacting with the operating system. OS comes under Python's standard utility modules; it provides a portable way of using operating-system-dependent functionality.

The os.scandir() method of the os module yields os.DirEntry objects corresponding to the entries in the directory given by the specified path. An os.DirEntry object has various attributes and methods that expose the file path and other file attributes of the directory entry.

The path attribute of an os.DirEntry object is used to get the entry's full path name. The full path is absolute only if the path parameter used in the os.scandir() method is absolute. Also, if the os.scandir() path parameter was a file descriptor, then the value of the os.DirEntry.path attribute is the same as the os.DirEntry.name attribute.

Note: os.DirEntry objects are intended to be used and thrown away after iteration, as attributes and methods of the object cache their values and never refetch them. If the metadata of the file has changed, or if a long time has elapsed since calling the os.scandir() method, we will not get up-to-date information.

Syntax: os.DirEntry.path

Parameter: None

Return value: This attribute returns a bytes value if the os.scandir() path parameter is bytes; otherwise it returns a string value which represents the entry's full path.

Code #1: Use of the os.DirEntry.path attribute

# Python program to explain os.DirEntry.path attribute

# importing os module
import os

# Directory to be scanned
# Current working directory
path = os.getcwd()

# Using os.scandir() method
# scan the specified directory
# and yield os.DirEntry object
# for each file and sub-directory
print("Full path of all directory entry in '% s':" % path)

with os.scandir(path) as itr:
    for entry in itr:
        # Exclude the entry name
        # starting with '.'
        if not entry.name.startswith('.'):
            # print entry's name
            # and its full path
            print(entry.name, ":", entry.path)

Output:

Full path of all directory entry in '/home/ihritik':
Public : /home/ihritik/Public
Desktop : /home/ihritik/Desktop
R : /home/ihritik/R
foo.txt : /home/ihritik/foo.txt
graph.cpp : /home/ihritik/graph.cpp
tree.cpp : /home/ihritik/tree.cpp
Pictures : /home/ihritik/Pictures
abc.py : /home/ihritik/abc.py
file.txt : /home/ihritik/file.txt
Videos : /home/ihritik/Videos
images : /home/ihritik/images
Downloads : /home/ihritik/Downloads
GeeksforGeeks : /home/ihritik/GeeksforGeeks
Music : /home/ihritik/Music
Documents : /home/ihritik/Documents

References: https://docs.python.org/3/library/os.html#os.DirEntry.path
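As a small follow-on sketch (not from the original article), entry.path is convenient whenever full paths are needed for further processing — for example, collecting the paths of all regular files under a directory tree. The helper name walk_paths below is purely illustrative.

# Sketch: recursively yield the full path of every regular file under `root`,
# using os.DirEntry.path from os.scandir().
import os

def walk_paths(root):
    with os.scandir(root) as it:
        for entry in it:
            if entry.is_dir(follow_symlinks=False):
                # recurse into the sub-directory using the entry's full path
                yield from walk_paths(entry.path)
            elif entry.is_file(follow_symlinks=False):
                yield entry.path

for p in walk_paths(os.getcwd()):
    print(p)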
[ { "code": null, "e": 28, "s": 0, "text": "\n28 Aug, 2019" }, { "code": null, "e": 247, "s": 28, "text": "OS module in Python provides functions for interacting with the operating system. OS comes under Python’s standard utility modules. This module provides a portable way of using operating system dependent functionality." }, { "code": null, "e": 519, "s": 247, "text": "os.scandir() method of os module yields os.DirEntry objects corresponding to the entries in the directory given by specified path. os.DirEntry object has various attributes and method which is used to expose the file path and other file attributes of the directory entry." }, { "code": null, "e": 838, "s": 519, "text": "path attribute of os.DirEntry object is used to get entry’s full path name. The full path is absolute only if the path parameter used in os.scandir() method is absolute. Also if os.scandir() method path parameter was a file descriptor then the value of os.DirEntry.path attribute is same as os.DirEntry.name attribute." }, { "code": null, "e": 1165, "s": 838, "text": "Note: os.DirEntry objects are intended to be used and thrown away after iteration as attributes and methods of the object cache their values and never refetch the values again. If the metadata of the file has been changed or if a long time has elapsed since calling os.scandir() method. we will not get up-to-date information." }, { "code": null, "e": 1190, "s": 1165, "text": "Syntax: os.DirEntry.path" }, { "code": null, "e": 1206, "s": 1190, "text": "Parameter: None" }, { "code": null, "e": 1370, "s": 1206, "text": "Return value: This attribute returns a bytes value if os.scandir() path parameter is bytes otherwise returns a string value which represents the entry’s full path." }, { "code": null, "e": 1413, "s": 1370, "text": "Code #1: Use of os.DirEntry.path attribute" }, { "code": "# Python program to explain os.DirEntry.path attribute # importing os module import os # Directory to be scanned# Current working directorypath = os.getcwd() # Using os.scandir() method# scan the specified directory# and yield os.DirEntry object# for each file and sub-directory print(\"Full path of all directory entry in '% s':\" % path) with os.scandir(path) as itr: for entry in itr : # Exclude the entry name # starting with '.' if not entry.name.startswith('.') : # print entry's name # and its full path print(entry.name, \":\", entry.path)", "e": 2023, "s": 1413, "text": null }, { "code": null, "e": 2562, "s": 2023, "text": "Full path of all directory entry in '/home/ihritik':\nPublic : /home/ihritik/Public\nDesktop : /home/ihritik/Deskop\nR : /home/ihritik/R\nfoo.txt : /home/ihritik/foo.txt\ngraph.cpp : /home/ihritik/graph.cpp\ntree.cpp : /home/ihritik/tree.cpp\nPictures : /home/ihritik/Pictures\nabc.py : /home/ihritik/abc.py\nfile.txt : /home/ihritik/file.txt\nVideos : /home/ihritik/Videos\nimages : /home/ihritik/images\nDownloads : /home/ihritik/Downloads\nGeeksforGeeks : /home/ihritik/GeeksforGeeks\nMusic : /home/ihritik/Music\nDocuments : /home/ihritik/Documents\n" }, { "code": null, "e": 2633, "s": 2562, "text": "References: https://docs.python.org/3/library/os.html#os.DirEntry.path" }, { "code": null, "e": 2650, "s": 2633, "text": "python-os-module" }, { "code": null, "e": 2657, "s": 2650, "text": "Python" }, { "code": null, "e": 2755, "s": 2657, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 2800, "s": 2755, "text": "How to iterate through Excel rows in Python?" 
}, { "code": null, "e": 2850, "s": 2800, "text": "Rotate axis tick labels in Seaborn and Matplotlib" }, { "code": null, "e": 2866, "s": 2850, "text": "Deque in Python" }, { "code": null, "e": 2882, "s": 2866, "text": "Queue in Python" }, { "code": null, "e": 2904, "s": 2882, "text": "Defaultdict in Python" }, { "code": null, "e": 2946, "s": 2904, "text": "Check if element exists in list in Python" }, { "code": null, "e": 2973, "s": 2946, "text": "Python Classes and Objects" }, { "code": null, "e": 2996, "s": 2973, "text": "Bar Plot in Matplotlib" }, { "code": null, "e": 3015, "s": 2996, "text": "reduce() in Python" } ]
How to create a hidden input field in form using HTML ?
18 Oct, 2021

In this article, we will learn how to add a hidden input field to a form using HTML. A hidden control stores data that is not visible to the user. It is used to send some information to the server that the user does not edit. Typically, the hidden field contains a value that is common to all users and must be recorded in the database. A hidden input field often carries a value that needs to be updated in the database when the form is submitted.

Approach:

We create an HTML document that contains an <input> tag.
Use the type attribute with the <input> tag.
Set the type attribute to the value "hidden".

Syntax:

<input type="hidden">

Example: In this code, we create a hidden field that carries a value shared by all users who belong to the same country, i.e. India.

HTML

<!DOCTYPE html>
<html>
<head>
    <title>
        How to create a hidden input field in form using HTML?
    </title>
    <style>
        h1 {
            color:green;
        }
        body {
            text-align:center;
        }
    </style>
</head>

<body>
    <h1>
        GeeksforGeeks
    </h1>
    <h3>
        How to create a hidden input field in form using HTML?
    </h3>
    <form action="#">
        <input type="hidden"
               name="country_name"
               id="inputID"
               value="India">
        Name: <input type="text">
        <input type="submit" value="Submit">
    </form>
</body>

</html>

Output:

hidden input field
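As a brief supplementary sketch (not part of the original article), a hidden field is often filled in by script rather than hard-coded — for example, to record when the user opened the form. The field name form_opened_at and the id formOpenedAt below are purely illustrative.

<form action="#" id="demoForm">
  <!-- Hidden field whose value is set by JavaScript, not typed by the user -->
  <input type="hidden" name="form_opened_at" id="formOpenedAt" value="">
  Name: <input type="text" name="username">
  <input type="submit" value="Submit">
</form>

<script>
  // Record the time the page was opened; it is sent along with the form data.
  document.getElementById("formOpenedAt").value = new Date().toISOString();
</script>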
[ { "code": null, "e": 28, "s": 0, "text": "\n18 Oct, 2021" }, { "code": null, "e": 402, "s": 28, "text": "In this article, we will learn how to add a hidden input field into our form using HTML. A hidden control stores the data that is not visible to the user. It is used to send some information to the server which is not edited by the user. Normally, this hidden field usually contains the value which is common for all the users. The value must be recorded in the database. " }, { "code": null, "e": 521, "s": 402, "text": "A hidden input field often contains those value which needs to be updated on the database when the form is submitted. " }, { "code": null, "e": 531, "s": 521, "text": "Approach:" }, { "code": null, "e": 588, "s": 531, "text": "We create an HTML Document that contains an <input> Tag." }, { "code": null, "e": 633, "s": 588, "text": "Use the type attribute with the <input> tag." }, { "code": null, "e": 675, "s": 633, "text": "Set the type attribute to value “hidden“." }, { "code": null, "e": 683, "s": 675, "text": "Syntax:" }, { "code": null, "e": 707, "s": 683, "text": "<input type=\"hidden\"> " }, { "code": null, "e": 834, "s": 707, "text": "Example: In this code, we create a hidden field that contains a record of the user who belongs to the same country i.e India. " }, { "code": null, "e": 839, "s": 834, "text": "HTML" }, { "code": "<!DOCTYPE html><html> <head> <title> How to create a hidden input field in form using HTML? </title> <style> h1 { color:green; } body { text-align:center; } </style></head> <body> <h1> GeeksforGeeks </h1> <h3> How to create a hidden input field in form using HTML? </h3> <form action=\"#\"> <input type=\"hidden\" name=\"country_name\" id=\"inputID\" value=\"India\" > Name: <input type=\"text\"> <input type=\"submit\" value=\"Submit\"> </form></body> </html>", "e": 1475, "s": 839, "text": null }, { "code": null, "e": 1496, "s": 1475, "text": "Output: " }, { "code": null, "e": 1515, "s": 1496, "text": "hidden input field" }, { "code": null, "e": 1530, "s": 1515, "text": "HTML-Questions" }, { "code": null, "e": 1535, "s": 1530, "text": "HTML" }, { "code": null, "e": 1552, "s": 1535, "text": "Web Technologies" }, { "code": null, "e": 1557, "s": 1552, "text": "HTML" }, { "code": null, "e": 1655, "s": 1557, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 1679, "s": 1655, "text": "REST API (Introduction)" }, { "code": null, "e": 1718, "s": 1679, "text": "Design a Tribute Page using HTML & CSS" }, { "code": null, "e": 1757, "s": 1718, "text": "Build a Survey Form using HTML and CSS" }, { "code": null, "e": 1777, "s": 1757, "text": "Angular File Upload" }, { "code": null, "e": 1806, "s": 1777, "text": "Form validation using jQuery" }, { "code": null, "e": 1839, "s": 1806, "text": "Installation of Node.js on Linux" }, { "code": null, "e": 1900, "s": 1839, "text": "Difference between var, let and const keywords in JavaScript" }, { "code": null, "e": 1943, "s": 1900, "text": "How to fetch data from an API in ReactJS ?" }, { "code": null, "e": 2015, "s": 1943, "text": "Differences between Functional Components and Class Components in React" } ]
How to Extract the Last Word From a Cell in Excel?
30 Nov, 2021

In this article, we explain how to extract the last word from the text in a cell using Excel functions. Extracting words from text is an important task in text processing.

Eg. Suppose we have a data file with a field called "Product_Category", which combines a product name and its category name separated by a space, as below. Assume that the last word of the "Product_Category" field is the category. We have to extract the categories from the given data into column B for further analysis.

Sample Data:

We use the below list of 4 built-in Excel functions to extract the last word.

Syntax: REPT(text, number)
Where,
text – character to repeat
number – number of times to repeat the character
Eg: REPT("*",10) – returns **********

Syntax: SUBSTITUTE( text, old_text, new_text, [instance_number] )
Where,
text – the original text
old_text – the text to be replaced
new_text – the text to replace the old text with
[instance_number] – Optional. Indicates which occurrence of the old text to replace
Eg: SUBSTITUTE("Filo Mix"," ",REPT("*",10)) – returns Filo**********Mix

Syntax: RIGHT( text, [number_of_characters] )
Where,
text – original text
[number_of_characters] – Optional. Number of characters to extract from the right
Eg: RIGHT("Filo**********Mix",10) – returns *******Mix

Syntax: TRIM(text)
Where,
text – removes leading and trailing spaces
Eg: TRIM("   Mix") – returns Mix

Follow the below steps to extract the last word from a cell in Excel:

Step 1: Write the header "Category" in cell B1.

Step 2: Write the below formula in cell B2. In the given data, no category name is longer than 10 characters, so we used 10 in both REPT() and RIGHT(). You can use any number greater than the maximum length of the last word.

=TRIM(RIGHT(SUBSTITUTE(A2," ",REPT(" ",10)),10))

Step 3: Drag the formula from B2 to B14 to fill the same formula into all the other cells.
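As a side note that is not part of the original article, recent Excel 365 builds include the TEXTAFTER function, which can usually produce the same result in a single call — the -1 instance number asks for the text after the last space:

=TEXTAFTER(A2, " ", -1)

The REPT/SUBSTITUTE/RIGHT/TRIM approach above remains the portable choice, since TEXTAFTER is not available in older Excel versions.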
[ { "code": null, "e": 28, "s": 0, "text": "\n30 Nov, 2021" }, { "code": null, "e": 206, "s": 28, "text": "In this article, we explain how to extract the last word from a text in a cell using the Excel function. Extracting words from a text is an important task in text processing. " }, { "code": null, "e": 533, "s": 206, "text": "Eg. Suppose we have a data file with a field called “Product_Category”, which was combined both product name and their respective category name with space as below. Assume that the last word from the “Product_Category” field is a category. We have to extract categories from the given data in column B for further analysis." }, { "code": null, "e": 546, "s": 533, "text": "Sample Data:" }, { "code": null, "e": 628, "s": 546, "text": "We use the below list of 4 Excel user-defined functions to extract the last word." }, { "code": null, "e": 636, "s": 628, "text": "Syntax:" }, { "code": null, "e": 655, "s": 636, "text": "REPT(text, number)" }, { "code": null, "e": 662, "s": 655, "text": "Where," }, { "code": null, "e": 689, "s": 662, "text": "text – character to repeat" }, { "code": null, "e": 738, "s": 689, "text": "number – number of times to repeat the character" }, { "code": null, "e": 776, "s": 738, "text": "Eg: REPT(“*“,10) – returns **********" }, { "code": null, "e": 784, "s": 776, "text": "Syntax:" }, { "code": null, "e": 842, "s": 784, "text": "SUBSTITUTE( text, old_text, new_text, [instance_number] )" }, { "code": null, "e": 849, "s": 842, "text": "Where," }, { "code": null, "e": 875, "s": 849, "text": "text – the original text " }, { "code": null, "e": 907, "s": 875, "text": "old_text – text need to replace" }, { "code": null, "e": 944, "s": 907, "text": "new_text –text replace with old text" }, { "code": null, "e": 1038, "s": 944, "text": "[instance_number] – Optional. The number indicates the instance number of old text to replace" }, { "code": null, "e": 1110, "s": 1038, "text": "Eg: SUBSTITUTE(“Filo Mix”,” “,REPT(“*”,10)) – returns Filo**********Mix" }, { "code": null, "e": 1118, "s": 1110, "text": "Syntax:" }, { "code": null, "e": 1156, "s": 1118, "text": "RIGHT( text, [number_of_characters] )" }, { "code": null, "e": 1163, "s": 1156, "text": "Where," }, { "code": null, "e": 1185, "s": 1163, "text": "text – original text " }, { "code": null, "e": 1262, "s": 1185, "text": "[number_of_characters] – optional. Number characters extract from the right." }, { "code": null, "e": 1314, "s": 1262, "text": "Eg: RIGHT(“Filo**********Mix”) – returns *******Mix" }, { "code": null, "e": 1322, "s": 1314, "text": "Syntax:" }, { "code": null, "e": 1333, "s": 1322, "text": "TRIM(text)" }, { "code": null, "e": 1340, "s": 1333, "text": "Where," }, { "code": null, "e": 1383, "s": 1340, "text": "text – Removes leading and trailing spaces" }, { "code": null, "e": 1418, "s": 1383, "text": "Eg: Eg: Trim(“Mix”) – returns Mix " }, { "code": null, "e": 1489, "s": 1418, "text": "Follow the below steps to Extract the last word from a Cell in Excel:" }, { "code": null, "e": 1533, "s": 1489, "text": "Step 1: Write header “Category” in cell B1." }, { "code": null, "e": 1760, "s": 1533, "text": "Step 2: Write the below formula to cells “B2”. In the given data category name is not more than 10 characters. So we used 10 in both REPT() and SUBSTITUTE(). You can use any number greater than the maximum of the last word." 
}, { "code": null, "e": 1809, "s": 1760, "text": "=TRIM(RIGHT(SUBSTITUTE(A2,\" \",REPT(\" \",10)),10))" }, { "code": null, "e": 1884, "s": 1809, "text": "Step 3: Drag formula B2 to B14 to fill the same formula to all other cells" }, { "code": null, "e": 1900, "s": 1884, "text": "Excel-functions" }, { "code": null, "e": 1907, "s": 1900, "text": "Picked" }, { "code": null, "e": 1913, "s": 1907, "text": "Excel" }, { "code": null, "e": 2011, "s": 1913, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 2049, "s": 2011, "text": "How to Delete Blank Columns in Excel?" }, { "code": null, "e": 2081, "s": 2049, "text": "How to Normalize Data in Excel?" }, { "code": null, "e": 2122, "s": 2081, "text": "How to Get Length of Array in Excel VBA?" }, { "code": null, "e": 2177, "s": 2122, "text": "How to Find the Last Used Row and Column in Excel VBA?" }, { "code": null, "e": 2205, "s": 2177, "text": "How to Use Solver in Excel?" }, { "code": null, "e": 2245, "s": 2205, "text": "How to make a 3 Axis Graph using Excel?" }, { "code": null, "e": 2261, "s": 2245, "text": "Macros in Excel" }, { "code": null, "e": 2295, "s": 2261, "text": "Introduction to Excel Spreadsheet" }, { "code": null, "e": 2353, "s": 2295, "text": "How to Show Percentages in Stacked Column Chart in Excel?" } ]
Scanner nextDouble() method in Java with Examples
12 Oct, 2018 The nextDouble() method of java.util.Scanner class scans the next token of the input as a Double. If the translation is successful, the scanner advances past the input that matched. Syntax: public double nextDouble() Parameters: The function does not accepts any parameter. Return Value: This function returns the Double scanned from the input. Exceptions: The function throws three exceptions as described below: InputMismatchException: if the next token does not matches the Double regular expression, or is out of range NoSuchElementException: throws if input is exhausted IllegalStateException: throws if this scanner is closed Below programs illustrate the above function: Program 1: // Java program to illustrate the// nextDouble() method of Scanner class in Java// without parameter import java.util.*; public class GFG1 { public static void main(String[] argv) throws Exception { String s = "Gfg 9 + 6 = 12.0"; // create a new scanner // with the specified String Object Scanner scanner = new Scanner(s); while (scanner.hasNext()) { // if the next is a Double, // print found and the Double if (scanner.hasNextDouble()) { System.out.println("Found Double value :" + scanner.nextDouble()); } // if no Double is found, // print "Not Found:" and the token else { System.out.println("Not found Double() value :" + scanner.next()); } } scanner.close(); }} Not found Double() value :Gfg Found Double value :9.0 Not found Double() value :+ Found Double value :6.0 Not found Double() value := Found Double value :12.0 Program 2: To demonstrate InputMismatchException // Java program to illustrate the// nextDouble() method of Scanner class in Java// InputMismatchException import java.util.*; public class GFG1 { public static void main(String[] argv) throws Exception { try { String s = "Gfg 9 + 6 = 12.0"; // create a new scanner // with the specified String Object Scanner scanner = new Scanner(s); while (scanner.hasNext()) { // if the next is a Double // print found and the Double // since the value 60 is out of range // it throws an exception System.out.println("Next Double value :" + scanner.nextDouble()); } scanner.close(); } catch (Exception e) { System.out.println("Exception thrown: " + e); } }} Exception thrown: java.util.InputMismatchException Program 3: To demonstrate NoSuchElementException // Java program to illustrate the// nextDouble() method of Scanner class in Java// NoSuchElementException import java.util.*; public class GFG1 { public static void main(String[] argv) throws Exception { try { String s = "Gfg"; // create a new scanner // with the specified String Object Scanner scanner = new Scanner(s); // Trying to get the next Double value // more times than the scanner // Hence it will throw exception for (int i = 0; i < 5; i++) { // if the next is a Double, // print found and the Double if (scanner.hasNextDouble()) { System.out.println("Found Double value :" + scanner.nextDouble()); } // if no Double is found, // print "Not Found:" and the token else { System.out.println("Not found Double value :" + scanner.next()); } } scanner.close(); } catch (Exception e) { System.out.println("Exception thrown: " + e); } }} Not found Double value :Gfg Exception thrown: java.util.NoSuchElementException Program 4: To demonstrate IllegalStateException // Java program to illustrate the// nextDouble() method of Scanner class in Java// IllegalStateException import java.util.*; public class GFG1 { public static void main(String[] argv) throws Exception { try { String s = "Gfg 9 + 6 = 12.0"; // create a new 
scanner // with the specified String Object Scanner scanner = new Scanner(s); // close the scanner scanner.close(); System.out.println("Scanner Closed"); System.out.println("Trying to get " + "next Double value"); while (scanner.hasNext()) { // if the next is a Double, // print found and the Double if (scanner.hasNextDouble()) { System.out.println("Found Double value :" + scanner.nextDouble()); } // if no Double is found, // print "Not Found:" and the token else { System.out.println("Not found Double value :" + scanner.next()); } } } catch (Exception e) { System.out.println("Exception thrown: " + e); } }} Scanner Closed Trying to get next Double value Exception thrown: java.lang.IllegalStateException: Scanner closed Reference: https://docs.oracle.com/javase/7/docs/api/java/util/Scanner.html#nextDouble() Java - util package Java-Functions Java-Library Java Java Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. Interfaces in Java ArrayList in Java Collections in Java Multidimensional Arrays in Java Stream In Java Set in Java Singleton Class in Java Stack Class in Java Initialize an ArrayList in Java Initializing a List in Java
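As a small supplementary sketch (not part of the original article), the hasNextDouble()/nextDouble() pair is also handy for pulling just the numeric tokens out of mixed input — here summing them while skipping everything else. The class name SumDoubles is made up for this example.

// Sketch: sum only the tokens that parse as doubles, skipping the rest.
import java.util.Scanner;

public class SumDoubles {
    public static void main(String[] args) {
        Scanner scanner = new Scanner("Gfg 9 + 6 = 12.0");
        double sum = 0;
        while (scanner.hasNext()) {
            if (scanner.hasNextDouble()) {
                sum += scanner.nextDouble();
            } else {
                scanner.next(); // discard non-numeric token
            }
        }
        scanner.close();
        System.out.println("Sum of numeric tokens: " + sum); // prints 27.0
    }
}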
[ { "code": null, "e": 53, "s": 25, "text": "\n12 Oct, 2018" }, { "code": null, "e": 235, "s": 53, "text": "The nextDouble() method of java.util.Scanner class scans the next token of the input as a Double. If the translation is successful, the scanner advances past the input that matched." }, { "code": null, "e": 243, "s": 235, "text": "Syntax:" }, { "code": null, "e": 270, "s": 243, "text": "public double nextDouble()" }, { "code": null, "e": 327, "s": 270, "text": "Parameters: The function does not accepts any parameter." }, { "code": null, "e": 398, "s": 327, "text": "Return Value: This function returns the Double scanned from the input." }, { "code": null, "e": 467, "s": 398, "text": "Exceptions: The function throws three exceptions as described below:" }, { "code": null, "e": 576, "s": 467, "text": "InputMismatchException: if the next token does not matches the Double regular expression, or is out of range" }, { "code": null, "e": 629, "s": 576, "text": "NoSuchElementException: throws if input is exhausted" }, { "code": null, "e": 685, "s": 629, "text": "IllegalStateException: throws if this scanner is closed" }, { "code": null, "e": 731, "s": 685, "text": "Below programs illustrate the above function:" }, { "code": null, "e": 742, "s": 731, "text": "Program 1:" }, { "code": "// Java program to illustrate the// nextDouble() method of Scanner class in Java// without parameter import java.util.*; public class GFG1 { public static void main(String[] argv) throws Exception { String s = \"Gfg 9 + 6 = 12.0\"; // create a new scanner // with the specified String Object Scanner scanner = new Scanner(s); while (scanner.hasNext()) { // if the next is a Double, // print found and the Double if (scanner.hasNextDouble()) { System.out.println(\"Found Double value :\" + scanner.nextDouble()); } // if no Double is found, // print \"Not Found:\" and the token else { System.out.println(\"Not found Double() value :\" + scanner.next()); } } scanner.close(); }}", "e": 1675, "s": 742, "text": null }, { "code": null, "e": 1835, "s": 1675, "text": "Not found Double() value :Gfg\nFound Double value :9.0\nNot found Double() value :+\nFound Double value :6.0\nNot found Double() value :=\nFound Double value :12.0\n" }, { "code": null, "e": 1884, "s": 1835, "text": "Program 2: To demonstrate InputMismatchException" }, { "code": "// Java program to illustrate the// nextDouble() method of Scanner class in Java// InputMismatchException import java.util.*; public class GFG1 { public static void main(String[] argv) throws Exception { try { String s = \"Gfg 9 + 6 = 12.0\"; // create a new scanner // with the specified String Object Scanner scanner = new Scanner(s); while (scanner.hasNext()) { // if the next is a Double // print found and the Double // since the value 60 is out of range // it throws an exception System.out.println(\"Next Double value :\" + scanner.nextDouble()); } scanner.close(); } catch (Exception e) { System.out.println(\"Exception thrown: \" + e); } }}", "e": 2783, "s": 1884, "text": null }, { "code": null, "e": 2835, "s": 2783, "text": "Exception thrown: java.util.InputMismatchException\n" }, { "code": null, "e": 2884, "s": 2835, "text": "Program 3: To demonstrate NoSuchElementException" }, { "code": "// Java program to illustrate the// nextDouble() method of Scanner class in Java// NoSuchElementException import java.util.*; public class GFG1 { public static void main(String[] argv) throws Exception { try { String s = \"Gfg\"; // create a new scanner // with the specified String Object Scanner 
scanner = new Scanner(s); // Trying to get the next Double value // more times than the scanner // Hence it will throw exception for (int i = 0; i < 5; i++) { // if the next is a Double, // print found and the Double if (scanner.hasNextDouble()) { System.out.println(\"Found Double value :\" + scanner.nextDouble()); } // if no Double is found, // print \"Not Found:\" and the token else { System.out.println(\"Not found Double value :\" + scanner.next()); } } scanner.close(); } catch (Exception e) { System.out.println(\"Exception thrown: \" + e); } }}", "e": 4140, "s": 2884, "text": null }, { "code": null, "e": 4220, "s": 4140, "text": "Not found Double value :Gfg\nException thrown: java.util.NoSuchElementException\n" }, { "code": null, "e": 4268, "s": 4220, "text": "Program 4: To demonstrate IllegalStateException" }, { "code": "// Java program to illustrate the// nextDouble() method of Scanner class in Java// IllegalStateException import java.util.*; public class GFG1 { public static void main(String[] argv) throws Exception { try { String s = \"Gfg 9 + 6 = 12.0\"; // create a new scanner // with the specified String Object Scanner scanner = new Scanner(s); // close the scanner scanner.close(); System.out.println(\"Scanner Closed\"); System.out.println(\"Trying to get \" + \"next Double value\"); while (scanner.hasNext()) { // if the next is a Double, // print found and the Double if (scanner.hasNextDouble()) { System.out.println(\"Found Double value :\" + scanner.nextDouble()); } // if no Double is found, // print \"Not Found:\" and the token else { System.out.println(\"Not found Double value :\" + scanner.next()); } } } catch (Exception e) { System.out.println(\"Exception thrown: \" + e); } }}", "e": 5584, "s": 4268, "text": null }, { "code": null, "e": 5698, "s": 5584, "text": "Scanner Closed\nTrying to get next Double value\nException thrown: java.lang.IllegalStateException: Scanner closed\n" }, { "code": null, "e": 5787, "s": 5698, "text": "Reference: https://docs.oracle.com/javase/7/docs/api/java/util/Scanner.html#nextDouble()" }, { "code": null, "e": 5807, "s": 5787, "text": "Java - util package" }, { "code": null, "e": 5822, "s": 5807, "text": "Java-Functions" }, { "code": null, "e": 5835, "s": 5822, "text": "Java-Library" }, { "code": null, "e": 5840, "s": 5835, "text": "Java" }, { "code": null, "e": 5845, "s": 5840, "text": "Java" }, { "code": null, "e": 5943, "s": 5845, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 5962, "s": 5943, "text": "Interfaces in Java" }, { "code": null, "e": 5980, "s": 5962, "text": "ArrayList in Java" }, { "code": null, "e": 6000, "s": 5980, "text": "Collections in Java" }, { "code": null, "e": 6032, "s": 6000, "text": "Multidimensional Arrays in Java" }, { "code": null, "e": 6047, "s": 6032, "text": "Stream In Java" }, { "code": null, "e": 6059, "s": 6047, "text": "Set in Java" }, { "code": null, "e": 6083, "s": 6059, "text": "Singleton Class in Java" }, { "code": null, "e": 6103, "s": 6083, "text": "Stack Class in Java" }, { "code": null, "e": 6135, "s": 6103, "text": "Initialize an ArrayList in Java" } ]
How to create a list of uniformly spaced numbers using a logarithmic scale with Python?
15 Mar, 2021 In this article, we will create a list of uniformly spaced numbers using a logarithmic scale. It means on a log scale difference between two adjacent samples is the same. The goal can be achieved using two different functions from the Python Numpy library. numpy.logspace: This function returns number scaled evenly on logarithmic scale. Parameters: start: Starting value of sequence is base**start stop: If endpoint is True then ending value of sequence is base**stop num (Optional): Specifies the number of samples to generate endpoint (Optional): It can either be true or false with default value true base (Optional): Specifies the base of log sequence. Default value is 10. dtype (Optional): Specifies the type of output array axis (Optional): The axis in the result to store the samples. Return: It returns array of samples equally spaced on log scale. numpy.geomspace: This function is similar to logspace function only difference being end points are specified directly. In Output sample every output is obtained by multiplying previous output by same constant. Parameters: start: It is the starting value of sequence stop: If endpoint is True then it is the ending value of sequence num (Optional): Specifies the number of samples to generate endpoint (Optional): It can either be true or false with default value true dtype (Optional): Specifies the type of output array axis (Optional): The axis in the result to store the samples. Return: It returns array of samples equally spaced on log scale. Example 1: This example uses logspace function. In this example, start is passed as 1 and the stop is passed as 3 with the base being 10. So starting point of the sequence will be 10**1 = 10 and the ending point of the sequence will be 10**3 = 1000. Python3 # importing the libraryimport numpy as npimport matplotlib.pyplot as plt # Initializing variabley = np.ones(10) # Calculating resultres = np.logspace(1, 3, 10, endpoint = True) # Printing the resultprint(res) # Plotting the graphplt.scatter(res, y, color = 'green')plt.title('logarithmically spaced numbers')plt.show() Output: Example 2: This example generates the same list as the previous example using geomspace function. Here we directly passed 10 and 1000 as starting and ending points Python3 # importing the libraryimport numpy as npimport matplotlib.pyplot as plt # Initializing variabley = np.ones(10) # Calculating resultres = np.geomspace(10, 1000, 10, endpoint = True) # Printing the resultprint(res) # Plotting the graphplt.scatter(res, y, color = 'green')plt.title('logarithmically spaced numbers')plt.show() Output: Example 3: In this example, endpoint is set to false so it will generate n+1 sample and return only first n sample i.e. stop will not be included in the sequence. Python3 # importing the libraryimport numpy as npimport matplotlib.pyplot as plt # Initializing variabley = np.ones(10) # Calculating resultres = np.logspace(1, 3, 10, endpoint = False) # Printing the resultprint(res) Output: [ 10. 15.84893192 25.11886432 39.81071706 63.09573445 100. 158.48931925 251.18864315 398.10717055 630.95734448] Picked Python numpy-Mathematical Function Python-numpy Python Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. How to Install PIP on Windows ? Python Classes and Objects Python OOPs Concepts Introduction To PYTHON How to drop one or multiple columns in Pandas Dataframe Python | os.path.join() method How To Convert Python Dictionary To JSON? 
Check if element exists in list in Python Python | datetime.timedelta() function Python | Get unique values from a list
[ { "code": null, "e": 28, "s": 0, "text": "\n15 Mar, 2021" }, { "code": null, "e": 285, "s": 28, "text": "In this article, we will create a list of uniformly spaced numbers using a logarithmic scale. It means on a log scale difference between two adjacent samples is the same. The goal can be achieved using two different functions from the Python Numpy library." }, { "code": null, "e": 366, "s": 285, "text": "numpy.logspace: This function returns number scaled evenly on logarithmic scale." }, { "code": null, "e": 378, "s": 366, "text": "Parameters:" }, { "code": null, "e": 428, "s": 378, "text": "start: Starting value of sequence is base**start" }, { "code": null, "e": 498, "s": 428, "text": "stop: If endpoint is True then ending value of sequence is base**stop" }, { "code": null, "e": 558, "s": 498, "text": "num (Optional): Specifies the number of samples to generate" }, { "code": null, "e": 634, "s": 558, "text": "endpoint (Optional): It can either be true or false with default value true" }, { "code": null, "e": 708, "s": 634, "text": "base (Optional): Specifies the base of log sequence. Default value is 10." }, { "code": null, "e": 761, "s": 708, "text": "dtype (Optional): Specifies the type of output array" }, { "code": null, "e": 823, "s": 761, "text": "axis (Optional): The axis in the result to store the samples." }, { "code": null, "e": 888, "s": 823, "text": "Return: It returns array of samples equally spaced on log scale." }, { "code": null, "e": 1099, "s": 888, "text": "numpy.geomspace: This function is similar to logspace function only difference being end points are specified directly. In Output sample every output is obtained by multiplying previous output by same constant." }, { "code": null, "e": 1111, "s": 1099, "text": "Parameters:" }, { "code": null, "e": 1156, "s": 1111, "text": "start: It is the starting value of sequence" }, { "code": null, "e": 1222, "s": 1156, "text": "stop: If endpoint is True then it is the ending value of sequence" }, { "code": null, "e": 1282, "s": 1222, "text": "num (Optional): Specifies the number of samples to generate" }, { "code": null, "e": 1358, "s": 1282, "text": "endpoint (Optional): It can either be true or false with default value true" }, { "code": null, "e": 1411, "s": 1358, "text": "dtype (Optional): Specifies the type of output array" }, { "code": null, "e": 1473, "s": 1411, "text": "axis (Optional): The axis in the result to store the samples." }, { "code": null, "e": 1538, "s": 1473, "text": "Return: It returns array of samples equally spaced on log scale." }, { "code": null, "e": 1788, "s": 1538, "text": "Example 1: This example uses logspace function. In this example, start is passed as 1 and the stop is passed as 3 with the base being 10. So starting point of the sequence will be 10**1 = 10 and the ending point of the sequence will be 10**3 = 1000." }, { "code": null, "e": 1796, "s": 1788, "text": "Python3" }, { "code": "# importing the libraryimport numpy as npimport matplotlib.pyplot as plt # Initializing variabley = np.ones(10) # Calculating resultres = np.logspace(1, 3, 10, endpoint = True) # Printing the resultprint(res) # Plotting the graphplt.scatter(res, y, color = 'green')plt.title('logarithmically spaced numbers')plt.show()", "e": 2119, "s": 1796, "text": null }, { "code": null, "e": 2127, "s": 2119, "text": "Output:" }, { "code": null, "e": 2291, "s": 2127, "text": "Example 2: This example generates the same list as the previous example using geomspace function. 
Here we directly passed 10 and 1000 as starting and ending points" }, { "code": null, "e": 2299, "s": 2291, "text": "Python3" }, { "code": "# importing the libraryimport numpy as npimport matplotlib.pyplot as plt # Initializing variabley = np.ones(10) # Calculating resultres = np.geomspace(10, 1000, 10, endpoint = True) # Printing the resultprint(res) # Plotting the graphplt.scatter(res, y, color = 'green')plt.title('logarithmically spaced numbers')plt.show()", "e": 2627, "s": 2299, "text": null }, { "code": null, "e": 2635, "s": 2627, "text": "Output:" }, { "code": null, "e": 2798, "s": 2635, "text": "Example 3: In this example, endpoint is set to false so it will generate n+1 sample and return only first n sample i.e. stop will not be included in the sequence." }, { "code": null, "e": 2806, "s": 2798, "text": "Python3" }, { "code": "# importing the libraryimport numpy as npimport matplotlib.pyplot as plt # Initializing variabley = np.ones(10) # Calculating resultres = np.logspace(1, 3, 10, endpoint = False) # Printing the resultprint(res)", "e": 3019, "s": 2806, "text": null }, { "code": null, "e": 3027, "s": 3019, "text": "Output:" }, { "code": null, "e": 3160, "s": 3027, "text": "[ 10. 15.84893192 25.11886432 39.81071706 63.09573445\n 100. 158.48931925 251.18864315 398.10717055 630.95734448]" }, { "code": null, "e": 3167, "s": 3160, "text": "Picked" }, { "code": null, "e": 3202, "s": 3167, "text": "Python numpy-Mathematical Function" }, { "code": null, "e": 3215, "s": 3202, "text": "Python-numpy" }, { "code": null, "e": 3222, "s": 3215, "text": "Python" }, { "code": null, "e": 3320, "s": 3222, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 3352, "s": 3320, "text": "How to Install PIP on Windows ?" }, { "code": null, "e": 3379, "s": 3352, "text": "Python Classes and Objects" }, { "code": null, "e": 3400, "s": 3379, "text": "Python OOPs Concepts" }, { "code": null, "e": 3423, "s": 3400, "text": "Introduction To PYTHON" }, { "code": null, "e": 3479, "s": 3423, "text": "How to drop one or multiple columns in Pandas Dataframe" }, { "code": null, "e": 3510, "s": 3479, "text": "Python | os.path.join() method" }, { "code": null, "e": 3552, "s": 3510, "text": "How To Convert Python Dictionary To JSON?" }, { "code": null, "e": 3594, "s": 3552, "text": "Check if element exists in list in Python" }, { "code": null, "e": 3633, "s": 3594, "text": "Python | datetime.timedelta() function" } ]
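A quick way to check what "evenly spaced on a logarithmic scale" means is to rebuild the sequence from the examples by hand. The short sketch below is an illustrative addition (the variable names are placeholders, not from the article): np.logspace(a, b, n) is simply 10 raised to np.linspace(a, b, n), and the same values can be produced without NumPy by stepping evenly between log10 of the two endpoints.

import math
import numpy as np

# logspace(1, 3, 10) == 10 ** linspace(1, 3, 10): the exponents are evenly
# spaced, so the resulting values are evenly spaced on a log scale.
via_logspace = np.logspace(1, 3, 10, endpoint=True)
via_linspace = 10.0 ** np.linspace(1, 3, 10, endpoint=True)
print(np.allclose(via_logspace, via_linspace))        # True

# The same sequence without NumPy: step evenly between log10(10) and log10(1000).
lo, hi, n = math.log10(10), math.log10(1000), 10
step = (hi - lo) / (n - 1)
pure_python = [10 ** (lo + i * step) for i in range(n)]
print(all(math.isclose(a, b) for a, b in zip(via_logspace, pure_python)))  # True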
Number of divisors of a given number N which are divisible by K
06 May, 2021 Given a number N and a number K. The task is to find the number of divisors of N which are divisible by K. Here K is a number always less than or equal to √(N) Examples: Input: N = 12, K = 3 Output: 3 Input: N = 8, K = 2 Output: 3 Simple Approach: A simple approach is to check all the numbers from 1 to N and check whether any number is a divisor of N and is divisible by K. Count such numbers less than N which satisfies both the conditions.Below is the implementation of the above approach: C++ Java Python3 C# PHP Javascript // C++ program to count number of divisors// of N which are divisible by K #include <iostream>using namespace std; // Function to count number of divisors// of N which are divisible by Kint countDivisors(int n, int k){ // Variable to store // count of divisors int count = 0, i; // Traverse from 1 to n for (i = 1; i <= n; i++) { // increase the count if both // the conditions are satisfied if (n % i == 0 && i % k == 0) { count++; } } return count;} // Driver codeint main(){ int n = 12, k = 3; cout << countDivisors(n, k); return 0;} // Java program to count number of divisors// of N which are divisible by K import java.io.*; class GFG { // Function to count number of divisors// of N which are divisible by K static int countDivisors(int n, int k){ // Variable to store // count of divisors int count = 0, i; // Traverse from 1 to n for (i = 1; i <= n; i++) { // increase the count if both // the conditions are satisfied if (n % i == 0 && i % k == 0) { count++; } } return count;} // Driver code public static void main (String[] args) { int n = 12, k = 3; System.out.println(countDivisors(n, k)); }}// This code is contributed by shashank.. # Python program to count number# of divisors of N which are# divisible by K # Function to count number of divisors# of N which are divisible by Kdef countDivisors(n, k) : # Variable to store # count of divisors count = 0 # Traverse from 1 to n for i in range(1, n + 1) : # increase the count if both # the conditions are satisfied if (n % i == 0 and i % k == 0) : count += 1 return count # Driver code if __name__ == "__main__" : n, k = 12, 3 print(countDivisors(n, k)) # This code is contributed by ANKITRAI1 // C# program to count number// of divisors of N which are// divisible by Kusing System; class GFG{ // Function to count number// of divisors of N which// are divisible by Kstatic int countDivisors(int n, int k){ // Variable to store // count of divisors int count = 0, i; // Traverse from 1 to n for (i = 1; i <= n; i++) { // increase the count if both // the conditions are satisfied if (n % i == 0 && i % k == 0) { count++; } } return count;} // Driver codepublic static void Main (){ int n = 12, k = 3; Console.WriteLine(countDivisors(n, k));}} // This code is contributed by Shashank <?php// PHP program to count number// of divisors of N which are// divisible by K // Function to count number of divisors// of N which are divisible by Kfunction countDivisors($n, $k){ // Variable to store // count of divisors $count = 0; // Traverse from 1 to n for ($i = 1; $i <= $n; $i++) { // increase the count if both // the conditions are satisfied if ($n % $i == 0 && $i % $k == 0) { $count++; } } return $count;} // Driver code$n = 12; $k = 3; echo countDivisors($n, $k); // This code is contributed// by Akanksha Rai(Abby_akku) <script>// Javascript implementation of above approach // Function to count number of divisors// of N which are divisible by Kfunction countDivisors(n, k){ // Variable to store // count of divisors var count = 0, i; // 
Traverse from 1 to n for (i = 1; i <= n; i++) { // increase the count if both // the conditions are satisfied if (n % i == 0 && i % k == 0) { count++; } } return count;} var n = 12, k = 3;document.write(countDivisors(n, k)); // This code is contributed by SoumikMondal.</script> 3 Time Complexity : O(N)Efficient Approach: The idea is to run a loop from 1 to < √(N) and check whether the number is a divisor of N and is divisible by K and we will also check whether ( N/i ) is divisible by K or not. As (N/i) will also be a factor of N if i is a factor of N. Below is the implementation of the above approach: C++ Java Python 3 C# PHP Javascript // C++ program to count number of divisors// of N which are divisible by K#include <bits/stdc++.h>using namespace std; // Function to count number of divisors// of N which are divisible by Kint countDivisors(int n, int k){ // integer to count the divisors int count = 0, i; // Traverse from 1 to sqrt(N) for (i = 1; i <= sqrt(n); i++) { // Check if i is a factor if (n % i == 0) { // increase the count if i // is divisible by k if (i % k == 0) { count++; } // (n/i) is also a factor // check whether it is divisible by k if ((n / i) % k == 0) { count++; } } } i--; // If the number is a perfect square // and it is divisible by k if ((i * i == n) && (i % k == 0)) { count--; } return count;} // Driver codeint main(){ int n = 16, k = 4; // Function Call cout << countDivisors(n, k); return 0;} // Java program to count number of divisors// of N which are divisible by Kimport java.io.*; class GFG { // Function to count number of divisors// of N which are divisible by Kstatic int countDivisors(int n, int k){ // integer to count the divisors int count = 0, i; // Traverse from 1 to sqrt(N) for (i = 1; i <= Math.sqrt(n); i++) { // Check if i is a factor if (n % i == 0) { // increase the count if i // is divisible by k if (i % k == 0) { count++; } // (n/i) is also a factor // check whether it is divisible by k if ((n / i) % k == 0) { count++; } } } i--; // If the number is a perfect square // and it is divisible by k if ((i * i == n) && (i % k == 0)) { count--; } return count;} // Driver code public static void main (String[] args) { int n = 16, k = 4; System.out.println( countDivisors(n, k)); }}//This Code is Contributed by akt_mit # Python 3 program to count number of# divisors of N which are divisible by Kimport math # Function to count number of divisors# of N which are divisible by Kdef countDivisors(n, k): # integer to count the divisors count = 0 # Traverse from 1 to sqrt(N) for i in range(1, int(math.sqrt(n)) + 1): # Check if i is a factor if (n % i == 0) : # increase the count if i # is divisible by k if (i % k == 0) : count += 1 # (n/i) is also a factor check # whether it is divisible by k if ((n // i) % k == 0) : count += 1 # If the number is a perfect square # and it is divisible by k # if i is sqrt reduce by 1 if ((i * i == n) and (i % k == 0)) : count -= 1 return count # Driver codeif __name__ == "__main__": n = 16 k = 4 print(countDivisors(n, k)) # This code is contributed# by ChitraNayal // C# program to count number of divisors// of N which are divisible by Kusing System; class GFG{ // Function to count number of divisors// of N which are divisible by Kstatic int countDivisors(int n, int k){ // integer to count the divisors int count = 0, i; // Traverse from 1 to sqrt(N) for (i = 1; i <= Math.Sqrt(n); i++) { // Check if i is a factor if (n % i == 0) { // increase the count if i // is divisible by k if (i % k == 0) { count++; } // (n/i) is also a factor 
check // whether it is divisible by k if ((n / i) % k == 0) { count++; } } } i--; // If the number is a perfect square // and it is divisible by k if ((i * i == n) && (i % k == 0)) { count--; } return count;} // Driver codestatic public void Main (){ int n = 16, k = 4; Console.WriteLine( countDivisors(n, k));}} // This code is contributed by ajit <?php// PHP program to count number// of divisors of N which are// divisible by K // Function to count number// of divisors of N which// are divisible by Kfunction countDivisors($n, $k){ // integer to count the divisors $count = 0; // Traverse from 1 to sqrt(N) for ($i = 1; $i <= sqrt($n); $i++) { // Check if i is a factor if ($n % $i == 0) { // increase the count if i // is divisible by k if ($i % $k == 0) { $count++; } // (n/i) is also a factor // check whether it is // divisible by k if (($n / $i) % $k == 0) { $count++; } } } i--; // If the number is a perfect // square and it is divisible by k if (($i * $i == $n) && ($i % $k == 0)) { $count--; } return $count;} // Driver code$n = 16;$k = 4; echo (countDivisors($n, $k)); // This code is contributed// by Shivi_Aggarwal?> <script> // Javascript program to count number of divisors // of N which are divisible by K // Function to count number of divisors // of N which are divisible by K function countDivisors(n, k) { // integer to count the divisors let count = 0, i; // Traverse from 1 to sqrt(N) for (i = 1; i <= Math.sqrt(n); i++) { // Check if i is a factor if (n % i == 0) { // increase the count if i // is divisible by k if (i % k == 0) { count++; } // (n/i) is also a factor check // whether it is divisible by k if ((n / i) % k == 0) { count++; } } } i--; // If the number is a perfect square // and it is divisible by k if ((i * i == n) && (i % k == 0)) { count--; } return count; } let n = 16, k = 4; document.write( countDivisors(n, k)); </script> 3 Time Complexity :O(√(n)) ankthon Shashank12 Akanksha_Rai Shivi_Aggarwal jit_t ukasp ShadabSayeed jaintanvi795 SoumikMondal divyesh072019 divisibility divisors factor Competitive Programming Mathematical Mathematical Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. Modulo 10^9+7 (1000000007) Prefix Sum Array - Implementation and Applications in Competitive Programming Bits manipulation (Important tactics) What is Competitive Programming and How to Prepare for It? Count of strings whose prefix match with the given string to a given length k Program for Fibonacci numbers Set in C++ Standard Template Library (STL) Write a program to print all permutations of a given string C++ Data Types Merge two sorted arrays
[ { "code": null, "e": 54, "s": 26, "text": "\n06 May, 2021" }, { "code": null, "e": 214, "s": 54, "text": "Given a number N and a number K. The task is to find the number of divisors of N which are divisible by K. Here K is a number always less than or equal to √(N)" }, { "code": null, "e": 225, "s": 214, "text": "Examples: " }, { "code": null, "e": 287, "s": 225, "text": "Input: N = 12, K = 3\nOutput: 3\n\nInput: N = 8, K = 2\nOutput: 3" }, { "code": null, "e": 552, "s": 287, "text": "Simple Approach: A simple approach is to check all the numbers from 1 to N and check whether any number is a divisor of N and is divisible by K. Count such numbers less than N which satisfies both the conditions.Below is the implementation of the above approach: " }, { "code": null, "e": 556, "s": 552, "text": "C++" }, { "code": null, "e": 561, "s": 556, "text": "Java" }, { "code": null, "e": 569, "s": 561, "text": "Python3" }, { "code": null, "e": 572, "s": 569, "text": "C#" }, { "code": null, "e": 576, "s": 572, "text": "PHP" }, { "code": null, "e": 587, "s": 576, "text": "Javascript" }, { "code": "// C++ program to count number of divisors// of N which are divisible by K #include <iostream>using namespace std; // Function to count number of divisors// of N which are divisible by Kint countDivisors(int n, int k){ // Variable to store // count of divisors int count = 0, i; // Traverse from 1 to n for (i = 1; i <= n; i++) { // increase the count if both // the conditions are satisfied if (n % i == 0 && i % k == 0) { count++; } } return count;} // Driver codeint main(){ int n = 12, k = 3; cout << countDivisors(n, k); return 0;}", "e": 1199, "s": 587, "text": null }, { "code": "// Java program to count number of divisors// of N which are divisible by K import java.io.*; class GFG { // Function to count number of divisors// of N which are divisible by K static int countDivisors(int n, int k){ // Variable to store // count of divisors int count = 0, i; // Traverse from 1 to n for (i = 1; i <= n; i++) { // increase the count if both // the conditions are satisfied if (n % i == 0 && i % k == 0) { count++; } } return count;} // Driver code public static void main (String[] args) { int n = 12, k = 3; System.out.println(countDivisors(n, k)); }}// This code is contributed by shashank..", "e": 1893, "s": 1199, "text": null }, { "code": "# Python program to count number# of divisors of N which are# divisible by K # Function to count number of divisors# of N which are divisible by Kdef countDivisors(n, k) : # Variable to store # count of divisors count = 0 # Traverse from 1 to n for i in range(1, n + 1) : # increase the count if both # the conditions are satisfied if (n % i == 0 and i % k == 0) : count += 1 return count # Driver code if __name__ == \"__main__\" : n, k = 12, 3 print(countDivisors(n, k)) # This code is contributed by ANKITRAI1", "e": 2481, "s": 1893, "text": null }, { "code": "// C# program to count number// of divisors of N which are// divisible by Kusing System; class GFG{ // Function to count number// of divisors of N which// are divisible by Kstatic int countDivisors(int n, int k){ // Variable to store // count of divisors int count = 0, i; // Traverse from 1 to n for (i = 1; i <= n; i++) { // increase the count if both // the conditions are satisfied if (n % i == 0 && i % k == 0) { count++; } } return count;} // Driver codepublic static void Main (){ int n = 12, k = 3; Console.WriteLine(countDivisors(n, k));}} // This code is contributed by Shashank", "e": 3158, "s": 2481, "text": null }, { "code": 
"<?php// PHP program to count number// of divisors of N which are// divisible by K // Function to count number of divisors// of N which are divisible by Kfunction countDivisors($n, $k){ // Variable to store // count of divisors $count = 0; // Traverse from 1 to n for ($i = 1; $i <= $n; $i++) { // increase the count if both // the conditions are satisfied if ($n % $i == 0 && $i % $k == 0) { $count++; } } return $count;} // Driver code$n = 12; $k = 3; echo countDivisors($n, $k); // This code is contributed// by Akanksha Rai(Abby_akku)", "e": 3770, "s": 3158, "text": null }, { "code": "<script>// Javascript implementation of above approach // Function to count number of divisors// of N which are divisible by Kfunction countDivisors(n, k){ // Variable to store // count of divisors var count = 0, i; // Traverse from 1 to n for (i = 1; i <= n; i++) { // increase the count if both // the conditions are satisfied if (n % i == 0 && i % k == 0) { count++; } } return count;} var n = 12, k = 3;document.write(countDivisors(n, k)); // This code is contributed by SoumikMondal.</script>", "e": 4332, "s": 3770, "text": null }, { "code": null, "e": 4334, "s": 4332, "text": "3" }, { "code": null, "e": 4664, "s": 4334, "text": "Time Complexity : O(N)Efficient Approach: The idea is to run a loop from 1 to < √(N) and check whether the number is a divisor of N and is divisible by K and we will also check whether ( N/i ) is divisible by K or not. As (N/i) will also be a factor of N if i is a factor of N. Below is the implementation of the above approach: " }, { "code": null, "e": 4668, "s": 4664, "text": "C++" }, { "code": null, "e": 4673, "s": 4668, "text": "Java" }, { "code": null, "e": 4682, "s": 4673, "text": "Python 3" }, { "code": null, "e": 4685, "s": 4682, "text": "C#" }, { "code": null, "e": 4689, "s": 4685, "text": "PHP" }, { "code": null, "e": 4700, "s": 4689, "text": "Javascript" }, { "code": "// C++ program to count number of divisors// of N which are divisible by K#include <bits/stdc++.h>using namespace std; // Function to count number of divisors// of N which are divisible by Kint countDivisors(int n, int k){ // integer to count the divisors int count = 0, i; // Traverse from 1 to sqrt(N) for (i = 1; i <= sqrt(n); i++) { // Check if i is a factor if (n % i == 0) { // increase the count if i // is divisible by k if (i % k == 0) { count++; } // (n/i) is also a factor // check whether it is divisible by k if ((n / i) % k == 0) { count++; } } } i--; // If the number is a perfect square // and it is divisible by k if ((i * i == n) && (i % k == 0)) { count--; } return count;} // Driver codeint main(){ int n = 16, k = 4; // Function Call cout << countDivisors(n, k); return 0;}", "e": 5732, "s": 4700, "text": null }, { "code": "// Java program to count number of divisors// of N which are divisible by Kimport java.io.*; class GFG { // Function to count number of divisors// of N which are divisible by Kstatic int countDivisors(int n, int k){ // integer to count the divisors int count = 0, i; // Traverse from 1 to sqrt(N) for (i = 1; i <= Math.sqrt(n); i++) { // Check if i is a factor if (n % i == 0) { // increase the count if i // is divisible by k if (i % k == 0) { count++; } // (n/i) is also a factor // check whether it is divisible by k if ((n / i) % k == 0) { count++; } } } i--; // If the number is a perfect square // and it is divisible by k if ((i * i == n) && (i % k == 0)) { count--; } return count;} // Driver code public static void main (String[] args) { int n = 16, k = 4; System.out.println( 
countDivisors(n, k)); }}//This Code is Contributed by akt_mit", "e": 6853, "s": 5732, "text": null }, { "code": "# Python 3 program to count number of# divisors of N which are divisible by Kimport math # Function to count number of divisors# of N which are divisible by Kdef countDivisors(n, k): # integer to count the divisors count = 0 # Traverse from 1 to sqrt(N) for i in range(1, int(math.sqrt(n)) + 1): # Check if i is a factor if (n % i == 0) : # increase the count if i # is divisible by k if (i % k == 0) : count += 1 # (n/i) is also a factor check # whether it is divisible by k if ((n // i) % k == 0) : count += 1 # If the number is a perfect square # and it is divisible by k # if i is sqrt reduce by 1 if ((i * i == n) and (i % k == 0)) : count -= 1 return count # Driver codeif __name__ == \"__main__\": n = 16 k = 4 print(countDivisors(n, k)) # This code is contributed# by ChitraNayal", "e": 7827, "s": 6853, "text": null }, { "code": "// C# program to count number of divisors// of N which are divisible by Kusing System; class GFG{ // Function to count number of divisors// of N which are divisible by Kstatic int countDivisors(int n, int k){ // integer to count the divisors int count = 0, i; // Traverse from 1 to sqrt(N) for (i = 1; i <= Math.Sqrt(n); i++) { // Check if i is a factor if (n % i == 0) { // increase the count if i // is divisible by k if (i % k == 0) { count++; } // (n/i) is also a factor check // whether it is divisible by k if ((n / i) % k == 0) { count++; } } } i--; // If the number is a perfect square // and it is divisible by k if ((i * i == n) && (i % k == 0)) { count--; } return count;} // Driver codestatic public void Main (){ int n = 16, k = 4; Console.WriteLine( countDivisors(n, k));}} // This code is contributed by ajit", "e": 8883, "s": 7827, "text": null }, { "code": "<?php// PHP program to count number// of divisors of N which are// divisible by K // Function to count number// of divisors of N which// are divisible by Kfunction countDivisors($n, $k){ // integer to count the divisors $count = 0; // Traverse from 1 to sqrt(N) for ($i = 1; $i <= sqrt($n); $i++) { // Check if i is a factor if ($n % $i == 0) { // increase the count if i // is divisible by k if ($i % $k == 0) { $count++; } // (n/i) is also a factor // check whether it is // divisible by k if (($n / $i) % $k == 0) { $count++; } } } i--; // If the number is a perfect // square and it is divisible by k if (($i * $i == $n) && ($i % $k == 0)) { $count--; } return $count;} // Driver code$n = 16;$k = 4; echo (countDivisors($n, $k)); // This code is contributed// by Shivi_Aggarwal?>", "e": 9900, "s": 8883, "text": null }, { "code": "<script> // Javascript program to count number of divisors // of N which are divisible by K // Function to count number of divisors // of N which are divisible by K function countDivisors(n, k) { // integer to count the divisors let count = 0, i; // Traverse from 1 to sqrt(N) for (i = 1; i <= Math.sqrt(n); i++) { // Check if i is a factor if (n % i == 0) { // increase the count if i // is divisible by k if (i % k == 0) { count++; } // (n/i) is also a factor check // whether it is divisible by k if ((n / i) % k == 0) { count++; } } } i--; // If the number is a perfect square // and it is divisible by k if ((i * i == n) && (i % k == 0)) { count--; } return count; } let n = 16, k = 4; document.write( countDivisors(n, k)); </script>", "e": 11050, "s": 9900, "text": null }, { "code": null, "e": 11052, "s": 11050, "text": "3" }, { "code": null, "e": 11077, "s": 11052, "text": "Time 
Complexity :O(√(n))" }, { "code": null, "e": 11085, "s": 11077, "text": "ankthon" }, { "code": null, "e": 11096, "s": 11085, "text": "Shashank12" }, { "code": null, "e": 11109, "s": 11096, "text": "Akanksha_Rai" }, { "code": null, "e": 11124, "s": 11109, "text": "Shivi_Aggarwal" }, { "code": null, "e": 11130, "s": 11124, "text": "jit_t" }, { "code": null, "e": 11136, "s": 11130, "text": "ukasp" }, { "code": null, "e": 11149, "s": 11136, "text": "ShadabSayeed" }, { "code": null, "e": 11162, "s": 11149, "text": "jaintanvi795" }, { "code": null, "e": 11175, "s": 11162, "text": "SoumikMondal" }, { "code": null, "e": 11189, "s": 11175, "text": "divyesh072019" }, { "code": null, "e": 11202, "s": 11189, "text": "divisibility" }, { "code": null, "e": 11211, "s": 11202, "text": "divisors" }, { "code": null, "e": 11218, "s": 11211, "text": "factor" }, { "code": null, "e": 11242, "s": 11218, "text": "Competitive Programming" }, { "code": null, "e": 11255, "s": 11242, "text": "Mathematical" }, { "code": null, "e": 11268, "s": 11255, "text": "Mathematical" }, { "code": null, "e": 11366, "s": 11268, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 11393, "s": 11366, "text": "Modulo 10^9+7 (1000000007)" }, { "code": null, "e": 11471, "s": 11393, "text": "Prefix Sum Array - Implementation and Applications in Competitive Programming" }, { "code": null, "e": 11509, "s": 11471, "text": "Bits manipulation (Important tactics)" }, { "code": null, "e": 11568, "s": 11509, "text": "What is Competitive Programming and How to Prepare for It?" }, { "code": null, "e": 11646, "s": 11568, "text": "Count of strings whose prefix match with the given string to a given length k" }, { "code": null, "e": 11676, "s": 11646, "text": "Program for Fibonacci numbers" }, { "code": null, "e": 11719, "s": 11676, "text": "Set in C++ Standard Template Library (STL)" }, { "code": null, "e": 11779, "s": 11719, "text": "Write a program to print all permutations of a given string" }, { "code": null, "e": 11794, "s": 11779, "text": "C++ Data Types" } ]
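One observation worth adding to the approaches above (it is not made in the article, and the helper name below is mine): a divisor of N that is divisible by K can be written as K*m, and K*m divides N exactly when K divides N and m divides N/K. So the answer is just the ordinary divisor count of N/K, or 0 when K does not divide N, which keeps the running time at O(√(N/K)).

def count_divisors_divisible_by_k(n, k):
    # Divisors d of n with k | d correspond one-to-one to divisors m of n // k
    # (via d = k * m), provided k divides n; otherwise there are none.
    if n % k != 0:
        return 0
    m, count, i = n // k, 0, 1
    while i * i <= m:
        if m % i == 0:
            count += 1 if i * i == m else 2   # i and m // i form a divisor pair
        i += 1
    return count

# Matches the article's examples:
print(count_divisors_divisible_by_k(12, 3))   # 3  (divisors 3, 6, 12)
print(count_divisors_divisible_by_k(8, 2))    # 3  (divisors 2, 4, 8)
print(count_divisors_divisible_by_k(16, 4))   # 3  (divisors 4, 8, 16)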
Python | Reverse sequence of strictly increasing integers in a list
17 Apr, 2019 Given a list of integers, write a Python program to reverse the order of consecutively incrementing chunk in given list. Examples: Input : [0, 1, 9, 8, 7, 5, 3, 14] Output : [9, 1, 0, 8, 7, 5, 14, 3] Explanation: There are two chunks of strictly increasing elements (0, 1, 9) and (3, 14). Input : [-5, -3, 0, 1, 3, 5, -2, -12] Output : [5, 3, 1, 0, -3, -5, -2, -12] Explanation: Only one chunk of strictly increasing elements exists i.e. (-5, -3, 0, 1, 3, 5). Approach #1 : Naive (Using extended slices) A naive approach to solve the given problem is to use extended slice. We initialize two variables ‘res’ (to store final output) and block( to store chunks of incrementing integers) with empty lists. Now using a for loop, every time we check if the current element is less than the last element of block or not. If yes, add reversed chunk to ‘res’ using extended slicing (block[::-1]) and clean block (block[:] = [i]). otherwise, simply append the element to ‘block’. At last, extend ‘res’ by reversing block and output it. # Python3 program to Reverse order # of incrementing integers in list def reverseOrder(lst): res = [] block = [] for i in lst: # check if the current element is less # than the last element of block if block and i < block[-1]: # add reversed chunk to 'res' res.extend(block[::-1]) block[:] = [i] else: # append the element to 'block' block.append(i) # extend 'res' by reversing block res.extend(block[::-1]) return(res) # Driver codelst = [0, 1, 9, 8, 7, 5, 3, 14]print(reverseOrder(lst)) [9, 1, 0, 8, 7, 5, 14, 3] Approach #2 : Using list comprehension An efficient approach to the above problem is to use list comprehension. It first finds out all the positions where the incrementing integers begin and stores them in a variable ‘break_’. After this, all the chunks are reversed and are managed in the form of sublists, which are in turn stored in ‘block’. Finally, ‘block’ is unzipped and returned. # Python3 program to Reverse order # of incrementing integers in list def reverseOrder(lst): break_ = [0] + [i for i in range(1, len(lst)) if lst[i-1] > lst[i]] + [len(lst)] block =[list(reversed(lst[i:j])) for i, j in zip(break_[:-1], break_[1:])] return(sum(block, [])) # Driver codelst = [0, 1, 9, 8, 7, 5, 3, 14]print(reverseOrder(lst)) [9, 1, 0, 8, 7, 5, 14, 3] Python list-programs Python Python Programs Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here.
[ { "code": null, "e": 28, "s": 0, "text": "\n17 Apr, 2019" }, { "code": null, "e": 149, "s": 28, "text": "Given a list of integers, write a Python program to reverse the order of consecutively incrementing chunk in given list." }, { "code": null, "e": 159, "s": 149, "text": "Examples:" }, { "code": null, "e": 518, "s": 159, "text": "Input : [0, 1, 9, 8, 7, 5, 3, 14]\nOutput : [9, 1, 0, 8, 7, 5, 14, 3]\nExplanation: There are two chunks of strictly\n increasing elements (0, 1, 9) and (3, 14). \n\nInput : [-5, -3, 0, 1, 3, 5, -2, -12]\nOutput : [5, 3, 1, 0, -3, -5, -2, -12]\nExplanation: Only one chunk of strictly increasing \n elements exists i.e. (-5, -3, 0, 1, 3, 5).\n" }, { "code": null, "e": 563, "s": 518, "text": " Approach #1 : Naive (Using extended slices)" }, { "code": null, "e": 1086, "s": 563, "text": "A naive approach to solve the given problem is to use extended slice. We initialize two variables ‘res’ (to store final output) and block( to store chunks of incrementing integers) with empty lists. Now using a for loop, every time we check if the current element is less than the last element of block or not. If yes, add reversed chunk to ‘res’ using extended slicing (block[::-1]) and clean block (block[:] = [i]). otherwise, simply append the element to ‘block’. At last, extend ‘res’ by reversing block and output it." }, { "code": "# Python3 program to Reverse order # of incrementing integers in list def reverseOrder(lst): res = [] block = [] for i in lst: # check if the current element is less # than the last element of block if block and i < block[-1]: # add reversed chunk to 'res' res.extend(block[::-1]) block[:] = [i] else: # append the element to 'block' block.append(i) # extend 'res' by reversing block res.extend(block[::-1]) return(res) # Driver codelst = [0, 1, 9, 8, 7, 5, 3, 14]print(reverseOrder(lst))", "e": 1716, "s": 1086, "text": null }, { "code": null, "e": 1743, "s": 1716, "text": "[9, 1, 0, 8, 7, 5, 14, 3]\n" }, { "code": null, "e": 1783, "s": 1743, "text": " Approach #2 : Using list comprehension" }, { "code": null, "e": 2132, "s": 1783, "text": "An efficient approach to the above problem is to use list comprehension. It first finds out all the positions where the incrementing integers begin and stores them in a variable ‘break_’. After this, all the chunks are reversed and are managed in the form of sublists, which are in turn stored in ‘block’. Finally, ‘block’ is unzipped and returned." }, { "code": "# Python3 program to Reverse order # of incrementing integers in list def reverseOrder(lst): break_ = [0] + [i for i in range(1, len(lst)) if lst[i-1] > lst[i]] + [len(lst)] block =[list(reversed(lst[i:j])) for i, j in zip(break_[:-1], break_[1:])] return(sum(block, [])) # Driver codelst = [0, 1, 9, 8, 7, 5, 3, 14]print(reverseOrder(lst))", "e": 2517, "s": 2132, "text": null }, { "code": null, "e": 2544, "s": 2517, "text": "[9, 1, 0, 8, 7, 5, 14, 3]\n" }, { "code": null, "e": 2565, "s": 2544, "text": "Python list-programs" }, { "code": null, "e": 2572, "s": 2565, "text": "Python" }, { "code": null, "e": 2588, "s": 2572, "text": "Python Programs" } ]
Plot t Distribution in R
27 Jul, 2021 The t-distribution, also known as the Student’s t-distribution is a type of probability distribution that is used to perform sampling of a normally distributed distribution where the sample size is small and the standard deviation of the input distribution is unknown. The distribution normally forms a bell curve, that is, the distribution is normally distributed but with a lower peak and more observations near the tail. The t-distribution has only one associated parameter, called the degrees of freedom (df). The shape of a particular t-distribution curve relies on the number of degrees of freedom (df) chosen which is equivalent to the given sample size minus one, that is, df=n−1 A vector of coordinates can be generated using the seq() method in R, which is used to generate an incremental sequence of integers to provide a distribution sequence for the given t-distribution. The corresponding y coordinates can be constructed using the various variants of the t-distribution function which are detailed below. These are then plotted using the plot() method in R programming language. The dt() method in R is used to compute probability density analysis of the t-distribution with a specified degree of freedom. Syntax: dt(x, df ) Parameter : x – vector of quantiles df – degrees of freedom Example: R # generating x coordinatesxpos <- seq(- 100, 100, by = 20) print ("X coordinates")print (xpos) # generating y coordinates using dt() method# degreesoffreedomdegree <- 2ypos <- dt(xpos, df = degree) print ("Y coordinates")print (ypos) # plotting t distributionplot (ypos , type = "l") Output [1] “X coordinates” [1] -100 -80 -60 -40 -20 0 20 40 60 80 100 [1] “Y coordinates” [1] 9.997001e-07 1.952210e-06 4.625774e-06 1.559575e-05 1.240683e-04 [6] 3.535534e-01 1.240683e-04 1.559575e-05 4.625774e-06 1.952210e-06 [11] 9.997001e-07 The pt() method in R is used to produce a distribution function for a given student T-distribution. It is used to produce a cumulative distribution function. This function returns the area under the t-curve for any given interval. Syntax: pt(q, df, lower.tail = TRUE) Parameter : q – quantile vector df – degrees of freedom lower.tail – if TRUE (default), probabilities are P[X ≤ x], otherwise, P[X > x]. Example: R # generating x coordinatesxpos <- seq(- 100, 100, by = 20) print ("X coordinates")print (xpos) # generating y coordinates using dt() method# degreesoffreedomdegree <- 2 ypos <- pt(xpos, df = degree) print ("Y coordinates")print (ypos) # plotting t distributionplot (ypos , type = "l") Output [1] “X coordinates” [1] -100 -80 -60 -40 -20 0 20 40 60 80 100 [1] “Y coordinates” [1] 4.999250e-05 7.810669e-05 1.388310e-04 3.122073e-04 1.245332e-03 [6] 5.000000e-01 9.987547e-01 9.996878e-01 9.998612e-01 9.999219e-01 [11] 9.999500e-01 The qt() method in R is used to compute a quantile function or inverse cumulative density function for the given t-distribution for a specified number of degrees of freedom. It is used to compute the nth percentile of the student’s t-distribution with a specified degree of freedom. Syntax: qt(p, df, lower.tail = TRUE) Parameter : p – vector of probabilities df – degrees of freedom lower.tail – if TRUE (default), probabilities are P[X ≤ x], otherwise, P[X > x]. 
Example: R # generating x coordinatesxpos <- seq(0, 1, by = 0.05) # generating y coordinates using dt() method# degreesoffreedomdegree <- 2ypos <- qt(xpos, df = degree) # plotting t distributionplot (ypos , type = "l") Output The rt() method is used for random generation for the t distribution using a specified number of degrees of freedom. n number of random samples may be generated. Syntax: rt(n, df) Parameter : n – number of observations df – degrees of freedom Example: R # using a random numbern <- 1000 # degreesoffreedomdegree <- 2 # generating y coordinates using rt() methodypos <- rt(n , df = degree) # plotting t distribution in the form of# distributionhist(ypos, breaks = 100, main = "") Output anikaseth98 Picked R-Statistics R Language Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here.
[ { "code": null, "e": 28, "s": 0, "text": "\n27 Jul, 2021" }, { "code": null, "e": 453, "s": 28, "text": "The t-distribution, also known as the Student’s t-distribution is a type of probability distribution that is used to perform sampling of a normally distributed distribution where the sample size is small and the standard deviation of the input distribution is unknown. The distribution normally forms a bell curve, that is, the distribution is normally distributed but with a lower peak and more observations near the tail. " }, { "code": null, "e": 710, "s": 453, "text": "The t-distribution has only one associated parameter, called the degrees of freedom (df). The shape of a particular t-distribution curve relies on the number of degrees of freedom (df) chosen which is equivalent to the given sample size minus one, that is," }, { "code": null, "e": 717, "s": 710, "text": "df=n−1" }, { "code": null, "e": 1123, "s": 717, "text": "A vector of coordinates can be generated using the seq() method in R, which is used to generate an incremental sequence of integers to provide a distribution sequence for the given t-distribution. The corresponding y coordinates can be constructed using the various variants of the t-distribution function which are detailed below. These are then plotted using the plot() method in R programming language." }, { "code": null, "e": 1251, "s": 1123, "text": "The dt() method in R is used to compute probability density analysis of the t-distribution with a specified degree of freedom. " }, { "code": null, "e": 1259, "s": 1251, "text": "Syntax:" }, { "code": null, "e": 1270, "s": 1259, "text": "dt(x, df )" }, { "code": null, "e": 1283, "s": 1270, "text": "Parameter : " }, { "code": null, "e": 1307, "s": 1283, "text": "x – vector of quantiles" }, { "code": null, "e": 1331, "s": 1307, "text": "df – degrees of freedom" }, { "code": null, "e": 1340, "s": 1331, "text": "Example:" }, { "code": null, "e": 1342, "s": 1340, "text": "R" }, { "code": "# generating x coordinatesxpos <- seq(- 100, 100, by = 20) print (\"X coordinates\")print (xpos) # generating y coordinates using dt() method# degreesoffreedomdegree <- 2ypos <- dt(xpos, df = degree) print (\"Y coordinates\")print (ypos) # plotting t distributionplot (ypos , type = \"l\")", "e": 1627, "s": 1342, "text": null }, { "code": null, "e": 1638, "s": 1631, "text": "Output" }, { "code": null, "e": 1661, "s": 1640, "text": "[1] “X coordinates” " }, { "code": null, "e": 1720, "s": 1661, "text": "[1] -100 -80 -60 -40 -20 0 20 40 60 80 100" }, { "code": null, "e": 1741, "s": 1720, "text": "[1] “Y coordinates” " }, { "code": null, "e": 1881, "s": 1741, "text": "[1] 9.997001e-07 1.952210e-06 4.625774e-06 1.559575e-05 1.240683e-04 [6] 3.535534e-01 1.240683e-04 1.559575e-05 4.625774e-06 1.952210e-06 " }, { "code": null, "e": 1899, "s": 1881, "text": "[11] 9.997001e-07" }, { "code": null, "e": 2133, "s": 1901, "text": "The pt() method in R is used to produce a distribution function for a given student T-distribution. It is used to produce a cumulative distribution function. This function returns the area under the t-curve for any given interval. 
" }, { "code": null, "e": 2143, "s": 2135, "text": "Syntax:" }, { "code": null, "e": 2172, "s": 2143, "text": "pt(q, df, lower.tail = TRUE)" }, { "code": null, "e": 2185, "s": 2172, "text": "Parameter : " }, { "code": null, "e": 2205, "s": 2185, "text": "q – quantile vector" }, { "code": null, "e": 2229, "s": 2205, "text": "df – degrees of freedom" }, { "code": null, "e": 2310, "s": 2229, "text": "lower.tail – if TRUE (default), probabilities are P[X ≤ x], otherwise, P[X > x]." }, { "code": null, "e": 2321, "s": 2312, "text": "Example:" }, { "code": null, "e": 2325, "s": 2323, "text": "R" }, { "code": "# generating x coordinatesxpos <- seq(- 100, 100, by = 20) print (\"X coordinates\")print (xpos) # generating y coordinates using dt() method# degreesoffreedomdegree <- 2 ypos <- pt(xpos, df = degree) print (\"Y coordinates\")print (ypos) # plotting t distributionplot (ypos , type = \"l\")", "e": 2611, "s": 2325, "text": null }, { "code": null, "e": 2622, "s": 2615, "text": "Output" }, { "code": null, "e": 2645, "s": 2624, "text": "[1] “X coordinates” " }, { "code": null, "e": 2705, "s": 2645, "text": "[1] -100 -80 -60 -40 -20 0 20 40 60 80 100 " }, { "code": null, "e": 2726, "s": 2705, "text": "[1] “Y coordinates” " }, { "code": null, "e": 2866, "s": 2726, "text": "[1] 4.999250e-05 7.810669e-05 1.388310e-04 3.122073e-04 1.245332e-03 [6] 5.000000e-01 9.987547e-01 9.996878e-01 9.998612e-01 9.999219e-01 " }, { "code": null, "e": 2884, "s": 2866, "text": "[11] 9.999500e-01" }, { "code": null, "e": 3171, "s": 2886, "text": "The qt() method in R is used to compute a quantile function or inverse cumulative density function for the given t-distribution for a specified number of degrees of freedom. It is used to compute the nth percentile of the student’s t-distribution with a specified degree of freedom. " }, { "code": null, "e": 3181, "s": 3173, "text": "Syntax:" }, { "code": null, "e": 3210, "s": 3181, "text": "qt(p, df, lower.tail = TRUE)" }, { "code": null, "e": 3223, "s": 3210, "text": "Parameter : " }, { "code": null, "e": 3251, "s": 3223, "text": "p – vector of probabilities" }, { "code": null, "e": 3275, "s": 3251, "text": "df – degrees of freedom" }, { "code": null, "e": 3356, "s": 3275, "text": "lower.tail – if TRUE (default), probabilities are P[X ≤ x], otherwise, P[X > x]." }, { "code": null, "e": 3367, "s": 3358, "text": "Example:" }, { "code": null, "e": 3371, "s": 3369, "text": "R" }, { "code": "# generating x coordinatesxpos <- seq(0, 1, by = 0.05) # generating y coordinates using dt() method# degreesoffreedomdegree <- 2ypos <- qt(xpos, df = degree) # plotting t distributionplot (ypos , type = \"l\")", "e": 3579, "s": 3371, "text": null }, { "code": null, "e": 3590, "s": 3583, "text": "Output" }, { "code": null, "e": 3756, "s": 3594, "text": "The rt() method is used for random generation for the t distribution using a specified number of degrees of freedom. n number of random samples may be generated." 
}, { "code": null, "e": 3766, "s": 3758, "text": "Syntax:" }, { "code": null, "e": 3776, "s": 3766, "text": "rt(n, df)" }, { "code": null, "e": 3789, "s": 3776, "text": "Parameter : " }, { "code": null, "e": 3816, "s": 3789, "text": "n – number of observations" }, { "code": null, "e": 3840, "s": 3816, "text": "df – degrees of freedom" }, { "code": null, "e": 3851, "s": 3842, "text": "Example:" }, { "code": null, "e": 3855, "s": 3853, "text": "R" }, { "code": "# using a random numbern <- 1000 # degreesoffreedomdegree <- 2 # generating y coordinates using rt() methodypos <- rt(n , df = degree) # plotting t distribution in the form of# distributionhist(ypos, breaks = 100, main = \"\")", "e": 4127, "s": 3855, "text": null }, { "code": null, "e": 4138, "s": 4131, "text": "Output" }, { "code": null, "e": 4154, "s": 4142, "text": "anikaseth98" }, { "code": null, "e": 4161, "s": 4154, "text": "Picked" }, { "code": null, "e": 4174, "s": 4161, "text": "R-Statistics" }, { "code": null, "e": 4185, "s": 4174, "text": "R Language" } ]
How to disable (grey out) a checkbutton in Tkinter?
Tkinter provides a variety of input widgets such as entry widget, text widget, listbox, combobox, spinbox, checkbox, etc. Checkboxes are used for taking validity input and the state gets active whenever the user clicks on the checkbutton. In terms of a particular application, we can enable and disable the state of CheckButtons by using the state property. #Import the required library from tkinter import* from tkinter import ttk #Create an instance of tkinter frame win= Tk() #Set the geometry win.geometry("750x250") #Create CheckButtons chk= ttk.Checkbutton(win, text="Python") chk.pack() chk.config(state=DISABLED) win.mainloop() Running the example code will display a window with a check button that is initially disabled. We can change the state of the checkbuttons by changing the values of the state property to NORMAL or DISABLED.
[ { "code": null, "e": 1545, "s": 1187, "text": "Tkinter provides a variety of input widgets such as entry widget, text widget, listbox, combobox, spinbox, checkbox, etc. Checkboxes are used for taking validity input and the state gets active whenever the user clicks on the checkbutton. In terms of a particular application, we can enable and disable the state of CheckButtons by using the state property." }, { "code": null, "e": 1823, "s": 1545, "text": "#Import the required library\nfrom tkinter import*\nfrom tkinter import ttk\n#Create an instance of tkinter frame\nwin= Tk()\n#Set the geometry\nwin.geometry(\"750x250\")\n#Create CheckButtons\nchk= ttk.Checkbutton(win, text=\"Python\")\nchk.pack()\nchk.config(state=DISABLED)\nwin.mainloop()" }, { "code": null, "e": 1918, "s": 1823, "text": "Running the example code will display a window with a check button that is initially disabled." }, { "code": null, "e": 2030, "s": 1918, "text": "We can change the state of the checkbuttons by changing the values of the state property to NORMAL or DISABLED." } ]
How to prevent sticky hover effects for buttons on touch devices?
27 Oct, 2020 When we add a hover effect to an element in CSS, it sticks in touch devices. In this article, we will learn how to solve this issue. There are two possible approaches to solve this problem – 1. Without Using JavaScript: It can be solved by using media query in CSS. The condition ‘hover: hover’ refers to the devices that support hover. Using media query along this condition ensures that the below CSS is added only on such devices. Code snippet: @media(hover: hover) { #btn:hover { background-color: #ccf6c8; } } This adds a hover effect only on hover enabled devices, which means no hover effect is applied on touch devices. Here the background color of the button is changed on hover. Example: <!DOCTYPE html><html> <head> <style> #btn { background-color: #0dad78; margin: 3%; font-size: 30px; } @media (hover: hover) { #btn:hover { /*Add hover effect to button on hover enabled devices*/ background-color: #ccf6c8; } } </style></head> <body> <button type="button" id="btn"> Submit </button></body> </html> Output (on non touch screen): The button retains its original state on touch devices as there is no hover effect. 2. Using JavaScript: In this method, we will use JavaScript to determine if we are on a touch enabled device using the below function. The ontouchstart event returns true when the user touches an element. The navigator.maxTouchPoints returns maximum number of simultaneous touch points supported by the device. The navigator.msMaxTouchPoints also has the same function with vendor prefix “ms” to target browsers IE 10 and below. Thus the given function returns true if the device is touch enabled. (To read more about the function refer: https://www.geeksforgeeks.org/how-to-detect-touch-screen-device-using-javascript/) function is_touch_enabled() { return ('ontouchstart' in window) || (navigator.maxTouchPoints > 0) || (navigator.msMaxTouchPoints > 0); } If touch is not enabled, we add a class to our button. This class adds hover effect to the button in CSS as described in the below example: Example: <!DOCTYPE html><html> <head> <style> #btn { background-color: #0dad78; margin: 3%; font-size: 30px; } .btn2:hover { background-color: #ccf6c8 !important; /*Hover effect is added to btn2 class*/ } </style></head> <body onload="hover()"> <button type="button" id="btn">Submit</button> <script> function hover() { function is_touch_enabled() { // Check if touch is enabled return "ontouchstart" in window || navigator.maxTouchPoints > 0 || navigator.msMaxTouchPoints > 0; } if (!is_touch_enabled()) { // If touch is not enabled, add "btn2" class var b = document.getElementById("btn"); b.classList.add("btn2"); } } </script></body> </html> Output (on non touch devices): The button retains its original state on touch devices as there is no hover effect. CSS-Misc HTML-Misc JavaScript-Misc CSS HTML JavaScript Web Technologies HTML Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. How to set space between the flexbox ? Design a Tribute Page using HTML & CSS Form validation using jQuery How to Change the Position of Scrollbar using CSS ? Build a Survey Form using HTML and CSS REST API (Introduction) How to set input type date in dd-mm-yyyy format using HTML ? Design a Tribute Page using HTML & CSS How to set the default value for an HTML <select> element ? Hide or show elements in HTML using display property
[ { "code": null, "e": 53, "s": 25, "text": "\n27 Oct, 2020" }, { "code": null, "e": 187, "s": 53, "text": "When we add a hover effect to an element in CSS, it sticks in touch devices. In this article, we will learn how to solve this issue. " }, { "code": null, "e": 245, "s": 187, "text": "There are two possible approaches to solve this problem –" }, { "code": null, "e": 488, "s": 245, "text": "1. Without Using JavaScript: It can be solved by using media query in CSS. The condition ‘hover: hover’ refers to the devices that support hover. Using media query along this condition ensures that the below CSS is added only on such devices." }, { "code": null, "e": 502, "s": 488, "text": "Code snippet:" }, { "code": null, "e": 585, "s": 502, "text": "@media(hover: hover) {\n #btn:hover {\n background-color: #ccf6c8;\n }\n}" }, { "code": null, "e": 759, "s": 585, "text": "This adds a hover effect only on hover enabled devices, which means no hover effect is applied on touch devices. Here the background color of the button is changed on hover." }, { "code": null, "e": 768, "s": 759, "text": "Example:" }, { "code": "<!DOCTYPE html><html> <head> <style> #btn { background-color: #0dad78; margin: 3%; font-size: 30px; } @media (hover: hover) { #btn:hover { /*Add hover effect to button on hover enabled devices*/ background-color: #ccf6c8; } } </style></head> <body> <button type=\"button\" id=\"btn\"> Submit </button></body> </html>", "e": 1236, "s": 768, "text": null }, { "code": null, "e": 1266, "s": 1236, "text": "Output (on non touch screen):" }, { "code": null, "e": 1350, "s": 1266, "text": "The button retains its original state on touch devices as there is no hover effect." }, { "code": null, "e": 1779, "s": 1350, "text": "2. Using JavaScript: In this method, we will use JavaScript to determine if we are on a touch enabled device using the below function. The ontouchstart event returns true when the user touches an element. The navigator.maxTouchPoints returns maximum number of simultaneous touch points supported by the device. The navigator.msMaxTouchPoints also has the same function with vendor prefix “ms” to target browsers IE 10 and below." }, { "code": null, "e": 1971, "s": 1779, "text": "Thus the given function returns true if the device is touch enabled. (To read more about the function refer: https://www.geeksforgeeks.org/how-to-detect-touch-screen-device-using-javascript/)" }, { "code": null, "e": 2120, "s": 1971, "text": "function is_touch_enabled() {\n return ('ontouchstart' in window) ||\n (navigator.maxTouchPoints > 0) ||\n (navigator.msMaxTouchPoints > 0);\n}" }, { "code": null, "e": 2260, "s": 2120, "text": "If touch is not enabled, we add a class to our button. 
This class adds hover effect to the button in CSS as described in the below example:" }, { "code": null, "e": 2269, "s": 2260, "text": "Example:" }, { "code": "<!DOCTYPE html><html> <head> <style> #btn { background-color: #0dad78; margin: 3%; font-size: 30px; } .btn2:hover { background-color: #ccf6c8 !important; /*Hover effect is added to btn2 class*/ } </style></head> <body onload=\"hover()\"> <button type=\"button\" id=\"btn\">Submit</button> <script> function hover() { function is_touch_enabled() { // Check if touch is enabled return \"ontouchstart\" in window || navigator.maxTouchPoints > 0 || navigator.msMaxTouchPoints > 0; } if (!is_touch_enabled()) { // If touch is not enabled, add \"btn2\" class var b = document.getElementById(\"btn\"); b.classList.add(\"btn2\"); } } </script></body> </html>", "e": 3186, "s": 2269, "text": null }, { "code": null, "e": 3217, "s": 3186, "text": "Output (on non touch devices):" }, { "code": null, "e": 3301, "s": 3217, "text": "The button retains its original state on touch devices as there is no hover effect." }, { "code": null, "e": 3310, "s": 3301, "text": "CSS-Misc" }, { "code": null, "e": 3320, "s": 3310, "text": "HTML-Misc" }, { "code": null, "e": 3336, "s": 3320, "text": "JavaScript-Misc" }, { "code": null, "e": 3340, "s": 3336, "text": "CSS" }, { "code": null, "e": 3345, "s": 3340, "text": "HTML" }, { "code": null, "e": 3356, "s": 3345, "text": "JavaScript" }, { "code": null, "e": 3373, "s": 3356, "text": "Web Technologies" }, { "code": null, "e": 3378, "s": 3373, "text": "HTML" }, { "code": null, "e": 3476, "s": 3378, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 3515, "s": 3476, "text": "How to set space between the flexbox ?" }, { "code": null, "e": 3554, "s": 3515, "text": "Design a Tribute Page using HTML & CSS" }, { "code": null, "e": 3583, "s": 3554, "text": "Form validation using jQuery" }, { "code": null, "e": 3635, "s": 3583, "text": "How to Change the Position of Scrollbar using CSS ?" }, { "code": null, "e": 3674, "s": 3635, "text": "Build a Survey Form using HTML and CSS" }, { "code": null, "e": 3698, "s": 3674, "text": "REST API (Introduction)" }, { "code": null, "e": 3759, "s": 3698, "text": "How to set input type date in dd-mm-yyyy format using HTML ?" }, { "code": null, "e": 3798, "s": 3759, "text": "Design a Tribute Page using HTML & CSS" }, { "code": null, "e": 3858, "s": 3798, "text": "How to set the default value for an HTML <select> element ?" } ]
Generating dynamic URLs in Flask
15 Oct, 2021 Prerequisites: Basics of Flask When creating an application, it’s quite cumbersome to hard-code each URL. A better way to resolve this problem is through building Dynamic URLs. Let us briefly understand the meaning of a few common terms first. Dynamic Routing: It is the process of getting dynamic data (variable names) in the URL and then using it. Variable Rules: Variable sections can be added to a URL by marking sections with <variable_name>. Let us first create a basic Flask application: Python3 #importing the flask Modulefrom flask import Flask # Flask constructor takes the name of# current module (__name__) as argumentapp = Flask(__name__) @app.route('/')# ‘/’ URL is bound with the home() function.def home(): return 'You are at home page.' @app.route('/allow')def allow(): return 'You have been allowed to enter.' @app.route('/disallow')def disallow(): return 'You have not been allowed to enter.' # main driver functionif __name__ == '__main__': # run() method of Flask class runs the application # on the local development server. app.run() Output: Now consider a situation where you have many users and you want to route each user to a specific page with his or her name or ID in the URL as well as the template. If you try to do this manually, you have to type the complete URL for every user, which is very tedious and next to impossible. However, this can be solved in Flask with something called dynamic routing. We shall now look at a better approach using Variable Rules. We will add a <variable name> with each route. Optionally, we can also define the converter with each variable name <converter: variable name>. By default, the converter is string. Example: @app.route('allow/<variable name>') OR @app.route('allow/<converter: variable name>') Some converters are: string (the default, accepts any text without a slash), int, float, path (like string but also accepts slashes) and uuid. Let’s allow a user having an ID of less than 25 to visit the page. The modified code with dynamic URL binding is given below. The function uses the <variable name> passed in the route() decorator as an argument. @app.route('/allow/<int:Number>') def allow(Number): if Number < 25: return f'You have been allowed to enter because\ your number is {str(Number)}' else: return f'You are not allowed' Python3 #importing the flask Modulefrom flask import Flask # Flask constructor takes the name of# current module (__name__) as argumentapp = Flask(__name__) @app.route('/')# ‘/’ URL is bound with the home() function.def home(): return 'You are at home page.' # Use of <converter: variable name> in the# route() decorator@app.route('/allow/<int:Number>')def allow(Number): if Number < 25: return f'You have been allowed to enter because your number is {str(Number)}' else: return f'You are not allowed' # main driver functionif __name__ == '__main__': # run() method of Flask class runs the application # on the local development server. app.run() Output:
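Since the topic is dynamic URLs, it is also worth noting that Flask can generate such URLs for you with url_for(), so '/allow/10' never has to be hard-coded anywhere. The following is an illustrative sketch (not part of the original article) that reuses the allow() view defined above:
from flask import Flask, url_for

app = Flask(__name__)

@app.route('/allow/<int:Number>')
def allow(Number):
    return f'Allowed: {Number}'

with app.test_request_context():
    # Builds the URL from the endpoint name and the variable part
    print(url_for('allow', Number=10))   # -> /allow/10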
[ { "code": null, "e": 28, "s": 0, "text": "\n15 Oct, 2021" }, { "code": null, "e": 59, "s": 28, "text": "Prerequisites: Basics of Flask" }, { "code": null, "e": 273, "s": 59, "text": "When creating an application, it’s quite cumbersome to hard-code each URL. A better way to resolve this problem is through building Dynamic URLs. Let us briefly understand the meaning of a few common terms first." }, { "code": null, "e": 378, "s": 273, "text": "Dynamic Routing: It is the process of getting dynamic data(variable names) in the URL and then using it." }, { "code": null, "e": 476, "s": 378, "text": "Variable Rules: Variable sections can be added to a URL by marking sections with <variable_name>." }, { "code": null, "e": 523, "s": 476, "text": "Let us first create a basic flask application:" }, { "code": null, "e": 531, "s": 523, "text": "Python3" }, { "code": "#importing the flask Modulefrom flask import Flask # Flask constructor takes the name of# current module (__name__) as argumentapp = Flask(__name__) @app.route('/')# ‘/’ URL is bound with hello_world() function.def home(): return 'You are at home page.' @app.route('/allow')def allow(): return 'You have been allowed to enter.' @app.route('/disallow')def disallow(): return 'You have not been allowed to enter.' # main driver functionif __name__ == '__main__': # run() method of Flask class runs the application # on the local development server. app.run()", "e": 1106, "s": 531, "text": null }, { "code": null, "e": 1114, "s": 1106, "text": "Output:" }, { "code": null, "e": 1512, "s": 1114, "text": "Now consider a situation where you have many users and you want to route the user to a specific page with his or her name or ID in the URL as well as the template. If you try to do this manually then you have to manually type the complete URL for every user. Doing this can be very tedious and next to impossible. However, this can be solved using the flask with something called dynamic routing. " }, { "code": null, "e": 1754, "s": 1512, "text": "We shall now look at a better approach using Variable Rules. We will add a <variable name> with each route. Optionally, we can also define the converter with each variable name <converter: variable name>. By default, the converter is String." }, { "code": null, "e": 1763, "s": 1754, "text": "Example:" }, { "code": null, "e": 1851, "s": 1763, "text": "@app.route('allow/<variable name>')\n\nOR\n\[email protected]('allow/<converter: variable name>')" }, { "code": null, "e": 1872, "s": 1851, "text": "Some converters are:" }, { "code": null, "e": 2089, "s": 1872, "text": "Let’s allow the user with having an ID of less than 25 to visit the page. The modified code with dynamic URL binding is given below. The function uses the <variable name> passed in route() decorator as an argument. " }, { "code": null, "e": 2305, "s": 2089, "text": "@app.route('/allow/<int:Number>')\ndef allow(Number):\n if Number < 25:\n return f'You have been allowed to enter because\\\n your number is {str(Number)}'\n else:\n return f'You are not allowed'" }, { "code": null, "e": 2313, "s": 2305, "text": "Python3" }, { "code": "#importing the flask Modulefrom flask import Flask # Flask constructor takes the name of# current module (__name__) as argumentapp = Flask(__name__) @app.route('/')# ‘/’ URL is bound with hello_world() function.def home(): return 'You are at home page.' 
# Use of <converter: variable name> in the# route() [email protected]('/allow/<int:Number>')def allow(Number): if Number < 25: return f'You have been allowed to enter because your number is {str(Number)}' else: return f'You are not allowed' # main driver functionif __name__ == '__main__': # run() method of Flask class runs the application # on the local development server. app.run()", "e": 2987, "s": 2313, "text": null }, { "code": null, "e": 2995, "s": 2987, "text": "Output:" }, { "code": null, "e": 3008, "s": 2995, "text": "simmytarika5" }, { "code": null, "e": 3027, "s": 3008, "text": "surindertarika1234" }, { "code": null, "e": 3040, "s": 3027, "text": "Python Flask" }, { "code": null, "e": 3047, "s": 3040, "text": "Python" } ]
Typedef in Dart
08 Oct, 2020 Typedef in Dart is used to create a user-defined identity (alias) for a function, and we can use that identity in place of the function in the program code. When we use typedef we can define the parameters of the function. Syntax: typedef function_name ( parameters ); With the help of typedef, we can also declare a variable of the alias type and assign a function to it. Syntax: alias_name variable_name = function_name; After assigning the variable, we can invoke it as: Syntax: variable_name( parameters ); This way, a single variable can be used to call different functions: Example 1: Using typedef in Dart. Dart // Dart program to show the usage of typedef // Defining alias nametypedef GeeksForGeeks(int a, int b); // Defining Geek1 functionGeek1(int a, int b) { print("This is Geek1"); print("$a and $b are lucky geek numbers !!");} // Defining Geek2 functionGeek2(int a, int b) { print("This is Geek2"); print("$a + $b is equal to ${a + b}.");} // Main Functionvoid main(){ // Using alias name to define // number with Geek1 function GeeksForGeeks number = Geek1; // Calling number number(1,2); // Redefining number // with Geek2 function number = Geek2; // Calling number number(3,4);} Output: This is Geek1 1 and 2 are lucky geek numbers !! This is Geek2 3 + 4 is equal to 7. Note: Apart from this, a typedef can also act as a parameter of a function. Example 2: Using typedef as a parameter of a function. Dart // Dart program to show the usage of typedef // Defining alias nametypedef GeeksForGeeks(int a, int b); // Defining Geek1 functionGeek1(int a, int b) { print("This is Geek1"); print("$a and $b are lucky geek numbers !!");} // Defining a function with a typedef variablenumber(int a, int b, GeeksForGeeks geek) { print("Welcome to GeeksForGeeks"); geek(a, b);} // Main Functionvoid main(){ // Calling number function number(21,23, Geek1);} Output: Welcome to GeeksForGeeks This is Geek1 21 and 23 are lucky geek numbers !!
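As a side note not taken from the article above: newer Dart releases also support the more explicit function-type alias syntax, which works better with strong typing. The names used below are illustrative assumptions:
// Modern alias: a function taking two ints and returning an int
typedef IntOperation = int Function(int a, int b);

int add(int a, int b) => a + b;

void main() {
  IntOperation op = add;   // assign a matching function to the alias type
  print(op(2, 3));         // prints 5
}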
[ { "code": null, "e": 28, "s": 0, "text": "\n08 Oct, 2020" }, { "code": null, "e": 251, "s": 28, "text": "Typedef in Dart is used to create a user-defined identity (alias) for a function, and we can use that identity in place of the function in the program code. When we use typedef we can define the parameters of the function." }, { "code": null, "e": 298, "s": 251, "text": "Syntax: typedef function_name ( parameters );\n" }, { "code": null, "e": 369, "s": 298, "text": "With the help of typedef, we can also assign a variable to a function." }, { "code": null, "e": 416, "s": 369, "text": "Syntax:typedef variable_name = function_name;\n" }, { "code": null, "e": 485, "s": 416, "text": "After assigning the variable, if we have to invoke it then we go as:" }, { "code": null, "e": 523, "s": 485, "text": "Syntax: variable_name( parameters );\n" }, { "code": null, "e": 591, "s": 523, "text": "By this we will be able to use a single function in different ways:" }, { "code": null, "e": 626, "s": 591, "text": "Example 1: Using typedef in Dart." }, { "code": null, "e": 631, "s": 626, "text": "Dart" }, { "code": "// Dart program to show the usage of typedef // Defining alias nametypedef GeeksForGeeks(int a, int b); // Defining Geek1 functionGeek1(int a, int b) { print(\"This is Geek1\"); print(\"$a and $b are lucky geek numbers !!\");} // Defining Geek2 functionGeek2(int a, int b) { print(\"This is Geek2\"); print(\"$a + $b is equal to ${a + b}.\");} // Main Functionvoid main(){ // Using alias name to define // number with Geek1 function GeeksForGeeks number = Geek1; // Calling number number(1,2); // Redefining number // with Geek2 function number = Geek2; // Calling number number(3,4);}", "e": 1228, "s": 631, "text": null }, { "code": null, "e": 1236, "s": 1228, "text": "Output:" }, { "code": null, "e": 1320, "s": 1236, "text": "This is Geek1\n1 and 2 are lucky geek numbers !!\nThis is Geek2\n3 + 4 is equal to 7.\n" }, { "code": null, "e": 1393, "s": 1320, "text": "Note: Apart from this, typedef can also act as parameters of a function." }, { "code": null, "e": 1448, "s": 1393, "text": "Example 2: Using typedef as a parameter of a function." }, { "code": null, "e": 1453, "s": 1448, "text": "Dart" }, { "code": "// Dart program to show the usage of typedef // Defining alias nametypedef GeeksForGeeks(int a, int b); // Defining Geek1 functionGeek1(int a, int b) { print(\"This is Geek1\"); print(\"$a and $b are lucky geek numbers !!\");} // Defining a function with a typedef variablenumber(int a, int b, GeeksForGeeks geek) { print(\"Welcome to GeeksForGeeks\"); geek(a, b);} // Main Functionvoid main(){ // Calling number function number(21,23, Geek1);}", "e": 1898, "s": 1453, "text": null }, { "code": null, "e": 1906, "s": 1898, "text": "Output:" }, { "code": null, "e": 1982, "s": 1906, "text": "Welcome to GeeksForGeeks\nThis is Geek1\n21 and 23 are lucky geek numbers !!\n" }, { "code": null, "e": 1994, "s": 1982, "text": "addamkoroma" }, { "code": null, "e": 2008, "s": 1994, "text": "Dart Function" }, { "code": null, "e": 2013, "s": 2008, "text": "Dart" }, { "code": null, "e": 2111, "s": 2013, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 2150, "s": 2111, "text": "Flutter - Custom Bottom Navigation Bar" }, { "code": null, "e": 2176, "s": 2150, "text": "Flutter - Checkbox Widget" }, { "code": null, "e": 2202, "s": 2176, "text": "ListView Class in Flutter" }, { "code": null, "e": 2216, "s": 2202, "text": "Dart Tutorial" }, { "code": null, "e": 2239, "s": 2216, "text": "Flutter - Stack Widget" }, { "code": null, "e": 2260, "s": 2239, "text": "Flutter - Search Bar" }, { "code": null, "e": 2287, "s": 2260, "text": "Container class in Flutter" }, { "code": null, "e": 2305, "s": 2287, "text": "Operators in Dart" }, { "code": null, "e": 2323, "s": 2305, "text": "Flutter - Dialogs" } ]
Explain the different variants of for loop in TypeScript
16 Dec, 2021 Loops are used to execute a specific block of code a specific number of times. There are 2 types of loops in TypeScript: definite loops (for) and indefinite loops (while, do..while). In TypeScript, we basically have 3 kinds of for loops: for, for..of and for..in. for loop: The for loop is used to execute a particular block of code for a specific number of times, which is defined by a specific conditional statement. This is the traditional for loop that we’ve been studying everywhere and using all the time. Syntax: for(Initialization; Condition; Updation) { ... } The Initialization statement executes before the loop starts, and it initializes the iteration variable to a particular value, which is used to terminate the loop after certain iterations. The Condition statement contains the termination condition which defines when the loop should stop. It is very important because an incorrect condition might cause the loop to continue forever! The Updation statement is executed at the end of each iteration. It updates the value of the iteration variable. It is done so that the iteration variable reaches a value that falsifies the iteration condition, thereby terminating the loop. Example: Here, let i = 10; is the Initialization statement that initializes an iteration variable i with an initial value of 10. The i < 15; is the condition that is checked before each iteration and i++; is the updation statement that increments the iteration variable i by +1 after each iteration. HTML <script> for(let i = 10; i < 15; i++) { // This statement is repeated console.log(i); }</script> Output: Output of the code. Example 2: In this example, we’ll create an array of some elements, and we will access each element of the array using the for loop. HTML <script> let animals = ['cat', 'dog', 'lion', 'wolf', 'deer'] for(let i = 0; i < animals.length; i++) { // Prints i-th element of the array console.log(animals[i]); }</script> Output: Elements of the array are printed. for...of loop: This is another type of for loop in TypeScript that works in a different way. It operates on iterable objects like arrays, lists, etc. It iterates over each element of the iterable and assigns it to the iteration variable. So, there’s no need to write the traditional for loop if we just want to access the elements of an iterable; we can instead use the for..of loop. Syntax: for (initializer of collection) { ... } Example: Here, the elements of the array animals are accessed one by one and are stored in the iteration variable i. So, i stores the element of the array at a particular iteration, so we don’t need to access array elements manually. Also, the loop executes the same number of times as the length of the array, so we didn’t even have to worry about the termination condition. HTML <script> let animals = ['cat', 'dog', 'lion', 'wolf', 'deer'] for(let i of animals) { // Print each element of the array console.log(i); }</script> Output: Elements of the array are printed. The for..of loop is useful when we just want the elements of an iterable (array, list, tuple), and we don’t have to worry about the index of the elements inside the array.
for...in loop: The for..in loop works in a similar way as the for..of loop, but instead of assigning the array elements to the iteration variable, the elements’ indices are assigned, so we get the element index at each iteration, which can be further used to access individual array elements or perform operations that require the index instead of the array elements (for example, swapping two elements). Syntax: for (initializer in collection) { ... } Example: Here, i is the iteration variable that gets assigned the indices of the array one by one, starting from the first index, which is 0, and going all the way up to the last index of the array, which in this case is 4. The condition and updation are managed automatically by JavaScript. HTML <script> let animals = ['cat', 'dog', 'lion', 'wolf', 'deer'] for(let i in animals) { // Print each element of the array console.log(i, animals[i]); }</script> Output: As the code prints i as well as animals[i], we get the index and the value at that index in each iteration. Index and value at index get printed. Conclusion: All the 3 for loops have their own advantages. Almost everything can be done using the traditional and universal for loop, but if we are working with iterables (arrays, lists, etc.), then using the specialized versions of the for loop, i.e., for..of and for..in, can result in cleaner and less complex code.
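If both the index and the value are needed at the same time, a common alternative (not covered in the article above, shown here as an illustrative sketch) is to combine for..of with Array.prototype.entries():
const animals: string[] = ['cat', 'dog', 'lion', 'wolf', 'deer'];

for (const [index, value] of animals.entries()) {
  // index is a number, value is a string
  console.log(index, value);
}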
[ { "code": null, "e": 28, "s": 0, "text": "\n16 Dec, 2021" }, { "code": null, "e": 219, "s": 28, "text": "Loops are used to execute a specific block of code a specific number of times. There are 2 types of loops in TypeScript which are Definite Loop (for), and Indefinite Loops (while, do..while)" }, { "code": null, "e": 274, "s": 219, "text": "In TypeScript, we have basically 3 kinds of for loops." }, { "code": null, "e": 278, "s": 274, "text": "for" }, { "code": null, "e": 288, "s": 278, "text": "for .. of" }, { "code": null, "e": 298, "s": 288, "text": "for .. in" }, { "code": null, "e": 544, "s": 298, "text": "for loop: The for loop is used to execute a particular block of code for a specific number of times, which is defined by a specific conditional statement. This is a traditional for loop that we’ve been studying everywhere and using all the time." }, { "code": null, "e": 552, "s": 544, "text": "Syntax:" }, { "code": null, "e": 605, "s": 552, "text": "for(Initialization; Condition; Updation) {\n ...\n}" }, { "code": null, "e": 786, "s": 605, "text": "Initialization statement executes before loop starts, and it initializes the iteration variable to a particular value, which is used to terminate the loop after certain iterations." }, { "code": null, "e": 976, "s": 786, "text": "Condition statement contains the termination condition which defines when the loop should stop. It is very important because an incorrect condition might cause the loop to continue forever!" }, { "code": null, "e": 1212, "s": 976, "text": "Updation statement is executed at the end of each iteration. It updates the value of the iteration variable. It is done so that the iteration variable reaches a value that falsifies the iteration condition thereby terminating the loop." }, { "code": null, "e": 1511, "s": 1212, "text": "Example: Here, let i =10; is the Initialization statement that initializes an iteration variable i with an initial value of 10. The i < 15; is the condition that is checked before each iteration and i++; is the updation statement that increments the iteration variable i by +1 after each iteration." }, { "code": null, "e": 1516, "s": 1511, "text": "HTML" }, { "code": "<script> for(let i = 10; i < 15; i++) { // This statement is repeated console.log(i); }</script>", "e": 1635, "s": 1516, "text": null }, { "code": null, "e": 1643, "s": 1635, "text": "Output:" }, { "code": null, "e": 1663, "s": 1643, "text": "Output of the code." }, { "code": null, "e": 1796, "s": 1663, "text": "Example 2: In this example, we’ll create an array of some elements, and we will access each element of the array using the for loop." }, { "code": null, "e": 1801, "s": 1796, "text": "HTML" }, { "code": "<script> let animals = ['cat', 'dog', 'lion', 'wolf', 'deer'] for(let i = 0; i < animals.length; i++) { // Prints i-th element of the array console.log(animals[i]); }</script>", "e": 2008, "s": 1801, "text": null }, { "code": null, "e": 2016, "s": 2008, "text": "Output:" }, { "code": null, "e": 2051, "s": 2016, "text": "Elements of the array are printed." }, { "code": null, "e": 2427, "s": 2051, "text": "for...of loop: This is another type of for loop in Typescript that works in a different way. It operates on an iterable objects like arrays, lists, etc. It iterates each element of the iterable and assigns it to the iteration variable. So, there’s no need to write the traditional way of for loop if we want to access elements of an iterable. We can instead use for..of loop." 
}, { "code": null, "e": 2435, "s": 2427, "text": "Syntax:" }, { "code": null, "e": 2479, "s": 2435, "text": "for (initializer of collection) {\n ...\n}" }, { "code": null, "e": 2855, "s": 2479, "text": "Example: Here, the elements of the array animals are accessed one by one and are stored in the iteration variable i. So, i store the element of the array at a particular iteration, so we don’t need to access array elements manually. Also, the loop executes the same number of times as the length of the array, so we didn’t even have to worry about the termination condition. " }, { "code": null, "e": 2860, "s": 2855, "text": "HTML" }, { "code": "<script> let animals = ['cat', 'dog', 'lion', 'wolf', 'deer'] for(let i of animals) { // Print each element of the array console.log(i); }</script>", "e": 3039, "s": 2860, "text": null }, { "code": null, "e": 3047, "s": 3039, "text": "Output:" }, { "code": null, "e": 3082, "s": 3047, "text": "Elements of the array are printed." }, { "code": null, "e": 3254, "s": 3082, "text": "The for..of loop is useful when we just want the elements of an iterable (array, list, tuple), and we don’t have to worry about the index of the elements inside the array." }, { "code": null, "e": 3666, "s": 3254, "text": "for...in loop: The for..in loop works in a similar way as that of for..of loop, but instead of assigning the array elements to the iteration variable, the elements’ indices are assigned, so we get the element index at each iteration which can be further used to access individual array elements or perform necessary operations that require index instead of array elements (for example swapping of two elements)." }, { "code": null, "e": 3674, "s": 3666, "text": "Syntax:" }, { "code": null, "e": 3718, "s": 3674, "text": "for (initializer in collection) {\n ...\n}" }, { "code": null, "e": 4013, "s": 3718, "text": "Example: Here, i is the iteration variable, that gets assigned to of the indices of the array one by one, starting from the first index that is 0 and going all the way up to the last index of the array which in this case is 4. The condition and updation are managed automatically by JavaScript." }, { "code": null, "e": 4018, "s": 4013, "text": "HTML" }, { "code": "<script> let animals = ['cat', 'dog', 'lion', 'wolf', 'deer'] for(let i in animals) { // Print each element of the array console.log(i, animals[i]); }</script>", "e": 4207, "s": 4018, "text": null }, { "code": null, "e": 4325, "s": 4207, "text": "Output: As in the code, I’ve printed i as well as animals[i], we get index and value at that index in each iteration." }, { "code": null, "e": 4364, "s": 4325, "text": "Index and value at index gets printed." }, { "code": null, "e": 4678, "s": 4364, "text": "Conclusion: All the 3 for loops have their own advantages. Almost everything can be done using the traditional and universal for loop, but if we are working with iterables (arrays, lists, etc.), then using the specialized versions of for loop, i.e, for..of and for..in can result in cleaner and less complex code." }, { "code": null, "e": 4699, "s": 4678, "text": "JavaScript-Questions" }, { "code": null, "e": 4706, "s": 4699, "text": "Picked" }, { "code": null, "e": 4717, "s": 4706, "text": "TypeScript" }, { "code": null, "e": 4728, "s": 4717, "text": "JavaScript" }, { "code": null, "e": 4745, "s": 4728, "text": "Web Technologies" }, { "code": null, "e": 4843, "s": 4745, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 4904, "s": 4843, "text": "Difference between var, let and const keywords in JavaScript" }, { "code": null, "e": 4976, "s": 4904, "text": "Differences between Functional Components and Class Components in React" }, { "code": null, "e": 5016, "s": 4976, "text": "Remove elements from a JavaScript Array" }, { "code": null, "e": 5057, "s": 5016, "text": "Difference Between PUT and PATCH Request" }, { "code": null, "e": 5109, "s": 5057, "text": "How to append HTML code to a div using JavaScript ?" }, { "code": null, "e": 5142, "s": 5109, "text": "Installation of Node.js on Linux" }, { "code": null, "e": 5204, "s": 5142, "text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills" }, { "code": null, "e": 5265, "s": 5204, "text": "Difference between var, let and const keywords in JavaScript" }, { "code": null, "e": 5315, "s": 5265, "text": "How to insert spaces/tabs in text using HTML/CSS?" } ]
Convert 2D array to object using map or reduce in JavaScript
Let’s say we have a two-dimensional array that contains some data about the age of some people. The data is given by the following 2D array const data = [ ['Rahul',23], ['Vikky',27], ['Sanjay',29], ['Jay',19], ['Dinesh',21], ['Sandeep',45], ['Umesh',32], ['Rohit',28], ]; We are required to write a function that takes in this 2-D array of data and returns an object whose keys are the first element of each subarray (the string) and whose values are the second element. We will use the Array.prototype.reduce() method to construct this object, and the code for doing this will be − const data = [ ['Rahul',23], ['Vikky',27], ['Sanjay',29], ['Jay',19], ['Dinesh',21], ['Sandeep',45], ['Umesh',32], ['Rohit',28], ]; const constructObject = arr => { return arr.reduce((acc, val) => { const [key, value] = val; acc[key] = value; return acc; }, {}); }; console.log(constructObject(data)); The output in the console will be − { Rahul: 23, Vikky: 27, Sanjay: 29, Jay: 19, Dinesh: 21, Sandeep: 45, Umesh: 32, Rohit: 28 }
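The title also mentions map; a shorter route, added here as an illustrative sketch rather than part of the original text, is to rely on Object.fromEntries(), optionally combined with Array.prototype.map() when the pairs first need reshaping:
// Small sample in the same shape as the data above
const data = [['Rahul', 23], ['Vikky', 27], ['Sanjay', 29]];

// Each subarray is already a [key, value] pair, so this is enough:
const obj1 = Object.fromEntries(data);

// With map(), e.g. to normalise the keys first:
const obj2 = Object.fromEntries(data.map(([name, age]) => [name.toLowerCase(), age]));

console.log(obj1); // { Rahul: 23, Vikky: 27, Sanjay: 29 }
console.log(obj2); // { rahul: 23, vikky: 27, sanjay: 29 }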
[ { "code": null, "e": 1284, "s": 1187, "text": "Let’s say, we have a two-dimensional array that contains some data about the age of some\npeople." }, { "code": null, "e": 1328, "s": 1284, "text": "The data is given by the following 2D array" }, { "code": null, "e": 1484, "s": 1328, "text": "const data = [\n ['Rahul',23],\n ['Vikky',27],\n ['Sanjay',29],\n ['Jay',19],\n ['Dinesh',21],\n ['Sandeep',45],\n ['Umesh',32],\n ['Rohit',28],\n];" }, { "code": null, "e": 1676, "s": 1484, "text": "We are required to write a function that takes in this 2-D array of data and returns an object with\nkey as the first element of each subarray i.e., the string and value as the second element." }, { "code": null, "e": 1788, "s": 1676, "text": "We will use the Array.prototype.reduce() method to construct this object, and the code for doing this will be −" }, { "code": null, "e": 2138, "s": 1788, "text": "const data = [\n ['Rahul',23],\n ['Vikky',27],\n ['Sanjay',29],\n ['Jay',19],\n ['Dinesh',21],\n ['Sandeep',45],\n ['Umesh',32],\n ['Rohit',28],\n];\nconst constructObject = arr => {\n return arr.reduce((acc, val) => {\n const [key, value] = val;\n acc[key] = value;\n return acc;\n }, {});\n};\nconsole.log(constructObject(data));" }, { "code": null, "e": 2174, "s": 2138, "text": "The output in the console will be −" }, { "code": null, "e": 2291, "s": 2174, "text": "{\n Rahul: 23,\n Vikky: 27,\n Sanjay: 29,\n Jay: 19,\n Dinesh: 21,\n Sandeep: 45,\n Umesh: 32,\n Rohit: 28\n}" } ]
Handling Deadlocks
24 May, 2022 Deadlock is a situation where a process or a set of processes is blocked, waiting for some other resource that is held by some other waiting process. It is an undesirable state of the system. The following are the four conditions that must hold simultaneously for a deadlock to occur. Mutual Exclusion – A resource can be used by only one process at a time. If another process requests that resource then the requesting process must be delayed until the resource has been released. Hold and wait – Some processes must be holding some resources in non-shareable mode and at the same time must be waiting to acquire some more resources, which are currently held by other processes in non-shareable mode. No pre-emption – Resources granted to a process can be released back to the system only as a result of a voluntary action of that process, after the process has completed its task. Circular wait – Deadlocked processes are involved in a circular chain such that each process holds one or more resources being requested by the next process in the chain. Methods of handling deadlocks: There are three approaches to deal with deadlocks. 1. Deadlock Prevention 2. Deadlock Avoidance 3. Deadlock Detection These are explained below. 1. Deadlock Prevention: The strategy of deadlock prevention is to design the system in such a way that the possibility of deadlock is excluded. Indirect methods prevent the occurrence of one of the three necessary conditions of deadlock, i.e., mutual exclusion, no pre-emption and hold and wait. Direct methods prevent the occurrence of circular wait. Prevention techniques – Mutual exclusion – is supported by the OS. Hold and Wait – this condition can be prevented by requiring that a process requests all its required resources at one time, blocking the process until all of its requests can be granted simultaneously. But this prevention does not yield good results because: a long waiting time is required, allocated resources may be used inefficiently, and a process may not know all the required resources in advance. No pre-emption – techniques for no pre-emption are: If a process that is holding some resources requests another resource that cannot be immediately allocated to it, then all resources currently being held are released and, if necessary, requested again together with the additional resource. If a process requests a resource that is currently held by another process, the OS may pre-empt the second process and require it to release its resources. This works only if both processes do not have the same priority.
Circular wait – One way to ensure that this condition never holds is to impose a total ordering of all resource types and to require that each process requests resources in an increasing order of enumeration, i.e., if a process has been allocated resources of type R, then it may subsequently request only those resources of types following R in the ordering. 2. Deadlock Avoidance: This approach allows the three necessary conditions of deadlock but makes judicious choices to assure that the deadlock point is never reached. It allows more concurrency than deadlock prevention. A decision is made dynamically whether the current resource allocation request will, if granted, potentially lead to deadlock. It requires the knowledge of future process requests. Two techniques to avoid deadlock: process initiation denial and resource allocation denial. Advantages of deadlock avoidance techniques: it is not necessary to pre-empt and roll back processes, and it is less restrictive than deadlock prevention. Disadvantages: future resource requirements must be known in advance, processes can be blocked for long periods, and there must be a fixed number of resources to allocate. 3. Deadlock Detection: Deadlock detection is used by employing an algorithm that tracks the circular waiting and kills one or more processes so that the deadlock is removed. The system state is examined periodically to determine if a set of processes is deadlocked. A deadlock is resolved by aborting and restarting a process, relinquishing all the resources that the process held. This technique does not limit resource access or restrict process actions. Requested resources are granted to processes whenever possible. It never delays process initiation and facilitates online handling. The disadvantage is the inherent pre-emption losses.
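To make the circular-wait prevention concrete, here is a small illustrative sketch (not part of the original article) in Python: both workers always acquire the two locks in the same global order, so a cycle of waiting threads can never form. The lock names and the ordering are assumptions chosen for the example:
import threading

# Global ordering: lock_a before lock_b; every thread must acquire in this order
lock_a = threading.Lock()
lock_b = threading.Lock()

def worker(name):
    with lock_a:          # always take the lower-ordered resource first
        with lock_b:      # then the higher-ordered one
            print(f"{name} holds both resources")

t1 = threading.Thread(target=worker, args=("P1",))
t2 = threading.Thread(target=worker, args=("P2",))
t1.start(); t2.start()
t1.join(); t2.join()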
[ { "code": null, "e": 52, "s": 24, "text": "\n24 May, 2022" }, { "code": null, "e": 337, "s": 52, "text": "Deadlock is a situation where a process or a set of processes is blocked, waiting for some other resource that is held by some other waiting process. It is an undesirable state of the system. The following are the four conditions that must hold simultaneously for a deadlock to occur." }, { "code": null, "e": 1105, "s": 337, "text": "Mutual Exclusion – A resource can be used by only one process at a time. If another process requests for that resource then the requesting process must be delayed until the resource has been released.Hold and wait – Some processes must be holding some resources in non shareable mode and at the same time must be waiting to acquire some more resources, which are currently held by other processes in non-shareable mode.No pre-emption – Resources granted to a process can be released back to the system only as a result of voluntary action of that process, after the process has completed its task.Circular wait – Deadlocked processes are involved in a circular chain such that each process holds one or more resources being requested by the next process in the chain." }, { "code": null, "e": 1306, "s": 1105, "text": "Mutual Exclusion – A resource can be used by only one process at a time. If another process requests for that resource then the requesting process must be delayed until the resource has been released." }, { "code": null, "e": 1526, "s": 1306, "text": "Hold and wait – Some processes must be holding some resources in non shareable mode and at the same time must be waiting to acquire some more resources, which are currently held by other processes in non-shareable mode." }, { "code": null, "e": 1705, "s": 1526, "text": "No pre-emption – Resources granted to a process can be released back to the system only as a result of voluntary action of that process, after the process has completed its task." }, { "code": null, "e": 1876, "s": 1705, "text": "Circular wait – Deadlocked processes are involved in a circular chain such that each process holds one or more resources being requested by the next process in the chain." }, { "code": null, "e": 1959, "s": 1876, "text": "Methods of handling deadlocks : There are three approaches to deal with deadlocks." }, { "code": null, "e": 2027, "s": 1959, "text": "1. Deadlock Prevention\n2. Deadlock avoidance\n3. Deadlock detection " }, { "code": null, "e": 2068, "s": 2027, "text": "These are explained as following below. " }, { "code": null, "e": 2753, "s": 2068, "text": "1. Deadlock Prevention : The strategy of deadlock prevention is to design the system in such a way that the possibility of deadlock is excluded. Indirect method prevent the occurrence of one of three necessary condition of deadlock i.e., mutual exclusion, no pre-emption and hold and wait. Direct method prevent the occurrence of circular wait. Prevention techniques – Mutual exclusion – is supported by the OS. Hold and Wait – condition can be prevented by requiring that a process requests all its required resources at one time and blocking the process until all of its requests can be granted at a same time simultaneously. 
But this prevention does not yield good result because :" }, { "code": null, "e": 2780, "s": 2753, "text": "long waiting time required" }, { "code": null, "e": 2819, "s": 2780, "text": "in efficient use of allocated resource" }, { "code": null, "e": 2880, "s": 2819, "text": "A process may not know all the required resources in advance" }, { "code": null, "e": 2933, "s": 2880, "text": "No pre-emption – techniques for ‘no pre-emption are’" }, { "code": null, "e": 3175, "s": 2933, "text": "If a process that is holding some resource, requests another resource that can not be immediately allocated to it, the all resource currently being held are released and if necessary, request them again together with the additional resource." }, { "code": null, "e": 3396, "s": 3175, "text": "If a process requests a resource that is currently held by another process, the OS may pre-empt the second process and require it to release its resources. This works only if both the processes do not have same priority." }, { "code": null, "e": 3749, "s": 3396, "text": "Circular wait One way to ensure that this condition never hold is to impose a total ordering of all resource types and to require that each process requests resource in an increasing order of enumeration, i.e., if a process has been allocated resources of type R, then it may subsequently request only those resources of types following R in ordering. " }, { "code": null, "e": 4181, "s": 3749, "text": "2. Deadlock Avoidance : This approach allows the three necessary conditions of deadlock but makes judicious choices to assure that deadlock point is never reached. It allows more concurrency than avoidance detection A decision is made dynamically whether the current resource allocation request will, if granted, potentially lead to deadlock. It requires the knowledge of future process requests. Two techniques to avoid deadlock :" }, { "code": null, "e": 4233, "s": 4181, "text": "Process initiation denialResource allocation denial" }, { "code": null, "e": 4259, "s": 4233, "text": "Process initiation denial" }, { "code": null, "e": 4286, "s": 4259, "text": "Resource allocation denial" }, { "code": null, "e": 4332, "s": 4286, "text": "Advantages of deadlock avoidance techniques :" }, { "code": null, "e": 4381, "s": 4332, "text": "Not necessary to pre-empt and rollback processes" }, { "code": null, "e": 4423, "s": 4381, "text": "Less restrictive than deadlock prevention" }, { "code": null, "e": 4439, "s": 4423, "text": "Disadvantages :" }, { "code": null, "e": 4493, "s": 4439, "text": "Future resource requirements must be known in advance" }, { "code": null, "e": 4535, "s": 4493, "text": "Processes can be blocked for long periods" }, { "code": null, "e": 4583, "s": 4535, "text": "Exists fixed number of resources for allocation" }, { "code": null, "e": 4964, "s": 4583, "text": "3. Deadlock Detection : Deadlock detection is used by employing an algorithm that tracks the circular waiting and killing one or more processes so that deadlock is removed. The system state is examined periodically to determine if a set of processes is deadlocked. A deadlock is resolved by aborting and restarting a process, relinquishing all the resources that the process held." }, { "code": null, "e": 5039, "s": 4964, "text": "This technique does not limit resources access or restrict process action." }, { "code": null, "e": 5103, "s": 5039, "text": "Requested resources are granted to processes whenever possible." 
}, { "code": null, "e": 5175, "s": 5103, "text": "It never delays the process initiation and facilitates online handling." }, { "code": null, "e": 5228, "s": 5175, "text": "The disadvantage is the inherent pre-emption losses." }, { "code": null, "e": 5242, "s": 5228, "text": "samgamedev007" }, { "code": null, "e": 5255, "s": 5242, "text": "simmytarika5" }, { "code": null, "e": 5263, "s": 5255, "text": "GATE CS" }, { "code": null, "e": 5281, "s": 5263, "text": "Operating Systems" }, { "code": null, "e": 5299, "s": 5281, "text": "Operating Systems" } ]
D3.js | d3.mean() function
26 Jun, 2019 The d3.mean() function in D3.js is used to return the mean or average of the given array’s elements. If the array is empty then it returns undefined. Syntax: d3.mean(Array) Parameters: This function accepts a single parameter, Array, which is an array of elements whose mean is to be calculated. The elements should be numbers, not strings; non-numeric values are skipped when computing the mean, and if no numeric values are present the result is undefined. Return Value: It returns the mean or average of the given array’s elements. The below programs illustrate the d3.mean() function in D3.js. Example 1: <html> <head> <title> Getting mean of the elements of given array </title></head> <body> <script src='https://d3js.org/d3.v4.min.js'> </script> <script> // initialising the array of elements var Array1 = [10, 20, 30, 40, 50, 60]; var Array2 = [1, 2]; var Array3 = [0, 1.5, 6.8]; var Array4 = [.8, .08, .008]; // Calling to d3.mean() function A = d3.mean(Array1); B = d3.mean(Array2); C = d3.mean(Array3); D = d3.mean(Array4); // Getting mean of the given array's element document.write(A + "<br>"); document.write(B + "<br>"); document.write(C + "<br>"); document.write(D + "<br>"); </script></body> </html> Output: 35 1.5 2.766666666666667 0.296 Example 2: <html> <head> <title> Getting mean of the elements of given array </title></head> <body> <script src='https://d3js.org/d3.v4.min.js'> </script> <script> // initialising the array of elements var Array1 = []; var Array2 = ["a", "b", "c"]; var Array3 = [1, "B", "C"]; var Array4 = ["Geek", "Geeks", 2, 3, "GeeksforGeeks"]; // Calling to d3.mean() function A = d3.mean(Array1); B = d3.mean(Array2); C = d3.mean(Array3); D = d3.mean(Array4); // Getting mean of the given array's element document.write(A + "<br>"); document.write(B + "<br>"); document.write(C + "<br>"); document.write(D + "<br>"); </script></body> </html> Output: undefined undefined 1 2.5 Note: In the above output, if the parameter is empty or contains only strings then it returns undefined, and if the parameter contains strings along with some integer values then the mean of the integer values is returned. Ref: https://devdocs.io/d3~4/d3-array#mean
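d3.mean() can also take an accessor function as a second argument, which is handy when the data is an array of objects rather than plain numbers. This is an illustrative sketch that assumes d3 is loaded as in the examples above; the data and field names are assumptions, not from the article:
const people = [
  { name: 'A', age: 23 },
  { name: 'B', age: 27 },
  { name: 'C', age: 34 }
];

// Pull the numeric field out of each object before averaging
const averageAge = d3.mean(people, d => d.age);
console.log(averageAge); // 28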
[ { "code": null, "e": 28, "s": 0, "text": "\n26 Jun, 2019" }, { "code": null, "e": 178, "s": 28, "text": "The d3.mean() function in D3.js is used to return the mean or average of the given array’s elements. If the array is empty then it returns undefined." }, { "code": null, "e": 186, "s": 178, "text": "Syntax:" }, { "code": null, "e": 201, "s": 186, "text": "d3.mean(Array)" }, { "code": null, "e": 393, "s": 201, "text": "Parameters: This function accepts a parameters Array which is an array of elements whose mean are to be calculated. Here elements should be integers not string otherwise it returns undefined." }, { "code": null, "e": 468, "s": 393, "text": "Return Value: It returns the mean or average of the given array’s element." }, { "code": null, "e": 527, "s": 468, "text": "Below programs illustrate the d3.mean() function in D3.js." }, { "code": null, "e": 538, "s": 527, "text": "Example 1:" }, { "code": "<html> <head> <title> Getting mean of the elements of given array </title></head> <body> <script src='https://d3js.org/d3.v4.min.js'> </script> <script> // initialising the array of elements var Array1 = [10, 20, 30, 40, 50, 60]; var Array2 = [1, 2]; var Array3 = [0, 1.5, 6.8]; var Array4 = [.8, .08, .008]; // Calling to d3.mean() function A = d3.mean(Array1); B = d3.mean(Array2); C = d3.mean(Array3); D = d3.mean(Array4); // Getting mean of the given array's element document.write(A + \"<br>\"); document.write(B + \"<br>\"); document.write(C + \"<br>\"); document.write(D + \"<br>\"); </script></body> </html>", "e": 1284, "s": 538, "text": null }, { "code": null, "e": 1292, "s": 1284, "text": "Output:" }, { "code": null, "e": 1323, "s": 1292, "text": "35\n1.5\n2.766666666666667\n0.296" }, { "code": null, "e": 1334, "s": 1323, "text": "Example 2:" }, { "code": "<html> <head> <title> Getting mean of the elements of given array </title></head> <body> <script src='https://d3js.org/d3.v4.min.js'> </script> <script> // initialising the array of elements var Array1 = []; var Array2 = [\"a\", \"b\", \"c\"]; var Array3 = [1, \"B\", \"C\"]; var Array4 = [\"Geek\", \"Geeks\", 2, 3, \"GeeksforGeeks\"]; // Calling to d3.mean() function A = d3.mean(Array1); B = d3.mean(Array2); C = d3.mean(Array3); D = d3.mean(Array4); // Getting mean of the given array's element document.write(A + \"<br>\"); document.write(B + \"<br>\"); document.write(C + \"<br>\"); document.write(D + \"<br>\"); </script></body> </html>", "e": 2093, "s": 1334, "text": null }, { "code": null, "e": 2101, "s": 2093, "text": "Output:" }, { "code": null, "e": 2127, "s": 2101, "text": "undefined\nundefined\n1\n2.5" }, { "code": null, "e": 2328, "s": 2127, "text": "Note: In the above output, if the parameter is empty or strings then it returns undefined and if the parameter is string including some integers value then the mean of the integer’s value is returned." }, { "code": null, "e": 2371, "s": 2328, "text": "Ref: https://devdocs.io/d3~4/d3-array#mean" }, { "code": null, "e": 2377, "s": 2371, "text": "D3.js" }, { "code": null, "e": 2388, "s": 2377, "text": "JavaScript" }, { "code": null, "e": 2405, "s": 2388, "text": "Web Technologies" }, { "code": null, "e": 2503, "s": 2405, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 2564, "s": 2503, "text": "Difference between var, let and const keywords in JavaScript" }, { "code": null, "e": 2636, "s": 2564, "text": "Differences between Functional Components and Class Components in React" }, { "code": null, "e": 2676, "s": 2636, "text": "Remove elements from a JavaScript Array" }, { "code": null, "e": 2717, "s": 2676, "text": "Difference Between PUT and PATCH Request" }, { "code": null, "e": 2769, "s": 2717, "text": "How to append HTML code to a div using JavaScript ?" }, { "code": null, "e": 2802, "s": 2769, "text": "Installation of Node.js on Linux" }, { "code": null, "e": 2864, "s": 2802, "text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills" }, { "code": null, "e": 2925, "s": 2864, "text": "Difference between var, let and const keywords in JavaScript" }, { "code": null, "e": 2975, "s": 2925, "text": "How to insert spaces/tabs in text using HTML/CSS?" } ]
How to overwrite or remove PowerShell alias?
You can overwrite a PowerShell alias by redefining it. For example, if an alias Edit has been created for Notepad.exe and you want to overwrite it with another program, say wordpad.exe, then use the below command. We will overwrite the Edit alias with Wordpad.exe using the Set-Alias command. When you close the PowerShell session, it will remove newly created aliases and modified aliases. Set-Alias edit "C:\Program Files\Windows NT\Accessories\wordpad.exe" You cannot overwrite pre-defined aliases. It will throw an exception. For example, when you try to modify the dir alias, which points to Get-ChildItem, the error output will be as below. To remove newly created aliases without closing the PowerShell console, you need to use the Remove-Alias command. Remove-Alias -AliasName Edit You can also use the Del command to remove the alias. Del alias:Edit Again, you cannot remove the aliases which are permanent. Only newly created aliases can be removed, and they are automatically deleted when the session ends.
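As an illustrative sketch (not part of the text above): you can verify what an alias currently points to with Get-Alias before and after overwriting it, and on Windows PowerShell 5.1, where Remove-Alias is not available (it arrived in PowerShell 6), a session alias can instead be deleted through the Alias: drive. The alias name used here is an assumption:
# Check the current definition (returns nothing if the alias does not exist)
Get-Alias -Name edit -ErrorAction SilentlyContinue

# Redefine it for this session
Set-Alias -Name edit -Value notepad.exe

# Remove it via the Alias: provider (works on older PowerShell versions too)
Remove-Item -Path Alias:\edit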
[ { "code": null, "e": 1400, "s": 1187, "text": "You can overwrite the Powershell alias by redefining it. For example, if the alias is created Edit for Notepad.exe and if you want to overwrite it with another program, say wordpad.exe then use the below command." }, { "code": null, "e": 1584, "s": 1400, "text": "We will overwrite the Edit alias cmdlet with Wordpad.exe using the Set-Alias command. When you close the PowerShell session, it will remove newly created aliases and modified\naliases." }, { "code": null, "e": 1653, "s": 1584, "text": "Set-Alias edit \"C:\\Program Files\\Windows NT\\Accessories\\wordpad.exe\"" }, { "code": null, "e": 1723, "s": 1653, "text": "You cannot overwrite pre-defined aliases. It will throw an exception." }, { "code": null, "e": 1829, "s": 1723, "text": "For example, when you try to modify dir alias which points to Get-Content, error output will be as below." }, { "code": null, "e": 1938, "s": 1829, "text": "To remove newly created aliases without closing the PowerShell console, you need to use RemoveAlias command." }, { "code": null, "e": 1967, "s": 1938, "text": "Remove-Alias -AliasName Edit" }, { "code": null, "e": 2017, "s": 1967, "text": "You can also use Del command to remove the alias." }, { "code": null, "e": 2032, "s": 2017, "text": "Del alias:Edit" }, { "code": null, "e": 2180, "s": 2032, "text": "Again you cannot remove the Aliases which are permanent. Only newly created aliases can be removed and automatically deleted when the session ends." } ]
Map.has( ) In JavaScript
21 Jan, 2022 What is a Map in JavaScript ? Map is a data structure in JavaScript which allows storing of [key, value] pairs where any value can be used either as a key or as a value. The keys and values in the map collection may be of any type, and if a value is added to the map collection using a key which already exists in the collection, then the new value replaces the old value. The iteration of elements in a map object is done in the insertion order, and a “for...of” loop returns an array of all [key, value] pairs for each iteration. Differences between Objects and Maps in JavaScript: Both of these data structures are similar in many ways, such as both store values using keys, allow retrieval of those values using keys, allow deletion of keys and verify whether a key holds any value or not. However, there are quite significant differences between Objects and Maps in JavaScript which make the usage of maps a better and more preferable option in many cases. The keys used in maps can be any type of values such as functions, objects etc., whereas the keys in objects are limited to symbols and strings. The size of a map can be known easily by using the size property, but while dealing with objects, the size has to be determined manually. A Map should be preferred in cases where the requirement involves frequent addition and removal of [key, value] pairs because a map is an iterative data type and can be directly iterated, whereas iterating an object requires obtaining its keys in a specific manner. Map.has() Method in JavaScript: The Map.has() method in JavaScript is used to check whether an element with a specified key exists in a map or not. It returns a boolean value indicating the presence or absence of an element with a specified key in a map. The Map.has() method takes the key of the element to be searched as an argument and returns a boolean value. It returns true if the element exists in the map, else it returns false if the element doesn’t exist. Applications: The Map.has() method can be used to check whether an element with a specified key exists in a map or not. Syntax: mapObj.has(key) Parameters Used: key: It is the key of the element of the map which has to be searched. Return Value: The Map.has() method returns a boolean value. It returns true if the element exists in the map, else it returns false if the element doesn’t exist. Examples of the above function are provided below. Examples: Input : var myMap = new Map(); myMap.set(0, 'geeksforgeeks'); console.log(myMap.has(0)); Output: true Explanation: In this example, a map object “myMap” has been created with a single [key, value] pair and the Map.has() method is used to check whether an element with the key ‘0’ exists in the map or not. Input : var myMap = new Map(); myMap.set(0, 'geeksforgeeks'); myMap.set(1, 'is an online portal'); myMap.set(2, 'for geeks'); console.log(myMap.has(0)); console.log(myMap.has(3)); Output: true false Explanation: In this example, a map object “myMap” has been created with three [key, value] pairs and the Map.has() method is used to check whether elements with the keys ‘0’ and ‘3’ exist in the map or not. Code 1: <script> // creating a map object var myMap = new Map(); // Adding [key, value] pair to the mapmyMap.set(0, 'geeksforgeeks'); // displaying whether an element with the key '0' exists in the map or not// using Map.has() methodconsole.log(myMap.has(0)); </script> OUTPUT : true Code 2: <script> // creating a map object var myMap = new Map(); // Adding [key, value] pair to the mapmyMap.set(0, 'geeksforgeeks');myMap.set(1, 'is an online portal');myMap.set(2, 'for geeks'); // displaying whether elements with the keys '0' and '3' exist in// the map or not using Map.has() methodconsole.log(myMap.has(0));console.log(myMap.has(3)); </script> OUTPUT : true false Supported Browsers: Chrome 38 and above Edge 12 and above Firefox 13 and above Internet Explorer 11 and above Opera 25 and above Safari 8 and above Reference: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map/has
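A typical practical use of Map.has(), added here as an illustrative sketch and not part of the article above, is counting occurrences, where the check decides between inserting a new key and updating an existing one:
const words = ['geek', 'for', 'geek'];
const counts = new Map();

for (const w of words) {
  // has() tells us whether the key was seen before
  counts.set(w, counts.has(w) ? counts.get(w) + 1 : 1);
}

console.log(counts.get('geek')); // 2
console.log(counts.has('gfg'));  // false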
Code 1: <script> // creating a map object var myMap = new Map(); // Adding [key, value] pair to the mapmyMap.set(0, 'geeksforgeeks'); // displaying whether an element with the key '0' exists in the map or not// using Map.has() methodconsole.log(myMap.has(0)); < /script> OUTPUT : true Code 2: <script> // creating a map object var myMap = new Map(); // Adding [key, value] pair to the mapmyMap.set(0, 'geeksforgeeks');myMap.set(1, 'is an online portal');myMap.set(2, 'for geeks'); // displaying whether an element with the key '0' and '3' exists in// the map or not using Map.has() methodconsole.log(myMap.has(0));console.log(myMap.has(3)); < /script> OUTPUT : true false Supported Browsers: Chrome 38 and above Edge 12 and above Firefox 13 and above Internet Explorer 11 and above Opera 25 and above Safari 8 and above Reference :https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map/has ysachin2314 javascript-functions javascript-map JavaScript Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. Difference between var, let and const keywords in JavaScript Differences between Functional Components and Class Components in React Node.js | fs.writeFileSync() Method Remove elements from a JavaScript Array How do you run JavaScript script through the Terminal? Form validation using HTML and JavaScript JavaScript | Promises JavaScript | console.log() with Examples How to append HTML code to a div using JavaScript ? How to Open URL in New Tab using JavaScript ?
[ { "code": null, "e": 28, "s": 0, "text": "\n21 Jan, 2022" }, { "code": null, "e": 58, "s": 28, "text": "What is a Map in JavaScript ?" }, { "code": null, "e": 193, "s": 58, "text": "Map is a data structure in JavaScript which allows storing of [key, value] pairs where any value can be either used as a key or value." }, { "code": null, "e": 395, "s": 193, "text": "The keys and values in the map collection may be of any type and if a value is added to the map collection using a key which already exists in the collection, then the new value replaces the old value." }, { "code": null, "e": 551, "s": 395, "text": "The iteration of elements in a map object is done in the insertion order and a “for...” loop returns an array of all [key, value] pairs for each iteration." }, { "code": null, "e": 972, "s": 551, "text": "Differences between Objects and Maps in JavaScriptBoth of these data structures are similar in many ways such as both store values using keys, allow retrieval of those values using keys, deletion of keys and verify whether a key holds any value or not. However, there are quite significant differences between Objects and Maps in JavaScript which make the usage of maps a better and more preferable option in many cases." }, { "code": null, "e": 1115, "s": 972, "text": "The keys used in maps can be any type of values such as functions, objects etc whereas the keys in objects are limited to symbols and strings." }, { "code": null, "e": 1252, "s": 1115, "text": "The size of a map can be known easily by using the size property but while dealing with objects, the size has to be determined manually." }, { "code": null, "e": 1517, "s": 1252, "text": "A Map should be preferred in cases where the requirement involves frequent addition and removal of [key, value] pairs because a map is an iterative data type and can be directly iterated whereas iterating an object requires obtaining its keys in a specific manner." }, { "code": null, "e": 1979, "s": 1517, "text": "Map.has() Method in JavaScriptThe Map.has() method in JavaScript is used to check whether an element with a specified key exists in a map or not. It returns a boolean value indicating the presence or absence of an element with a specified key in a map.The Map.has() method takes the key of the element to be searched as an argument and returns a boolean value. It returns true if the element exists in the map else it returns false if the element doesn’t exist." }, { "code": null, "e": 1993, "s": 1979, "text": "Applications:" }, { "code": null, "e": 2097, "s": 1993, "text": "Map.has() Method can be used to check whether an element with a specified key exists in a map or not. ." }, { "code": null, "e": 2105, "s": 2097, "text": "Syntax:" }, { "code": null, "e": 2121, "s": 2105, "text": "mapObj.has(key)" }, { "code": null, "e": 2138, "s": 2121, "text": "Parameters Used:" }, { "code": null, "e": 2209, "s": 2138, "text": "key: It is the key of the element of the map which has to be searched." }, { "code": null, "e": 2223, "s": 2209, "text": "Return Value:" }, { "code": null, "e": 2370, "s": 2223, "text": "The Map.has() method returns a boolean value. It returns true if the element exists in the map else it returns false if the element doesn’t exist." }, { "code": null, "e": 2421, "s": 2370, "text": "Examples of the above function are provided below." 
}, { "code": null, "e": 2431, "s": 2421, "text": "Examples:" }, { "code": null, "e": 2559, "s": 2431, "text": "Input : var myMap = new Map();\n myMap.set(0, 'geeksforgeeks');\n console.log(myMap.has(0));\n \nOutput: true\n" }, { "code": null, "e": 2763, "s": 2559, "text": "Explanation: In this example, a map object “myMap” has been created with a single [key, value] pair and the Map.has() method is used to check whether an element with the key ‘0’ exists in the map or not." }, { "code": null, "e": 3011, "s": 2763, "text": "Input : var myMap = new Map();\n myMap.set(0, 'geeksforgeeks');\n myMap.set(1, 'is an online portal');\n myMap.set(2, 'for geeks');\n console.log(myMap.has(0));\n console.log(myMap.has(3));\n\nOutput: true\n false" }, { "code": null, "e": 3221, "s": 3011, "text": "Explanation: In this example, a map object “myMap” has been created with three [key, value] pairs and the Map.has() method is used to check whether an element with the key ‘0’ and ‘3’ exists in the map or not." }, { "code": null, "e": 3229, "s": 3221, "text": "Code 1:" }, { "code": "<script> // creating a map object var myMap = new Map(); // Adding [key, value] pair to the mapmyMap.set(0, 'geeksforgeeks'); // displaying whether an element with the key '0' exists in the map or not// using Map.has() methodconsole.log(myMap.has(0)); < /script>", "e": 3501, "s": 3229, "text": null }, { "code": null, "e": 3510, "s": 3501, "text": "OUTPUT :" }, { "code": null, "e": 3516, "s": 3510, "text": "true\n" }, { "code": null, "e": 3524, "s": 3516, "text": "Code 2:" }, { "code": "<script> // creating a map object var myMap = new Map(); // Adding [key, value] pair to the mapmyMap.set(0, 'geeksforgeeks');myMap.set(1, 'is an online portal');myMap.set(2, 'for geeks'); // displaying whether an element with the key '0' and '3' exists in// the map or not using Map.has() methodconsole.log(myMap.has(0));console.log(myMap.has(3)); < /script>", "e": 3892, "s": 3524, "text": null }, { "code": null, "e": 3901, "s": 3892, "text": "OUTPUT :" }, { "code": null, "e": 3913, "s": 3901, "text": "true\nfalse\n" }, { "code": null, "e": 3933, "s": 3913, "text": "Supported Browsers:" }, { "code": null, "e": 3953, "s": 3933, "text": "Chrome 38 and above" }, { "code": null, "e": 3971, "s": 3953, "text": "Edge 12 and above" }, { "code": null, "e": 3992, "s": 3971, "text": "Firefox 13 and above" }, { "code": null, "e": 4023, "s": 3992, "text": "Internet Explorer 11 and above" }, { "code": null, "e": 4042, "s": 4023, "text": "Opera 25 and above" }, { "code": null, "e": 4061, "s": 4042, "text": "Safari 8 and above" }, { "code": null, "e": 4161, "s": 4061, "text": "Reference :https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map/has" }, { "code": null, "e": 4173, "s": 4161, "text": "ysachin2314" }, { "code": null, "e": 4194, "s": 4173, "text": "javascript-functions" }, { "code": null, "e": 4209, "s": 4194, "text": "javascript-map" }, { "code": null, "e": 4220, "s": 4209, "text": "JavaScript" }, { "code": null, "e": 4318, "s": 4220, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 4379, "s": 4318, "text": "Difference between var, let and const keywords in JavaScript" }, { "code": null, "e": 4451, "s": 4379, "text": "Differences between Functional Components and Class Components in React" }, { "code": null, "e": 4487, "s": 4451, "text": "Node.js | fs.writeFileSync() Method" }, { "code": null, "e": 4527, "s": 4487, "text": "Remove elements from a JavaScript Array" }, { "code": null, "e": 4582, "s": 4527, "text": "How do you run JavaScript script through the Terminal?" }, { "code": null, "e": 4624, "s": 4582, "text": "Form validation using HTML and JavaScript" }, { "code": null, "e": 4646, "s": 4624, "text": "JavaScript | Promises" }, { "code": null, "e": 4687, "s": 4646, "text": "JavaScript | console.log() with Examples" }, { "code": null, "e": 4739, "s": 4687, "text": "How to append HTML code to a div using JavaScript ?" } ]
Toggle bits in the given range
05 Jul, 2022 Given a non-negative number n and two values l and r. The problem is to toggle the bits in the range l to r in the binary representation of n, i.e, to toggle bits from the rightmost lth bit to the rightmost rth bit. A toggle operation flips a bit 0 to 1 and a bit 1 to 0.Constraint: 1 <= l <= r <= number of bits in the binary representation of n.Examples: Input: n = 17, l = 2, r = 3Output: 23Explaination: (17)10 = (10001)2 (23)10 = (10111)2The bits in the range 2 to 3 in the binary representation of 17 are toggled. Input: n = 50, l = 2, r = 5Output: 44 Approach: Following are the steps: Calculate num as = ((1 << r) – 1) ^ ((1 << (l-1)) – 1) or as ((1 <<r)-l). This will produce a number num having r number of bits and bits in the range l to r are the only set bits.Now, perform n = n ^ num. This will toggle the bits in the range l to r in n. Calculate num as = ((1 << r) – 1) ^ ((1 << (l-1)) – 1) or as ((1 <<r)-l). This will produce a number num having r number of bits and bits in the range l to r are the only set bits. Now, perform n = n ^ num. This will toggle the bits in the range l to r in n. C++ Java Python3 C# PHP Javascript // C++ implementation to toggle bits in// the given range#include <bits/stdc++.h>using namespace std; // function to toggle bits in the given rangeunsigned int toggleBitsFromLToR(unsigned int n, unsigned int l, unsigned int r){ // calculating a number 'num' having 'r' // number of bits and bits in the range l // to r are the only set bits int num = ((1 << r) - 1) ^ ((1 << (l - 1)) - 1); // toggle bits in the range l to r in 'n' // and return the number // Besides this, we can calculate num as: num=(1<<r)-l . return (n ^ num);} // Driver program to test aboveint main(){ unsigned int n = 50; unsigned int l = 2, r = 5; cout << toggleBitsFromLToR(n, l, r); return 0;} // Java implementation to toggle bits in// the given rangeimport java.io.*; class GFG { // Function to toggle bits in the given range static int toggleBitsFromLToR(int n, int l, int r) { // calculating a number 'num' having 'r' // number of bits and bits in the range l // to r are the only set bits int num = ((1 << r) - 1) ^ ((1 << (l - 1)) - 1); // toggle bits in the range l to r in 'n' // and return the number // Besides this, we can calculate num as: // num=(1<<r)-l . return (n ^ num); } // driver program public static void main(String[] args) { int n = 50; int l = 2, r = 5; System.out.println(toggleBitsFromLToR(n, l, r)); }} // Contributed by Pramod Kumar # Python implementation# to toggle bits in# the given range # function to toggle bits# in the given range def toggleBitsFromLToR(n, l, r): # calculating a number # 'num' having 'r' # number of bits and # bits in the range l # to r are the only set bits num = ((1 << r) - 1) ^ ((1 << (l - 1)) - 1) # toggle bits in the # range l to r in 'n' # Besides this, we can calculate num as: num=(1<<r)-l . # and return the number return (n ^ num) # Driver code n = 50l = 2r = 5 print(toggleBitsFromLToR(n, l, r)) # This code is contributed# by Anant Agarwal. // C# implementation to toggle bits// in the given rangeusing System; namespace Toggle {public class GFG { // Function to toggle bits in the given range static int toggleBitsFromLToR(int n, int l, int r) { // calculating a number 'num' having 'r' // number of bits and bits in the range l // to r are the only set bits int num = ((1 << r) - 1) ^ ((1 << (l - 1)) - 1); // toggle bits in the range l to r in 'n' // Besides this, we can calculate num as: // num=(1<<r)-l . 
// and return the number return (n ^ num); } // Driver Code public static void Main() { int n = 50; int l = 2, r = 5; Console.Write(toggleBitsFromLToR(n, l, r)); }}} // This code is contributed by Sam007. <?php// PHP implementation// to toggle bits in// the given range // function to toggle bits// in the given rangefunction toggleBitsFromLToR($n, $l, $r){ // calculating a number // 'num' having 'r' // number of bits and // bits in the range l // to r are the only // set bits $num = ((1 << $r) - 1) ^ ((1 << ($l - 1)) - 1); // toggle bits in the // range l to r in 'n' //Besides this, we can calculate num as: $num=(1<<$r)-$l . // and return the number return ($n ^ $num);} // Driver Code $n = 50; $l = 2; $r = 5; echo toggleBitsFromLToR($n, $l, $r); // This code is contributed by anuj_67?> <script> // Javascript implementation to toggle bits in// the given range // function to toggle bits in the given rangefunction toggleBitsFromLToR(n, l, r){ // calculating a number 'num' having 'r' // number of bits and bits in the range l // to r are the only set bits var num = ((1 << r) - 1) ^ ((1 << (l - 1)) - 1); // toggle bits in the range l to r in 'n' // and return the number//Besides this, we can calculate num as: num=(1<<r)-l . return (n ^ num);} // Driver program to test abovevar n = 50;var l = 2, r = 5;document.write( toggleBitsFromLToR(n, l, r)); </script> 44 Time Complexity: O(1)Auxiliary Space: O(1) Approach 2: Iterate over the given range from L to R and check if the ith bit is set or not. if the ith bit is set then make it unset otherwise make it set bit. C++ Java C# Javascript // C++ implementation to toggle bits in// the given range#include <bits/stdc++.h> using namespace std; // Function to toggle bits in the given rangeint toggleBitsFromLToR(int N, int L, int R){ int res = N; for (int i = L; i <= R; i++) { // Set bit if ((N & (1 << (i - 1))) != 0) { // XOR will set 0 to already set // bits(a^a=0) res = res ^ (1 << (i - 1)); } // unset bits else { // OR will set'0'bits to 1 res = res | (1 << (i - 1)); } } return res;} // Driver codeint main(){ int n = 50; int l = 2, r = 5; cout << toggleBitsFromLToR(n, l, r); return 0;} // This code is contributed by phasing17 // Java implementation to toggle bits in// the given rangeimport java.io.*; class GFG { // Function to toggle bits in the given range static int toggleBitsFromLToR(int N, int L, int R) { int res = N; for (int i = L; i <= R; i++) { // Set bit if ((N & (1 << (i - 1))) != 0) { // XOR will set 0 to already set // bits(a^a=0) res = res ^ (1 << (i - 1)); } // unset bits else { // OR will set'0'bits to 1 res = res | (1 << (i - 1)); } } return res; } // Driver method public static void main(String[] args) { int n = 50; int l = 2, r = 5; System.out.println(toggleBitsFromLToR(n, l, r)); }} // Contributed by Ocean Bhardwaj // C# implementation to toggle bits in// the given rangeusing System; class GFG { // Function to toggle bits in the given range static int toggleBitsFromLToR(int N, int L, int R) { int res = N; for (int i = L; i <= R; i++) { // Set bit if ((N & (1 << (i - 1))) != 0) { // XOR will set 0 to already set // bits(a^a=0) res = res ^ (1 << (i - 1)); } // unset bits else { // OR will set'0'bits to 1 res = res | (1 << (i - 1)); } } return res; } // Driver Code public static void Main(string[] args) { int n = 50; int l = 2, r = 5; // Function call Console.WriteLine(toggleBitsFromLToR(n, l, r)); }} // This code is Contributed by phasing17 // JavaScript implementation to toggle bits in// the given range // Function to toggle bits in the 
given rangefunction toggleBitsFromLToR(N, L, R){ let res = N; for (let i = L; i <= R; i++) { // Set bit if ((N & (1 << (i - 1))) != 0) { // XOR will set 0 to already set // bits(a^a=0) res = res ^ (1 << (i - 1)); } // unset bits else { // OR will set'0'bits to 1 res = res | (1 << (i - 1)); } } return res;} // Driver codelet n = 50;let l = 2, r = 5;console.log(toggleBitsFromLToR(n, l, r)); // This code is contributed by phasing17 44 Time Complexity: O(R – L + 1) Auxiliary Space: O(1) This article is contributed by Ayush Jauhari. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. vt_m rajat_kumar noob2000 harendrakumar123 phasing17 Bit Magic Bit Magic Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. Set, Clear and Toggle a given bit of a number in C Find two numbers from their sum and XOR Find a number X such that (X XOR A) is minimum and the count of set bits in X and B are equal Equal Sum and XOR of three Numbers Builtin functions of GCC compiler Calculate XOR from 1 to n. Calculate square of a number without using *, / and pow() Reverse actual bits of the given number Find XOR of two number without using XOR operator Unique element in an array where all elements occur k times except one
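A quick sanity check worth adding here (this snippet is not from the original article): the mask ((1 << r) - 1) ^ ((1 << (l - 1)) - 1) used in Approach 1 sets exactly bits l-1 through r-1, so it can equivalently be written as (1 << r) - (1 << (l - 1)). The short program below verifies both the identity and the sample answer for n = 50, l = 2, r = 5.

// Added verification sketch: confirm the range mask and the toggle result
#include <cassert>
#include <iostream>
using namespace std;

int main()
{
    unsigned int n = 50, l = 2, r = 5;
    // bits l-1 .. r-1 set
    unsigned int mask = ((1u << r) - 1) ^ ((1u << (l - 1)) - 1);
    // equivalent closed form of the same mask
    assert(mask == (1u << r) - (1u << (l - 1)));
    cout << (n ^ mask); // prints 44, matching the sample output above
    return 0;
}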
[ { "code": null, "e": 54, "s": 26, "text": "\n05 Jul, 2022" }, { "code": null, "e": 412, "s": 54, "text": "Given a non-negative number n and two values l and r. The problem is to toggle the bits in the range l to r in the binary representation of n, i.e, to toggle bits from the rightmost lth bit to the rightmost rth bit. A toggle operation flips a bit 0 to 1 and a bit 1 to 0.Constraint: 1 <= l <= r <= number of bits in the binary representation of n.Examples: " }, { "code": null, "e": 597, "s": 412, "text": "Input: n = 17, l = 2, r = 3Output: 23Explaination: (17)10 = (10001)2 (23)10 = (10111)2The bits in the range 2 to 3 in the binary representation of 17 are toggled." }, { "code": null, "e": 635, "s": 597, "text": "Input: n = 50, l = 2, r = 5Output: 44" }, { "code": null, "e": 670, "s": 635, "text": "Approach: Following are the steps:" }, { "code": null, "e": 928, "s": 670, "text": "Calculate num as = ((1 << r) – 1) ^ ((1 << (l-1)) – 1) or as ((1 <<r)-l). This will produce a number num having r number of bits and bits in the range l to r are the only set bits.Now, perform n = n ^ num. This will toggle the bits in the range l to r in n." }, { "code": null, "e": 1109, "s": 928, "text": "Calculate num as = ((1 << r) – 1) ^ ((1 << (l-1)) – 1) or as ((1 <<r)-l). This will produce a number num having r number of bits and bits in the range l to r are the only set bits." }, { "code": null, "e": 1187, "s": 1109, "text": "Now, perform n = n ^ num. This will toggle the bits in the range l to r in n." }, { "code": null, "e": 1191, "s": 1187, "text": "C++" }, { "code": null, "e": 1196, "s": 1191, "text": "Java" }, { "code": null, "e": 1204, "s": 1196, "text": "Python3" }, { "code": null, "e": 1207, "s": 1204, "text": "C#" }, { "code": null, "e": 1211, "s": 1207, "text": "PHP" }, { "code": null, "e": 1222, "s": 1211, "text": "Javascript" }, { "code": "// C++ implementation to toggle bits in// the given range#include <bits/stdc++.h>using namespace std; // function to toggle bits in the given rangeunsigned int toggleBitsFromLToR(unsigned int n, unsigned int l, unsigned int r){ // calculating a number 'num' having 'r' // number of bits and bits in the range l // to r are the only set bits int num = ((1 << r) - 1) ^ ((1 << (l - 1)) - 1); // toggle bits in the range l to r in 'n' // and return the number // Besides this, we can calculate num as: num=(1<<r)-l . return (n ^ num);} // Driver program to test aboveint main(){ unsigned int n = 50; unsigned int l = 2, r = 5; cout << toggleBitsFromLToR(n, l, r); return 0;}", "e": 1993, "s": 1222, "text": null }, { "code": "// Java implementation to toggle bits in// the given rangeimport java.io.*; class GFG { // Function to toggle bits in the given range static int toggleBitsFromLToR(int n, int l, int r) { // calculating a number 'num' having 'r' // number of bits and bits in the range l // to r are the only set bits int num = ((1 << r) - 1) ^ ((1 << (l - 1)) - 1); // toggle bits in the range l to r in 'n' // and return the number // Besides this, we can calculate num as: // num=(1<<r)-l . 
return (n ^ num); } // driver program public static void main(String[] args) { int n = 50; int l = 2, r = 5; System.out.println(toggleBitsFromLToR(n, l, r)); }} // Contributed by Pramod Kumar", "e": 2771, "s": 1993, "text": null }, { "code": "# Python implementation# to toggle bits in# the given range # function to toggle bits# in the given range def toggleBitsFromLToR(n, l, r): # calculating a number # 'num' having 'r' # number of bits and # bits in the range l # to r are the only set bits num = ((1 << r) - 1) ^ ((1 << (l - 1)) - 1) # toggle bits in the # range l to r in 'n' # Besides this, we can calculate num as: num=(1<<r)-l . # and return the number return (n ^ num) # Driver code n = 50l = 2r = 5 print(toggleBitsFromLToR(n, l, r)) # This code is contributed# by Anant Agarwal.", "e": 3358, "s": 2771, "text": null }, { "code": "// C# implementation to toggle bits// in the given rangeusing System; namespace Toggle {public class GFG { // Function to toggle bits in the given range static int toggleBitsFromLToR(int n, int l, int r) { // calculating a number 'num' having 'r' // number of bits and bits in the range l // to r are the only set bits int num = ((1 << r) - 1) ^ ((1 << (l - 1)) - 1); // toggle bits in the range l to r in 'n' // Besides this, we can calculate num as: // num=(1<<r)-l . // and return the number return (n ^ num); } // Driver Code public static void Main() { int n = 50; int l = 2, r = 5; Console.Write(toggleBitsFromLToR(n, l, r)); }}} // This code is contributed by Sam007.", "e": 4144, "s": 3358, "text": null }, { "code": "<?php// PHP implementation// to toggle bits in// the given range // function to toggle bits// in the given rangefunction toggleBitsFromLToR($n, $l, $r){ // calculating a number // 'num' having 'r' // number of bits and // bits in the range l // to r are the only // set bits $num = ((1 << $r) - 1) ^ ((1 << ($l - 1)) - 1); // toggle bits in the // range l to r in 'n' //Besides this, we can calculate num as: $num=(1<<$r)-$l . // and return the number return ($n ^ $num);} // Driver Code $n = 50; $l = 2; $r = 5; echo toggleBitsFromLToR($n, $l, $r); // This code is contributed by anuj_67?>", "e": 4801, "s": 4144, "text": null }, { "code": "<script> // Javascript implementation to toggle bits in// the given range // function to toggle bits in the given rangefunction toggleBitsFromLToR(n, l, r){ // calculating a number 'num' having 'r' // number of bits and bits in the range l // to r are the only set bits var num = ((1 << r) - 1) ^ ((1 << (l - 1)) - 1); // toggle bits in the range l to r in 'n' // and return the number//Besides this, we can calculate num as: num=(1<<r)-l . return (n ^ num);} // Driver program to test abovevar n = 50;var l = 2, r = 5;document.write( toggleBitsFromLToR(n, l, r)); </script>", "e": 5398, "s": 4801, "text": null }, { "code": null, "e": 5401, "s": 5398, "text": "44" }, { "code": null, "e": 5444, "s": 5401, "text": "Time Complexity: O(1)Auxiliary Space: O(1)" }, { "code": null, "e": 5605, "s": 5444, "text": "Approach 2: Iterate over the given range from L to R and check if the ith bit is set or not. if the ith bit is set then make it unset otherwise make it set bit." 
}, { "code": null, "e": 5609, "s": 5605, "text": "C++" }, { "code": null, "e": 5614, "s": 5609, "text": "Java" }, { "code": null, "e": 5617, "s": 5614, "text": "C#" }, { "code": null, "e": 5628, "s": 5617, "text": "Javascript" }, { "code": "// C++ implementation to toggle bits in// the given range#include <bits/stdc++.h> using namespace std; // Function to toggle bits in the given rangeint toggleBitsFromLToR(int N, int L, int R){ int res = N; for (int i = L; i <= R; i++) { // Set bit if ((N & (1 << (i - 1))) != 0) { // XOR will set 0 to already set // bits(a^a=0) res = res ^ (1 << (i - 1)); } // unset bits else { // OR will set'0'bits to 1 res = res | (1 << (i - 1)); } } return res;} // Driver codeint main(){ int n = 50; int l = 2, r = 5; cout << toggleBitsFromLToR(n, l, r); return 0;} // This code is contributed by phasing17", "e": 6350, "s": 5628, "text": null }, { "code": "// Java implementation to toggle bits in// the given rangeimport java.io.*; class GFG { // Function to toggle bits in the given range static int toggleBitsFromLToR(int N, int L, int R) { int res = N; for (int i = L; i <= R; i++) { // Set bit if ((N & (1 << (i - 1))) != 0) { // XOR will set 0 to already set // bits(a^a=0) res = res ^ (1 << (i - 1)); } // unset bits else { // OR will set'0'bits to 1 res = res | (1 << (i - 1)); } } return res; } // Driver method public static void main(String[] args) { int n = 50; int l = 2, r = 5; System.out.println(toggleBitsFromLToR(n, l, r)); }} // Contributed by Ocean Bhardwaj", "e": 7189, "s": 6350, "text": null }, { "code": "// C# implementation to toggle bits in// the given rangeusing System; class GFG { // Function to toggle bits in the given range static int toggleBitsFromLToR(int N, int L, int R) { int res = N; for (int i = L; i <= R; i++) { // Set bit if ((N & (1 << (i - 1))) != 0) { // XOR will set 0 to already set // bits(a^a=0) res = res ^ (1 << (i - 1)); } // unset bits else { // OR will set'0'bits to 1 res = res | (1 << (i - 1)); } } return res; } // Driver Code public static void Main(string[] args) { int n = 50; int l = 2, r = 5; // Function call Console.WriteLine(toggleBitsFromLToR(n, l, r)); }} // This code is Contributed by phasing17", "e": 7928, "s": 7189, "text": null }, { "code": "// JavaScript implementation to toggle bits in// the given range // Function to toggle bits in the given rangefunction toggleBitsFromLToR(N, L, R){ let res = N; for (let i = L; i <= R; i++) { // Set bit if ((N & (1 << (i - 1))) != 0) { // XOR will set 0 to already set // bits(a^a=0) res = res ^ (1 << (i - 1)); } // unset bits else { // OR will set'0'bits to 1 res = res | (1 << (i - 1)); } } return res;} // Driver codelet n = 50;let l = 2, r = 5;console.log(toggleBitsFromLToR(n, l, r)); // This code is contributed by phasing17", "e": 8575, "s": 7928, "text": null }, { "code": null, "e": 8578, "s": 8575, "text": "44" }, { "code": null, "e": 8630, "s": 8578, "text": "Time Complexity: O(R – L + 1) Auxiliary Space: O(1)" }, { "code": null, "e": 9052, "s": 8630, "text": "This article is contributed by Ayush Jauhari. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. 
" }, { "code": null, "e": 9057, "s": 9052, "text": "vt_m" }, { "code": null, "e": 9069, "s": 9057, "text": "rajat_kumar" }, { "code": null, "e": 9078, "s": 9069, "text": "noob2000" }, { "code": null, "e": 9095, "s": 9078, "text": "harendrakumar123" }, { "code": null, "e": 9105, "s": 9095, "text": "phasing17" }, { "code": null, "e": 9115, "s": 9105, "text": "Bit Magic" }, { "code": null, "e": 9125, "s": 9115, "text": "Bit Magic" }, { "code": null, "e": 9223, "s": 9125, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 9274, "s": 9223, "text": "Set, Clear and Toggle a given bit of a number in C" }, { "code": null, "e": 9314, "s": 9274, "text": "Find two numbers from their sum and XOR" }, { "code": null, "e": 9408, "s": 9314, "text": "Find a number X such that (X XOR A) is minimum and the count of set bits in X and B are equal" }, { "code": null, "e": 9443, "s": 9408, "text": "Equal Sum and XOR of three Numbers" }, { "code": null, "e": 9477, "s": 9443, "text": "Builtin functions of GCC compiler" }, { "code": null, "e": 9504, "s": 9477, "text": "Calculate XOR from 1 to n." }, { "code": null, "e": 9562, "s": 9504, "text": "Calculate square of a number without using *, / and pow()" }, { "code": null, "e": 9602, "s": 9562, "text": "Reverse actual bits of the given number" }, { "code": null, "e": 9652, "s": 9602, "text": "Find XOR of two number without using XOR operator" } ]
Where Does Python Look for Modules?
20 Aug, 2021
Modules are simply Python .py files whose functions, classes, and variables we can use in another file. To use them in another file, we first need to import the module into that file, and then we can use its contents. Modules can exist in various directories. In this article, we will discuss where Python looks for modules.
Python looks for modules in three steps:
First, it searches in the current directory.
If not found, it then searches the directories listed in the shell variable PYTHONPATH.
If that also fails, Python checks the installation-dependent list of directories configured at the time Python was installed.
Now we will discuss each of these steps:
Step 1: First, Python searches in the current directory. By the current directory we mean the directory in which the file calling the module exists. We can check the working directory with the os module's os.getcwd() method. The directory returned by this method is referred to as the current directory. The code for getting the current directory is:
Python
# importing os module
import os
# printing the current working directory
print(os.getcwd())
The output of the above code will be the current working directory, which is the first place searched for a module to be imported.
Step 2: If the module that needs to be imported is not found in the current directory, Python searches for it in PYTHONPATH, which is a list of directory names with the same syntax as the shell variable PATH. To know the directories in PYTHONPATH, we can simply read them via the sys module: sys.path gives us the list of all the paths that are searched when a module needs to be imported. To see these directories, we have to write the following code:
Python
# importing the sys module
import sys
# printing sys.path variable of sys module
print(sys.path)
Step 3: If the module is not found in the above two steps, the Python interpreter tries to find it in the installation-dependent list of directories that were configured at the time Python was installed. These directories are also included in the sys.path variable of the sys module and can be inspected in the same way as in the previous step. The code will be:
Python
# importing the sys module
import sys
# printing sys.path variable of sys module
print(sys.path)
simranarora5sos Picked python-modules python-os-module Python-sys Python Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here.
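To make the search order above tangible, here is a small added snippet (not from the original article) that asks Python where it would load a given module from, and shows that prepending a directory to sys.path makes it the first place searched. The directory name used here is only a placeholder.

import importlib.util
import sys

# Ask the import machinery where a module would be loaded from,
# without actually importing it.
spec = importlib.util.find_spec("json")
print(spec.origin)        # path of the file that would be used

# Directories earlier in sys.path win; prepending a folder makes it
# the first place searched (the path below is just an example).
sys.path.insert(0, "/tmp/my_modules")
print(sys.path[0])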
[ { "code": null, "e": 28, "s": 0, "text": "\n20 Aug, 2021" }, { "code": null, "e": 287, "s": 28, "text": "Modules are simply a python .py file from which we can use functions, classes, variables in another file. To use these things in another file we need to first import that module in that file and then we can use them. Modules can exist in various directories." }, { "code": null, "e": 357, "s": 287, "text": "In this article, we will discuss where does python looks for modules." }, { "code": null, "e": 395, "s": 357, "text": "Python looks for modules in 3 steps:-" }, { "code": null, "e": 649, "s": 395, "text": "First, it searches in the current directory.If not found then it searches in the directories which are in shell variable PYTHONPATHIf that also fails python checks the installation-dependent list of directories configured at the time Python is installed" }, { "code": null, "e": 694, "s": 649, "text": "First, it searches in the current directory." }, { "code": null, "e": 782, "s": 694, "text": "If not found then it searches in the directories which are in shell variable PYTHONPATH" }, { "code": null, "e": 905, "s": 782, "text": "If that also fails python checks the installation-dependent list of directories configured at the time Python is installed" }, { "code": null, "e": 946, "s": 905, "text": "Now we will discuss each of these steps:" }, { "code": null, "e": 1313, "s": 946, "text": "Step 1: Firstly the python searches in the current directory. From the current directory we mean the directory in which the file calling the module exists. We can check the working directory from os module of python by os.getcwd() method. The directory returned from this method is referred to as the current directory. The code for getting the current directory is:" }, { "code": null, "e": 1320, "s": 1313, "text": "Python" }, { "code": "# importing os moduleimport os # printing the current working directoryprint(os.getcwd())", "e": 1417, "s": 1320, "text": null }, { "code": null, "e": 1542, "s": 1417, "text": "The output of the above code will be the current working directory which will be first searched for a module to be imported." }, { "code": null, "e": 2012, "s": 1542, "text": "Step 2: If the module that needs to be imported is not found in the current directory. Then python will search it in the PYTHONPATH which is a list of directory names, with the same syntax as the shell variable PATH. To know the directories in PYTHONPATH we can simply get them by the sys module. The sys.path gives us the list of all the paths where the module will be searched when it is needed to import. To see these directories we have to write the following code:" }, { "code": null, "e": 2019, "s": 2012, "text": "Python" }, { "code": "# importing the sys moduleimport sys # printing sys.path variable of sys moduleprint(sys.path)", "e": 2122, "s": 2019, "text": null }, { "code": null, "e": 2469, "s": 2122, "text": "Step 3: If the module is not found in the above 2 steps the python interpreter then tries to find it in installation dependent list of directories that are configured at the time of installation of python. These directories are also included in sys.path variable of sys module and can be known in the same way as the above step. 
The code will be:" }, { "code": null, "e": 2476, "s": 2469, "text": "Python" }, { "code": "# importing the sys moduleimport sys # printing sys.path variable of sys moduleprint(sys.path)", "e": 2579, "s": 2476, "text": null }, { "code": null, "e": 2595, "s": 2579, "text": "simranarora5sos" }, { "code": null, "e": 2602, "s": 2595, "text": "Picked" }, { "code": null, "e": 2617, "s": 2602, "text": "python-modules" }, { "code": null, "e": 2634, "s": 2617, "text": "python-os-module" }, { "code": null, "e": 2645, "s": 2634, "text": "Python-sys" }, { "code": null, "e": 2652, "s": 2645, "text": "Python" } ]
PySpark Collect() – Retrieve data from DataFrame
17 Jun, 2021 Collect() is the function, operation for RDD or Dataframe that is used to retrieve the data from the Dataframe. It is used useful in retrieving all the elements of the row from each partition in an RDD and brings that over the driver node/program. So, in this article, we are going to learn how to retrieve the data from the Dataframe using collect() action operation. Syntax: df.collect() Where df is the dataframe Example 1: Retrieving all the Data from the Dataframe using collect(). After creating the Dataframe, for retrieving all the data from the dataframe we have used the collect() action by writing df.collect(), this will return the Array of row type, in the below output shows the schema of the dataframe and the actual created Dataframe. Python # importing necessary librariesfrom pyspark.sql import SparkSession # function to create new SparkSessiondef create_session(): spk = SparkSession.builder \ .appName("Corona_cases_statewise.com") \ .getOrCreate() return spk # function to create RDDdef create_RDD(sc_obj,data): df = sc.parallelize(data) return df if __name__ == "__main__": input_data = [("Uttar Pradesh",122000,89600,12238), ("Maharashtra",454000,380000,67985), ("Tamil Nadu",115000,102000,13933), ("Karnataka",147000,111000,15306), ("Kerala",153000,124000,5259)] # calling function to create SparkSession spark = create_session() # creating spark context object sc = spark.sparkContext # calling function to create RDD rd_df = create_RDD(sc,input_data) schema_lst = ["State","Cases","Recovered","Deaths"] # creating the dataframe using createDataFrame function df = spark.createDataFrame(rd_df,schema_lst) # printing schema of the dataframe and showing the dataframe df.printSchema() df.show() # retrieving the data from the dataframe using collect() df2= df.collect() print("Retrieved Data is:-") print(df2) Output: Example 2: Retrieving Data of specific rows using collect(). After creating the Dataframe, we have retrieved the data of 0th row Dataframe using collect() action by writing print(df.collect()[0][0:]) respectively in this we are passing row and column after collect(), in the first print statement we have passed row and column as [0][0:] here first [0] represents the row that we have passed 0 and second [0:] this represents the column and colon(:) is used to retrieve all the columns, in short, we have retrieve the 0th row with all the column elements. Python # importing necessary librariesfrom pyspark.sql import SparkSession # function to create new SparkSessiondef create_session(): spk = SparkSession.builder \ .appName("Corona_cases_statewise.com") \ .getOrCreate() return spk # function to create RDDdef create_RDD(sc_obj,data): df = sc.parallelize(data) return df if __name__ == "__main__": input_data = [("Uttar Pradesh",122000,89600,12238), ("Maharashtra",454000,380000,67985), ("Tamil Nadu",115000,102000,13933), ("Karnataka",147000,111000,15306), ("Kerala",153000,124000,5259)] # calling function to create SparkSession spark = create_session() # creating spark context object sc = spark.sparkContext # calling function to create RDD rd_df = create_RDD(sc,input_data) schema_lst = ["State","Cases","Recovered","Deaths"] # creating the dataframe using createDataFrame function df = spark.createDataFrame(rd_df,schema_lst) # printing schema of the dataframe and showing the dataframe df.printSchema() df.show() print("Retrieved Data is:-") # Retrieving data from 0th row print(df.collect()[0][0:]) Output: Example 3: Retrieve data of multiple rows using collect(). 
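A small added aside (not part of the original example): each element returned by collect() is a pyspark.sql.Row, so instead of slicing with [0][0:] you can also grab the first row with first() and turn it into a dictionary, which is often easier to read. This assumes the same df built in Example 1 above.

# Added illustration: alternative ways to look at a single row.
row = df.collect()[0]     # same Row object the example slices
print(row.asDict())       # e.g. {'State': 'Uttar Pradesh', 'Cases': 122000, ...}
print(df.first())         # first() returns the first Row directly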
After creating the Dataframe, we are retrieving the data of the first three rows of the dataframe using collect() action with for loop, by writing for row in df.collect()[0:3], after writing the collect() action we are passing the number rows we want [0:3], first [0] represents the starting row and using “:” semicolon and [3] represents the ending row till which we want the data of multiple rows. Here is the number of rows from which we are retrieving the data is 0,1 and 2 the last index is always excluded i.e, 3. Python # importing necessary librariesfrom pyspark.sql import SparkSessionfrom pyspark.sql.functions import col # function to create new SparkSessiondef create_session(): spk = SparkSession.builder \ .appName("Corona_cases_statewise.com") \ .getOrCreate() return spk # function to create RDDdef create_RDD(sc_obj,data): df = sc.parallelize(data) return df if __name__ == "__main__": input_data = [("Uttar Pradesh",122000,89600,12238), ("Maharashtra",454000,380000,67985), ("Tamil Nadu",115000,102000,13933), ("Karnataka",147000,111000,15306), ("Kerala",153000,124000,5259)] # calling function to create SparkSession spark = create_session() # creating spark context object sc = spark.sparkContext # calling function to create RDD rd_df = create_RDD(sc,input_data) schema_lst = ["State","Cases","Recovered","Deaths"] # creating the dataframe using createDataFrame function df = spark.createDataFrame(rd_df,schema_lst) # showing the dataframe and schema df.printSchema() df.show() print("Retrieved Data is:-") # Retrieving multiple rows using collect() and for loop for row in df.collect()[0:3]: print((row["State"]),",",str(row["Cases"]),",", str(row["Recovered"]),",",str(row["Deaths"])) Output: Example 4: Retrieve data from a specific column using collect(). After creating the Dataframe, we are retrieving the data of ‘Cases’ column using collect() action with for loop. By iterating the loop to df.collect(), that gives us the Array of rows from that rows we are retrieving and printing the data of ‘Cases’ column by writing print(col[“Cases”]); As we are getting the rows one by iterating for loop from Array of rows, from that row we are retrieving the data of “Cases” column only. By writing print(col[“Cases”]) here from each row we are retrieving the data of ‘Cases’ column by passing ‘Cases’ in col. Python # importing necessary librariesfrom pyspark.sql import SparkSessionfrom pyspark.sql.functions import col # function to create new SparkSessiondef create_session(): spk = SparkSession.builder \ .appName("Corona_cases_statewise.com") \ .getOrCreate() return spk # function to create RDDdef create_RDD(sc_obj,data): df = sc.parallelize(data) return df if __name__ == "__main__": input_data = [("Uttar Pradesh",122000,89600,12238), ("Maharashtra",454000,380000,67985), ("Tamil Nadu",115000,102000,13933), ("Karnataka",147000,111000,15306), ("Kerala",153000,124000,5259)] # calling function to create SparkSession spark = create_session() # creating spark context object sc = spark.sparkContext # calling function to create RDD rd_df = create_RDD(sc,input_data) schema_lst = ["State","Cases","Recovered","Deaths"] # creating the dataframe using createDataFrame function df = spark.createDataFrame(rd_df,schema_lst) # showing the dataframe and schema df.printSchema() df.show() print("Retrieved Data is:-") # Retrieving data from the "Cases" column for col in df.collect(): print(col["Cases"]) Output: Example 5: Retrieving the data from multiple columns using collect(). 
After creating the dataframe, we are retrieving the data of multiple columns which include “State”, “Recovered” and “Deaths”. For retrieving the data of multiple columns, firstly we have to get the Array of rows which we get using df.collect() action now iterate the for loop of every row of Array, as by iterating we are getting rows one by one so from that row we are retrieving the data of “State”, “Recovered” and “Deaths” column from every column and printing the data by writing, print(col[“State”],”,”,col[“Recovered”],”,”,col[“Deaths”]) Python # importing necessary librariesfrom pyspark.sql import SparkSessionfrom pyspark.sql.functions import col # function to create new SparkSessiondef create_session(): spk = SparkSession.builder \ .appName("Corona_cases_statewise.com") \ .getOrCreate() return spk # function to create RDDdef create_RDD(sc_obj,data): df = sc.parallelize(data) return df if __name__ == "__main__": input_data = [("Uttar Pradesh",122000,89600,12238), ("Maharashtra",454000,380000,67985), ("Tamil Nadu",115000,102000,13933), ("Karnataka",147000,111000,15306), ("Kerala",153000,124000,5259)] # calling function to create SparkSession spark = create_session() # creating spark context object sc = spark.sparkContext # calling function to create RDD rd_df = create_RDD(sc,input_data) schema_lst = ["State","Cases","Recovered","Deaths"] # creating the dataframe using createDataFrame function df = spark.createDataFrame(rd_df,schema_lst) # showing the dataframe and schema df.printSchema() df.show() print("Retrieved Data is:-") # Retrieving data of the "State", # "Recovered" and "Deaths" column for col in df.collect(): print(col["State"],",",col["Recovered"],", ",col["Deaths"]) Output: Picked Python-Pyspark Python Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. Python Dictionary Different ways to create Pandas Dataframe Enumerate() in Python Read a file line by line in Python Python String | replace() How to Install PIP on Windows ? *args and **kwargs in Python Python Classes and Objects Iterate over a list in Python Python OOPs Concepts
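One more added note on the column examples above (an aside, not from the original article): when only a few columns are needed on the driver, it is usually better to select them before calling collect(), so that the full rows are not shipped across. This again assumes the df created in the examples above.

# Added sketch: collect only the columns you need.
cases = [row["Cases"] for row in df.select("Cases").collect()]
print(cases)   # e.g. [122000, 454000, 115000, 147000, 153000]

pairs = df.select("State", "Recovered").collect()
for row in pairs:
    print(row["State"], row["Recovered"])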
[ { "code": null, "e": 28, "s": 0, "text": "\n17 Jun, 2021" }, { "code": null, "e": 276, "s": 28, "text": "Collect() is the function, operation for RDD or Dataframe that is used to retrieve the data from the Dataframe. It is used useful in retrieving all the elements of the row from each partition in an RDD and brings that over the driver node/program." }, { "code": null, "e": 397, "s": 276, "text": "So, in this article, we are going to learn how to retrieve the data from the Dataframe using collect() action operation." }, { "code": null, "e": 418, "s": 397, "text": "Syntax: df.collect()" }, { "code": null, "e": 444, "s": 418, "text": "Where df is the dataframe" }, { "code": null, "e": 515, "s": 444, "text": "Example 1: Retrieving all the Data from the Dataframe using collect()." }, { "code": null, "e": 779, "s": 515, "text": "After creating the Dataframe, for retrieving all the data from the dataframe we have used the collect() action by writing df.collect(), this will return the Array of row type, in the below output shows the schema of the dataframe and the actual created Dataframe." }, { "code": null, "e": 786, "s": 779, "text": "Python" }, { "code": "# importing necessary librariesfrom pyspark.sql import SparkSession # function to create new SparkSessiondef create_session(): spk = SparkSession.builder \\ .appName(\"Corona_cases_statewise.com\") \\ .getOrCreate() return spk # function to create RDDdef create_RDD(sc_obj,data): df = sc.parallelize(data) return df if __name__ == \"__main__\": input_data = [(\"Uttar Pradesh\",122000,89600,12238), (\"Maharashtra\",454000,380000,67985), (\"Tamil Nadu\",115000,102000,13933), (\"Karnataka\",147000,111000,15306), (\"Kerala\",153000,124000,5259)] # calling function to create SparkSession spark = create_session() # creating spark context object sc = spark.sparkContext # calling function to create RDD rd_df = create_RDD(sc,input_data) schema_lst = [\"State\",\"Cases\",\"Recovered\",\"Deaths\"] # creating the dataframe using createDataFrame function df = spark.createDataFrame(rd_df,schema_lst) # printing schema of the dataframe and showing the dataframe df.printSchema() df.show() # retrieving the data from the dataframe using collect() df2= df.collect() print(\"Retrieved Data is:-\") print(df2) ", "e": 1959, "s": 786, "text": null }, { "code": null, "e": 1967, "s": 1959, "text": "Output:" }, { "code": null, "e": 2028, "s": 1967, "text": "Example 2: Retrieving Data of specific rows using collect()." }, { "code": null, "e": 2523, "s": 2028, "text": "After creating the Dataframe, we have retrieved the data of 0th row Dataframe using collect() action by writing print(df.collect()[0][0:]) respectively in this we are passing row and column after collect(), in the first print statement we have passed row and column as [0][0:] here first [0] represents the row that we have passed 0 and second [0:] this represents the column and colon(:) is used to retrieve all the columns, in short, we have retrieve the 0th row with all the column elements." 
}, { "code": null, "e": 2530, "s": 2523, "text": "Python" }, { "code": "# importing necessary librariesfrom pyspark.sql import SparkSession # function to create new SparkSessiondef create_session(): spk = SparkSession.builder \\ .appName(\"Corona_cases_statewise.com\") \\ .getOrCreate() return spk # function to create RDDdef create_RDD(sc_obj,data): df = sc.parallelize(data) return df if __name__ == \"__main__\": input_data = [(\"Uttar Pradesh\",122000,89600,12238), (\"Maharashtra\",454000,380000,67985), (\"Tamil Nadu\",115000,102000,13933), (\"Karnataka\",147000,111000,15306), (\"Kerala\",153000,124000,5259)] # calling function to create SparkSession spark = create_session() # creating spark context object sc = spark.sparkContext # calling function to create RDD rd_df = create_RDD(sc,input_data) schema_lst = [\"State\",\"Cases\",\"Recovered\",\"Deaths\"] # creating the dataframe using createDataFrame function df = spark.createDataFrame(rd_df,schema_lst) # printing schema of the dataframe and showing the dataframe df.printSchema() df.show() print(\"Retrieved Data is:-\") # Retrieving data from 0th row print(df.collect()[0][0:])", "e": 3676, "s": 2530, "text": null }, { "code": null, "e": 3684, "s": 3676, "text": "Output:" }, { "code": null, "e": 3743, "s": 3684, "text": "Example 3: Retrieve data of multiple rows using collect()." }, { "code": null, "e": 4143, "s": 3743, "text": "After creating the Dataframe, we are retrieving the data of the first three rows of the dataframe using collect() action with for loop, by writing for row in df.collect()[0:3], after writing the collect() action we are passing the number rows we want [0:3], first [0] represents the starting row and using “:” semicolon and [3] represents the ending row till which we want the data of multiple rows." }, { "code": null, "e": 4263, "s": 4143, "text": "Here is the number of rows from which we are retrieving the data is 0,1 and 2 the last index is always excluded i.e, 3." }, { "code": null, "e": 4270, "s": 4263, "text": "Python" }, { "code": "# importing necessary librariesfrom pyspark.sql import SparkSessionfrom pyspark.sql.functions import col # function to create new SparkSessiondef create_session(): spk = SparkSession.builder \\ .appName(\"Corona_cases_statewise.com\") \\ .getOrCreate() return spk # function to create RDDdef create_RDD(sc_obj,data): df = sc.parallelize(data) return df if __name__ == \"__main__\": input_data = [(\"Uttar Pradesh\",122000,89600,12238), (\"Maharashtra\",454000,380000,67985), (\"Tamil Nadu\",115000,102000,13933), (\"Karnataka\",147000,111000,15306), (\"Kerala\",153000,124000,5259)] # calling function to create SparkSession spark = create_session() # creating spark context object sc = spark.sparkContext # calling function to create RDD rd_df = create_RDD(sc,input_data) schema_lst = [\"State\",\"Cases\",\"Recovered\",\"Deaths\"] # creating the dataframe using createDataFrame function df = spark.createDataFrame(rd_df,schema_lst) # showing the dataframe and schema df.printSchema() df.show() print(\"Retrieved Data is:-\") # Retrieving multiple rows using collect() and for loop for row in df.collect()[0:3]: print((row[\"State\"]),\",\",str(row[\"Cases\"]),\",\", str(row[\"Recovered\"]),\",\",str(row[\"Deaths\"]))", "e": 5560, "s": 4270, "text": null }, { "code": null, "e": 5568, "s": 5560, "text": "Output:" }, { "code": null, "e": 5633, "s": 5568, "text": "Example 4: Retrieve data from a specific column using collect()." 
}, { "code": null, "e": 5922, "s": 5633, "text": "After creating the Dataframe, we are retrieving the data of ‘Cases’ column using collect() action with for loop. By iterating the loop to df.collect(), that gives us the Array of rows from that rows we are retrieving and printing the data of ‘Cases’ column by writing print(col[“Cases”]);" }, { "code": null, "e": 6183, "s": 5922, "text": "As we are getting the rows one by iterating for loop from Array of rows, from that row we are retrieving the data of “Cases” column only. By writing print(col[“Cases”]) here from each row we are retrieving the data of ‘Cases’ column by passing ‘Cases’ in col." }, { "code": null, "e": 6190, "s": 6183, "text": "Python" }, { "code": "# importing necessary librariesfrom pyspark.sql import SparkSessionfrom pyspark.sql.functions import col # function to create new SparkSessiondef create_session(): spk = SparkSession.builder \\ .appName(\"Corona_cases_statewise.com\") \\ .getOrCreate() return spk # function to create RDDdef create_RDD(sc_obj,data): df = sc.parallelize(data) return df if __name__ == \"__main__\": input_data = [(\"Uttar Pradesh\",122000,89600,12238), (\"Maharashtra\",454000,380000,67985), (\"Tamil Nadu\",115000,102000,13933), (\"Karnataka\",147000,111000,15306), (\"Kerala\",153000,124000,5259)] # calling function to create SparkSession spark = create_session() # creating spark context object sc = spark.sparkContext # calling function to create RDD rd_df = create_RDD(sc,input_data) schema_lst = [\"State\",\"Cases\",\"Recovered\",\"Deaths\"] # creating the dataframe using createDataFrame function df = spark.createDataFrame(rd_df,schema_lst) # showing the dataframe and schema df.printSchema() df.show() print(\"Retrieved Data is:-\") # Retrieving data from the \"Cases\" column for col in df.collect(): print(col[\"Cases\"])", "e": 7376, "s": 6190, "text": null }, { "code": null, "e": 7384, "s": 7376, "text": "Output:" }, { "code": null, "e": 7454, "s": 7384, "text": "Example 5: Retrieving the data from multiple columns using collect()." }, { "code": null, "e": 7580, "s": 7454, "text": "After creating the dataframe, we are retrieving the data of multiple columns which include “State”, “Recovered” and “Deaths”." 
}, { "code": null, "e": 7999, "s": 7580, "text": "For retrieving the data of multiple columns, firstly we have to get the Array of rows which we get using df.collect() action now iterate the for loop of every row of Array, as by iterating we are getting rows one by one so from that row we are retrieving the data of “State”, “Recovered” and “Deaths” column from every column and printing the data by writing, print(col[“State”],”,”,col[“Recovered”],”,”,col[“Deaths”])" }, { "code": null, "e": 8006, "s": 7999, "text": "Python" }, { "code": "# importing necessary librariesfrom pyspark.sql import SparkSessionfrom pyspark.sql.functions import col # function to create new SparkSessiondef create_session(): spk = SparkSession.builder \\ .appName(\"Corona_cases_statewise.com\") \\ .getOrCreate() return spk # function to create RDDdef create_RDD(sc_obj,data): df = sc.parallelize(data) return df if __name__ == \"__main__\": input_data = [(\"Uttar Pradesh\",122000,89600,12238), (\"Maharashtra\",454000,380000,67985), (\"Tamil Nadu\",115000,102000,13933), (\"Karnataka\",147000,111000,15306), (\"Kerala\",153000,124000,5259)] # calling function to create SparkSession spark = create_session() # creating spark context object sc = spark.sparkContext # calling function to create RDD rd_df = create_RDD(sc,input_data) schema_lst = [\"State\",\"Cases\",\"Recovered\",\"Deaths\"] # creating the dataframe using createDataFrame function df = spark.createDataFrame(rd_df,schema_lst) # showing the dataframe and schema df.printSchema() df.show() print(\"Retrieved Data is:-\") # Retrieving data of the \"State\", # \"Recovered\" and \"Deaths\" column for col in df.collect(): print(col[\"State\"],\",\",col[\"Recovered\"],\", \",col[\"Deaths\"])", "e": 9270, "s": 8006, "text": null }, { "code": null, "e": 9278, "s": 9270, "text": "Output:" }, { "code": null, "e": 9285, "s": 9278, "text": "Picked" }, { "code": null, "e": 9300, "s": 9285, "text": "Python-Pyspark" }, { "code": null, "e": 9307, "s": 9300, "text": "Python" }, { "code": null, "e": 9405, "s": 9307, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 9423, "s": 9405, "text": "Python Dictionary" }, { "code": null, "e": 9465, "s": 9423, "text": "Different ways to create Pandas Dataframe" }, { "code": null, "e": 9487, "s": 9465, "text": "Enumerate() in Python" }, { "code": null, "e": 9522, "s": 9487, "text": "Read a file line by line in Python" }, { "code": null, "e": 9548, "s": 9522, "text": "Python String | replace()" }, { "code": null, "e": 9580, "s": 9548, "text": "How to Install PIP on Windows ?" }, { "code": null, "e": 9609, "s": 9580, "text": "*args and **kwargs in Python" }, { "code": null, "e": 9636, "s": 9609, "text": "Python Classes and Objects" }, { "code": null, "e": 9666, "s": 9636, "text": "Iterate over a list in Python" } ]
How to avoid ConcurrentModificationException while iterating a collection in Java?
When you are working with collection objects, while one thread is iterating over a particular collection object, if you try to add or remove elements from it, a ConcurrentModificationException will be thrown. Not only that, If you are iterating a collection object, add or remove elements to it and try to iterate its contents again it is considered that you are trying to access the collection object using multiple threads and ConcurrentModificationException is thrown. Live Demo import java.util.ArrayList; import java.util.Iterator; public class OccurenceOfElements { public static void main(String args[]) { ArrayList <String> list = new ArrayList<String>(); //Instantiating an ArrayList object list.add("JavaFX"); list.add("Java"); list.add("WebGL"); list.add("OpenCV"); System.out.println("Contents of the array list (first to last): "); Iterator<String> it = list.iterator(); while(it.hasNext()) { System.out.print(it.next()+", "); } //list.remove(3); list.add(3, "Hadoop"); while(it.hasNext()) { System.out.print(it.next()+", "); } } } Contents of the array list (first to last): JavaFX, Java, WebGL, OpenCV, Exception in thread "main" java.util.ConcurrentModificationException at java.util.ArrayList$Itr.checkForComodification(Unknown Source) at java.util.ArrayList$Itr.next(Unknown Source) at sample.OccurenceOfElements.main(OccurenceOfElements.java:23) To resolve this while accessing collection objects from multiple threads use synchronized block or method and, if you are modifying data while retrieving it, get the Iterator object again after modifying the data. Live Demo import java.util.ArrayList; import java.util.Iterator; public class OccurenceOfElements { public static void main(String args[]) { ArrayList <String> list = new ArrayList<String>(); //Instantiating an ArrayList object list.add("JavaFX"); list.add("Java"); list.add("WebGL"); list.add("OpenCV"); System.out.println("Contents of the array list (first to last): "); Iterator<String> it = list.iterator(); while(it.hasNext()) { System.out.print(it.next()+". "); } list.remove(3); System.out.println(""); System.out.println("Contents of the array list after removal: "); it = list.iterator(); while(it.hasNext()) { System.out.print(it.next()+". "); } } } Contents of the array list (first to last): JavaFX. Java. WebGL. OpenCV. Contents of the array list after removal: JavaFX. Java. WebGL.
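A further option worth showing (added here as a sketch, not part of the original article): if an element has to be removed while the iteration is still in progress, the iterator's own remove() method does this without triggering ConcurrentModificationException; for genuinely multi-threaded access, a concurrent collection such as CopyOnWriteArrayList can be used instead.

import java.util.ArrayList;
import java.util.Iterator;
public class SafeRemovalExample {
   public static void main(String args[]) {
      ArrayList<String> list = new ArrayList<String>();
      list.add("JavaFX");
      list.add("Java");
      list.add("WebGL");
      list.add("OpenCV");
      Iterator<String> it = list.iterator();
      while(it.hasNext()) {
         String value = it.next();
         if(value.equals("OpenCV")) {
            // Removing through the iterator keeps its internal state
            // consistent, so no ConcurrentModificationException is thrown.
            it.remove();
         }
      }
      System.out.println(list);   // [JavaFX, Java, WebGL]
   }
}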
[ { "code": null, "e": 1396, "s": 1187, "text": "When you are working with collection objects, while one thread is iterating over a particular collection object, if you try to add or remove elements from it, a ConcurrentModificationException will be thrown." }, { "code": null, "e": 1659, "s": 1396, "text": "Not only that, If you are iterating a collection object, add or remove elements to it and try to iterate its contents again it is considered that you are trying to access the collection object using multiple threads and ConcurrentModificationException is thrown." }, { "code": null, "e": 1670, "s": 1659, "text": " Live Demo" }, { "code": null, "e": 2341, "s": 1670, "text": "import java.util.ArrayList;\nimport java.util.Iterator;\npublic class OccurenceOfElements {\n public static void main(String args[]) {\n ArrayList <String> list = new ArrayList<String>();\n //Instantiating an ArrayList object\n list.add(\"JavaFX\");\n list.add(\"Java\");\n list.add(\"WebGL\");\n list.add(\"OpenCV\");\n System.out.println(\"Contents of the array list (first to last): \");\n Iterator<String> it = list.iterator();\n while(it.hasNext()) {\n System.out.print(it.next()+\", \");\n }\n //list.remove(3);\n list.add(3, \"Hadoop\");\n while(it.hasNext()) {\n System.out.print(it.next()+\", \");\n }\n }\n}" }, { "code": null, "e": 2670, "s": 2341, "text": "Contents of the array list (first to last):\nJavaFX, Java, WebGL, OpenCV, Exception in thread \"main\"\njava.util.ConcurrentModificationException\n at java.util.ArrayList$Itr.checkForComodification(Unknown Source)\n at java.util.ArrayList$Itr.next(Unknown Source)\n at sample.OccurenceOfElements.main(OccurenceOfElements.java:23)" }, { "code": null, "e": 2884, "s": 2670, "text": "To resolve this while accessing collection objects from multiple threads use synchronized block or method and, if you are modifying data while retrieving it, get the Iterator object again after modifying the data." }, { "code": null, "e": 2895, "s": 2884, "text": " Live Demo" }, { "code": null, "e": 3665, "s": 2895, "text": "import java.util.ArrayList;\nimport java.util.Iterator;\npublic class OccurenceOfElements {\n public static void main(String args[]) {\n ArrayList <String> list = new ArrayList<String>();\n //Instantiating an ArrayList object\n list.add(\"JavaFX\");\n list.add(\"Java\");\n list.add(\"WebGL\");\n list.add(\"OpenCV\");\n System.out.println(\"Contents of the array list (first to last): \");\n Iterator<String> it = list.iterator();\n while(it.hasNext()) {\n System.out.print(it.next()+\". \");\n }\n list.remove(3);\n System.out.println(\"\");\n System.out.println(\"Contents of the array list after removal: \");\n it = list.iterator();\n while(it.hasNext()) {\n System.out.print(it.next()+\". \");\n }\n }\n}" }, { "code": null, "e": 3801, "s": 3665, "text": "Contents of the array list (first to last):\nJavaFX. Java. WebGL. OpenCV.\nContents of the array list after removal:\nJavaFX. Java. WebGL." } ]
How to add a prefix to columns of an R data frame?
If we want to provide more information about the data, we have in columns of an R data frames then we might want to use prefixes. These prefixes help everyone to understand the data, for example, we can use data set name as a prefix, the analysis objective as a prefix, or something that is common among all the columns. To add a prefix to columns of an R data frame, we can use paste function to separate the prefix with the original column names. Consider the below data frame − set.seed(100) Rate <-sample(1:100,20) Level <-sample(1:10,20,replace=TRUE) Region <-rep(1:4,times=5) df <-data.frame(Rate,Level,Region) df Rate Level Region 1 74 2 1 2 89 3 2 3 78 4 3 4 23 4 4 5 86 4 1 6 70 5 2 7 4 7 3 8 55 9 4 9 95 4 1 10 7 2 2 11 91 6 3 12 93 7 4 13 43 1 1 14 82 6 2 15 61 9 3 16 12 9 4 17 51 9 1 18 72 6 2 19 18 8 3 20 25 7 4 Adding prefix to the columns of the data frame df − colnames(df) <-paste("2FactorData",colnames(df),sep="-") df 2FactorData-Rate 2FactorData-Level 2FactorData-Region 1 74 2 1 2 89 3 2 3 78 4 3 4 23 4 4 5 86 4 1 6 70 5 2 7 4 7 3 8 55 9 4 9 95 4 1 10 7 2 2 11 91 6 3 12 93 7 4 13 43 1 1 14 82 6 2 15 61 9 3 16 12 9 4 17 51 9 1 18 72 6 2 19 18 8 3 20 25 7 4 Let’s have a look at another example − x1 <-1:20 x2 <-20:1 y <-rnorm(20) df_new <-data.frame(x1,x2,y) df_new x1 x2 y 1 1 20 -0.69001432 2 2 19 -0.22179423 3 3 18 0.18290768 4 4 17 0.41732329 5 5 16 1.06540233 6 6 15 0.97020202 7 7 14 -0.10162924 8 8 13 1.40320349 9 9 12 -1.77677563 10 10 11 0.62286739 11 11 10 -0.52228335 12 12 9 1.32223096 13 13 8 -0.36344033 14 14 7 1.31906574 15 15 6 0.04377907 16 16 5 -1.87865588 17 17 4 -0.44706218 18 18 3 -1.73859795 19 19 2 0.17886485 20 20 1 1.89746570 colnames(df_new) <-paste("MultipleRegression",colnames(df_new),sep="_") df_new MultipleRegression_x1 MultipleRegression_x2 MultipleRegression_y 1 1 20 -0.69001432 2 2 19 -0.22179423 3 3 18 0.18290768 4 4 17 0.41732329 5 5 16 1.06540233 6 6 15 0.97020202 7 7 14 -0.10162924 8 8 13 1.40320349 9 9 12 -1.77677563 10 10 11 0.62286739 11 11 10 -0.52228335 12 12 9 1.32223096 13 13 8 -0.36344033 14 14 7 1.31906574 15 15 6 0.04377907 16 16 5 -1.87865588 17 17 4 -0.44706218 18 18 3 -1.73859795 19 19 2 0.17886485 20 20 1 1.89746570
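If the prefix later has to be removed or changed, the same idea works in reverse. The short sketch below continues from the column names created above (the pattern string is only an assumption matching this example) and uses sub() to strip the prefix, with paste() shown for adding a suffix instead:

# remove the prefix added earlier by matching it at the start of each name
colnames(df) <- sub("^2FactorData-", "", colnames(df))
# or append a suffix instead of a prefix
colnames(df) <- paste(colnames(df), "scaled", sep = "_")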
[ { "code": null, "e": 1636, "s": 1187, "text": "If we want to provide more information about the data, we have in columns of an R data frames then we might want to use prefixes. These prefixes help everyone to understand the data, for example, we can use data set name as a prefix, the analysis objective as a prefix, or something that is common among all the columns. To add a prefix to columns of an R data frame, we can use paste function to separate the prefix with the original column names." }, { "code": null, "e": 1668, "s": 1636, "text": "Consider the below data frame −" }, { "code": null, "e": 1807, "s": 1668, "text": "set.seed(100)\nRate <-sample(1:100,20)\nLevel <-sample(1:10,20,replace=TRUE)\nRegion <-rep(1:4,times=5)\ndf <-data.frame(Rate,Level,Region)\ndf" }, { "code": null, "e": 2014, "s": 1807, "text": "Rate Level Region\n1 74 2 1\n2 89 3 2\n3 78 4 3\n4 23 4 4\n5 86 4 1\n6 70 5 2\n7 4 7 3\n8 55 9 4\n9 95 4 1\n10 7 2 2\n11 91 6 3\n12 93 7 4\n13 43 1 1\n14 82 6 2\n15 61 9 3\n16 12 9 4\n17 51 9 1\n18 72 6 2\n19 18 8 3\n20 25 7 4" }, { "code": null, "e": 2066, "s": 2014, "text": "Adding prefix to the columns of the data frame df −" }, { "code": null, "e": 2126, "s": 2066, "text": "colnames(df) <-paste(\"2FactorData\",colnames(df),sep=\"-\")\ndf" }, { "code": null, "e": 2369, "s": 2126, "text": "2FactorData-Rate 2FactorData-Level 2FactorData-Region\n1 74 2 1\n2 89 3 2\n3 78 4 3\n4 23 4 4\n5 86 4 1\n6 70 5 2\n7 4 7 3\n8 55 9 4\n9 95 4 1\n10 7 2 2\n11 91 6 3\n12 93 7 4\n13 43 1 1\n14 82 6 2\n15 61 9 3\n16 12 9 4\n17 51 9 1\n18 72 6 2\n19 18 8 3\n20 25 7 4" }, { "code": null, "e": 2408, "s": 2369, "text": "Let’s have a look at another example −" }, { "code": null, "e": 2478, "s": 2408, "text": "x1 <-1:20\nx2 <-20:1\ny <-rnorm(20)\ndf_new <-data.frame(x1,x2,y)\ndf_new" }, { "code": null, "e": 2877, "s": 2478, "text": " x1 x2 y\n1 1 20 -0.69001432\n2 2 19 -0.22179423\n3 3 18 0.18290768\n4 4 17 0.41732329\n5 5 16 1.06540233\n6 6 15 0.97020202\n7 7 14 -0.10162924\n8 8 13 1.40320349\n9 9 12 -1.77677563\n10 10 11 0.62286739\n11 11 10 -0.52228335\n12 12 9 1.32223096\n13 13 8 -0.36344033\n14 14 7 1.31906574\n15 15 6 0.04377907\n16 16 5 -1.87865588\n17 17 4 -0.44706218\n18 18 3 -1.73859795\n19 19 2 0.17886485\n20 20 1 1.89746570" }, { "code": null, "e": 2956, "s": 2877, "text": "colnames(df_new) <-paste(\"MultipleRegression\",colnames(df_new),sep=\"_\")\ndf_new" }, { "code": null, "e": 3403, "s": 2956, "text": "MultipleRegression_x1 MultipleRegression_x2 MultipleRegression_y\n1 1 20 -0.69001432\n2 2 19 -0.22179423\n3 3 18 0.18290768\n4 4 17 0.41732329\n5 5 16 1.06540233\n6 6 15 0.97020202\n7 7 14 -0.10162924\n8 8 13 1.40320349\n9 9 12 -1.77677563\n10 10 11 0.62286739\n11 11 10 -0.52228335\n12 12 9 1.32223096\n13 13 8 -0.36344033\n14 14 7 1.31906574\n15 15 6 0.04377907\n16 16 5 -1.87865588\n17 17 4 -0.44706218\n18 18 3 -1.73859795\n19 19 2 0.17886485\n20 20 1 1.89746570" } ]
Traverse through a HashSet in Java
14 Dec, 2021
As we all know, HashSet elements are unordered, so the traversed elements can be printed in any order. In order to perform operations over our HashSet, such as insertion, deletion or updating of elements, we first need a way to access the HashSet. Below are a few ways with which we can iterate over the elements and perform any kind of operation on Set elements.

Methods:
Using for-each loop
Using forEach method
Using Iterators

Method 1: Using for-each loop

It is another traversing technique like the for loop, while loop and do-while loop, introduced in Java 5. It starts with the keyword for like a normal for-loop. Instead of declaring and initializing a loop counter variable, you declare a variable that is the same type as the base type of the collection, followed by a colon, which is then followed by the collection name.

Example:

// Java program to demonstrate iteration over
// HashSet using an Enhanced for-loop

import java.util.*;

class IterationDemo {
    public static void main(String[] args)
    {
        HashSet<String> h = new HashSet<String>();

        // Adding elements into HashSet using add()
        h.add("Geeks");
        h.add("for");
        h.add("Geeks");

        // Iterating over hash set items
        for (String i : h)
            System.out.println(i);
    }
}

Geeks
for

Method 2: Using forEach() method of Stream class

Stream forEach(Consumer action) performs an action for each element of the stream. Stream forEach(Consumer action) is a terminal operation, that is, it may traverse the stream to produce a result or a side-effect.

Tip: In Java 8 or above, we can iterate a List or Collection using the forEach() method.

Example:

// Java program to demonstrate iteration over
// HashSet using forEach() method

import java.util.*;

class IterationDemo {
    public static void main(String[] args)
    {
        HashSet<String> h = new HashSet<String>();

        // Adding elements into HashSet using add()
        h.add("Geeks");
        h.add("for");
        h.add("Geeks");

        // Iterating over hash set items
        h.forEach(i -> System.out.println(i));
    }
}

Geeks
for

Method 3: Using an Iterator

The iterator() method is used to get an iterator over the elements in this set. The elements are returned in no particular order. Below is the Java program to demonstrate it.

Example:

// Java program to Illustrate Traversal over HashSet
// Using an iterator

// Importing required classes
import java.util.*;

// Main class
class GFG {

    // Main driver method
    public static void main(String[] args)
    {
        // Creating empty HashSet by declaring object
        // of HashSet class of string type
        HashSet<String> h = new HashSet<String>();

        // Adding elements into HashSet
        // using add() method
        h.add("Geeks");
        h.add("for");
        h.add("Geeks");

        // Iterating over HashSet elements
        // using iterator
        Iterator<String> i = h.iterator();

        // Holds true till there is a single element remaining
        // in the Set
        while (i.hasNext())

            // Printing the elements
            System.out.println(i.next());
    }
}

Geeks
for
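Deletion is one of the operations mentioned above but not demonstrated. As a small add-on sketch (not part of the original examples), Java 8's Collection.removeIf() removes matching elements safely without writing an explicit iterator loop:

// Sketch: removing HashSet elements that match a condition
import java.util.*;

class RemoveIfDemo {
    public static void main(String[] args)
    {
        HashSet<String> h = new HashSet<String>();
        h.add("Geeks");
        h.add("for");
        h.add("GeeksforGeeks");

        // remove every element shorter than 4 characters in one call
        h.removeIf(s -> s.length() < 4);

        System.out.println(h);
    }
}

Internally removeIf() uses the set's own iterator, so it avoids the ConcurrentModificationException you would get by calling h.remove() inside a for-each loop.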
[ { "code": null, "e": 53, "s": 25, "text": "\n14 Dec, 2021" }, { "code": null, "e": 438, "s": 53, "text": "As we all know HashSet elements are unordered so the traversed elements can be printed in any order. In order to perform operations over our HashSet such as insertion, deletion, updating elements than first we need to reach out in order to access the HashSet. below are few ways with which we can iterate over elements to perform any kind of operations o Set elements as listed below." }, { "code": null, "e": 447, "s": 438, "text": "Methods:" }, { "code": null, "e": 502, "s": 447, "text": "Using for-each loopUsing forEach methodUsing Iterators" }, { "code": null, "e": 522, "s": 502, "text": "Using for-each loop" }, { "code": null, "e": 543, "s": 522, "text": "Using forEach method" }, { "code": null, "e": 559, "s": 543, "text": "Using Iterators" }, { "code": null, "e": 589, "s": 559, "text": "Method 1: Using for-each loop" }, { "code": null, "e": 1040, "s": 589, "text": "It is another array traversing technique like for loop, while loop, do-while loop introduced in Java 5. It starts with the keyword for like a normal for-loop. Instead of declaring and initializing a loop counter variable, you declare a variable that is the same type as the base type of the array, followed by a colon, which is then followed by the array name array traversing technique like for loop, while loop, do-while loop introduced in Java 5. " }, { "code": null, "e": 1049, "s": 1040, "text": "Example:" }, { "code": null, "e": 1054, "s": 1049, "text": "Java" }, { "code": "// Java program to demonstrate iteration over// HashSet using an Enhanced for-loop import java.util.*; class IterationDemo { public static void main(String[] args) { // your code goes here HashSet<String> h = new HashSet<String>(); // Adding elements into HashSet using add() h.add(\"Geeks\"); h.add(\"for\"); h.add(\"Geeks\"); // Iterating over hash set items for (String i : h) System.out.println(i); }}", "e": 1545, "s": 1054, "text": null }, { "code": null, "e": 1555, "s": 1545, "text": "Geeks\nfor" }, { "code": null, "e": 1604, "s": 1555, "text": "Method 2: Using forEach() method of Stream class" }, { "code": null, "e": 1818, "s": 1604, "text": "Stream forEach(Consumer action) performs an action for each element of the stream. Stream forEach(Consumer action) is a terminal operation that is, it may traverse the stream to produce a result or a side-effect. " }, { "code": null, "e": 1904, "s": 1818, "text": "Tip: In Java 8 or above, we can iterate a List or Collection using forEach() method." }, { "code": null, "e": 1913, "s": 1904, "text": "Example:" }, { "code": null, "e": 1918, "s": 1913, "text": "Java" }, { "code": "// Java program to demonstrate iteration over// HashSet using forEach() method import java.util.*; class IterationDemo { public static void main(String[] args) { // your code goes here HashSet<String> h = new HashSet<String>(); // Adding elements into HashSet using add() h.add(\"Geeks\"); h.add(\"for\"); h.add(\"Geeks\"); // Iterating over hash set items h.forEach(i -> System.out.println(i)); }}", "e": 2382, "s": 1918, "text": null }, { "code": null, "e": 2392, "s": 2382, "text": "Geeks\nfor" }, { "code": null, "e": 2422, "s": 2394, "text": "Method 3: Using an Iterator" }, { "code": null, "e": 2597, "s": 2422, "text": "The iterator() method is used to get an iterator over the elements in this set. The elements are returned in no particular order. Below is the java program to demonstrate it." 
}, { "code": null, "e": 2606, "s": 2597, "text": "Example " }, { "code": null, "e": 2611, "s": 2606, "text": "Java" }, { "code": "// Java program to Illustrate Traversal over HashSet// Using an iterator // Importing required classesimport java.util.*; // Main classclass GFG { // Main driver method public static void main(String[] args) { // Creating empty HashSet by declaring object // of HashSet class of string type HashSet<String> h = new HashSet<String>(); // Adding elements into HashSet // using add() method h.add(\"Geeks\"); h.add(\"for\"); h.add(\"Geeks\"); // Iterating over HashSet elements // using iterator Iterator<String> i = h.iterator(); // Holds true till there is single element remaining // in the Set while (i.hasNext()) // Printing the elements System.out.println(i.next()); }}", "e": 3422, "s": 2611, "text": null }, { "code": null, "e": 3432, "s": 3422, "text": "Geeks\nfor" }, { "code": null, "e": 3446, "s": 3432, "text": "solankimayank" }, { "code": null, "e": 3461, "s": 3446, "text": "prachisoda1234" }, { "code": null, "e": 3478, "s": 3461, "text": "Java-Collections" }, { "code": null, "e": 3491, "s": 3478, "text": "java-hashset" }, { "code": null, "e": 3509, "s": 3491, "text": "Java-Set-Programs" }, { "code": null, "e": 3516, "s": 3509, "text": "Picked" }, { "code": null, "e": 3521, "s": 3516, "text": "Java" }, { "code": null, "e": 3526, "s": 3521, "text": "Java" }, { "code": null, "e": 3543, "s": 3526, "text": "Java-Collections" } ]
Python | Intersect two dictionaries through keys
28 Feb, 2019
Given two dictionaries, the task is to find the intersection of these two dictionaries through keys. Let's see different ways to do this task.

Method #1: Using dict comprehension

# Python code to demonstrate
# intersection of two dictionaries
# using dict comprehension

# initialising dictionaries
ini_dict1 = {'nikhil': 1, 'vashu' : 5, 'manjeet' : 10, 'akshat' : 15}
ini_dict2 = {'akshat' :15, 'nikhil' : 1, 'me' : 56}

# printing initial dictionaries
print ("initial 1st dictionary", ini_dict1)
print ("initial 2nd dictionary", ini_dict2)

# intersecting two dictionaries
final_dict = {x:ini_dict1[x] for x in ini_dict1 if x in ini_dict2}

# printing final result
print ("final dictionary", str(final_dict))

initial 1st dictionary {'vashu': 5, 'manjeet': 10, 'nikhil': 1, 'akshat': 15}
initial 2nd dictionary {'nikhil': 1, 'me': 56, 'akshat': 15}
final dictionary {'nikhil': 1, 'akshat': 15}

Method #2: Using the & operator

# Python code to demonstrate
# intersection of two dictionaries
# using the & operator on item views

# initialising dictionaries
ini_dict1 = {'nikhil': 1, 'vashu' : 5, 'manjeet' : 10, 'akshat' : 15}
ini_dict2 = {'akshat' :15, 'nikhil' : 1, 'me' : 56}

# printing initial dictionaries
print ("initial 1st dictionary", ini_dict1)
print ("initial 2nd dictionary", ini_dict2)

# intersecting two dictionaries
final_dict = dict(ini_dict1.items() & ini_dict2.items())

# printing final result
print ("final dictionary", str(final_dict))

initial 1st dictionary {'vashu': 5, 'manjeet': 10, 'nikhil': 1, 'akshat': 15}
initial 2nd dictionary {'nikhil': 1, 'akshat': 15, 'me': 56}
final dictionary {'nikhil': 1, 'akshat': 15}
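Since the task is phrased as intersecting the dictionaries through keys, it is worth noting that Method #2 intersects whole (key, value) pairs, so a key present in both dictionaries but mapped to different values would be dropped. A keys-only variant, added here as a sketch and not part of the original article, applies the & operator to the key views instead:

# Sketch: intersection of two dictionaries through keys only

ini_dict1 = {'nikhil': 1, 'vashu': 5, 'manjeet': 10, 'akshat': 15}
ini_dict2 = {'akshat': 15, 'nikhil': 1, 'me': 56}

# keep every key present in both dictionaries,
# taking the value from the first dictionary
common_keys = ini_dict1.keys() & ini_dict2.keys()
final_dict = {k: ini_dict1[k] for k in common_keys}

print("final dictionary", str(final_dict))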
[ { "code": null, "e": 28, "s": 0, "text": "\n28 Feb, 2019" }, { "code": null, "e": 206, "s": 28, "text": "Given two dictionaries, the task is to find the intersection of these two dictionaries through keys. Let’s see different ways to do this task.Method #1: Using dict comprehension" }, { "code": "# Python code to demonstrate# intersection of two dictionaries # using dict comprehension # inititialising dictionaryini_dict1 = {'nikhil': 1, 'vashu' : 5, 'manjeet' : 10, 'akshat' : 15}ini_dict2 = {'akshat' :15, 'nikhil' : 1, 'me' : 56} # printing initial jsonprint (\"initial 1st dictionary\", ini_dict1)print (\"initial 2nd dictionary\", ini_dict2) # intersecting two dictionariesfinal_dict = {x:ini_dict1[x] for x in ini_dict1 if x in ini_dict2} # printing final resultprint (\"final dictionary\", str(final_dict))", "e": 766, "s": 206, "text": null }, { "code": null, "e": 948, "s": 766, "text": "initial 1st dictionary {‘vashu’: 5, ‘manjeet’: 10, ‘nikhil’: 1, ‘akshat’: 15}initial 2nd dictionary {‘nikhil’: 1, ‘me’: 56, ‘akshat’: 15}final dictionary {‘nikhil’: 1, ‘akshat’: 15}" }, { "code": null, "e": 977, "s": 948, "text": " Method #2: Using & operator" }, { "code": "# Python code to demonstrate# intersection of two dictionaries # using dict comprehension # inititialising dictionaryini_dict1 = {'nikhil': 1, 'vashu' : 5, 'manjeet' : 10, 'akshat' : 15}ini_dict2 = {'akshat' :15, 'nikhil' : 1, 'me' : 56} # printing initial jsonprint (\"initial 1st dictionary\", ini_dict1)print (\"initial 2nd dictionary\", ini_dict2) # intersecting two dictionariesfinal_dict = dict(ini_dict1.items() & ini_dict2.items()) # printing final resultprint (\"final dictionary\", str(final_dict))", "e": 1496, "s": 977, "text": null }, { "code": null, "e": 1678, "s": 1496, "text": "initial 1st dictionary {‘vashu’: 5, ‘manjeet’: 10, ‘nikhil’: 1, ‘akshat’: 15}initial 2nd dictionary {‘nikhil’: 1, ‘akshat’: 15, ‘me’: 56}final dictionary {‘nikhil’: 1, ‘akshat’: 15}" }, { "code": null, "e": 1688, "s": 1678, "text": "Marketing" }, { "code": null, "e": 1715, "s": 1688, "text": "Python dictionary-programs" }, { "code": null, "e": 1727, "s": 1715, "text": "python-dict" }, { "code": null, "e": 1734, "s": 1727, "text": "Python" }, { "code": null, "e": 1750, "s": 1734, "text": "Python Programs" }, { "code": null, "e": 1762, "s": 1750, "text": "python-dict" }, { "code": null, "e": 1860, "s": 1762, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 1890, "s": 1860, "text": "Iterate over a list in Python" }, { "code": null, "e": 1940, "s": 1890, "text": "Rotate axis tick labels in Seaborn and Matplotlib" }, { "code": null, "e": 1962, "s": 1940, "text": "Enumerate() in Python" }, { "code": null, "e": 1978, "s": 1962, "text": "Deque in Python" }, { "code": null, "e": 1994, "s": 1978, "text": "Stack in Python" }, { "code": null, "e": 2016, "s": 1994, "text": "Defaultdict in Python" }, { "code": null, "e": 2062, "s": 2016, "text": "Python | Split string into list of characters" }, { "code": null, "e": 2101, "s": 2062, "text": "Python | Get dictionary keys as a list" }, { "code": null, "e": 2147, "s": 2101, "text": "Iterate over characters of a string in Python" } ]
Material Design Lite - Icons
MDL provides a range of CSS classes to apply various predefined visual and behavioral enhancements and display the different types of checkboxes as icons. The following tables lists down the available classes and their effects. mdl-icon-toggle Identifies label as an MDL component and is required on label element. mdl-js-icon-toggle Sets basic MDL behavior to label and is required on label element. mdl-icon-toggle__input Sets basic MDL behavior to icon-toggle and is required on input element (icon-toggle). mdl-icon-toggle__label Sets basic MDL behavior to caption and is required on on i element (icon). mdl-js-ripple-effect Sets ripple click effect and is optional; goes on the label element and not on the input element (icon-toggle). The following example showcases the use of mdl-icon-toggle classes to show different types of checkboxes as icons. <html> <head> <script src = "https://storage.googleapis.com/code.getmdl.io/1.0.6/material.min.js"> </script> <link rel = "stylesheet" href = "https://storage.googleapis.com/code.getmdl.io/1.0.6/material.indigo-pink.min.css"> <link rel = "stylesheet" href = "https://fonts.googleapis.com/icon?family=Material+Icons"> </head> <body> <table> <tr><td>On Icon</td><td>Off Icon</td> <td>Disabled Icon</td></tr> <tr> <td> <label class = "mdl-icon-toggle mdl-js-icon-toggle mdl-js-ripple-effect" for = "icon-toggle-1"> <input type = "checkbox" id = "icon-toggle-1" class = "mdl-icon-toggle__input" checked> <i class = "mdl-icon-toggle__label material-icons">format_bold</i> </label> </td> <td> <label class = "mdl-icon-toggle mdl-js-icon-toggle mdl-js-ripple-effect" for = "icon-toggle-2"> <input type = "checkbox" id = "icon-toggle-2" class = "mdl-icon-toggle__input"> <i class = "mdl-icon-toggle__label material-icons">format_italic</i> </label> </td> <td> <label class = "mdl-icon-toggle mdl-js-icon-toggle mdl-js-ripple-effect" for = "icon-toggle-2"> <input type = "checkbox" id = "icon-toggle-2" class = "mdl-icon-toggle__input" disabled> <i class = "mdl-icon-toggle__label material-icons">format_underline</i> </label> </td> </tr> </table> </body> </html> Verify the result. Print Add Notes Bookmark this page
[ { "code": null, "e": 2114, "s": 1886, "text": "MDL provides a range of CSS classes to apply various predefined visual and behavioral enhancements and display the different types of checkboxes as icons. The following tables lists down the available classes and their effects." }, { "code": null, "e": 2130, "s": 2114, "text": "mdl-icon-toggle" }, { "code": null, "e": 2201, "s": 2130, "text": "Identifies label as an MDL component and is required on label element." }, { "code": null, "e": 2220, "s": 2201, "text": "mdl-js-icon-toggle" }, { "code": null, "e": 2287, "s": 2220, "text": "Sets basic MDL behavior to label and is required on label element." }, { "code": null, "e": 2310, "s": 2287, "text": "mdl-icon-toggle__input" }, { "code": null, "e": 2397, "s": 2310, "text": "Sets basic MDL behavior to icon-toggle and is required on input element (icon-toggle)." }, { "code": null, "e": 2420, "s": 2397, "text": "mdl-icon-toggle__label" }, { "code": null, "e": 2495, "s": 2420, "text": "Sets basic MDL behavior to caption and is required on on i element (icon)." }, { "code": null, "e": 2516, "s": 2495, "text": "mdl-js-ripple-effect" }, { "code": null, "e": 2628, "s": 2516, "text": "Sets ripple click effect and is optional; goes on the label element and not on the input element (icon-toggle)." }, { "code": null, "e": 2743, "s": 2628, "text": "The following example showcases the use of mdl-icon-toggle classes to show different types of checkboxes as icons." }, { "code": null, "e": 4548, "s": 2743, "text": "<html>\n <head>\n <script \n src = \"https://storage.googleapis.com/code.getmdl.io/1.0.6/material.min.js\">\n </script>\n <link rel = \"stylesheet\" \n href = \"https://storage.googleapis.com/code.getmdl.io/1.0.6/material.indigo-pink.min.css\">\n <link rel = \"stylesheet\" \n href = \"https://fonts.googleapis.com/icon?family=Material+Icons\">\t \n </head>\n \n <body>\n <table>\n <tr><td>On Icon</td><td>Off Icon</td>\n <td>Disabled Icon</td></tr>\n <tr>\n <td> \n <label class = \"mdl-icon-toggle mdl-js-icon-toggle mdl-js-ripple-effect\" \n for = \"icon-toggle-1\">\n <input type = \"checkbox\" id = \"icon-toggle-1\" \n class = \"mdl-icon-toggle__input\" checked>\n <i class = \"mdl-icon-toggle__label material-icons\">format_bold</i>\n </label>\n </td>\n \n <td>\n <label class = \"mdl-icon-toggle mdl-js-icon-toggle mdl-js-ripple-effect\" \n for = \"icon-toggle-2\">\n <input type = \"checkbox\" id = \"icon-toggle-2\" \n class = \"mdl-icon-toggle__input\">\n <i class = \"mdl-icon-toggle__label material-icons\">format_italic</i>\n </label>\n </td>\n \n <td>\n <label class = \"mdl-icon-toggle mdl-js-icon-toggle mdl-js-ripple-effect\" \n for = \"icon-toggle-2\">\n <input type = \"checkbox\" id = \"icon-toggle-2\" \n class = \"mdl-icon-toggle__input\" disabled>\n <i class = \"mdl-icon-toggle__label material-icons\">format_underline</i>\n </label>\n </td>\n </tr>\n </table> \n \n </body>\n</html>" }, { "code": null, "e": 4567, "s": 4548, "text": "Verify the result." }, { "code": null, "e": 4574, "s": 4567, "text": " Print" }, { "code": null, "e": 4585, "s": 4574, "text": " Add Notes" } ]
Pascal - For-do Loop
A for-do loop is a repetition control structure that allows you to efficiently write a loop that needs to execute a specific number of times.

The syntax for the for-do loop in Pascal is as follows −

for < variable-name > := < initial_value > to [downto] < final_value > do 
   S;

Where, the variable-name specifies a variable of ordinal type, called control variable or index variable; initial_value and final_value are values that the control variable can take; and S is the body of the for-do loop that could be a simple statement or a group of statements.

For example,

for i := 1 to 10 do writeln(i);

Here is the flow of control in a for-do loop −

The initial step is executed first, and only once. This step allows you to declare and initialize any loop control variables.

Next, the condition is evaluated. If it is true, the body of the loop is executed. If it is false, the body of the loop does not execute and the flow of control jumps to the next statement just after the for-do loop.

After the body of the for-do loop executes, the value of the control variable is either increased or decreased.

The condition is now evaluated again. If it is true, the loop executes and the process repeats itself (body of loop, then increment step, and then again condition). After the condition becomes false, the for-do loop terminates.

program forLoop;
var
   a: integer;

begin
   for a := 10 to 20 do

   begin
      writeln('value of a: ', a);
   end;
end.

When the above code is compiled and executed, it produces the following result −

value of a: 10
value of a: 11
value of a: 12
value of a: 13
value of a: 14
value of a: 15
value of a: 16
value of a: 17
value of a: 18
value of a: 19
value of a: 20
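The syntax above also allows the keyword downto, which makes the control variable count backwards; the ascending example never exercises it, so here is a small complementary sketch:

program forDownLoop;
var
   a: integer;

begin
   { count from 20 down to 10, one step at a time }
   for a := 20 downto 10 do

   begin
      writeln('value of a: ', a);
   end;
end.

With downto, the loop terminates once the control variable has passed the final value from above rather than from below.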
[ { "code": null, "e": 2225, "s": 2083, "text": "A for-do loop is a repetition control structure that allows you to efficiently write a loop that needs to execute a specific number of times." }, { "code": null, "e": 2282, "s": 2225, "text": "The syntax for the for-do loop in Pascal is as follows −" }, { "code": null, "e": 2364, "s": 2282, "text": "for < variable-name > := < initial_value > to [down to] < final_value > do \n S;" }, { "code": null, "e": 2650, "s": 2364, "text": "Where, the variable-name specifies a variable of ordinal type, called control variable or index variable; initial_value and final_value values are values that the control variable can take; and S is the body of the for-do loop that could be a simple statement or a group of statements." }, { "code": null, "e": 2663, "s": 2650, "text": "For example," }, { "code": null, "e": 2694, "s": 2663, "text": "for i:= 1 to 10 do writeln(i);" }, { "code": null, "e": 2741, "s": 2694, "text": "Here is the flow of control in a for-do loop −" }, { "code": null, "e": 2867, "s": 2741, "text": "The initial step is executed first, and only once. This step allows you to declare and initialize any loop control variables." }, { "code": null, "e": 2993, "s": 2867, "text": "The initial step is executed first, and only once. This step allows you to declare and initialize any loop control variables." }, { "code": null, "e": 3206, "s": 2993, "text": "Next, the condition is evaluated. If it is true, the body of the loop is executed. If it is false, the body of the loop does not execute and flow of control jumps to the next statement just after the for-do loop." }, { "code": null, "e": 3419, "s": 3206, "text": "Next, the condition is evaluated. If it is true, the body of the loop is executed. If it is false, the body of the loop does not execute and flow of control jumps to the next statement just after the for-do loop." }, { "code": null, "e": 3523, "s": 3419, "text": "After the body of the for-do loop executes, the value of the variable is either increased or decreased." }, { "code": null, "e": 3627, "s": 3523, "text": "After the body of the for-do loop executes, the value of the variable is either increased or decreased." }, { "code": null, "e": 3855, "s": 3627, "text": "The condition is now evaluated again. If it is true, the loop executes and the process repeats itself (body of loop, then increment step, and then again condition). After the condition becomes false, the for-do loop terminates." }, { "code": null, "e": 4083, "s": 3855, "text": "The condition is now evaluated again. If it is true, the loop executes and the process repeats itself (body of loop, then increment step, and then again condition). After the condition becomes false, the for-do loop terminates." }, { "code": null, "e": 4211, "s": 4083, "text": "program forLoop;\nvar\n a: integer;\n\nbegin\n for a := 10 to 20 do\n \n begin\n writeln('value of a: ', a);\n end;\nend." 
}, { "code": null, "e": 4292, "s": 4211, "text": "When the above code is compiled and executed, it produces the following result −" }, { "code": null, "e": 4458, "s": 4292, "text": "value of a: 10\nvalue of a: 11\nvalue of a: 12\nvalue of a: 13\nvalue of a: 14\nvalue of a: 15\nvalue of a: 16\nvalue of a: 17\nvalue of a: 18\nvalue of a: 19\nvalue of a: 20\n" }, { "code": null, "e": 4493, "s": 4458, "text": "\n 94 Lectures \n 8.5 hours \n" }, { "code": null, "e": 4516, "s": 4493, "text": " Stone River ELearning" }, { "code": null, "e": 4523, "s": 4516, "text": " Print" }, { "code": null, "e": 4534, "s": 4523, "text": " Add Notes" } ]
T-SQL - UPDATE Statement
The SQL Server UPDATE query is used to modify the existing records in a table.

You can use a WHERE clause with the UPDATE query to update selected rows; otherwise, all the rows would be affected.

Following is the basic syntax of an UPDATE query with a WHERE clause −

UPDATE table_name 
SET column1 = value1, column2 = value2...., columnN = valueN 
WHERE [condition];

You can combine N number of conditions using AND or OR operators.

Consider the CUSTOMERS table having the following records −

ID NAME     AGE ADDRESS   SALARY 
1  Ramesh   32  Ahmedabad 2000.00 
2  Khilan   25  Delhi     1500.00 
3  kaushik  23  Kota      2000.00 
4  Chaitali 25  Mumbai    6500.00 
5  Hardik   27  Bhopal    8500.00 
6  Komal    22  MP        4500.00 
7  Muffy    24  Indore    10000.00 

Following command is an example, which would update ADDRESS for a customer whose ID is 6 −

UPDATE CUSTOMERS 
SET ADDRESS = 'Pune' 
WHERE ID = 6;

CUSTOMERS table will now have the following records −

ID NAME     AGE ADDRESS   SALARY 
1  Ramesh   32  Ahmedabad 2000.00 
2  Khilan   25  Delhi     1500.00 
3  kaushik  23  Kota      2000.00 
4  Chaitali 25  Mumbai    6500.00 
5  Hardik   27  Bhopal    8500.00 
6  Komal    22  Pune      4500.00 
7  Muffy    24  Indore    10000.00 

If you want to modify all ADDRESS and SALARY column values in the CUSTOMERS table, you do not need to use the WHERE clause. The UPDATE query would be as follows −

UPDATE CUSTOMERS 
SET ADDRESS = 'Pune', SALARY = 1000.00;

CUSTOMERS table will now have the following records −

ID NAME     AGE ADDRESS SALARY 
1  Ramesh   32  Pune    1000.00 
2  Khilan   25  Pune    1000.00 
3  kaushik  23  Pune    1000.00 
4  Chaitali 25  Pune    1000.00 
5  Hardik   27  Pune    1000.00 
6  Komal    22  Pune    1000.00 
7  Muffy    24  Pune    1000.00 
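The note above about combining conditions with AND or OR is easy to illustrate. The statement below is a sketch against the same CUSTOMERS table; the chosen column values and the 10% raise are only illustrative assumptions, not part of the original tutorial:

-- raise salary only for rows matching both conditions
UPDATE CUSTOMERS 
SET SALARY = SALARY * 1.10 
WHERE AGE < 25 AND ADDRESS = 'Pune';

Only the rows that satisfy both conditions are changed; replacing AND with OR would widen the update to rows matching either condition.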
[ { "code": null, "e": 2139, "s": 2060, "text": "The SQL Server UPDATE Query is used to modify the existing records in a table." }, { "code": null, "e": 2248, "s": 2139, "text": "You can use WHERE clause with UPDATE query to update selected rows otherwise all the rows would be affected." }, { "code": null, "e": 2314, "s": 2248, "text": "Following is the basic syntax of UPDATE query with WHERE clause −" }, { "code": null, "e": 2415, "s": 2314, "text": "UPDATE table_name \nSET column1 = value1, column2 = value2...., columnN = valueN \nWHERE [condition];\n" }, { "code": null, "e": 2481, "s": 2415, "text": "You can combine N number of conditions using AND or OR operators." }, { "code": null, "e": 2541, "s": 2481, "text": "Consider the CUSTOMERS table having the following records −" }, { "code": null, "e": 2974, "s": 2541, "text": "ID NAME AGE ADDRESS SALARY \n1 Ramesh 32 Ahmedabad 2000.00 \n2 Khilan 25 Delhi 1500.00 \n3 kaushik 23 Kota 2000.00 \n4 Chaitali 25 Mumbai 6500.00 \n5 Hardik 27 Bhopal 8500.00 \n6 Komal 22 MP 4500.00 \n7 Muffy 24 Indore 10000.00 \n" }, { "code": null, "e": 3065, "s": 2974, "text": "Following command is an example, which would update ADDRESS for a customer whose ID is 6 −" }, { "code": null, "e": 3120, "s": 3065, "text": "UPDATE CUSTOMERS \nSET ADDRESS = 'Pune' \nWHERE ID = 6; " }, { "code": null, "e": 3174, "s": 3120, "text": "CUSTOMERS table will now have the following records −" }, { "code": null, "e": 3607, "s": 3174, "text": "ID NAME AGE ADDRESS SALARY \n1 Ramesh 32 Ahmedabad 2000.00 \n2 Khilan 25 Delhi 1500.00 \n3 kaushik 23 Kota 2000.00 \n4 Chaitali 25 Mumbai 6500.00 \n5 Hardik 27 Bhopal 8500.00 \n6 Komal 22 Pune 4500.00 \n7 Muffy 24 Indore 10000.00 \n" }, { "code": null, "e": 3758, "s": 3607, "text": "If you want to modify all ADDRESS and SALARY column values in CUSTOMERS table, you do not need to use WHERE clause. UPDATE query would be as follows −" }, { "code": null, "e": 3816, "s": 3758, "text": "UPDATE CUSTOMERS \nSET ADDRESS = 'Pune', SALARY = 1000.00;" }, { "code": null, "e": 3869, "s": 3816, "text": "CUSTOMERS table will now have the following records." }, { "code": null, "e": 4277, "s": 3869, "text": "ID NAME AGE ADDRESS SALARY \n1 Ramesh 32 Pune 1000.00 \n2 Khilan 25 Pune 1000.00 \n3 kaushik 23 Pune 1000.00 \n4 Chaitali 25 Pune 1000.00 \n5 Hardik 27 Pune 1000.00 \n6 Komal 22 Pune 1000.00 \n7 Muffy 24 Pune 1000.00 \n" }, { "code": null, "e": 4310, "s": 4277, "text": "\n 12 Lectures \n 2 hours \n" }, { "code": null, "e": 4325, "s": 4310, "text": " Nishant Malik" }, { "code": null, "e": 4360, "s": 4325, "text": "\n 10 Lectures \n 1.5 hours \n" }, { "code": null, "e": 4375, "s": 4360, "text": " Nishant Malik" }, { "code": null, "e": 4410, "s": 4375, "text": "\n 12 Lectures \n 2.5 hours \n" }, { "code": null, "e": 4425, "s": 4410, "text": " Nishant Malik" }, { "code": null, "e": 4458, "s": 4425, "text": "\n 20 Lectures \n 2 hours \n" }, { "code": null, "e": 4472, "s": 4458, "text": " Asif Hussain" }, { "code": null, "e": 4507, "s": 4472, "text": "\n 10 Lectures \n 1.5 hours \n" }, { "code": null, "e": 4522, "s": 4507, "text": " Nishant Malik" }, { "code": null, "e": 4557, "s": 4522, "text": "\n 48 Lectures \n 6.5 hours \n" }, { "code": null, "e": 4571, "s": 4557, "text": " Asif Hussain" }, { "code": null, "e": 4578, "s": 4571, "text": " Print" }, { "code": null, "e": 4589, "s": 4578, "text": " Add Notes" } ]
Understanding Multicollinearity and How to Detect it in Python | by Terence Shin | Towards Data Science
Be sure to subscribe here or to my personal newsletter to never miss another article on data science guides, tricks and tips, life lessons, and more!

Over the next few articles, I want to write about some really powerful topics related to regression analysis. For the longest time, I didn't think there was much to linear regression — I simply thought it was the simplest machine learning model that was nothing more than a line of best fit.

However, as I continue to learn more about regression analysis and what it has to offer, I'm realizing that there are a lot of powerful tools and tricks that most people don't know about.

And so, to start it off, I wanted to talk about multicollinearity. Specifically, I'm going to cover:

What multicollinearity is
What causes multicollinearity
Why it's bad for linear regression models
How you can detect and eliminate multicollinearity
How to detect multicollinearity in Python

With that said, let's dive into it!

Multicollinearity (or collinearity) occurs when one independent variable in a regression model is linearly correlated with another independent variable.

An example of this is if we used "Age" and "Number of Rings" in a regression model for predicting the weight of a tree.

Because there is a high correlation between the age of a tree and the number of rings that a tree has (generally, one ring per year), multicollinearity would be present in this model.

Be sure to subscribe here or to my personal newsletter to never miss another article on data science guides, tricks and tips, life lessons, and more!

In order to understand why multicollinearity is bad, we're going to have to look at how the regression coefficients (or the model's parameters) are estimated. This part involves a little bit of linear algebra, but feel free to skip this section if you're not interested.

NOTE: Don't worry if you skip this part, as long as you focus on why multicollinearity is bad and how you can eliminate it. :)

Note that the regression coefficients refer to the "slope" of each variable — in the equation y = B0 + B1x1 + B2x2, B1 and B2 are the regression coefficients. Remember that the point of a linear regression model is finding the best regression coefficients that represent the data.

In order to find the optimal regression coefficients, we want to find values for them that minimize the squared error. After doing a little bit of math, you can find the optimal parameters (B1, B2, ..., Bp) with the following equation:

B_hat = (XᵀX)^(-1) Xᵀ y

where B_hat is the vector that includes all individual regression coefficients and X is the design matrix which consists of the predicting variables.

Notice that we assume in the equation above that (XᵀX) is invertible in order to estimate B_hat.

If the columns of X are linearly dependent on each other (i.e. if multicollinearity is present), XᵀX is not invertible, and this results in several consequences, which you'll see in the next section.

To recap, XᵀX has to be invertible in order to properly estimate the regression coefficients for a multiple regression model. If XᵀX is not invertible, it means that the columns of X are linearly dependent on each other and multicollinearity is present.
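To see concretely why linear dependence breaks the inversion, consider a tiny worked sketch (not from the original article) with two predictors, where the second column is an exact multiple of the first, say x2 = 2·x1:

X = \begin{pmatrix} 1 & 2 \\ 2 & 4 \\ 3 & 6 \end{pmatrix}, \qquad
X^{\mathsf{T}}X = \begin{pmatrix} 14 & 28 \\ 28 & 56 \end{pmatrix}, \qquad
\det\!\left(X^{\mathsf{T}}X\right) = 14 \cdot 56 - 28 \cdot 28 = 0

Because the determinant is zero, XᵀX has no inverse, and the regression coefficients cannot be estimated uniquely; in the near-collinear case the determinant is merely close to zero, which is what inflates the variance of the estimates.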
Ultimately, the presence of multicollinearity results in several problems:

The fitted regression coefficients (beta hat) will change substantially if one of the values of one of the x variables is changed only a bit.

The variance of the estimated coefficients will be inflated, which means that it will be hard to detect statistical significance. Furthermore, it's possible that the F statistic is significant but the individual t statistics are not.

Ultimately, multicollinearity makes prediction less accurate. For a given model, the underlying assumption is that the relationships among the predicting variables, as well as their relationship with the target variable, will be the same. However, when multicollinearity is present, this is less likely to be the case.

A simple method to detect multicollinearity in a model is by using something called the variance inflation factor, or the VIF, for each predicting variable.

VIF measures the ratio between the variance for a given regression coefficient with only that variable in the model versus the variance for a given regression coefficient with all variables in the model.

A VIF of 1 (the minimum possible VIF) means the tested predictor is not correlated with the other predictors.

The higher the VIF:
The more correlated a predictor is with the other predictors
The more the standard error is inflated
The larger the confidence interval
The less likely it is that a coefficient will be evaluated as statistically significant

An acceptable VIF is one that is less than the maximum of 10 and 1/(1 − R²model), where R²model is the R² of the overall model.

To give an example, I'm going to use Kaggle's California Housing Prices dataset.

First, I imported all relevant libraries and data:

import pandas as pd
import numpy as np
from statsmodels.stats.outliers_influence import variance_inflation_factor

Next, for simplicity, I selected only 3 columns to be my features (X variables) and removed all nulls and infinite values:

df = pd.read_csv('housing.csv')
df = df[~df.isin([np.nan, np.inf, -np.inf]).any(1)]
X_variables = df[['total_rooms','total_bedrooms','median_income']]

Finally, I calculated the VIF for my X variables:

vif_data = pd.DataFrame()
vif_data["feature"] = X_variables.columns
vif_data["VIF"] = [variance_inflation_factor(X_variables.values, i) for i in range(len(X_variables.columns))]

Just like that, we get the final result (a small table listing each feature and its VIF):

Intuitively this makes complete sense. Total rooms and total bedrooms are far above the VIF threshold, indicating that there is high collinearity between these variables. We can intuitively understand this because there is a strong correlation between the number of rooms and the number of bedrooms (the more bedrooms, the more rooms, and vice versa).
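The title also promises elimination, and the most common remedy is simply to drop (or combine) one of the offending variables and recompute the VIFs. The snippet below is a sketch continuing from the variables defined above; which column to drop is a modelling choice, not something the code decides for you:

# drop one of the two highly collinear predictors and recompute the VIFs
X_reduced = X_variables.drop(columns=["total_bedrooms"])

vif_reduced = pd.DataFrame()
vif_reduced["feature"] = X_reduced.columns
vif_reduced["VIF"] = [variance_inflation_factor(X_reduced.values, i)
                      for i in range(len(X_reduced.columns))]
print(vif_reduced)

After removing total_bedrooms, the remaining VIFs are expected to fall sharply, because the main source of the collinearity is gone.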
Be sure to subscribe here or to my personal newsletter to never miss another article on data science guides, tricks and tips, life lessons, and more!

I hope you found this useful and that you learned something new! Multicollinearity is an extremely important concept for regression analysis, so it's definitely worth taking the time to understand.

Not sure what to read next? I've picked another article for you (towardsdatascience.com), and another one (towardsdatascience.com).

If you enjoyed this, follow me on Medium for more.

Interested in collaborating? Let's connect on LinkedIn.
[ { "code": null, "e": 322, "s": 172, "text": "Be sure to subscribe here or to my personal newsletter to never miss another article on data science guides, tricks and tips, life lessons, and more!" }, { "code": null, "e": 614, "s": 322, "text": "Over the next few articles, I want to write about some really powerful topics related to regression analysis. For the longest time, I didn’t think there was much to linear regression — I simply thought it was the simplest machine learning model that was nothing more than a line of best fit." }, { "code": null, "e": 800, "s": 614, "text": "However, as I continue to learn more about regression analysis and what it has to offer, I’m realizing that there’s a lot of powerful tools and tricks that most people don’t know about." }, { "code": null, "e": 901, "s": 800, "text": "And so, to start it off, I wanted to talk about multicollinearity. Specifically, I’m going to cover:" }, { "code": null, "e": 1088, "s": 901, "text": "What multicollinearity isWhat causes multicollinearityWhy it’s bad for linear regression modelsHow you can detect and eliminate multicollinearityHow to detect multicollinearity in Python" }, { "code": null, "e": 1114, "s": 1088, "text": "What multicollinearity is" }, { "code": null, "e": 1144, "s": 1114, "text": "What causes multicollinearity" }, { "code": null, "e": 1186, "s": 1144, "text": "Why it’s bad for linear regression models" }, { "code": null, "e": 1237, "s": 1186, "text": "How you can detect and eliminate multicollinearity" }, { "code": null, "e": 1279, "s": 1237, "text": "How to detect multicollinearity in Python" }, { "code": null, "e": 1315, "s": 1279, "text": "With that said, let’s dive into it!" }, { "code": null, "e": 1468, "s": 1315, "text": "Multicollinearity (or collinearity) occurs when one independent variable in a regression model is linearly correlated with another independent variable." }, { "code": null, "e": 1588, "s": 1468, "text": "An example of this is if we used “Age” and “Number of Rings” in a regression model for predicting the weight of a tree." }, { "code": null, "e": 1772, "s": 1588, "text": "Because there is a high correlation between the age of a tree and the number of rings that a tree has (generally, one ring per year), multicollinearity would be present in this model." }, { "code": null, "e": 1922, "s": 1772, "text": "Be sure to subscribe here or to my personal newsletter to never miss another article on data science guides, tricks and tips, life lessons, and more!" }, { "code": null, "e": 2193, "s": 1922, "text": "In order to understand why multicollinearity is bad, we’re going to have to look at how the regression coefficients (or the model’s parameters) are estimated. This part involves a little bit of linear algebra, but feel free to skip this section if you’re not interested." }, { "code": null, "e": 2320, "s": 2193, "text": "NOTE: Don’t worry if you skip this part, as long as you focus on why multicollinearity is bad and how you can eliminate it. :)" }, { "code": null, "e": 2601, "s": 2320, "text": "Note that the regression coefficients refer to the “slope” of each variable — in the equation y = B0 + B1x+ B2x2 , B1 and B2 are the regression coefficients. Remember that the point of a linear regression model is finding the best regression coefficients that represents the data." }, { "code": null, "e": 2836, "s": 2601, "text": "In order to find the optimal regression coefficients we want to find values for them that minimize the squared error. 
After doing a little bit of math, you can find the optimal parameters (B1, B2, ..., Bp) with the following equation:" }, { "code": null, "e": 2986, "s": 2836, "text": "where B_hat is the vector that includes all individual regression coefficients and X is the design matrix which consists of the predicting variables." }, { "code": null, "e": 3083, "s": 2986, "text": "Notice that we assume in the equation about that (XTX) is invertible in order to estimate B_hat." }, { "code": null, "e": 3283, "s": 3083, "text": "If the columns of X are linearly dependent of each other (i.e. if multicollinearity is present), XTX is not invertible, and this results in several consequences, which you’ll see in the next section." }, { "code": null, "e": 3533, "s": 3283, "text": "To recap, XTX has to be invertible in order to properly estimate regression coefficients for a multiple regression model. If XTX is not invertible, it means that the columns of X are linearly dependent of each other and multicollinearity is present." }, { "code": null, "e": 3608, "s": 3533, "text": "Ultimately, the presence of multicollinearity results in several problems:" }, { "code": null, "e": 4301, "s": 3608, "text": "The fitted regression coefficients (beta hat) will change substantially if one of the values of one of the x variables is changed only a bit.The variance of the estimated coefficients will be inflated, which means that it will be hard to detect statistical significance. Furthermore, it’s possible that the F statistic is significant but the individual t statistics are not.Ultimately, multicollinearity makes prediction less accurate. For a given model, the underlying assumption is that the relationships among the predicting variables, as well as their relationship with the target variable, will be the same. However, when multicollinearity is present, this is less likely to be the case." }, { "code": null, "e": 4443, "s": 4301, "text": "The fitted regression coefficients (beta hat) will change substantially if one of the values of one of the x variables is changed only a bit." }, { "code": null, "e": 4677, "s": 4443, "text": "The variance of the estimated coefficients will be inflated, which means that it will be hard to detect statistical significance. Furthermore, it’s possible that the F statistic is significant but the individual t statistics are not." }, { "code": null, "e": 4996, "s": 4677, "text": "Ultimately, multicollinearity makes prediction less accurate. For a given model, the underlying assumption is that the relationships among the predicting variables, as well as their relationship with the target variable, will be the same. However, when multicollinearity is present, this is less likely to be the case." }, { "code": null, "e": 5151, "s": 4996, "text": "A simple method to detect multicollinearity in a model is by using something called the variance inflation factor or the VIF for each predicting variable." }, { "code": null, "e": 5355, "s": 5151, "text": "VIF measures the ratio between the variance for a given regression coefficient with only that variable in the model versus the variance for a given regression coefficient with all variables in the model." }, { "code": null, "e": 5574, "s": 5355, "text": "A VIF of 1 (the minimum possible VIF) means the tested predictor is not correlated with the other predictors.A VIF of 1 (the minimum possible VIF) means the tested predictor is not correlated with the other predictors." 
}, { "code": null, "e": 5594, "s": 5574, "text": "The higher the VIF," }, { "code": null, "e": 5655, "s": 5594, "text": "The more correlated a predictor is with the other predictors" }, { "code": null, "e": 5695, "s": 5655, "text": "The more the standard error is inflated" }, { "code": null, "e": 5730, "s": 5695, "text": "The larger the confidence interval" }, { "code": null, "e": 5818, "s": 5730, "text": "The less likely it is that a coefficient will be evaluated as statistically significant" }, { "code": null, "e": 5900, "s": 5818, "text": "An acceptable VIF is if it’s less than the max of 10 and 1/1-R2model (see below):" }, { "code": null, "e": 5981, "s": 5900, "text": "To give an example, I’m going to use Kaggle’s California Housing Prices dataset." }, { "code": null, "e": 6032, "s": 5981, "text": "First, I imported all relevant libraries and data:" }, { "code": null, "e": 6144, "s": 6032, "text": "import pandas as pdimport numpy as npfrom statsmodels.stats.outliers_influence import variance_inflation_factor" }, { "code": null, "e": 6267, "s": 6144, "text": "Next, for simplicity, I selected only 3 columns to be my features (X variables) and removed all nulls and infinite values:" }, { "code": null, "e": 6416, "s": 6267, "text": "df = pd.read_csv('housing.csv')df = df[~df.isin([np.nan, np.inf, -np.inf]).any(1)]X_variables = df[['total_rooms','total_bedrooms','median_income']]" }, { "code": null, "e": 6466, "s": 6416, "text": "Finally, I calculated the VIF for my X variables:" }, { "code": null, "e": 6642, "s": 6466, "text": "vif_data = pd.DataFrame()vif_data[\"feature\"] = X_variables.columnsvif_data[\"VIF\"] = [variance_inflation_factor(X_variables.values, i) for i in range(len(X_variables.columns))]" }, { "code": null, "e": 6683, "s": 6642, "text": "Just like that, we get the final result:" }, { "code": null, "e": 7031, "s": 6683, "text": "Intuitively this makes complete sense. Total rooms and total bedrooms are far above the VIF threshold, indicating that there is high collinearity between these variables. We can intuitively understand this because there is a strong correlation between the number of rooms and number of bedrooms (the more bedrooms, the more rooms, and vice versa)." }, { "code": null, "e": 7181, "s": 7031, "text": "Be sure to subscribe here or to my personal newsletter to never miss another article on data science guides, tricks and tips, life lessons, and more!" }, { "code": null, "e": 7378, "s": 7181, "text": "I hope you found this useful and that you learned something new! Multicollinearity is an extremely important concept for regression analysis, so it’s definitely an important concept to understand." }, { "code": null, "e": 7443, "s": 7378, "text": "Not sure what to read next? I’ve picked another article for you:" }, { "code": null, "e": 7466, "s": 7443, "text": "towardsdatascience.com" }, { "code": null, "e": 7483, "s": 7466, "text": "and another one!" }, { "code": null, "e": 7506, "s": 7483, "text": "towardsdatascience.com" }, { "code": null, "e": 7556, "s": 7506, "text": "If you enjoyed this, follow me on Medium for more" } ]
D3.js - Colors API
Colors are displayed combining RED, GREEN and BLUE. Colors can be specified in the following different ways −

By color names
As RGB values
As hexadecimal values
As HSL values
As HWB values

The d3-color API provides representations for various colors. You can perform conversion and manipulation operations with this API. Let us understand these operations in detail.

You can directly load the API using the following script.

<script src = "https://d3js.org/d3-color.v1.min.js"></script>
<script>

</script>

Let us go through the basic color operations in D3.

Convert color value to HSL − To convert a color value to HSL, use the following −

var convert = d3.hsl("green");

You can rotate the hue by 45° as shown below.

convert.h += 45;

Similarly, you can change the saturation level as well. To fade the color value, you can change the opacity value as shown below.

convert.opacity = 0.5;

Following are some of the most important Color API Methods.

d3.color(specifier)
color.opacity
color.rgb()
color.toString()
color.displayable()
d3.rgb(color)
d3.hsl(color)
d3.lab(color)
d3.hcl(color)
d3.cubehelix(color)

Let us understand each of these Color API Methods in detail.

d3.color(specifier) − It is used to parse the specified CSS color and return RGB or HSL color. If specifier is not given, then null is returned.

Example − Let us consider the following example.

<script>
   var color = d3.color("green"); // assign color name directly
   console.log(color);
</script>

We will see the following response on our screen −

{r: 0, g: 128, b: 0, opacity: 1}

color.opacity − If we want to fade the color, we can change the opacity value. It is in the range of [0, 1].

Example − Let us consider the following example.

<script>
   var color = d3.color("green");
   console.log(color.opacity);
</script>

We will see the following response on the screen −

1

color.rgb() − It returns the RGB value for the color. Let us consider the following example.

<script>
   var color = d3.color("green");
   console.log(color.rgb());
</script>

We will see the following response on our screen.

{r: 0, g: 128, b: 0, opacity: 1}

color.toString() − It returns a string representing the color according to the CSS Object Model specification. Let us consider the following example.

<script>
   var color = d3.color("green");
   console.log(color.toString());
</script>

We will see the following response on our screen.

rgb(0, 128, 0)

color.displayable() − Returns true, if the color is displayable. Returns false, if the RGB color value is less than 0 or greater than 255, or if the opacity is not in the range [0, 1]. Let us consider the following example.

<script>
   var color = d3.color("green");
   console.log(color.displayable());
</script>

We will see the following response on our screen.

true

d3.rgb(color) − This method is used to construct a new RGB color. Let us consider the following example.

<script>
   console.log(d3.rgb("yellow"));
   console.log(d3.rgb(200,100,0));
</script>

We will see the following response on the screen.

{r: 255, g: 255, b: 0, opacity: 1}
{r: 200, g: 100, b: 0, opacity: 1}

d3.hsl(color) − It is used to construct a new HSL color. Values are exposed as h, s and l properties on the returned instance. Let us consider the following example.

<script>
   var hsl = d3.hsl("blue");
   console.log(hsl.h += 90);
   console.log(hsl.opacity = 0.5);
</script>

We will see the following response on the screen.

330
0.5

d3.lab(color) − It constructs a new Lab color. The channel values are exposed as 'l', 'a' and 'b' properties on the returned instance.

<script>
   var lab = d3.lab("blue");
   console.log(lab);
</script>

We will see the following response on the screen.

{l: 32.29701093285073, a: 79.18751984512221, b: -107.8601617541481, opacity: 1}

d3.hcl(color) − Constructs a new HCL color. The channel values are exposed as h, c and l properties on the returned instance. Let us consider the following example.

<script>
   var hcl = d3.hcl("blue");
   console.log(hcl);
</script>

We will see the following response on the screen.

{h: 306.2849380699878, c: 133.80761485376166, l: 32.29701093285073, opacity: 1}

d3.cubehelix(color) − Constructs a new Cubehelix color. Values are exposed as h, s and l properties on the returned instance. Let us consider the following example.

<script>
   var cube = d3.cubehelix("blue");
   console.log(cube);
</script>

We will see the following response on the screen.

{h: 236.94217167732103, s: 4.614386868039719, l: 0.10999954957200976, opacity: 1}

Let us create a new webpage – color.html to perform all the color API methods. The complete code listing is defined below.

<html>
   <head>
      <script type = "text/javascript" src = "https://d3js.org/d3.v4.min.js"></script>
   </head>

   <body>
      <h3>D3 colors API</h3>
      <script>
         var color = d3.color("green");
         console.log(color);
         console.log(color.opacity);
         console.log(color.rgb());
         console.log(color.toString());
         console.log(color.displayable());
         console.log(d3.rgb("yellow"));
         console.log(d3.rgb(200,100,0));

         var hsl = d3.hsl("blue");
         console.log(hsl.h += 90);
         console.log(hsl.opacity = 0.5);

         var lab = d3.lab("blue");
         console.log(lab);

         var hcl = d3.hcl("blue");
         console.log(hcl);

         var cube = d3.cubehelix("blue");
         console.log(cube);
      </script>
   </body>
</html>

Now, request the browser and we will see the following response.
[ { "code": null, "e": 2240, "s": 2130, "text": "Colors are displayed combining RED, GREEN and BLUE. Colors can be specified in the following different ways −" }, { "code": null, "e": 2255, "s": 2240, "text": "By color names" }, { "code": null, "e": 2269, "s": 2255, "text": "As RGB values" }, { "code": null, "e": 2291, "s": 2269, "text": "As hexadecimal values" }, { "code": null, "e": 2305, "s": 2291, "text": "As HSL values" }, { "code": null, "e": 2319, "s": 2305, "text": "As HWB values" }, { "code": null, "e": 2490, "s": 2319, "text": "The d3-color API provides representations for various colors. You can perform conversion and manipulation operations in API. Let us understand these operations in detail." }, { "code": null, "e": 2544, "s": 2490, "text": "You can directly load API using the following script." }, { "code": null, "e": 2626, "s": 2544, "text": "<script src = \"https://d3js.org/d3-color.v1.min.js\"></script>\n<script>\n\n</script>" }, { "code": null, "e": 2678, "s": 2626, "text": "Let us go through the basic color operations in D3." }, { "code": null, "e": 2766, "s": 2678, "text": "Convert color value to HSL − To convert color value to HSL, use the following Example −" }, { "code": null, "e": 2797, "s": 2766, "text": "var convert = d3.hsl(\"green\");" }, { "code": null, "e": 2843, "s": 2797, "text": "You can rotate the hue by 45° as shown below." }, { "code": null, "e": 2862, "s": 2843, "text": "convert.h + = 45;" }, { "code": null, "e": 2992, "s": 2862, "text": "Similarly, you can change the saturation level as well. To fade the color value, you can change the opacity value as shown below." }, { "code": null, "e": 3015, "s": 2992, "text": "convert.opacity = 0.5;" }, { "code": null, "e": 3075, "s": 3015, "text": "Following are some of the most important Color API Methods." }, { "code": null, "e": 3095, "s": 3075, "text": "d3.color(specifier)" }, { "code": null, "e": 3109, "s": 3095, "text": "color.opacity" }, { "code": null, "e": 3121, "s": 3109, "text": "color.rgb()" }, { "code": null, "e": 3138, "s": 3121, "text": "color.toString()" }, { "code": null, "e": 3158, "s": 3138, "text": "color.displayable()" }, { "code": null, "e": 3172, "s": 3158, "text": "d3.rgb(color)" }, { "code": null, "e": 3186, "s": 3172, "text": "d3.hsl(color)" }, { "code": null, "e": 3200, "s": 3186, "text": "d3.lab(color)" }, { "code": null, "e": 3214, "s": 3200, "text": "d3.hcl(color)" }, { "code": null, "e": 3234, "s": 3214, "text": "d3.cubehelix(color)" }, { "code": null, "e": 3295, "s": 3234, "text": "Let us understand each of these Color API Methods in detail." }, { "code": null, "e": 3418, "s": 3295, "text": "It is used to parse the specified CSS color and return RGB or HSL color. If specifier is not given, then null is returned." }, { "code": null, "e": 3467, "s": 3418, "text": "Example − Let us consider the following example." }, { "code": null, "e": 3573, "s": 3467, "text": "<script>\n var color = d3.color(\"green\"); // asign color name directly\n console.log(color);\n</script>" }, { "code": null, "e": 3624, "s": 3573, "text": "We will see the following response on our screen −" }, { "code": null, "e": 3658, "s": 3624, "text": "{r: 0, g: 128, b: 0, opacity: 1}\n" }, { "code": null, "e": 3751, "s": 3658, "text": "If we want to fade the color, we can change the opacity value. It is in the range of [0, 1]." }, { "code": null, "e": 3800, "s": 3751, "text": "Example − Let us consider the following example." 
}, { "code": null, "e": 3884, "s": 3800, "text": "<script>\n var color = d3.color(\"green\");\n console.log(color.opacity);\n</script>" }, { "code": null, "e": 3935, "s": 3884, "text": "We will see the following response on the screen −" }, { "code": null, "e": 3938, "s": 3935, "text": "1\n" }, { "code": null, "e": 4017, "s": 3938, "text": "It returns the RGB value for the color. Let us consider the following example." }, { "code": null, "e": 4099, "s": 4017, "text": "<script>\n var color = d3.color(\"green\");\n console.log(color.rgb());\n</script>" }, { "code": null, "e": 4149, "s": 4099, "text": "We will see the following response on our screen." }, { "code": null, "e": 4183, "s": 4149, "text": "{r: 0, g: 128, b: 0, opacity: 1}\n" }, { "code": null, "e": 4314, "s": 4183, "text": "It returns a string representing the color according to the CSS Object Model specification. Let us consider the following example." }, { "code": null, "e": 4401, "s": 4314, "text": "<script>\n var color = d3.color(\"green\");\n console.log(color.toString());\n</script>" }, { "code": null, "e": 4451, "s": 4401, "text": "We will see the following response on our screen." }, { "code": null, "e": 4467, "s": 4451, "text": "rgb(0, 128, 0)\n" }, { "code": null, "e": 4665, "s": 4467, "text": "Returns true, if the color is displayable. Returns false, if RGB color value is less than 0 or greater than 255, or if the opacity is not in the range [0, 1]. Let us consider the following example." }, { "code": null, "e": 4755, "s": 4665, "text": "<script>\n var color = d3.color(\"green\");\n console.log(color.displayable());\n</script>" }, { "code": null, "e": 4805, "s": 4755, "text": "We will see the following response on our screen." }, { "code": null, "e": 4811, "s": 4805, "text": "true\n" }, { "code": null, "e": 4900, "s": 4811, "text": "This method is used to construct a new RGB color. Let us consider the following example." }, { "code": null, "e": 4988, "s": 4900, "text": "<script>\n console.log(d3.rgb(\"yellow\"));\n console.log(d3.rgb(200,100,0));\n</script>" }, { "code": null, "e": 5038, "s": 4988, "text": "We will see the following response on the screen." }, { "code": null, "e": 5109, "s": 5038, "text": "{r: 255, g: 255, b: 0, opacity: 1}\n{r: 200, g: 100, b: 0, opacity: 1}\n" }, { "code": null, "e": 5259, "s": 5109, "text": "It is used to construct a new HSL color. Values are exposed as h, s and l properties on the returned instance. Let us consider the following example." }, { "code": null, "e": 5373, "s": 5259, "text": "<script>\n var hsl = d3.hsl(\"blue\");\n console.log(hsl.h + = 90);\n console.log(hsl.opacity = 0.5);\n</script>" }, { "code": null, "e": 5423, "s": 5373, "text": "We will see the following response on the screen." }, { "code": null, "e": 5432, "s": 5423, "text": "330\n0.5\n" }, { "code": null, "e": 5551, "s": 5432, "text": "It constructs a new Lab color. The channel values are exposed as ‘l’, ‘a’ and ‘b’ properties on the returned instance." }, { "code": null, "e": 5620, "s": 5551, "text": "<script>\n var lab = d3.lab(\"blue\");\n console.log(lab);\n</script>" }, { "code": null, "e": 5670, "s": 5620, "text": "We will see the following response on the screen." }, { "code": null, "e": 5751, "s": 5670, "text": "{l: 32.29701093285073, a: 79.18751984512221, b: -107.8601617541481, opacity: 1}\n" }, { "code": null, "e": 5900, "s": 5751, "text": "Constructs a new HCL color. The channel values are exposed as h, c and l properties on the returned instance. Let us consider the following example." 
}, { "code": null, "e": 5969, "s": 5900, "text": "<script>\n var hcl = d3.hcl(\"blue\");\n console.log(hcl);\n</script>" }, { "code": null, "e": 6019, "s": 5969, "text": "We will see the following response on the screen." }, { "code": null, "e": 6100, "s": 6019, "text": "{h: 306.2849380699878, c: 133.80761485376166, l: 32.29701093285073, opacity: 1}\n" }, { "code": null, "e": 6243, "s": 6100, "text": "Constructs a new Cubehelix color. Values are exposed as h, s and l properties on the returned instance. Let us consider the following example." }, { "code": null, "e": 6312, "s": 6243, "text": "<script>\n var hcl = d3.hcl(\"blue\");\n console.log(hcl);\n</script>" }, { "code": null, "e": 6362, "s": 6312, "text": "We will see the following response on the screen," }, { "code": null, "e": 6445, "s": 6362, "text": "{h: 236.94217167732103, s: 4.614386868039719, l: 0.10999954957200976, opacity: 1}\n" }, { "code": null, "e": 6568, "s": 6445, "text": "Let us create a new webpage – color.html to perform all the color API methods. The complete code listing is defined below." }, { "code": null, "e": 7385, "s": 6568, "text": "<html>\n <head>\n <script type = \"text/javascript\" src = \"https://d3js.org/d3.v4.min.js\"></script>\n </head>\n\n <body>\n <h3>D3 colors API</h3>\n <script>\n var color = d3.color(\"green\");\n console.log(color);\n console.log(color.opacity);\n console.log(color.rgb());\n console.log(color.toString());\n console.log(color.displayable());\n console.log(d3.rgb(\"yellow\"));\n console.log(d3.rgb(200,100,0));\n var hsl = d3.hsl(\"blue\");\n console.log(hsl.h + = 90);\n console.log(hsl.opacity = 0.5);\n var lab = d3.lab(\"blue\");\n console.log(lab);\n var hcl = d3.hcl(\"blue\");\n console.log(hcl);\n var cube = d3.cubehelix(\"blue\");\n console.log(cube);\n </script>\n </body>\n</html>" }, { "code": null, "e": 7450, "s": 7385, "text": "Now, request the browser and we will see the following response." }, { "code": null, "e": 7457, "s": 7450, "text": " Print" }, { "code": null, "e": 7468, "s": 7457, "text": " Add Notes" } ]
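As a closing, illustrative sketch for the colors chapter above (not part of the original listing), the specifier forms listed at the start of the chapter can all be parsed with d3.color(); the specifier strings below are assumed examples and presume the same d3 setup as the chapter −

<script>
   console.log(d3.color("steelblue"));          // by color name
   console.log(d3.color("#4682b4"));            // as a hexadecimal value
   console.log(d3.color("rgb(70, 130, 180)"));  // as an RGB value
   console.log(d3.color("hsl(207, 44%, 49%)")); // as an HSL value
</script>

The first three calls return the same RGB representation, while the HSL specifier returns an HSL instance; calling .rgb() on it converts it to RGB as shown earlier in the chapter.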