content: stringlengths 86 to 88.9k
title: stringlengths 0 to 150
question: stringlengths 1 to 35.8k
answers: sequence
answers_scores: sequence
non_answers: sequence
non_answers_scores: sequence
tags: sequence
name: stringlengths 30 to 130
Q: Why extending property of existing object with arrow function fails Why arrow function fails to identify this pointer in the following case. I know that regular functions have their own execution scope and this but I could not identify why following failed. In the arrow function case, this is undefined. If someone can shed light, it would be great. Thank you! PS: This code is just for experimental purposes, not anything serious. const addProperty = function(op, func) { String.prototype.__defineGetter__(op, func); }; // works addProperty('upper', function() { return this.toUpperCase(); }); // fails to identify this addProperty('lower', () => { return this.toLowerCase(); }); A: Arrow functions preserve the this of the outside scope. function keyword functions, in this context, get the this of the object they're put on. That's just what they do.
Why extending property of existing object with arrow function fails
Why arrow function fails to identify this pointer in the following case. I know that regular functions have their own execution scope and this but I could not identify why following failed. In the arrow function case, this is undefined. If someone can shed light, it would be great. Thank you! PS: This code is just for experimental purposes, not anything serious. const addProperty = function(op, func) { String.prototype.__defineGetter__(op, func); }; // works addProperty('upper', function() { return this.toUpperCase(); }); // fails to identify this addProperty('lower', () => { return this.toLowerCase(); });
[ "Arrow functions preserve the this of the outside scope. function keyword functions, in this context, get the this of the object they're put on. That's just what they do.\n" ]
[ 2 ]
[]
[]
[ "arrow_functions", "closures", "function", "javascript", "this" ]
stackoverflow_0074673291_arrow_functions_closures_function_javascript_this.txt
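A minimal TypeScript sketch of the distinction described in the answer above, using the standard Object.defineProperty in place of the legacy __defineGetter__ (the property names upper and lower are the question's own):

    const addProperty = (name: string, getter: (this: any) => unknown): void => {
      // Roughly what __defineGetter__ does: install an accessor on String.prototype.
      Object.defineProperty(String.prototype, name, { get: getter, configurable: true });
    };

    // A `function` expression receives `this` bound to the string the getter is read from.
    addProperty("upper", function (this: String) {
      return this.toUpperCase();
    });

    console.log(("abc" as any).upper); // "ABC"

    // An arrow function would instead capture `this` from the enclosing scope
    // (undefined in an ES module), so the equivalent getter throws when accessed:
    // addProperty("lower", () => this.toLowerCase()); // TypeError at property access

The arrow version fails not because the getter is never installed, but because the captured this is not the string the property is read from.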
Q: Converting text file to keys and value pairs so i have this text file that i want to convert into an javascript object The text file is the data of a private api My text file: Kevin 25 Mary 24 I want the result like this { Kevin: "25" Mary: "24" } A: import fs from 'fs' // ... const data = fs.readFileSync('/path/to/file'), const rawLines = data.split(/\r?\n/) // Turn file into array of lines, considering input file might use windows or UNIX style carriage returns. const object = rawLines.reduce((parsed, currentLine, index) => { if (((index + 1) % 2) !== 0) return parsed // If we are on a entry on lines 1,3,5,7 etc, then don't need to do anything because next loop can backreference return { ...parsed, [rawLines[index - 1]]: currentLine} // We are on lines 2,4,6,8 etc and so now we can populate the object, by backreferencing the previous line and setting the value as the current one }, {}) // ... console.log(object)
Converting text file to keys and value pairs
so i have this text file that i want to convert into an javascript object The text file is the data of a private api My text file: Kevin 25 Mary 24 I want the result like this { Kevin: "25" Mary: "24" }
[ "import fs from 'fs'\n\n// ...\n\nconst data = fs.readFileSync('/path/to/file'),\nconst rawLines = data.split(/\\r?\\n/) // Turn file into array of lines, considering input file might use windows or UNIX style carriage returns.\nconst object = rawLines.reduce((parsed, currentLine, index) => {\n if (((index + 1) % 2) !== 0) return parsed // If we are on a entry on lines 1,3,5,7 etc, then don't need to do anything because next loop can backreference\n return { ...parsed, [rawLines[index - 1]]: currentLine} // We are on lines 2,4,6,8 etc and so now we can populate the object, by backreferencing the previous line and setting the value as the current one\n}, {})\n\n// ...\n\nconsole.log(object)\n\n\n" ]
[ 0 ]
[]
[]
[ "javascript", "javascript_objects", "node.js", "object" ]
stackoverflow_0074673265_javascript_javascript_objects_node.js_object.txt
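The answer's snippet above is close but not runnable as posted: the readFileSync line ends in a stray comma, and without an encoding argument readFileSync returns a Buffer rather than a string. A self-contained TypeScript variant of the same pairing idea (Node with @types/node assumed; ./data.txt is a placeholder path):

    import { readFileSync } from "node:fs";

    // Placeholder path; substitute the real location of the text file.
    const raw = readFileSync("./data.txt", "utf8"); // "utf8" so a string is returned, not a Buffer

    // Split into lines (Windows or UNIX line endings) and drop blank lines.
    const lines = raw.split(/\r?\n/).filter((line) => line.trim() !== "");

    // Lines alternate name, value, name, value, ... so pair each line with the next one.
    const result: Record<string, string> = {};
    for (let i = 0; i + 1 < lines.length; i += 2) {
      result[lines[i].trim()] = lines[i + 1].trim();
    }

    console.log(result); // { Kevin: '25', Mary: '24' }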
Q: Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource - react js I run my project on my Mac OS device and I want to access from another laptop. the first device gets all responses from the server as well: http://192.168.1.101:3000/ but another laptop I got this error message: Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://www.shadyab.com/api/Api/coupons. (Reason: missing token ‘access-control-allow-origin’ in CORS header ‘Access-Control-Allow-Headers’ from CORS preflight channel). const requestOptions = { method: 'POST', headers: { 'Content-Type': 'multipart/form-data', 'Access-Control-Allow-Origin': '*'}, body: JSON.stringify(formData) }; A: Add headers: {'Access-Control-Allow-Origin': '*'} to your server where the API is fetching from A: I think this is something related to your backend sometimes backend only allows some origins and your new front-end domain must be added to Access-Control-Allow-Origin but sometimes that could be related to the webserver and its configuration needs to be change, for example if you are using Apache .htaccess file must be changed A: Assuming you are using cors() in the backend (like in a node server). Then in your react app, what you can do is setup proxy for the api endpoints. in the src directory create a file named setupProxy.js. What it does is, create proxies for your api endpoints. What you can do is something like below setupProxy.js const { createProxyMiddleware } = require('http-proxy-middleware'); const BACKEND_HOST = process.env.REACT_APP_BACKEND_HOST || 'localhost'; const BACKEND_PORT = process.env.BACKEND_PORT || 8000; module.exports = function(app) { app.use( '/', createProxyMiddleware({ target: target, changeOrigin: true, logLevel: 'debug' }) ); /** * You can create other proxies using app.use() method. */ }; Note: You do not need to import this file anywhere. It is automatically registered when you start the development server. And after creating proxies, you should send request to your backend server only specifying the endpoints. Like if you want to send request you should use / instead of http://localhost:8000. Let me if it works. Thanks
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource - react js
I run my project on my Mac OS device and I want to access from another laptop. the first device gets all responses from the server as well: http://192.168.1.101:3000/ but another laptop I got this error message: Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://www.shadyab.com/api/Api/coupons. (Reason: missing token ‘access-control-allow-origin’ in CORS header ‘Access-Control-Allow-Headers’ from CORS preflight channel). const requestOptions = { method: 'POST', headers: { 'Content-Type': 'multipart/form-data', 'Access-Control-Allow-Origin': '*'}, body: JSON.stringify(formData) };
[ "Add \nheaders: {'Access-Control-Allow-Origin': '*'}\n\nto your server where the API is fetching from \n", "I think this is something related to your backend sometimes backend only allows some origins and your new front-end domain must be added to Access-Control-Allow-Origin\nbut sometimes that could be related to the webserver and its configuration needs to be change, for example if you are using Apache .htaccess file must be changed\n", "Assuming you are using cors() in the backend (like in a node server).\nThen in your react app, what you can do is setup proxy for the api endpoints.\nin the src directory create a file named setupProxy.js. What it does is, create proxies for your api endpoints. What you can do is something like below\nsetupProxy.js\nconst { createProxyMiddleware } = require('http-proxy-middleware');\n\nconst BACKEND_HOST = process.env.REACT_APP_BACKEND_HOST || 'localhost';\nconst BACKEND_PORT = process.env.BACKEND_PORT || 8000;\n\nmodule.exports = function(app) {\n\n app.use(\n '/',\n createProxyMiddleware({\n target: target,\n changeOrigin: true,\n logLevel: 'debug'\n })\n );\n\n /**\n * You can create other proxies using app.use() method.\n */\n};\n\n\nNote: You do not need to import this file anywhere. It is automatically registered when you start the development server.\nAnd after creating proxies, you should send request to your backend server only specifying the endpoints. Like if you want to send request you should use / instead of http://localhost:8000.\nLet me if it works. Thanks\n" ]
[ 0, 0, 0 ]
[]
[]
[ "javascript", "reactjs" ]
stackoverflow_0049956900_javascript_reactjs.txt
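The recurring point in these answers is that Access-Control-Allow-Origin is a response header the server must send; putting it in the browser's request headers, as the question's requestOptions does, has no effect and is the very header the failed preflight complains about. A minimal sketch of the server side, assuming an Express backend with the cors middleware (the thread never says what the API at shadyab.com actually runs on):

    import express from "express";
    import cors from "cors";

    const app = express();

    // Allow the React dev origin; cors() sets Access-Control-Allow-Origin on responses
    // and answers the OPTIONS preflight that the failing POST triggers.
    app.use(cors({ origin: "http://192.168.1.101:3000" }));
    app.use(express.json());

    app.post("/api/Api/coupons", (_req, res) => {
      res.json({ ok: true }); // placeholder handler
    });

    app.listen(8000, () => console.log("API listening on port 8000"));

If the backend cannot be changed, the proxy approach from the third answer is the practical alternative during development.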
Q: Understanding how the "is" operator works int Python for result from function For example we have this code. x = 1 y = 1 print(x is y) # TRUE print(id(x), id(y)) y = pow(10, 30, 10**30-1) # 1 print(type(y)) print(x, y, x is y) # FALSE It`s return: True 140516304938720 140516304938720 <class 'int'> 1 1 False The last result is False. Please help me understand why this is happening? Result of function is 1, doesn`t it literal, which cach in python? If we change to y = pow(1, 10) It will return "True". A: The "is" operator checks whether two items are the same object. In your example it returns False because x is not the same object as y, even though they have the same content. for example: Here the x and y variables have the same content, but they are not the same object! x = ["apple", "banana"] y = ["apple", "banana"] print(x is y) #False print(x == y) #True To understand more, I suggest you check this link. https://www.w3schools.com/python/python_operators.asp
Understanding how the "is" operator works int Python for result from function
For example we have this code. x = 1 y = 1 print(x is y) # TRUE print(id(x), id(y)) y = pow(10, 30, 10**30-1) # 1 print(type(y)) print(x, y, x is y) # FALSE It`s return: True 140516304938720 140516304938720 <class 'int'> 1 1 False The last result is False. Please help me understand why this is happening? Result of function is 1, doesn`t it literal, which cach in python? If we change to y = pow(1, 10) It will return "True".
[ "The \"is\" operator checks whether two items are the same object.\nIn your example it returns False because x is not the same object as y, even though they have the same content.\nfor example:\nHere the x and y variables have the same content, but they are not the same object!\nx = [\"apple\", \"banana\"]\ny = [\"apple\", \"banana\"]\n\nprint(x is y) #False\nprint(x == y) #True\n\nTo understand more, I suggest you check this link.\nhttps://www.w3schools.com/python/python_operators.asp\n" ]
[ 0 ]
[ "x1 = 5\ny1 = 5\nx2 = 'Hello'\ny2 = 'Hello'\nx3 = [1,2,3]\ny3 = [1,2,3]\nprint(x1 is not y1) # prints False\nprint(x2 is y2) # prints True\nprint(x3 is y3) # prints False\n", "x = [\"apple\", \"banana\"]\n\ny = [\"apple\", \"banana\"]\nprint(x is y) #False\nprint(x == y) #True\n" ]
[ -1, -1 ]
[ "function", "literals", "operators", "python", "syntax" ]
stackoverflow_0074509703_function_literals_operators_python_syntax.txt
Q: python Streamlit update/overwrite column values without blinking Problem statement: I have 2 columns on streamlit: one for ticker_symbol and other for it's current value. I want to update the current_value every second (column2) but the code I have so far first removes the written value of the current_price and then writes the new value. I would like the current_value to be overwritten without being removed at all. This also means the last ticker_symbol in the col has to wait for a long time to show it's current value since the previous value gets removed by st.empty. What can I do to achieve the goal mentioned above? Should I not use st.empty ? Are there any other alternatives in streamlit ? import time import yfinance as yf import streamlit as st st.set_page_config(page_title="Test", layout='wide') stock_list = ['NVDA', 'AAPL', 'MSFT'] left, right, blank_col1, blank_col2, blank_col3, blank_col4, blank_col5, blank_col6, blank_col7, blank_col8, blank_col9, \ blank_col10 = st.columns(12, gap='small') with left: for index, val in enumerate(stock_list): st.write(val) with right: while True: numbers = st.empty() with numbers.container(): for index, val in enumerate(stock_list): stock = yf.Ticker(val) price = stock.info['regularMarketPrice'] # st.write(": ", price) st.write(": ", price) time.sleep(0.5) numbers.empty() A: Do like this. with right: # The number holder. numbers = st.empty() # The infinite loop prevents number holder from being emptied. while True: with numbers.container(): for index, val in enumerate(stock_list): stock = yf.Ticker(val) price = stock.info['regularMarketPrice'] st.write(": ", price) time.sleep(0.5) Sample simulation code with random. import time import streamlit as st import random st.set_page_config(page_title="Test", layout='wide') stock_list = ['NVDA', 'AAPL', 'MSFT'] (left, right, blank_col1, blank_col2, blank_col3, blank_col4, blank_col5, blank_col6, blank_col7, blank_col8, blank_col9, blank_col10) = st.columns(12, gap='small') with left: for index, val in enumerate(stock_list): st.write(val) with right: numbers = st.empty() while True: with numbers.container(): for index, val in enumerate(stock_list): price = random.randint(-100, 100) st.write(": ", price) time.sleep(0.5)
python Streamlit update/overwrite column values without blinking
Problem statement: I have 2 columns on streamlit: one for ticker_symbol and other for it's current value. I want to update the current_value every second (column2) but the code I have so far first removes the written value of the current_price and then writes the new value. I would like the current_value to be overwritten without being removed at all. This also means the last ticker_symbol in the col has to wait for a long time to show it's current value since the previous value gets removed by st.empty. What can I do to achieve the goal mentioned above? Should I not use st.empty ? Are there any other alternatives in streamlit ? import time import yfinance as yf import streamlit as st st.set_page_config(page_title="Test", layout='wide') stock_list = ['NVDA', 'AAPL', 'MSFT'] left, right, blank_col1, blank_col2, blank_col3, blank_col4, blank_col5, blank_col6, blank_col7, blank_col8, blank_col9, \ blank_col10 = st.columns(12, gap='small') with left: for index, val in enumerate(stock_list): st.write(val) with right: while True: numbers = st.empty() with numbers.container(): for index, val in enumerate(stock_list): stock = yf.Ticker(val) price = stock.info['regularMarketPrice'] # st.write(": ", price) st.write(": ", price) time.sleep(0.5) numbers.empty()
[ "Do like this.\nwith right:\n\n # The number holder.\n numbers = st.empty()\n\n # The infinite loop prevents number holder from being emptied.\n while True:\n with numbers.container():\n for index, val in enumerate(stock_list):\n stock = yf.Ticker(val)\n price = stock.info['regularMarketPrice']\n st.write(\": \", price)\n time.sleep(0.5)\n\nSample simulation code with random.\nimport time\nimport streamlit as st\nimport random\n\n\nst.set_page_config(page_title=\"Test\", layout='wide')\nstock_list = ['NVDA', 'AAPL', 'MSFT']\n\n(left, right, blank_col1, blank_col2, blank_col3, blank_col4,\nblank_col5, blank_col6, blank_col7, blank_col8, blank_col9,\nblank_col10) = st.columns(12, gap='small')\n\nwith left:\n for index, val in enumerate(stock_list):\n st.write(val)\n\n\nwith right:\n numbers = st.empty()\n while True:\n with numbers.container():\n for index, val in enumerate(stock_list):\n price = random.randint(-100, 100)\n st.write(\": \", price)\n time.sleep(0.5)\n\n" ]
[ 1 ]
[]
[]
[ "python_3.x", "streamlit" ]
stackoverflow_0074670216_python_3.x_streamlit.txt
Q: event Received trigger is not fired when sending an event activity in bot framework composer I am trying to send an activity from the botController.cs and I want to catch it from the bot framework composer. here is the code when I am sending the event activity: var userAccount = new ChannelAccount("e7h84gd7-fbb5-y3u6-h9d8-f8q3789649ec", "User"); var botAccount = new ChannelAccount("274d8t53-7492-98hr-r625-b11e3ht7e6wq", "Bot"); Activity activity = new Activity { From = userAccount, Recipient = botAccount, Type = ActivityTypes.Event, Name = "Agent_Closed_Session", }; await turnContext.SendActivityAsync(activity); this is in the bot composer to catch it: this is the response, it shows that the sender of the event is the bot and the recipient is the user, but in the request, I mentioned that the sender should be the user and the recipient should be the bot So in the action in the event activity trigger (response text; testtt) is not performed A: It looks like the issue may be with the way you are setting the From and Recipient properties on the Activity object you are creating. When sending an event activity, the From property should be set to the bot account and the Recipient property should be set to the user account. However, in the code you posted, you have it the other way around, with the From property set to the user account and the Recipient property set to the bot account. Here is an example of how you could modify your code to set the From and Recipient properties correctly when sending an event activity: var userAccount = new ChannelAccount("e7h84gd7-fbb5-y3u6-h9d8-f8q3789649ec", "User"); var botAccount = new ChannelAccount("274d8t53-7492-98hr-r625-b11e3ht7e6wq", "Bot"); // Set the From property to the bot account and the Recipient property to the user account Activity activity = new Activity { From = botAccount, Recipient = userAccount, Type = ActivityTypes.Event, Name = "Agent_Closed_Session", }; await turnContext.SendActivityAsync(activity); After making this change, the event activity should be received correctly by the bot framework composer and the associated action should be triggered. A: In the event trigger condition field, please provide the event name turn.activity.Name == "Agent_Closed_Session" event received condition Please accept the answer, if it answers your question
event Received trigger is not fired when sending an event activity in bot framework composer
I am trying to send an activity from the botController.cs and I want to catch it from the bot framework composer. here is the code when I am sending the event activity: var userAccount = new ChannelAccount("e7h84gd7-fbb5-y3u6-h9d8-f8q3789649ec", "User"); var botAccount = new ChannelAccount("274d8t53-7492-98hr-r625-b11e3ht7e6wq", "Bot"); Activity activity = new Activity { From = userAccount, Recipient = botAccount, Type = ActivityTypes.Event, Name = "Agent_Closed_Session", }; await turnContext.SendActivityAsync(activity); this is in the bot composer to catch it: this is the response, it shows that the sender of the event is the bot and the recipient is the user, but in the request, I mentioned that the sender should be the user and the recipient should be the bot So in the action in the event activity trigger (response text; testtt) is not performed
[ "It looks like the issue may be with the way you are setting the From and Recipient properties on the Activity object you are creating. When sending an event activity, the From property should be set to the bot account and the Recipient property should be set to the user account. However, in the code you posted, you have it the other way around, with the From property set to the user account and the Recipient property set to the bot account.\nHere is an example of how you could modify your code to set the From and Recipient properties correctly when sending an event activity:\nvar userAccount = new ChannelAccount(\"e7h84gd7-fbb5-y3u6-h9d8-f8q3789649ec\", \"User\");\nvar botAccount = new ChannelAccount(\"274d8t53-7492-98hr-r625-b11e3ht7e6wq\", \"Bot\");\n\n// Set the From property to the bot account and the Recipient property to the user account\nActivity activity = new Activity\n{\n From = botAccount,\n Recipient = userAccount,\n Type = ActivityTypes.Event,\n Name = \"Agent_Closed_Session\",\n};\n\nawait turnContext.SendActivityAsync(activity);\n\nAfter making this change, the event activity should be received correctly by the bot framework composer and the associated action should be triggered.\n", "In the event trigger condition field, please provide the event name\nturn.activity.Name == \"Agent_Closed_Session\"\n\nevent received condition\nPlease accept the answer, if it answers your question\n" ]
[ 1, 0 ]
[]
[]
[ "bot_framework_composer", "botframework", "bots" ]
stackoverflow_0074672118_bot_framework_composer_botframework_bots.txt
Q: Python Pandas Data frame Pivoting I have such .txt file: Field Value First 1 Second alfa First 23 Second beta First 55 Second omega I need to read and transform this file to get data like this: First Second 1 alfa 23 beta 55 omega I start with this: file = './data.txt' df = pd.read_csv(file, sep='\t',header=None, skiprows=89, skipfooter=11, engine='python') df = df.pivot(values=1, columns=0) but it looks as I need to generate some indexes otherwise my pivoted table looks not very well First Second 1 alfa 23 beta 55 omega Is any other solution hot to read that data and get the results that I need? A: The trick is you need to create common keys for the index. Using .assign create a column named CommonKeys which is the cumcount of grouping on the Fields column. Finally chain functions to pivot and clean up the df. df = ( df.assign(CommonKeys=df.groupby("Field").cumcount()) .pivot(index="CommonKeys", columns="Field", values="Value") .reset_index(drop=True) .rename_axis(None, axis=1) ) print(df) Output: First Second 0 1 alfa 1 23 beta 2 55 omega A: in order to make your code work I had to modify the way you access the .csv file, as I don't have that many rows. import pandas as pd file = './data.txt' df = pd.read_csv(file, sep='\t',header=0, engine='python') df = df.pivot(values='Value', columns='Field') # for each column on the dataframe, sort the value and ignore the index for col in df.columns: df[col] = df[col].sort_values(ignore_index=True) # drop NaN df.dropna(axis=0, how='all', inplace=True) # Show dataframe print(df) Here some more info about .sort_values: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_values.html Hope it can help :)
Python Pandas Data frame Pivoting
I have such .txt file: Field Value First 1 Second alfa First 23 Second beta First 55 Second omega I need to read and transform this file to get data like this: First Second 1 alfa 23 beta 55 omega I start with this: file = './data.txt' df = pd.read_csv(file, sep='\t',header=None, skiprows=89, skipfooter=11, engine='python') df = df.pivot(values=1, columns=0) but it looks as I need to generate some indexes otherwise my pivoted table looks not very well First Second 1 alfa 23 beta 55 omega Is any other solution hot to read that data and get the results that I need?
[ "The trick is you need to create common keys for the index.\nUsing .assign create a column named CommonKeys which is the cumcount of grouping on the Fields column. Finally chain functions to pivot and clean up the df.\ndf = (\n df.assign(CommonKeys=df.groupby(\"Field\").cumcount())\n .pivot(index=\"CommonKeys\", columns=\"Field\", values=\"Value\")\n .reset_index(drop=True)\n .rename_axis(None, axis=1)\n)\n\nprint(df)\n\nOutput:\n First Second\n0 1 alfa\n1 23 beta\n2 55 omega\n\n", "in order to make your code work I had to modify the way you access the .csv file, as I don't have that many rows.\nimport pandas as pd\n\nfile = './data.txt'\ndf = pd.read_csv(file, sep='\\t',header=0, engine='python')\ndf = df.pivot(values='Value', columns='Field')\n\n# for each column on the dataframe, sort the value and ignore the index\nfor col in df.columns:\n df[col] = df[col].sort_values(ignore_index=True)\n\n# drop NaN\ndf.dropna(axis=0, how='all', inplace=True)\n\n# Show dataframe\nprint(df)\n\nHere some more info about .sort_values:\nhttps://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_values.html\nHope it can help :)\n" ]
[ 1, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074671429_pandas_python.txt
Q: Partitioning in Hive table with multiple values? I want to create a simple hive partitioned table and have a sqoop import command to populate it. 1.Table have say 4 columns, ID, col1, col2, col3. One of the column say col2 is int type and contains values 1 to 10 only. I need to partition table based on col2 column with 1 to 5 value data should be in one partition and rest in another. I am currently trying this which doesnt work: Alter table tblname add partition (col2=1,col2=2,col2=3,col2=4,col2=5) location 'Part1'; Once done i need to populate this table with sqoop import from my sql server. I have tried many ways but not able to do it. Can anyone please help? A: Create a partitioned table and manually add a partition e.g. 1_to_3 create table ptable(name string) partitioned by (id string); alter table ptable add partition (id='1_to_3'); show partitions ptable; +------------+--+ | partition | +------------+--+ | id=1_to_3 | +------------+--+ I know that I should load data from department table into this partition, if department id 1 or 2 or 3. insert into ptable partition(id = '1_to_3') select department_name from departments where department_id between 1 and 3; See screenshot select * from ptable; +------------------+------------+--+ | ptable.name | ptable.id | +------------------+------------+--+ | Marketing | 1_to_3 | | Finance | 1_to_3 | | Human Resources | 1_to_3 | +------------------+------------+--+ You may need to add another partition to hold other values e.g department_id > 3
Partitioning in Hive table with multiple values?
I want to create a simple hive partitioned table and have a sqoop import command to populate it. 1.Table have say 4 columns, ID, col1, col2, col3. One of the column say col2 is int type and contains values 1 to 10 only. I need to partition table based on col2 column with 1 to 5 value data should be in one partition and rest in another. I am currently trying this which doesnt work: Alter table tblname add partition (col2=1,col2=2,col2=3,col2=4,col2=5) location 'Part1'; Once done i need to populate this table with sqoop import from my sql server. I have tried many ways but not able to do it. Can anyone please help?
[ "Create a partitioned table and manually add a partition e.g. 1_to_3\ncreate table ptable(name string) partitioned by (id string);\nalter table ptable add partition (id='1_to_3');\n\nshow partitions ptable;\n+------------+--+\n| partition |\n+------------+--+\n| id=1_to_3 |\n+------------+--+\n\nI know that I should load data from department table into this partition, if department id 1 or 2 or 3.\ninsert into ptable partition(id = '1_to_3') select department_name from departments where department_id between 1 and 3;\n\nSee screenshot\n\nselect * from ptable;\n+------------------+------------+--+\n| ptable.name | ptable.id |\n+------------------+------------+--+\n| Marketing | 1_to_3 |\n| Finance | 1_to_3 |\n| Human Resources | 1_to_3 |\n+------------------+------------+--+\n\nYou may need to add another partition to hold other values e.g department_id > 3\n" ]
[ 1 ]
[ "Static Partitioning\nTo perform this example, we have created a table “USER_DATA” with DATE_DT and COUNTRY as Partition columns. We will load data into “USER_DATA”.\nCreate Table Syntax:\nCREATE [TEMPORARY] [EXTERNAL] TABLE [IF NOT EXISTS] [db_name.]table_name \n[(col_name data_type [column_constraint_specification] [COMMENT col_comment], \n[COMMENT table_comment] \n[PARTITIONED BY (col_name data_type [COMMENT col_comment], ...)];\n\nCreate Table Statement:\nCREATE TABLE USER_DATA (USER_ID INT \n,USER_NAME STRING \n,SITE_DATA STRING) \nPARTITIONED BY (DATE_DT STRING,COUNTRY STRING) \nROW FORMAT DELIMITED \nFIELDS TERMINATED BY '\\t' \nSTORED AS TEXTFILE\n\nFor More Details... https://www.cloudduggu.com/hive/partitioning/\n" ]
[ -1 ]
[ "hive", "sqoop" ]
stackoverflow_0052758705_hive_sqoop.txt
Q: How to substitute JUnit in Spring Boot with TestNG for more robust testing purpose Recently, while running some tests, I wanted to use TestNG instead of JUnit. however when I added the dependency to pom file, imported the jar files. imported annotation, when I ran it, it passes for JUnit, but it's failing for TestNG with NPE (NullPointerException) java.lang.NullPointerException: Cannot invoke "org.springframework.context.ApplicationContext.getBean(java.lang.Class)" because "this.context" is null can someone show me the way to fix this issue? Thank you very much in advance! A: Make sure you have done the correct injection. Refer to the below answer - https://stackoverflow.com/a/2608580/14899650
How to substitute JUnit in Spring Boot with TestNG for more robust testing purpose
Recently, while running some tests, I wanted to use TestNG instead of JUnit. however when I added the dependency to pom file, imported the jar files. imported annotation, when I ran it, it passes for JUnit, but it's failing for TestNG with NPE (NullPointerException) java.lang.NullPointerException: Cannot invoke "org.springframework.context.ApplicationContext.getBean(java.lang.Class)" because "this.context" is null can someone show me the way to fix this issue? Thank you very much in advance!
[ "Make sure you have done the correct injection. Refer to the below answer -\nhttps://stackoverflow.com/a/2608580/14899650\n" ]
[ 0 ]
[]
[]
[ "junit", "spring", "spring_boot", "spring_test", "testng" ]
stackoverflow_0074673018_junit_spring_spring_boot_spring_test_testng.txt
Q: NetSuite I can't customize Transaction Forms even I have Administrator role I have Administrator role but I can't see edit/customize button in Transaction Form list Customization/Forms/Transaction Forms When I click the form, it goes to that record. ie, if I click any sales order form it goes to new Sales Order No Edit/Customize button Any idea please? I have Administrator role and I tried to check permissions but I couldn't find any specific one A: At the custom transaction forms page, if you click on the name it will be a preview of the form i.e. to input a new record view. If you want to change the layout of the form, click the "Edit" link instead. Another way is at any view transaction record page, click on "customise > Customised Form" link at the top right of the page. Hope this help.
NetSuite I can't customize Transaction Forms even I have Administrator role
I have Administrator role but I can't see edit/customize button in Transaction Form list Customization/Forms/Transaction Forms When I click the form, it goes to that record. ie, if I click any sales order form it goes to new Sales Order No Edit/Customize button Any idea please? I have Administrator role and I tried to check permissions but I couldn't find any specific one
[ "At the custom transaction forms page, if you click on the name it will be a preview of the form i.e. to input a new record view. If you want to change the layout of the form, click the \"Edit\" link instead.\nAnother way is at any view transaction record page, click on \"customise > Customised Form\" link at the top right of the page.\nHope this help.\n" ]
[ 0 ]
[]
[]
[ "netsuite" ]
stackoverflow_0074654053_netsuite.txt
Q: how to check if any of the files in a directory changed Given a directory, I'd like to know whether the files in the directory have been modified or not. (Boolean) i.e. if the directory's state has changed from before. I don't want to run a file watcher service for this as I don't need to know which file has been modified (or if many files change receive many events) I've looked at atime, mtime, ctime from stat eg: for a dir named taskmaster which already contains sample.txt stat taskmaster output File: taskmaster Size: 245760 Blocks: 480 IO Block: 4096 directory Device: 802h/2050d Inode: 1309314 Links: 1 Access: (0777/drwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2020-05-22 21:25:06.226421200 +0530 Modify: 2020-05-22 21:25:06.222175900 +0530 Change: 2020-05-22 21:25:06.222175900 +0530 Birth: - After I modify the folder contents # modify an existing file echo modify > taskmaster/sample.txt stat taskmaster gives File: taskmaster Size: 245760 Blocks: 480 IO Block: 4096 directory Device: 802h/2050d Inode: 1309314 Links: 1 Access: (0777/drwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2020-05-22 21:25:06.226421200 +0530 Modify: 2020-05-22 21:25:06.222175900 +0530 Change: 2020-05-22 21:25:06.222175900 +0530 Birth: - The exact same output. If no file is removed or deleted the access and modify times do not change. How can I achieve this? A: I think you need to do stat on individual files, something like this : previous="$(stat *)" while sleep 60; do current="$(stat *)" if [[ $current != $previous ]]; then echo "Some files changed." fi previous=$current done A: Earlier comment: stat -c %Y /path/to/directory also works, does have a ceveat. There are several fields that the stat command reads and accesses and it depends on how the contents of the direcctory were modified. stat command output uses st_mtime printf("Last file modification: %s", ctime(&sb.st_mtime)); Source: https://www.pdl.cmu.edu/posix/docs/POSIX-stat-manpages.pdf Stat documentation The field st_mtime for a file is changed by file modifications, e.g. by mknod(2), truncate(2), utime(2) and write(2) (of more than zero bytes). st_mtime of a directory is changed by the creation or deletion of files in that directory. The st_mtime field is not changed for changes in owner, group, hard link count, or mode. Source: http://man.he.net/man2/stat So if a file is created, written to or deleted within a directory, the directory modified time will be automatically updated, but not if an additional hardlink or permissions are changed on a file within that directory. I cannot find information on casading the changes up towards / but the directories utime cchange 'should' propagate this upwards but possibly not immediately. I would cetainly test that though for your use case.
how to check if any of the files in a directory changed
Given a directory, I'd like to know whether the files in the directory have been modified or not. (Boolean) i.e. if the directory's state has changed from before. I don't want to run a file watcher service for this as I don't need to know which file has been modified (or if many files change receive many events) I've looked at atime, mtime, ctime from stat eg: for a dir named taskmaster which already contains sample.txt stat taskmaster output File: taskmaster Size: 245760 Blocks: 480 IO Block: 4096 directory Device: 802h/2050d Inode: 1309314 Links: 1 Access: (0777/drwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2020-05-22 21:25:06.226421200 +0530 Modify: 2020-05-22 21:25:06.222175900 +0530 Change: 2020-05-22 21:25:06.222175900 +0530 Birth: - After I modify the folder contents # modify an existing file echo modify > taskmaster/sample.txt stat taskmaster gives File: taskmaster Size: 245760 Blocks: 480 IO Block: 4096 directory Device: 802h/2050d Inode: 1309314 Links: 1 Access: (0777/drwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2020-05-22 21:25:06.226421200 +0530 Modify: 2020-05-22 21:25:06.222175900 +0530 Change: 2020-05-22 21:25:06.222175900 +0530 Birth: - The exact same output. If no file is removed or deleted the access and modify times do not change. How can I achieve this?
[ "I think you need to do stat on individual files, something like this :\nprevious=\"$(stat *)\"\n\nwhile sleep 60; do\n current=\"$(stat *)\"\n if [[ $current != $previous ]]; then\n echo \"Some files changed.\"\n fi\n previous=$current\ndone\n\n", "Earlier comment: stat -c %Y /path/to/directory also works, does have a ceveat.\nThere are several fields that the stat command reads and accesses and it depends on how the contents of the direcctory were modified.\nstat command output uses st_mtime\nprintf(\"Last file modification: %s\", ctime(&sb.st_mtime));\n\nSource: https://www.pdl.cmu.edu/posix/docs/POSIX-stat-manpages.pdf\nStat documentation\nThe field st_mtime for a file is changed by file modifications, e.g. by mknod(2), truncate(2), utime(2) and write(2) (of more than zero bytes).\nst_mtime of a directory is changed by the creation or deletion of files in that directory. The st_mtime field is not changed for changes in owner, group, hard link count, or mode.\nSource: http://man.he.net/man2/stat\nSo if a file is created, written to or deleted within a directory, the directory modified time will be automatically updated, but not if an additional hardlink or permissions are changed on a file within that directory.\nI cannot find information on casading the changes up towards / but the directories utime cchange 'should' propagate this upwards but possibly not immediately. I would cetainly test that though for your use case.\n" ]
[ 2, 0 ]
[]
[]
[ "directory", "file", "filesystems", "linux", "stat" ]
stackoverflow_0061959164_directory_file_filesystems_linux_stat.txt
Q: How to delete edge and vertices from directed graph Given u and v I want to delete the u vertex and u,v edge from an adjacency list graph. The approach I followed is to delete the vertex from root and from the other sets of other roots. exemple : I want to delete the vertex u u -> a b c d x -> u // u should be deleted here s a brief description of my class : template <class T> class Digraph { public: Digraph(); ~Digraph(); void delete_vertex(T u); void delete_edge(T u, T v); private: std::map<T, std::set<T>> graph; } What I tried : template <class T> void Digraph<T>::delete_vertex(T u) { graphe.erase(u); for (auto const &pair : graphe) { for (auto const &elem : pair.second) { if(elem == u){ pair->second.erase(u); } } } } template <class T> void Digraph<T>::delete_edge(T u, T v) { std::set<T> s = graphe[u]; s.erase(v); } I wonder if what I'm doing in the delete_vertex function is right, because it doesn't work, maybe I forgot something, can someone help me with that? A: Both functions are bugged. The problem with delete_edge is that you are copying the set of edges, and you then erase from the copy not the original. This is the common beginner mistake of treating C++ objects as if they are references when they aren't. Here's version that really uses a reference template <class T> void Digraph<T>::delete_edge(T u, T v) { std::set<T>& s = graphe[u]; // s is a reference s.erase(v); } but just eliminating the variable is even simpler (IMHO) template <class T> void Digraph<T>::delete_edge(T u, T v) { graphe[u].erase(v); } The problem with delete_vertex is as Some Programmer Dude says, you cannot erase from a container if you are iterating through it. Here's an iterator based solution template <class T> void Digraph<T>::delete_vertex(T u) { graphe.erase(u); for (auto const &pair : graphe) { auto i = pair.second.begin(); while (i != pair.second.end()) { if (*i == u) i = pair.second.erase(i); else ++i; } } } It's even illegal to use ++ on an iterator to an erased element, which is why erase returns an iterator to the next element. Untested code, so apologies for any mistakes.
How to delete edge and vertices from directed graph
Given u and v I want to delete the u vertex and u,v edge from an adjacency list graph. The approach I followed is to delete the vertex from root and from the other sets of other roots. exemple : I want to delete the vertex u u -> a b c d x -> u // u should be deleted here s a brief description of my class : template <class T> class Digraph { public: Digraph(); ~Digraph(); void delete_vertex(T u); void delete_edge(T u, T v); private: std::map<T, std::set<T>> graph; } What I tried : template <class T> void Digraph<T>::delete_vertex(T u) { graphe.erase(u); for (auto const &pair : graphe) { for (auto const &elem : pair.second) { if(elem == u){ pair->second.erase(u); } } } } template <class T> void Digraph<T>::delete_edge(T u, T v) { std::set<T> s = graphe[u]; s.erase(v); } I wonder if what I'm doing in the delete_vertex function is right, because it doesn't work, maybe I forgot something, can someone help me with that?
[ "Both functions are bugged. The problem with delete_edge is that you are copying the set of edges, and you then erase from the copy not the original. This is the common beginner mistake of treating C++ objects as if they are references when they aren't. Here's version that really uses a reference\ntemplate <class T>\nvoid Digraph<T>::delete_edge(T u, T v)\n{\n std::set<T>& s = graphe[u]; // s is a reference\n s.erase(v);\n}\n\nbut just eliminating the variable is even simpler (IMHO)\ntemplate <class T>\nvoid Digraph<T>::delete_edge(T u, T v)\n{\n graphe[u].erase(v);\n}\n\nThe problem with delete_vertex is as Some Programmer Dude says, you cannot erase from a container if you are iterating through it. Here's an iterator based solution\ntemplate <class T>\nvoid Digraph<T>::delete_vertex(T u)\n{\n graphe.erase(u);\n for (auto const &pair : graphe)\n {\n auto i = pair.second.begin();\n while (i != pair.second.end())\n {\n if (*i == u)\n i = pair.second.erase(i);\n else\n ++i;\n }\n }\n}\n\nIt's even illegal to use ++ on an iterator to an erased element, which is why erase returns an iterator to the next element.\nUntested code, so apologies for any mistakes.\n" ]
[ 0 ]
[]
[]
[ "algorithm", "c++", "graph" ]
stackoverflow_0074673116_algorithm_c++_graph.txt
Q: Google sheets sumif with odd data I have sales data that gives me dates in a bad format. Every new sale gets automatically added to the sheet. Looks like this: Column A Column B Column C Order 1 2022-12-02T02:09:37Z $1025.19 Order 2 2022-12-02T01:25:15Z $873.65 This will continue on for all sales. Now the date format is UTC for whatever reason and I can't adjust that, so within this formula I have to subtract 6 hours to get it to central time. I'm trying to create an auto-updating chart that shows an average day for 7 days, so I'm trying to do a sumif formula. Here's what I have on Sheet2: =sumif(Sheet1!C:C,index(split((index(split(Sheet1!B:B,"T"),1)+index(split(left(Sheet1!B:B,19),"T"),2))-0.25,"."),1),A1) Where A1 is a single date. Testing this with one date and not the range shows that it does match. When I do the range, the total comes to 0, even though multiple different dates should match. What am I doing wrong? A: Use regexreplace() and query(), like this: =arrayformula( query( { weeknum( regexreplace(B2:B, "([-\d]+)T(\d\d:\d\d).+", "$1 $2") - "6:00" ), C2:C }, "select Col1, avg(Col2) where Col1 is not null group by Col1 label Col1 'week #' ", 0 ) ) A: I think you're trying to split the values and sum them. I can't understand fully what's the purpose of 19 in LEFT function, and why are you again splitting it? Maybe some approach similar to yours is use LEFT function with 10 characters for the date, and MID from 12th character to get the time. Then substract .25 for the 6 hours as you did, and ROUNDDOWN with 0 digits to get the only the day =ARRAYFORMULA(ROUNDDOWN(LEFT('Sheet1'!B:B,10)+MID('Sheet1'!B:B,12,8)-0.25,0)) And then you can insert it in your SUMIF: =SUMIF(Sheet1!C:C,ARRAYFORMULA(ROUNDDOWN(LEFT(Sheet1!B:B,10)+MID(Sheet1!B:B,12,8)-0.25,0)),A1) A: Assume A1 has the value: 2022-12-02T02:09:37Z Apply this formula: =LAMBDA(RAW,TUNEHOUR, LAMBDA(DATE,TIME, TEXT((DATE&" "&TIME)+TUNEHOUR/24,"yyyy-mm-dd hh:mm:ss") )(TEXT(INDEX(RAW,,1),"yyyy-mm-dd"),REGEXREPLACE(INDEX(RAW,,2),"Z","")) )(SPLIT(A1,"T"),-6) returns: 2022-12-01 20:09:37
Google sheets sumif with odd data
I have sales data that gives me dates in a bad format. Every new sale gets automatically added to the sheet. Looks like this: Column A Column B Column C Order 1 2022-12-02T02:09:37Z $1025.19 Order 2 2022-12-02T01:25:15Z $873.65 This will continue on for all sales. Now the date format is UTC for whatever reason and I can't adjust that, so within this formula I have to subtract 6 hours to get it to central time. I'm trying to create an auto-updating chart that shows an average day for 7 days, so I'm trying to do a sumif formula. Here's what I have on Sheet2: =sumif(Sheet1!C:C,index(split((index(split(Sheet1!B:B,"T"),1)+index(split(left(Sheet1!B:B,19),"T"),2))-0.25,"."),1),A1) Where A1 is a single date. Testing this with one date and not the range shows that it does match. When I do the range, the total comes to 0, even though multiple different dates should match. What am I doing wrong?
[ "Use regexreplace() and query(), like this:\n=arrayformula( \n query( \n { \n weeknum( \n regexreplace(B2:B, \"([-\\d]+)T(\\d\\d:\\d\\d).+\", \"$1 $2\") \n - \n \"6:00\" \n ), \n C2:C \n }, \n \"select Col1, avg(Col2) \n where Col1 is not null \n group by Col1 \n label Col1 'week #' \", \n 0 \n ) \n)\n\n", "I think you're trying to split the values and sum them. I can't understand fully what's the purpose of 19 in LEFT function, and why are you again splitting it? Maybe some approach similar to yours is use LEFT function with 10 characters for the date, and MID from 12th character to get the time. Then substract .25 for the 6 hours as you did, and ROUNDDOWN with 0 digits to get the only the day\n=ARRAYFORMULA(ROUNDDOWN(LEFT('Sheet1'!B:B,10)+MID('Sheet1'!B:B,12,8)-0.25,0))\n\n\nAnd then you can insert it in your SUMIF:\n=SUMIF(Sheet1!C:C,ARRAYFORMULA(ROUNDDOWN(LEFT(Sheet1!B:B,10)+MID(Sheet1!B:B,12,8)-0.25,0)),A1)\n\n", "Assume A1 has the value: 2022-12-02T02:09:37Z\nApply this formula:\n=LAMBDA(RAW,TUNEHOUR,\n LAMBDA(DATE,TIME,\n TEXT((DATE&\" \"&TIME)+TUNEHOUR/24,\"yyyy-mm-dd hh:mm:ss\")\n )(TEXT(INDEX(RAW,,1),\"yyyy-mm-dd\"),REGEXREPLACE(INDEX(RAW,,2),\"Z\",\"\"))\n)(SPLIT(A1,\"T\"),-6)\n\nreturns:\n2022-12-01 20:09:37\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "google_sheets" ]
stackoverflow_0074668311_google_sheets.txt
Q: How to get only the initial NaN values and leading non NaN values from a pandas dataframe? I have a dataframe where the rows contain NaN values. The df contains original columns namely Heading 1 Heading 2 and Heading 3 and extra columns called Unnamed: 1 Unnamed: 2 and Unnamed: 3 as shown: Heading 1 Heading 2 Heading 3 Unnamed: 1 Unnamed: 2 Unnamed: 3 NaN 34 24 45 NaN NaN NaN NaN 24 45 11 NaN NaN NaN NaN 45 45 33 4 NaN 24 NaN NaN NaN NaN NaN 4 NaN NaN NaN NaN 34 24 NaN NaN NaN 22 34 24 NaN NaN NaN NaN 34 NaN 45 NaN NaN I want to iterate through each row and find out the amount of initial NaN values in original columns (Heading 1 Heading 2 and Heading 3) and the amount of non NaN values in the extra columns (Unnamed: 1 Unnamed: 2 and Unnamed: 3). For each and every row this should be calculated and returned in a dictionary where the key is the index of the row and the value for that key is a list containing the amount of initial NaN values in original columns (Heading 1 Heading 2 and Heading 3) and the second element of the list would the amount of non NaN values in the extra columns (Unnamed: 1 Unnamed: 2 and Unnamed: 3). So the result for the above dataframe would be: {0 : [1, 1], 1 : [2, 2], 2 : [3, 3], 3 : [0, 0], 4 : [2, 0], 5 : [1, 0], 6 : [0, 0], 7 : [1, 1]} Notice how in row 3 and row 7 the original columns contain 1 and 2 NaN respectively but only the initial NaN's are counted and not the in between ones! UPDATE / RESULTS: Both @mozaway and @Panda Kim gave the correct solution for the current dataframe but @mozway solution does not work at all for another test dataframe. @Panda Kim gave 2 solutions but both the methods he gave (cumsum() and x.first_valid_index()) are giving slightly different results for the different dataframe. Heading 1 Heading 2 Heading 3 Unnamed: 1 Unnamed: 2 Unnamed: 3 Unnamed: 4 NaN 34 24 45 NaN NaN NaN NaN NaN 24 45 11 NaN NaN NaN NaN NaN 45 45 33 NaN 4 NaN 24 NaN NaN NaN NaN NaN NaN 4 NaN NaN NaN NaN NaN 34 24 NaN NaN NaN NaN 22 34 24 NaN NaN NaN NaN NaN 34 NaN 45 NaN NaN NaN NaN NaN NaN NaN 12 22 45 NaN NaN NaN NaN NaN 11 69 NaN NaN NaN NaN 12 NaN 45 NaN NaN NaN NaN NaN NaN 45 NaN NaN NaN NaN NaN 44 NaN For the above df here are the results: @Panda KIM (first_valid_index()) {0: [1, 1], 1: [2, 2], 2: [3, 3], 3: [0, 0], 4: [2, 0], 5: [1, 0], 6: [0, 0], 7: [1, 1], 8: [3, 3], 9: [3, 2], 10: [3, 2], 11: [3, 1], 12: [3, 1]} @Panda Kim (cumsum()) {0: [1, 1], 1: [2, 2], 2: [3, 3], 3: [0, 0], 4: [2, 0], 5: [1, 0], 6: [0, 0], 7: [1, 1], 8: [4, 3], 9: [5, 2], 10: [4, 2], 11: [6, 1], 12: [5, 1]} @mozway solution {0: [1, 1], 1: [2, 2], 2: [3, 3], 3: [0, 0], 4: [2, 0], 5: [1, 0], 6: [0, 0], 7: [1, 1], 8: [3, 0], 9: [3, 0], 10: [3, 0], 11: [3, 0], 12: [3, 0]} A: First divide dataframe (iloc or filter or and so on) df1 = df.iloc[:, :3] df2 = df.iloc[:, 3:] Second count initial NaNs in df1 and count notnull in df2 s1 = df1.apply(lambda x: (x.notnull().cumsum() == 0).sum(), axis=1) s2 = df2.notnull().sum(axis=1) Last concat and make dict pd.concat([s1, s2], axis=1).T.to_dict('list') result: {0: [1, 1], 1: [2, 2], 2: [3, 3], 3: [0, 0], 4: [2, 0], 5: [1, 0], 6: [0, 0], 7: [1, 1]} Update data = [[None, 34.0, 24.0, 45.0, None, None, None], [None, None, 24.0, 45.0, 11.0, None, None], [None, None, None, 45.0, 45.0, 33.0, None], [4.0, None, 24.0, None, None, None, None], [None, None, 4.0, None, None, None, None], [None, 34.0, 24.0, None, None, None, None], [22.0, 34.0, 24.0, None, None, None, None], [None, 34.0, None, 45.0, None, None, None], [None, None, None, None, 12.0, 
22.0, 45.0], [None, None, None, None, None, 11.0, 69.0], [None, None, None, None, 12.0, None, 45.0], [None, None, None, None, None, None, 45.0], [None, None, None, None, None, 44.0, None]] col = ['Heading 1', 'Heading 2', 'Heading 3', 'Unnamed: 1', 'Unnamed: 2', 'Unnamed: 3', 'Unnamed: 4'] df = pd.DataFrame(data, columns=col) df1 = df.iloc[:, :3] df2 = df.iloc[:, 3:] s1 = df1.apply(lambda x: (x.notnull().cumsum() == 0).sum(), axis=1) s2 = df2.notnull().sum(axis=1) pd.concat([s1, s2], axis=1).T.to_dict('list') result: {0: [1, 1], 1: [2, 2], 2: [3, 3], 3: [0, 0], 4: [2, 0], 5: [1, 0], 6: [0, 0], 7: [1, 1], 8: [3, 3], 9: [3, 2], 10: [3, 2], 11: [3, 1], 12: [3, 1]} Anyone can know that this is different from questioner's result (@Panda Kim (cumsum())). Of course, if function is not applied to df1, the result is different. Let's apply cumsum code to df instead of df1 for wrong result: df2 = df.iloc[:, 3:] s1 = df.apply(lambda x: (x.notnull().cumsum() == 0).sum(), axis=1) # apply cumsum to df instead df1 s2 = df2.notnull().sum(axis=1) pd.concat([s1, s2], axis=1).T.to_dict('list') wrong result(same to questioner's result that he think result of my code) {0: [1, 1], 1: [2, 2], 2: [3, 3], 3: [0, 0], 4: [2, 0], 5: [1, 0], 6: [0, 0], 7: [1, 1], 8: [4, 3], 9: [5, 2], 10: [4, 2], 11: [6, 1], 12: [5, 1]} It is common for the person to apply and get different results, but that should be checked by the person himself before endless question. A: You can use: m = df.columns.str.startswith('Unnamed') out = (df .groupby(m, axis=1) .apply(lambda g: (g.notna() if g.name else g.isna()) .cummin(axis=1).sum(axis=1) ) .set_axis(['named', 'unnamed'], axis=1) ) Output: named unnamed 0 1 1 1 2 2 2 3 3 3 0 0 4 2 0 5 1 0 6 0 0 7 1 1 as dictionary out.T.to_dict('list') Output: {0: [1, 1], 1: [2, 2], 2: [3, 3], 3: [0, 0], 4: [2, 0], 5: [1, 0], 6: [0, 0], 7: [1, 1]}
How to get only the initial NaN values and leading non NaN values from a pandas dataframe?
I have a dataframe where the rows contain NaN values. The df contains original columns namely Heading 1 Heading 2 and Heading 3 and extra columns called Unnamed: 1 Unnamed: 2 and Unnamed: 3 as shown: Heading 1 Heading 2 Heading 3 Unnamed: 1 Unnamed: 2 Unnamed: 3 NaN 34 24 45 NaN NaN NaN NaN 24 45 11 NaN NaN NaN NaN 45 45 33 4 NaN 24 NaN NaN NaN NaN NaN 4 NaN NaN NaN NaN 34 24 NaN NaN NaN 22 34 24 NaN NaN NaN NaN 34 NaN 45 NaN NaN I want to iterate through each row and find out the amount of initial NaN values in original columns (Heading 1 Heading 2 and Heading 3) and the amount of non NaN values in the extra columns (Unnamed: 1 Unnamed: 2 and Unnamed: 3). For each and every row this should be calculated and returned in a dictionary where the key is the index of the row and the value for that key is a list containing the amount of initial NaN values in original columns (Heading 1 Heading 2 and Heading 3) and the second element of the list would the amount of non NaN values in the extra columns (Unnamed: 1 Unnamed: 2 and Unnamed: 3). So the result for the above dataframe would be: {0 : [1, 1], 1 : [2, 2], 2 : [3, 3], 3 : [0, 0], 4 : [2, 0], 5 : [1, 0], 6 : [0, 0], 7 : [1, 1]} Notice how in row 3 and row 7 the original columns contain 1 and 2 NaN respectively but only the initial NaN's are counted and not the in between ones! UPDATE / RESULTS: Both @mozaway and @Panda Kim gave the correct solution for the current dataframe but @mozway solution does not work at all for another test dataframe. @Panda Kim gave 2 solutions but both the methods he gave (cumsum() and x.first_valid_index()) are giving slightly different results for the different dataframe. Heading 1 Heading 2 Heading 3 Unnamed: 1 Unnamed: 2 Unnamed: 3 Unnamed: 4 NaN 34 24 45 NaN NaN NaN NaN NaN 24 45 11 NaN NaN NaN NaN NaN 45 45 33 NaN 4 NaN 24 NaN NaN NaN NaN NaN NaN 4 NaN NaN NaN NaN NaN 34 24 NaN NaN NaN NaN 22 34 24 NaN NaN NaN NaN NaN 34 NaN 45 NaN NaN NaN NaN NaN NaN NaN 12 22 45 NaN NaN NaN NaN NaN 11 69 NaN NaN NaN NaN 12 NaN 45 NaN NaN NaN NaN NaN NaN 45 NaN NaN NaN NaN NaN 44 NaN For the above df here are the results: @Panda KIM (first_valid_index()) {0: [1, 1], 1: [2, 2], 2: [3, 3], 3: [0, 0], 4: [2, 0], 5: [1, 0], 6: [0, 0], 7: [1, 1], 8: [3, 3], 9: [3, 2], 10: [3, 2], 11: [3, 1], 12: [3, 1]} @Panda Kim (cumsum()) {0: [1, 1], 1: [2, 2], 2: [3, 3], 3: [0, 0], 4: [2, 0], 5: [1, 0], 6: [0, 0], 7: [1, 1], 8: [4, 3], 9: [5, 2], 10: [4, 2], 11: [6, 1], 12: [5, 1]} @mozway solution {0: [1, 1], 1: [2, 2], 2: [3, 3], 3: [0, 0], 4: [2, 0], 5: [1, 0], 6: [0, 0], 7: [1, 1], 8: [3, 0], 9: [3, 0], 10: [3, 0], 11: [3, 0], 12: [3, 0]}
[ "First\ndivide dataframe (iloc or filter or and so on)\ndf1 = df.iloc[:, :3]\ndf2 = df.iloc[:, 3:]\n\nSecond\ncount initial NaNs in df1 and count notnull in df2\ns1 = df1.apply(lambda x: (x.notnull().cumsum() == 0).sum(), axis=1)\ns2 = df2.notnull().sum(axis=1)\n\nLast\nconcat and make dict\npd.concat([s1, s2], axis=1).T.to_dict('list')\n\nresult:\n{0: [1, 1],\n 1: [2, 2],\n 2: [3, 3],\n 3: [0, 0],\n 4: [2, 0],\n 5: [1, 0],\n 6: [0, 0],\n 7: [1, 1]}\n\n\nUpdate\ndata = [[None, 34.0, 24.0, 45.0, None, None, None],\n [None, None, 24.0, 45.0, 11.0, None, None],\n [None, None, None, 45.0, 45.0, 33.0, None],\n [4.0, None, 24.0, None, None, None, None],\n [None, None, 4.0, None, None, None, None],\n [None, 34.0, 24.0, None, None, None, None],\n [22.0, 34.0, 24.0, None, None, None, None],\n [None, 34.0, None, 45.0, None, None, None],\n [None, None, None, None, 12.0, 22.0, 45.0],\n [None, None, None, None, None, 11.0, 69.0],\n [None, None, None, None, 12.0, None, 45.0],\n [None, None, None, None, None, None, 45.0],\n [None, None, None, None, None, 44.0, None]]\ncol = ['Heading 1', 'Heading 2', 'Heading 3', 'Unnamed: 1', 'Unnamed: 2', 'Unnamed: 3', 'Unnamed: 4']\ndf = pd.DataFrame(data, columns=col)\n\n\ndf1 = df.iloc[:, :3]\ndf2 = df.iloc[:, 3:]\ns1 = df1.apply(lambda x: (x.notnull().cumsum() == 0).sum(), axis=1)\ns2 = df2.notnull().sum(axis=1)\npd.concat([s1, s2], axis=1).T.to_dict('list')\n\nresult:\n{0: [1, 1],\n 1: [2, 2],\n 2: [3, 3],\n 3: [0, 0],\n 4: [2, 0],\n 5: [1, 0],\n 6: [0, 0],\n 7: [1, 1],\n 8: [3, 3],\n 9: [3, 2],\n 10: [3, 2],\n 11: [3, 1],\n 12: [3, 1]}\n\nAnyone can know that this is different from questioner's result (@Panda Kim (cumsum())).\n\nOf course, if function is not applied to df1, the result is different.\nLet's apply cumsum code to df instead of df1 for wrong result:\ndf2 = df.iloc[:, 3:]\ns1 = df.apply(lambda x: (x.notnull().cumsum() == 0).sum(), axis=1) # apply cumsum to df instead df1\ns2 = df2.notnull().sum(axis=1)\npd.concat([s1, s2], axis=1).T.to_dict('list')\n\nwrong result(same to questioner's result that he think result of my code)\n{0: [1, 1],\n 1: [2, 2],\n 2: [3, 3],\n 3: [0, 0],\n 4: [2, 0],\n 5: [1, 0],\n 6: [0, 0],\n 7: [1, 1],\n 8: [4, 3],\n 9: [5, 2],\n 10: [4, 2],\n 11: [6, 1],\n 12: [5, 1]}\n\nIt is common for the person to apply and get different results, but that should be checked by the person himself before endless question.\n", "You can use:\nm = df.columns.str.startswith('Unnamed')\n\nout = (df\n .groupby(m, axis=1)\n .apply(lambda g: (g.notna() if g.name else g.isna())\n .cummin(axis=1).sum(axis=1)\n )\n .set_axis(['named', 'unnamed'], axis=1)\n )\n\nOutput:\n named unnamed\n0 1 1\n1 2 2\n2 3 3\n3 0 0\n4 2 0\n5 1 0\n6 0 0\n7 1 1\n\nas dictionary\nout.T.to_dict('list')\n\nOutput:\n{0: [1, 1],\n 1: [2, 2],\n 2: [3, 3],\n 3: [0, 0],\n 4: [2, 0],\n 5: [1, 0],\n 6: [0, 0],\n 7: [1, 1]}\n\n" ]
[ 1, 1 ]
[]
[]
[ "data_preprocessing", "dataframe", "nan", "pandas", "python" ]
stackoverflow_0074673249_data_preprocessing_dataframe_nan_pandas_python.txt
Q: HTML stop marquee tag I wanted to know if there is a way of stopping the flow of images in marquee tag of html using only HTML/CSS. <marquee><a href="#" style="font-size:24px"><img src="hp.jpg" width="134" height="202" style="float:left;padding-right:10px;" alt=""></a></marquee> Several linked images like these are in between the marquee tags and I wanted to stop their flow preferably on mouse hover. If you think it's possible please tell me the solution. Thanks in advance! A: Here you go Fiddle <marquee behavior="scroll" direction="left" onmouseover="this.stop();" onmouseout="this.start();"><img src="hp.jpg" width="134" height="202" style="float:left;padding-right:10px;" alt="llsdasdada"></marquee> A: Try this:- <marquee onmouseover="this.setAttribute('scrollamount', 0, 0);" onmouseout="this.setAttribute('scrollamount', 6, 0);"> your text here </marquee>
HTML stop marquee tag
I wanted to know if there is a way of stopping the flow of images in marquee tag of html using only HTML/CSS. <marquee><a href="#" style="font-size:24px"><img src="hp.jpg" width="134" height="202" style="float:left;padding-right:10px;" alt=""></a></marquee> Several linked images like these are in between the marquee tags and I wanted to stop their flow preferably on mouse hover. If you think it's possible please tell me the solution. Thanks in advance!
[ "Here you go \n\nFiddle\n\n<marquee behavior=\"scroll\" direction=\"left\" onmouseover=\"this.stop();\" onmouseout=\"this.start();\"><img src=\"hp.jpg\" width=\"134\" height=\"202\" style=\"float:left;padding-right:10px;\" alt=\"llsdasdada\"></marquee>\n\n", "Try this:-\n<marquee onmouseover=\"this.setAttribute('scrollamount', 0, 0);\" onmouseout=\"this.setAttribute('scrollamount', 6, 0);\">\nyour text here\n</marquee>\n\n" ]
[ 1, 0 ]
[ "try this!\nPress Button\n\n\nFirstly i have used marquee tag and then used buttons to start\nand stop the flow.\n" ]
[ -2 ]
[ "css", "html", "marquee" ]
stackoverflow_0040409172_css_html_marquee.txt
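Both answers above rely on inline JavaScript event handlers, while the question asks for an HTML/CSS-only approach; here is a minimal sketch of a CSS alternative that pauses on hover. The class names, animation name and duration are illustrative assumptions, and note that the marquee element itself is deprecated.

<style>
  .marquee { overflow: hidden; white-space: nowrap; }
  .marquee .track { display: inline-block; animation: slide 10s linear infinite; }
  .marquee:hover .track { animation-play-state: paused; } /* stop the flow on hover */
  @keyframes slide {
    from { transform: translateX(100%); }
    to   { transform: translateX(-100%); }
  }
</style>
<div class="marquee">
  <div class="track">
    <a href="#"><img src="hp.jpg" width="134" height="202" alt=""></a>
  </div>
</div>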
Q: React Router issue with Wildcard Problem In the code below, I have a basic Router with several routes ending with a catch-all wildcard. All my routes work in development, but when I host my website on a remote server, my routes will work as I navigate the site via links and break if I refresh or type in a URL to a sub-page. For example, clicking a link that takes me to /about works fine, but if I go to www.mywebsite.com/about via my browser navigation bar, I experience a 404 error. Question Why is it that navigating to a sub-page via browser link, or refreshing a sub-page, results in a 404 error? Version I am using React Router v6.4.4. import { BrowserRouter, Outlet, Route, Routes } from 'react-router-dom'; import './App.css'; import Navbar from './components/Navbar'; import About from './pages/about'; import Contact from './pages/contact'; import Home from './pages/home'; import LeafletMap from './pages/map'; import Property from './pages/property'; const App = () => { const HeaderLayout = () => ( <> <header> <Navbar /> </header> <Outlet /> </> ); return ( <BrowserRouter> <Routes> <Route element={<HeaderLayout />}> <Route path='/' element={<Home />} /> <Route path='about' element={<About />} /> <Route path='contact' element={<Contact />} /> <Route path='property/:propertyId' element={<Property />} /> <Route path='map' element={<LeafletMap />} /> <Route path='*' element={<Home />} /> </Route> </Routes> </BrowserRouter> ); }; export default App; A: possibly that the server you are hosting your website on is not configured to handle client-side routing, which is what React Router is doing. When you navigate to a sub-page via a link, the browser sends a request to the server for that sub-page, but the server is not able to serve it because it does not have a corresponding route defined. When you refresh a sub-page, the browser makes a request to the server for the current URL, which again does not have a corresponding route defined and results in a 404 error. You can fix this by using the component's basename prop and specify the base URL for your app. This will make sure that the server is able to find the correct route for the request. <BrowserRouter basename="/my-app"> ... </BrowserRouter> Another option is to use a server-side router, such as , which is designed to handle routing on the server. This can be used in conjunction with server-side rendering to make sure that the server always returns the correct content for a given URL. import { StaticRouter } from 'react-router-dom'; const App = () => { return ( <StaticRouter> <Routes> ... </Routes> </StaticRouter> ); }; Check React Router documentation again.
React Router issue with Wildcard
Problem In the code below, I have a basic Router with several routes ending with a catch-all wildcard. All my routes work in development, but when I host my website on a remote server, my routes will work as I navigate the site via links and break if I refresh or type in a URL to a sub-page. For example, clicking a link that takes me to /about works fine, but if I go to www.mywebsite.com/about via my browser navigation bar, I experience a 404 error. Question Why is it that navigating to a sub-page via browser link, or refreshing a sub-page, results in a 404 error? Version I am using React Router v6.4.4. import { BrowserRouter, Outlet, Route, Routes } from 'react-router-dom'; import './App.css'; import Navbar from './components/Navbar'; import About from './pages/about'; import Contact from './pages/contact'; import Home from './pages/home'; import LeafletMap from './pages/map'; import Property from './pages/property'; const App = () => { const HeaderLayout = () => ( <> <header> <Navbar /> </header> <Outlet /> </> ); return ( <BrowserRouter> <Routes> <Route element={<HeaderLayout />}> <Route path='/' element={<Home />} /> <Route path='about' element={<About />} /> <Route path='contact' element={<Contact />} /> <Route path='property/:propertyId' element={<Property />} /> <Route path='map' element={<LeafletMap />} /> <Route path='*' element={<Home />} /> </Route> </Routes> </BrowserRouter> ); }; export default App;
[ "possibly that the server you are hosting your website on is not configured to handle client-side routing, which is what React Router is doing.\nWhen you navigate to a sub-page via a link, the browser sends a request to the server for that sub-page, but the server is not able to serve it because it does not have a corresponding route defined.\nWhen you refresh a sub-page, the browser makes a request to the server for the current URL, which again does not have a corresponding route defined and results in a 404 error.\nYou can fix this by using the component's basename prop and specify the base URL for your app. This will make sure that the server is able to find the correct route for the request.\n<BrowserRouter basename=\"/my-app\">\n ...\n</BrowserRouter>\n\nAnother option is to use a server-side router, such as , which is designed to handle routing on the server.\nThis can be used in conjunction with server-side rendering to make sure that the server always returns the correct content for a given URL.\nimport { StaticRouter } from 'react-router-dom';\n\nconst App = () => {\n return (\n <StaticRouter>\n <Routes>\n ...\n </Routes>\n </StaticRouter>\n );\n};\n\nCheck React Router documentation again.\n" ]
[ 0 ]
[]
[]
[ "react_router", "react_router_dom", "reactjs" ]
stackoverflow_0074673268_react_router_react_router_dom_reactjs.txt
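The answer above points at server configuration as the cause; as one concrete illustration, here is a minimal SPA-fallback sketch assuming the site is served by nginx (other hosts such as Apache, Netlify or IIS have equivalent settings). The paths are placeholders.

server {
  listen 80;
  root /var/www/my-app/build;   # wherever the production build is deployed
  index index.html;

  location / {
    # serve the requested file if it exists, otherwise hand the request to index.html
    # so React Router can resolve /about, /map, /property/:id etc. on the client
    try_files $uri $uri/ /index.html;
  }
}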
Q: I get the error "Reload already in progress, ignoring request" After receiving this error, the screen stops working ("Syncing files to Device Edge ..."). What could be the reason for this problem? I can't understand why it is happening; after this error my widget renders before the async function finishes. A: Assuming "Syncing files to Device Edge ..." reflects an async operation, the program waits for it to finish. Since you triggered a hot reload before it completed, it cannot respond. It is better to stop the program and re-run it, or do a full restart instead of a hot reload.
I get the error "Reload already in progress, ignoring request"
After receiving this error, the screen stops working ("Syncing files to Device Edge ..."). What could be the reason for this problem? I can't understand why it is happening; after this error my widget renders before the async function finishes.
[ "Assuming \"Syncing files to Device Edge ...\" reflects an async operation, the program waits for it to finish. Since you triggered a hot reload before it completed, it cannot respond. It is better to stop the program and re-run it, or do a full restart instead of a hot reload.\n" ]
[ 0 ]
[]
[]
[ "async_await", "flutter", "flutter_web" ]
stackoverflow_0074673191_async_await_flutter_flutter_web.txt
Q: Is there a way to get detailed logs on LUIS classifications? I'm building chatbots with the Microsoft Bot Framework (and the Composer). To help troubleshoot problems with my bot, or identify issues, it would be helpful if I could see detailed information on LUIS's classification of user intents. I have used other bot frameworks that have a way to see, for example, intent classification confidence. This information would be extremely useful to identify times when the bot is more likely to have screwed up in its responses. A: You can use LUIS API to get predictions from your LUIS APP rather than using the default composer mechanism of getting predictions from LUIS. Make a LUIS API call with your query. LUIS returned the predictions. Store the LUIS response in application insights (or any logging application you are using) Periodically, you can see the logs, which will give you insights into LUIS prediction for each query. If this answers your question, please accept this answer.
Is there a way to get detailed logs on LUIS classifications?
I'm building chatbots with the Microsoft Bot Framework (and the Composer). To help troubleshoot problems with my bot, or identify issues, it would be helpful if I could see detailed information on LUIS's classification of user intents. I have used other bot frameworks that have a way to see, for example, intent classification confidence. This information would be extremely useful to identify times when the bot is more likely to have screwed up in its responses.
[ "You can use the LUIS API to get predictions from your LUIS app rather than using the default Composer mechanism of getting predictions from LUIS.\n\nMake a LUIS API call with your query.\nLUIS returns the predictions.\nStore the LUIS response in Application Insights (or any logging application you are using).\nPeriodically, review the logs, which will give you insight into the LUIS prediction for each query.\n\nIf this answers your question, please accept this answer.\n" ]
[ 0 ]
[]
[]
[ "azure_language_understanding", "bot_framework_composer", "botframework", "direct_line_botframework" ]
stackoverflow_0074538272_azure_language_understanding_bot_framework_composer_botframework_direct_line_botframework.txt
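To make the steps in the answer above concrete, here is a minimal Node.js sketch. The resource name, app id, keys and the exact endpoint shape are assumptions you would replace with your own values (it targets the LUIS v3 prediction REST API and the applicationinsights npm package), and Composer itself is not involved in this snippet.

const axios = require("axios");
const appInsights = require("applicationinsights");

appInsights.setup("<instrumentation-key>").start(); // placeholder key
const telemetry = appInsights.defaultClient;

async function logLuisPrediction(utterance) {
  // Assumed v3 prediction endpoint; substitute your LUIS resource, app id and slot.
  const url = "https://<your-resource>.cognitiveservices.azure.com/luis/prediction/v3.0/apps/<app-id>/slots/production/predict";
  const { data } = await axios.get(url, {
    params: {
      "subscription-key": "<prediction-key>",
      query: utterance,
      "show-all-intents": true, // ask for the confidence score of every intent
    },
  });

  // Persist the full prediction (top intent plus per-intent scores) for later review.
  telemetry.trackEvent({
    name: "LuisPrediction",
    properties: { utterance, prediction: JSON.stringify(data.prediction) },
  });
  return data.prediction;
}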
Q: Confirmation before closing the browser tab or warn the user on unsaved changes using v6 (migration from v5 to v6)? all I am currently in a process of migrating from v5 to v6. I have a use case where in I want to show a confirmation before the user is trying to close a browser tab or warn the user about unsaved changes. When a user is in the process of filling out an application form and decides to drop out I want to warn him that the stored data will be lost, and for an existing user if he has already signed up and lands on the dashboard and wants to close the tab or wants to go back to a previous route which in this case is signup I want to prompt him to either logout or continue. Previously I was using CRA and react-router dom v5 and the following is the code that I used to achieve the above results:- import React, { useEffect, useState } from "react"; import { Prompt } from "react-router-dom"; const useUnsavedUsageWarning = ( message = "Are you sure you want to discard changes?" ) => { const [isDirty, setDirty] = useState(false); useEffect(() => { // Detecting browser closing window.onbeforeunload = isDirty && (() => message); return () => { window.onbeforeunload = null; }; }, [isDirty]); const routerPrompt = <Prompt when={isDirty} message={message} />; return [routerPrompt, () => setDirty(true), () => setDirty(false)]; }; export default useUnsavedUsageWarning; Any help or suggestion is appreciated. Thank you. A: the <Prompt component from React Router v5 has been removed in v6. However, you can achieve the same functionality by using the history.block method from the history package, which is a dependency of React Router. The history.block method lets you register a callback that will be invoked whenever the user attempts to navigate away from the current page, giving you an opportunity to show a confirmation prompt. Here is an example of how you can use the history.block method to implement a hook that shows a confirmation prompt when the user attempts to navigate away from a dirty form: import { useHistory } from 'react-router-dom'; const useUnsavedChangesWarning = ( message = 'Are you sure you want to discard changes?' ) => { const history = useHistory(); const [isDirty, setDirty] = useState(false); useEffect(() => { // Register a callback that will be invoked when the user // attempts to navigate away from the current page const unblock = history.block((location, action) => { // If the form is dirty, show a confirmation prompt if (isDirty) { return message; } }); // Clean up the callback when the component unmounts return () => { unblock(); }; }, [history, isDirty]); // Return a function that can be called to set the form as dirty return () => setDirty(true); }; then use this hook in your component like: const MyForm = () => { const setFormDirty = useUnsavedChangesWarning(); // Call the setFormDirty function whenever the form becomes dirty return ( <form onChange={setFormDirty}> ... </form> ); }; Note that the history.block method does not prevent the user from navigating away from the current page. It simply allows you to show a confirmation prompt before the navigation occurs. The user can still choose to ignore the prompt and proceed with the navigation. If you want to prevent the user from navigating away from the page until they have saved their changes, you can use the history.replace method to redirect the user back to the current page if they try to navigate away. 
You can also use the history.goBack method to take the user back to the previous page if they cancel the confirmation prompt.
Confirmation before closing the browser tab or warn the user on unsaved changes using v6 (migration from v5 to v6)?
all I am currently in a process of migrating from v5 to v6. I have a use case where in I want to show a confirmation before the user is trying to close a browser tab or warn the user about unsaved changes. When a user is in the process of filling out an application form and decides to drop out I want to warn him that the stored data will be lost, and for an existing user if he has already signed up and lands on the dashboard and wants to close the tab or wants to go back to a previous route which in this case is signup I want to prompt him to either logout or continue. Previously I was using CRA and react-router dom v5 and the following is the code that I used to achieve the above results:- import React, { useEffect, useState } from "react"; import { Prompt } from "react-router-dom"; const useUnsavedUsageWarning = ( message = "Are you sure you want to discard changes?" ) => { const [isDirty, setDirty] = useState(false); useEffect(() => { // Detecting browser closing window.onbeforeunload = isDirty && (() => message); return () => { window.onbeforeunload = null; }; }, [isDirty]); const routerPrompt = <Prompt when={isDirty} message={message} />; return [routerPrompt, () => setDirty(true), () => setDirty(false)]; }; export default useUnsavedUsageWarning; Any help or suggestion is appreciated. Thank you.
[ "the\n\n<Prompt\n\ncomponent from React Router v5 has been removed in v6. However, you can achieve the same functionality by using the history.block method from the history package, which is a dependency of React Router.\nThe history.block method lets you register a callback that will be invoked whenever the user attempts to navigate away from the current page, giving you an opportunity to show a confirmation prompt.\nHere is an example of how you can use the history.block method to implement a hook that shows a confirmation prompt when the user attempts to navigate away from a dirty form:\nimport { useHistory } from 'react-router-dom';\n\nconst useUnsavedChangesWarning = (\n message = 'Are you sure you want to discard changes?'\n) => {\n const history = useHistory();\n const [isDirty, setDirty] = useState(false);\n\n useEffect(() => {\n // Register a callback that will be invoked when the user\n // attempts to navigate away from the current page\n const unblock = history.block((location, action) => {\n // If the form is dirty, show a confirmation prompt\n if (isDirty) {\n return message;\n }\n });\n\n // Clean up the callback when the component unmounts\n return () => {\n unblock();\n };\n }, [history, isDirty]);\n\n // Return a function that can be called to set the form as dirty\n return () => setDirty(true);\n};\n\nthen use this hook in your component like:\nconst MyForm = () => {\n const setFormDirty = useUnsavedChangesWarning();\n\n // Call the setFormDirty function whenever the form becomes dirty\n return (\n <form onChange={setFormDirty}>\n ...\n </form>\n );\n};\n\nNote that the history.block method does not prevent the user from navigating away from the current page. It simply allows you to show a confirmation prompt before the navigation occurs. The user can still choose to ignore the prompt and proceed with the navigation. If you want to prevent the user from navigating away from the page until they have saved their changes, you can use the history.replace method to redirect the user back to the current page if they try to navigate away. You can also use the history.goBack method to take the user back to the previous page if they cancel the confirmation prompt.\n" ]
[ 1 ]
[]
[]
[ "javascript", "migration", "react_router_dom", "reactjs" ]
stackoverflow_0074673244_javascript_migration_react_router_dom_reactjs.txt
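Independent of the router version, the browser-tab-close half of the question is handled by the DOM beforeunload event rather than by React Router; here is a minimal sketch of that part (blocking in-app navigation still needs a router-level solution such as the one discussed above).

import { useEffect } from "react";

// Shows the browser's native "leave site?" confirmation while `isDirty` is true.
function useBeforeUnloadWarning(isDirty) {
  useEffect(() => {
    if (!isDirty) return;
    const handler = (event) => {
      event.preventDefault();
      event.returnValue = ""; // required by most browsers to trigger the prompt
    };
    window.addEventListener("beforeunload", handler);
    return () => window.removeEventListener("beforeunload", handler);
  }, [isDirty]);
}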
Q: Create Hive Partitioned Table How to create a table T1 with partition P1 and table T2's columns? create table T2(F1 int, F2 varchar(101), ..., FN date); create table T1 as select * from T2 partitioned by (P1 int); Error thrown: AnalysisException: Syntax error in line 1:undefined: ...2 as (select * from T1) partitioned by (P1 int) ^ Encountered: PARTITIONED Expected: LIMIT, ORDER, UNION CAUSED BY: Exception: Syntax error Knowing this would be cumbersome: create table T1 (F1 int, F2 varchar(101), ..., FN date) partitioned by (P1 int); How could I achieve T1? A: 1. Static Partitioning To perform this example, we have created a table “USER_DATA” with DATE_DT and COUNTRY as Partition columns. We will load data into “USER_DATA”. Create Table Syntax: CREATE [TEMPORARY] [EXTERNAL] TABLE [IF NOT EXISTS] [db_name.]table_name [(col_name data_type [column_constraint_specification] [COMMENT col_comment], [COMMENT table_comment] [PARTITIONED BY (col_name data_type [COMMENT col_comment], ...)]; Create Table Statement: CREATE TABLE USER_DATA (USER_ID INT ,USER_NAME STRING ,SITE_DATA STRING) PARTITIONED BY (DATE_DT STRING,COUNTRY STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' STORED AS TEXTFILE 2. Dynamic Partitioning To perform this example, we have created two tables “USER_DATA_DYN” and “USER_LOG_DATA”. Table “USER_DATA_DYN” will be a Partition table with column DATE_DT and COUNTRY as Partition column and table “USER_LOG_DATA” will be a non-Partition table. We will insert data in “USER_DATA_DYN” using the non-Partition table “USER_LOG_DATA”. Create Table Syntax: CREATE [TEMPORARY] [EXTERNAL] TABLE [IF NOT EXISTS] [db_name.]table_name [(col_name data_type [column_constraint_specification] [COMMENT col_comment], [COMMENT table_comment] [PARTITIONED BY (col_name data_type [COMMENT col_comment], ...)]; Create Table Statement: Let us create the table “USER_DATA_DYN”. CREATE TABLE USER_DATA_DYN (USER_ID INT ,USER_NAME STRING ,SITE_DATA STRING ) PARTITIONED BY (DATE_DT STRING,COUNTRY STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' STORED AS TEXTFILE; For More Details... https://www.cloudduggu.com/hive/partitioning/
Create Hive Partitioned Table
How to create a table T1 with partition P1 and table T2's columns? create table T2(F1 int, F2 varchar(101), ..., FN date); create table T1 as select * from T2 partitioned by (P1 int); Error thrown: AnalysisException: Syntax error in line 1:undefined: ...2 as (select * from T1) partitioned by (P1 int) ^ Encountered: PARTITIONED Expected: LIMIT, ORDER, UNION CAUSED BY: Exception: Syntax error Knowing this would be cumbersome: create table T1 (F1 int, F2 varchar(101), ..., FN date) partitioned by (P1 int); How could I achieve T1?
[ "1. Static Partitioning\nTo perform this example, we have created a table “USER_DATA” with DATE_DT and COUNTRY as Partition columns. We will load data into “USER_DATA”.\nCreate Table Syntax:\nCREATE [TEMPORARY] [EXTERNAL] TABLE [IF NOT EXISTS] [db_name.]table_name \n[(col_name data_type [column_constraint_specification] [COMMENT col_comment], \n[COMMENT table_comment] \n[PARTITIONED BY (col_name data_type [COMMENT col_comment], ...)];\n\nCreate Table Statement:\nCREATE TABLE USER_DATA (USER_ID INT \n,USER_NAME STRING \n,SITE_DATA STRING) \nPARTITIONED BY (DATE_DT STRING,COUNTRY STRING) \nROW FORMAT DELIMITED \nFIELDS TERMINATED BY '\\t' \nSTORED AS TEXTFILE \n\n2. Dynamic Partitioning\nTo perform this example, we have created two tables “USER_DATA_DYN” and “USER_LOG_DATA”. Table “USER_DATA_DYN” will be a Partition table with column DATE_DT and COUNTRY as Partition column and table “USER_LOG_DATA” will be a non-Partition table. We will insert data in “USER_DATA_DYN” using the non-Partition table “USER_LOG_DATA”.\nCreate Table Syntax:\nCREATE [TEMPORARY] [EXTERNAL] TABLE [IF NOT EXISTS] [db_name.]table_name \n[(col_name data_type [column_constraint_specification] [COMMENT col_comment], \n[COMMENT table_comment] \n[PARTITIONED BY (col_name data_type [COMMENT col_comment], ...)];\n\nCreate Table Statement:\nLet us create the table “USER_DATA_DYN”.\nCREATE TABLE USER_DATA_DYN (USER_ID INT \n,USER_NAME STRING \n,SITE_DATA STRING \n) \nPARTITIONED BY (DATE_DT STRING,COUNTRY STRING) \nROW FORMAT DELIMITED \nFIELDS TERMINATED BY '\\t' \nSTORED AS TEXTFILE;\n\nFor More Details... https://www.cloudduggu.com/hive/partitioning/\n" ]
[ 0 ]
[]
[]
[ "cloudera", "create_table", "hadoop", "hive" ]
stackoverflow_0049848140_cloudera_create_table_hadoop_hive.txt
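As a complement to the answer above, a common two-step pattern when CREATE TABLE ... AS SELECT with PARTITIONED BY is rejected is to declare the partitioned table first and then load it with dynamic partitioning. This sketch still lists T2's columns explicitly (abbreviated here as F1, F2, FN, which the question hoped to avoid) and assumes HiveQL; adjust accordingly if the engine is actually Impala.

-- 1) declare T1 with T2's columns plus the partition column
CREATE TABLE T1 (F1 INT, F2 VARCHAR(101), FN DATE)
PARTITIONED BY (P1 INT);

-- 2) load it with dynamic partitioning; the partition column must come last in the SELECT
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
INSERT OVERWRITE TABLE T1 PARTITION (P1)
SELECT F1, F2, FN, P1 FROM T2;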
Q: angular-typescript dynamic object creation Hi i am trying to get ID card details in a component and form an object "idCardDetails". In JS i was able to dynamicaly add key and values : let myObj = {}; myObj.name = "sid"; myObj.num= 4444; I cant create such a dynamic obj in Type script. I tried this in TS angular IdCardDetails: {name: string, empcode: number, bloodgroup: string}; IdCardDetails.name = this.idName; // error cannot set property for undefined Is this a error occurring with rules of Typescript or Angular? what will be solution to dynamicaly create an object? Is it necessary always go with class based object creation in angular? A: If you don't know all keys in advance then you can use TypeScript's Record utility in the following fashion where Keys are the object types and Type is the type you want the object should have Record<Keys, Type> So, you'd write your code as below, const idCardDetails: Record<string, number | string> = {}; idCardDetails.name = 'John'; idCardDetails.id = 10; You can read more about Record utility A: You have couple of options for the task you want. first if you know the fields create an interface export interface ITest { name:string, // this will make sure ITest must have name in string type. age?:number, // this will allow you not to enter age (can be undefined) [key:string]: any, // will allow you to put what ever key you would like (is more powerful but typescript will not always complete example below, } // notice I can avoid using age. const test:ITest = { name:'test' // I must provide some initial string } test.value = true; // will be accepted because of [key:string]
Angular/TypeScript dynamic object creation
Hi, I am trying to get ID card details in a component and form an object "idCardDetails". In JS I was able to dynamically add keys and values: let myObj = {}; myObj.name = "sid"; myObj.num = 4444; I can't create such a dynamic object in TypeScript. I tried this in TS/Angular: IdCardDetails: {name: string, empcode: number, bloodgroup: string}; IdCardDetails.name = this.idName; // error cannot set property for undefined Is this an error caused by the rules of TypeScript or Angular? What would be the solution to dynamically create an object? Is it always necessary to go with class-based object creation in Angular?
[ "If you don't know all keys in advance then you can use TypeScript's Record utility in the following fashion where Keys are the object types and Type is the type you want the object should have\nRecord<Keys, Type>\n\nSo, you'd write your code as below,\nconst idCardDetails: Record<string, number | string> = {};\n\nidCardDetails.name = 'John';\nidCardDetails.id = 10;\n\nYou can read more about Record utility\n", "You have couple of options for the task you want.\nfirst if you know the fields create an interface\nexport interface ITest {\nname:string, // this will make sure ITest must have name in string type.\nage?:number, // this will allow you not to enter age (can be undefined)\n[key:string]: any, // will allow you to put what ever key you would like (is more powerful but typescript will not always complete example below,\n}\n\n\n// notice I can avoid using age.\nconst test:ITest = {\nname:'test' // I must provide some initial string \n}\n\ntest.value = true; // will be accepted because of [key:string] \n\n" ]
[ 0, 0 ]
[]
[]
[ "angular", "javascript", "typescript" ]
stackoverflow_0074672802_angular_javascript_typescript.txt
Q: Difference between "DateTime?" and "DateTime". (trying to get value from datepicker) I'm quite new to WPF and C# so don't blame me for asking this maybe silly question. I have my WPF app with two datepickers. I want to get the DateTime out of them when it changes and to use it as my variable for some other stuff in the app. So I have for each of them something like this(method was automatically generated by VS): private void datePicker1_SelectedDateChanged(object sender, SelectionChangedEventArgs e) { date1 = datePicker1.SelectedDate; } but the problem is that the date in the datepicker is format DateTime? not DateTime and I really don't know what does that question mark there mean and why it is there. I tried some research but didn't find anything that would help me. If u see some better way of getting the date from that datepicker u can help me with it too. I just need it in my xaml.cs code not in xaml and I'm not really into using bindings cause I'm not sure if it works how I need in this case. Thanks for any answer. Edit: I would like to add information that it shows me this error: Cannot implicitly convert type 'System.DateTime?' to 'System.DateTime'. An explicit conversion exists (are you missing a cast?) A: DateTime with ? is Nullable DateTime, it can hold null values. For your case you can do : private void datePicker1_SelectedDateChanged(object sender, SelectionChangedEventArgs e) { if(datePicker1.SelectedDate.HasValue) date1 = datePicker1.SelectedDate.Value; } Nullable<T> C# In C# and Visual Basic, you mark a value type as nullable by using the ? notation after the value type. For example, int? in C# or Integer? in Visual Basic declares an integer value type that can be assigned null. A: A nullable DateTime can be null. The DateTime struct itself does not provide a null option. But the DateTime? nullable type allows you to assign the null literal to the DateTime type. It provides another level of indirection in the object model. Program that uses null DateTime struct: C# using System; class Program { static void Main() { // // Declare a nullable DateTime instance and assign to null. // ... Change the DateTime and use the Test method (below). // DateTime? value = null; Test(value); value = DateTime.Now; Test(value); value = DateTime.Now.AddDays(1); Test(value); // // You can use the GetValueOrDefault method on nulls. // value = null; Console.WriteLine(value.GetValueOrDefault()); } static void Test(DateTime? value) { // // This method uses the HasValue property. // ... If there is no value, the number zero is written. // if (value.HasValue) { Console.WriteLine(value.Value); } else { Console.WriteLine(0); } } } Output 0 9/29/2009 9:56:21 AM 9/30/2009 9:56:21 AM 1/1/0001 12:00:00 AM Originally found Here Good luck! A: You can't create a DateTime without any value (ie, null). It will always have a default value (DateTime.MinValue). A DateTime?, on the other hand, is a sort of wrapper around DateTime, which allows you to keep it undefined. This can be very useful, for instance if you want to let the user leave one of the date-fields blank (no date selected = Null). Remember that you'll need to use DateTime? as the parameter type of any methods that need to do work related to this, though. If you rely on other libraries, or have to pass this data to other components, etc, then you can sometimes face awkward situations. Typical challenge/decision: "I'm using DateTime? 
in component X, which talks to component Y, which uses DateTime - should I rewrite Y to handle DateTime?, or should I translate any DateTime?-object with a value of Null into a DateTime-object with a value of DateTime.MinValue to represent unselected/invalid dates? Or perhaps I should just let it throw an exception in Y...?" Just something to be aware of and think about when working with DateTime?.. A: A nullable DateTime can be null. The DateTime struct itself does not provide a null option. But the "DateTime?" nullable type allows you to assign the null literal to the DateTime type. It provides another level of indirection. A: //Created Date without empty or nullable public DateTime CreatedDate { get; set; } //with Nullable or empty public DateTime? UpdatedDate { get; set; } public DateTime? DeletedDate { get; set; } Result View: Click here
Difference between "DateTime?" and "DateTime". (trying to get value from datepicker)
I'm quite new to WPF and C# so don't blame me for asking this maybe silly question. I have my WPF app with two datepickers. I want to get the DateTime out of them when it changes and to use it as my variable for some other stuff in the app. So I have for each of them something like this(method was automatically generated by VS): private void datePicker1_SelectedDateChanged(object sender, SelectionChangedEventArgs e) { date1 = datePicker1.SelectedDate; } but the problem is that the date in the datepicker is format DateTime? not DateTime and I really don't know what does that question mark there mean and why it is there. I tried some research but didn't find anything that would help me. If u see some better way of getting the date from that datepicker u can help me with it too. I just need it in my xaml.cs code not in xaml and I'm not really into using bindings cause I'm not sure if it works how I need in this case. Thanks for any answer. Edit: I would like to add information that it shows me this error: Cannot implicitly convert type 'System.DateTime?' to 'System.DateTime'. An explicit conversion exists (are you missing a cast?)
[ "DateTime with ? is Nullable DateTime, it can hold null values. For your case you can do :\nprivate void datePicker1_SelectedDateChanged(object sender, SelectionChangedEventArgs e)\n{\n if(datePicker1.SelectedDate.HasValue)\n date1 = datePicker1.SelectedDate.Value;\n}\n\nNullable<T> C# \n\nIn C# and Visual Basic, you mark a value type as nullable by using the\n ? notation after the value type. For example, int? in C# or Integer?\n in Visual Basic declares an integer value type that can be assigned\n null.\n\n", "A nullable DateTime can be null. \nThe DateTime struct itself does not provide a null option. But the DateTime? nullable type allows you to assign the null literal to the DateTime type. It provides another level of indirection in the object model.\n\nProgram that uses null DateTime struct: C#\n\nusing System;\n\nclass Program\n{\n static void Main()\n {\n //\n // Declare a nullable DateTime instance and assign to null.\n // ... Change the DateTime and use the Test method (below).\n //\n DateTime? value = null;\n Test(value);\n value = DateTime.Now;\n Test(value);\n value = DateTime.Now.AddDays(1);\n Test(value);\n //\n // You can use the GetValueOrDefault method on nulls.\n //\n value = null;\n Console.WriteLine(value.GetValueOrDefault());\n }\n\n static void Test(DateTime? value)\n {\n //\n // This method uses the HasValue property.\n // ... If there is no value, the number zero is written.\n //\n if (value.HasValue)\n {\n Console.WriteLine(value.Value);\n }\n else\n {\n Console.WriteLine(0);\n }\n }\n}\n\nOutput\n0\n9/29/2009 9:56:21 AM\n9/30/2009 9:56:21 AM\n1/1/0001 12:00:00 AM\n\nOriginally found Here\nGood luck!\n", "You can't create a DateTime without any value (ie, null). It will always have a default value (DateTime.MinValue).\nA DateTime?, on the other hand, is a sort of wrapper around DateTime, which allows you to keep it undefined. This can be very useful, for instance if you want to let the user leave one of the date-fields blank (no date selected = Null). \nRemember that you'll need to use DateTime? as the parameter type of any methods that need to do work related to this, though. If you rely on other libraries, or have to pass this data to other components, etc, then you can sometimes face awkward situations. \nTypical challenge/decision: \n\"I'm using DateTime? in component X, which talks to component Y, which uses DateTime - should I rewrite Y to handle DateTime?, or should I translate any DateTime?-object with a value of Null into a DateTime-object with a value of DateTime.MinValue to represent unselected/invalid dates? Or perhaps I should just let it throw an exception in Y...?\"\nJust something to be aware of and think about when working with DateTime?.. \n", "A nullable DateTime can be null. The DateTime struct itself does not provide a null option. But the \"DateTime?\" nullable type allows you to assign the null literal to the DateTime type. It provides another level of indirection.\n", "//Created Date without empty or nullable\npublic DateTime CreatedDate { get; set; }\n\n//with Nullable or empty\npublic DateTime? UpdatedDate { get; set; }\npublic DateTime? DeletedDate { get; set; }\n\nResult View: Click here\n" ]
[ 14, 0, 0, 0, 0 ]
[]
[]
[ "c#", "datetime", "datetimepicker", "visual_studio_2010", "wpf" ]
stackoverflow_0016397228_c#_datetime_datetimepicker_visual_studio_2010_wpf.txt
Q: Using Oracle SQL-Function JSON_EXISTS in JPQL In my Oracle-database table mytable I have a column columnx with JSON-Arrays (VARCHAR2) and I would like to find all entries where the value valueX is inside that array. In native Oracle-SQL the following query is working very well: SELECT * FROM mytable t WHERE JSON_EXISTS(columnx, '$?(@ == "valueX")'); In my Spring Boot Application I write queries in JPQL, so I have to convert it. The following queries were unsuccessful: I found out that I have to use 'FUNCTION()' for specific SQL-Oracle-functions: @Query(value = "SELECT t FROM mytable t WHERE FUNCTION('JSON_EXISTS',t.columnx, '$?(@ == \"valueX\")')") That results in a JPQL-Parsing-Error: "QuerySyntaxException: unexpected AST node: function (JSON_EXISTS)" I found out that JPQL needs a real boolean-comparison, so I tried this: @Query(value = "SELECT t FROM mytable t WHERE FUNCTION('JSON_EXISTS',t.columnx, '$?(@ == \"valueX\")') = TRUE") Now the JPQL-Converter can parse it to native SQL successfully, but I got an Oracle-Error while executing the query: "ORA-00933: SQL command not properly ended." That's understandable since the parsed native ... WHERE JSON_EXISTS(columnx, '$?(@ == "valueX")') = 1 won't run either. What is the right way to solve this problem? Do you have any idea? A: you can try the below using native query. @Query(value = "SELECT * FROM mytable t WHERE JSON_EXISTS(columnx, '$?(@ == \"valueX\")')", nativeQuery = true) OR You can hide the JSON_EXISTS implementation inside a view in oracle and call the view in JPQL. create or replace view my_table_json_exists as SELECT * FROM mytable t WHERE JSON_EXISTS(t.columnx, '$?(@ == "valueX")'); @Query(value ="select t from my_table_json_exists t"); If you are only looking for valueX inside a json key then you can explore JSON_VALUE. A: In JPQL use MEMBER OF operator to check if a value is contained in a JSON array stored in a column. The JPQL query would look something like this: @Query("SELECT t FROM mytable t WHERE :valueX MEMBER OF t.columnx") List<Mytable> findByValueX(@Param("valueX") String valueX);
Using Oracle SQL-Function JSON_EXISTS in JPQL
In my Oracle-database table mytable I have a column columnx with JSON-Arrays (VARCHAR2) and I would like to find all entries where the value valueX is inside that array. In native Oracle-SQL the following query is working very well: SELECT * FROM mytable t WHERE JSON_EXISTS(columnx, '$?(@ == "valueX")'); In my Spring Boot Application I write queries in JPQL, so I have to convert it. The following queries were unsuccessful: I found out that I have to use 'FUNCTION()' for specific SQL-Oracle-functions: @Query(value = "SELECT t FROM mytable t WHERE FUNCTION('JSON_EXISTS',t.columnx, '$?(@ == \"valueX\")')") That results in a JPQL-Parsing-Error: "QuerySyntaxException: unexpected AST node: function (JSON_EXISTS)" I found out that JPQL needs a real boolean-comparison, so I tried this: @Query(value = "SELECT t FROM mytable t WHERE FUNCTION('JSON_EXISTS',t.columnx, '$?(@ == \"valueX\")') = TRUE") Now the JPQL-Converter can parse it to native SQL successfully, but I got an Oracle-Error while executing the query: "ORA-00933: SQL command not properly ended." That's understandable since the parsed native ... WHERE JSON_EXISTS(columnx, '$?(@ == "valueX")') = 1 won't run either. What is the right way to solve this problem? Do you have any idea?
[ "you can try the below using native query.\n@Query(value = \"SELECT * FROM mytable t WHERE \nJSON_EXISTS(columnx, '$?(@ == \\\"valueX\\\")')\", \nnativeQuery = true)\n\nOR You can hide the JSON_EXISTS implementation inside a view in oracle and call the view in JPQL.\n create or replace view my_table_json_exists as SELECT * FROM \n mytable t WHERE \n JSON_EXISTS(t.columnx, '$?(@ == \"valueX\")');\n\n @Query(value =\"select t from my_table_json_exists t\");\n\nIf you are only looking for valueX inside a json key then you can explore JSON_VALUE.\n", "In JPQL use MEMBER OF operator to check if a value is contained in a JSON array stored in a column. The JPQL query would look something like this:\n@Query(\"SELECT t FROM mytable t WHERE :valueX MEMBER OF t.columnx\")\nList<Mytable> findByValueX(@Param(\"valueX\") String valueX);\n\n" ]
[ 1, 0 ]
[]
[]
[ "jpa", "jpql", "oracle", "spring", "sql" ]
stackoverflow_0074610787_jpa_jpql_oracle_spring_sql.txt
Q: how can we find length of word in python without using len function? #No using of Len function a=len b=len(a) print(b) I want this without Len function how can we find length of word in python without using len function? A: Here is one way to find the length of a word in Python without using the len function: word = "hello" count = 0 for letter in word: count += 1 print(count) # this will print 5, the length of the word This works by iterating through each letter in the word and adding 1 to a counter variable for each letter. After the loop finishes, the counter variable will hold the length of the word. Another way to find the length of a word in Python without using the len function is to use the str.format method: word = "hello" # this will print 5, the length of the word print("{:d}".format(word.count(""))) In this example, the str.format method is used to print the number of empty strings in word. Since every character in the word is a non-empty string, the number of empty strings in word is equal to the length of the word.
How can we find the length of a word in Python without using the len function?
# No using of the len function a=len b=len(a) print(b) I want this without the len function. How can we find the length of a word in Python without using the len function?
[ "Here is one way to find the length of a word in Python without using the len function:\nword = \"hello\"\ncount = 0\n\nfor letter in word:\n count += 1\n\nprint(count) # this will print 5, the length of the word\n\nThis works by iterating through each letter in the word and adding 1 to a counter variable for each letter. After the loop finishes, the counter variable will hold the length of the word.\nAnother way to find the length of a word in Python without using the len function is to combine str.count with the str.format method:\nword = \"hello\"\n\n# this will print 5, the length of the word\nprint(\"{:d}\".format(word.count(\"\") - 1))\n\nIn this example, word.count(\"\") counts the empty string at every position in the word, including the position after the last character, so it returns the length plus one; subtracting 1 therefore gives the length of the word.\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074673350_python.txt
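For completeness, the counting loop in the answer above can also be written as a one-liner that iterates the string without calling len:

word = "hello"
print(sum(1 for _ in word))  # prints 5: adds 1 for every character in the string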
Q: How Can I send multiple API request from one function altogether I want to send Multiple API request to a API end point all together. just to test performance of the Api. what are the various ways to do it. I have a list of API endpoints of API which I want to send request. and want to know best, average and worst response time. using Axios request in JavaScript I am trying to send request but only able to do it one by one and in this approach not getting result I want as the request are asynchronous in nature. What I want to Do:- Api End Point list-> request to all end point in one go -> best, Worst, average response time. any approach to do it. A: You can but you cannot test API backend performance with this because you cannot send more than 6 AJAX requests at a time (unless you configure your browser to be able to execute more) So I would recommend considering using a dedicated load testing tool for performance testing of your API backend A: Since you're familiar to javascript, I recommend checking out K6: https://k6.io/ It's a dedicated performance test tool using JS libraries and fast to setup
How can I send multiple API requests from one function at the same time
I want to send multiple API requests to an API endpoint all at once, just to test the performance of the API. What are the various ways to do it? I have a list of API endpoints to which I want to send requests, and I want to know the best, average and worst response time. Using Axios in JavaScript I am able to send requests only one by one, and with this approach I am not getting the result I want because the requests are asynchronous in nature. What I want to do: API endpoint list -> request to all endpoints in one go -> best, worst, average response time. Any approach to do it?
[ "You can, but you cannot really test API backend performance this way, because a browser will not send more than about 6 concurrent AJAX requests to the same host (unless you configure it to allow more).\nSo I would recommend using a dedicated load testing tool for performance testing of your API backend.\n", "Since you're familiar with JavaScript, I recommend checking out K6: https://k6.io/\nIt's a dedicated performance testing tool that uses JS scripts and is fast to set up.\n" ]
[ 1, 0 ]
[]
[]
[ "api", "cypress", "javascript", "performance_testing", "web_api_testing" ]
stackoverflow_0074609927_api_cypress_javascript_performance_testing_web_api_testing.txt
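For the Axios part of the question, here is a minimal sketch that fires every request concurrently and reports best, worst and average response times. The endpoint URLs are placeholders, and the browser's per-host connection limit mentioned in the first answer still applies, so treat this as a smoke test rather than a real load test.

const axios = require("axios");

async function timeEndpoints(endpoints) {
  const results = await Promise.all(
    endpoints.map(async (url) => {
      const start = Date.now();
      try {
        await axios.get(url);
      } catch (err) {
        // failed calls still get a timing entry; inspect err separately if needed
      }
      return { url, ms: Date.now() - start };
    })
  );

  const times = results.map((r) => r.ms);
  return {
    perEndpoint: results,
    best: Math.min(...times),
    worst: Math.max(...times),
    average: times.reduce((a, b) => a + b, 0) / times.length,
  };
}

timeEndpoints(["https://example.com/api/a", "https://example.com/api/b"]).then(console.log);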
Q: How can i find all roots for a function using Newton Rapshon method in Scilab? I need to find all roots for this function f(x)=-2x^4+x^3-2x+3 in given interval [-1.5 ; 1.5] with accuracy being <0.0001. Then i have to plot the relative mistakes. So i got this code but it's not working. deff('y=f(x)','y=-2x^4+x^3-2x+3') a=input("Enter value of interval a:") b=input("Enter value of interval b:") n=input("Enter the number of iteration n:") x0=(a+b)/2 for i=1:n disp([i,x0]) x1=x0-f(x0)/z(x0) if abs(x1-x0)<0.00001 then disp("We get required accuracy") break; end x0=x1 end A: There are 2 pbs in your code; The definition of f function must be deff('y=f(x)','y=-2*x^4+x^3-2*x+3') You did not define the z function in your code. It should compute the derivative of f at x deff('y=z(x)','y=-8*x^3+3*x^2-2') A: I don't know of the existence of a function already developed in Scilab, which can be used to find the roots of equations, I believe so. In python we can use scipy.optimize.newton (Among others like bisect, toms748 etc). For comparison we have an implementation of a Newton-Rapshon algorithm. You can use the logic developed in this algorithm, and rewrite a scilab version. Python import numpy as np import matplotlib.pyplot as plt from scipy.optimize import newton, bisect, toms748 from scipy.misc import derivative # Function f(x), where x* in f(x = x*) = 0 def f(x): return -2*np.power(x,4)+np.power(x,3)-2*x+3 intervalo_x = np.linspace(-1.5,1.5,1000) tol = 1E-04 #x_chute_inicial = np.median(intervalo_x) x_chute_inicial = 10 N0 = 50 ######################### # scipy.optimize.newton # ######################### raiz = newton(func = f , x0 = x_chute_inicial, fprime = None, tol = 1.48e-08, maxiter = N0, fprime2 = None, x1 = None, rtol = 0.0, full_output = False, disp = True) print('Implementation of scipy.optimize.newton') print(f'raiz = {raiz} | f(raiz) = {f(raiz)}') print(f'\n') ############################ # Newton-Rapshon algorithm # ############################ erro_absoluto = np.zeros(N0) erro_relativo = np.zeros(N0) iteracoes = np.zeros(N0) i = 1 while i<=N0: x = x_chute_inicial - (f(x_chute_inicial)/derivative(f,x0=x_chute_inicial,dx=tol,n=1)) # Root erro_abs = np.abs(x-x_chute_inicial) # Absolute error if erro_abs<tol: print('Implementation of the Newton-Rapshon algorithm') print(f"x = {x}\n" f"Number of iterations: {i}\n" f"Absolute error: {erro_abs}\n" f"Relative error: {np.abs((x-x_chute_inicial)/x_chute_inicial)}\n" f"f({x}) = {f(x)}") break erro_relativo[i] = np.abs((x-x_chute_inicial)/x_chute_inicial) erro_absoluto[i] = np.abs(x-x_chute_inicial) iteracoes[i] = i i = i + 1 x_chute_inicial = x ########### # graphic # ########### plt.figure(figsize = (21,6)) plt.subplot(1,3,1) plt.plot(intervalo_x,f(intervalo_x),'b-') plt.title(r'f(x) = $-2x^{4}+x^{3}-2x+3$') plt.xlabel('x') plt.ylabel('f(x)') plt.grid(lw = 0.95,color = 'm',linestyle = '--') plt.subplot(1,3,2) plt.plot(iteracoes,erro_relativo,'y*') plt.title('relative error') plt.xlabel('number of iterations i') plt.ylabel('relative error') plt.grid(lw = 0.95,color = 'g',linestyle = '--') plt.subplot(1,3,3) plt.plot(iteracoes,erro_absoluto,'g^') plt.title('absolute error') plt.xlabel('number of iterations i') plt.ylabel('absolute error') plt.grid(lw = 0.95,color = 'y',linestyle = '--') The roots of this equation, contained in the closed interval [-1.5,1.5] are approximately x0 = -1.1693 and x1 = 1.0. Finally below we have the graphs of f(x) by x, absolute error by iterations and relative error by interactions. Figure generated
How can I find all roots of a function using the Newton-Raphson method in Scilab?
I need to find all roots of the function f(x)=-2x^4+x^3-2x+3 in the given interval [-1.5 ; 1.5] with an accuracy of <0.0001. Then I have to plot the relative errors. I wrote this code but it's not working. deff('y=f(x)','y=-2x^4+x^3-2x+3') a=input("Enter value of interval a:") b=input("Enter value of interval b:") n=input("Enter the number of iteration n:") x0=(a+b)/2 for i=1:n disp([i,x0]) x1=x0-f(x0)/z(x0) if abs(x1-x0)<0.00001 then disp("We get required accuracy") break; end x0=x1 end
[ "There are 2 pbs in your code;\n\nThe definition of f function must be\ndeff('y=f(x)','y=-2*x^4+x^3-2*x+3')\nYou did not define the z function in your code. It should compute the derivative of f at x\ndeff('y=z(x)','y=-8*x^3+3*x^2-2')\n\n", "I don't know of the existence of a function already developed in Scilab, which can be used to find the roots of equations, I believe so. In python we can use scipy.optimize.newton (Among others like bisect, toms748 etc). For comparison we have an implementation of a Newton-Rapshon algorithm. You can use the logic developed in this algorithm, and rewrite a scilab version.\nPython\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.optimize import newton, bisect, toms748\nfrom scipy.misc import derivative\n\n# Function f(x), where x* in f(x = x*) = 0\ndef f(x):\n return -2*np.power(x,4)+np.power(x,3)-2*x+3\n\nintervalo_x = np.linspace(-1.5,1.5,1000)\ntol = 1E-04\n#x_chute_inicial = np.median(intervalo_x)\nx_chute_inicial = 10\nN0 = 50\n\n#########################\n# scipy.optimize.newton #\n#########################\nraiz = newton(func = f , x0 = x_chute_inicial, fprime = None, tol = 1.48e-08, maxiter = N0, fprime2 = None, x1 = None, rtol = 0.0, \n full_output = False, disp = True)\nprint('Implementation of scipy.optimize.newton')\nprint(f'raiz = {raiz} | f(raiz) = {f(raiz)}')\nprint(f'\\n')\n\n############################\n# Newton-Rapshon algorithm #\n############################\nerro_absoluto = np.zeros(N0)\nerro_relativo = np.zeros(N0)\niteracoes = np.zeros(N0)\n\ni = 1\nwhile i<=N0:\n x = x_chute_inicial - (f(x_chute_inicial)/derivative(f,x0=x_chute_inicial,dx=tol,n=1)) # Root\n erro_abs = np.abs(x-x_chute_inicial) # Absolute error\n if erro_abs<tol:\n print('Implementation of the Newton-Rapshon algorithm')\n print(f\"x = {x}\\n\"\n f\"Number of iterations: {i}\\n\"\n f\"Absolute error: {erro_abs}\\n\"\n f\"Relative error: {np.abs((x-x_chute_inicial)/x_chute_inicial)}\\n\"\n f\"f({x}) = {f(x)}\")\n break\n erro_relativo[i] = np.abs((x-x_chute_inicial)/x_chute_inicial)\n erro_absoluto[i] = np.abs(x-x_chute_inicial)\n iteracoes[i] = i\n i = i + 1\n x_chute_inicial = x\n\n###########\n# graphic #\n###########\nplt.figure(figsize = (21,6))\nplt.subplot(1,3,1)\nplt.plot(intervalo_x,f(intervalo_x),'b-')\nplt.title(r'f(x) = $-2x^{4}+x^{3}-2x+3$')\nplt.xlabel('x')\nplt.ylabel('f(x)')\nplt.grid(lw = 0.95,color = 'm',linestyle = '--')\n\nplt.subplot(1,3,2)\nplt.plot(iteracoes,erro_relativo,'y*')\nplt.title('relative error')\nplt.xlabel('number of iterations i')\nplt.ylabel('relative error')\nplt.grid(lw = 0.95,color = 'g',linestyle = '--')\n\nplt.subplot(1,3,3)\nplt.plot(iteracoes,erro_absoluto,'g^')\nplt.title('absolute error')\nplt.xlabel('number of iterations i')\nplt.ylabel('absolute error')\nplt.grid(lw = 0.95,color = 'y',linestyle = '--')\n\nThe roots of this equation, contained in the closed interval [-1.5,1.5] are approximately x0 = -1.1693 and x1 = 1.0. Finally below we have the graphs of f(x) by x, absolute error by iterations and relative error by interactions.\nFigure generated\n" ]
[ 0, 0 ]
[]
[]
[ "function", "scilab" ]
stackoverflow_0059117445_function_scilab.txt
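Neither answer above ends with a complete Scilab version, so here is a minimal Scilab sketch of the same idea: run Newton-Raphson from several starting points spread over [-1.5, 1.5] and keep the converged values that fall inside the interval. Guesses that converge to the same root will appear more than once, so deduplicate as needed; the number of starting points and the iteration cap are arbitrary choices.

// function and its derivative (the derivative is what z(x) was meant to be)
deff('y=f(x)', 'y=-2*x^4+x^3-2*x+3');
deff('y=fp(x)', 'y=-8*x^3+3*x^2-2');

tol = 0.0001;
roots_found = [];
for x0 = linspace(-1.5, 1.5, 7)       // several initial guesses
    x = x0;
    for k = 1:100                     // iteration cap
        x1 = x - f(x)/fp(x);          // Newton-Raphson step
        if abs(x1 - x) < tol then
            break;
        end
        x = x1;
    end
    if abs(f(x1)) < tol & x1 >= -1.5 & x1 <= 1.5 then
        roots_found = [roots_found, x1];
    end
end
disp(roots_found);                    // approximately -1.1693 and 1.0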
Q: Why I can't acces this property in my class I am trying to use this class to play and pause an interval timer but I am having problems with pause(). I don't know why I can't acces the timerId property from pause(). this.timerId return null in pause() . What am I doing wrong? class IntervalTimer { callbackStartTime; remaining = 0; paused = false; timerId = null; _callback; _delay; constructor(callback, delay) { this._callback = callback; this._delay = delay; } pause() { console.log("this.timerId", this.timerId); //return null if (!this.paused) { this.clear(); this.remaining = new Date().getTime() - this.callbackStartTime; this.paused = true; } } resume() { if (this.paused) { if (this.remaining) { setTimeout(() => { this.run(); this.paused = false; this.start(); }, this.remaining); } else { this.paused = false; this.start(); } } } clear() { clearInterval(this.timerId); } start() { console.log("this.timerId", this.timerId); // return timerId correctly this.clear(); this.timerId = setInterval(() => { this.run(); }, this._delay); } run() { this.callbackStartTime = new Date().getTime(); this._callback(); } } export default IntervalTimer; Here my code where I use the class const interval = new IntervalTimer(() => { // Coger un número aleatorio de los que quedan y borrarlo let randomIndex = Math.floor(Math.random() * numbersLeft.length); let randomNumber = numbersLeft[randomIndex]; if (randomIndex > -1) numbersLeft.splice(randomIndex, 1); // Pasarle el número cogido al estado const tempNList = numberList; if (lastNumber) numberList.push(lastNumber); setNumberList(tempNList); setNumber(randomNumber); lastNumber = randomNumber; }, BALLTIME); const startGame = () => { interval.start(); }; const pauseGame = () => { interval.pause(); }; A: Finally I solved it this way. I made start() return the this.timerId and then I can call pause(timerId) so basically I am taking the this.timerId outside the class to use it via parameter. Now it's working, but honestly I don't know why it wasn't working before. Thanks anyway for spending time on it! class IntervalTimer { callbackStartTime; remaining = 0; paused = false; timerId = null; _callback; _delay; constructor(callback, delay) { this._callback = callback; this._delay = delay; } pause(timerIDfromOutside) { //using the returned this.timerID in start() if (!this.paused) { this.clear(timerIDfromOutside); this.remaining = new Date().getTime() - this.callbackStartTime; this.paused = true; } } resume() { if (this.paused) { if (this.remaining) { setTimeout(() => { this.run(); this.paused = false; this.start(); }, this.remaining); } else { this.paused = false; this.start(); } } } clear(timerIDfromOutside) { clearInterval(timerIDfromOutside); } start() { this.paused = false this.clear(); this.timerId = setInterval(() => { this.run(); }, this._delay); return this.timerId //return this.timerId so I can pass it as a parameter in // pause() } run() { this.callbackStartTime = new Date().getTime(); this._callback(); } }
Why can't I access this property in my class
I am trying to use this class to play and pause an interval timer but I am having problems with pause(). I don't know why I can't acces the timerId property from pause(). this.timerId return null in pause() . What am I doing wrong? class IntervalTimer { callbackStartTime; remaining = 0; paused = false; timerId = null; _callback; _delay; constructor(callback, delay) { this._callback = callback; this._delay = delay; } pause() { console.log("this.timerId", this.timerId); //return null if (!this.paused) { this.clear(); this.remaining = new Date().getTime() - this.callbackStartTime; this.paused = true; } } resume() { if (this.paused) { if (this.remaining) { setTimeout(() => { this.run(); this.paused = false; this.start(); }, this.remaining); } else { this.paused = false; this.start(); } } } clear() { clearInterval(this.timerId); } start() { console.log("this.timerId", this.timerId); // return timerId correctly this.clear(); this.timerId = setInterval(() => { this.run(); }, this._delay); } run() { this.callbackStartTime = new Date().getTime(); this._callback(); } } export default IntervalTimer; Here my code where I use the class const interval = new IntervalTimer(() => { // Coger un número aleatorio de los que quedan y borrarlo let randomIndex = Math.floor(Math.random() * numbersLeft.length); let randomNumber = numbersLeft[randomIndex]; if (randomIndex > -1) numbersLeft.splice(randomIndex, 1); // Pasarle el número cogido al estado const tempNList = numberList; if (lastNumber) numberList.push(lastNumber); setNumberList(tempNList); setNumber(randomNumber); lastNumber = randomNumber; }, BALLTIME); const startGame = () => { interval.start(); }; const pauseGame = () => { interval.pause(); };
[ "Finally I solved it this way. I made start() return the this.timerId and then I can call pause(timerId) so basically I am taking the this.timerId outside the class to use it via parameter. Now it's working, but honestly I don't know why it wasn't working before. Thanks anyway for spending time on it!\nclass IntervalTimer {\n callbackStartTime;\n remaining = 0;\n paused = false;\n timerId = null;\n _callback;\n _delay;\n\n constructor(callback, delay) {\n this._callback = callback;\n this._delay = delay;\n }\n\n pause(timerIDfromOutside) { //using the returned this.timerID in start()\n if (!this.paused) {\n this.clear(timerIDfromOutside);\n this.remaining = new Date().getTime() - this.callbackStartTime;\n this.paused = true;\n }\n }\n\n resume() {\n if (this.paused) {\n if (this.remaining) {\n setTimeout(() => {\n this.run();\n this.paused = false;\n this.start();\n }, this.remaining);\n } else {\n this.paused = false;\n this.start();\n }\n }\n }\n\n clear(timerIDfromOutside) {\n clearInterval(timerIDfromOutside); \n }\n\n start() {\n this.paused = false\n this.clear();\n this.timerId = setInterval(() => {\n this.run();\n }, this._delay);\n return this.timerId //return this.timerId so I can pass it as a parameter in \n // pause()\n }\n\n run() {\n this.callbackStartTime = new Date().getTime();\n this._callback();\n }\n}\n\n" ]
[ 0 ]
[]
[]
[ "javascript", "properties", "setinterval", "settimeout" ]
stackoverflow_0074668752_javascript_properties_setinterval_settimeout.txt
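A possible explanation for the symptom in the interval-timer question above, not confirmed by the post: if const interval = new IntervalTimer(...) sits inside a React component body (the setNumber/setNumberList calls suggest one), a new IntervalTimer is constructed on every re-render, so pause() runs on a fresh instance whose timerId was never set. A minimal sketch of one way to keep a single instance across renders; the hook name useBallTimer and the import path ./IntervalTimer are assumptions, not part of the original code.

// Hypothetical sketch: keep ONE IntervalTimer instance across re-renders via useRef,
// so start() and pause() operate on the same object (and the same this.timerId).
import { useRef } from "react";
import IntervalTimer from "./IntervalTimer";

function useBallTimer(callback, delay) {
  const timerRef = useRef(null);                 // survives re-renders, unlike a plain const
  if (timerRef.current === null) {
    timerRef.current = new IntervalTimer(callback, delay); // created exactly once; callback is captured here
  }
  return {
    start: () => timerRef.current.start(),
    pause: () => timerRef.current.pause(),       // same instance, so timerId is no longer null
  };
}

export default useBallTimer;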
Q: keras save and load model, accuracy drop Link to colab https://colab.research.google.com/drive/1u_jRl3uMlxEne667aCxt5Qh8eMlhme8V?usp=sharing link to training data https://drive.google.com/file/d/1jcu7ZTnTF2obGb5OM4dD6T_GlU0sMWmL/view?usp=sharing So i train a model that have 70% and save it into drive and deleted runtime Then restart runtime and load the model from drive use the exact same code the accuracy drop to 40%-50% why? i tried save n load only the weights, or json, or .5 file, save n load using pickle etc etc. it doesnt work. after i deleted runtiime or open a new ipynb file and load the model the accuracy is always not the same A: I see your question and would like to clarify your understanding of the following: Your understanding of Model Training Your understanding of Training Accuracy and Validation Accuracy General rule-of-thumbs regarding model evaluation. When training your model, you do not want to have a "Perfect Model accuracy" (100% Accuracy during training). At the same time, you do not want your accuracies to be too low. (anything below 70%). During training, you want your training and testing accuracies to be as similar as possible. Having a large gap in accuracies can tell you that your model has 1 of 2 problems, overfitting and underfitting. # Example 1 Epoch 12/60 44/44 [==============================] - 0s 5ms/step - loss: 0.5669 - acc: 0.7429 - val_loss: 0.6224 - val_acc: 0.7133 Overfitting is like your model does not accept new and different information Underfitting is your model not understanding information/bad information used for training. Now, I refer your attention to Example 1, this random epoch I have selected in random from your training, this epoch shows a decent training dynamic, the difference between your acc and val_acc have a difference of 0.296 (2.96%) However, your last epoch: Epoch 60/60 44/44 [==============================] - 0s 6ms/step - loss: 0.0697 - acc: 1.0000 - val_loss: 0.5494 - val_acc: 0.7400 Has a acc difference of 0.2600(26%), this tells me that you have overfitted your model as your model has more or less memorized your validation dataset, thus, any new data that is passed into the model will be predicted less accurately. That is why when you are validating your dataset with a fresh new shuffle of your dataset your accuracy drops (There is no correlation between this drop and the accuracy of epoch accuracy delta). for a macro view, you can refer to your model graph: for a general rule of thumb, the best training and validation accuracies are between 70%(0.7) and 89%(0.89). This can change depending on your model requirements. Disclaimer: information in this post may not be 100% accurate
keras save and load model, accuracy drop
Link to colab https://colab.research.google.com/drive/1u_jRl3uMlxEne667aCxt5Qh8eMlhme8V?usp=sharing link to training data https://drive.google.com/file/d/1jcu7ZTnTF2obGb5OM4dD6T_GlU0sMWmL/view?usp=sharing So I train a model that reaches 70% accuracy, save it to Drive, and delete the runtime. Then I restart the runtime, load the model from Drive, and use the exact same code, but the accuracy drops to 40%-50%. Why? I tried saving and loading only the weights, or the JSON, or the .5 file, and saving and loading with pickle, etc. It doesn't work. After I delete the runtime or open a new ipynb file and load the model, the accuracy is never the same.
[ "I see your question and would like to clarify your understanding of the following:\n\nYour understanding of Model Training\nYour understanding of Training Accuracy and Validation Accuracy\nGeneral rule-of-thumbs regarding model evaluation.\n\n\nWhen training your model, you do not want to have a \"Perfect Model accuracy\" (100% Accuracy during training).\nAt the same time, you do not want your accuracies to be too low. (anything below 70%).\nDuring training, you want your training and testing accuracies to be as similar as possible. Having a large gap in accuracies can tell you that your model has 1 of 2 problems, overfitting and underfitting.\n# Example 1\nEpoch 12/60\n44/44 [==============================] - 0s 5ms/step - loss: 0.5669 - acc: 0.7429 - val_loss: 0.6224 - val_acc: 0.7133\n\nOverfitting is like your model does not accept new and different information\nUnderfitting is your model not understanding information/bad information used for training.\nNow, I refer your attention to Example 1, this random epoch I have selected in random from your training, this epoch shows a decent training dynamic, the difference between your acc and val_acc have a difference of 0.296 (2.96%)\nHowever, your last epoch:\nEpoch 60/60\n44/44 [==============================] - 0s 6ms/step - loss: 0.0697 - acc: 1.0000 - val_loss: 0.5494 - val_acc: 0.7400\n\nHas a acc difference of 0.2600(26%), this tells me that you have overfitted your model as your model has more or less memorized your validation dataset, thus, any new data that is passed into the model will be predicted less accurately.\nThat is why when you are validating your dataset with a fresh new shuffle of your dataset your accuracy drops (There is no correlation between this drop and the accuracy of epoch accuracy delta).\nfor a macro view, you can refer to your model graph:\n\nfor a general rule of thumb, the best training and validation accuracies are between 70%(0.7) and 89%(0.89). This can change depending on your model requirements.\nDisclaimer: information in this post may not be 100% accurate\n" ]
[ 0 ]
[]
[]
[ "artificial_intelligence", "keras", "machine_learning", "model", "python" ]
stackoverflow_0074665262_artificial_intelligence_keras_machine_learning_model_python.txt
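Beyond the overfitting point in the answer above, a frequent cause of this exact symptom is the data pipeline rather than the saved weights: if the new runtime re-fits a tokenizer/scaler or reshuffles the train/validation split, the restored model is evaluated on different inputs. A minimal, self-contained sketch (its tiny model, arrays, and file names are placeholders, not the notebook's) showing how to confirm whether save/load itself is lossless:

# Self-contained sketch: evaluate on the SAME held-out arrays before saving and after loading.
# If the two numbers match, the accuracy drop in the notebook comes from the data pipeline
# (e.g. a re-fit tokenizer/scaler or a reshuffled split), not from Keras saving/loading.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 8)).astype("float32")
y = (x.sum(axis=1) > 0).astype("int32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=3, verbose=0)

np.save("x_val.npy", x)                    # freeze the evaluation data next to the model
np.save("y_val.npy", y)
before = model.evaluate(x, y, verbose=0)
model.save("model.h5")

restored = tf.keras.models.load_model("model.h5")   # e.g. in a new runtime
after = restored.evaluate(np.load("x_val.npy"), np.load("y_val.npy"), verbose=0)
print("before:", before, "after:", after)            # should match, bit-for-bit or very nearly

If before and after match, the problem is upstream of model.save().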
Q: Laravel - how to group a Date string on table using collection i want to group my date Month but i don't know how to do that. i try used groupBy() but its giving me a error $user = Auth::user()->id; $date = productSold::where('user_id',$user)->get(); $collect = collect($date)->map(function ($date) { return date('M',strtotime($date->created_at))->groupBy('created_at'); }); return $collect; error : Call to a member function groupBy() on string without groupby this is the output : [ "Oct", "Oct", "Nov", "Dec", "Dec", "Dec", "Dec", "Dec", "Dec", "Dec", "Dec", "Dec", "Dec", "Dec", "Dec", "Dec", "Dec", "Dec", "Dec", "Dec", "Dec", "Dec", "Dec", "Dec", "Dec", "Dec" ] i want to group "Oct", "Nov", "Dec" A: Its error because you group by it from a string. So you should take out groupBy statement from map function. So the result should be like : $collect = collect($date)->map(function ($date) { return [ "created_at" => date('M',strtotime($date->created_at)) ]; })->groupBy('created_at'); The result should be like this.
Laravel - how to group a Date string on table using collection
i want to group my date Month but i don't know how to do that. i try used groupBy() but its giving me a error $user = Auth::user()->id; $date = productSold::where('user_id',$user)->get(); $collect = collect($date)->map(function ($date) { return date('M',strtotime($date->created_at))->groupBy('created_at'); }); return $collect; error : Call to a member function groupBy() on string without groupby this is the output : [ "Oct", "Oct", "Nov", "Dec", "Dec", "Dec", "Dec", "Dec", "Dec", "Dec", "Dec", "Dec", "Dec", "Dec", "Dec", "Dec", "Dec", "Dec", "Dec", "Dec", "Dec", "Dec", "Dec", "Dec", "Dec", "Dec" ] i want to group "Oct", "Nov", "Dec"
[ "Its error because you group by it from a string. So you should take out groupBy statement from map function. So the result should be like :\n$collect = collect($date)->map(function ($date) {\n return [\n \"created_at\" => date('M',strtotime($date->created_at))\n ];\n })->groupBy('created_at');\n\nThe result should be like this.\n" ]
[ 1 ]
[]
[]
[ "eloquent", "laravel", "laravel_5", "laravel_collection" ]
stackoverflow_0074673314_eloquent_laravel_laravel_5_laravel_collection.txt
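As a variant on the answer above: Laravel's collection groupBy() also accepts a callback, so the intermediate map step can be folded away. A sketch assuming created_at is the usual Carbon-cast timestamp on the productSold model from the question:

// Sketch: group the sales by month name directly with a groupBy() callback.
$sales = productSold::where('user_id', Auth::user()->id)->get();

$byMonth = $sales->groupBy(function ($sale) {
    return $sale->created_at->format('M');   // "Oct", "Nov", "Dec", ...
});

return $byMonth; // e.g. ["Oct" => [...], "Nov" => [...], "Dec" => [...]]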
Q: Textjoin values of column B if duplicates are present in column A I want to consolidate the data of column B into a single cell ONLY IF the index (ie., Column A) is duplicated. For example: Currently, I'm doing manually for each duplicated index by using the following formula: =TEXTJOIN(", ",TRUE,B4:B6) Is there a better way to do this all at once? Any help is appreciated. A: There may easier way but you can try this formula- =BYROW(A2:A17,LAMBDA(p,IF(INDEX(MAP(A2:A17,LAMBDA(x,SUM(--(A2:INDEX(A2:A17,ROW(x)-1)=x)))),ROW(p)-1,1)=1,TEXTJOIN(", ",1,FILTER(B2:B17,A2:A17=p)),""))) A: Using REDUCE might be possible for a more succinct solution, though try this for now: =BYROW(A2:A17,LAMBDA(ζ,LET(α,A2:A17,IF((COUNTIF(α,ζ)>1)*(COUNTIF(INDEX(α,1):ζ,ζ)=1),TEXTJOIN(", ",,FILTER(B2:B17,α=ζ)),""))))
Textjoin values of column B if duplicates are present in column A
I want to consolidate the data of column B into a single cell ONLY IF the index (i.e., column A) is duplicated. For example: Currently, I'm doing this manually for each duplicated index by using the following formula: =TEXTJOIN(", ",TRUE,B4:B6) Is there a better way to do this all at once? Any help is appreciated.
[ "There may easier way but you can try this formula-\n=BYROW(A2:A17,LAMBDA(p,IF(INDEX(MAP(A2:A17,LAMBDA(x,SUM(--(A2:INDEX(A2:A17,ROW(x)-1)=x)))),ROW(p)-1,1)=1,TEXTJOIN(\", \",1,FILTER(B2:B17,A2:A17=p)),\"\")))\n\n\n", "Using REDUCE might be possible for a more succinct solution, though try this for now:\n=BYROW(A2:A17,LAMBDA(ζ,LET(α,A2:A17,IF((COUNTIF(α,ζ)>1)*(COUNTIF(INDEX(α,1):ζ,ζ)=1),TEXTJOIN(\", \",,FILTER(B2:B17,α=ζ)),\"\"))))\n" ]
[ 1, 1 ]
[]
[]
[ "excel", "excel_2010", "excel_formula", "powerquery" ]
stackoverflow_0074673181_excel_excel_2010_excel_formula_powerquery.txt
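If LAMBDA/BYROW feel heavy, a hedged alternative for Excel 365 (FILTER and TEXTJOIN assumed available) is an ordinary formula entered in C2 and filled down beside the data in A2:B17, as in the answers above:

=IF(AND(COUNTIF($A$2:$A$17,A2)>1,COUNTIF($A$2:A2,A2)=1),TEXTJOIN(", ",TRUE,FILTER($B$2:$B$17,$A$2:$A$17=A2)),"")

It writes the joined values only on the first occurrence of a duplicated index and leaves every other row blank.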
Q: How do I trick the app into thinking the mouse movement is really me? I am attempting to get my script to open up a game through steam and then start a world. This all works fine up until the part where I need to navigate the game. I'm assuming that the modules that let you control the mouse just move to points rather than actually moving the mouse which the game doesn't pick up. The game only becomes responsive when I myself move the mouse around IRL. I've tried using pyautogui, mouse, pynput, and a couple of more but have had no luck. The game doesn't respond until I intervene. I'd love any help ya'll could give me. Thanks. Here's some of my code, if it helps. import pyautogui import time def openWorld(): #pyautogui.moveTo(230, 530, 1) pyautogui.click(x=230, y=530) A: It sounds like the game is not picking up the simulated mouse movements from the modules you are using. One possible solution is to try using the pyautogui module's moveRel() function to move the mouse relative to its current position, rather than using moveTo() to move it to a specific set of coordinates. This may help the game to recognize the mouse movements. Additionally, you can try using the pyautogui.PAUSE variable to add a delay between each mouse movement to allow the game to catch up and recognize the input. Here is an example of how you could use these techniques in your code: import pyautogui import time def openWorld(): pyautogui.PAUSE = 0.5 # add a delay of 0.5 seconds between each movement pyautogui.moveRel(100, 100, duration=1) # move the mouse 100 pixels to the right and 100 pixels down from its current position pyautogui.click() # click at the current mouse position You may need to adjust the parameters of the moveRel() and click() functions to fit your specific game and screen setup. Hope this helps!
How do I trick the app into thinking the mouse movement is really me?
I am attempting to get my script to open up a game through steam and then start a world. This all works fine up until the part where I need to navigate the game. I'm assuming that the modules that let you control the mouse just move to points rather than actually moving the mouse which the game doesn't pick up. The game only becomes responsive when I myself move the mouse around IRL. I've tried using pyautogui, mouse, pynput, and a couple of more but have had no luck. The game doesn't respond until I intervene. I'd love any help ya'll could give me. Thanks. Here's some of my code, if it helps. import pyautogui import time def openWorld(): #pyautogui.moveTo(230, 530, 1) pyautogui.click(x=230, y=530)
[ "It sounds like the game is not picking up the simulated mouse movements from the modules you are using. One possible solution is to try using the pyautogui module's moveRel() function to move the mouse relative to its current position, rather than using moveTo() to move it to a specific set of coordinates. This may help the game to recognize the mouse movements. Additionally, you can try using the pyautogui.PAUSE variable to add a delay between each mouse movement to allow the game to catch up and recognize the input. Here is an example of how you could use these techniques in your code:\nimport pyautogui\nimport time\n\ndef openWorld():\n pyautogui.PAUSE = 0.5 # add a delay of 0.5 seconds between each movement\n pyautogui.moveRel(100, 100, duration=1) # move the mouse 100 pixels to the right and 100 pixels down from its current position\n pyautogui.click() # click at the current mouse position\n\nYou may need to adjust the parameters of the moveRel() and click() functions to fit your specific game and screen setup. Hope this helps!\n" ]
[ 0 ]
[]
[]
[ "automation", "mouse", "pyautogui", "python" ]
stackoverflow_0074673336_automation_mouse_pyautogui_python.txt
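One possibility the answer above does not cover: many games read input through DirectInput and ignore the synthetic events pyautogui emits. The pydirectinput package mirrors pyautogui's call style with lower-level input; this is a sketch under the assumptions that the package is installed (pip install pydirectinput) and that the particular game accepts DirectInput-style events. Coordinates are the ones from the question.

# Hypothetical sketch using pydirectinput instead of pyautogui for a game window.
import time
import pydirectinput

def openWorld():
    time.sleep(2)                      # give the game window time to take focus
    pydirectinput.moveTo(230, 530)     # move the real cursor, not just the reported position
    pydirectinput.click(x=230, y=530)

openWorld()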
Q: Using Iceberg tables and time travel feature in AWS Quicksight I have Iceberg tables in AWS data catalog which I want to use to create dashboards in AWS QuickSight. The idea is to set a date paremter in QuickSight and then to be able to use it with Iceberg time travel feature. I.e. I'd like QuickSight to filter the data as of specific date using Iceberg capability to execute queries "as of timestamp" (e.g. select * from table FOR TIMESTAMP AS OF (timestamp '2022-12-01 22:00:00'). My questions are: Does Quicksight support Iceberg tables as a data source? Is it possible to use the time travel feature of Iceberg tables in Quicksight when writing custom sql queries for the data source? Is it possible to use Quicksight parameter with Iceberg time travel? If this is possible, it would be extremely powerful combination Iceberge timetravel + Quicksight dashboards. If it is not possible what is the best alternatives, assuming that my data are in Iceberg tables. It seems that Quicksight can work with Icerberg tables as a data source, but I can't figure out if quciksight paramters somehow can be used im time travel for Iceberg tables. A: You can use Iceberg tables as a data source with the Athena connector, but you can't use the QuickSight parameters for time travel. QuickSight parameters don't interact with the datasets, they only apply to the following: Calculated fields (except for multivalue parameters) Filters Dashboard and analysis URLs Actions Titles and descriptions throughout an analysis https://docs.aws.amazon.com/athena/latest/ug/querying-iceberg.html https://docs.aws.amazon.com/quicksight/latest/user/parameters-in-quicksight.html
Using Iceberg tables and time travel feature in AWS Quicksight
I have Iceberg tables in the AWS data catalog which I want to use to create dashboards in AWS QuickSight. The idea is to set a date parameter in QuickSight and then be able to use it with the Iceberg time travel feature. I.e. I'd like QuickSight to filter the data as of a specific date using Iceberg's capability to execute queries "as of timestamp" (e.g. select * from table FOR TIMESTAMP AS OF (timestamp '2022-12-01 22:00:00'). My questions are: Does QuickSight support Iceberg tables as a data source? Is it possible to use the time travel feature of Iceberg tables in QuickSight when writing custom SQL queries for the data source? Is it possible to use a QuickSight parameter with Iceberg time travel? If this is possible, it would be an extremely powerful combination: Iceberg time travel + QuickSight dashboards. If it is not possible, what are the best alternatives, assuming that my data are in Iceberg tables? It seems that QuickSight can work with Iceberg tables as a data source, but I can't figure out whether QuickSight parameters can somehow be used in time travel for Iceberg tables.
[ "You can use Iceberg tables as a data source with the Athena connector, but you can't use the QuickSight parameters for time travel. QuickSight parameters don't interact with the datasets, they only apply to the following:\n\nCalculated fields (except for multivalue parameters)\nFilters\nDashboard and analysis URLs\nActions\nTitles and descriptions throughout an analysis\n\nhttps://docs.aws.amazon.com/athena/latest/ug/querying-iceberg.html\nhttps://docs.aws.amazon.com/quicksight/latest/user/parameters-in-quicksight.html\n" ]
[ 0 ]
[]
[]
[ "amazon_quicksight", "apache_iceberg" ]
stackoverflow_0074649577_amazon_quicksight_apache_iceberg.txt
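To make the answer above concrete: a time-travel clause can be hard-coded in the dataset's custom SQL, but a QuickSight parameter cannot be substituted into it, so changing the snapshot means editing the dataset SQL. A sketch of the kind of query that works as an Athena-backed dataset definition; Athena engine version 3 is assumed, and the table name and timestamp are placeholders:

-- Hypothetical custom SQL for an Athena dataset over an Iceberg table.
-- The timestamp is fixed at dataset-definition time; QuickSight parameters cannot be injected here.
SELECT *
FROM my_db.my_iceberg_table
FOR TIMESTAMP AS OF TIMESTAMP '2022-12-01 22:00:00 UTC'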
Q: SpringBoot Page redirection error on controller I've tried a lot, but I can't find a reason, so I'm registering a question. The following three examples are the output results according to the input. 'login_success' and 'login_failed' are both html pages in the template path. The method 'loginSuccess()' or 'loginFailed()' was not executed in the failed debug result. What's wrong with that? I need a hand..... ;( Ex 1) (Successfully Run) [input value] (Query values that are not in DB) id = abc password = 1234 [Expected output results] redirect:login_failed [Actual output results] redirect:login_failed Ex 2) (Failed Run) [input value] (Query values that are not in DB) id = abc@1234 password = abcd1234 [Expected output results] redirect:login_failed [Actual output results] Redirection does not appear. Ex 3) (Failed Run) [input value] (Query values that exist in DB) id = [email protected] password = byebye0000 [Expected output results] redirect:login_success [Actual output results] Redirection does not appear. And the following code is the logic of receiving View's customer information from the controller as DTo and querying it from the internal logic. Controller package My_Project.integration.controller; import My_Project.integration.entity.Dto.LoginDto; import My_Project.integration.service.UserService; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Controller; import org.springframework.web.bind.annotation.GetMapping; @Controller public class LoginController { @Autowired private UserService userService; @GetMapping("/trylogin") public String login(LoginDto loginDto) { try { if (userService.login(loginDto)) { return "redirect:/loginSuccess"; } } catch (Exception e) { return "redirect:/loginFailed"; } return "redirect:/loginFailed"; } @GetMapping("/loginSuccess") public String loginSuccess() throws Exception { return "login_success"; } @GetMapping("/loginFailed") public String loginFailed() throws Exception { return "login_failed"; } // 예외처리가 잘못된듯?.... 
} Service package My_Project.integration.service; import My_Project.integration.entity.Dto.LoginDto; import My_Project.integration.entity.Users; import My_Project.integration.repository.UsersRepository; import lombok.Getter; import lombok.RequiredArgsConstructor; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Service; import javax.transaction.Transactional; @Service @RequiredArgsConstructor @Getter public class UserService { @Autowired private UsersRepository usersRepository; public boolean login(LoginDto loginDto) throws Exception { return usersRepository.checkUserInfo(loginDto.getId(), loginDto.getPassword()); } } Repository package My_Project.integration.repository.UserCustom.Impl; import My_Project.integration.entity.Users; import My_Project.integration.repository.UserCustom.UserCustomRepository; import lombok.RequiredArgsConstructor; import javax.persistence.EntityManager; import javax.persistence.Query; import java.util.List; import java.util.Optional; @RequiredArgsConstructor public class UserCustomRepositoryImpl implements UserCustomRepository { private final EntityManager em; @Override public boolean checkUserInfo(String id,String password) throws Exception{ Optional<Users> matchedUser = Optional.ofNullable(em.createQuery("select u from Users u where u.id = :id", Users.class) .setParameter("id", id) .getSingleResult()); if (matchedUser.isPresent() || matchedUser.isEmpty()) { if(matchedUser.get().getPassword().equals(password)) { return true; }else { return false; } } return false; } } A: I see the issue in UserCustomRepositoryImpl Class, you are calling .get() even when matchedUser is empty which might be causing the problem. Separate both cases i.e. when the user is present and when it isn't and handle them accordingly. When matchedUser is empty you can directly return a false. Also use the try-catch block here to check issue is here or not. In catch use Sysout to see the error.
SpringBoot Page redirection error on controller
I've tried a lot, but I can't find a reason, so I'm registering a question. The following three examples are the output results according to the input. 'login_success' and 'login_failed' are both html pages in the template path. The method 'loginSuccess()' or 'loginFailed()' was not executed in the failed debug result. What's wrong with that? I need a hand..... ;( Ex 1) (Successfully Run) [input value] (Query values that are not in DB) id = abc password = 1234 [Expected output results] redirect:login_failed [Actual output results] redirect:login_failed Ex 2) (Failed Run) [input value] (Query values that are not in DB) id = abc@1234 password = abcd1234 [Expected output results] redirect:login_failed [Actual output results] Redirection does not appear. Ex 3) (Failed Run) [input value] (Query values that exist in DB) id = [email protected] password = byebye0000 [Expected output results] redirect:login_success [Actual output results] Redirection does not appear. And the following code is the logic of receiving View's customer information from the controller as DTo and querying it from the internal logic. Controller package My_Project.integration.controller; import My_Project.integration.entity.Dto.LoginDto; import My_Project.integration.service.UserService; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Controller; import org.springframework.web.bind.annotation.GetMapping; @Controller public class LoginController { @Autowired private UserService userService; @GetMapping("/trylogin") public String login(LoginDto loginDto) { try { if (userService.login(loginDto)) { return "redirect:/loginSuccess"; } } catch (Exception e) { return "redirect:/loginFailed"; } return "redirect:/loginFailed"; } @GetMapping("/loginSuccess") public String loginSuccess() throws Exception { return "login_success"; } @GetMapping("/loginFailed") public String loginFailed() throws Exception { return "login_failed"; } // 예외처리가 잘못된듯?.... 
} Service package My_Project.integration.service; import My_Project.integration.entity.Dto.LoginDto; import My_Project.integration.entity.Users; import My_Project.integration.repository.UsersRepository; import lombok.Getter; import lombok.RequiredArgsConstructor; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Service; import javax.transaction.Transactional; @Service @RequiredArgsConstructor @Getter public class UserService { @Autowired private UsersRepository usersRepository; public boolean login(LoginDto loginDto) throws Exception { return usersRepository.checkUserInfo(loginDto.getId(), loginDto.getPassword()); } } Repository package My_Project.integration.repository.UserCustom.Impl; import My_Project.integration.entity.Users; import My_Project.integration.repository.UserCustom.UserCustomRepository; import lombok.RequiredArgsConstructor; import javax.persistence.EntityManager; import javax.persistence.Query; import java.util.List; import java.util.Optional; @RequiredArgsConstructor public class UserCustomRepositoryImpl implements UserCustomRepository { private final EntityManager em; @Override public boolean checkUserInfo(String id,String password) throws Exception{ Optional<Users> matchedUser = Optional.ofNullable(em.createQuery("select u from Users u where u.id = :id", Users.class) .setParameter("id", id) .getSingleResult()); if (matchedUser.isPresent() || matchedUser.isEmpty()) { if(matchedUser.get().getPassword().equals(password)) { return true; }else { return false; } } return false; } }
[ "I see the issue in UserCustomRepositoryImpl Class, you are calling .get() even when matchedUser is empty which might be causing the problem. Separate both cases i.e. when the user is present and when it isn't and handle them accordingly.\nWhen matchedUser is empty you can directly return a false. Also use the try-catch block here to check issue is here or not. In catch use Sysout to see the error.\n" ]
[ 0 ]
[]
[]
[ "spring", "spring_boot", "spring_mvc" ]
stackoverflow_0074673305_spring_spring_boot_spring_mvc.txt
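Spelling out the repository issue flagged in the answer above: getSingleResult() throws NoResultException when no row matches, so for an unknown id the exception propagates up and the controller's catch block is what produces the redirect, while the isPresent() || isEmpty() test is always true. A hedged rewrite of checkUserInfo that avoids both problems; it keeps the question's plain-text password comparison only to stay close to the original and should not be used as-is in production:

@Override
public boolean checkUserInfo(String id, String password) throws Exception {
    // getResultList() returns an empty list instead of throwing NoResultException,
    // so a missing user yields "false" rather than an exception in the controller.
    List<Users> matched = em.createQuery(
                "select u from Users u where u.id = :id", Users.class)
            .setParameter("id", id)
            .getResultList();

    if (matched.isEmpty()) {
        return false;
    }
    // Plain-text comparison kept only to mirror the question; use a PasswordEncoder in practice.
    return matched.get(0).getPassword().equals(password);
}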
Q: what is the correct Pointer to members definition? i took the below code from a different question on stackoverflow, im not sure What do the lines int (Foo :: * ptr); and int (Foo :: * ptr) (); mean? Can anyone share some answers? struct Foo { int a; int b; }; int main () { Foo foo; int (Foo :: * ptr); ptr = & Foo :: a; foo .*ptr = 123; // foo.a = 123; ptr = & Foo :: b; foo .*ptr = 234; // foo.b = 234; } Member functions are almost the same. struct Foo { int a (); int b (); }; int main () { Foo foo; int (Foo :: * ptr) (); ptr = & Foo :: a; (foo .*ptr) (); // foo.a (); ptr = & Foo :: b; (foo .*ptr) (); // foo.b (); } Debugging to no avail A: Pointer to members is a long story to tell. First we assume that you've known what a normal pointer is. Pointer to members suggests that it can point to the specific member of any instance of class. There are two types of pointer to members, first to member variables and second to member functions. Before that, the variables and functions can be static or non-static. For static ones, it's no other than a normal function from the program's perspective, e.g. in Linux ELF, static data are stored in .data directly, where the global variables are also stored. From the angle of the programmers, they are just access a special global function / variable as well, just adding some Class::. So, the pointer to static member variable / function is just the same as the pointer to a normal variable / function. Now let's talk about the non-static ones. Non-static members should always bind to some specific object, e.g. obj.a or obj.func() and Class::a or Class::func() is illegal. Then, is it possible to use a pointer to suggest that "I hope to point to a specific member of any instance, and when I want to use it, I will bind an instance"? That's what the pointer to members do. Wait... you may think: "That bothers! Why can't I just use the .?". To maintain the consistency, we will go back to this question finally. Now we assume it's useful first, and see what syntax it uses. class ClassOpTest { public: int nsVar; // non-static variable. void nsFunc(int){return;} // non-static function. }; int ClassOpTest::* nsVarPtr = &ClassOpTest::nsVar; void (ClassOpTest::*nsFuncPtr)(int) = &ClassOpTest::nsFunc; int main() { ClassOpTest object2; ClassOpTest* object2Ptr = &object2; object.*nsVarPtr = 1; // equals to object.nsVar = 1; object2.*nsVarPtr = 2; // equals to object2.nsVar = 2; object2Ptr->*nsVarPtr = 3; // equals to the last statement. // Note that these paratheses are necessary, considering the operation order. If there are not, it will be nsFuncPtr() be resolved first rather than object.*nsFuncPtr(), which reports errors. (object.*nsFuncPtr)(1); // equals to object.nsFunc(1); (object2Ptr->*nsFuncPtr)(2); // equals to object2.nsFunc(2); return 0; } You may find it's troublesome to write types like this, so you can use deduced type in C++11 as: using ClassOpTestIntPtr = decltype(&ClassOpTest::nsVar); using ClassOpTestFuncPtr = decltype(&ClassOpTest::nsFunc); ClassOpTestIntPtr nsVarPtr = &ClassOpTest::nsVar; ClassOpTestFuncPtr nsFuncPtr = &ClassOpTest::nsFunc; Notice that the decltype doesn't mean it always points to nsVar or nsFunc; it means the type same as them. You may also think .* or ->* is oblique(me too!), then you can use std::invoke in C++17 like this : std::invoke(nsVarPtr, object2) = 1; // equals to object.*nsVarPtr = 1; std::invoke(nsVarPtr, &object2) = 2; // equals to object2Ptr->*nsVarPtr = 2; // both work. 
std::invoke(nsFuncPtr, object2, 1); // equals to (object.*nsFunc)(1); std::invoke(nsFuncPtr, &object2, 2); // equals to (object2Ptr->*nsFunc)(2); std::invoke is significantly useful, but that's not the point of the answer. In a nutshell, it will use corresponding operator when the second calling parameter varies. Finally, why is it useful? In my point of view, that's mostly because the pointer only conveys the type, and the type may infer lots of members. For instance: struct RGB { std::uint8_t r; std::uint8_t g; std::uint8_t b; }; and I hope to do the blend two std::vector<RGB> using Intel's SIMD intrinsics. First for r, that is: reg1 = _mm_set_epi16(RGBdata1[i + 7].r, RGBdata1[i + 6].r, RGBdata1[i + 5].r, RGBdata1[i + 4].r, RGBdata1[i + 3].r, RGBdata1[i + 2].r, RGBdata1[i + 1].r, RGBdata1[i].r); reg2 = _mm_set_epi16(RGBdata2[i + 7].r, RGBdata2[i + 6].r, RGBdata2[i + 5].r, RGBdata2[i + 4].r, RGBdata2[i + 3].r, RGBdata2[i + 2].r, RGBdata2[i + 1].r, RGBdata2[i].r); reg1 = _mm_mullo_epi16(reg1, alphaReg1); reg2 = _mm_mullo_epi16(reg2, alphaReg2); resultReg1 = _mm_add_epi16(reg1, reg2); // for simplicity, code below omitted; there are also manys operation to get the result. // ... // store back _mm_store_si128((__m128i*)buffer, resultReg1); for(int k = 0; k < 16; k++) { outRGBdata[i + k].r = buffer[k]; } So what about g and b? Oops, okay, you have to paste the code twice. What if you find some bugs and want to change something? You have to paste again for g and b. That suffers! If we use pointer to members, then : using RGBColorPtr = std::uint8_t RGB::*; void SIMDBlendColor(RGB* begin1, RGB* begin2, RGB* outBegin, RGBColorPtr color, __m128i alphaReg1, __m128i alphaReg2) { __m128i resultReg1; alignas(16) std::uint8_t buffer[16]; reg1 = _mm_set_epi16((begin1 + 7)->*color, (begin1 + 6)->*color, (begin1 + 5)->*color, (begin1 + 4)->*color, (begin1 + 3)->*color, (begin1 + 2)->*color, (begin1 + 1)->*color, begin1->*color); reg2 = _mm_set_epi16((begin2 + 7)->*color, (begin2 + 6)->*color, (begin2 + 5)->*color, (begin2 + 4)->*color, (begin2 + 3)->*color, (begin2 + 2)->*color, (begin2 + 1)->*color, begin2->*color); reg1 = _mm_mullo_epi16(reg1, alphaReg1); reg2 = _mm_mullo_epi16(reg2, alphaReg2); resultReg1 = _mm_add_epi16(reg1, reg2); // ... _mm_store_si128((__m128i*)buffer, resultReg1); for(int k = 0; k < 16; k++) { (outBegin + k)->*color = buffer[k]; } return; } Then, you can just call like this : SIMDBlendColor(RGBdata1.data() + i, RGBdata2.data() + i, outRGBdata.data() + i, &RGB::r, alphaReg1. alphaReg2); SIMDBlendColor(RGBdata1.data() + i, RGBdata2.data() + i, outRGBdata.data() + i, &RGB::g, alphaReg1. alphaReg2); SIMDBlendColor(RGBdata1.data() + i, RGBdata2.data() + i, outRGBdata.data() + i, &RGB::b, alphaReg1. alphaReg2); Clean and beautiful! BTW, I strongly recommend you to check iso-cpp-wiki for more information.
What is the correct pointer-to-members definition?
i took the below code from a different question on stackoverflow, im not sure What do the lines int (Foo :: * ptr); and int (Foo :: * ptr) (); mean? Can anyone share some answers? struct Foo { int a; int b; }; int main () { Foo foo; int (Foo :: * ptr); ptr = & Foo :: a; foo .*ptr = 123; // foo.a = 123; ptr = & Foo :: b; foo .*ptr = 234; // foo.b = 234; } Member functions are almost the same. struct Foo { int a (); int b (); }; int main () { Foo foo; int (Foo :: * ptr) (); ptr = & Foo :: a; (foo .*ptr) (); // foo.a (); ptr = & Foo :: b; (foo .*ptr) (); // foo.b (); } Debugging to no avail
[ "Pointer to members is a long story to tell. First we assume that you've known what a normal pointer is.\nPointer to members suggests that it can point to the specific member of any instance of class. There are two types of pointer to members, first to member variables and second to member functions.\nBefore that, the variables and functions can be static or non-static. For static ones, it's no other than a normal function from the program's perspective, e.g. in Linux ELF, static data are stored in .data directly, where the global variables are also stored. From the angle of the programmers, they are just access a special global function / variable as well, just adding some Class::. So, the pointer to static member variable / function is just the same as the pointer to a normal variable / function.\nNow let's talk about the non-static ones. Non-static members should always bind to some specific object, e.g. obj.a or obj.func() and Class::a or Class::func() is illegal. Then, is it possible to use a pointer to suggest that \"I hope to point to a specific member of any instance, and when I want to use it, I will bind an instance\"? That's what the pointer to members do.\nWait... you may think: \"That bothers! Why can't I just use the .?\". To maintain the consistency, we will go back to this question finally. Now we assume it's useful first, and see what syntax it uses.\nclass ClassOpTest\n{\npublic:\n int nsVar; // non-static variable.\n void nsFunc(int){return;} // non-static function.\n};\n\nint ClassOpTest::* nsVarPtr = &ClassOpTest::nsVar;\nvoid (ClassOpTest::*nsFuncPtr)(int) = &ClassOpTest::nsFunc;\nint main()\n{\n ClassOpTest object2;\n ClassOpTest* object2Ptr = &object2;\n \n object.*nsVarPtr = 1; // equals to object.nsVar = 1;\n object2.*nsVarPtr = 2; // equals to object2.nsVar = 2;\n object2Ptr->*nsVarPtr = 3; // equals to the last statement.\n \n // Note that these paratheses are necessary, considering the operation order. If there are not, it will be nsFuncPtr() be resolved first rather than object.*nsFuncPtr(), which reports errors.\n (object.*nsFuncPtr)(1); // equals to object.nsFunc(1);\n (object2Ptr->*nsFuncPtr)(2); // equals to object2.nsFunc(2);\n return 0;\n}\n\nYou may find it's troublesome to write types like this, so you can use deduced type in C++11 as:\nusing ClassOpTestIntPtr = decltype(&ClassOpTest::nsVar);\nusing ClassOpTestFuncPtr = decltype(&ClassOpTest::nsFunc);\nClassOpTestIntPtr nsVarPtr = &ClassOpTest::nsVar;\nClassOpTestFuncPtr nsFuncPtr = &ClassOpTest::nsFunc;\n\nNotice that the decltype doesn't mean it always points to nsVar or nsFunc; it means the type same as them.\nYou may also think .* or ->* is oblique(me too!), then you can use std::invoke in C++17 like this :\nstd::invoke(nsVarPtr, object2) = 1; // equals to object.*nsVarPtr = 1;\nstd::invoke(nsVarPtr, &object2) = 2; // equals to object2Ptr->*nsVarPtr = 2; \n// both work.\nstd::invoke(nsFuncPtr, object2, 1); // equals to (object.*nsFunc)(1);\nstd::invoke(nsFuncPtr, &object2, 2); // equals to (object2Ptr->*nsFunc)(2);\n\nstd::invoke is significantly useful, but that's not the point of the answer. In a nutshell, it will use corresponding operator when the second calling parameter varies.\nFinally, why is it useful? In my point of view, that's mostly because the pointer only conveys the type, and the type may infer lots of members. For instance:\nstruct RGB\n{\n std::uint8_t r;\n std::uint8_t g;\n std::uint8_t b;\n};\n\nand I hope to do the blend two std::vector<RGB> using Intel's SIMD intrinsics. 
First for r, that is:\nreg1 = _mm_set_epi16(RGBdata1[i + 7].r, RGBdata1[i + 6].r, RGBdata1[i + 5].r,\n RGBdata1[i + 4].r, RGBdata1[i + 3].r, RGBdata1[i + 2].r,\n RGBdata1[i + 1].r, RGBdata1[i].r);\n\nreg2 = _mm_set_epi16(RGBdata2[i + 7].r, RGBdata2[i + 6].r, RGBdata2[i + 5].r,\n RGBdata2[i + 4].r, RGBdata2[i + 3].r, RGBdata2[i + 2].r,\n RGBdata2[i + 1].r, RGBdata2[i].r);\n\nreg1 = _mm_mullo_epi16(reg1, alphaReg1);\nreg2 = _mm_mullo_epi16(reg2, alphaReg2);\nresultReg1 = _mm_add_epi16(reg1, reg2);\n// for simplicity, code below omitted; there are also manys operation to get the result.\n// ...\n// store back\n_mm_store_si128((__m128i*)buffer, resultReg1);\nfor(int k = 0; k < 16; k++)\n{\n outRGBdata[i + k].r = buffer[k];\n}\n\nSo what about g and b? Oops, okay, you have to paste the code twice. What if you find some bugs and want to change something? You have to paste again for g and b. That suffers! If we use pointer to members, then :\nusing RGBColorPtr = std::uint8_t RGB::*;\nvoid SIMDBlendColor(RGB* begin1, RGB* begin2, RGB* outBegin, RGBColorPtr color, __m128i alphaReg1, __m128i alphaReg2)\n{\n __m128i resultReg1;\n alignas(16) std::uint8_t buffer[16];\n reg1 = _mm_set_epi16((begin1 + 7)->*color, (begin1 + 6)->*color, \n (begin1 + 5)->*color, (begin1 + 4)->*color, \n (begin1 + 3)->*color, (begin1 + 2)->*color, \n (begin1 + 1)->*color, begin1->*color);\n\n reg2 = _mm_set_epi16((begin2 + 7)->*color, (begin2 + 6)->*color, \n (begin2 + 5)->*color, (begin2 + 4)->*color, \n (begin2 + 3)->*color, (begin2 + 2)->*color, \n (begin2 + 1)->*color, begin2->*color);\n \n reg1 = _mm_mullo_epi16(reg1, alphaReg1);\n reg2 = _mm_mullo_epi16(reg2, alphaReg2);\n resultReg1 = _mm_add_epi16(reg1, reg2);\n // ...\n _mm_store_si128((__m128i*)buffer, resultReg1);\n for(int k = 0; k < 16; k++)\n {\n (outBegin + k)->*color = buffer[k];\n }\n return;\n}\n\nThen, you can just call like this :\nSIMDBlendColor(RGBdata1.data() + i, RGBdata2.data() + i, outRGBdata.data() + i, &RGB::r, alphaReg1. alphaReg2);\nSIMDBlendColor(RGBdata1.data() + i, RGBdata2.data() + i, outRGBdata.data() + i, &RGB::g, alphaReg1. alphaReg2);\nSIMDBlendColor(RGBdata1.data() + i, RGBdata2.data() + i, outRGBdata.data() + i, &RGB::b, alphaReg1. alphaReg2);\n\nClean and beautiful!\nBTW, I strongly recommend you to check iso-cpp-wiki for more information.\n" ]
[ 0 ]
[]
[]
[ "c++" ]
stackoverflow_0074672554_c++.txt
Q: Replace underscore with spaces in Jinja I'm trying to replace underscore with spaces, I tried the solution below: Django Template: remove underscore and capitalize each word but it just keep the first word and remove the rest of the string: example: min_wall_hight output: min my code: . . . {% for i in t[1:] %} <input type="text" name={{i[0]}} value={{i[0]|replace("_"," ")|capitalize}} readonly> {% endfor %} . . . when I use the filter in this way :{{i[0]|replace("_","-")|capitalize}} or without space just "" it works fine. but when it's space " " it will discard the rest of string can someone help I'm new jinja A: It looks like you are using the capitalize filter in your Jinja template, but you are only applying it to the first word of the string. This is why the output is only showing the first word and discarding the rest of the string. To fix this issue, you can apply the capitalize filter to each individual word in the string instead of just the first word. You can do this by splitting the string into a list of words, applying the capitalize and replace filters to each word, and then joining the words back together into a single string. Here is an example of how you could do this in your Jinja template: {% for i in t[1:] %} {% set words = i[0]|split("_") %} {% for word in words %} {{ word | capitalize | replace("_", " ") }} {% endfor %} <input type="text" name={{i[0]}} value={{ words | join(" ") }} readonly> {% endfor %} In this example, the split filter is used to split the string into a list of words, and then a nested for loop is used to iterate over the words. For each word, the capitalize and replace filters are applied to capitalize the first letter of the word and replace underscores with spaces. After all the words have been processed, the join filter is used to join the words back together into a single string with spaces between each word. This resulting string is then used as the value for the input element.
Replace underscore with spaces in Jinja
I'm trying to replace underscores with spaces. I tried the solution below: Django Template: remove underscore and capitalize each word but it just keeps the first word and removes the rest of the string: example: min_wall_hight output: min my code: . . . {% for i in t[1:] %} <input type="text" name={{i[0]}} value={{i[0]|replace("_"," ")|capitalize}} readonly> {% endfor %} . . . When I use the filter in this way: {{i[0]|replace("_","-")|capitalize}}, or without a space, just "", it works fine, but when it's a space " " it discards the rest of the string. Can someone help? I'm new to Jinja.
[ "It looks like you are using the capitalize filter in your Jinja template, but you are only applying it to the first word of the string. This is why the output is only showing the first word and discarding the rest of the string.\nTo fix this issue, you can apply the capitalize filter to each individual word in the string instead of just the first word. You can do this by splitting the string into a list of words, applying the capitalize and replace filters to each word, and then joining the words back together into a single string.\nHere is an example of how you could do this in your Jinja template:\n{% for i in t[1:] %}\n {% set words = i[0]|split(\"_\") %}\n {% for word in words %}\n {{ word | capitalize | replace(\"_\", \" \") }}\n {% endfor %}\n<input type=\"text\" name={{i[0]}} value={{ words | join(\" \") }} readonly>\n{% endfor %}\n\nIn this example, the split filter is used to split the string into a list of words, and then a nested for loop is used to iterate over the words. For each word, the capitalize and replace filters are applied to capitalize the first letter of the word and replace underscores with spaces.\nAfter all the words have been processed, the join filter is used to join the words back together into a single string with spaces between each word. This resulting string is then used as the value for the input element.\n" ]
[ 0 ]
[]
[]
[ "html", "jinja2" ]
stackoverflow_0074673361_html_jinja2.txt
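A detail that likely explains the "only the first word survives" symptom in the question above: the value attribute in the template is unquoted, so once the replacement inserts spaces the browser parses everything after the first word as stray attributes; replacing "_" with "-" only appears to work because it produces no space. A minimal sketch that quotes the attribute and sticks to built-in filters (title handles per-word capitalization, so no split step is needed):

{# Quoting the attribute keeps the whole string, spaces included, inside value="..." #}
{% for i in t[1:] %}
  <input type="text" name="{{ i[0] }}" value="{{ i[0] | replace('_', ' ') | title }}" readonly>
{% endfor %}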
Q: Angular start ngFor index from 1 Is it possible to start ngFor index from 1 instead of 0? let data of datas;let i=index+1 didn't work. A: *ngFor="let item of items | slice:1; let i = index; SlicePipe A: There are 2 possible answers to the question, depending on what was actually being asked. If the intent is to skip the first element of the array, then the answers involving slice are in the right direction. However, if the intent is to simply shift the index while still iterating over all of the array, then slice is NOT the correct approach, as it will skip the 0th element in the array, thereby outputting only n-1 items from an array of length n. @Taylor gave a real-world example of when the index might need to be shifted for display purposes, such as when outputting a list where the first entry should read 1, not 0. Here's another similar example: <li *ngFor="let book of books; let i = index"> {{ i + 1 }}. {{ book.title }} </li> which would produce output like: Sample Book Title Another book title ... A: That's not possible, but you could use Array.prototype.slice() to skip the first element: <li *ngFor="let item of list.slice(1)">{{ item }}</li> The SlicePipe is also an option if you prefer that syntax: <li *ngFor="let item of items | slice:1">{{ item }}</li> Description All behavior is based on the expected behavior of the JavaScript API Array.prototype.slice() and String.prototype.slice(). When operating on an Array, the returned Array is always a copy even when all the elements are being returned. If you also need the index to match, just add the number of elements you skipped to it: <li *ngFor="let item of list.slice(1); let i = index">{{ i + 1 }} {{ item }}</li> Or: <li *ngFor="let item of items | slice:1; let i = index">{{ i + 1 }} {{ item }}</li> Anyway, if you need to put too much logic in the template to make this work for your use case, then you should probably move that logic to the controller and just build another array with the exact elements and data that you need or cache the sliced array to avoid creating a new one if the data hasn't changed. A: <li *ngFor="let info of data; let i = index"> {{i + 1}} {{info.title}} </li> A: This works fine for me. With single quote *ngFor="let data of datas; let i = 'index+1'"; In this way I don't remove any data from the array datas and at the same time the index starts from 1 and it ends to datas length. A: You can't at least for now, it seems the team behind angular 2 is trying to keep ngFor really simple, there's a similar issue opened on Angular 2 repo about doing multiple assigning of the index and the answer was: syntax has to be simple for tools to support it. A: We can approach it like the below for custom tags/default tags: <custom-tag *ngFor="let item of items; let i = index" [text]="Item + getIndex(i)"></custom-tag> In Javascript: function getIndex(i) {return Number(i + 1).toString();} A: I am using let i = index in *ngFor class. As it is start from 0th element, so I am using here {{i+1}} instead of {{i}}, it will skip the 0th element in the array. It works fine for me... <ion-row *ngFor="let key of ques; let i = index"> <ion-text>{{i+1}}) {{key.question}}</ion-text> A: You can wrap it around a div then use *ngIf="i != selectedIndex" like this: <div *ngFor="let item of yourArray; let i = index"> <div *ngIf="i != selectedIndex"> {{ item }} </div> </div> where selectedIndex is the item index you want to remove
Angular start ngFor index from 1
Is it possible to start ngFor index from 1 instead of 0? let data of datas;let i=index+1 didn't work.
[ " *ngFor=\"let item of items | slice:1; let i = index;\n\nSlicePipe\n", "There are 2 possible answers to the question, depending on what was actually being asked.\nIf the intent is to skip the first element of the array, then the answers involving slice are in the right direction.\nHowever, if the intent is to simply shift the index while still iterating over all of the array, then slice is NOT the correct approach, as it will skip the 0th element in the array, thereby outputting only n-1 items from an array of length n.\n@Taylor gave a real-world example of when the index might need to be shifted for display purposes, such as when outputting a list where the first entry should read 1, not 0. \nHere's another similar example:\n<li *ngFor=\"let book of books; let i = index\">\n {{ i + 1 }}. {{ book.title }}\n</li>\n\nwhich would produce output like:\n\nSample Book Title\nAnother book title\n\n...\n", "That's not possible, but you could use Array.prototype.slice() to skip the first element:\n<li *ngFor=\"let item of list.slice(1)\">{{ item }}</li>\n\nThe SlicePipe is also an option if you prefer that syntax:\n<li *ngFor=\"let item of items | slice:1\">{{ item }}</li>\n\n\nDescription\nAll behavior is based on the expected behavior of the JavaScript API Array.prototype.slice() and String.prototype.slice().\nWhen operating on an Array, the returned Array is always a copy even when all the elements are being returned.\n\nIf you also need the index to match, just add the number of elements you skipped to it:\n<li *ngFor=\"let item of list.slice(1); let i = index\">{{ i + 1 }} {{ item }}</li>\n\nOr:\n<li *ngFor=\"let item of items | slice:1; let i = index\">{{ i + 1 }} {{ item }}</li>\n\nAnyway, if you need to put too much logic in the template to make this work for your use case, then you should probably move that logic to the controller and just build another array with the exact elements and data that you need or cache the sliced array to avoid creating a new one if the data hasn't changed.\n", " <li *ngFor=\"let info of data; let i = index\">\n {{i + 1}} {{info.title}}\n </li>\n\n", "This works fine for me. With single quote\n*ngFor=\"let data of datas; let i = 'index+1'\";\n\nIn this way I don't remove any data from the array datas and at the same time the index starts from 1 and it ends to datas length.\n", "You can't at least for now, it seems the team behind angular 2 is trying to keep ngFor really simple, there's a similar issue opened on Angular 2 repo about doing multiple assigning of the index and the answer was:\n\nsyntax has to be simple for tools to support it.\n\n", "We can approach it like the below for custom tags/default tags: \n <custom-tag *ngFor=\"let item of items; let i = index\" [text]=\"Item + getIndex(i)\"></custom-tag>\n\nIn Javascript:\nfunction getIndex(i) {return Number(i + 1).toString();}\n\n", "I am using let i = index in *ngFor class. As it is start from 0th element, so I am using here {{i+1}} instead of {{i}}, it will skip the 0th element in the array. It works fine for me...\n<ion-row *ngFor=\"let key of ques; let i = index\">\n <ion-text>{{i+1}}) {{key.question}}</ion-text>\n\n", "You can wrap it around a div then use *ngIf=\"i != selectedIndex\" like this:\n <div *ngFor=\"let item of yourArray; let i = index\">\n <div *ngIf=\"i != selectedIndex\"> {{ item }} </div>\n </div>\n\nwhere selectedIndex is the item index you want to remove\n" ]
[ 72, 29, 23, 15, 9, 4, 1, 0, 0 ]
[]
[]
[ "angular" ]
stackoverflow_0039057119_angular.txt
Q: How can I "disable" zoom on a mobile web page? I am creating a mobile web page that is basically a big form with several text inputs. However (at least on my Android cellphone), every time I click on some input the whole page zooms there, obscuring the rest of the page. Is there some HTML or CSS command to disable this kind of zoom on moble web pages? A: This should be everything you need: <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=0"> A: For those of you late to the party, kgutteridge's answer doesn't work for me and Benny Neugebauer's answer includes target-densitydpi (a feature that is being deprecated). This however does work for me: <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no" /> A: There are a number of approaches here- and though the position is that typically users should not be restricted when it comes to zooming for accessibility purposes, there may be incidences where is it required: Render the page at the width of the device, dont scale: <meta name="viewport" content="width=device-width, initial-scale=1.0"> Prevent scaling- and prevent the user from being able to zoom: <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=no"> Removing all zooming, all scaling <meta name="viewport" content="user-scalable=no, initial-scale=1, maximum-scale=1, minimum-scale=1, width=device-width, height=device-height, target-densitydpi=device-dpi" /> A: Mobile browsers (most of them) require font-size in inputs to be 16px. And since there is still no solution for initial issue, here's a pure CSS solution. input[type="text"], input[type="number"], input[type="email"], input[type="tel"], input[type="password"] { font-size: 16px; } solves the issue. So you don't need to disable zoom and loose accessibility features of you site. If your base font-size is not 16px or not 16px on mobiles, you can use media queries. @media screen and (max-width: 767px) { input[type="text"], input[type="number"], input[type="email"], input[type="tel"], input[type="password"] { font-size: 16px; } } A: Seems like just adding meta tags to index.html doesn't prevent page from zooming. Adding below style will do the magic. :root { touch-action: pan-x pan-y; height: 100% } EDIT: Demo: https://no-mobile-zoom.stackblitz.io A: You can use: <head> <meta name="viewport" content="target-densitydpi=device-dpi, initial-scale=1.0, user-scalable=no" /> ... </head> But please note that with Android 4.4 the property target-densitydpi is no longer supported. So for Android 4.4 and later the following is suggested as best practice: <meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=no" /> A: please try adding this meta-tag and style <meta content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no" name="viewport"/> <style> body{ touch-action: manipulation; } </style> A: Possible Solution for Web Apps: While zooming can not be disabled in iOS Safari anymore, it will be disabled when opening the site from a home screen shortcut. Add these meta tags to declare your App as "Web App capable": <meta content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no" name="viewport" > <meta name="apple-mobile-web-app-capable" content="yes" > However only use this feature if your app is self sustaining, as the forward/backward buttons and URL bar as well as the sharing options are disabled. 
(You can still swipe left and right though) This approach however enables quite the app like ux. The fullscreen browser only starts when the site is loaded from the homescreen. I also only got it to work after I included an apple-touch-icon-180x180.png in my root folder. As a bonus, you probably also want to include a variant of this as well: <meta name="apple-mobile-web-app-status-bar-style" content="black-translucent"> A: <script type="text/javascript"> document.addEventListener('touchmove', function (event) { if (event.scale !== 1) { event.preventDefault(); } }, { passive: false }); </script> Please Add the Script to Disable pinch, tap, focus Zoom A: You can accomplish the task by simply adding the following 'meta' element into your 'head': <meta name="viewport" content="user-scalable=no"> Adding all the attributes like 'width','initial-scale', 'maximum-width', 'maximum-scale' might not work. Therefore, just add the above element. A: The solution using a meta-tag did not work for me (tested on Chrome win10 and safari IOS 14.3), and I also believe that the concerns regarding accessibility, as mentioned by Jack and others, should be honored. My solution is to disable zooming only on elements that are damaged by the default zoom. I did this by registering event listeners for zoom-gestures and using event.preventDefault() to suppress the browsers default zoom-behavior. This needs to be done with several events (touch gestures, mouse wheel and keys). The following snippet is an example for the mouse wheel and pinch gestures on touchpads: noteSheetCanvas.addEventListener("wheel", e => { // suppress browsers default zoom-behavior: e.preventDefault(); // execution of my own custom zooming-behavior: if (e.deltaY > 0) { this._zoom(1); } else { this._zoom(-1); } }); How to detect touch gestures is described here: https://stackoverflow.com/a/11183333/1134856 I used this to keep the standard zooming behavior for most parts of my application and to define custom zooming-behavior on a canvas-element. A: Using this post and a few others I managed to sort this out so that is compatible with Android and iPhone/iPad/iPod using the following code. This is for PHP, you can use the same concept for any other language with string searches. <?php //Device specific headers $iPod = stripos($_SERVER['HTTP_USER_AGENT'],"iPod"); $iPhone = stripos($_SERVER['HTTP_USER_AGENT'],"iPhone"); $iPad = stripos($_SERVER['HTTP_USER_AGENT'],"iPad"); $Android = stripos($_SERVER['HTTP_USER_AGENT'],"Android"); $webOS = stripos($_SERVER['HTTP_USER_AGENT'],"webOS"); if($iPhone || $iPod || $iPad){ echo '<meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=0" />'; } else { echo '<meta name="viewport" content="width=device-width, initial-scale=1.0" />'; } ?> A: This may help someone: on iOS, my problem was fixed by swapping <button>s for <div>s, along with the other things mentioned. A: <header> <meta name="viewport" content="user-scalable=no"> </header> A pure "user-scalable=no" is sufficient and excluding other width and scale parameters should be fine for your case. In my case, I have just simply added the snippet on my . Tested on iOS 16.
How can I "disable" zoom on a mobile web page?
I am creating a mobile web page that is basically a big form with several text inputs. However (at least on my Android cellphone), every time I tap an input the whole page zooms in there, obscuring the rest of the page. Is there some HTML or CSS command to disable this kind of zoom on mobile web pages?
[ "This should be everything you need:\n<meta name=\"viewport\" \n content=\"width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=0\">\n\n", "For those of you late to the party, kgutteridge's answer doesn't work for me and Benny Neugebauer's answer includes target-densitydpi (a feature that is being deprecated).\nThis however does work for me: \n<meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no\" />\n\n", "There are a number of approaches here- and though the position is that typically users should not be restricted when it comes to zooming for accessibility purposes, there may be incidences where is it required:\nRender the page at the width of the device, dont scale:\n<meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">\n\nPrevent scaling- and prevent the user from being able to zoom:\n<meta name=\"viewport\" content=\"width=device-width, initial-scale=1, maximum-scale=1, user-scalable=no\">\n\nRemoving all zooming, all scaling\n<meta name=\"viewport\" content=\"user-scalable=no, initial-scale=1, maximum-scale=1, minimum-scale=1, width=device-width, height=device-height, target-densitydpi=device-dpi\" />\n\n", "Mobile browsers (most of them) require font-size in inputs to be 16px.\nAnd since there is still no solution for initial issue, here's a pure CSS solution.\ninput[type=\"text\"],\ninput[type=\"number\"],\ninput[type=\"email\"],\ninput[type=\"tel\"],\ninput[type=\"password\"] {\n font-size: 16px;\n}\n\nsolves the issue. So you don't need to disable zoom and loose accessibility features of you site.\nIf your base font-size is not 16px or not 16px on mobiles, you can use media queries.\n@media screen and (max-width: 767px) {\n input[type=\"text\"],\n input[type=\"number\"],\n input[type=\"email\"],\n input[type=\"tel\"],\n input[type=\"password\"] {\n font-size: 16px;\n }\n}\n\n", "Seems like just adding meta tags to index.html doesn't prevent page from zooming. Adding below style will do the magic. \n:root {\n touch-action: pan-x pan-y;\n height: 100% \n}\n\nEDIT:\nDemo: https://no-mobile-zoom.stackblitz.io\n", "You can use:\n<head>\n <meta name=\"viewport\" content=\"target-densitydpi=device-dpi, initial-scale=1.0, user-scalable=no\" />\n ...\n</head>\n\nBut please note that with Android 4.4 the property target-densitydpi is no longer supported. So for Android 4.4 and later the following is suggested as best practice:\n<meta name=\"viewport\" content=\"width=device-width, initial-scale=1, user-scalable=no\" />\n\n", "please try adding this meta-tag and style \n<meta content=\"width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no\" name=\"viewport\"/>\n\n\n<style>\nbody{\n touch-action: manipulation;\n }\n</style>\n\n", "Possible Solution for Web Apps: While zooming can not be disabled in iOS Safari anymore,\nit will be disabled when opening the site from a home screen shortcut.\nAdd these meta tags to declare your App as \"Web App capable\":\n <meta content=\"width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no\" name=\"viewport\" >\n <meta name=\"apple-mobile-web-app-capable\" content=\"yes\" >\n\nHowever only use this feature if your app is self sustaining, as the forward/backward buttons and URL bar as well as the sharing options are disabled. (You can still swipe left and right though) This approach however enables quite the app like ux. The fullscreen browser only starts when the site is loaded from the homescreen. 
I also only got it to work after I included an apple-touch-icon-180x180.png in my root folder.\nAs a bonus, you probably also want to include a variant of this as well:\n<meta name=\"apple-mobile-web-app-status-bar-style\" content=\"black-translucent\">\n\n", "<script type=\"text/javascript\">\ndocument.addEventListener('touchmove', function (event) {\n if (event.scale !== 1) { event.preventDefault(); }\n}, { passive: false });\n</script>\n\nPlease Add the Script to Disable pinch, tap, focus Zoom\n", "You can accomplish the task by simply adding the following 'meta' element into your 'head':\n<meta name=\"viewport\" content=\"user-scalable=no\">\n\nAdding all the attributes like 'width','initial-scale', 'maximum-width', 'maximum-scale' might not work. Therefore, just add the above element.\n", "The solution using a meta-tag did not work for me (tested on Chrome win10 and safari IOS 14.3), and I also believe that the concerns regarding accessibility, as mentioned by Jack and others, should be honored.\nMy solution is to disable zooming only on elements that are damaged by the default zoom.\nI did this by registering event listeners for zoom-gestures and using event.preventDefault() to suppress the browsers default zoom-behavior.\nThis needs to be done with several events (touch gestures, mouse wheel and keys). The following snippet is an example for the mouse wheel and pinch gestures on touchpads:\nnoteSheetCanvas.addEventListener(\"wheel\", e => {\n // suppress browsers default zoom-behavior:\n e.preventDefault();\n\n // execution of my own custom zooming-behavior:\n if (e.deltaY > 0) {\n this._zoom(1);\n } else {\n this._zoom(-1);\n }\n });\n\nHow to detect touch gestures is described here: https://stackoverflow.com/a/11183333/1134856\nI used this to keep the standard zooming behavior for most parts of my application and to define custom zooming-behavior on a canvas-element.\n", "Using this post and a few others I managed to sort this out so that is compatible with Android and iPhone/iPad/iPod using the following code. This is for PHP, you can use the same concept for any other language with string searches.\n<?php //Device specific headers\n$iPod = stripos($_SERVER['HTTP_USER_AGENT'],\"iPod\");\n$iPhone = stripos($_SERVER['HTTP_USER_AGENT'],\"iPhone\");\n$iPad = stripos($_SERVER['HTTP_USER_AGENT'],\"iPad\");\n$Android = stripos($_SERVER['HTTP_USER_AGENT'],\"Android\");\n$webOS = stripos($_SERVER['HTTP_USER_AGENT'],\"webOS\");\n\nif($iPhone || $iPod || $iPad){\n echo '<meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=0\" />';\n} else {\n echo '<meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\" />';\n}\n\n?>\n", "This may help someone: on iOS, my problem was fixed by swapping <button>s for <div>s, along with the other things mentioned.\n", "<header>\n <meta name=\"viewport\" content=\"user-scalable=no\">\n</header>\n\nA pure \"user-scalable=no\" is sufficient and excluding other width and scale parameters should be fine for your case. In my case, I have just simply added the snippet on my . Tested on iOS 16.\n" ]
[ 577, 189, 67, 51, 47, 43, 14, 10, 7, 3, 0, 0, 0, 0 ]
[ "document.addEventListener('dblclick', (event) => {\n event.preventDefault()\n}, { passive: false });\n\n" ]
[ -3 ]
[ "css", "html", "mobile" ]
stackoverflow_0004472891_css_html_mobile.txt
Q: Form validation? make textbox border green/red depending on conditions How do i make it so the border of the phone number box is green when it follows the format: 123 456 7890 (3 digits, space, 3 digits, space, 4 digits) and red if it does not? additionally, how do i make it so the product id/product info text box border is green when RW100, RW101, RW102, RW103, RW200, RW201, RW202, or RW203 are entered and red when not? thank you https://jsfiddle.net/MangoMelody_/yfv8rcx2/5/ <script> //product id validation function idCheck() { var prodNameBox = document.getElementById("prodname"); var prodname = prodNameBox.innerText; if (prodname === 'RW100' || prodname === 'RW101' || prodname === 'RW102' || prodname === 'RW103' || prodname === 'RW200' || prodname === 'RW201' || prodname === 'RW202' || prodname === 'RW203') { prodNameBox.style.borderColor = "green"; } else { prodNameBox.style.borderColor = "red"; } } // phone number validation function phoneNumber() { var phonenoBox = document.getElementById("phoneno"); var phoneno = phonenoBox.value; if ((phoneno.value.match("^[1-9]\d{2}\s\d{3}\s\d{4}"))) { phonenoBox.style.borderColor = "green"; } else { phonenoBox.style.borderColor = "red"; } } </script> A: If you want to do some simple client-side validation on inputs, such as matching a string against a regular expression, you can use the blur event and check the value entered once the user leaves that input: const phonenoBox = document.getElementById("phoneno"); phonenoBox.addEventListener("blur", () => { // check inputted value and apply styles accordingly here });
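Filling in that stub, a minimal sketch (it assumes the same element ids as the question, that the product field is an <input> so .value is read instead of .innerText, and it avoids the original's phoneno.value-on-a-string bug; the id list and phone format come from the question):
const phonenoBox = document.getElementById("phoneno");
phonenoBox.addEventListener("blur", () => {
  // 3 digits (first not 0), space, 3 digits, space, 4 digits
  const ok = /^[1-9]\d{2} \d{3} \d{4}$/.test(phonenoBox.value);
  phonenoBox.style.borderColor = ok ? "green" : "red";
});

const prodNameBox = document.getElementById("prodname");
const validIds = ["RW100", "RW101", "RW102", "RW103", "RW200", "RW201", "RW202", "RW203"];
prodNameBox.addEventListener("blur", () => {
  prodNameBox.style.borderColor = validIds.includes(prodNameBox.value.trim()) ? "green" : "red";
});
The same handlers can also be attached to the input event if live feedback while typing is wanted.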
Form validation? make textbox border green/red depending on conditions
How do i make it so the border of the phone number box is green when it follows the format: 123 456 7890 (3 digits, space, 3 digits, space, 4 digits) and red if it does not? additionally, how do i make it so the product id/product info text box border is green when RW100, RW101, RW102, RW103, RW200, RW201, RW202, or RW203 are entered and red when not? thank you https://jsfiddle.net/MangoMelody_/yfv8rcx2/5/ <script> //product id validation function idCheck() { var prodNameBox = document.getElementById("prodname"); var prodname = prodNameBox.innerText; if (prodname === 'RW100' || prodname === 'RW101' || prodname === 'RW102' || prodname === 'RW103' || prodname === 'RW200' || prodname === 'RW201' || prodname === 'RW202' || prodname === 'RW203') { prodNameBox.style.borderColor = "green"; } else { prodNameBox.style.borderColor = "red"; } } // phone number validation function phoneNumber() { var phonenoBox = document.getElementById("phoneno"); var phoneno = phonenoBox.value; if ((phoneno.value.match("^[1-9]\d{2}\s\d{3}\s\d{4}"))) { phonenoBox.style.borderColor = "green"; } else { phonenoBox.style.borderColor = "red"; } } </script>
[ "If you want to do some simple client-side validation on inputs, such as matching a string against a regular expression, you can use the blur event and check the value entered once the user leaves that input:\nconst phonenoBox = document.getElementById(\"phoneno\");\n\nphonenoBox.addEventListener(\"blur\", () => {\n // check inputted value and apply styles accordingly here\n});\n\n" ]
[ 0 ]
[]
[]
[ "javascript" ]
stackoverflow_0074672911_javascript.txt
Q: Fixing the "Build Type contains custom BuildConfig fields, but the feature is disabled" error w/ buildConfigField In the new version of Android Studio (Flamingo | 2022.2.1 Canary 9) with the org.jetbrains.kotlin (1.8.0-Beta) plugin and 8.0.0-alpha09 gradle plugin, a new build suddenly gets this error: Build Type 'release' contains custom BuildConfig fields, but the feature is disabled. Is there a way to make this go away? A: Answering my own question -- there is a quick solution. Try adding the following line to gradle.properties, and the problem should hopefully stop bothering you (for now): android.defaults.buildfeatures.buildconfig=true This issue is due to the deprecation of buildConfigField (from android.packageBuildConfig) as described in this commit. UPDATE 12/12/22: Per a note from Roar Grønmo below, there is a newer way to sneak the timestamp into the BuildConfig.java file than the one I suggested back in 2014. To use this newer method, first delete any lines in your current build.gradle (or build.gradle.kts) file that looks like: buildConfigField("String", "BUILD_TIME", "\"" + System.currentTimeMillis().toString() + "\"") Instead, first add the following to the top of your build.gradle.kts: import com.android.build.api.variant.BuildConfigField and outside of the android { ... } part of build.config.kts add this: androidComponents { onVariants { it.buildConfigFields.put( "BUILD_TIME", BuildConfigField( "String", "\"" + System.currentTimeMillis().toString() + "\"", "build timestamp" ) ) } } You shouldn't have to make any new changes to your main codebase-- the timestamp can still be accessed in Kotlin like this: private val buildDate = Date(BuildConfig.BUILD_TIME.toLong()) Log.i("MyProgram", "This .apk was built on ${buildDate.toString()}"); That's it! Note this still requires the change to gradle.properties described above or you will see an Accessing value buildConfigFields in variant ____ has no effect as the feature buildConfig is disabled. warning. There may still be a better way to do this without using BuildConfigField, but if so, I don't know it. If anyone has a more permanent fix, please let me (us) know. A: adding android.defaults.buildfeatures.buildconfig=true to your gradle.properties would fix this.
Fixing the "Build Type contains custom BuildConfig fields, but the feature is disabled" error w/ buildConfigField
In the new version of Android Studio (Flamingo | 2022.2.1 Canary 9) with the org.jetbrains.kotlin (1.8.0-Beta) plugin and 8.0.0-alpha09 gradle plugin, a new build suddenly gets this error: Build Type 'release' contains custom BuildConfig fields, but the feature is disabled. Is there a way to make this go away?
[ "Answering my own question -- there is a quick solution. Try adding the following line to gradle.properties, and the problem should hopefully stop bothering you (for now):\nandroid.defaults.buildfeatures.buildconfig=true\n\nThis issue is due to the deprecation of buildConfigField (from android.packageBuildConfig) as described in this commit.\nUPDATE 12/12/22:\nPer a note from Roar Grønmo below, there is a newer way to sneak the timestamp into the BuildConfig.java file than the one I suggested back in 2014.\nTo use this newer method, first delete any lines in your current build.gradle (or build.gradle.kts) file that looks like:\nbuildConfigField(\"String\", \"BUILD_TIME\", \"\\\"\" + System.currentTimeMillis().toString() + \"\\\"\")\n\nInstead, first add the following to the top of your build.gradle.kts:\nimport com.android.build.api.variant.BuildConfigField\n\nand outside of the android { ... } part of build.config.kts add this:\nandroidComponents {\n onVariants {\n it.buildConfigFields.put(\n \"BUILD_TIME\", BuildConfigField(\n \"String\", \"\\\"\" + System.currentTimeMillis().toString() + \"\\\"\", \"build timestamp\"\n )\n )\n }\n}\n\nYou shouldn't have to make any new changes to your main codebase-- the timestamp can still be accessed in Kotlin like this:\nprivate val buildDate = Date(BuildConfig.BUILD_TIME.toLong())\nLog.i(\"MyProgram\", \"This .apk was built on ${buildDate.toString()}\");\n\nThat's it! Note this still requires the change to gradle.properties described above or you will see an Accessing value buildConfigFields in variant ____ has no effect as the feature buildConfig is disabled. warning.\nThere may still be a better way to do this without using BuildConfigField, but if so, I don't know it. If anyone has a more permanent fix, please let me (us) know.\n", "adding android.defaults.buildfeatures.buildconfig=true to your gradle.properties would fix this.\n" ]
[ 4, 0 ]
[]
[]
[ "android", "android_buildconfig", "android_studio", "gradle", "java" ]
stackoverflow_0074634321_android_android_buildconfig_android_studio_gradle_java.txt
Q: Adding a new content type with filtering and search options in a WordPress theme I am using a WordPress theme to which I need to add a new content type with filtering and search options. How can I do that? A: The CPT UI plugin is the plugin you're looking for.
Adding a new content type with filtering and search options in a WordPress theme
I am using a WordPress theme to which I need to add a new content type with filtering and search options. How can I do that?
[ "The CPT UI plugin is the plugin you're looking for.\n" ]
[ 0 ]
[]
[]
[ "custom_wordpress_pages", "wordpress" ]
stackoverflow_0074498821_custom_wordpress_pages_wordpress.txt
Q: Receive a JSON input from [FormBody] and bind it in c# model Im using .net core 7 I wan to do very basic thing but my binding class is empty. [ProducesResponseType(StatusCodes.Status200OK)] [ProducesResponseType(StatusCodes.Status400BadRequest)] [ProducesResponseType(StatusCodes.Status500InternalServerError)] [ProducesDefaultResponseType] [Route(nameof(UploadSalesInvoice))] [HttpPost] public async Task<IActionResult> UploadSalesInvoice([FromBody] InvoiceDto content , [FromQuery]string requestId = "007", long apiKeyId = 1, [FromQuery]string sendToCir = "Auto"){...} My binding model: public class InvoiceDto { [JsonProperty("buyer")] public Buyer? Buyer { get; set; } } /// <summary> /// BG-7: Buyer /// </summary> public class Buyer { /// <summary> /// BT-48: Buyer vat /// </summary> [JsonProperty("buyer_vat")] public string? BuyerVat { get; set; } } When I run the program, it goes into this webApi method above but my binding object is empty. Im using swagger, and in the box for content I put: {"buyer":{"buyer_vat":"something"}} Is there any better approach?
Receive a JSON input from [FormBody] and bind it in c# model
Im using .net core 7 I wan to do very basic thing but my binding class is empty. [ProducesResponseType(StatusCodes.Status200OK)] [ProducesResponseType(StatusCodes.Status400BadRequest)] [ProducesResponseType(StatusCodes.Status500InternalServerError)] [ProducesDefaultResponseType] [Route(nameof(UploadSalesInvoice))] [HttpPost] public async Task<IActionResult> UploadSalesInvoice([FromBody] InvoiceDto content , [FromQuery]string requestId = "007", long apiKeyId = 1, [FromQuery]string sendToCir = "Auto"){...} My binding model: public class InvoiceDto { [JsonProperty("buyer")] public Buyer? Buyer { get; set; } } /// <summary> /// BG-7: Buyer /// </summary> public class Buyer { /// <summary> /// BT-48: Buyer vat /// </summary> [JsonProperty("buyer_vat")] public string? BuyerVat { get; set; } } When I run the program, it goes into this webApi method above but my binding object is empty. Im using swagger, and in the box for content I put: {"buyer":{"buyer_vat":"something"}} Is there any better approach?
[]
[]
[ "By default ASP.NET in .NET Core 7 is going to be set up to use System.Text for serialization. JsonProperty is a Newtonsoft.Json attribute. Try using JsonPropertyName instead or alter your serializer to use Newtonsoft.Json.\n" ]
[ -1 ]
[ "asp.net_core_webapi", "c#", "json", "swagger" ]
stackoverflow_0074671072_asp.net_core_webapi_c#_json_swagger.txt
Q: How to link from drawer Item to external website react-native-expo I have here this drawer return ( <NavigationContainer > <Drawer.Navigator initialRouteName="MetalDetector" screenOptions={{ drawerStyle: { backgroundColor: '#8e9dad', width: 220 } }}> <Drawer.Screen name="MetalDetector" component={Home} options={{ headerRight: () => ( <Entypo name="sound" size={24} color="black" /> ), drawerLabel: ' MetalDetector' }} /> <Drawer.Screen name="Settings" component={Settings} options={{drawerLabel: ' Settings'}} /> <Drawer.Screen name="Calibrate" component={Home} options={{drawerLabel: ' Calibration'}}/> <Drawer.Screen name="Feedback" component={Home} options={{drawerLabel: '‍‍‍ Feedback'}}/> <Drawer.Screen name="Website" component={Home} options={{drawerLabel: ' Website'}} /> </Drawer.Navigator> </NavigationContainer> ); } I have added a few drawerScreens, like settings calibration feedback and website, but how can i make it that when someone clicks on website that he gets actually redirected to a website? Currently I have <Drawer.Screen name="Website" component={Home} options={{drawerLabel: ' Website'}} /> I tried to add an onclick and onPress function but when removing the component I get an error and with using the component I cant link to anywhere How can i fix this? A: You can do this using drawerContent prop. Check out this Snack to see how it's done.
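Since the Snack itself is not reproduced here, a minimal sketch of the drawerContent approach (assumes @react-navigation/drawer and React Native's built-in Linking; the URL is illustrative):
import { Linking } from 'react-native';
import { DrawerContentScrollView, DrawerItemList, DrawerItem } from '@react-navigation/drawer';

function CustomDrawerContent(props) {
  return (
    <DrawerContentScrollView {...props}>
      {/* keeps MetalDetector, Settings, Calibrate, Feedback as normal screens */}
      <DrawerItemList {...props} />
      {/* extra item that opens the browser instead of navigating */}
      <DrawerItem label="Website" onPress={() => Linking.openURL('https://example.com')} />
    </DrawerContentScrollView>
  );
}

// then drop the Website screen and pass the custom content:
// <Drawer.Navigator drawerContent={(props) => <CustomDrawerContent {...props} />} ...>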
How to link from drawer Item to external website react-native-expo
I have here this drawer return ( <NavigationContainer > <Drawer.Navigator initialRouteName="MetalDetector" screenOptions={{ drawerStyle: { backgroundColor: '#8e9dad', width: 220 } }}> <Drawer.Screen name="MetalDetector" component={Home} options={{ headerRight: () => ( <Entypo name="sound" size={24} color="black" /> ), drawerLabel: ' MetalDetector' }} /> <Drawer.Screen name="Settings" component={Settings} options={{drawerLabel: ' Settings'}} /> <Drawer.Screen name="Calibrate" component={Home} options={{drawerLabel: ' Calibration'}}/> <Drawer.Screen name="Feedback" component={Home} options={{drawerLabel: '‍‍‍ Feedback'}}/> <Drawer.Screen name="Website" component={Home} options={{drawerLabel: ' Website'}} /> </Drawer.Navigator> </NavigationContainer> ); } I have added a few drawerScreens, like settings calibration feedback and website, but how can i make it that when someone clicks on website that he gets actually redirected to a website? Currently I have <Drawer.Screen name="Website" component={Home} options={{drawerLabel: ' Website'}} /> I tried to add an onclick and onPress function but when removing the component I get an error and with using the component I cant link to anywhere How can i fix this?
[ "You can do this using drawerContent prop. Check out this Snack to see how it's done.\n" ]
[ 0 ]
[]
[]
[ "expo", "react_native", "react_navigation" ]
stackoverflow_0074569849_expo_react_native_react_navigation.txt
Q: How to use an SVG to trigger a HTML select tag to open I'm using an SVG to replace the default arrow on a select html tag. However if I place the SVG in the select tag bar. It blocks the ability to click the select bar beneath it. I'm NextJs with Tailwind and Heroicons <select htmlFor="selector" name="selected" className="w-full p-2.5 border-2 border-gray-500 rounded-md outline-none text-black cursor-pointer appearance-none" > {/* //leave as the default text// */} <option disabled selected value> {" "} -- Select a cycle --{" "} </option> {/* //dummy options//fill with data pull// */} <option value="volvo">Volvo</option> <option value="saab">Saab</option> <option value="mercedes">Mercedes</option> <option value="audi">Audi</option> <option value="volvo">Volvo</option> <option value="saab">Saab</option> <option value="mercedes">Mercedes</option> <option value="audi">Audi</option> </select> <span className="absolute bottom-9 left-2 px-1 border-l-2 border-r-2 border-white bg-white text-gray-500"> Cycle </span> {/* //down arrow// */} <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" className="w-5 h-5 absolute right-2 top-3.5 text-gray-500 " > <path stroke-linecap="round" stroke-linejoin="round" d="M19.5 13.5L12 21m0 0l-7.5-7.5M12 21V3" /> </svg> If you can help me, show me how the down arrow SVG can also open the select tag it sits on top of it A: Give the SVG a pointer-events: none property. <select htmlFor="selector" name="selected" className="w-full p-2.5 border-2 border-gray-500 rounded-md outline-none text-black cursor-pointer appearance-none" > {/* //leave as the default text// */} <option disabled selected value> {" "} -- Select a cycle --{" "} </option> {/* //dummy options//fill with data pull// */} <option value="volvo">Volvo</option> <option value="saab">Saab</option> <option value="mercedes">Mercedes</option> <option value="audi">Audi</option> <option value="volvo">Volvo</option> <option value="saab">Saab</option> <option value="mercedes">Mercedes</option> <option value="audi">Audi</option> </select> <span className="absolute bottom-9 left-2 px-1 border-l-2 border-r-2 border-white bg-white text-gray-500"> Cycle </span> {/* //down arrow// */} <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" className="w-5 h-5 absolute right-2 top-3.5 text-gray-500" pointerEvents="none" > <path stroke-linecap="round" stroke-linejoin="round" d="M19.5 13.5L12 21m0 0l-7.5-7.5M12 21V3" /> </svg> This will mean clicks "fall through" the element.
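Since the markup already uses Tailwind, the same fix can also be expressed with its utility class instead of the attribute, e.g.:
className="w-5 h-5 absolute right-2 top-3.5 text-gray-500 pointer-events-none"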
How to use an SVG to trigger a HTML select tag to open
I'm using an SVG to replace the default arrow on a select html tag. However if I place the SVG in the select tag bar. It blocks the ability to click the select bar beneath it. I'm NextJs with Tailwind and Heroicons <select htmlFor="selector" name="selected" className="w-full p-2.5 border-2 border-gray-500 rounded-md outline-none text-black cursor-pointer appearance-none" > {/* //leave as the default text// */} <option disabled selected value> {" "} -- Select a cycle --{" "} </option> {/* //dummy options//fill with data pull// */} <option value="volvo">Volvo</option> <option value="saab">Saab</option> <option value="mercedes">Mercedes</option> <option value="audi">Audi</option> <option value="volvo">Volvo</option> <option value="saab">Saab</option> <option value="mercedes">Mercedes</option> <option value="audi">Audi</option> </select> <span className="absolute bottom-9 left-2 px-1 border-l-2 border-r-2 border-white bg-white text-gray-500"> Cycle </span> {/* //down arrow// */} <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" className="w-5 h-5 absolute right-2 top-3.5 text-gray-500 " > <path stroke-linecap="round" stroke-linejoin="round" d="M19.5 13.5L12 21m0 0l-7.5-7.5M12 21V3" /> </svg> If you can help me, show me how the down arrow SVG can also open the select tag it sits on top of it
[ "Give the SVG a pointer-events: none property.\n<select\n htmlFor=\"selector\"\n name=\"selected\"\n className=\"w-full p-2.5 border-2 border-gray-500 rounded-md outline-none text-black cursor-pointer appearance-none\"\n >\n {/* //leave as the default text// */}\n <option disabled selected value>\n {\" \"}\n -- Select a cycle --{\" \"}\n </option>\n\n {/* //dummy options//fill with data pull// */}\n <option value=\"volvo\">Volvo</option>\n <option value=\"saab\">Saab</option>\n <option value=\"mercedes\">Mercedes</option>\n <option value=\"audi\">Audi</option>\n <option value=\"volvo\">Volvo</option>\n <option value=\"saab\">Saab</option>\n <option value=\"mercedes\">Mercedes</option>\n <option value=\"audi\">Audi</option>\n </select>\n <span className=\"absolute bottom-9 left-2 px-1 border-l-2 border-r-2 border-white bg-white text-gray-500\">\n Cycle\n </span>\n {/* //down arrow// */}\n <svg\n xmlns=\"http://www.w3.org/2000/svg\"\n fill=\"none\"\n viewBox=\"0 0 24 24\"\n stroke-width=\"1.5\"\n stroke=\"currentColor\"\n className=\"w-5 h-5 absolute right-2 top-3.5 text-gray-500\"\n pointerEvents=\"none\"\n >\n <path\n stroke-linecap=\"round\"\n stroke-linejoin=\"round\"\n d=\"M19.5 13.5L12 21m0 0l-7.5-7.5M12 21V3\"\n />\n </svg>\n\nThis will mean clicks \"fall through\" the element.\n" ]
[ 1 ]
[]
[]
[ "html", "next.js", "reactjs", "svg", "tailwind_css" ]
stackoverflow_0074673398_html_next.js_reactjs_svg_tailwind_css.txt
Q: Minio: how to get right link to display image on html I need to get images from Minio bucket, but I cannot display that image. I found out that problem was in link. I cannot open it even with browser. So, here is the problem: GET https://127.0.0.1:9000/myphotos/Jungles.jpeg?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=7PAB237ARMGX7RTYHUSL%2F20221202%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20221202T133028Z&X-Amz-Expires=604800&X-Amz-Security-Token=eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJhY2Nlc3NLZXkiOiI3UEFCMjM3QVJNR1g3UlRZSFVTTCIsImV4cCI6MTY3MDAyNzIyNiwicGFyZW50IjoiS2VtYWxBdGRheWV3In0.okb2wO_iLhOlwWeNbixec4R5MRgGw2_KCY_SB9NfuseUI3g9gzTccycbaA6UnZiuuLzbpxPM5tR_hnxa_Y8zWQ&X-Amz-SignedHeaders=host&versionId=null&X-Amz-Signature=281fab24bbe3d651f89c160f5a613512f5e4503f40300ef0008ac94bd9c8f90b net::ERR_CONNECTION_REFUSED My code that has been used to upload that file: package main import ( "context" "log" "github.com/minio/minio-go/v7" "github.com/minio/minio-go/v7/pkg/credentials" ) func main() { ctx := context.Background() endpoint := "play.minio.io" accessKeyId := "KemalAtdayew" secretAccessKey := "K862008971a!" useSSL := true // init minio client object minioClient, err := minio.New(endpoint, &minio.Options{ Creds: credentials.NewStaticV4(accessKeyId, secretAccessKey, ""), Secure: useSSL, }) if err != nil { log.Fatalln(err) } // make a new bucket called myphoto bucketName := "photobucket" location := "us-east-1" err = minioClient.MakeBucket(ctx, bucketName, minio.MakeBucketOptions{Region: location}) if err != nil { // check to see if we already own this bucket exists, errBucketExists := minioClient.BucketExists(ctx, bucketName) if errBucketExists == nil && exists { log.Printf("We already own %s\n", bucketName) } else { log.Fatalln(err) } } else { log.Printf("Successfully created %s\n", bucketName) } // upload you photos objectName := "Jungles.jpeg" filePath := "/minio-1/Jungles.jpeg" contentType := "image/jpeg" // upload the zip file FPutObject info, err := minioClient.FPutObject(ctx, bucketName, objectName, filePath, minio.PutObjectOptions{ContentType: contentType}) if err != nil { log.Fatalln(err) } log.Printf("Successfully uploaded %s of size %d\n", objectName, info.Size) } I also gave permission and made it public. Still nothing. <!DOCTYPE html> <html> <head> <title> Minio </title> <meta charset="utf-8"> </head> <body> <div> <img src="https://127.0.0.1:9000/myphotos/Jungles.jpeg?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=7PAB237ARMGX7RTYHUSL%2F20221202%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20221202T124101Z&X-Amz-Expires=604800&X-Amz-Security-Token=eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJhY2Nlc3NLZXkiOiI3UEFCMjM3QVJNR1g3UlRZSFVTTCIsImV4cCI6MTY3MDAyNzIyNiwicGFyZW50IjoiS2VtYWxBdGRheWV3In0.okb2wO_iLhOlwWeNbixec4R5MRgGw2_KCY_SB9NfuseUI3g9gzTccycbaA6UnZiuuLzbpxPM5tR_hnxa_Y8zWQ&X-Amz-SignedHeaders=host&versionId=null&X-Amz-Signature=5027bd8021a58548ce6be5dead3b622afd951f157a289320ef7dab7701baa7d2" alt="Photo from Minio"> </div> </body> </html> Tried to change html code. Then, found out that it's not html problem. Tried to share in any other possible way except than, "bucket->click on photo -> click on share" Link is invalid, but there is no other proper way to get link to that image in bucket. A: The path to your local image seems to be strange. Verify if you can open your image manually, and remove all the parameters after the image extension, it should be Forest.jpg A: The path to your local image seems to be strange. 
Verify if you can open your image manually, and remove all the parameters after the image extension, it should be Forest.jpg
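One observation about the setup in the question (not part of the original answers): the file is uploaded to play.minio.io, but the <img> URL points at 127.0.0.1:9000, so the browser asks a server that never received the object — hence ERR_CONNECTION_REFUSED. A sketch of generating a share link from the same client that did the upload, using minio-go's PresignedGetObject (the 24h expiry is illustrative; needs "net/url" and "time" imports):
presignedURL, err := minioClient.PresignedGetObject(ctx, bucketName, objectName, 24*time.Hour, url.Values{})
if err != nil {
    log.Fatalln(err)
}
log.Printf("Use this URL in the <img> src: %s\n", presignedURL.String())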
Minio: how to get right link to display image on html
I need to get images from Minio bucket, but I cannot display that image. I found out that problem was in link. I cannot open it even with browser. So, here is the problem: GET https://127.0.0.1:9000/myphotos/Jungles.jpeg?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=7PAB237ARMGX7RTYHUSL%2F20221202%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20221202T133028Z&X-Amz-Expires=604800&X-Amz-Security-Token=eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJhY2Nlc3NLZXkiOiI3UEFCMjM3QVJNR1g3UlRZSFVTTCIsImV4cCI6MTY3MDAyNzIyNiwicGFyZW50IjoiS2VtYWxBdGRheWV3In0.okb2wO_iLhOlwWeNbixec4R5MRgGw2_KCY_SB9NfuseUI3g9gzTccycbaA6UnZiuuLzbpxPM5tR_hnxa_Y8zWQ&X-Amz-SignedHeaders=host&versionId=null&X-Amz-Signature=281fab24bbe3d651f89c160f5a613512f5e4503f40300ef0008ac94bd9c8f90b net::ERR_CONNECTION_REFUSED My code that has been used to upload that file: package main import ( "context" "log" "github.com/minio/minio-go/v7" "github.com/minio/minio-go/v7/pkg/credentials" ) func main() { ctx := context.Background() endpoint := "play.minio.io" accessKeyId := "KemalAtdayew" secretAccessKey := "K862008971a!" useSSL := true // init minio client object minioClient, err := minio.New(endpoint, &minio.Options{ Creds: credentials.NewStaticV4(accessKeyId, secretAccessKey, ""), Secure: useSSL, }) if err != nil { log.Fatalln(err) } // make a new bucket called myphoto bucketName := "photobucket" location := "us-east-1" err = minioClient.MakeBucket(ctx, bucketName, minio.MakeBucketOptions{Region: location}) if err != nil { // check to see if we already own this bucket exists, errBucketExists := minioClient.BucketExists(ctx, bucketName) if errBucketExists == nil && exists { log.Printf("We already own %s\n", bucketName) } else { log.Fatalln(err) } } else { log.Printf("Successfully created %s\n", bucketName) } // upload you photos objectName := "Jungles.jpeg" filePath := "/minio-1/Jungles.jpeg" contentType := "image/jpeg" // upload the zip file FPutObject info, err := minioClient.FPutObject(ctx, bucketName, objectName, filePath, minio.PutObjectOptions{ContentType: contentType}) if err != nil { log.Fatalln(err) } log.Printf("Successfully uploaded %s of size %d\n", objectName, info.Size) } I also gave permission and made it public. Still nothing. <!DOCTYPE html> <html> <head> <title> Minio </title> <meta charset="utf-8"> </head> <body> <div> <img src="https://127.0.0.1:9000/myphotos/Jungles.jpeg?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=7PAB237ARMGX7RTYHUSL%2F20221202%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20221202T124101Z&X-Amz-Expires=604800&X-Amz-Security-Token=eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJhY2Nlc3NLZXkiOiI3UEFCMjM3QVJNR1g3UlRZSFVTTCIsImV4cCI6MTY3MDAyNzIyNiwicGFyZW50IjoiS2VtYWxBdGRheWV3In0.okb2wO_iLhOlwWeNbixec4R5MRgGw2_KCY_SB9NfuseUI3g9gzTccycbaA6UnZiuuLzbpxPM5tR_hnxa_Y8zWQ&X-Amz-SignedHeaders=host&versionId=null&X-Amz-Signature=5027bd8021a58548ce6be5dead3b622afd951f157a289320ef7dab7701baa7d2" alt="Photo from Minio"> </div> </body> </html> Tried to change html code. Then, found out that it's not html problem. Tried to share in any other possible way except than, "bucket->click on photo -> click on share" Link is invalid, but there is no other proper way to get link to that image in bucket.
[ "The path to your local image seems to be strange. Verify if you can open your image manually, and remove all the parameters after the image extension, it should be Forest.jpg\n", "The path to your local image seems to be strange. Verify if you can open your image manually, and remove all the parameters after the image extension, it should be Forest.jpg\n" ]
[ 1, 0 ]
[]
[]
[ "html", "minio" ]
stackoverflow_0074656068_html_minio.txt
Q: This version of ChromeDriver only supports Chrome version 102 I'm using VS Code and Anaconda3. Currently trying to install ChromeDriver_Binary but, when I try to execute code, I get this error: selenium.common.exceptions.SessionNotCreatedException: Message: session not created: This version of ChromeDriver only supports Chrome version 102 Current browser version is 100.0.4896.127 with binary path C:\Program Files (x86)\Google\Chrome\Application\chrome.exe A: One option is to use chromedriver-autoinstaller to do it all at once: import chromedriver_autoinstaller as chromedriver chromedriver.install() Alternatively use chromedriver-binary-auto to find the required version and install the driver: pip install --upgrade --force-reinstall chromedriver-binary-auto import chromedriver_binary No restarting is required. A: I fixed it, by updating chrome to version 101, downloading chromedriver from https://chromedriver.chromium.org/downloads and rebooting. A: You need to check your current chrome version first, and then download the chrome driver following this version: https://chromedriver.chromium.org/downloads The point here is that we have to make sure both of chrome version are the same A: I had the same issue, I'm running MacOS Monterrey. My Chrome version is Version 104.0.5112.79. I was getting the same error as you; This version of ChromeDriver only supports Chrome version 102 What I did was: Download the version of chromedriver that matched the version of Chrome, in this case 104 https://chromedriver.chromium.org/downloads Opened the location of chromedriver, it's usually under this path: /usr/local/bin Open Finder. Press Command-Shift-G to open the dialogue box Input the following search: /usr/local/bin Replaced the previous chromedriver from that location with the new one I just downloaded. A: chrome browser and the chromedriver.exe(Path provided by the project) versions should match to the same version. A: please follow the below steps: 1- Delete the current chrome driver from visual studio. 2- Download the latest release of the chrome driver. 3- Add the new chrome drive to the project ( follow below steps) 3.a- Copy the chrome driver to the application path on your pc for example:C:\Users\xxx\source\repos\APPAutomation\APPAutomation 3.b- Select the project in Visual Studio and press on Add – existing item 3.c- Select the chromedriver exe file in Visual Studio. 3.d- Go to the properties of the chrome driver and change the “Copy to Output Directory” to “Copy if newer” 4- End the chrome driver task from task manager (back ground process) 5- Delete the Bin file from the project path 6- Build the project in Visual Studio . A: This version of ChromeDriver only supports Chrome version 106 Current browser version is 108.0.4896.127 with binary path C:\Program Files (x86)\Google\Chrome\Application\chrome.exe I was getting exact same error steps I followed: delete the old chromedriver.exe file. download the chromedriver.exe file which is compatible with my chrome version which is 108. so I downloaded 108 and it worked. downloadable link
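A related sketch using the third-party webdriver-manager package (pip install webdriver-manager) with the Selenium 4 Service API, which keeps the driver matched to the installed Chrome automatically:
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager

# downloads (and caches) a chromedriver that matches the installed Chrome, then starts it
driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()))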
This version of ChromeDriver only supports Chrome version 102
I'm using VS Code and Anaconda3. Currently trying to install ChromeDriver_Binary but, when I try to execute code, I get this error: selenium.common.exceptions.SessionNotCreatedException: Message: session not created: This version of ChromeDriver only supports Chrome version 102 Current browser version is 100.0.4896.127 with binary path C:\Program Files (x86)\Google\Chrome\Application\chrome.exe
[ "One option is to use chromedriver-autoinstaller to do it all at once:\nimport chromedriver_autoinstaller as chromedriver\nchromedriver.install()\n\nAlternatively use chromedriver-binary-auto to find the required version and install the driver:\npip install --upgrade --force-reinstall chromedriver-binary-auto\nimport chromedriver_binary\n\nNo restarting is required.\n", "I fixed it, by updating chrome to version 101, downloading chromedriver from https://chromedriver.chromium.org/downloads and rebooting.\n", "You need to check your current chrome version first, and then download the chrome driver following this version: https://chromedriver.chromium.org/downloads\nThe point here is that we have to make sure both of chrome version are the same\n\n\n", "I had the same issue, I'm running MacOS Monterrey. My Chrome version is Version 104.0.5112.79.\nI was getting the same error as you; This version of ChromeDriver only supports Chrome version 102\nWhat I did was:\n\nDownload the version of chromedriver that matched the version of Chrome, in this case 104\nhttps://chromedriver.chromium.org/downloads\n\nOpened the location of chromedriver, it's usually under this path: /usr/local/bin\n\n\n\nOpen Finder.\nPress Command-Shift-G to open the dialogue box\nInput the following search: /usr/local/bin\n\n\nReplaced the previous chromedriver from that location with the new one I just downloaded.\n\n", "chrome browser and the chromedriver.exe(Path provided by the project) versions should match to the same version.\n", "please follow the below steps:\n1- Delete the current chrome driver from visual studio.\n2- Download the latest release of the chrome driver.\n3- Add the new chrome drive to the project ( follow below steps)\n3.a- Copy the chrome driver to the application path on your pc for\nexample:C:\\Users\\xxx\\source\\repos\\APPAutomation\\APPAutomation\n3.b- Select the project in Visual Studio and press on Add – existing item\n3.c- Select the chromedriver exe file in Visual Studio.\n3.d- Go to the properties of the chrome driver and change the “Copy to Output Directory” to “Copy if newer”\n4- End the chrome driver task from task manager (back ground process)\n5- Delete the Bin file from the project path\n6- Build the project in Visual Studio .\n", "This version of ChromeDriver only supports Chrome version 106\nCurrent browser version is 108.0.4896.127 with binary path C:\\Program Files (x86)\\Google\\Chrome\\Application\\chrome.exe\nI was getting exact same error\nsteps I followed:\n\ndelete the old chromedriver.exe file.\n\ndownload the chromedriver.exe file which is compatible with my\nchrome version which is 108. so I downloaded 108 and it worked.\n\n\ndownloadable link\n\n" ]
[ 14, 4, 1, 1, 0, 0, 0 ]
[]
[]
[ "python", "selenium_chromedriver" ]
stackoverflow_0072111139_python_selenium_chromedriver.txt
Q: Problem with building a calculator (Beginner) I'm following an online tutorial on how to build a calculator in C program, but it doesn't work even if I follow it step to step. It always give me 0.000000 as the answer. So I copied the tutor's own code to try, but it doesn't work either. Here's my code: double num1; double num2; char op; printf("Enter a number: "); scanf("%If", &num1); printf("Enter operator: "); scanf(" %c",&op); printf("Enter a number: "); scanf("%If", &num2); if(op == '+'){ printf(" %f", num1 + num2); } else if(op == '-'){ printf(" %f", num1 - num2); } else if (op == '/'){ printf(" %f", num1 / num2); } else if (op == '*'){ printf(" %f", num1 * num2); } else { printf("Invalid Operator"); } Here's the tutorial I'm following: https://www.mikedane.com/programming-languages/c/building-a-better-calculator/ I tried typing the code again, but nothing changes. I really appreciate any help you can provide. A: It should be '%lf' and not 'If'. Keep learning! A: Try this, made small changes to the program and tested it here, https://www.onlinegdb.com/online_c_compiler int main(){ double num1; double num2; char op; char *fmt = "%lf"; printf("Enter a number: "); scanf(fmt, &num1); printf("Enter operator: "); scanf(" %c",&op); printf("Enter a number: "); scanf(fmt, &num2); if(op == '+'){ printf(fmt, num1 + num2); } else if(op == '-'){ printf(fmt, num1 - num2); } else if (op == '/'){ printf(fmt, num1 / num2); } else if (op == '*'){ printf(fmt, num1 * num2); } else { printf("Invalid Operator"); } return 0; } Output: Enter a number: 3 Enter operator: * Enter a number: 3 9.000000
Problem with building a calculator (Beginner)
I'm following an online tutorial on how to build a calculator in C program, but it doesn't work even if I follow it step to step. It always give me 0.000000 as the answer. So I copied the tutor's own code to try, but it doesn't work either. Here's my code: double num1; double num2; char op; printf("Enter a number: "); scanf("%If", &num1); printf("Enter operator: "); scanf(" %c",&op); printf("Enter a number: "); scanf("%If", &num2); if(op == '+'){ printf(" %f", num1 + num2); } else if(op == '-'){ printf(" %f", num1 - num2); } else if (op == '/'){ printf(" %f", num1 / num2); } else if (op == '*'){ printf(" %f", num1 * num2); } else { printf("Invalid Operator"); } Here's the tutorial I'm following: https://www.mikedane.com/programming-languages/c/building-a-better-calculator/ I tried typing the code again, but nothing changes. I really appreciate any help you can provide.
[ "It should be '%lf' and not 'If'.\nKeep learning!\n", "Try this, made small changes to the program and tested it here, https://www.onlinegdb.com/online_c_compiler\nint main(){\n double num1;\n double num2;\n char op;\n char *fmt = \"%lf\";\n printf(\"Enter a number: \");\n scanf(fmt, &num1);\n printf(\"Enter operator: \");\n scanf(\" %c\",&op);\n printf(\"Enter a number: \");\n scanf(fmt, &num2);\n\n if(op == '+'){\n printf(fmt, num1 + num2);\n } else if(op == '-'){\n printf(fmt, num1 - num2);\n } else if (op == '/'){\n printf(fmt, num1 / num2);\n } else if (op == '*'){\n printf(fmt, num1 * num2);\n } else {\n printf(\"Invalid Operator\");\n }\n return 0;\n}\n\nOutput:\nEnter a number: 3\nEnter operator: *\nEnter a number: 3\n 9.000000\n\n" ]
[ 0, 0 ]
[]
[]
[ "c", "calculator" ]
stackoverflow_0074673310_c_calculator.txt
Q: Flutter: Not able to select items in Radio button, in ListTile Im a newbie, Not able to select items in Radio button, inside a ListTile. I tied to use same code without ListTile and working as expected. Looks like combination is not correct or i might be missing something. class _TempState extends State<Temp> { int selectedValue = 0; @override Widget build(BuildContext context) { return Scaffold( body: SafeArea( child: Container( child: Column(children: [ Row( children: [ Expanded( child: Text("Radio button with ListView",))],), Expanded( child: ListView.builder( itemCount: 1, itemBuilder: (BuildContext context, int index) { return OrderItem(); }),), ]))));} Widget OrderItem() { int selectedValue = 0; return ListTile( title: Container( child: Column(children: [ Row( children: [ Expanded( child: Text( "Product Type :", )), Radio<int>( value: 1, groupValue: selectedValue, onChanged: (value) { setState(() { selectedValue = value != null ? value.toInt() : 1; }); }, ), Text('NRML'), Radio<int>( value: 2, groupValue: selectedValue, onChanged: (value) { setState(() { selectedValue = value != null ? value.toInt() : 1; }); }), Text('MARKET'), ],), ]))); }} A: In your second radio you should change your setState to selectedValue = value != null ? value.toInt() : 2; Value you assign to radio will be used to determine which radio is selected. So if you want to select second radio you should assign its value when selecting A: You are updating your selectedValue in wrong way ,first define your selectedValue like this: int? selectedValue; then update your widget like this: Widget OrderItem() {//remove int selectedValue = 0; here return ListTile( title: Container( child: Column( children: [ Row( children: [ Expanded( child: Text( "Product Type :", )), Radio<int>( value: 1, groupValue: selectedValue, onChanged: (value) { setState(() { selectedValue = value; //<-- change this }); }, ), Text('NRML'), Radio<int>( value: 2, groupValue: selectedValue, onChanged: (value) { setState(() { selectedValue = value; //<-- change this }); }), Text('MARKET'), ], ), ], ), ), ); } Also another reason that is not working is that you are define another int selectedValue = 0; in your OrderItem method, you need to remove it. A: selectedValue = value != null ? value.toInt() : 2; Value you assign to radio will be used to determine which radio is selected. So if you want to select second radio you should assign its value when selecting
Flutter: Not able to select items in Radio button, in ListTile
Im a newbie, Not able to select items in Radio button, inside a ListTile. I tied to use same code without ListTile and working as expected. Looks like combination is not correct or i might be missing something. class _TempState extends State<Temp> { int selectedValue = 0; @override Widget build(BuildContext context) { return Scaffold( body: SafeArea( child: Container( child: Column(children: [ Row( children: [ Expanded( child: Text("Radio button with ListView",))],), Expanded( child: ListView.builder( itemCount: 1, itemBuilder: (BuildContext context, int index) { return OrderItem(); }),), ]))));} Widget OrderItem() { int selectedValue = 0; return ListTile( title: Container( child: Column(children: [ Row( children: [ Expanded( child: Text( "Product Type :", )), Radio<int>( value: 1, groupValue: selectedValue, onChanged: (value) { setState(() { selectedValue = value != null ? value.toInt() : 1; }); }, ), Text('NRML'), Radio<int>( value: 2, groupValue: selectedValue, onChanged: (value) { setState(() { selectedValue = value != null ? value.toInt() : 1; }); }), Text('MARKET'), ],), ]))); }}
[ "In your second radio you should change your setState to\nselectedValue = value != null ? value.toInt() : 2;\n\nValue you assign to radio will be used to determine which radio is selected. So if you want to select second radio you should assign its value when selecting\n", "You are updating your selectedValue in wrong way ,first define your selectedValue like this:\nint? selectedValue;\n\nthen update your widget like this:\nWidget OrderItem() {//remove int selectedValue = 0; here\n return ListTile(\n title: Container(\n child: Column(\n children: [\n Row(\n children: [\n Expanded(\n child: Text(\n \"Product Type :\",\n )),\n Radio<int>(\n value: 1,\n groupValue: selectedValue,\n onChanged: (value) {\n setState(() {\n selectedValue = value; //<-- change this\n });\n },\n ),\n Text('NRML'),\n Radio<int>(\n value: 2,\n groupValue: selectedValue,\n onChanged: (value) {\n setState(() {\n selectedValue = value; //<-- change this\n });\n }),\n Text('MARKET'),\n ],\n ),\n ],\n ),\n ),\n );\n }\n\nAlso another reason that is not working is that you are define another int selectedValue = 0; in your OrderItem method, you need to remove it.\n\n", "selectedValue = value != null ? value.toInt() : 2;\nValue you assign to radio will be used to determine which radio is selected. So if you want to select second radio you should assign its value when selecting\n" ]
[ 0, 0, 0 ]
[]
[]
[ "dart", "flutter" ]
stackoverflow_0074673300_dart_flutter.txt
Q: Can you explain me the output I was in class section of python programming and I am confused here. I have learned that super is used to call the method of parent class but here Employee is not a parent of Programmer yet it's called (showing the result of getLanguage method). What I am missing? This is the code. class Employee: company= "Google" language = "java" def showDetails(self): print("This is an employee"); def getLanguage(self): print(f"1. The language is {self.language}"); class Programmer: language= "Python" company = "Youtubeeee" def getLanguage(self): super().getLanguage(); print(f"2. The language is {self.language}") def showDetails(self): print("This is an programmer") class Programmer2(Programmer , Employee): language= "C++" def getLanguage(self): super().getLanguage(); print(f"3. The language is {self.language}") p2 = Programmer2(); p2.getLanguage(); This is the output, 1. The language is C++ 2. The language is C++ 3. The language is C++ A: You've bumped into one of the reasons why super exists. From the docs, super delegates method calls to a parent or sibling class of type. Python bases class inheritance on a dynamic Method Resolution Order (MRO). When you created a class with multiple inheritance, those two parent classes became siblings. The left most is first in MRO and the right one is next. This isn't a property of the Programmer class, Its a property of the Programmer2 class that decided to do multiple inheritance. If you use Programmer differently, as in, p3 = Programmer() p3.getLanguage() You get the error AttributeError: 'super' object has no attribute 'getLanguage' because its MRO only goes to the base object which doesn't have the method. You can view the MRO of the class with its __mro__ attribute. Programmer.__mro__: (<class '__main__.Programmer'>, <class 'object'>) Programmer2.__mro__: (<class '__main__.Programmer2'>, <class '__main__.Programmer'>, <class '__main__.Employee'>, <class 'object'>) A: Here is some more explanation about the mechanics how the code in the question works. In its general form, super() can be called with two arguments super(C, obj) where C is a class and obj is an object. The object obj determines which classes should be searched for a given attribute and the Method Resolution Order (MRO) in which these classes should be searched. The class argument is used to restrict this search to only these classes that appear in MRO after C. By PEP 3135 when super() is used without arguments inside a class definition, the class argument is automatically taken to be the class being defined, and the object argument is the object upon which super acts. In the code in the question, when you call p2.getLanguage() then inside the definition of Programmer2, the code super().getLanguage() is tacitly replaced by super(Programmer2, p2).getLanguage(). MRO of p2 is Programmer2 -> Programmer -> Employee -> object, and the search for getLanguage starts after Programmer2 i.e. with Programmer class, and it succeeds in this class. Then, in the process of executing getLanguage method of Programmer, we again encounter super().getLanguage(). This is now replaced by super(Programmer, p2).getLanguage() since we are still working with the same object, but the call to super is inside Programmer class. MRO of p2 is still Programmer2 -> Programmer -> Employee -> object, but now the search for getLanguage starts after Programmer i.e. with Employee class, and it succeeds in this class. 
In this way, even though Programmer does not inherit from Employee, the code super().getLanguage() inside Programmer succeeds, since it is applied to an object that has Employee in its MRO.
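To make the explanation above concrete, a small runnable check against the classes from the question (the explicit two-argument super calls mirror what the implicit ones do):
p2 = Programmer2()
print([c.__name__ for c in type(p2).__mro__])
# ['Programmer2', 'Programmer', 'Employee', 'object']

super(Programmer2, p2).getLanguage()   # search starts after Programmer2 -> Programmer.getLanguage
super(Programmer, p2).getLanguage()    # search starts after Programmer  -> Employee.getLanguage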
Can you explain me the output
I was in class section of python programming and I am confused here. I have learned that super is used to call the method of parent class but here Employee is not a parent of Programmer yet it's called (showing the result of getLanguage method). What I am missing? This is the code. class Employee: company= "Google" language = "java" def showDetails(self): print("This is an employee"); def getLanguage(self): print(f"1. The language is {self.language}"); class Programmer: language= "Python" company = "Youtubeeee" def getLanguage(self): super().getLanguage(); print(f"2. The language is {self.language}") def showDetails(self): print("This is an programmer") class Programmer2(Programmer , Employee): language= "C++" def getLanguage(self): super().getLanguage(); print(f"3. The language is {self.language}") p2 = Programmer2(); p2.getLanguage(); This is the output, 1. The language is C++ 2. The language is C++ 3. The language is C++
[ "You've bumped into one of the reasons why super exists. From the docs, super delegates method calls to a parent or sibling class of type. Python bases class inheritance on a dynamic Method Resolution Order (MRO). When you created a class with multiple inheritance, those two parent classes became siblings. The left most is first in MRO and the right one is next.\nThis isn't a property of the Programmer class, Its a property of the Programmer2 class that decided to do multiple inheritance. If you use Programmer differently, as in,\np3 = Programmer()\np3.getLanguage()\n\nYou get the error AttributeError: 'super' object has no attribute 'getLanguage' because its MRO only goes to the base object which doesn't have the method.\nYou can view the MRO of the class with its __mro__ attribute.\nProgrammer.__mro__:\n (<class '__main__.Programmer'>, <class 'object'>)\n\nProgrammer2.__mro__:\n (<class '__main__.Programmer2'>, <class '__main__.Programmer'>, \n <class '__main__.Employee'>, <class 'object'>)\n\n", "Here is some more explanation about the mechanics how the code in the question works. In its general form, super() can be called with two arguments super(C, obj) where C is a class and obj is an object. The object obj determines which classes should be searched for a given attribute and the Method Resolution Order (MRO) in which these classes should be searched. The class argument is used to restrict this search to only these classes that appear in MRO after C.\nBy PEP 3135 when super() is used without arguments inside a class definition, the class argument is automatically taken to be the class being defined, and the object argument is the object upon which super acts.\nIn the code in the question, when you call p2.getLanguage() then inside the definition of Programmer2, the code super().getLanguage() is tacitly replaced by super(Programmer2, p2).getLanguage(). MRO of p2 is Programmer2 -> Programmer -> Employee -> object, and the search for getLanguage starts after Programmer2 i.e. with Programmer class, and it succeeds in this class.\nThen, in the process of executing getLanguage method of Programmer, we again encounter super().getLanguage(). This is now replaced by super(Programmer, p2).getLanguage() since we are still working with the same object, but the call to super is inside Programmer class. MRO of p2 is still Programmer2 -> Programmer -> Employee -> object, but now the search for getLanguage starts after Programmer i.e. with Employee class, and it succeeds in this class.\nIn this way, even though Programmer does not inherit from Employee, the code super().getLanguage() inside Programmer succeeds, since it is applied to an object that has Employee in its MRO.\n" ]
[ 3, 1 ]
[]
[]
[ "python" ]
stackoverflow_0074673076_python.txt
Q: Query for an integer array from PostreSQL always returns []uint8 Take a simple PostreSQL db with an integer array: CREATE TABLE foo ( id serial PRIMARY KEY, bar integer[] ); INSERT INTO foo VALUES(DEFAULT, '{1234567, 20, 30, 40}'); Using pq, these values are for some reason being retrieved as an array of []uint8. The documentation says that integer types are returned as int64. Does this not apply to arrays as well? db, err := sql.Open("postgres", "user=a_user password=your_pwd dbname=blah") if err != nil { fmt.Println(err) } var ret []int err = db.QueryRow("SELECT bar FROM foo WHERE id=$1", 1).Scan(&ret) if err != nil { fmt.Println(err) } fmt.Println(ret) Output: sql: Scan error on column index 0: unsupported Scan, storing driver.Value type []uint8 into type *[]int64 [] A: You cannot use a slice of int as a driver.Value. The arguments to Scan must be of one of the supported types, or implement the sql.Scanner interface. The reason you're seeing []uint8 in the error message is that the raw value returned from the database is a []byte slice, for which []uint8 is a synonym. To interpret that []byte slice appropriately as a custom PostgreSQL array type, you should use the appropriate array types defined in the pq package, such as the Int64Array. Try something like this: var ret pq.Int64Array err = db.QueryRow("SELECT bar FROM foo WHERE id=$1", 1).Scan(&ret) if err != nil { fmt.Println(err) } fmt.Println(ret) A: The problem will be more severe if you use fetching multiple rows. The above code works for a single row, to fetch multiple rows use like this `rows, err := db.QueryContext(ctx, stmt, courseCode) if err != nil { return nil, err } defer rows.Close() var feedbacks []*Feedback1 for rows.Next() { var feedback Feedback1 var ret pq.Int64Array var ret1 pq.Int64Array err := rows.Scan( &feedback.ID, &ret, &ret1, ) if err != nil { return nil, err } //for loop to convert int64 to int for i:=0;i<len(ret);i++{ feedback.UnitFeedback = append(feedback.UnitFeedback,int(ret[i]))} for i:=0;i<len(ret1);i++{ feedback.GeneralFeedback = append(feedback.GeneralFeedback,int(ret1[i]))} feedbacks = append(feedbacks, &feedback) }`
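A tidier sketch of the multi-row variant from the second answer (same foo table; assumes database/sql with github.com/lib/pq, and converts the pq.Int64Array back to a plain slice just for printing):
rows, err := db.Query("SELECT id, bar FROM foo")
if err != nil {
    log.Fatal(err)
}
defer rows.Close()
for rows.Next() {
    var id int64
    var bar pq.Int64Array
    if err := rows.Scan(&id, &bar); err != nil {
        log.Fatal(err)
    }
    fmt.Println(id, []int64(bar))
}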
Query for an integer array from PostreSQL always returns []uint8
Take a simple PostreSQL db with an integer array: CREATE TABLE foo ( id serial PRIMARY KEY, bar integer[] ); INSERT INTO foo VALUES(DEFAULT, '{1234567, 20, 30, 40}'); Using pq, these values are for some reason being retrieved as an array of []uint8. The documentation says that integer types are returned as int64. Does this not apply to arrays as well? db, err := sql.Open("postgres", "user=a_user password=your_pwd dbname=blah") if err != nil { fmt.Println(err) } var ret []int err = db.QueryRow("SELECT bar FROM foo WHERE id=$1", 1).Scan(&ret) if err != nil { fmt.Println(err) } fmt.Println(ret) Output: sql: Scan error on column index 0: unsupported Scan, storing driver.Value type []uint8 into type *[]int64 []
[ "You cannot use a slice of int as a driver.Value. The arguments to Scan must be of one of the supported types, or implement the sql.Scanner interface.\nThe reason you're seeing []uint8 in the error message is that the raw value returned from the database is a []byte slice, for which []uint8 is a synonym.\nTo interpret that []byte slice appropriately as a custom PostgreSQL array type, you should use the appropriate array types defined in the pq package, such as the Int64Array.\nTry something like this:\nvar ret pq.Int64Array\nerr = db.QueryRow(\"SELECT bar FROM foo WHERE id=$1\", 1).Scan(&ret)\nif err != nil {\n fmt.Println(err)\n}\n\nfmt.Println(ret)\n\n", "The problem will be more severe if you use fetching multiple rows.\nThe above code works for a single row, to fetch multiple rows use like this\n`rows, err := db.QueryContext(ctx, stmt, courseCode)\nif err != nil {\nreturn nil, err\n}\ndefer rows.Close()\nvar feedbacks []*Feedback1\n\nfor rows.Next() {\n var feedback Feedback1\n var ret pq.Int64Array\n var ret1 pq.Int64Array\n err := rows.Scan(\n &feedback.ID,\n &ret,\n &ret1,\n )\n if err != nil {\n return nil, err\n }\n\n //for loop to convert int64 to int\n for i:=0;i<len(ret);i++{\n feedback.UnitFeedback = append(feedback.UnitFeedback,int(ret[i]))}\n\n for i:=0;i<len(ret1);i++{\n feedback.GeneralFeedback = append(feedback.GeneralFeedback,int(ret1[i]))}\n\n feedbacks = append(feedbacks, &feedback)\n}`\n\n" ]
[ 16, 0 ]
[]
[]
[ "go", "pq" ]
stackoverflow_0047962615_go_pq.txt
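For the pq question above, a minimal end-to-end sketch of the accepted approach, assuming the lib/pq driver and the foo table from that post (connection parameters are placeholders copied from the question): scan into pq.Int64Array, then copy into a plain []int if the rest of the code expects that type.

package main

import (
	"database/sql"
	"fmt"
	"log"

	"github.com/lib/pq" // used for pq.Int64Array; importing it also registers the "postgres" driver
)

func main() {
	// Placeholder connection string taken from the question — adjust for the real server.
	db, err := sql.Open("postgres", "user=a_user password=your_pwd dbname=blah")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// pq.Int64Array implements sql.Scanner, so Scan can decode the integer[] column.
	var arr pq.Int64Array
	if err := db.QueryRow("SELECT bar FROM foo WHERE id=$1", 1).Scan(&arr); err != nil {
		log.Fatal(err)
	}

	// Convert to a plain []int only where that exact type is needed.
	ints := make([]int, len(arr))
	for i, v := range arr {
		ints[i] = int(v)
	}
	fmt.Println(ints) // e.g. [1234567 20 30 40]
}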
Q: Dynamic name for a spring batch job I defined a spring batch job very simple like below. I want to change its registered name using a parameter received (which is added to the spring batch parameter list of the job as jobName) @Bean @JobScope //this doesn't work throws exception 'No context holder available for job scope' Job genericJob (JobNotifierListener listener, Step genericStep1, Step genericStep2, @Value("#{jobParameters['jobName']}") String jobName ) { return jobBuilderFactory.get(jobName + "GenericJob") .incrementer(new RunIdIncrementer()) .listener(listener) .start(genericStep1) .next(genericStep2) .build(); } How can I configure the job so that the name of the job is dynamically changed using the input batch parameter jobName? (as adding @JobScope to access the spring batch context doesn't work, throws error) A: The job name should not be a job parameter. Job parameters are designed for "business" runtime parameters, not technical configuration parameters. An application property or a system property is better suited for your case: @Bean //@JobScope // no need for this Job genericJob (JobNotifierListener listener, Step genericStep1, Step genericStep2, @Value("#{systemProperties['jobName']}") String jobName ) { return jobBuilderFactory.get(jobName + "GenericJob") .incrementer(new RunIdIncrementer()) .listener(listener) .start(genericStep1) .next(genericStep2) .build(); }
Dynamic name for a spring batch job
I defined a spring batch job very simple like below. I want to change its registered name using a parameter received (which is added to the spring batch parameter list of the job as jobName) @Bean @JobScope //this doesn't work throws exception 'No context holder available for job scope' Job genericJob (JobNotifierListener listener, Step genericStep1, Step genericStep2, @Value("#{jobParameters['jobName']}") String jobName ) { return jobBuilderFactory.get(jobName + "GenericJob") .incrementer(new RunIdIncrementer()) .listener(listener) .start(genericStep1) .next(genericStep2) .build(); } How can I configure the job so that the name of the job is dynamically changed using the input batch parameter jobName? (as adding @JobScope to access the spring batch context doesn't work, throws error)
[ "The job name should not be a job parameter. Job parameters are designed for \"business\" runtime parameters, not technical configuration parameters. An application property or a system property is better suited for your case:\n@Bean\n//@JobScope // no need for this\nJob genericJob (JobNotifierListener listener,\n Step genericStep1, Step genericStep2,\n @Value(\"#{systemProperties['jobName']}\") String jobName\n ) {\n return jobBuilderFactory.get(jobName + \"GenericJob\")\n .incrementer(new RunIdIncrementer())\n .listener(listener)\n .start(genericStep1)\n .next(genericStep2)\n .build();\n }\n\n" ]
[ 0 ]
[]
[]
[ "spring", "spring_batch" ]
stackoverflow_0074586632_spring_spring_batch.txt
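For the Spring Batch answer above, the jobName system property has to exist before the application context builds the job bean, otherwise the @Value expression resolves to null. A hedged sketch of how it might be supplied — the class name and property value here are made up for illustration and assume a Spring Boot launcher:

// Passing the job name as a JVM system property (value is illustrative):
//   java -DjobName=nightlyExport -jar batch-app.jar
//
// Or setting it programmatically before the context starts:
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class BatchLauncher {
    public static void main(String[] args) {
        // Must run before the context is refreshed so the bean definition can read it.
        System.setProperty("jobName", "nightlyExport");
        SpringApplication.run(BatchLauncher.class, args);
    }
}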
Q: How can I establish a decreasing relationship between variables? I am a teacher of Primary Education and Early Childhood Education and I am trying to generate a simulator through NetLogo on how fertilization and pesticides are decimating the butterfly population. However, despite having read the manual, I am not managing to program the code to make it work. My problem is that although I set the turtles I can't establish the following relationship between the variables/buttons: If butterflies randomly touch a plant (which is fertilized with pesticide) its pollinating capacity is reduced by a certain percentage (depends on the amount of pesticide) My problem is that I can't get the pollination capacity of the butterfly to be set to 100% initially and that the greater the amount of pesticide, the lower its pollination capacity is when touching a flower. Currently, although the amount of pesticide is the highest, there are peaks where its pollination capacity increases instead of being reduced. breed [butterflies butterfly] breed [flowers flower] globals [ butterfliesless-neighborhoods ;; how many patches have no butterflies in any neighboring patches? pollinating-capacity ;; measures how well-bivouaced the butterflies are ] patches-own [ butterflies-nearby ;; how many butterflies in neighboring patches? ] flowers-own [ carried-butterflies ;; the butterflies I'm carrying (or nobody if I'm not carrying in) found-bivouac? ;; becomes true when I find a bivouac to drop it in ] to setup clear-all set-default-shape butterflies "butterflies" set-default-shape flowers "flower" ask patches [ set pcolor green + (random-float 0.8) - 0.4] ;; varying the green just makes it look nicer create-butterflies num-butterflies [ set color white set size 1.5 ;; easier to see setxy random-xcor random-ycor ] create-flowers num-flowers [ set color brown set size 1.5 ;; easier to see set carried-butterflies nobody set found-bivouac? false setxy random-xcor random-ycor ] reset-ticks end to update-butterflies-counts ask patches [ set butterflies-nearby (sum [count butterflies-here] of neighbors) ] set butterfliesless-neighborhoods (count patches with [butterflies-nearby = 0]) end to calculate-pollinating-capacity set pollinating-capacity (butterfliesless-neighborhoods / (count patches with [not any? butterflies-here])) * 100 end to go ask flowers [ ifelse carried-butterflies = nobody [ search-for-butterflies ] ;; find a butterflies and pick it up [ ifelse found-bivouac? [ find-empty-spot ] ;; find an empty spot to drop the butterflies [ find-new-bivouac ] ] ;; find a bivouac to drop the butterflies in wiggle fd 1 if carried-butterflies != nobody ;; bring my butterflies to where I just moved to [ ask carried-butterflies [ move-to myself ] ] ] ask butterflies with [not hidden?] [ wiggle fd pesticide-amount ] tick end to wiggle ;; turtle procedure rt random 50 - random 50 end to search-for-butterflies ;; flowers procedure set carried-butterflies one-of butterflies-here with [not hidden?] if (carried-butterflies != nobody) [ ask carried-butterflies [ hide-turtle ] ;; make the butterflies invisible to other flowers set color blue ;; turn flower blue while carrying butterflies fd 1 ] end to find-new-bivouac ;; flowers procedure if any? butterflies-here with [not hidden?] [ set found-bivouac? true ] end to find-empty-spot ;; flowers procedure if all? butterflies-here [hidden?] 
[ ask carried-butterflies [ show-turtle ] ;; make the butterflies visible again set color brown ;; set my own color back to brown set carried-butterflies nobody set found-bivouac? false rt random 360 fd 20 ] end Defined Buttons A: Your screenshot is a nice start with sliders, buttons and plots. In the code tab consider making two breeds of turtles: butterflies and plants. breed [butterflies butterfly] butterflies-own [pollinatingCapacity] breed [plants plant] plants-own [pesticideAmount] And then think about how you might decrease pollinatingCapacity of a butterfly as it on the same patch as a plant. A butterfly can see if any plants are on the same patch using plants-here
How can I establish a decreasing relationship between variables?
I am a teacher of Primary Education and Early Childhood Education and I am trying to generate a simulator through NetLogo on how fertilization and pesticides are decimating the butterfly population. However, despite having read the manual, I am not managing to program the code to make it work. My problem is that although I set the turtles I can't establish the following relationship between the variables/buttons: If butterflies randomly touch a plant (which is fertilized with pesticide) its pollinating capacity is reduced by a certain percentage (depends on the amount of pesticide) My problem is that I can't get the pollination capacity of the butterfly to be set to 100% initially and that the greater the amount of pesticide, the lower its pollination capacity is when touching a flower. Currently, although the amount of pesticide is the highest, there are peaks where its pollination capacity increases instead of being reduced. breed [butterflies butterfly] breed [flowers flower] globals [ butterfliesless-neighborhoods ;; how many patches have no butterflies in any neighboring patches? pollinating-capacity ;; measures how well-bivouaced the butterflies are ] patches-own [ butterflies-nearby ;; how many butterflies in neighboring patches? ] flowers-own [ carried-butterflies ;; the butterflies I'm carrying (or nobody if I'm not carrying in) found-bivouac? ;; becomes true when I find a bivouac to drop it in ] to setup clear-all set-default-shape butterflies "butterflies" set-default-shape flowers "flower" ask patches [ set pcolor green + (random-float 0.8) - 0.4] ;; varying the green just makes it look nicer create-butterflies num-butterflies [ set color white set size 1.5 ;; easier to see setxy random-xcor random-ycor ] create-flowers num-flowers [ set color brown set size 1.5 ;; easier to see set carried-butterflies nobody set found-bivouac? false setxy random-xcor random-ycor ] reset-ticks end to update-butterflies-counts ask patches [ set butterflies-nearby (sum [count butterflies-here] of neighbors) ] set butterfliesless-neighborhoods (count patches with [butterflies-nearby = 0]) end to calculate-pollinating-capacity set pollinating-capacity (butterfliesless-neighborhoods / (count patches with [not any? butterflies-here])) * 100 end to go ask flowers [ ifelse carried-butterflies = nobody [ search-for-butterflies ] ;; find a butterflies and pick it up [ ifelse found-bivouac? [ find-empty-spot ] ;; find an empty spot to drop the butterflies [ find-new-bivouac ] ] ;; find a bivouac to drop the butterflies in wiggle fd 1 if carried-butterflies != nobody ;; bring my butterflies to where I just moved to [ ask carried-butterflies [ move-to myself ] ] ] ask butterflies with [not hidden?] [ wiggle fd pesticide-amount ] tick end to wiggle ;; turtle procedure rt random 50 - random 50 end to search-for-butterflies ;; flowers procedure set carried-butterflies one-of butterflies-here with [not hidden?] if (carried-butterflies != nobody) [ ask carried-butterflies [ hide-turtle ] ;; make the butterflies invisible to other flowers set color blue ;; turn flower blue while carrying butterflies fd 1 ] end to find-new-bivouac ;; flowers procedure if any? butterflies-here with [not hidden?] [ set found-bivouac? true ] end to find-empty-spot ;; flowers procedure if all? butterflies-here [hidden?] [ ask carried-butterflies [ show-turtle ] ;; make the butterflies visible again set color brown ;; set my own color back to brown set carried-butterflies nobody set found-bivouac? 
false rt random 360 fd 20 ] end Defined Buttons
[ "Your screenshot is a nice start with sliders, buttons and plots. In the code tab consider making two breeds of turtles: butterflies and plants.\nbreed [butterflies butterfly]\nbutterflies-own [pollinatingCapacity]\n\nbreed [plants plant]\nplants-own [pesticideAmount]\n\nAnd then think about how you might decrease pollinatingCapacity of a butterfly as it on the same patch as a plant. A butterfly can see if any plants are on the same patch using plants-here\n" ]
[ 0 ]
[]
[]
[ "netlogo" ]
stackoverflow_0074660399_netlogo.txt
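Building on the breeds sketched in the answer above, here is one minimal, untested way to wire the decreasing relationship the asker describes. It assumes num-butterflies is a slider and pesticideAmount is a plants-own value between 0 and 100; each butterfly starts at 100 % capacity and loses capacity only when it shares a patch with a treated plant.

to setup-butterflies
  create-butterflies num-butterflies [
    set pollinatingCapacity 100          ;; every butterfly starts at 100 %
    setxy random-xcor random-ycor
  ]
end

to move-butterflies
  ask butterflies [
    rt random 50 - random 50             ;; same wiggle as in the question
    fd 1
    if any? plants-here [                ;; touching a treated plant reduces capacity
      let dose [pesticideAmount] of one-of plants-here
      set pollinatingCapacity max (list 0 (pollinatingCapacity - dose))
    ]
  ]
end

With this structure the capacity can only stay the same or fall, which avoids the upward peaks described in the question.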
Q: How to make unit test folder and files appear in Android Studio Chipmunk I'm doing the "Android Basics in Kotlin" course from Google, and I'm in the testing and unit test part. In the website, that uses the Android Studio version 2020, there is 2 folder that does no appear in my version(Chipmunk). Link to the part I struggle with the Google course Tried solutions like create manually the folders in src/test y en /src/test/java/, but in both the IDE warns me that the folders already exist. Google pov My pov How can I make them visibles like in the website to progress the course? Apparently the folders create automatically when you start a project. Thank you in advance. A: See first my screenshot to compare if folders really exist. You can also go to your windows file explorer in your app folder diceroller/app/src/test/java/com/example/diceroller and see if ExampleUnitTest.kt is there. Maybe the last 3 folders which is the package folders might not exist. Honestly i dont know if the package folders and the ExampleUnitTest.kt exist. To create the package folders, right click on "java"->new->package: then: and typing on com.example.diceroller and then enter. If you want, I'll tell you for creating the ExampleUnitTest.kt too. Sorry if I don't know any button to show the folders and the tests. Anything you want ask me!
How to make unit test folder and files appear in Android Studio Chipmunk
I'm doing the "Android Basics in Kotlin" course from Google, and I'm in the testing and unit test part. In the website, that uses the Android Studio version 2020, there is 2 folder that does no appear in my version(Chipmunk). Link to the part I struggle with the Google course Tried solutions like create manually the folders in src/test y en /src/test/java/, but in both the IDE warns me that the folders already exist. Google pov My pov How can I make them visibles like in the website to progress the course? Apparently the folders create automatically when you start a project. Thank you in advance.
[ "See first my screenshot to compare if folders really exist.\nYou can also go to your windows file explorer in your app folder diceroller/app/src/test/java/com/example/diceroller and see if ExampleUnitTest.kt is there.\nMaybe the last 3 folders which is the package folders might not exist.\nHonestly i dont know if the package folders and the ExampleUnitTest.kt exist.\nTo create the package folders, right click on \"java\"->new->package:\n\nthen:\n\nand typing on com.example.diceroller and then enter.\n\nIf you want, I'll tell you for creating the ExampleUnitTest.kt too.\nSorry if I don't know any button to show the folders and the tests.\nAnything you want ask me!\n" ]
[ 0 ]
[]
[]
[ "android_studio", "kotlin", "unit_testing" ]
stackoverflow_0074672126_android_studio_kotlin_unit_testing.txt
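If the test file itself is missing rather than just hidden, it can also be re-created by hand. This is roughly the stock local unit test that the Android Studio template generates under app/src/test/java/; the package name is taken from the course's diceroller project and should be adjusted to the actual applicationId.

package com.example.diceroller

import org.junit.Assert.assertEquals
import org.junit.Test

// Default local unit test normally generated at app/src/test/java/com/example/diceroller/ExampleUnitTest.kt
class ExampleUnitTest {
    @Test
    fun addition_isCorrect() {
        assertEquals(4, 2 + 2)
    }
}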
Q: Apply geographically weighted regression's model parameters to a finer spatial scale I have two raster layers, one coarse resolution and one fine resolution. My goal is to extract GWR's coefficients (intercept and slope) and apply them to my fine resolution raster. I can do this easily when I perform simple linear regression. For example: library(terra) library(sp) # focal terra tirs = rast("path/tirs.tif") # fine res raster ntl = rast("path/ntl.tif") # coarse res raster # fill null values tirs = focal(tirs, w = 9, fun = mean, na.policy = "only", na.rm = TRUE) gf <- focalMat(tirs, 0.10*400, "Gauss", 11) r_gf <- focal(tirs, w = gf, na.rm = TRUE) r_gf = resample(r_gf, ntl, method = "bilinear") s = c(ntl, r_gf) names(s) = c('ntl', 'r_gf') model <- lm(formula = ntl ~ tirs, data = s) # apply the lm coefficients to the fine res raster lm_pred = model$coefficients[1] + model$coefficients[2] * tirs But when I run GWR, the slope and intercept are not just two numbers (like in linear model) but it's a range. For example, below are the results of the GWR: Summary of GWR coefficient estimates: Min. 1st Qu. Median 3rd Qu. Max. Intercept -1632.61196 -55.79680 -15.99683 15.01596 1133.299 tirs20 -42.43020 0.43446 1.80026 3.75802 70.987 My question is how can extract GWR model parameters (intercept and slope) and apply them to my fine resolution raster? In the end I would like to do the same thing as I did with the linear model, that is, GWR_intercept + GWR_slope * fine resolution raster. Here is the code of GWR: library(GWmodel) library(raster) block.data = read.csv(file = "path/block.data00.csv") #create mararate df for the x & y coords x = as.data.frame(block.data$x) y = as.data.frame(block.data$y) sint = as.matrix(cbind(x, y)) #convert the data to spatialPointsdf and then to spatialPixelsdf coordinates(block.data) = c("x", "y") #gridded(block.data) <- TRUE # specify a model equation eq1 <- ntl ~ tirs dist = GWmodel::gw.dist(dp.locat = sint, focus = 0, longlat = FALSE) abw = bw.gwr(eq1, data = block.data, approach = "AIC", kernel = "tricube", adaptive = TRUE, p = 2, longlat = F, dMat = dist, parallel.method = "omp", parallel.arg = "omp") ab_gwr = gwr.basic(eq1, data = block.data, bw = abw, kernel = "tricube", adaptive = TRUE, p = 2, longlat = FALSE, dMat = dist, F123.test = FALSE, cv = FALSE, parallel.method = "omp", parallel.arg = "omp") ab_gwr You can download the csv from here. The rasters I am using: ntl = rast(ncols=101, nrows=85, nlyrs=1, xmin=509634.6325, xmax=550034.6325, ymin=161598.158, ymax=195598.158, names=c('ntl'), crs='EPSG:27700') tirs = rast(ncols=407, nrows=342, nlyrs=1, xmin=509600, xmax=550300, ymin=161800, ymax=196000, names=c('tirs'), crs='EPSG:27700') A: This is how you can do global regression and predict to a higher resolution (downscale) library(terra) r <- rast(system.file("ex/logo.tif", package="terra")) a <- aggregate(r, 10, mean) model <- lm(formula = red ~ green, data=a) p <- predict(r, model) And d <- as.data.frame(a[[1:2]], xy=TRUE) Perhaps this helps to write a better example in your question.
Apply geographically weighted regression's model parameters to a finer spatial scale
I have two raster layers, one coarse resolution and one fine resolution. My goal is to extract GWR's coefficients (intercept and slope) and apply them to my fine resolution raster. I can do this easily when I perform simple linear regression. For example: library(terra) library(sp) # focal terra tirs = rast("path/tirs.tif") # fine res raster ntl = rast("path/ntl.tif") # coarse res raster # fill null values tirs = focal(tirs, w = 9, fun = mean, na.policy = "only", na.rm = TRUE) gf <- focalMat(tirs, 0.10*400, "Gauss", 11) r_gf <- focal(tirs, w = gf, na.rm = TRUE) r_gf = resample(r_gf, ntl, method = "bilinear") s = c(ntl, r_gf) names(s) = c('ntl', 'r_gf') model <- lm(formula = ntl ~ tirs, data = s) # apply the lm coefficients to the fine res raster lm_pred = model$coefficients[1] + model$coefficients[2] * tirs But when I run GWR, the slope and intercept are not just two numbers (like in linear model) but it's a range. For example, below are the results of the GWR: Summary of GWR coefficient estimates: Min. 1st Qu. Median 3rd Qu. Max. Intercept -1632.61196 -55.79680 -15.99683 15.01596 1133.299 tirs20 -42.43020 0.43446 1.80026 3.75802 70.987 My question is how can extract GWR model parameters (intercept and slope) and apply them to my fine resolution raster? In the end I would like to do the same thing as I did with the linear model, that is, GWR_intercept + GWR_slope * fine resolution raster. Here is the code of GWR: library(GWmodel) library(raster) block.data = read.csv(file = "path/block.data00.csv") #create mararate df for the x & y coords x = as.data.frame(block.data$x) y = as.data.frame(block.data$y) sint = as.matrix(cbind(x, y)) #convert the data to spatialPointsdf and then to spatialPixelsdf coordinates(block.data) = c("x", "y") #gridded(block.data) <- TRUE # specify a model equation eq1 <- ntl ~ tirs dist = GWmodel::gw.dist(dp.locat = sint, focus = 0, longlat = FALSE) abw = bw.gwr(eq1, data = block.data, approach = "AIC", kernel = "tricube", adaptive = TRUE, p = 2, longlat = F, dMat = dist, parallel.method = "omp", parallel.arg = "omp") ab_gwr = gwr.basic(eq1, data = block.data, bw = abw, kernel = "tricube", adaptive = TRUE, p = 2, longlat = FALSE, dMat = dist, F123.test = FALSE, cv = FALSE, parallel.method = "omp", parallel.arg = "omp") ab_gwr You can download the csv from here. The rasters I am using: ntl = rast(ncols=101, nrows=85, nlyrs=1, xmin=509634.6325, xmax=550034.6325, ymin=161598.158, ymax=195598.158, names=c('ntl'), crs='EPSG:27700') tirs = rast(ncols=407, nrows=342, nlyrs=1, xmin=509600, xmax=550300, ymin=161800, ymax=196000, names=c('tirs'), crs='EPSG:27700')
[ "This is how you can do global regression and predict to a higher resolution (downscale)\nlibrary(terra)\nr <- rast(system.file(\"ex/logo.tif\", package=\"terra\"))\na <- aggregate(r, 10, mean)\n\nmodel <- lm(formula = red ~ green, data=a)\np <- predict(r, model)\n\nAnd\nd <- as.data.frame(a[[1:2]], xy=TRUE)\n\nPerhaps this helps to write a better example in your question.\n" ]
[ 0 ]
[]
[]
[ "coefficients", "gwr", "r", "raster" ]
stackoverflow_0074669285_coefficients_gwr_r_raster.txt
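To get from the global-regression example in the answer back to the actual GWR output in the question, one workable sketch is to turn the local coefficients into rasters and apply them cell-wise. This is untested and the coefficient column names ("Intercept", "tirs") are assumptions — GWmodel names the columns of ab_gwr$SDF after the model terms, so check names(ab_gwr$SDF) first.

library(terra)

# vect() converts the sp object returned by gwr.basic; if that fails, go through sf::st_as_sf() first.
coef_pts  <- vect(ab_gwr$SDF)                               # local coefficients as points
intercept <- rasterize(coef_pts, ntl, field = "Intercept")  # coarse intercept surface
slope     <- rasterize(coef_pts, ntl, field = "tirs")       # coarse slope surface

# Resample the coefficient surfaces to the fine grid, then apply them cell by cell.
intercept_f <- resample(intercept, tirs, method = "bilinear")
slope_f     <- resample(slope, tirs, method = "bilinear")
gwr_pred    <- intercept_f + slope_f * tirs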
Q: If I remove Visual Studio will this affect the Script files I previously did in the Unity project? I have decided to remove Visual studio; Because it does not auto-fill for me when I write the code, so I want to know if I delete it will that will affect the previous C# files I wrote in Unity !? I haven't deleted it yet. I also went into the program settings to solve the autofill problem but to no avail. A: It will not. Uninstalling Visual studio will not affect any project files as the uninstall program won't even know they exist. I often have to uninstall and reinstall VS Code + VS 2022 when updates break it or I'm just quessing at issues and it has never removed my code files. A: No. As long as you don't delete the script files, you should be fine.
If I remove Visual Studio, will this affect the script files I previously created in the Unity project?
I have decided to remove Visual Studio, because it does not auto-fill for me when I write code, so I want to know: if I delete it, will that affect the previous C# files I wrote in Unity? I haven't deleted it yet. I also went into the program settings to solve the autofill problem, but to no avail.
[ "It will not.\nUninstalling Visual studio will not affect any project files as the uninstall program won't even know they exist.\nI often have to uninstall and reinstall VS Code + VS 2022 when updates break it or I'm just quessing at issues and it has never removed my code files.\n", "No. As long as you don't delete the script files, you should be fine.\n" ]
[ 0, 0 ]
[]
[]
[ "unityscript", "visual_studio" ]
stackoverflow_0074671424_unityscript_visual_studio.txt
Q: Summing multiple items in a column Type Date Cost Shampoo 01/31/2022 $10 Shampoo 01/31/2022 $15 Shampoo 02/22/2019 $15 Conditioner 03/15/2020 $17 Conditioner 05/16/2022 $19 Soap. 01/31/2021 $5 Soap 01/06/2022 $2 Soap 12/31/2019 $3 Soap 10/10/2022 $5 How would I approach summing total cost for specific items in a year, months, quarter and total cost Example Output: Type | Number Items | Year | Total Cost Shampoo | 2 | 2022 | 25 Shampoo | 1. | 2019 | 15 etc... split by month, and quarter Trying summarize and library(lubridate) A: library(tidyverse) library(lubridate) df %>% group_by(Type, Date)%>% summarise(Number_Items = n(), Year = year(mdy(Date[1])), Total_Cost = sum(parse_number(Cost)), .groups = 'drop') # A tibble: 8 × 5 Type Date Number_Items Year Total_Cost <chr> <chr> <int> <dbl> <dbl> 1 Conditioner 03/15/2020 1 2020 17 2 Conditioner 05/16/2022 1 2022 19 3 Shampoo 01/31/2022 2 2022 25 4 Shampoo 02/22/2019 1 2019 15 5 Soap 01/06/2022 1 2022 2 6 Soap 10/10/2022 1 2022 5 7 Soap 12/31/2019 1 2019 3 8 Soap. 01/31/2021 1 2021 5 A: You have to group your data by type and year, and then count the number of items and total cost, here an example for you to adapt. library(dplyr) library(lubridate) your_data_frame %>% group_by(type, year = year(dmy(Date))) %>% summarise( number_of_items = n(), total_cost = sum(cost,na.rm = TRUE) ) A: This one is very similar to @onyambu's solution. But it differs in the grouping: library(dplyr) library(readr) # parse_number() library(lubridate df %>% mutate(Year = year(mdy(Date))) %>% group_by(Year, Type) %>% summarise(`Number Items` = n(), `Total Cost` = sum(parse_number(Cost))) Year Type `Number Items` `Total Cost` <dbl> <chr> <int> <dbl> 1 2019 Shampoo 1 15 2 2019 Soap 1 3 3 2020 Conditioner 1 17 4 2021 Soap. 1 5 5 2022 Conditioner 1 19 6 2022 Shampoo 2 25 7 2022 Soap 2 7
Summing multiple items in a column
Type Date Cost Shampoo 01/31/2022 $10 Shampoo 01/31/2022 $15 Shampoo 02/22/2019 $15 Conditioner 03/15/2020 $17 Conditioner 05/16/2022 $19 Soap. 01/31/2021 $5 Soap 01/06/2022 $2 Soap 12/31/2019 $3 Soap 10/10/2022 $5 How would I approach summing total cost for specific items in a year, months, quarter and total cost Example Output: Type | Number Items | Year | Total Cost Shampoo | 2 | 2022 | 25 Shampoo | 1. | 2019 | 15 etc... split by month, and quarter Trying summarize and library(lubridate)
[ "library(tidyverse)\nlibrary(lubridate)\n\ndf %>%\n group_by(Type, Date)%>%\n summarise(Number_Items = n(),\n Year = year(mdy(Date[1])),\n Total_Cost = sum(parse_number(Cost)),\n .groups = 'drop')\n\n# A tibble: 8 × 5\n Type Date Number_Items Year Total_Cost\n <chr> <chr> <int> <dbl> <dbl>\n1 Conditioner 03/15/2020 1 2020 17\n2 Conditioner 05/16/2022 1 2022 19\n3 Shampoo 01/31/2022 2 2022 25\n4 Shampoo 02/22/2019 1 2019 15\n5 Soap 01/06/2022 1 2022 2\n6 Soap 10/10/2022 1 2022 5\n7 Soap 12/31/2019 1 2019 3\n8 Soap. 01/31/2021 1 2021 5\n\n", "You have to group your data by type and year, and then count the number of items and total cost, here an example for you to adapt.\nlibrary(dplyr)\nlibrary(lubridate)\n\nyour_data_frame %>% \n group_by(type, year = year(dmy(Date))) %>% \n summarise(\n number_of_items = n(),\n total_cost = sum(cost,na.rm = TRUE)\n )\n\n", "This one is very similar to @onyambu's solution. But it differs in the grouping:\nlibrary(dplyr)\nlibrary(readr) # parse_number()\nlibrary(lubridate\n\ndf %>% \n mutate(Year = year(mdy(Date))) %>% \n group_by(Year, Type) %>% \n summarise(`Number Items` = n(), \n `Total Cost` = sum(parse_number(Cost)))\n\n Year Type `Number Items` `Total Cost`\n <dbl> <chr> <int> <dbl>\n1 2019 Shampoo 1 15\n2 2019 Soap 1 3\n3 2020 Conditioner 1 17\n4 2021 Soap. 1 5\n5 2022 Conditioner 1 19\n6 2022 Shampoo 2 25\n7 2022 Soap 2 7\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "r" ]
stackoverflow_0074672222_r.txt
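Since the question also asks about months and quarters, the grouped summary from the answers extends directly — just add the extra time unit to the grouping. This sketch assumes df has the Type/Date/Cost columns shown above.

library(dplyr)
library(readr)      # parse_number()
library(lubridate)  # mdy(), year(), quarter(), month()

df %>%
  mutate(Date = mdy(Date),
         Year = year(Date),
         Quarter = quarter(Date)) %>%   # swap in month(Date) for a monthly split
  group_by(Type, Year, Quarter) %>%
  summarise(`Number Items` = n(),
            `Total Cost` = sum(parse_number(Cost)),
            .groups = "drop")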
Q: TypeError: global.__reanimatedWorkletInit is not a function. (In 'global.__reanimatedWorkletInit(_f)', 'global.__reanimatedWorkletInit' is undefined) TypeError: global.__reanimatedWorkletInit is not a function. (In 'global.__reanimatedWorkletInit(_f)', 'global.__reanimatedWorkletInit' is undefined) I am using React Native (not expo). I don't even have reanimated downloaded. I had it downloaded then I removed it and rebuilt the app, and it's giving me this error now. Things I've tried: deleting node_modules and re-running yarn install -git reset HEAD~ to a prev commit where I didn't install the reanimated2 packages A: I just followed the below steps to solve this issue. step1: npx react-native run-android step2: npm start -- --reset-cache and it solved A: Ok, what I ended up doing to solve this was: -just deleted the whole repo from my local -cloned it again from github -uninstalled the app from Android emulator Then it seemed to work. So maybe it was an error related to cache or something lingering around even after I had removed all instances of the word/package "reanimated" from the whole codebase. A: Using Expo in a bareworkflow Clear app memory run expo start --dev-client --clear A: I solved my issue doing this: https://github.com/wcandillon/react-native-redash/issues/395 On top you just have to do this: import 'react-native-reanimated'; on your app or index file. A: I have tried all the solutions from stack Overflow. (Not working) Here is the fix: first check your version for react-native-reanimated and then see the actual documentation of the right version for the configuration. I am using version 2.4.1 and have solved by this link A: I had a require cycle warning from a git submodule inside the src folder which I thought wasn't doing any harm but turns out fixing that solved this issue. I am unsure why the require cycle was causing so much grief but I guess if you've got a require cycle in your output try solving that and it may fix this. A: What I did was degrading react-native-reanimated to ^2.6.0. It solved the issue for me. A: I had this problem too and simply moved the babel plugin react-native-reanimated/plugin to the last place in the babel's config as stated in the doc. I should probably mention it worked for me before but when I started migrating the react-native app for web this was the problem for me. I am using expo. I had to run expo with --clear CLI arg as expo start --dev-client --clear.
TypeError: global.__reanimatedWorkletInit is not a function. (In 'global.__reanimatedWorkletInit(_f)', 'global.__reanimatedWorkletInit' is undefined)
TypeError: global.__reanimatedWorkletInit is not a function. (In 'global.__reanimatedWorkletInit(_f)', 'global.__reanimatedWorkletInit' is undefined) I am using React Native (not Expo). I don't even have reanimated installed anymore. I had it installed, then I removed it and rebuilt the app, and now it's giving me this error. Things I've tried: deleting node_modules and re-running yarn install; git reset HEAD~ to a previous commit where I didn't install the reanimated2 packages.
[ "I just followed the below steps to solve this issue.\nstep1: npx react-native run-android\nstep2: npm start -- --reset-cache\nand it solved\n", "Ok, what I ended up doing to solve this was:\n-just deleted the whole repo from my local\n-cloned it again from github\n-uninstalled the app from Android emulator\nThen it seemed to work. So maybe it was an error related to cache or something lingering around even after I had removed all instances of the word/package \"reanimated\" from the whole codebase.\n", "Using Expo in a bareworkflow\n\nClear app memory\nrun expo start --dev-client --clear\n\n", "I solved my issue doing this:\nhttps://github.com/wcandillon/react-native-redash/issues/395\nOn top you just have to do this: import 'react-native-reanimated';\non your app or index file.\n", "I have tried all the solutions from stack Overflow. (Not working)\nHere is the fix:\nfirst check your version for react-native-reanimated and then see the actual documentation of the right version for the configuration.\nI am using version 2.4.1 and have solved by this link\n", "I had a require cycle warning from a git submodule inside the src folder which I thought wasn't doing any harm but turns out fixing that solved this issue. I am unsure why the require cycle was causing so much grief but I guess if you've got a require cycle in your output try solving that and it may fix this.\n", "What I did was degrading react-native-reanimated to ^2.6.0. It solved the issue for me.\n", "I had this problem too and simply moved the babel plugin react-native-reanimated/plugin to the last place in the babel's config as stated in the doc.\nI should probably mention it worked for me before but when I started migrating the react-native app for web this was the problem for me. I am using expo. I had to run expo with --clear CLI arg as expo start --dev-client --clear.\n" ]
[ 11, 2, 2, 1, 1, 0, 0, 0 ]
[ "I Just solved this issue by doing these steps:\n\nclose Metro bundler\nrun this command\n\n\nnpm start -- --reset-cache\n\n\nreact-native start --reset-cache\n\n\nrebuild the project again\n\n" ]
[ -1 ]
[ "react_native", "react_native_reanimated", "react_native_reanimated_v2" ]
stackoverflow_0070750047_react_native_react_native_reanimated_react_native_reanimated_v2.txt
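For projects that keep react-native-reanimated installed, the usual cause of this error is the Babel plugin missing or not listed last, as one answer notes. A typical bare React Native babel.config.js looks like the sketch below (the preset name varies by setup — Expo projects use babel-preset-expo instead); after changing it, clear Metro's cache as in the first answer.

// babel.config.js — the Reanimated plugin must be the last entry in `plugins`.
module.exports = {
  presets: ['module:metro-react-native-babel-preset'],
  plugins: [
    // ...other plugins...
    'react-native-reanimated/plugin', // keep this last
  ],
};

If the package has been fully removed instead, make sure no leftover import of 'react-native-reanimated' remains anywhere and reset the Metro cache before rebuilding.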
Q: "The certificate chain was issued by an authority that is not trusted" when connecting DB in VM Role from Azure website I am experiencing an error when connecting MY DB which is in VM Role (I have SQL VM Role) from Azure Website. Both VM Role and Azure Website are in West zone. I am facing the following issue: SqlException (0x80131904): A connection was successfully established with the server, but then an error occurred during the login process. (provider: SSL Provider, error: 0 - The certificate chain was issued by an authority that is not trusted.)] I am able to connect to my DB using SSMS. Port 1433 is open on my VM role. What is wrong with my connection? A: 2022 Update - This answer (as comments point out) provides an explanation and stop gap, but also offers some better recommendations including purchasing and installing a proper cert (thanks to numerous community edits). Please see also the other highly voted answers in this thread, including the one by @Alex From Jitbit below about a breaking change when migrating from System.Data.Sql to Microsoft.Data.Sql (spoiler: Encrypt is now set to true by default). Original answer: You likely don't have a CA signed certificate installed in your SQL VM's trusted root store. If you have Encrypt=True in the connection string, either set that to off (not recommended), or add the following in the connection string (also not recommended): TrustServerCertificate=True SQL Server will create a self-signed certificate if you don't install one for it to use, but it won't be trusted by the caller since it's not CA-signed, unless you tell the connection string to trust any server cert by default. Long term, I'd recommend leveraging Let's Encrypt to get a CA signed certificate from a known trusted CA for free, and install it on the VM. Don't forget to set it up to automatically refresh. You can read more on this topic in SQL Server books online under the topic of "Encryption Hierarchy", and "Using Encryption Without Validation". A: I decided to add another answer, because this post pops-up as the first Google result for this error. If you're getting this error after January 2022, possibly after migrating from System.Data.SqlClient to Microsoft.Data.SqlClient or just updating Microsoft.Data.SqlClient to version 4.0.0 or later, it's because MS has introduced a breaking change: https://learn.microsoft.com/sql/connect/ado-net/introduction-microsoft-data-sqlclient-namespace?view=sql-server-ver15#breaking-changes-in-40 Breaking changes in 4.0 Changed Encrypt connection string property to be true by default. The default value of the Encrypt connection setting has been changed from false to true. With the growing use of cloud databases and the need to ensure those connections are secure, it's time for this backwards-compatibility-breaking change. Ensure connections fail when encryption is required In scenarios where client encryption libraries were disabled or unavailable, it was possible for unencrypted connections to be made when Encrypt was set to true or the server required encryption. The change was made in this SqlClient pull-request in August 2021, where there is additional discussion about the change. The quick-fix is to add Encrypt=False to your connection-strings. A: If you're using SQL Management Studio, please goto connection properties and click on "Trust server certificated" A: If you're seeing this error message when attempting to connect using SSMS, add TrustServerCertificate=True to the Additional Connection Parameters. 
A: I was getting this message in Entity Framework migrations. I was able to connect with Win Auth to the Sql Server and create table manually. But EF wouldn't work. This connection string finally worked Server=MyServerName;Database=MyDbName;Trusted_Connection=SSPI;Encrypt=false;TrustServerCertificate=true A: While the general answer was in itself correct, I found it did not go far enough for my SQL Server Import and Export Wizard orientated issue. Assuming you have a valid (and automatic) Windows Security based login: ConnectionString Data Source=localhost; Initial Catalog=<YOUR DATABASE HERE>; Integrated Security=True; Encrypt=True; TrustServerCertificate=True; User Instance=False That can either be your complete ConnectionString (all on one line), or you can apply those values individually to their fields. A: If You are trying to access it through Data Connections in Visual Studio 2015, and getting the above Error, Then Go to Advanced and set TrustServerCertificate=True for error to go away. A: I got this Issue while importing Excel data into SQLDatabase through SSMS. The solution is to set TrustServerCertificate = True in the security section A: For those who don't like the TrustServerCertificate=True answer, if you have sufficient access you can export the SQL Server certificate and install where you're trying to connect from. This probably doesn't work for a SQL Server self-generated certificate but if you used something like New-SelfSignedCertificate you can use MMC to export the certificate, then MMC on the client to import it. On SQL Server: In MMC add the certificate Snap-In Browse to Certificates > Personal > Certificate Select the new certificate, right-click, and select All Tasks > Manage Private Keys (this step and the following is part of making the key work with SQL server) Add the identity running SQL Server (look the identity up in Services if in doubt) with READ permission. Select the new certificate, right-click, and select All Tasks > Export... Use default settings and save as a file. On the client: Use MMS with the same snap-in choices and in Certificates > Trusted Root Certification Authorities right-click Certificates and select All Tasks > Import... Import the previously exported file (I was doing everything on the same server and still had issues with SSMS complaining until I restarted the SQL instance. Then I could connect encrypted without the Trust... checkbox checked) A: If you use Version 18 and access via pyodbc, it is "TrustServerCertificate=yes", you need to add to the connection A: "ConnectionStrings": { "DefaultConnection": "Server=DESKTOP-O5SR0H0\\SQLEXPRESS;Database=myDataBase;Trusted_Connection=True;TrustServerCertificate=True;" } } A: If you are using any connection attributes mentioned in the answers, the values accepted are yes/no , if true/false doesn't seem to work. TrustServerCertificate - Accepts the strings "yes" and "no" as values. The default value is "no", which means that the server certificate will be validated. Using ODBC 18.0 - hope it helps. Connection String Attributes A: I ran into this error trying to run the profiler, even though my connection had Trust server certificate checked and I added TrustServerCertificate=True in the Advanced Section. I changed to an instance of SSMS running as administrator and the profiler started with no problem. (I previously had found that when my connections even to local took a long time to connect, running as administrator helped). A: Got hit by the same issue while accessing SQLServer from IIS. 
Adding TrustServerCertificate=True did not help. Could see a comment in MS docs: Make sure the SQLServer service account has access to the TLS Certificate you are using. (NT Service\MSSQLSERVER) Open personal store and right click on the certificate -> manage private keys -> Add the SQL service account and give full control. Restart the SQL service. It worked. A: I had the same issue after migrating a project from .NET 5 to .NET 6. I have tried suggested solutions (either TrustServerCertificate=True or Encrypt=False) and they worked as expected but I had a limitation to not change connection string. So if that is the case, you can still use System.Data.SqlClient as a nuget package. Like explained here it is still maintained but all the new stuff will go to Microsoft.Data.SqlClient. A: If you have created an ODBC connection to the server (using ODBC Driver 18 for SQL server) in ODBC settings (32 or 64), configure the connection and press Next 3 times. In the final screen, there is a "Trust server certificate" checkbox in the middle. Set it to checked. That will do the trick. Adding "TrustServerCertificate=True" to the connectionstring as suggested in other answers did not work for me. A: Add Encrypt=False to your connection string and that's it A: The same can be achieved from ssms client itself. Just open the ssms, insert the server name and then from options under heading connection properties make sure Trust server certificate is checked. A: I was getting the same error when trying to connect to MS SQL Server instance hosted on Google Cloud Platform using SSMS with unchecked Trust server certificate under the connection properties tab. I managed to trust the certificate by importing the GCP's provided certificate's authority to my local computer's list of Trusted Root Certification Authorities. Read full description and resolution here. A: Well in my case the Database was bad. When I re created a new database name the error got resolved. It's an error coming from SQL Server database. Try re creating a new database. A: I added this 2 lines to the ConnectionString and it worked Trusted_Connection=True TrustServerCertificate=True
"The certificate chain was issued by an authority that is not trusted" when connecting DB in VM Role from Azure website
I am experiencing an error when connecting MY DB which is in VM Role (I have SQL VM Role) from Azure Website. Both VM Role and Azure Website are in West zone. I am facing the following issue: SqlException (0x80131904): A connection was successfully established with the server, but then an error occurred during the login process. (provider: SSL Provider, error: 0 - The certificate chain was issued by an authority that is not trusted.)] I am able to connect to my DB using SSMS. Port 1433 is open on my VM role. What is wrong with my connection?
[ "2022 Update - This answer (as comments point out) provides an explanation and stop gap, but also offers some better recommendations including purchasing and installing a proper cert (thanks to numerous community edits).\nPlease see also the other highly voted answers in this thread, including the one by @Alex From Jitbit below about a breaking change when migrating from System.Data.Sql to Microsoft.Data.Sql (spoiler: Encrypt is now set to true by default).\nOriginal answer:\nYou likely don't have a CA signed certificate installed in your SQL VM's trusted root store.\nIf you have Encrypt=True in the connection string, either set that to off (not recommended), or add the following in the connection string (also not recommended):\nTrustServerCertificate=True\n\nSQL Server will create a self-signed certificate if you don't install one for it to use, but it won't be trusted by the caller since it's not CA-signed, unless you tell the connection string to trust any server cert by default.\nLong term, I'd recommend leveraging Let's Encrypt to get a CA signed certificate from a known trusted CA for free, and install it on the VM. Don't forget to set it up to automatically refresh. You can read more on this topic in SQL Server books online under the topic of \"Encryption Hierarchy\", and \"Using Encryption Without Validation\".\n", "I decided to add another answer, because this post pops-up as the first Google result for this error.\nIf you're getting this error after January 2022, possibly after migrating from System.Data.SqlClient to Microsoft.Data.SqlClient or just updating Microsoft.Data.SqlClient to version 4.0.0 or later, it's because MS has introduced a breaking change:\n\nhttps://learn.microsoft.com/sql/connect/ado-net/introduction-microsoft-data-sqlclient-namespace?view=sql-server-ver15#breaking-changes-in-40\n\n\nBreaking changes in 4.0\nChanged Encrypt connection string property to be true by default.\nThe default value of the Encrypt connection setting has been changed from false to true. With the growing use of cloud databases and the need to ensure those connections are secure, it's time for this backwards-compatibility-breaking change.\nEnsure connections fail when encryption is required\nIn scenarios where client encryption libraries were disabled or unavailable, it was possible for unencrypted connections to be made when Encrypt was set to true or the server required encryption.\n\nThe change was made in this SqlClient pull-request in August 2021, where there is additional discussion about the change.\nThe quick-fix is to add Encrypt=False to your connection-strings.\n", "\nIf you're using SQL Management Studio, please goto connection properties and click on \"Trust server certificated\"\n", "If you're seeing this error message when attempting to connect using SSMS, add TrustServerCertificate=True to the Additional Connection Parameters. \n", "I was getting this message in Entity Framework migrations. I was able to connect with Win Auth to the Sql Server and create table manually. But EF wouldn't work. This connection string finally worked\nServer=MyServerName;Database=MyDbName;Trusted_Connection=SSPI;Encrypt=false;TrustServerCertificate=true\n\n", "While the general answer was in itself correct, I found it did not go far enough for my SQL Server Import and Export Wizard orientated issue. 
Assuming you have a valid (and automatic) Windows Security based login:\nConnectionString\nData Source=localhost; \nInitial Catalog=<YOUR DATABASE HERE>; \nIntegrated Security=True; \nEncrypt=True; \nTrustServerCertificate=True; \nUser Instance=False\n\nThat can either be your complete ConnectionString (all on one line), or you can apply those values individually to their fields.\n", "If You are trying to access it through Data Connections in Visual Studio 2015, and getting the above Error, Then Go to Advanced and set \nTrustServerCertificate=True\nfor error to go away.\n", "I got this Issue while importing Excel data into SQLDatabase through SSMS. The solution is to set TrustServerCertificate = True in the security section\n", "For those who don't like the TrustServerCertificate=True answer, if you have sufficient access you can export the SQL Server certificate and install where you're trying to connect from. This probably doesn't work for a SQL Server self-generated certificate but if you used something like New-SelfSignedCertificate you can use MMC to export the certificate, then MMC on the client to import it.\nOn SQL Server:\n\nIn MMC add the certificate Snap-In\nBrowse to Certificates > Personal > Certificate\nSelect the new certificate, right-click, and select All Tasks > Manage Private Keys (this step and the following is part of making the key work with SQL server)\nAdd the identity running SQL Server (look the identity up in Services if in doubt) with READ permission.\nSelect the new certificate, right-click, and select All Tasks > Export...\nUse default settings and save as a file.\n\nOn the client:\n\nUse MMS with the same snap-in choices and in Certificates > Trusted Root Certification Authorities right-click Certificates and select All Tasks > Import...\nImport the previously exported file\n\n(I was doing everything on the same server and still had issues with SSMS complaining until I restarted the SQL instance. Then I could connect encrypted without the Trust... checkbox checked)\n", "If you use Version 18 and access via pyodbc, it is \"TrustServerCertificate=yes\", you need to add to the connection\n", "\"ConnectionStrings\": {\n \"DefaultConnection\": \"Server=DESKTOP-O5SR0H0\\\\SQLEXPRESS;Database=myDataBase;Trusted_Connection=True;TrustServerCertificate=True;\"\n }\n}\n\n", "If you are using any connection attributes mentioned in the answers, the values accepted are yes/no , if true/false doesn't seem to work.\nTrustServerCertificate - Accepts the strings \"yes\" and \"no\" as values. The default value is \"no\", which means that the server certificate will be validated.\nUsing ODBC 18.0 - hope it helps.\nConnection String Attributes\n", "I ran into this error trying to run the profiler, even though my connection had Trust server certificate checked and I added TrustServerCertificate=True in the Advanced Section. I changed to an instance of SSMS running as administrator and the profiler started with no problem. (I previously had found that when my connections even to local took a long time to connect, running as administrator helped).\n", "Got hit by the same issue while accessing SQLServer from IIS. Adding TrustServerCertificate=True did not help.\nCould see a comment in MS docs: Make sure the SQLServer service account has access to the TLS Certificate you are using. (NT Service\\MSSQLSERVER)\nOpen personal store and right click on the certificate -> manage private keys -> Add the SQL service account and give full control.\nRestart the SQL service. 
It worked.\n", "I had the same issue after migrating a project from .NET 5 to .NET 6. I have tried suggested solutions (either TrustServerCertificate=True or Encrypt=False) and they worked as expected but I had a limitation to not change connection string. So if that is the case, you can still use System.Data.SqlClient as a nuget package. Like explained here it is still maintained but all the new stuff will go to Microsoft.Data.SqlClient.\n", "If you have created an ODBC connection to the server (using ODBC Driver 18 for SQL server) in ODBC settings (32 or 64), configure the connection and press Next 3 times. In the final screen, there is a \"Trust server certificate\" checkbox in the middle. Set it to checked. That will do the trick. Adding \"TrustServerCertificate=True\" to the connectionstring as suggested in other answers did not work for me.\n\n", "Add Encrypt=False to your connection string and that's it\n", "The same can be achieved from ssms client itself. Just open the ssms, insert the server name and then from options under heading connection properties make sure Trust server certificate is checked. \n", "I was getting the same error when trying to connect to MS SQL Server instance hosted on Google Cloud Platform using SSMS with unchecked Trust server certificate under the connection properties tab. I managed to trust the certificate by importing the GCP's provided certificate's authority to my local computer's list of Trusted Root Certification Authorities.\nRead full description and resolution here.\n", "Well in my case the Database was bad. When I re created a new database name the error got resolved. It's an error coming from SQL Server database. Try re creating a new database.\n", "I added this 2 lines to the ConnectionString and it worked\nTrusted_Connection=True\nTrustServerCertificate=True\n\n" ]
[ 808, 178, 160, 64, 25, 13, 8, 6, 5, 5, 5, 4, 2, 2, 2, 1, 1, 0, 0, 0, 0 ]
[]
[]
[ "azure_vm_role", "azure_web_roles" ]
stackoverflow_0017615260_azure_vm_role_azure_web_roles.txt
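On the client side, the same trade-off shows up in code. A hedged C# sketch with Microsoft.Data.SqlClient 4.0+ (server and database names are placeholders): keep Encrypt on and either install a CA-signed certificate on the server or, as a temporary stop-gap, trust the server certificate explicitly.

using Microsoft.Data.SqlClient; // 4.0+ defaults to Encrypt=true

var builder = new SqlConnectionStringBuilder
{
    DataSource = "myserver.example.com",   // placeholder
    InitialCatalog = "MyDb",               // placeholder
    IntegratedSecurity = true,
    Encrypt = true,                        // keep encryption on
    TrustServerCertificate = true          // stop-gap until a trusted certificate is installed
};

using var connection = new SqlConnection(builder.ConnectionString);
connection.Open();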
Q: Android switch view track size Current state: enter image description here Need: enter image description here track changes size depending on the size of thumb Doesn't work android:switchMinWidth="52dp" My xml code: activity_main.xml <Switch android:id="@+id/switchViewOn" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_marginBottom="8dp" android:text="Switch On" android:thumb="@drawable/switch_thumb" android:track="@drawable/switch_track" tools:ignore="UseSwitchCompatOrMaterialXml" /> <Switch android:id="@+id/switchViewOff" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Switch Off" android:thumb="@drawable/switch_thumb" android:track="@drawable/switch_track" tools:ignore="UseSwitchCompatOrMaterialXml" /> switch_thumb.xml <?xml version="1.0" encoding="utf-8"?> <selector xmlns:android="http://schemas.android.com/apk/res/android"> <item android:state_checked="false"> <shape android:dither="true" android:shape="rectangle" android:useLevel="false" android:visible="true"> <solid android:color="#ffffff" /> <corners android:radius="32dp" /> <size android:width="32dp" android:height="32dp" /> <stroke android:width="8dp" android:color="#00FFFFFF" /> </shape> </item> </selector> switch_track.xml <?xml version="1.0" encoding="utf-8"?> <selector xmlns:android="http://schemas.android.com/apk/res/android"> <item android:state_checked="true"> <shape android:dither="true" android:shape="rectangle" android:useLevel="false" android:visible="true"> <solid android:color="#FF7733" /> <corners android:radius="32dp" /> <size android:width="52dp" android:height="32dp" /> </shape> </item> <item android:state_checked="false"> <shape android:dither="true" android:shape="rectangle" android:useLevel="false" android:visible="true"> <solid android:color="#D5D5D6" /> <corners android:radius="32dp" /> <size android:width="52dp" android:height="32dp" /> </shape> </item> </selector> A: I tried changing Switch to "androidx.appcompat.widget.SwitchCompat" because the thumb is not letting the width change also your android:switchMinWidth="52dp" did not work. The output reached is not similar but to some extent it is nearly working. 
Also since using androidx so had to change these as well - app:thumb="@drawable/switch_thumb" app:track="@drawable/switch_track" Output: activity_mail.xml is below: <?xml version="1.0" encoding="utf-8"?> <androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" tools:context=".MainActivity"> <androidx.appcompat.widget.SwitchCompat android:id="@+id/switchViewOn" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_marginStart="140dp" android:layout_marginBottom="380dp" android:text="@string/switch_on" app:thumb="@drawable/switch_thumb" app:track="@drawable/switch_track" app:layout_constraintBottom_toBottomOf="parent" app:layout_constraintStart_toStartOf="parent" app:layout_constraintTop_toBottomOf="@+id/switchViewOff" tools:ignore="UseSwitchCompatOrMaterialXml" /> <androidx.appcompat.widget.SwitchCompat android:id="@+id/switchViewOff" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_marginTop="188dp" android:text="@string/switch_off" app:thumb="@drawable/switch_thumb" app:track="@drawable/switch_track" app:layout_constraintEnd_toEndOf="@+id/switchViewOn" app:layout_constraintStart_toStartOf="@+id/switchViewOn" app:layout_constraintTop_toTopOf="parent" tools:ignore="UseSwitchCompatOrMaterialXml" /> </androidx.constraintlayout.widget.ConstraintLayout>
Android switch view track size
Current state: enter image description here Need: enter image description here track changes size depending on the size of thumb Doesn't work android:switchMinWidth="52dp" My xml code: activity_main.xml <Switch android:id="@+id/switchViewOn" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_marginBottom="8dp" android:text="Switch On" android:thumb="@drawable/switch_thumb" android:track="@drawable/switch_track" tools:ignore="UseSwitchCompatOrMaterialXml" /> <Switch android:id="@+id/switchViewOff" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Switch Off" android:thumb="@drawable/switch_thumb" android:track="@drawable/switch_track" tools:ignore="UseSwitchCompatOrMaterialXml" /> switch_thumb.xml <?xml version="1.0" encoding="utf-8"?> <selector xmlns:android="http://schemas.android.com/apk/res/android"> <item android:state_checked="false"> <shape android:dither="true" android:shape="rectangle" android:useLevel="false" android:visible="true"> <solid android:color="#ffffff" /> <corners android:radius="32dp" /> <size android:width="32dp" android:height="32dp" /> <stroke android:width="8dp" android:color="#00FFFFFF" /> </shape> </item> </selector> switch_track.xml <?xml version="1.0" encoding="utf-8"?> <selector xmlns:android="http://schemas.android.com/apk/res/android"> <item android:state_checked="true"> <shape android:dither="true" android:shape="rectangle" android:useLevel="false" android:visible="true"> <solid android:color="#FF7733" /> <corners android:radius="32dp" /> <size android:width="52dp" android:height="32dp" /> </shape> </item> <item android:state_checked="false"> <shape android:dither="true" android:shape="rectangle" android:useLevel="false" android:visible="true"> <solid android:color="#D5D5D6" /> <corners android:radius="32dp" /> <size android:width="52dp" android:height="32dp" /> </shape> </item> </selector>
[ "I tried changing Switch to\n\"androidx.appcompat.widget.SwitchCompat\"\nbecause the thumb is not letting the width\nchange also your android:switchMinWidth=\"52dp\" did not work. The\noutput reached is not similar but to some extent it is nearly\nworking.\nAlso since using androidx so had to change these as well -\napp:thumb=\"@drawable/switch_thumb\" app:track=\"@drawable/switch_track\"\nOutput:\nactivity_mail.xml is below:\n<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<androidx.constraintlayout.widget.ConstraintLayout \nxmlns:android=\"http://schemas.android.com/apk/res/android\"\nxmlns:app=\"http://schemas.android.com/apk/res-auto\"\nxmlns:tools=\"http://schemas.android.com/tools\"\nandroid:layout_width=\"match_parent\"\nandroid:layout_height=\"match_parent\"\ntools:context=\".MainActivity\">\n\n<androidx.appcompat.widget.SwitchCompat\n android:id=\"@+id/switchViewOn\"\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:layout_marginStart=\"140dp\"\n android:layout_marginBottom=\"380dp\"\n android:text=\"@string/switch_on\"\n app:thumb=\"@drawable/switch_thumb\"\n app:track=\"@drawable/switch_track\"\n app:layout_constraintBottom_toBottomOf=\"parent\"\n app:layout_constraintStart_toStartOf=\"parent\"\n app:layout_constraintTop_toBottomOf=\"@+id/switchViewOff\"\n tools:ignore=\"UseSwitchCompatOrMaterialXml\" />\n\n <androidx.appcompat.widget.SwitchCompat\n\n android:id=\"@+id/switchViewOff\"\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:layout_marginTop=\"188dp\"\n android:text=\"@string/switch_off\"\n app:thumb=\"@drawable/switch_thumb\"\n app:track=\"@drawable/switch_track\"\n app:layout_constraintEnd_toEndOf=\"@+id/switchViewOn\"\n app:layout_constraintStart_toStartOf=\"@+id/switchViewOn\"\n app:layout_constraintTop_toTopOf=\"parent\"\n tools:ignore=\"UseSwitchCompatOrMaterialXml\" />\n\n</androidx.constraintlayout.widget.ConstraintLayout>\n\n" ]
[ 0 ]
[]
[]
[ "android", "uiswitch", "view" ]
stackoverflow_0074672460_android_uiswitch_view.txt
Q: npm run dev not working with vite laravel 9 users-iMac-2:backend NEHAL$ npm run dev > dev > vite file:///Users/user/Desktop/backend/node_modules/vite/bin/vite.js:7 await import('source-map-support').then((r) => r.default.install()) ^^^^^ SyntaxError: Unexpected reserved word at Loader.moduleStrategy (internal/modules/esm/translators.js:122:18) at async link (internal/modules/esm/module_job.js:42:21) users-iMac-2:backend NEHAL$ A: Same problem. Updated node to v16.16.0 and it worked. A: I had the same problem; the installed version of NodeJS on your OS is incompatible with vite; mine was v12.22.9; upgrade yours. If you're using a debian-based OS, run the following. curl -fsSL https://deb.nodesource.com/setup_18.x | sudo -E bash - sudo apt-get install -y nodejs A: Primarily, you try installing Node.js like v16/v18. you got rid of the error message on Ubuntu. Err:10 https://ppa.launchpadcontent.net/certbot/certbot/ubuntu jammy Release 404 Not Found [IP: 2620:2d:4000:1::3e 443] Reading package lists... Done W: https://repos.insights.digitalocean.com/apt/do-agent/dists/main/InRelease: Key is , see the DEPRECATION section in apt-key(8) for details. E: The repository 'https://ppa.launchpadcontent.net/certbot/certbot/ubuntu jammy Rele N: Updating from such a repository can't be done securely, and is therefore disabled N: See apt-secure(8) manpage for repository creation and user configuration details. Error executing command, exiting Used these commands to solve the ERROR : sudo apt-add-repository -r ppa:certbot/certbot After that, the following commands do not generate any errors: sudo apt update sudo apt-get update Then install Nodejs from the Official github repo and make Vuejs build from npx vite build for Npm build from npm run build. A: Use yarn instead First remove package-lock.json and node_modules folder. rm -rf node_modules/ package-lock.json Then run: yarn install Run build command: yarn build # or yran dev A: Because you are using lower Nodejs version, you can install multi Nodejs version and change between them In my case I install v14.0.0 v16.0.0 v18.0.0 In your case you should use version v16.0.0 OR v18.0.0 run terminal as admin and run this command to use a specific version: nvm use v18.0.0 And rerun command again npm run dev It works without any problem
npm run dev not working with vite laravel 9
users-iMac-2:backend NEHAL$ npm run dev > dev > vite file:///Users/user/Desktop/backend/node_modules/vite/bin/vite.js:7 await import('source-map-support').then((r) => r.default.install()) ^^^^^ SyntaxError: Unexpected reserved word at Loader.moduleStrategy (internal/modules/esm/translators.js:122:18) at async link (internal/modules/esm/module_job.js:42:21) users-iMac-2:backend NEHAL$
[ "Same problem. Updated node to v16.16.0 and it worked.\n", "I had the same problem; the installed version of NodeJS on your OS is incompatible with vite; mine was v12.22.9; upgrade yours.\nIf you're using a debian-based OS, run the following.\ncurl -fsSL https://deb.nodesource.com/setup_18.x | sudo -E bash -\nsudo apt-get install -y nodejs\n\n", "Primarily, you try installing Node.js like v16/v18. you got rid of the error message on Ubuntu.\nErr:10 https://ppa.launchpadcontent.net/certbot/certbot/ubuntu jammy Release\n 404 Not Found [IP: 2620:2d:4000:1::3e 443]\nReading package lists... Done\nW: https://repos.insights.digitalocean.com/apt/do-agent/dists/main/InRelease: Key is , see the DEPRECATION section in apt-key(8) for details.\nE: The repository 'https://ppa.launchpadcontent.net/certbot/certbot/ubuntu jammy Rele\nN: Updating from such a repository can't be done securely, and is therefore disabled \nN: See apt-secure(8) manpage for repository creation and user configuration details.\nError executing command, exiting\n\nUsed these commands to solve the ERROR :\nsudo apt-add-repository -r ppa:certbot/certbot\nAfter that, the following commands do not generate any errors:\nsudo apt update\nsudo apt-get update\n\nThen install Nodejs from the Official github repo\nand make Vuejs build from npx vite build\nfor Npm build from npm run build.\n", "Use yarn instead\nFirst remove package-lock.json and node_modules folder.\nrm -rf node_modules/ package-lock.json\n\nThen run:\nyarn install \n\nRun build command:\nyarn build # or yran dev\n\n", "Because you are using lower Nodejs version, you can install multi Nodejs version and change between them In my case I install\nv14.0.0\nv16.0.0\nv18.0.0\n\nIn your case you should use version v16.0.0 OR v18.0.0 run terminal as admin and run this command to use a specific version:\nnvm use v18.0.0\n\nAnd rerun command again\nnpm run dev\n\nIt works without any problem\n" ]
[ 8, 7, 1, 0, 0 ]
[]
[]
[ "laravel_9", "npm", "vite" ]
stackoverflow_0073048645_laravel_9_npm_vite.txt
Q: Is it a good practice to reuse components angular There is a component in my project of table I need to display this table in several places Is it a good practice to use this component while having to override styles and sometimes even adding columns or modifying names or do you have a better practice to acheive this. like inheritance for example. Thanks A: In Angular, it's typically regarded as a good practice to reuse components. By avoiding repetitive code through the use of components, you may improve the maintainability and updateability of your program. There are numerous methods you can use when you need to reuse a component but modify its styling or other features. One method is to extend the original component by creating a new component using Angular's component inheritance capability. By doing so, you can override the old component's styles or other properties in the new component while still maintaining its functionality and behavior. Another strategy is to introduce extra content into the original component using Angular's content projection functionality. This enables you to change the names of existing columns or add new ones without changing the original component. Overall, your particular demands and the way your application is designed will determine whether you employ component inheritance or content projection. You must choose which method is the best fit for your project out of the two options for reusing components in Angular.
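To make the content-projection option above concrete, here is a minimal sketch. The component and selector names (app-panel, PanelComponent) are made up for illustration, not taken from the question; the same pattern applies to a table component, though projecting real <tr>/<td> elements usually needs attribute selectors or ngProjectAs because of how browsers parse table markup.

import { Component } from '@angular/core';

// Reusable wrapper that projects caller-supplied content into named slots.
@Component({
  selector: 'app-panel',
  template: `
    <div class="panel">
      <div class="panel-header"><ng-content select="[header]"></ng-content></div>
      <div class="panel-body"><ng-content></ng-content></div>
    </div>
  `,
})
export class PanelComponent {}

// Usage in a consumer template — each page supplies its own labels and extra markup:
// <app-panel>
//   <span header>Orders</span>
//   <p>Any columns, names, or styles specific to this page.</p>
// </app-panel>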
Is it a good practice to reuse components in Angular?
There is a table component in my project, and I need to display this table in several places. Is it a good practice to reuse this component while having to override styles and sometimes even add columns or modify names, or do you have a better practice to achieve this, like inheritance for example? Thanks
[ "In Angular, it's typically regarded as a good practice to reuse components. By avoiding repetitive code through the use of components, you may improve the maintainability and updateability of your program.\nThere are numerous methods you can use when you need to reuse a component but modify its styling or other features. One method is to extend the original component by creating a new component using Angular's component inheritance capability. By doing so, you can override the old component's styles or other properties in the new component while still maintaining its functionality and behavior.\nAnother strategy is to introduce extra content into the original component using Angular's content projection functionality. This enables you to change the names of existing columns or add new ones without changing the original component.\nOverall, your particular demands and the way your application is designed will determine whether you employ component inheritance or content projection. You must choose which method is the best fit for your project out of the two options for reusing components in Angular.\n" ]
[ 0 ]
[]
[]
[ "angular" ]
stackoverflow_0074673443_angular.txt
Q: Ubuntu 20.04 GDM3 Root Login GUI How Can I Disable Security Notification firstly sorry for my english because im trying to explain my question yours. System Details; Ubuntu 20.04 GDM3 (GUI) Installed VNC Connection Installed Root Login Enabled for login screen & SSH Automatic Login Enabled login screen everythings good working but. on internet everywhere i checked and ubuntu sources i checked no have about this information to disable when login after notification about security message Message is ; title : ' Logged in as a privileged user ', message : ' Running a session as a privileged user should be avoided for security reasons. If possible, you should log in as a normal user. ' I Want To Disable This Notification Message. Please if you know can you help me my colleagues A: This is baked into gnome. For anyone who stumble here, this is what I did on Ubuntu 22.04: Follow the general method described here to make any change to gnome js files. In this case, the file we need is ui/main.js. Replace this chunk let credentials = new Gio.Credentials(); if (credentials.get_unix_user() === 0) { notify(_('Logged in as a privileged user'), _('Running a session as a privileged user should be avoided for security reasons. If possible, you should log in as a normal user.')); } else if (sessionMode.showWelcomeDialog) { _handleShowWelcomeScreen(); } with if (sessionMode.showWelcomeDialog) { _handleShowWelcomeScreen(); } Disclaimer: Running a session as a privileged user should be avoided for security reasons. If possible, you should log in as a normal user.
Ubuntu 20.04 GDM3 Root Login GUI How Can I Disable Security Notification
First of all, sorry for my English; I will try my best to explain my question.
System details:
Ubuntu 20.04 with GDM3 (GUI) installed
VNC connection installed
Root login enabled for the login screen & SSH
Automatic login enabled on the login screen
Everything is working fine, but after logging in I get a security notification, and I could not find anything anywhere on the internet or in the Ubuntu sources about disabling it.
The message is:
title : ' Logged in as a privileged user ',
message : ' Running a session as a privileged user should be avoided for security reasons. If possible, you should log in as a normal user. '
I want to disable this notification message. Please help me if you know how, colleagues.
[ "This is baked into gnome. For anyone who stumble here, this is what I did on Ubuntu 22.04:\nFollow the general method described here to make any change to gnome js files.\nIn this case, the file we need is ui/main.js. Replace this chunk\n let credentials = new Gio.Credentials();\n if (credentials.get_unix_user() === 0) {\n notify(_('Logged in as a privileged user'),\n _('Running a session as a privileged user should be avoided for security reasons. If possible, you should log in as a normal user.'));\n } else if (sessionMode.showWelcomeDialog) {\n _handleShowWelcomeScreen();\n }\n \n\nwith\nif (sessionMode.showWelcomeDialog) {\n _handleShowWelcomeScreen();\n}\n\nDisclaimer: Running a session as a privileged user should be avoided for security reasons. If possible, you should log in as a normal user.\n" ]
[ 0 ]
[]
[]
[ "gdm", "gnome", "root", "security_policy", "ubuntu" ]
stackoverflow_0073043884_gdm_gnome_root_security_policy_ubuntu.txt
Q: How to configure transaction management for different read/write data sources in Spring Batch We are using Spring Batch with 2 data sources: 1 for reading (source db), 1 for writing (destination db). Spring Batch is configured to use the destination data source/transaction manager for the JobRepository and JobExplorer: @EnableBatchProcessing(transactionManagerRef = "destinationTransactionManager", dataSourceRef = "destinationDataSource") For the job config, JpaCursorItemReader is configured to use the EntityManagerFactory that belongs to the source db (with a PlatformTransactionManager belonging to the source db). JpaItemWriter is configured to use the EntityManagerFactory and PlatformTransactionManager that belongs to the destination db. This PlatformTransactionManager is the same one that is being used in @EnableBatchProcessing. Our chunk-oriented step uses the PlatformTransactionManager that belongs to the destination db (the same one that is being used in @EnableBatchProcessing). My question is: is this a correct setup (especially regarding transaction management)? It hasn't been giving us any problems so far. I'm a bit concerned since the reader side uses a different data source. My assumption is that this should work, since the PlatformTransactionManager of the chunk is the same one that is being used for the JobRepository and JpaItemWriter. So I'm assuming that when something fails, rollbacking progress (in the metadata tables) and written items should at least work, since they are using the same data source and transaction manager. Moreover, JpaCursorItemReader doesn't seem to be transaction aware. Our configuration looks like this (slightly modified to omit domain language): @Configuration @AllArgsConstructor @EnableBatchProcessing(transactionManagerRef = "destinationTransactionManager", dataSourceRef = "destinationDataSource") public class JobConfiguration { @Bean public JpaCursorItemReader<SourceEntity> sourceReader( @Qualifier("sourceEntityManagerFactory") final LocalContainerEntityManagerFactoryBean sourceEntityManagerFactory ) { return new JpaCursorItemReaderBuilder<SourceEntity>() .name("SourceEntity") .entityManagerFactory(Objects.requireNonNull(sourceEntityManagerFactory.getObject())) .queryString("from SourceEntity") .build(); } @Bean public JpaItemWriter<DestinationEntity> destinationWriter( @Qualifier("destinationEntityManagerFactory") final LocalContainerEntityManagerFactoryBean destinationEntityManagerFactory ) { return new JpaItemWriterBuilder<DestinationEntity>() .entityManagerFactory(Objects.requireNonNull(destinationEntityManagerFactory.getObject())) .build(); } @Bean public Step step( @Qualifier("sourceReader") final JpaCursorItemReader<SourceEntity> reader, @Qualifier("destinationWriter") final JpaItemWriter<DestinationEntity> writer, final CustomProcessor processor, // implementation omitted for brevity @Qualifier("destinationTransactionManager") final PlatformTransactionManager transactionManager, final JobRepository jobRepository ) { return new StepBuilder("step", jobRepository) .<SourceEntity, DestinationEntity>chunk(10, transactionManager) .reader(reader) .processor(processor) .writer(writer) .build(); } @Bean public Job job(final Step step, final JobRepository jobRepository) { return new JobBuilder("job", jobRepository) .incrementer(new RunIdIncrementer()) .flow(step) .end() .build(); } } This works as expected, but I want to know if this is a correct setup regarding tx management. 
A: For this kind of configurations, (reading/writing from/to different data sources, or writing to two different transactional resources), the biggest risk is that your business data and technical meta-data get out of sync in case of failure. What you need to do is configure Spring Batch with a JtaTransactionManager that coordinates the transactions managers of all transactional resources involved in the same step. What Spring Batch guarantees if correctly configured, is that the reader, writer and job repository interactions are executed in the same transaction, so that both business data and technical meta-data get committed or rolled back as a unit.
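A rough sketch of that JTA wiring is shown below. It is only an outline under stated assumptions: a JTA provider (e.g. Atomikos or Narayana) is on the classpath, both entity manager factories are configured against JTA/XA-aware data sources, and the bean names are illustrative rather than taken from the question.

@Bean
public JtaTransactionManager jtaTransactionManager() {
    // With a standalone provider you would typically pass its UserTransaction and
    // TransactionManager into the constructor; Spring Boot JTA starters usually
    // auto-configure this bean for you.
    return new JtaTransactionManager();
}

@Bean
public Step step(final JpaCursorItemReader<SourceEntity> reader,
                 final JpaItemWriter<DestinationEntity> writer,
                 final CustomProcessor processor,
                 final JtaTransactionManager jtaTransactionManager,
                 final JobRepository jobRepository) {
    // Same chunk-oriented step as in the question, but driven by the JTA transaction
    // manager so reader, writer and job repository commit or roll back together.
    return new StepBuilder("step", jobRepository)
            .<SourceEntity, DestinationEntity>chunk(10, jtaTransactionManager)
            .reader(reader)
            .processor(processor)
            .writer(writer)
            .build();
}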
How to configure transaction management for different read/write data sources in Spring Batch
We are using Spring Batch with 2 data sources: 1 for reading (source db), 1 for writing (destination db). Spring Batch is configured to use the destination data source/transaction manager for the JobRepository and JobExplorer: @EnableBatchProcessing(transactionManagerRef = "destinationTransactionManager", dataSourceRef = "destinationDataSource") For the job config, JpaCursorItemReader is configured to use the EntityManagerFactory that belongs to the source db (with a PlatformTransactionManager belonging to the source db). JpaItemWriter is configured to use the EntityManagerFactory and PlatformTransactionManager that belongs to the destination db. This PlatformTransactionManager is the same one that is being used in @EnableBatchProcessing. Our chunk-oriented step uses the PlatformTransactionManager that belongs to the destination db (the same one that is being used in @EnableBatchProcessing). My question is: is this a correct setup (especially regarding transaction management)? It hasn't been giving us any problems so far. I'm a bit concerned since the reader side uses a different data source. My assumption is that this should work, since the PlatformTransactionManager of the chunk is the same one that is being used for the JobRepository and JpaItemWriter. So I'm assuming that when something fails, rollbacking progress (in the metadata tables) and written items should at least work, since they are using the same data source and transaction manager. Moreover, JpaCursorItemReader doesn't seem to be transaction aware. Our configuration looks like this (slightly modified to omit domain language): @Configuration @AllArgsConstructor @EnableBatchProcessing(transactionManagerRef = "destinationTransactionManager", dataSourceRef = "destinationDataSource") public class JobConfiguration { @Bean public JpaCursorItemReader<SourceEntity> sourceReader( @Qualifier("sourceEntityManagerFactory") final LocalContainerEntityManagerFactoryBean sourceEntityManagerFactory ) { return new JpaCursorItemReaderBuilder<SourceEntity>() .name("SourceEntity") .entityManagerFactory(Objects.requireNonNull(sourceEntityManagerFactory.getObject())) .queryString("from SourceEntity") .build(); } @Bean public JpaItemWriter<DestinationEntity> destinationWriter( @Qualifier("destinationEntityManagerFactory") final LocalContainerEntityManagerFactoryBean destinationEntityManagerFactory ) { return new JpaItemWriterBuilder<DestinationEntity>() .entityManagerFactory(Objects.requireNonNull(destinationEntityManagerFactory.getObject())) .build(); } @Bean public Step step( @Qualifier("sourceReader") final JpaCursorItemReader<SourceEntity> reader, @Qualifier("destinationWriter") final JpaItemWriter<DestinationEntity> writer, final CustomProcessor processor, // implementation omitted for brevity @Qualifier("destinationTransactionManager") final PlatformTransactionManager transactionManager, final JobRepository jobRepository ) { return new StepBuilder("step", jobRepository) .<SourceEntity, DestinationEntity>chunk(10, transactionManager) .reader(reader) .processor(processor) .writer(writer) .build(); } @Bean public Job job(final Step step, final JobRepository jobRepository) { return new JobBuilder("job", jobRepository) .incrementer(new RunIdIncrementer()) .flow(step) .end() .build(); } } This works as expected, but I want to know if this is a correct setup regarding tx management.
[ "For this kind of configurations, (reading/writing from/to different data sources, or writing to two different transactional resources), the biggest risk is that your business data and technical meta-data get out of sync in case of failure.\nWhat you need to do is configure Spring Batch with a JtaTransactionManager that coordinates the transactions managers of all transactional resources involved in the same step.\nWhat Spring Batch guarantees if correctly configured, is that the reader, writer and job repository interactions are executed in the same transaction, so that both business data and technical meta-data get committed or rolled back as a unit.\n" ]
[ 0 ]
[]
[]
[ "java", "spring_batch", "spring_boot" ]
stackoverflow_0074612772_java_spring_batch_spring_boot.txt
Q: Access DB Context in program.cs Is there a way to access the DB context of a .NET Core application in the program.cs file? I am basically looking to configure Kestrel with specific options that are stored in the database so I would need access to the database context. I am basically trying to do something like this: WebHost.CreateDefaultBuilder(args) .UseStartup<Startup>() .UseSentry() .UseKestrel(opts => { opts.Listen(IPAddress.Any, 443, listenOptions => { var storedCert = _db.Certificates.First(c => c.Id == 1); var certBytes = Convert.FromBase64String(storedCert.CertificatePfx); var certPassword = storedCert.CertificatePassword; var cert = new X509Certificate2(certBytes, certPassword); listenOptions.UseHttps(cert); }); }); A: The trick is about how to create a scoped service within a singleton : public static void Main(string[] args) { CreateWebHostBuilder(args).Build().Run(); } public static IWebHostBuilder CreateWebHostBuilder(string[] args) => WebHost.CreateDefaultBuilder(args) .UseStartup<Startup>() .UseKestrel(opt => { var sp = opt.ApplicationServices; using(var scope = sp.CreateScope() ){ var dbContext=scope.ServiceProvider.GetService<AppDbContext>(); var e= dbContext.Certificates.FirstOrDefault(); // now you get the certificates } }); } A: Try to do the following: var host = CreateWebHostBuilder(args).Build(); var scope = host.Services.CreateScope(); var ctx = scope.ServiceProvider.GetRequiredService<MyDbContext>(); //get a new WebHostBuilder CreateWebHostBuilder(args) //Configure here using the ctx .Build() .Run(); A: Yes you can get service using following code var host = BuildWebHost(args); DbContext context = host.Services.GetService<DbContext>(); and code to register your DbContext in startup.cs is as follows services.AddDbContext<DbContext>(options => options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection"))); A: I figured out how to go about doing this. You need to manually build up the configuration for the DB context then instantiate the context, then you are able to use it. Here is the code in case anyone is in this position. public static IWebHostBuilder CreateWebHostBuilder(string[] args) => WebHost.CreateDefaultBuilder(args) .UseStartup<Startup>() .UseSentry() .UseKestrel(opts => { opts.Listen(IPAddress.Any, 443, listenOpts => { //Create the configuration to read from appsettings.json var configuration = new ConfigurationBuilder().AddJsonFile( Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "appsettings.json")) .Build(); //Create the DB Context options var optionsBuilder = new DbContextOptionsBuilder<DBContext>() .UseSqlServer(configuration["ConnectionString:Development"]); //Create a new database context var context = new DBContext(optionsBuilder.Options); //Get the certificate var certificate = context.Certificates.First(); }); }); A: Well the below code worked for me for .net 6 using (var scope = app.Services.CreateScope()) { var service = scope.ServiceProvider; var context = service.GetService<MyAppDbContext>(); } Using this you can use not only get dbcontext but any other service. Consider below example for usermanager object of identity class: var userManager = services.GetRequiredService<UserManager<ApplicationUser>>(); A: for build deb context and program.cs - builder.Services.AddDbContext(option => option.UseSqlServer(builder.Configuration.GetConnectionString("DBCS")));
Access DB Context in program.cs
Is there a way to access the DB context of a .NET Core application in the program.cs file? I am basically looking to configure Kestrel with specific options that are stored in the database so I would need access to the database context. I am basically trying to do something like this: WebHost.CreateDefaultBuilder(args) .UseStartup<Startup>() .UseSentry() .UseKestrel(opts => { opts.Listen(IPAddress.Any, 443, listenOptions => { var storedCert = _db.Certificates.First(c => c.Id == 1); var certBytes = Convert.FromBase64String(storedCert.CertificatePfx); var certPassword = storedCert.CertificatePassword; var cert = new X509Certificate2(certBytes, certPassword); listenOptions.UseHttps(cert); }); });
[ "The trick is about how to create a scoped service within a singleton :\n\n public static void Main(string[] args)\n {\n CreateWebHostBuilder(args).Build().Run();\n }\n\n public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>\n WebHost.CreateDefaultBuilder(args)\n .UseStartup<Startup>()\n .UseKestrel(opt => {\n var sp = opt.ApplicationServices;\n using(var scope = sp.CreateScope() ){\n var dbContext=scope.ServiceProvider.GetService<AppDbContext>();\n var e= dbContext.Certificates.FirstOrDefault();\n // now you get the certificates\n }\n });\n }\n\n", "Try to do the following: \nvar host = CreateWebHostBuilder(args).Build();\n\nvar scope = host.Services.CreateScope();\n\nvar ctx = scope.ServiceProvider.GetRequiredService<MyDbContext>(); \n\n//get a new WebHostBuilder\nCreateWebHostBuilder(args)\n//Configure here using the ctx\n.Build()\n.Run();\n\n", "Yes you can get service using following code\n var host = BuildWebHost(args);\n\nDbContext context = host.Services.GetService<DbContext>();\n\nand code to register your DbContext in startup.cs is as follows\nservices.AddDbContext<DbContext>(options => options.UseSqlServer(Configuration.GetConnectionString(\"DefaultConnection\")));\n\n", "I figured out how to go about doing this. You need to manually build up the configuration for the DB context then instantiate the context, then you are able to use it. Here is the code in case anyone is in this position.\npublic static IWebHostBuilder CreateWebHostBuilder(string[] args) =>\n WebHost.CreateDefaultBuilder(args)\n .UseStartup<Startup>()\n .UseSentry()\n .UseKestrel(opts =>\n {\n opts.Listen(IPAddress.Any, 443, listenOpts =>\n {\n //Create the configuration to read from appsettings.json\n var configuration = new ConfigurationBuilder().AddJsonFile(\n Path.Combine(AppDomain.CurrentDomain.BaseDirectory, \"appsettings.json\"))\n .Build();\n\n //Create the DB Context options\n var optionsBuilder = new DbContextOptionsBuilder<DBContext>()\n .UseSqlServer(configuration[\"ConnectionString:Development\"]);\n\n //Create a new database context\n var context = new DBContext(optionsBuilder.Options);\n\n //Get the certificate\n var certificate = context.Certificates.First();\n });\n });\n\n", "Well the below code worked for me for .net 6\nusing (var scope = app.Services.CreateScope())\n{\n var service = scope.ServiceProvider;\n var context = service.GetService<MyAppDbContext>();\n}\n\nUsing this you can use not only get dbcontext but any other service.\nConsider below example for usermanager object of identity class:\nvar userManager = services.GetRequiredService<UserManager<ApplicationUser>>();\n\n", "for build deb context and program.cs -\nbuilder.Services.AddDbContext(option =>\noption.UseSqlServer(builder.Configuration.GetConnectionString(\"DBCS\")));\n" ]
[ 2, 1, 0, 0, 0, 0 ]
[]
[]
[ "asp.net_core", "c#", "kestrel_http_server" ]
stackoverflow_0055039647_asp.net_core_c#_kestrel_http_server.txt
Q: how can i save a stream of Json data to a text file in c# windows form? so ive got a stream of data incoming as a Json file and I'm trying to save it to a text file, I've got it working here below however, when i check the file, it only has the last Json message received saved, I am trying to get it so that once it saves a line it goes onto a new line and prints the latest Json message below. at the moment it will print let's say 1000 lines but they are all the same and they match the latest Json received. Any help would be much appreciated. void ReceiveData() //This function is used to listen for messages from the flight simulator { while (true) { NetworkStream stream = client.GetStream(); //sets the newtwork stream to the client's stream byte[] buffer = new byte[256]; //Defines the max amount of bytes that can be sent int bytesRead = stream.Read(buffer, 0, buffer.Length); if (bytesRead > 0) { string jsonreceived = Encoding.ASCII.GetString(buffer, 0, bytesRead); //Converts the recieved data into ASCII for the json variable JavaScriptSerializer serializer = new JavaScriptSerializer(); TelemetryUpdate telemetry = serializer.Deserialize<TelemetryUpdate>(jsonreceived); this.Invoke(new Action(() => { TelemetryReceivedLabel.Text = jsonreceived; })) ; Updatelabels(telemetry); //runs the update labels function with the telemetry data as an arguement File.Delete(@"c:\temp\BLACKBOX.txt"); // this deletes the original file string path = @"c:\temp\BLACKBOX.txt"; //this stores the path of the file in a string using (StreamWriter sw = File.CreateText(path)) // Create a file to write to. { for (int i = 0; i<10000; i++) { sw.Write(jsonreceived.ToString()); //writes the json data to the file } } } } A: As per the .NET documentation for File.CreateText: Creates or opens a file for writing UTF-8 encoded text. If the file already exists, its contents are overwritten. So, every time you call File.CreateText you're creating a new StreamWriter that's going to overwrite the contents of your file. Try using File.AppendText instead to pick up where you left off.
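As a minimal sketch of the File.AppendText approach suggested in the answer (reusing the question's path and jsonreceived variable; the File.Delete call and the 10000-iteration loop are simply dropped):

string path = @"c:\temp\BLACKBOX.txt";

// Append one line per received JSON message instead of deleting and
// recreating the file on every read.
using (StreamWriter sw = File.AppendText(path))
{
    sw.WriteLine(jsonreceived);
}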
How can I save a stream of JSON data to a text file in a C# Windows Forms app?
so ive got a stream of data incoming as a Json file and I'm trying to save it to a text file, I've got it working here below however, when i check the file, it only has the last Json message received saved, I am trying to get it so that once it saves a line it goes onto a new line and prints the latest Json message below. at the moment it will print let's say 1000 lines but they are all the same and they match the latest Json received. Any help would be much appreciated. void ReceiveData() //This function is used to listen for messages from the flight simulator { while (true) { NetworkStream stream = client.GetStream(); //sets the newtwork stream to the client's stream byte[] buffer = new byte[256]; //Defines the max amount of bytes that can be sent int bytesRead = stream.Read(buffer, 0, buffer.Length); if (bytesRead > 0) { string jsonreceived = Encoding.ASCII.GetString(buffer, 0, bytesRead); //Converts the recieved data into ASCII for the json variable JavaScriptSerializer serializer = new JavaScriptSerializer(); TelemetryUpdate telemetry = serializer.Deserialize<TelemetryUpdate>(jsonreceived); this.Invoke(new Action(() => { TelemetryReceivedLabel.Text = jsonreceived; })) ; Updatelabels(telemetry); //runs the update labels function with the telemetry data as an arguement File.Delete(@"c:\temp\BLACKBOX.txt"); // this deletes the original file string path = @"c:\temp\BLACKBOX.txt"; //this stores the path of the file in a string using (StreamWriter sw = File.CreateText(path)) // Create a file to write to. { for (int i = 0; i<10000; i++) { sw.Write(jsonreceived.ToString()); //writes the json data to the file } } } }
[ "As per the .NET documentation for File.CreateText:\n\nCreates or opens a file for writing UTF-8 encoded text. If the file already exists, its contents are overwritten.\n\nSo, every time you call File.CreateText you're creating a new StreamWriter that's going to overwrite the contents of your file. Try using File.AppendText instead to pick up where you left off.\n" ]
[ 0 ]
[]
[]
[ "c#", "json", "winforms" ]
stackoverflow_0074668729_c#_json_winforms.txt
Q: How to add images to Git webpage? I have the images in the repository, and I've linked to them via URL in my HTML file, but when I check the webpage on Github, the images do not appear. Am I doing something wrong? How is it usually done? Name and info redacted for my privacy. I of course immediately ruled out the mistake of linking the images to locations on my PC I uploaded the images, then changed the link to them in the HTML file, which worked. A: Could you check your naming convention? If anyone has the same problem. GitHub Pages are case-sensitive. Not only for folders but also for image names.
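A small illustration of the case-sensitivity point from the answer (file names here are made up): a link that works on a case-insensitive local filesystem can 404 on GitHub Pages if the case does not match the committed file exactly.

<!-- committed file: images/Logo.PNG -->
<img src="images/logo.png" alt="Logo">   <!-- may work locally, broken on GitHub Pages -->
<img src="images/Logo.PNG" alt="Logo">   <!-- matches the repository path exactly -->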
How to add images to Git webpage?
I have the images in the repository, and I've linked to them via URL in my HTML file, but when I check the webpage on GitHub, the images do not appear. Am I doing something wrong? How is it usually done? (Name and info redacted for my privacy.) I of course immediately ruled out the mistake of linking the images to locations on my PC. I uploaded the images, then changed the links to them in the HTML file, which worked.
[ "Could you check your naming convention?\nIf anyone has the same problem.\nGitHub Pages are case-sensitive. Not only for folders but also for image names.\n" ]
[ 0 ]
[]
[]
[ "css", "github", "html", "web" ]
stackoverflow_0074673440_css_github_html_web.txt
Q: Not able to login to opensearch Dashboard on Windows machine I am using Filebeat-opensearch-Opensearch Dashboard very first time. And I am using it only on local Windows machine. I am able to launch Filebeat,Opensearch[http://localhost:9200/] and Opensearch-dashboards[http://localhost:5601/app/login?]. But not able to login to Opensearch Dashboard. When I enter credentials admin/admin, It says, "[error][plugins][securityDashboards] Failed authentication: Error: no handler found for uri [/_plugins/_security/authinfo] and method [GET]". opensearch-dashboard.yml: server.port: 5601 opensearch.hosts: [http://localhost:9200] opensearch.ssl.verificationMode: none opensearch.username: admin opensearch.password: admin opensearch.requestHeadersWhitelist: [authorization, securitytenant] opensearch_security.multitenancy.enabled: false opensearch_security.cookie.secure: false opensearch.yml: plugins.security.disabled: true Only 1 entry is there in opensearch.yml file. Is there any solution? A: If you are not able to login to the Opensearch Dashboard on a Windows machine, then here are a few things that you can try: Make sure that you have the correct username and password.www.traditionalsoul.com Check if the Opensearch Dashboard service is running on your machine. Check if your internet connection is working properly. If you are using a proxy server, make sure it is configured correctly for the Opensearch Dashboard. Try restarting your machine and try to login again. Try using a different web browser and try to login again. If all else fails, try reinstalling the Opensearch Dashboard application on your machine.
Not able to login to opensearch Dashboard on Windows machine
I am using Filebeat, OpenSearch, and OpenSearch Dashboards for the very first time, and only on a local Windows machine. I am able to launch Filebeat, OpenSearch [http://localhost:9200/] and OpenSearch Dashboards [http://localhost:5601/app/login?], but I am not able to log in to OpenSearch Dashboards. When I enter the credentials admin/admin, it says: "[error][plugins][securityDashboards] Failed authentication: Error: no handler found for uri [/_plugins/_security/authinfo] and method [GET]".
opensearch-dashboard.yml:
server.port: 5601
opensearch.hosts: [http://localhost:9200]
opensearch.ssl.verificationMode: none
opensearch.username: admin
opensearch.password: admin
opensearch.requestHeadersWhitelist: [authorization, securitytenant]
opensearch_security.multitenancy.enabled: false
opensearch_security.cookie.secure: false
opensearch.yml:
plugins.security.disabled: true
There is only one entry in the opensearch.yml file.
Is there any solution?
[ "If you are not able to login to the Opensearch Dashboard on a Windows machine, then here are a few things that you can try:\n\nMake sure that you have the correct username and password.www.traditionalsoul.com\n\nCheck if the Opensearch Dashboard service is running on your machine.\n\nCheck if your internet connection is working properly.\n\nIf you are using a proxy server, make sure it is configured correctly for the Opensearch Dashboard.\n\nTry restarting your machine and try to login again.\n\nTry using a different web browser and try to login again.\n\nIf all else fails, try reinstalling the Opensearch Dashboard application on your machine.\n\n\n" ]
[ 0 ]
[]
[]
[ "opensearch", "opensearch_dashboards" ]
stackoverflow_0074673439_opensearch_opensearch_dashboards.txt
Q: How to concatenate lists into single merged DataFrame from for loop output? I'm tring to pull data by using API, I have a list of IDs from csv,and I use for loop to pull request for each ID, the output is in the form of lists, and I tried to convert them into DataFrame, they come out into seperate DataFrames and I'm not able to merge them into one since they are inside of a for loop. The code looks like this: ================== `` # Read ios id from CSV file data = pd.read_csv('File.csv') ios = (data['ios_id']) ios_data= [] # Convert ios id into a list for i in ios: ios_data.append(i) for id in ios_data: params = { "os": "ios", "app_id": id, "country": "US", "search_term": "kid", "auth_token": AUTH_TOKEN } response = requests.get(BASE_URL, params) # print(response.status_code) raw = response.json() feedback = raw['feedback'] if feedback != []: feedback_dict = feedback[0] df = pd.DataFrame(feedback_dict) print(df) else: pass `` And Output looks like this: content version ... country tags 0 So I love tiles of hop it’s fun but I don’t th... 4.4.0 ... US Family 1 So I love tiles of hop it’s fun but I don’t th... 4.4.0 ... US Love it [2 rows x 9 columns] content ... tags 0 This game is, well, fantastic and I love how B... ... Ads 1 This game is, well, fantastic and I love how B... ... Family 2 This game is, well, fantastic and I love how B... ... Hate it 3 This game is, well, fantastic and I love how B... ... Inappropriate 4 This game is, well, fantastic and I love how B... ... Love it 5 This game is, well, fantastic and I love how B... ... Strenuousness [6 rows x 9 columns] A: You can start by making an empty list/basket, then put in the dataframes collected in every iteration/pull and finally use pandas.concat to make a whole and single dataframe right after the loop. Try this : # Read ios id from CSV file data = pd.read_csv('File.csv') ios_data= data['ios_id'].tolist() list_dfs = [] for id in ios_data: params = { "os": "ios", "app_id": id, "country": "US", "search_term": "kid", "auth_token": AUTH_TOKEN } response = requests.get(BASE_URL, params) # print(response.status_code) raw = response.json() feedback = raw['feedback'] if feedback != []: feedback_dict = feedback[0] df = pd.DataFrame(feedback_dict) list_dfs.append(df) else: pass df_all= pd.concat(list_dfs, ignore_index=True) # Output : print(df_all) content ... tags 0 So I love tiles of hop it’s fun but I don’t th... ... Family 1 So I love tiles of hop it’s fun but I don’t th... ... Love it 2 This game is, well, fantastic and I love how B... ... Ads 3 This game is, well, fantastic and I love how B... ... Family 4 This game is, well, fantastic and I love how B... ... Hate it 5 This game is, well, fantastic and I love how B... ... Inappropriate 6 This game is, well, fantastic and I love how B... ... Love it 7 This game is, well, fantastic and I love how B... ... Strenuousness
How to concatenate lists into single merged DataFrame from for loop output?
I'm tring to pull data by using API, I have a list of IDs from csv,and I use for loop to pull request for each ID, the output is in the form of lists, and I tried to convert them into DataFrame, they come out into seperate DataFrames and I'm not able to merge them into one since they are inside of a for loop. The code looks like this: ================== `` # Read ios id from CSV file data = pd.read_csv('File.csv') ios = (data['ios_id']) ios_data= [] # Convert ios id into a list for i in ios: ios_data.append(i) for id in ios_data: params = { "os": "ios", "app_id": id, "country": "US", "search_term": "kid", "auth_token": AUTH_TOKEN } response = requests.get(BASE_URL, params) # print(response.status_code) raw = response.json() feedback = raw['feedback'] if feedback != []: feedback_dict = feedback[0] df = pd.DataFrame(feedback_dict) print(df) else: pass `` And Output looks like this: content version ... country tags 0 So I love tiles of hop it’s fun but I don’t th... 4.4.0 ... US Family 1 So I love tiles of hop it’s fun but I don’t th... 4.4.0 ... US Love it [2 rows x 9 columns] content ... tags 0 This game is, well, fantastic and I love how B... ... Ads 1 This game is, well, fantastic and I love how B... ... Family 2 This game is, well, fantastic and I love how B... ... Hate it 3 This game is, well, fantastic and I love how B... ... Inappropriate 4 This game is, well, fantastic and I love how B... ... Love it 5 This game is, well, fantastic and I love how B... ... Strenuousness [6 rows x 9 columns]
[ "You can start by making an empty list/basket, then put in the dataframes collected in every iteration/pull and finally use pandas.concat to make a whole and single dataframe right after the loop.\nTry this :\n# Read ios id from CSV file\ndata = pd.read_csv('File.csv')\n\nios_data= data['ios_id'].tolist()\n\nlist_dfs = []\n\nfor id in ios_data:\n params = {\n \"os\": \"ios\",\n \"app_id\": id,\n \"country\": \"US\",\n \"search_term\": \"kid\",\n \"auth_token\": AUTH_TOKEN\n }\n\n response = requests.get(BASE_URL, params)\n # print(response.status_code)\n raw = response.json()\n feedback = raw['feedback']\n if feedback != []:\n feedback_dict = feedback[0]\n df = pd.DataFrame(feedback_dict)\n list_dfs.append(df)\n else:\n pass\n\ndf_all= pd.concat(list_dfs, ignore_index=True)\n\n# Output :\nprint(df_all)\n content ... tags\n0 So I love tiles of hop it’s fun but I don’t th... ... Family\n1 So I love tiles of hop it’s fun but I don’t th... ... Love it\n2 This game is, well, fantastic and I love how B... ... Ads\n3 This game is, well, fantastic and I love how B... ... Family\n4 This game is, well, fantastic and I love how B... ... Hate it\n5 This game is, well, fantastic and I love how B... ... Inappropriate\n6 This game is, well, fantastic and I love how B... ... Love it\n7 This game is, well, fantastic and I love how B... ... Strenuousness\n\n" ]
[ 0 ]
[]
[]
[ "api", "dataframe", "for_loop", "pandas", "python" ]
stackoverflow_0074673399_api_dataframe_for_loop_pandas_python.txt
Q: How do I get this error? I want my character to move with wasd and it prints 100 errors NullReferenceException: Object reference not set to an instance of an object StarterAssets.ThirdPersonController.Move () (at Assets/Scripts/ThirdPersonController.cs:258) StarterAssets.ThirdPersonController.Update () (at Assets/Scripts/ThirdPersonController.cs:161) from 155 to 161 line: private void Update() { _hasAnimator = TryGetComponent(out _animator); JumpAndGravity(); GroundedCheck(); Move(); from 257 to 265 { _targetRotation = Mathf.Atan2(inputDirection.x, inputDirection.z) * Mathf.Rad2Deg + _mainCamera.transform.eulerAngles.y; float rotation = Mathf.SmoothDampAngle(transform.eulerAngles.y, _targetRotation, ref _rotationVelocity, RotationSmoothTime); // rotate to face input direction relative to camera position transform.rotation = Quaternion.Euler(0.0f, rotation, 0.0f); } What causes the error? A: In Unity, the most times i see this Exception, it is because I did not set a public variable or a [SerializeField] in the Inspector. If you then try to use it in your script, this Error occures. A: Errors like these just points that you are either accessing a private/non-existent variable/method. Check whether if you declare public on your variables.
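A generic sketch of the Inspector-reference point made in the answers. Judging from the quoted line 258, _mainCamera is the likely null reference; the script and field below are illustrative, not the actual StarterAssets code.

using UnityEngine;

public class ExampleController : MonoBehaviour
{
    // Shown in the Inspector; if it is left as "None", every use of it
    // throws NullReferenceException at runtime.
    [SerializeField] private GameObject _mainCamera;

    private void Awake()
    {
        // Optional fallback so a missing Inspector assignment is caught early.
        if (_mainCamera == null)
        {
            _mainCamera = GameObject.FindGameObjectWithTag("MainCamera");
        }
    }
}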
Why do I get this error? I want my character to move with WASD and it prints 100 errors
NullReferenceException: Object reference not set to an instance of an object StarterAssets.ThirdPersonController.Move () (at Assets/Scripts/ThirdPersonController.cs:258) StarterAssets.ThirdPersonController.Update () (at Assets/Scripts/ThirdPersonController.cs:161) from 155 to 161 line: private void Update() { _hasAnimator = TryGetComponent(out _animator); JumpAndGravity(); GroundedCheck(); Move(); from 257 to 265 { _targetRotation = Mathf.Atan2(inputDirection.x, inputDirection.z) * Mathf.Rad2Deg + _mainCamera.transform.eulerAngles.y; float rotation = Mathf.SmoothDampAngle(transform.eulerAngles.y, _targetRotation, ref _rotationVelocity, RotationSmoothTime); // rotate to face input direction relative to camera position transform.rotation = Quaternion.Euler(0.0f, rotation, 0.0f); } What causes the error?
[ "In Unity, the most times i see this Exception, it is because I did not set a public variable or a [SerializeField] in the Inspector.\nIf you then try to use it in your script, this Error occures.\n", "Errors like these just points that you are either accessing a private/non-existent variable/method. Check whether if you declare public on your variables.\n" ]
[ 0, 0 ]
[]
[]
[ "unity3d" ]
stackoverflow_0074665768_unity3d.txt
Q: File path error Cannot open file, path = 'lib/assets/posts.csv' (OS Error: No such file or directory, errno = 2) Not sure exactly what's going wrong here, but I've added my assets directory in my project folder and added it to the pubspec.yaml but when I try to read the csv file it get the above error. final posts = File('assets/posts.csv').readAsLinesSync().map((lines) { final parts = lines.split(','); return Post( title: parts[0], numDownVotes: int.tryParse(parts[1]), numUpVotes: int.tryParse(parts[2]), ); } ).toList(); A: How often and for what reason does the csv file get updated? Given that your code refers to down and up votes, I guess that it is changed often. In which case it is not an asset. The assets folder contains things that are incorporated into your app only when it is built. It is not used when you run the app, the csv file needs to be written to and read from the devices local storage....as @gwhyyy indicates.
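To make the answer's distinction concrete, here is a rough sketch of both options; it assumes the path_provider package for the second one, and the file name follows the question.

import 'dart:io';
import 'package:flutter/services.dart' show rootBundle;
import 'package:path_provider/path_provider.dart';

// Option 1: posts.csv is a read-only asset bundled at build time
// (declared under flutter/assets in pubspec.yaml).
Future<List<String>> readBundledPosts() async {
  final raw = await rootBundle.loadString('assets/posts.csv');
  return raw.split('\n');
}

// Option 2: the file changes at runtime, so keep it in local storage instead.
Future<File> localPostsFile() async {
  final dir = await getApplicationDocumentsDirectory();
  return File('${dir.path}/posts.csv');
}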
File path error Cannot open file, path = 'lib/assets/posts.csv' (OS Error: No such file or directory, errno = 2)
I'm not sure exactly what's going wrong here. I've added my assets directory to my project folder and added it to pubspec.yaml, but when I try to read the CSV file I get the error above.
final posts = File('assets/posts.csv').readAsLinesSync().map((lines) {
    final parts = lines.split(',');
    return Post(
      title: parts[0],
      numDownVotes: int.tryParse(parts[1]),
      numUpVotes: int.tryParse(parts[2]),
      );
  }
  ).toList();
[ "How often and for what reason does the csv file get updated? Given that your code refers to down and up votes, I guess that it is changed often. In which case it is not an asset. The assets folder contains things that are incorporated into your app only when it is built. It is not used when you run the app, the csv file needs to be written to and read from the devices local storage....as @gwhyyy indicates.\n" ]
[ 0 ]
[]
[]
[ "csv", "dart", "flutter" ]
stackoverflow_0074672267_csv_dart_flutter.txt
Q: Is using unsigned short instead of int any better? I need to iterate over an array with 25,000 items at most. Does using an iterator of type unsigned short (instead of int) lead to any improvement, such as faster run times, less memory, or more readability? (I know that 25,000 is not "much" in terms of programming - but I still want to be as efficient as possible). The question is translated so that, assuming I have #define SIZE 25000 Is for (int i = 0; i < SIZE; i++) any better than for (unsigned short i = 0; i < SIZE; i++) A: You should use size_t for most for loops that iterate over an array, because this is the most natural thing to do for a machine. The compiler is probably smart enough though to just ignore your types and use an equivalent of size_t anyway in most cases. Live demo.
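A minimal sketch of the size_t version suggested in the answer; in C, size_t comes from <stddef.h>, and the loop fragment mirrors the question's own snippets.

#include <stddef.h>

#define SIZE 25000

for (size_t i = 0; i < SIZE; i++) {
    /* process array[i] */
}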
Is using unsigned short instead of int any better?
I need to iterate over an array with 25,000 items at most. Does using an iterator of type unsigned short (instead of int) lead to any improvement, such as faster run times, less memory, or more readability? (I know that 25,000 is not "much" in terms of programming - but I still want to be as efficient as possible). The question is translated so that, assuming I have #define SIZE 25000 Is for (int i = 0; i < SIZE; i++) any better than for (unsigned short i = 0; i < SIZE; i++)
[ "You should use size_t for most for loops that iterate over an array, because this is the most natural thing to do for a machine. The compiler is probably smart enough though to just ignore your types and use an equivalent of size_t anyway in most cases.\nLive demo.\n" ]
[ 5 ]
[]
[]
[ "c", "c++" ]
stackoverflow_0074673419_c_c++.txt
Q: Isn't a quad always composed of 4 vertex? I'm getting into OpenGL. I'm following learnopengl.com and I reached the text rendering part, where I read (at the end of In Practice > Text Rendering > Shaders section) The 2D quad requires 6 vertices of 4 floats each, so we reserve 6 * 4 floats of memory. Because we'll be updating the content of the VBO's memory quite often we'll allocate the memory with GL_DYNAMIC_DRAW. So far I was thinking of a quad as a pair of triangles, four vertex. How a quad can require 6 vertex? A: If a quad contains 2 triangles it can take 6 vertices, 2 vertices will be the duplicated in this case. Alternatively, you can use 4 vertices and GL_TRIANGLE_STRIP. All vertices will be unique, no duplicates. Alternatively, there is a trick with only one triangle, 3 vertices only. Vertex shader would look like: out vec2 texCoord; void main() { float x = -1.0 + float((gl_VertexID & 1) << 2); float y = -1.0 + float((gl_VertexID & 2) << 1); texCoord.x = (x+1.0)*0.5; texCoord.y = (y+1.0)*0.5; gl_Position = vec4(x, y, 0, 1); } And a discussion what pros and cons have these methods.
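A short sketch of the 4-vertex GL_TRIANGLE_STRIP variant mentioned in the answer; positions are in NDC only, and the VBO/VAO setup is omitted and assumed to be the usual boilerplate.

// Four unique vertices; the strip produces two triangles, (v0,v1,v2) and (v1,v2,v3).
float quad[] = {
    -1.0f, -1.0f,   // v0: bottom-left
     1.0f, -1.0f,   // v1: bottom-right
    -1.0f,  1.0f,   // v2: top-left
     1.0f,  1.0f,   // v3: top-right
};

// ... upload to a VBO and configure the VAO as usual ...
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);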
Isn't a quad always composed of 4 vertices?
I'm getting into OpenGL. I'm following learnopengl.com and I reached the text rendering part, where I read (at the end of In Practice > Text Rendering > Shaders section)

The 2D quad requires 6 vertices of 4 floats each, so we reserve 6 * 4 floats of memory. Because we'll be updating the content of the VBO's memory quite often we'll allocate the memory with GL_DYNAMIC_DRAW.

So far I was thinking of a quad as a pair of triangles, i.e. four vertices. How can a quad require 6 vertices?
[ "If a quad contains 2 triangles it can take 6 vertices, 2 vertices will be the duplicated in this case.\nAlternatively, you can use 4 vertices and GL_TRIANGLE_STRIP. All vertices will be unique, no duplicates.\nAlternatively, there is a trick with only one triangle, 3 vertices only. Vertex shader would look like:\nout vec2 texCoord;\n \nvoid main()\n{\n float x = -1.0 + float((gl_VertexID & 1) << 2);\n float y = -1.0 + float((gl_VertexID & 2) << 1);\n texCoord.x = (x+1.0)*0.5;\n texCoord.y = (y+1.0)*0.5;\n gl_Position = vec4(x, y, 0, 1);\n}\n\nAnd a discussion what pros and cons have these methods.\n" ]
[ 1 ]
[]
[]
[ "c++", "opengl", "shader", "text_rendering", "vertex" ]
stackoverflow_0074672456_c++_opengl_shader_text_rendering_vertex.txt
Q: ViteJS: Error: ENOENT: no such file or directory, rename .vite/deps_temp -> .vite/deps I'm using Vite latest version then checkout to branch which using older version. When I back to the branch where using latest version this issue happened although the app able to running. This is my vite.config.ts import * as path from "path" import react from "@vitejs/plugin-react" import { defineConfig } from "vite" // https://vitejs.dev/config/ export default defineConfig({ build: { sourcemap: true }, plugins: [react()], resolve: { alias: { "@": path.resolve(__dirname, "./src") } } }) A: The solution to turn on resolve.preserveSymlinks works for me. Please also refer to vite config. A: remove node_modules, pnpm-lock.yaml and reinstall dependencies works for me
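For reference, the resolve.preserveSymlinks option mentioned in the first answer would slot into the question's config roughly like this (imports and everything else unchanged):

// vite.config.ts
export default defineConfig({
  build: { sourcemap: true },
  plugins: [react()],
  resolve: {
    preserveSymlinks: true,
    alias: {
      "@": path.resolve(__dirname, "./src")
    }
  }
})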
ViteJS: Error: ENOENT: no such file or directory, rename .vite/deps_temp -> .vite/deps
I'm using the latest Vite version and then checked out a branch that uses an older version. When I went back to the branch using the latest version, this issue appeared, although the app is still able to run.
This is my vite.config.ts:
import * as path from "path"
import react from "@vitejs/plugin-react"
import { defineConfig } from "vite"

// https://vitejs.dev/config/
export default defineConfig({
  build: { sourcemap: true },
  plugins: [react()],
  resolve: {
    alias: {
      "@": path.resolve(__dirname, "./src")
    }
  }
})
[ "The solution to turn on resolve.preserveSymlinks works for me. Please also refer to vite config.\n", "remove node_modules, pnpm-lock.yaml and reinstall dependencies works for me\n" ]
[ 1, 0 ]
[]
[]
[ "reactjs", "typescript", "vite" ]
stackoverflow_0073361153_reactjs_typescript_vite.txt
Q: SESSION Problem in PHP. Dont know what to do The php SESSION is not working and I have no idea why. Just a beginner in this world. Pleasecheck out my code and tell me the problem. <?php session_start(); ?> PHP SESSION CODE: $abc = "select * from registeration where email='$email' and pass='$password'"; $res = mysqli_query($con1,$abc); if (mysqli_num_rows($res)>0) { if ($rec = mysqli_fetch_assoc($res)){ $_SESSION["usnam"] = $rec[0]; $_SESSION["usrol"] = $rec[4]; ..... <?php session_start(); ?> <li class="nav-item"> <?php if (!isset($_SESSION["usnam"])){ echo "<a class='nav-link' href='login.php'>Login</a>"; } ?> </li> A: You are trying to access the fetched data by $rec[0] , $rec[4], etc. (as a numeric array). In that case, please use fetch_array() instead of fetch_assoc() So change this if ($rec = mysqli_fetch_assoc($res)){ ..... } to if ($rec = mysqli_fetch_array($res)){ ..... } On the other hand, please change your query to parameterized prepared statement which is resilient against SQL injection as mentioned in my comment.
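A hedged sketch of the parameterized prepared statement recommended in the answer, using the same $con1 connection; get_result() needs the mysqlnd driver, and the column names used below are assumptions — replace them with the real columns of the registeration table.

$stmt = $con1->prepare("select * from registeration where email = ? and pass = ?");
$stmt->bind_param("ss", $email, $password);
$stmt->execute();
$res = $stmt->get_result();

if ($rec = $res->fetch_assoc()) {
    // fetch_assoc() keys the row by column name, so use the actual
    // column names here instead of $rec[0] / $rec[4].
    $_SESSION["usnam"] = $rec["email"];      // column name assumed
    // $_SESSION["usrol"] = $rec["role"];    // column name assumed
}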
SESSION Problem in PHP. Don't know what to do
The PHP SESSION is not working and I have no idea why. I'm just a beginner in this world. Please check out my code and tell me the problem.
<?php
session_start();
?>
PHP SESSION CODE:
 $abc = "select * from registeration where email='$email' and pass='$password'";
        $res = mysqli_query($con1,$abc);
       
        if (mysqli_num_rows($res)>0)
        {
            if ($rec = mysqli_fetch_assoc($res)){
            $_SESSION["usnam"] = $rec[0];
            $_SESSION["usrol"] = $rec[4];
            .....
<?php 
session_start();
?>
  <li class="nav-item">
            <?php
            if (!isset($_SESSION["usnam"])){
              echo "<a class='nav-link' href='login.php'>Login</a>";
            }
            ?>
</li>
[ "You are trying to access the fetched data by $rec[0] , $rec[4], etc. (as a numeric array). In that case, please use fetch_array() instead of fetch_assoc()\nSo change this\nif ($rec = mysqli_fetch_assoc($res)){\n .....\n}\n\nto\nif ($rec = mysqli_fetch_array($res)){\n .....\n}\n\nOn the other hand, please change your query to parameterized prepared statement which is resilient against SQL injection as mentioned in my comment.\n" ]
[ 1 ]
[]
[]
[ "php", "session" ]
stackoverflow_0074673342_php_session.txt
Q: MS Access - Group By and Sum if all values are available query I am trying to create a query in ms access which sums up costs but want to exclude those that have no value from the result. I am struggling because I want to exclude items from one column based on another column. I have a column with finished product, column with components that make up finished product, column with quantity of components required for finished product and column with cost for each of components. What I need is to get total cost for each component required and then sum up costs as total cost for finished product which is simple enough. However there are some blank fields where cost for one or more components are not available, I want those to show in the result with text " Pending " for finished product instead of it summing up only available values. Below example of what I am trying to do: BOM_List What I need in result is below: BOM_Result Would really appreciate any help on this :) A: Elaborating on June 7ths comment and assuming Table2: ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ID | FinishedProductid | ItemNumber | Component | Uom | ComponentQuantity | ComponentUnit | stdCost | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | 1 | 199127402 | 10 | 123123123 | PC | 3 | PC | $1.50 | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | 2 | 199127402 | 20 | 321321321 | PC | 1 | PC | $2.50 | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | 3 | 199127402 | 30 | 123456789 | PC | 1 | PC | $3.55 | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | 4 | 199127402 | 40 | 987654321 | PC | 0.25 | H | $82.00 | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | 5 | 199127403 | 10 | 111222333 | PC | 3 | PC | $1.50 | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | 6 | 199127403 | 20 | 333222111 | PC | 1 | PC | | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | 7 | 199127403 | 30 | 444555666 | PC | 1 | PC | $3.55 | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | 8 | 199127403 | 40 | 666555444 | PC | 0.25 | H | $82.00 | 
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 'resulting sql to get all FinishedProductids SELECT Table2.FinishedProductid FROM Table2 GROUP BY Table2.FinishedProductid; 'once have FinishedProductid's out of the group statement you can just use a calculated variable SELECT Query2.FinishedProductid, getTotalCost([FinishedProductid]) AS TotalCost FROM Query2; 'of course a getTotalCost function is needed. Add the following to a code module Public Function getTotalCost(FinishedProductid As Long) As String If isPending(FinishedProductid) Then getTotalCost = "Pending" Else getTotalCost = DSum("ComponentQuantity*stdCost", "Table2", "FinishedProductid = " & FinishedProductid) End If End Function Public Function isPending(FinishedProductid) As Boolean ' if any values of stdCost are null set isPending to true 'public functions can be accessed most anywhere in Access Dim nullCount As Long nullCount = DCount("ID", "Table2", "FinishedProductid = " & FinishedProductid & " AND ISNULL(stdCost)") If nullCount > 0 Then isPending = True Else isPending = False End If End Function 'result ------------------------------------------------------- | FinishedProductid | TotalCost | ------------------------------------------------------- | 199127402 | 31.05 | ------------------------------------------------------- | 199127403 | Pending | ------------------------------------------------------- Explanation: I use a two query approach to get around the sql complications that arise from the group by. The functions are relatively declaritive and hence self-commenting. Finally, functions can be reused. Edit: getTotalCost returns a string so it can meet the requirement of returning both the string "pending" and the total cost
MS Access - Group By and Sum if all values are available query
I am trying to create a query in ms access which sums up costs but want to exclude those that have no value from the result. I am struggling because I want to exclude items from one column based on another column. I have a column with finished product, column with components that make up finished product, column with quantity of components required for finished product and column with cost for each of components. What I need is to get total cost for each component required and then sum up costs as total cost for finished product which is simple enough. However there are some blank fields where cost for one or more components are not available, I want those to show in the result with text " Pending " for finished product instead of it summing up only available values. Below example of what I am trying to do: BOM_List What I need in result is below: BOM_Result Would really appreciate any help on this :)
[ "Elaborating on June 7ths comment and assuming Table2:\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n| ID | FinishedProductid | ItemNumber | Component | Uom | ComponentQuantity | ComponentUnit | stdCost |\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n| 1 | 199127402 | 10 | 123123123 | PC | 3 | PC | $1.50 |\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n| 2 | 199127402 | 20 | 321321321 | PC | 1 | PC | $2.50 |\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n| 3 | 199127402 | 30 | 123456789 | PC | 1 | PC | $3.55 |\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n| 4 | 199127402 | 40 | 987654321 | PC | 0.25 | H | $82.00 |\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n| 5 | 199127403 | 10 | 111222333 | PC | 3 | PC | $1.50 |\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n| 6 | 199127403 | 20 | 333222111 | PC | 1 | PC | |\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n| 7 | 199127403 | 30 | 444555666 | PC | 1 | PC | $3.55 |\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n| 8 | 199127403 | 40 | 666555444 | PC | 0.25 | H | $82.00 |\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n\n\n'resulting sql to get all FinishedProductids\n\n\nSELECT Table2.FinishedProductid\nFROM Table2\nGROUP BY Table2.FinishedProductid;\n\n\n'once have FinishedProductid's out of the group statement you can just use a calculated variable\n\nSELECT Query2.FinishedProductid, getTotalCost([FinishedProductid]) AS TotalCost\nFROM Query2;\n\n'of course a getTotalCost function is needed. 
Add the following to a code module\n\nPublic Function getTotalCost(FinishedProductid As Long) As String\nIf isPending(FinishedProductid) Then\ngetTotalCost = \"Pending\"\nElse\ngetTotalCost = DSum(\"ComponentQuantity*stdCost\", \"Table2\", \"FinishedProductid = \" & FinishedProductid)\nEnd If\nEnd Function\n\n\nPublic Function isPending(FinishedProductid) As Boolean\n' if any values of stdCost are null set isPending to true\n'public functions can be accessed most anywhere in Access\n\nDim nullCount As Long\nnullCount = DCount(\"ID\", \"Table2\", \"FinishedProductid = \" & FinishedProductid & \" AND ISNULL(stdCost)\")\nIf nullCount > 0 Then\nisPending = True\nElse\nisPending = False\nEnd If\nEnd Function\n\n'result\n\n-------------------------------------------------------\n| FinishedProductid | TotalCost |\n-------------------------------------------------------\n| 199127402 | 31.05 |\n-------------------------------------------------------\n| 199127403 | Pending |\n-------------------------------------------------------\n\nExplanation: I use a two query approach to get around the sql complications that arise from the group by. The functions are relatively declaritive and hence self-commenting. Finally, functions can be reused.\nEdit: getTotalCost returns a string so it can meet the requirement of returning both the string \"pending\" and the total cost\n" ]
[ 0 ]
[]
[]
[ "ms_access", "sum" ]
stackoverflow_0074621501_ms_access_sum.txt
Q: How to change the button position when pressed in tkinter python please tell me how to change the position of the button in tkinter. I predicted that it could be done by button['padx' = 4], but it doesn't work. Do you know how to do it? from tkinter import ttk import random window = tk.Tk() window.geometry('512x512') def click(): pass button = ttk.Button( text="No", command=click ).pack(padx=5, pady=15) window.mainloop() A: I instead assigned 2 variables to random integers, and then updated the position of the button with those variables. If you want the random position to be a wider area increase the number "100" in "random_int". from tkinter import * import random window = Tk() window.geometry('512x512') x = 5 y = 15 def click(): random_int = random.randint(0, 100) x = (random_int) random_int = random.randint(0, 100) y = (random_int) button.place(x=x, y=y) button = Button(text="No", command=click) button.pack(padx=5, pady=15) window.mainloop()
How to change the button position when pressed in tkinter python
please tell me how to change the position of the button in tkinter. I predicted that it could be done by button['padx' = 4], but it doesn't work. Do you know how to do it? from tkinter import ttk import random window = tk.Tk() window.geometry('512x512') def click(): pass button = ttk.Button( text="No", command=click ).pack(padx=5, pady=15) window.mainloop()
[ "I instead assigned 2 variables to random integers, and then updated the position of the button with those variables. If you want the random position to be a wider area increase the number \"100\" in \"random_int\".\nfrom tkinter import *\nimport random\n\nwindow = Tk()\nwindow.geometry('512x512')\n\nx = 5\ny = 15\n\n\ndef click():\n random_int = random.randint(0, 100)\n x = (random_int)\n random_int = random.randint(0, 100)\n y = (random_int)\n button.place(x=x, y=y)\n\n\nbutton = Button(text=\"No\", command=click)\n\nbutton.pack(padx=5, pady=15)\n\nwindow.mainloop()\n\n" ]
[ 0 ]
[]
[]
[ "python_3.x", "tkinter" ]
stackoverflow_0073973577_python_3.x_tkinter.txt
Q: How do I use binding to change the position of an arc? I am having trouble setting the x position of an arc called "pac_man", and then changing it using += with a function called "xChange()". I have tried multiple things, but I think using a dictionary would suffice. This is because the variable "coord" needs 4 values to assign shape and position for "pac_man." #Imports from tkinter import * #Functions def xChange(): print("Change the value of pac_man's x position here") #Object Attributes wn = Tk() wn.geometry('512x320') wn.title('Moving with Keys') cvs = Canvas(wn, bg='limegreen', height=320, width=512) coord = {'x_pos': 10, 'y_pos': 10, 'x_size': 50, 'y_size': 50} pac_man = cvs.create_arc( coord['x_pos'], coord['y_pos'], coord['x_size'], coord['y_size'], start=45, extent=270, fill='yellow', outline='black', width=4, ) cvs.bind('<Right>', xChange) cvs.pack() A: See comments in the code: #Imports from tkinter import * #Functions def move(event):#add event parameter pixels = 1 #local variable, amount of pixels to "move" direction = event.keysym #get keysym from event object if direction == 'Right':#if keysym is Left cvs.move('packman',+pixels, 0) #canvas has already a method for move, use it! #move "packman" +1 pixel on the x-axis and 0 on the y-axis #Window wn = Tk() wn.geometry('512x320') wn.title('Moving with Keys') #Canvas cvs = Canvas(wn, bg='limegreen', height=320, width=512, takefocus=True) #coord = {'x_pos': 10, 'y_pos': 10, 'x_size': 50, 'y_size': 50} #tkinter stores these values in form of a list #You can retrieve it with like this print(cvs['coords']) pac_man = cvs.create_arc( 10,10,50,50, start=45, extent=270, fill='yellow', outline='black', width=4, tags=('packman',)#add a tag to access this item #tags are tuple^!! ) #cvs.bind('<Right>', xChange) #You could bind to canvas but would've made sure #that canvas has the keyboard focus #it is easier to bind to the window wn.bind('<Left>', move) wn.bind('<Right>', move) wn.bind('<Up>', move) wn.bind('<Down>', move) cvs.pack() wn.mainloop() Additional resource, in case you wonder. event parameter
How do I use binding to change the position of an arc?
I am having trouble setting the x position of an arc called "pac_man", and then changing it using += with a function called "xChange()". I have tried multiple things, but I think using a dictionary would suffice. This is because the variable "coord" needs 4 values to assign shape and position for "pac_man." #Imports from tkinter import * #Functions def xChange(): print("Change the value of pac_man's x position here") #Object Attributes wn = Tk() wn.geometry('512x320') wn.title('Moving with Keys') cvs = Canvas(wn, bg='limegreen', height=320, width=512) coord = {'x_pos': 10, 'y_pos': 10, 'x_size': 50, 'y_size': 50} pac_man = cvs.create_arc( coord['x_pos'], coord['y_pos'], coord['x_size'], coord['y_size'], start=45, extent=270, fill='yellow', outline='black', width=4, ) cvs.bind('<Right>', xChange) cvs.pack()
[ "See comments in the code:\n#Imports\nfrom tkinter import *\n\n#Functions\ndef move(event):#add event parameter\n pixels = 1 #local variable, amount of pixels to \"move\"\n direction = event.keysym #get keysym from event object\n if direction == 'Right':#if keysym is Left\n cvs.move('packman',+pixels, 0)\n #canvas has already a method for move, use it!\n #move \"packman\" +1 pixel on the x-axis and 0 on the y-axis\n\n#Window\nwn = Tk()\nwn.geometry('512x320')\nwn.title('Moving with Keys')\n#Canvas\ncvs = Canvas(wn, bg='limegreen', height=320, width=512, takefocus=True)\n#coord = {'x_pos': 10, 'y_pos': 10, 'x_size': 50, 'y_size': 50}\n#tkinter stores these values in form of a list\n#You can retrieve it with like this print(cvs['coords'])\npac_man = cvs.create_arc(\n 10,10,50,50,\n start=45,\n extent=270,\n fill='yellow',\n outline='black',\n width=4,\n tags=('packman',)#add a tag to access this item\n #tags are tuple^!!\n )\n#cvs.bind('<Right>', xChange)\n#You could bind to canvas but would've made sure\n#that canvas has the keyboard focus\n#it is easier to bind to the window\nwn.bind('<Left>', move)\nwn.bind('<Right>', move)\nwn.bind('<Up>', move)\nwn.bind('<Down>', move)\ncvs.pack()\nwn.mainloop()\n\nAdditional resource, in case you wonder.\nevent parameter\n" ]
[ 1 ]
[]
[]
[ "python", "tkinter", "tkinter_canvas" ]
stackoverflow_0074673409_python_tkinter_tkinter_canvas.txt
Q: Name 'X' is not defined [How to fix it] I attempted to execute the code but the problem shows up as "Name 'X' is not defined" Is X not defined if not how do I define it for the code to run. ` from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 42) todo_check([ (X_train.shape == (413, 29), 'X_train does not have the correct shape (413, 29)'), (X_test.shape == (104, 29), 'X_test does not have the correct shape (104, 29)'), (y_train.shape == (413,), 'y_train does not have the correct shape (413,)'), (y_test.shape == (104,), 'y_test does not have the correct shape (104,)'), (np.all(np.isclose(X_train.values[-5:, -4], np.array([17.7, 18.2, 21.8, 23.8, 20.1]),rtol=.01)), 'X_train does not contain the correct values! Make sure you used `X` when splitting!'), (np.all(np.isclose(y_test.values[-5:], np.array([1.25561604, 1.8531681 , 1.15373159, 4.01259206, 3.56558124]),rtol=.01)), 'y_test does not have the correct values! Make sure you used `y` when splitting!') ]) ` I want to the code to fully construct and find why X is not defined and how I can compile it A: You are using the train_test_split function: X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 42) train_test_split ( X, y .... But at no point have you defined X (or y for that matter). You need to provide the function with data. What is X ? What is y ? Once you define these variables, your error should go away. For example: from sklearn import datasets from sklearn.model_selection import train_test_split diabetes = datasets.load_diabetes() X, y = diabetes.data, diabetes.target X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.1, random_state=13)
Name 'X' is not defined [How to fix it]
I attempted to execute the code but the problem shows up as "Name 'X' is not defined" Is X not defined if not how do I define it for the code to run. ` from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 42) todo_check([ (X_train.shape == (413, 29), 'X_train does not have the correct shape (413, 29)'), (X_test.shape == (104, 29), 'X_test does not have the correct shape (104, 29)'), (y_train.shape == (413,), 'y_train does not have the correct shape (413,)'), (y_test.shape == (104,), 'y_test does not have the correct shape (104,)'), (np.all(np.isclose(X_train.values[-5:, -4], np.array([17.7, 18.2, 21.8, 23.8, 20.1]),rtol=.01)), 'X_train does not contain the correct values! Make sure you used `X` when splitting!'), (np.all(np.isclose(y_test.values[-5:], np.array([1.25561604, 1.8531681 , 1.15373159, 4.01259206, 3.56558124]),rtol=.01)), 'y_test does not have the correct values! Make sure you used `y` when splitting!') ]) ` I want to the code to fully construct and find why X is not defined and how I can compile it
[ "You are using the train_test_split function:\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 42)\n\n\ntrain_test_split ( X, y ....\n\nBut at no point have you defined X (or y for that matter).\nYou need to provide the function with data.\n\nWhat is X ?\nWhat is y ?\n\nOnce you define these variables, your error should go away.\n\n\nFor example:\nfrom sklearn import datasets\nfrom sklearn.model_selection import train_test_split\n\ndiabetes = datasets.load_diabetes()\nX, y = diabetes.data, diabetes.target\n\nX_train, X_test, y_train, y_test = train_test_split(\n X, y, test_size=0.1, random_state=13)\n\n" ]
[ 0 ]
[]
[]
[ "google_colaboratory", "python" ]
stackoverflow_0074673348_google_colaboratory_python.txt
Q: Stripe form doesn't show properly I am building a subscription based website. For the checkout page, the stripe form doesn't show properly. Here is a screenshot of the problem. I looked to make sure that my scripts aren't overriding any element from stripe but i still have this problem. You can see below the checkout.html, checkout.css and checkout.js. checkout.html {% extends 'home/base.html' %} {% load static %} {% block head%} <head> <meta charset="UTF-8"> <title>Checkout</title> <link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" type="text/css" rel="stylesheet"> <link rel="stylesheet" type="text/css" href="{% static 'css/checkout.css' %}"> <style> #checkoutMethods { background: #fff; border-radius: 2px; display: inline-block; max-height: 700px; margin: 1rem; position: relative; width: 700px; box-shadow: 0 10px 20px rgba(0,0,0,0.19), 0 6px 6px rgba(0,0,0,0.23); } </style> </head> {% endblock head%} {% block content %} <body> <div class="container"> <section> <div class="row" id="tablerow"> <div class="col-md-4 col-xs-12"> <div class="panel panel-primary"> <div class="panel-body"> <h5>Enter Voucher Code Below<br><small>If multiple, separate each with comma</small></h5> <div> <form action="." method="post"> {% csrf_token %} <input type="text" name="voucher_codes" class="form-control" id="voucher_code" required> <input type="hidden" name="order_id" value="{{ order.id }}"> <br> <span> <input type="submit" class="btn btn-warning pull-right" value="Apply Voucher"> </span> </form> </div> </div> </div> </div> <div class="col-md-8 col-xs-12"> <table class="table"> <tr> <td><h4>Order Summary</h4></td> </tr> <tr> <td> {% for item in order.get_cart_items %} <tr> <td>{{ item }}</td> <td>${{ item.product.price }}</td> </tr> {% endfor %} </td> </tr> <tr> <td><strong>Order Total</strong> </td> <td> <strong>${{ order.get_cart_total }}</strong></td> </tr> </table> <button onclick="toggleDisplay();" class="btn btn-warning" style="width: 100%;">Checkout with a credit card</button> </div> </div> </section> <div id="collapseStripe" class="wrapper"> <script src="https://js.stripe.com/v3/"></script> <!-- can't do this --> <!-- <script src="{% static 'js/stripeV3.js' %}"></script> --> <form action="." 
method="post" id="payment-form"> {% csrf_token %} <div id="checkoutMethods"> <div style="margin: 10px;"> <h2>Checkout with Braintree</h2> <div id="bt-dropin"></div> <h2>Checkout with Stripe</h2> <div class="form-row"> <label for="card-element"> Credit or debit card </label> <div id="card-element" class="StripeElement StripeElement--empty"><div class="__PrivateStripeElement" style="margin: 0px !important; padding: 0px !important; border: none !important; display: block !important; background: transparent !important; position: relative !important; opacity: 1 !important;"><iframe frameborder="0" allowtransparency="true" scrolling="no" name="__privateStripeFrame3" allowpaymentrequest="true" src="https://js.stripe.com/v3/elements-inner-card-8a434729e4eb82355db4882974049278.html#style[base][color]=%2332325d&amp;style[base][lineHeight]=18px&amp;style[base][fontFamily]=%22Helvetica+Neue%22%2C+Helvetica%2C+sans-serif&amp;style[base][fontSmoothing]=antialiased&amp;style[base][fontSize]=16px&amp;style[base][::placeholder][color]=%23aab7c4&amp;style[invalid][color]=%23fa755a&amp;style[invalid][iconColor]=%23fa755a&amp;componentName=card&amp;wait=false&amp;rtl=false&amp;features[noop]=false&amp;origin=https%3A%2F%2Fstripe.com&amp;referrer=https%3A%2F%2Fstripe.com%2Fdocs%2Fstripe-js%2Felements%2Fquickstart&amp;controllerId=__privateStripeController0" title="Secure payment input frame" style="border: none !important; margin: 0px !important; padding: 0px !important; width: 1px !important; min-width: 100% !important; overflow: hidden !important; display: block !important; height: 18px;"></iframe><input class="__PrivateStripeElement-input" aria-hidden="true" style="border: none !important; display: block !important; position: absolute !important; height: 1px !important; top: 0px !important; left: 0px !important; padding: 0px !important; margin: 0px !important; width: 100% !important; opacity: 0 !important; background: transparent !important; pointer-events: none !important; font-size: 16px !important;"><input class="__PrivateStripeElement-safariInput" aria-hidden="true" tabindex="-1" style="border: none !important; display: block !important; position: absolute !important; height: 1px !important; top: 0px !important; left: 0px !important; padding: 0px !important; margin: 0px !important; width: 100% !important; opacity: 0 !important; background: transparent !important; pointer-events: none !important; font-size: 16px !important;"></div></div> <!-- Used to display form errors. --> <div id="card-errors" role="alert"></div> </div> <input type="hidden" id="nonce" name="payment_method_nonce" /> </div> </div> <button>Submit Payment</button> </form> </div> <div id="stripe-token-handler" class="is-hidden">Success! 
Got token: <span class="token"></span></div> </div> <!-- script for the stripe form --> <script src="{% static 'js/checkout.js' %}"></script> <script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"></script> <script src="https://js.braintreegateway.com/web/dropin/1.13.0/js/dropin.min.js"></script> <script> var form = document.querySelector('#payment-form'); var client_token = '{{ client_token }}'; braintree.dropin.create({ authorization: client_token, container: '#bt-dropin', paypal: { flow: 'vault' } }, function (createErr, instance) { form.addEventListener('submit', function (event) { event.preventDefault(); instance.requestPaymentMethod(function (err, payload) { if (err) { console.log('Error', err); return; } // Add the nonce to the form and submit document.querySelector('#nonce').value = payload.nonce; form.submit(); }); }); }); </script> <!-- script for toggling display of the form --> <script type="text/javascript"> function toggleDisplay() { var x = document.getElementById("collapseStripe"); if (x.style.display === "none") { x.style.display = "block"; } else { x.style.display = "none"; } }; </script> </body> {% endblock content %} checkout.css body, html { height: 100%; background-color: #f7f8f9; color: #6b7c93; } *, label { font-family: "Helvetica Neue", Helvetica, sans-serif; font-size: 16px; font-variant: normal; padding: 0; margin: 0; -webkit-font-smoothing: antialiased; } button { border: none; border-radius: 4px; outline: none; text-decoration: none; color: #fff; background: #32325d; white-space: nowrap; display: inline-block; height: 40px; line-height: 40px; padding: 0 14px; box-shadow: 0 4px 6px rgba(50, 50, 93, .11), 0 1px 3px rgba(0, 0, 0, .08); border-radius: 4px; font-size: 15px; font-weight: 600; letter-spacing: 0.025em; text-decoration: none; -webkit-transition: all 150ms ease; transition: all 150ms ease; float: left; margin-left: 12px; margin-top: 28px; } button:hover { transform: translateY(-1px); box-shadow: 0 7px 14px rgba(50, 50, 93, .10), 0 3px 6px rgba(0, 0, 0, .08); background-color: #43458b; } form { padding: 300px; height: 1200px; } label { font-weight: 500; font-size: 14px; display: block; margin-bottom: 8px; } #card-errors { height: 20px; padding: 4px 0; color: #fa755a; } .form-row { width: 70%; float: left; } .token { color: #32325d; font-family: 'Source Code Pro', monospace; font-weight: 500; } .wrapper { width: 670px; margin: 0 auto; height: 100%; } #stripe-token-handler { position: absolute; top: 0; left: 25%; right: 25%; padding: 20px 30px; border-radius: 0 0 4px 4px; box-sizing: border-box; box-shadow: 0 50px 100px rgba(50, 50, 93, 0.1), 0 15px 35px rgba(50, 50, 93, 0.15), 0 5px 15px rgba(0, 0, 0, 0.1); -webkit-transition: all 500ms ease-in-out; transition: all 500ms ease-in-out; transform: translateY(0); opacity: 1; background-color: white; } #stripe-token-handler.is-hidden { opacity: 0; transform: translateY(-80px); } /** * The CSS shown here will not be introduced in the Quickstart guide, but shows * how you can use CSS to style your Element's container. 
*/ .StripeElement { background-color: white; height: 40px; padding: 10px 12px; border-radius: 4px; border: 1px solid transparent; box-shadow: 0 1px 3px 0 #e6ebf1; -webkit-transition: box-shadow 150ms ease; transition: box-shadow 150ms ease; } .StripeElement--focus { box-shadow: 0 1px 3px 0 #cfd7df; } .StripeElement--invalid { border-color: #fa755a; } .StripeElement--webkit-autofill { background-color: #fefde5 !important; } checkout.js // Create a Stripe client. var stripe = Stripe('pk_test_1gVJE1B3TYjY3ykiEhZgWeGe00mRO0lVwa'); // Create an instance of Elements. var elements = stripe.elements(); // Custom styling can be passed to options when creating an Element. // (Note that this demo uses a wider set of styles than the guide below.) var style = { base: { color: '#32325d', lineHeight: '18px', fontFamily: '"Helvetica Neue", Helvetica, sans-serif', fontSmoothing: 'antialiased', fontSize: '16px', '::placeholder': { color: '#aab7c4' } }, invalid: { color: '#fa755a', iconColor: '#fa755a' } }; // Create an instance of the card Element. var card = elements.create('card', {style: style}); // Add an instance of the card Element into the `card-element` <div>. card.mount('#card-element'); // Handle real-time validation errors from the card Element. card.addEventListener('change', function(event) { var displayError = document.getElementById('card-errors'); if (event.error) { displayError.textContent = event.error.message; } else { displayError.textContent = ''; } }); // Handle form submission. var form = document.getElementById('payment-form'); form.addEventListener('submit', function(event) { event.preventDefault(); stripe.createToken(card).then(function(result) { if (result.error) { // Inform the user if there was an error. var errorElement = document.getElementById('card-errors'); errorElement.textContent = result.error.message; } else { // Send the token to your server. stripeTokenHandler(result.token); } }); }); var successElement = document.getElementById('stripe-token-handler'); document.querySelector('.wrapper').addEventListener('click', function() { successElement.className = 'is-hidden'; }); function stripeTokenHandler(token) { successElement.className = ''; successElement.querySelector('.token').textContent = token.id; // Insert the token ID into the form so it gets submitted to the server var form = document.getElementById('payment-form'); var hiddenInput = document.createElement('input'); hiddenInput.setAttribute('type', 'hidden'); hiddenInput.setAttribute('name', 'stripeToken'); hiddenInput.setAttribute('value', token.id); form.appendChild(hiddenInput); // Submit the form form.submit(); } Thank you very much for your help. A: It looks like the issue with the Stripe form not showing properly on your checkout page may be related to a missing script tag in your HTML code. In the section of your code where you're trying to include the Stripe JavaScript library, it looks like you're using a Django template tag instead of the proper HTML script tag.
Stripe form doesn't show properly
I am building a subscription based website. For the checkout page, the stripe form doesn't show properly. Here is a screenshot of the problem. I looked to make sure that my scripts aren't overriding any element from stripe but i still have this problem. You can see below the checkout.html, checkout.css and checkout.js. checkout.html {% extends 'home/base.html' %} {% load static %} {% block head%} <head> <meta charset="UTF-8"> <title>Checkout</title> <link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" type="text/css" rel="stylesheet"> <link rel="stylesheet" type="text/css" href="{% static 'css/checkout.css' %}"> <style> #checkoutMethods { background: #fff; border-radius: 2px; display: inline-block; max-height: 700px; margin: 1rem; position: relative; width: 700px; box-shadow: 0 10px 20px rgba(0,0,0,0.19), 0 6px 6px rgba(0,0,0,0.23); } </style> </head> {% endblock head%} {% block content %} <body> <div class="container"> <section> <div class="row" id="tablerow"> <div class="col-md-4 col-xs-12"> <div class="panel panel-primary"> <div class="panel-body"> <h5>Enter Voucher Code Below<br><small>If multiple, separate each with comma</small></h5> <div> <form action="." method="post"> {% csrf_token %} <input type="text" name="voucher_codes" class="form-control" id="voucher_code" required> <input type="hidden" name="order_id" value="{{ order.id }}"> <br> <span> <input type="submit" class="btn btn-warning pull-right" value="Apply Voucher"> </span> </form> </div> </div> </div> </div> <div class="col-md-8 col-xs-12"> <table class="table"> <tr> <td><h4>Order Summary</h4></td> </tr> <tr> <td> {% for item in order.get_cart_items %} <tr> <td>{{ item }}</td> <td>${{ item.product.price }}</td> </tr> {% endfor %} </td> </tr> <tr> <td><strong>Order Total</strong> </td> <td> <strong>${{ order.get_cart_total }}</strong></td> </tr> </table> <button onclick="toggleDisplay();" class="btn btn-warning" style="width: 100%;">Checkout with a credit card</button> </div> </div> </section> <div id="collapseStripe" class="wrapper"> <script src="https://js.stripe.com/v3/"></script> <!-- can't do this --> <!-- <script src="{% static 'js/stripeV3.js' %}"></script> --> <form action="." 
method="post" id="payment-form"> {% csrf_token %} <div id="checkoutMethods"> <div style="margin: 10px;"> <h2>Checkout with Braintree</h2> <div id="bt-dropin"></div> <h2>Checkout with Stripe</h2> <div class="form-row"> <label for="card-element"> Credit or debit card </label> <div id="card-element" class="StripeElement StripeElement--empty"><div class="__PrivateStripeElement" style="margin: 0px !important; padding: 0px !important; border: none !important; display: block !important; background: transparent !important; position: relative !important; opacity: 1 !important;"><iframe frameborder="0" allowtransparency="true" scrolling="no" name="__privateStripeFrame3" allowpaymentrequest="true" src="https://js.stripe.com/v3/elements-inner-card-8a434729e4eb82355db4882974049278.html#style[base][color]=%2332325d&amp;style[base][lineHeight]=18px&amp;style[base][fontFamily]=%22Helvetica+Neue%22%2C+Helvetica%2C+sans-serif&amp;style[base][fontSmoothing]=antialiased&amp;style[base][fontSize]=16px&amp;style[base][::placeholder][color]=%23aab7c4&amp;style[invalid][color]=%23fa755a&amp;style[invalid][iconColor]=%23fa755a&amp;componentName=card&amp;wait=false&amp;rtl=false&amp;features[noop]=false&amp;origin=https%3A%2F%2Fstripe.com&amp;referrer=https%3A%2F%2Fstripe.com%2Fdocs%2Fstripe-js%2Felements%2Fquickstart&amp;controllerId=__privateStripeController0" title="Secure payment input frame" style="border: none !important; margin: 0px !important; padding: 0px !important; width: 1px !important; min-width: 100% !important; overflow: hidden !important; display: block !important; height: 18px;"></iframe><input class="__PrivateStripeElement-input" aria-hidden="true" style="border: none !important; display: block !important; position: absolute !important; height: 1px !important; top: 0px !important; left: 0px !important; padding: 0px !important; margin: 0px !important; width: 100% !important; opacity: 0 !important; background: transparent !important; pointer-events: none !important; font-size: 16px !important;"><input class="__PrivateStripeElement-safariInput" aria-hidden="true" tabindex="-1" style="border: none !important; display: block !important; position: absolute !important; height: 1px !important; top: 0px !important; left: 0px !important; padding: 0px !important; margin: 0px !important; width: 100% !important; opacity: 0 !important; background: transparent !important; pointer-events: none !important; font-size: 16px !important;"></div></div> <!-- Used to display form errors. --> <div id="card-errors" role="alert"></div> </div> <input type="hidden" id="nonce" name="payment_method_nonce" /> </div> </div> <button>Submit Payment</button> </form> </div> <div id="stripe-token-handler" class="is-hidden">Success! 
Got token: <span class="token"></span></div> </div> <!-- script for the stripe form --> <script src="{% static 'js/checkout.js' %}"></script> <script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"></script> <script src="https://js.braintreegateway.com/web/dropin/1.13.0/js/dropin.min.js"></script> <script> var form = document.querySelector('#payment-form'); var client_token = '{{ client_token }}'; braintree.dropin.create({ authorization: client_token, container: '#bt-dropin', paypal: { flow: 'vault' } }, function (createErr, instance) { form.addEventListener('submit', function (event) { event.preventDefault(); instance.requestPaymentMethod(function (err, payload) { if (err) { console.log('Error', err); return; } // Add the nonce to the form and submit document.querySelector('#nonce').value = payload.nonce; form.submit(); }); }); }); </script> <!-- script for toggling display of the form --> <script type="text/javascript"> function toggleDisplay() { var x = document.getElementById("collapseStripe"); if (x.style.display === "none") { x.style.display = "block"; } else { x.style.display = "none"; } }; </script> </body> {% endblock content %} checkout.css body, html { height: 100%; background-color: #f7f8f9; color: #6b7c93; } *, label { font-family: "Helvetica Neue", Helvetica, sans-serif; font-size: 16px; font-variant: normal; padding: 0; margin: 0; -webkit-font-smoothing: antialiased; } button { border: none; border-radius: 4px; outline: none; text-decoration: none; color: #fff; background: #32325d; white-space: nowrap; display: inline-block; height: 40px; line-height: 40px; padding: 0 14px; box-shadow: 0 4px 6px rgba(50, 50, 93, .11), 0 1px 3px rgba(0, 0, 0, .08); border-radius: 4px; font-size: 15px; font-weight: 600; letter-spacing: 0.025em; text-decoration: none; -webkit-transition: all 150ms ease; transition: all 150ms ease; float: left; margin-left: 12px; margin-top: 28px; } button:hover { transform: translateY(-1px); box-shadow: 0 7px 14px rgba(50, 50, 93, .10), 0 3px 6px rgba(0, 0, 0, .08); background-color: #43458b; } form { padding: 300px; height: 1200px; } label { font-weight: 500; font-size: 14px; display: block; margin-bottom: 8px; } #card-errors { height: 20px; padding: 4px 0; color: #fa755a; } .form-row { width: 70%; float: left; } .token { color: #32325d; font-family: 'Source Code Pro', monospace; font-weight: 500; } .wrapper { width: 670px; margin: 0 auto; height: 100%; } #stripe-token-handler { position: absolute; top: 0; left: 25%; right: 25%; padding: 20px 30px; border-radius: 0 0 4px 4px; box-sizing: border-box; box-shadow: 0 50px 100px rgba(50, 50, 93, 0.1), 0 15px 35px rgba(50, 50, 93, 0.15), 0 5px 15px rgba(0, 0, 0, 0.1); -webkit-transition: all 500ms ease-in-out; transition: all 500ms ease-in-out; transform: translateY(0); opacity: 1; background-color: white; } #stripe-token-handler.is-hidden { opacity: 0; transform: translateY(-80px); } /** * The CSS shown here will not be introduced in the Quickstart guide, but shows * how you can use CSS to style your Element's container. 
*/ .StripeElement { background-color: white; height: 40px; padding: 10px 12px; border-radius: 4px; border: 1px solid transparent; box-shadow: 0 1px 3px 0 #e6ebf1; -webkit-transition: box-shadow 150ms ease; transition: box-shadow 150ms ease; } .StripeElement--focus { box-shadow: 0 1px 3px 0 #cfd7df; } .StripeElement--invalid { border-color: #fa755a; } .StripeElement--webkit-autofill { background-color: #fefde5 !important; } checkout.js // Create a Stripe client. var stripe = Stripe('pk_test_1gVJE1B3TYjY3ykiEhZgWeGe00mRO0lVwa'); // Create an instance of Elements. var elements = stripe.elements(); // Custom styling can be passed to options when creating an Element. // (Note that this demo uses a wider set of styles than the guide below.) var style = { base: { color: '#32325d', lineHeight: '18px', fontFamily: '"Helvetica Neue", Helvetica, sans-serif', fontSmoothing: 'antialiased', fontSize: '16px', '::placeholder': { color: '#aab7c4' } }, invalid: { color: '#fa755a', iconColor: '#fa755a' } }; // Create an instance of the card Element. var card = elements.create('card', {style: style}); // Add an instance of the card Element into the `card-element` <div>. card.mount('#card-element'); // Handle real-time validation errors from the card Element. card.addEventListener('change', function(event) { var displayError = document.getElementById('card-errors'); if (event.error) { displayError.textContent = event.error.message; } else { displayError.textContent = ''; } }); // Handle form submission. var form = document.getElementById('payment-form'); form.addEventListener('submit', function(event) { event.preventDefault(); stripe.createToken(card).then(function(result) { if (result.error) { // Inform the user if there was an error. var errorElement = document.getElementById('card-errors'); errorElement.textContent = result.error.message; } else { // Send the token to your server. stripeTokenHandler(result.token); } }); }); var successElement = document.getElementById('stripe-token-handler'); document.querySelector('.wrapper').addEventListener('click', function() { successElement.className = 'is-hidden'; }); function stripeTokenHandler(token) { successElement.className = ''; successElement.querySelector('.token').textContent = token.id; // Insert the token ID into the form so it gets submitted to the server var form = document.getElementById('payment-form'); var hiddenInput = document.createElement('input'); hiddenInput.setAttribute('type', 'hidden'); hiddenInput.setAttribute('name', 'stripeToken'); hiddenInput.setAttribute('value', token.id); form.appendChild(hiddenInput); // Submit the form form.submit(); } Thank you very much for your help.
[ "It looks like the issue with the Stripe form not showing properly on your checkout page may be related to a missing script tag in your HTML code. In the section of your code where you're trying to include the Stripe JavaScript library, it looks like you're using a Django template tag instead of the proper HTML script tag.\n" ]
[ 0 ]
[]
[]
[ "css", "django_templates", "html", "javascript", "stripe_payments" ]
stackoverflow_0061006554_css_django_templates_html_javascript_stripe_payments.txt
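The answer above points at how the scripts are included in the template. Purely as a reference for what a conventional, minimal include looks like in a Django template (this is a sketch of the loading order, not a confirmed diagnosis of the bug): Stripe.js is loaded from the CDN first, the #card-element container is left empty for Stripe Elements to fill, and the local checkout.js that calls Stripe(...) and card.mount('#card-element') is loaded afterwards. The file name checkout.js is the one already used in the question.

```html
{% load static %}
<!-- Sketch only: load Stripe.js from the CDN before any script that uses it. -->
<script src="https://js.stripe.com/v3/"></script>

<form action="." method="post" id="payment-form">
  {% csrf_token %}
  <div class="form-row">
    <label for="card-element">Credit or debit card</label>
    <!-- Leave this container empty; card.mount('#card-element') injects the iframe. -->
    <div id="card-element"></div>
    <div id="card-errors" role="alert"></div>
  </div>
  <button>Submit Payment</button>
</form>

<!-- checkout.js creates and mounts the card Element, so it must come after
     both Stripe.js and the #card-element markup. -->
<script src="{% static 'js/checkout.js' %}"></script>
```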
Q: Realtime Database vs Firestore for an Authentication Web Application I want to make an Auth Application for my site. But I am confused about Realtime Database and Firestore. Which database should I use for an Auth Application where I can store users' posts, reviews, etc. And why should I choose one? I am currently using Realtime Database but some of my friends recommended to use Firestore. A: Try this https://firebase.google.com/docs/database/rtdb-vs-firestore, answer the interactive questions that are present, and that should give you a somewhat general idea. For your question and the kind of app that you want to build, I'd suggest looking into Firestore. Until you have a specific requirement for using the Realtime Database (which you will know if you have it), you should go with Firestore. It gives several advantages: Autoscaling. Multiregional deployment. Non-cascading rules. Great default indexing support. Read the mentioned doc; the Firebase documentation is really good if you give it time and actually read it (not just skim through it).
Realtime Database vs Firestore for an Authentication Web Application
I want to make an Auth Application for my site. But I am confused about Realtime Database and Firestore. Which database should I use for an Auth Application where I can store users' posts, reviews, etc. And why should I choose one? I am currently using Realtime Database but some of my friends recommended to use Firestore.
[ "Try this https://firebase.google.com/docs/database/rtdb-vs-firestore, answer the interactive questions that are present and that should give you a somewhat general idea.\nFor your question, the kind of app that you want to build I'll suggest look into firestore.\nTill you have a specific requirement of using Realtime database(which you will know if you have it), you should go with Firestore. It gives several advantages:\n\nAutoScaling\nMultiregional deployment.\nNon cascading rules.\ngreat defeault Indexing support.\n\nRead from the mentioned doc, firebase documentation is really good if you give it time and actually read (not just skim throught it)\n" ]
[ 0 ]
[]
[]
[ "firebase", "javascript" ]
stackoverflow_0074673359_firebase_javascript.txt
Q: Rust: Why do I get FromIterator<&Shoe>` is not implemented for a struct? I'm implementing a collection type that holds a vector of structs. I want to implements a bunch of methods to sort my vector in various ways. The struct is very basic: #[derive(PartialEq, Debug, Clone)] pub struct Shoe { size: u32, style: String, } The collection type just wraps the struct into a vector, like so: #[derive(Debug, PartialEq, Clone)] pub struct ShoesInventory { shoes: Vec<Shoe> } I want to filter all existing shoes according to given size, and return the result as a separate vector. Basically, iterate, filter, and collect. However, when I write this, impl ShoesInventory { pub fn new(shoes: Vec<Shoe>) -> ShoesInventory { ShoesInventory { shoes } } pub fn shoes_in_size(&self, shoe_size: u32) -> Vec<Shoe> { self.shoes.iter().filter(| s| s.size == shoe_size).collect() } } I'm getting the following compiler error error[E0277]: a value of type `Vec<Shoe>` cannot be built from an iterator over elements of type `&Shoe` --> src/shoes.rs:18:9 | 18 | self.shoes.iter().filter(| s| s.size == shoe_size).collect() | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ------- required by a bound introduced by this call | | | value of type `Vec<Shoe>` cannot be built from `std::iter::Iterator<Item=&Shoe>` | = help: the trait `FromIterator<&Shoe>` is not implemented for `Vec<Shoe>` I did some research online, but really couldn't figure out the essence of the problem. If I make the return type Vec<&Shoe>, I would leak internal private data, so that doesn't work. If I try to clone the element in the closure, it doesn't fix anything and I still get the same error. It's not so clear what the problem is b/c on another vector this code pattern actually works. For example, when you use another vector with a primitive type, say integer, the iterator, map/filter, collect pattern just works fine. let v1: Vec<i32> = vec![1, 2, 3]; let v2: Vec<_> = v1.iter().map(|x| x + 1).collect(); // no problem here However, when the vector element contains a struct or a string, things get hairy. I understand the error basically says, that the FromIterator is not implemented, but why? And how do I fix this? Thank you Playground code Solution Thanks to first comment: iter returns references, but in order to iterate over values, you have to clone the iterator first. Example: pub fn shoes_in_size(&self,shoe_size: u32) -> Vec<Shoe> { self.shoes.iter() .cloned()// https://stackoverflow.com/questions/40613725/iterating-over-a-slices-values-instead-of-references-in-rust .filter(| s| s.size == shoe_size) .collect() } A: You have told Rust that you want to return a Vec<Shoe> - ie. a collection of owned shoe objects. However, you are feeding it a sequence of references to shoes. Depending on the use case for this function, there are at least a couple of directions you could go. As long as the callers to this function don't actually need owned shoes, then you can just change the function to this: pub fn shoes_in_size(&self, shoe_size: u32) -> Vec<&Shoe> { ... The caller will then receive a vec containing references to the shoes that meet the filter condition. This is often exactly what you want - why make expensive clones of the Shoe struct if you don't need to? Note that the rust compiler will automatically ensure that the lifetime of the returned references does not exceed the lifetime of the ShoeInventory that they refer to. 
Typically the cases where you would want to return owned items is if one of the following will hold: The caller to the function will want to modify the items returned by the function The lifetime of the returned items may need to outlive the collection that is handing them out In either of these cases it might be more appropriate to clone them, as you indicated in the comments. In that case, though, I would clone them after filtering, so that you are not cloning items and then immediately discarding them: pub fn shoes_in_size(&self, shoe_size: u32) -> Vec<Shoe> { self.shoes.iter().filter(| s| s.size == shoe_size).cloned().collect() }
Rust: Why do I get FromIterator<&Shoe>` is not implemented for a struct?
I'm implementing a collection type that holds a vector of structs. I want to implements a bunch of methods to sort my vector in various ways. The struct is very basic: #[derive(PartialEq, Debug, Clone)] pub struct Shoe { size: u32, style: String, } The collection type just wraps the struct into a vector, like so: #[derive(Debug, PartialEq, Clone)] pub struct ShoesInventory { shoes: Vec<Shoe> } I want to filter all existing shoes according to given size, and return the result as a separate vector. Basically, iterate, filter, and collect. However, when I write this, impl ShoesInventory { pub fn new(shoes: Vec<Shoe>) -> ShoesInventory { ShoesInventory { shoes } } pub fn shoes_in_size(&self, shoe_size: u32) -> Vec<Shoe> { self.shoes.iter().filter(| s| s.size == shoe_size).collect() } } I'm getting the following compiler error error[E0277]: a value of type `Vec<Shoe>` cannot be built from an iterator over elements of type `&Shoe` --> src/shoes.rs:18:9 | 18 | self.shoes.iter().filter(| s| s.size == shoe_size).collect() | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ------- required by a bound introduced by this call | | | value of type `Vec<Shoe>` cannot be built from `std::iter::Iterator<Item=&Shoe>` | = help: the trait `FromIterator<&Shoe>` is not implemented for `Vec<Shoe>` I did some research online, but really couldn't figure out the essence of the problem. If I make the return type Vec<&Shoe>, I would leak internal private data, so that doesn't work. If I try to clone the element in the closure, it doesn't fix anything and I still get the same error. It's not so clear what the problem is b/c on another vector this code pattern actually works. For example, when you use another vector with a primitive type, say integer, the iterator, map/filter, collect pattern just works fine. let v1: Vec<i32> = vec![1, 2, 3]; let v2: Vec<_> = v1.iter().map(|x| x + 1).collect(); // no problem here However, when the vector element contains a struct or a string, things get hairy. I understand the error basically says, that the FromIterator is not implemented, but why? And how do I fix this? Thank you Playground code Solution Thanks to first comment: iter returns references, but in order to iterate over values, you have to clone the iterator first. Example: pub fn shoes_in_size(&self,shoe_size: u32) -> Vec<Shoe> { self.shoes.iter() .cloned()// https://stackoverflow.com/questions/40613725/iterating-over-a-slices-values-instead-of-references-in-rust .filter(| s| s.size == shoe_size) .collect() }
[ "You have told Rust that you want to return a Vec<Shoe> - ie. a collection of owned shoe objects. However, you are feeding it a sequence of references to shoes.\nDepending on the use case for this function, there are at least a couple of directions you could go.\nAs long as the callers to this function don't actually need owned shoes, then you can just change the function to this:\npub fn shoes_in_size(&self, shoe_size: u32) -> Vec<&Shoe> {\n...\n\nThe caller will then receive a vec containing references to the shoes that meet the filter condition. This is often exactly what you want - why make expensive clones of the Shoe struct if you don't need to?\nNote that the rust compiler will automatically ensure that the lifetime of the returned references does not exceed the lifetime of the ShoeInventory that they refer to.\nTypically the cases where you would want to return owned items is if one of the following will hold:\n\nThe caller to the function will want to modify the items returned by the function\nThe lifetime of the returned items may need to outlive the collection that is handing them out\n\nIn either of these cases it might be more appropriate to clone them, as you indicated in the comments. In that case, though, I would clone them after filtering, so that you are not cloning items and then immediately discarding them:\npub fn shoes_in_size(&self, shoe_size: u32) -> Vec<Shoe> {\n self.shoes.iter().filter(| s| s.size == shoe_size).cloned().collect()\n}\n\n" ]
[ 3 ]
[]
[]
[ "collections", "filter", "functional_programming", "iterator", "rust" ]
stackoverflow_0074673403_collections_filter_functional_programming_iterator_rust.txt
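To make the answer's two options concrete, here is a small self-contained sketch that restates them in compilable form: one method hands out borrows (Vec<&Shoe>) and the other clones only the shoes that pass the filter (Vec<Shoe>). Nothing is assumed beyond the Shoe and ShoesInventory definitions already given in the question; the main function and the sample data are illustrative.

```rust
#[derive(PartialEq, Debug, Clone)]
pub struct Shoe {
    size: u32,
    style: String,
}

#[derive(Debug, PartialEq, Clone)]
pub struct ShoesInventory {
    shoes: Vec<Shoe>,
}

impl ShoesInventory {
    pub fn new(shoes: Vec<Shoe>) -> ShoesInventory {
        ShoesInventory { shoes }
    }

    // Option 1: return borrows; cheap, and the lifetime is tied to &self.
    pub fn shoes_in_size(&self, shoe_size: u32) -> Vec<&Shoe> {
        self.shoes.iter().filter(|s| s.size == shoe_size).collect()
    }

    // Option 2: clone, but only the shoes that survive the filter.
    pub fn shoes_in_size_owned(&self, shoe_size: u32) -> Vec<Shoe> {
        self.shoes
            .iter()
            .filter(|s| s.size == shoe_size)
            .cloned()
            .collect()
    }
}

fn main() {
    let inventory = ShoesInventory::new(vec![
        Shoe { size: 10, style: String::from("sneaker") },
        Shoe { size: 13, style: String::from("sandal") },
        Shoe { size: 10, style: String::from("boot") },
    ]);

    let borrowed = inventory.shoes_in_size(10);     // Vec<&Shoe>
    let owned = inventory.shoes_in_size_owned(10);  // Vec<Shoe>
    println!("{} borrowed, {} owned", borrowed.len(), owned.len());
}
```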
Q: Another way to install Multiplayer HLAPI in Unity Is there any other way to download Multiplayer HLAPI in Unity other than installing it through the Package Manager? This is absolutely important for my project and I couldn't find a way no matter what I searched. A: Apart from using GitHub, the only way I found to download the HLAPI was to get it from a person who has already downloaded it, either through messengers or via USB.
Another way to install Multiplayer HLAPI in Unity
Is there any other way to download Multiplayer HLAPI in Unity other than installing it through the Package Manager? This is absolutely important for my project and I couldn't find a way no matter what I searched.
[ "Apart from using github, the only way I found to download HLAPI was to get the HLAPI from a person who has already downloaded it, either through messengers or through USB.\n" ]
[ 0 ]
[]
[]
[ "unity3d" ]
stackoverflow_0074568214_unity3d.txt
Q: Nginx url port appears I hosted a website using php-fpm and nginx on Termux, and every time I type the URL localhost/sth it redirects to localhost:8443/sth. I don't want to see the port 8443; how do I solve it? Typing the URL localhost/something redirects to localhost:8443/something. A: You should investigate your nginx.conf file; it probably has some misconfiguration: Open the nginx configuration file located at /etc/nginx/nginx.conf. Locate the server block that contains the localhost:8443 server name, and change it to localhost. In the same server block, locate the listen directive and change the port number from 8443 to 80 (the default port for HTTP). Save the changes and restart the nginx service using the command "service nginx restart". Open your browser and try accessing the website using the localhost/sth URL. It should no longer redirect to localhost:8443/sth.
Nginx url port appears
I hosted a website using php-fpm and nginx on Termux, and every time I type the URL localhost/sth it redirects to localhost:8443/sth. I don't want to see the port 8443; how do I solve it? Typing the URL localhost/something redirects to localhost:8443/something.
[ "You should investigate your nginx.conf file, it probably has some misconfiguration:\n\nOpen the nginx configuration file located at /etc/nginx/nginx.conf\nLocate the server block that contains the localhost:8443 server name, and change it to localhost\n\nIn the same server block, locate the listen directive and change the port number from 8443 to 80 (the default port for HTTP)\n\nSave the changes and restart the nginx service using the command \"service nginx restart\"\n\nOpen your browser and try accessing the website using the localhost/sth URL. It should no longer redirect to localhost:8443/sth.\n\n\n" ]
[ 0 ]
[]
[]
[ "https", "nginx", "php", "port", "termux" ]
stackoverflow_0074673322_https_nginx_php_port_termux.txt
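As a concrete illustration of the steps in the answer, an edited server block might look like the sketch below. Treat the paths and values as assumptions: server_name, root and the php-fpm address are placeholders, on Termux the file usually lives at $PREFIX/etc/nginx/nginx.conf rather than /etc/nginx/nginx.conf, and since Termux has no service command the reload is done with nginx -s reload. Also note that a non-rooted Android device may refuse to bind ports below 1024, in which case a port of 1024 or higher has to be kept.

```nginx
# Sketch only: placeholder paths/values; adapt to your Termux setup.
# Config file is typically $PREFIX/etc/nginx/nginx.conf; reload with: nginx -s reload
server {
    listen 80;                        # was: listen 8443; (may need >=1024 without root)
    server_name localhost;

    root /data/data/com.termux/files/usr/share/nginx/html;   # placeholder
    index index.php index.html;

    location ~ \.php$ {
        include fastcgi_params;                              # hand .php requests to php-fpm
        fastcgi_pass 127.0.0.1:9000;                         # placeholder php-fpm address
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```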
Q: Find bit location in groovy and put into an list The groovy script needs to find the x location of string1 and put them into a list. The final output will be like finaList=[[2,3],[5]] The following scripts are created by me, but it doesn't work checkStr='01xx0x1' i=0 tempaList=[] finalList=[] while (i<checkStr.length()){ if(checkStr[i]=='x'){ tempaList.add(i) }else if(tempaList.length()>1){ finalList.add([tempaList[0],tempaList[-1]]) tempaList=[] }else if (tempaList.length()==1){ finaList.add([tempaList]) tempaList=[] } i=i+1 } println finaList A: I'm a bit confused by your desired result of a list of lists. If, as you stated, your only goal is to find all indexes that 'x' is at, you can use: def str ='01xx0x' str.findIndexValues { it == 'x' } // result -> [2, 3, 5] If you do want to maintain your groupings, you need to make sure you handle the case of your last character being an 'x'. Your current code will not handle instances where tempaList has a value but the loop has finished iterating over the string. def checkStr = '01xx0x1' def temp = [] def end = [] for (int i=0; i < checkStr.size(); i++) { if (checkStr[i] == 'x') { temp << i } else if (temp.size()) { // temp has size (indexes) and we've hit a non-'x' char end << temp temp = [] } } // If the final char was an x (or multiple x's), handle that here if (temp) { end << temp } println end // result -> [[2, 3], [5]]
Find bit location in groovy and put into an list
The Groovy script needs to find the positions of 'x' in string1 and put them into a list. The final output will be like finaList=[[2,3],[5]] I wrote the following script, but it doesn't work: checkStr='01xx0x1' i=0 tempaList=[] finalList=[] while (i<checkStr.length()){ if(checkStr[i]=='x'){ tempaList.add(i) }else if(tempaList.length()>1){ finalList.add([tempaList[0],tempaList[-1]]) tempaList=[] }else if (tempaList.length()==1){ finaList.add([tempaList]) tempaList=[] } i=i+1 } println finaList
[ "I'm a bit confused by your desired result of a list of lists. If, as you stated, your only goal is to find all indexes that 'x' is at, you can use:\ndef str ='01xx0x'\nstr.findIndexValues { it == 'x' } // result -> [2, 3, 5]\n\nIf you do want to maintain your groupings, you need to make sure you handle the case of your last character being an 'x'. Your current code will not handle instances where tempaList has a value but the loop has finished iterating over the string.\ndef checkStr = '01xx0x1'\ndef temp = []\ndef end = []\n\nfor (int i=0; i < checkStr.size(); i++) {\n if (checkStr[i] == 'x') {\n temp << i\n } else if (temp.size()) {\n // temp has size (indexes) and we've hit a non-'x' char\n end << temp\n temp = []\n }\n } \n\n// If the final char was an x (or multiple x's), handle that here \nif (temp) {\n end << temp\n}\n\nprintln end // result -> [[2, 3], [5]]\n\n" ]
[ 0 ]
[ "checkStr='01xx0x1'\ni=0\ntempaList=[]\nfinalList=[]\n\nwhile(i<checkStr.length()){\n if(checkStr[i]==\"x\"){\n // println i\n tempaList.add(i)\n }else if (tempaList.size()>1){\n finalList.add([tempaList[0],tempaList[-1]])\n tempaList=[]\n }else if (tempaList.size()==1){\n finalList.add(tempaList)\n tempaList=[]\n }\n i=i+1\n}\n\nprintln finalList\n\n" ]
[ -1 ]
[ "groovy" ]
stackoverflow_0074672284_groovy.txt
Q: Netlify Build Crashing On Installing Material UI I am trying to deploy my react application on netlify. It is working fine on my desktop and even npm run build is properly working. The material ui packages that I am using are also working fine on my desktop but when I am deploying on netlify the build fails. This is the error log 12:16:02 PM: No npm workspaces detected 12:16:02 PM: Started restoring cached node modules 12:16:02 PM: Finished restoring cached node modules 12:16:02 PM: Installing NPM modules using NPM version 8.19.2 12:16:04 PM: npm ERR! code ERESOLVE 12:16:04 PM: npm ERR! ERESOLVE could not resolve 12:16:04 PM: npm ERR! 12:16:04 PM: Creating deploy upload records 12:16:04 PM: npm ERR! While resolving: @material-ui/[email protected] 12:16:04 PM: npm ERR! Found: [email protected] 12:16:04 PM: npm ERR! node_modules/react 12:16:04 PM: npm ERR! react@"^18.2.0" from the root project 12:16:04 PM: npm ERR! peer react@">=16.8.0" from @emotion/[email protected] 12:16:04 PM: Failed during stage 'building site': Build script returned non-zero exit code: 1 (https://ntl.fyi/exit-code-1) 12:16:04 PM: npm ERR! node_modules/@emotion/react 12:16:04 PM: npm ERR! @emotion/react@"^11.10.5" from the root project 12:16:04 PM: npm ERR! peer @emotion/react@"^11.0.0-rc.0" from @emotion/[email protected] 12:16:04 PM: npm ERR! node_modules/@emotion/styled 12:16:04 PM: npm ERR! @emotion/styled@"^11.10.5" from the root project 12:16:04 PM: npm ERR! 3 more (@mui/material, @mui/styled-engine, @mui/system) 12:16:04 PM: npm ERR! 3 more (@mui/material, @mui/styled-engine, @mui/system) 12:16:04 PM: npm ERR! 16 more (@emotion/styled, ...) 12:16:04 PM: npm ERR! 12:16:04 PM: npm ERR! Could not resolve dependency: 12:16:04 PM: npm ERR! peer react@"^16.8.0 || ^17.0.0" from @material-ui/[email protected] 12:16:04 PM: npm ERR! node_modules/@material-ui/core 12:16:04 PM: npm ERR! @material-ui/core@"^4.12.4" from the root project 12:16:04 PM: npm ERR! 12:16:04 PM: npm ERR! Conflicting peer dependency: [email protected] 12:16:04 PM: npm ERR! node_modules/react 12:16:04 PM: npm ERR! peer react@"^16.8.0 || ^17.0.0" from @material-ui/[email protected] 12:16:04 PM: npm ERR! node_modules/@material-ui/core 12:16:04 PM: npm ERR! @material-ui/core@"^4.12.4" from the root project 12:16:04 PM: npm ERR! 12:16:04 PM: npm ERR! Fix the upstream dependency conflict, or retry 12:16:04 PM: npm ERR! this command with --force, or --legacy-peer-deps 12:16:04 PM: npm ERR! to accept an incorrect (and potentially broken) dependency resolution. 12:16:04 PM: npm ERR! 12:16:04 PM: npm ERR! See /opt/buildhome/.npm/eresolve-report.txt for a full report. 12:16:04 PM: npm ERR! A complete log of this run can be found in: 12:16:04 PM: npm ERR! /opt/buildhome/.npm/_logs/2022-12-04T06_46_03_020Z-debug-0.log 12:16:04 PM: Error during NPM install 12:16:04 PM: Build was terminated: Build script returned non-zero exit code: 1 12:16:04 PM: Failing build: Failed to build site 12:16:04 PM: Finished processing build request in 5.438038347s``` A: This happens due to incompatibility with versions within the dependencies. Add --legacy-peer-deps to ignore the dependencies. like npm build --legacy-peer-deps A: Error indicates that there is a conflict between the versions of React specified as peer dependencies in your dependencies. Be sure that all of the packages in your project are using the same version of React. 
Either by updating the packages, or by updating the version of React installed in your project to match the version specified by the conflicting packages. npm install react@<desired version> Replace <desired version> with the version of React that you want to use. After that, you also need to update the versions of any other packages that depend on React to make sure that they are compatible with the new version of React. npm update This will update all of the packages in your project to their latest versions. A: I think this is related to package versions and there is a conflict: you are using React 18, but @material-ui/core requires React 16 or 17. You can downgrade React, upgrade Material-UI to version 5, or add --force to ignore the package conflict: npm i @material-ui/core --force
Netlify Build Crashing On Installing Material UI
I am trying to deploy my react application on netlify. It is working fine on my desktop and even npm run build is properly working. The material ui packages that I am using are also working fine on my desktop but when I am deploying on netlify the build fails. This is the error log 12:16:02 PM: No npm workspaces detected 12:16:02 PM: Started restoring cached node modules 12:16:02 PM: Finished restoring cached node modules 12:16:02 PM: Installing NPM modules using NPM version 8.19.2 12:16:04 PM: npm ERR! code ERESOLVE 12:16:04 PM: npm ERR! ERESOLVE could not resolve 12:16:04 PM: npm ERR! 12:16:04 PM: Creating deploy upload records 12:16:04 PM: npm ERR! While resolving: @material-ui/[email protected] 12:16:04 PM: npm ERR! Found: [email protected] 12:16:04 PM: npm ERR! node_modules/react 12:16:04 PM: npm ERR! react@"^18.2.0" from the root project 12:16:04 PM: npm ERR! peer react@">=16.8.0" from @emotion/[email protected] 12:16:04 PM: Failed during stage 'building site': Build script returned non-zero exit code: 1 (https://ntl.fyi/exit-code-1) 12:16:04 PM: npm ERR! node_modules/@emotion/react 12:16:04 PM: npm ERR! @emotion/react@"^11.10.5" from the root project 12:16:04 PM: npm ERR! peer @emotion/react@"^11.0.0-rc.0" from @emotion/[email protected] 12:16:04 PM: npm ERR! node_modules/@emotion/styled 12:16:04 PM: npm ERR! @emotion/styled@"^11.10.5" from the root project 12:16:04 PM: npm ERR! 3 more (@mui/material, @mui/styled-engine, @mui/system) 12:16:04 PM: npm ERR! 3 more (@mui/material, @mui/styled-engine, @mui/system) 12:16:04 PM: npm ERR! 16 more (@emotion/styled, ...) 12:16:04 PM: npm ERR! 12:16:04 PM: npm ERR! Could not resolve dependency: 12:16:04 PM: npm ERR! peer react@"^16.8.0 || ^17.0.0" from @material-ui/[email protected] 12:16:04 PM: npm ERR! node_modules/@material-ui/core 12:16:04 PM: npm ERR! @material-ui/core@"^4.12.4" from the root project 12:16:04 PM: npm ERR! 12:16:04 PM: npm ERR! Conflicting peer dependency: [email protected] 12:16:04 PM: npm ERR! node_modules/react 12:16:04 PM: npm ERR! peer react@"^16.8.0 || ^17.0.0" from @material-ui/[email protected] 12:16:04 PM: npm ERR! node_modules/@material-ui/core 12:16:04 PM: npm ERR! @material-ui/core@"^4.12.4" from the root project 12:16:04 PM: npm ERR! 12:16:04 PM: npm ERR! Fix the upstream dependency conflict, or retry 12:16:04 PM: npm ERR! this command with --force, or --legacy-peer-deps 12:16:04 PM: npm ERR! to accept an incorrect (and potentially broken) dependency resolution. 12:16:04 PM: npm ERR! 12:16:04 PM: npm ERR! See /opt/buildhome/.npm/eresolve-report.txt for a full report. 12:16:04 PM: npm ERR! A complete log of this run can be found in: 12:16:04 PM: npm ERR! /opt/buildhome/.npm/_logs/2022-12-04T06_46_03_020Z-debug-0.log 12:16:04 PM: Error during NPM install 12:16:04 PM: Build was terminated: Build script returned non-zero exit code: 1 12:16:04 PM: Failing build: Failed to build site 12:16:04 PM: Finished processing build request in 5.438038347s```
[ "This happens due to incompatibility with versions within the dependencies.\nAdd --legacy-peer-deps to ignore the dependencies. like npm build --legacy-peer-deps\n", "Error indicates that there is a conflict between the versions of React specified as peer dependencies in your dependencies.\nBe sure that all of the packages in your project are using the same version of React. Either by updating the packages, or by updating the version of React installed in your project to match the version specified by the conflicting packages.\nnpm install react@<desired version>\n\nReplace with the version of React that you want to use.\nAfter that, you also need to update the versions of any other packages that depend on React to make sure that they are compatible with the new version of React.\nnpm update\n\nThis will update all of the packages in your project to their latest versions.\n", "I think this is something related to package versions and there might be some conflict\nyou are using react 18 but @material-ui/core requires react 16 or 17\nso can downgrade your react or upgrade material-ui to version 5\nor add --force to ignore the conflict of packages\n\nnpm i @material-ui/core --force\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "deployment", "error_log", "netlify", "reactjs", "web_deployment" ]
stackoverflow_0074673489_deployment_error_log_netlify_reactjs_web_deployment.txt
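Of the three answers above, upgrading to Material-UI v5 is the one that removes the conflict rather than bypassing it, because the v5 packages (@mui/material plus the @emotion peers already present in the log) accept React 18 as a peer dependency. A minimal sketch of what a component looks like after that upgrade; the component name and props are invented for illustration, and it assumes @mui/material, @emotion/react and @emotion/styled are installed:

import * as React from "react";
// v5 packages replace the old @material-ui/core imports
import Button from "@mui/material/Button";
import Stack from "@mui/material/Stack";

// Hypothetical component used only to illustrate the v5 imports.
type SubmitBarProps = {
  label: string;
  onSubmit: () => void;
};

export function SubmitBar({ label, onSubmit }: SubmitBarProps) {
  return (
    <Stack direction="row" spacing={2}>
      {/* variant and color props exist in both v4 and v5, so the JSX barely changes */}
      <Button variant="contained" color="primary" onClick={onSubmit}>
        {label}
      </Button>
    </Stack>
  );
}

Using --legacy-peer-deps or --force only tells npm to ignore the mismatch; @material-ui/core 4.x still declares React 16/17 peers (as the log shows), so the cleaner long-term options are the v5 upgrade or pinning React to 17.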
Q: How to extract values from a while loop to print in python? So I could print out the odd numbers. However, the output isn't what i want. It should look like 1+3+5+7 = 16 but I could not make it into a single line. I couldn't figure out how to extract the values from the while loop as with my method it only gives the latest odd number which is 7 while 1,3 and 5 could not be taken out num = int(input("Insert a postive integer:")) #4 oddNum = 1 total = 0 count = 1 while count <= num: odd = (str(oddNum)) print (odd) total = total + oddNum oddNum = oddNum + 2 count += 1 print (odd + "=" + str(total)) #output will be: ''' 1 3 5 7 7=16 but it should look like 1+3+5+7=16 ''' A: An alternative method would be the use of: range() method to generate the list of odd numbers .join() method to stitch the odd numbers together (eg. 1+3+5+7) f-strings to print odds together with the total = sum(odd_nums) Code: num = int(input("Insert a postive integer:")) #4 odd_nums = range(1, num * 2, 2) sum_nums = "+".join(map(str, odd_nums)) print(f"{sum_nums}={sum(odd_nums)}") Output: 1+3+5+7=16 Note: Same but using two lines of code: num = int(input("Insert a postive integer:")) #4 print(f"{'+'.join(map(str, range(1, num * 2, 2)))}={sum(range(1, num * 2, 2))}") Output: 1+3+5+7=16 A: You are not storing old oddNum values in odd. With minimal changes can be fixed like this: num = int(input("Insert a positive integer:")) oddNum = 1 total = 0 count = 1 odd = "" while count <= num: total = total + oddNum odd += f"{oddNum}" oddNum = oddNum + 2 count += 1 odd = "+".join(odd) print(odd + "=" + str(total)) A: There are a few options, you can either create a string during the loop and print that at the end, or create a list and transform that into a string at the end, or python3 has the ability to modify the default end of line with print(oddNum, end=''). Using a string: num = int(input("Insert a postive integer:")) #4 oddNum = 1 total = 0 count = 1 sequence = '' while count <= num: sequence += ("+" if sequence != "" else "") + str(oddNum) total = total + oddNum oddNum = oddNum + 2 count += 1 print (sequence + "=" + str(total)) Using print: num = int(input("Insert a postive integer:")) #4 oddNum = 1 total = 0 count = 1 while count <= num: if count != 1: print('+', end='') print (oddNum, end='') total = total + oddNum oddNum = oddNum + 2 count += 1 print ("=" + str(total)) A: Alternatively using walrus (:=), range,print, sep, and end: print(*(odd:=[*range(1,int(input('Insert a postive integer:'))*2,2)]),sep='+',end='=');print(sum(odd)) # Insert a postive integer:4 # 1+3+5+7=16
How to extract values from a while loop to print in python?
So I could print out the odd numbers. However, the output isn't what i want. It should look like 1+3+5+7 = 16 but I could not make it into a single line. I couldn't figure out how to extract the values from the while loop as with my method it only gives the latest odd number which is 7 while 1,3 and 5 could not be taken out num = int(input("Insert a postive integer:")) #4 oddNum = 1 total = 0 count = 1 while count <= num: odd = (str(oddNum)) print (odd) total = total + oddNum oddNum = oddNum + 2 count += 1 print (odd + "=" + str(total)) #output will be: ''' 1 3 5 7 7=16 but it should look like 1+3+5+7=16 '''
[ "An alternative method would be the use of:\n\nrange() method to generate the list of odd numbers\n.join() method to stitch the odd numbers together (eg. 1+3+5+7)\nf-strings to print odds together with the total = sum(odd_nums)\n\nCode:\nnum = int(input(\"Insert a postive integer:\")) #4\nodd_nums = range(1, num * 2, 2)\nsum_nums = \"+\".join(map(str, odd_nums))\nprint(f\"{sum_nums}={sum(odd_nums)}\")\n\nOutput:\n1+3+5+7=16\n\n\n\n\nNote:\nSame but using two lines of code:\nnum = int(input(\"Insert a postive integer:\")) #4\n \nprint(f\"{'+'.join(map(str, range(1, num * 2, 2)))}={sum(range(1, num * 2, 2))}\")\n\nOutput:\n1+3+5+7=16\n\n", "You are not storing old oddNum values in odd. With minimal changes can be fixed like this:\nnum = int(input(\"Insert a positive integer:\"))\noddNum = 1\ntotal = 0\ncount = 1\nodd = \"\"\nwhile count <= num:\n total = total + oddNum\n odd += f\"{oddNum}\"\n oddNum = oddNum + 2\n count += 1\nodd = \"+\".join(odd)\nprint(odd + \"=\" + str(total))\n\n", "There are a few options, you can either create a string during the loop and print that at the end, or create a list and transform that into a string at the end, or python3 has the ability to modify the default end of line with print(oddNum, end='').\nUsing a string:\nnum = int(input(\"Insert a postive integer:\")) #4\noddNum = 1\ntotal = 0\ncount = 1\nsequence = ''\nwhile count <= num:\n sequence += (\"+\" if sequence != \"\" else \"\") + str(oddNum)\n total = total + oddNum\n oddNum = oddNum + 2\n count += 1\n\nprint (sequence + \"=\" + str(total))\n\nUsing print:\nnum = int(input(\"Insert a postive integer:\")) #4\noddNum = 1\ntotal = 0\ncount = 1\nwhile count <= num:\n if count != 1:\n print('+', end='')\n print (oddNum, end='')\n total = total + oddNum\n oddNum = oddNum + 2\n count += 1\n\nprint (\"=\" + str(total)) \n\n", "Alternatively using walrus (:=), range,print, sep, and end:\nprint(*(odd:=[*range(1,int(input('Insert a postive integer:'))*2,2)]),sep='+',end='=');print(sum(odd))\n\n# Insert a postive integer:4\n# 1+3+5+7=16\n\n" ]
[ 1, 0, 0, 0 ]
[]
[]
[ "numbers", "python", "while_loop" ]
stackoverflow_0074673156_numbers_python_while_loop.txt
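The answers above all replace the asker's overwrite-the-variable approach with accumulate-then-join. The same pattern, sketched in TypeScript rather than Python purely for comparison; the function name and the example call are illustrative:

// Build "1+3+5+7=16" for a given count of odd numbers by collecting the
// parts in an array during the loop and joining them afterwards.
function oddSumLine(count: number): string {
  const parts: string[] = [];
  let total = 0;
  let odd = 1;

  for (let i = 0; i < count; i++) {
    parts.push(String(odd)); // keep every value instead of overwriting it
    total += odd;
    odd += 2;
  }
  return `${parts.join("+")}=${total}`;
}

console.log(oddSumLine(4)); // 1+3+5+7=16

The key change from the question's loop is that every value is appended to parts instead of replacing odd, so nothing is lost by the time the final line is printed.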
Q: PostgreSQL ERROR: INSERT has more target columns than expressions, when it doesn't So I'm starting with this... SELECT * FROM parts_finishing; ...I get this... id, id_part, id_finish, id_metal, id_description, date, inside_hours_k, inside_rate, outside_material (0 rows) ...so everything looks fine so far so I do this... INSERT INTO parts_finishing ( id_part, id_finish, id_metal, id_description, date, inside_hours_k, inside_rate, outside_material ) VALUES ( ('1013', '6', '30', '1', NOW(), '0', '0', '22.43'), ('1013', '6', '30', '2', NOW(), '0', '0', '32.45')); ...and I get... ERROR: INSERT has more target columns than expressions Now I've done a few things like ensuring numbers aren't in quotes, are in quotes (would love a table guide to that in regards to integers, numeric types, etc) after I obviously counted the number of column names and values being inserted. I also tried making sure that all the commas are commas...really at a loss here. There are no other columns except for id which is the bigserial primary key. A: Remove the extra () : INSERT INTO parts_finishing ( id_part, id_finish, id_metal, id_description, date, inside_hours_k, inside_rate, outside_material ) VALUES ('1013', '6', '30', '1', NOW(), '0', '0', '22.43') , ('1013', '6', '30', '2', NOW(), '0', '0', '32.45') ; the (..., ...) in Postgres is the syntax for a tuple literal; The extra set of ( ) would create a tuple of tuples, which makes no sense. Also: for numeric literals you don't want the quotes: (1013, 6, 30, 1, NOW(), 0, 0, 22.43) , ... , assuming all these types are numerical. A: I had a similar problem when using SQL string composition with psycopg2 in Python, but the problem was slightly different. I was missing a comma after one of the fields. INSERT INTO parts_finishing (id_part, id_finish, id_metal) VALUES ( %(id_part)s <-------------------- missing comma %(id_finish)s, %(id_metal)s ); This caused psycopg2 to yield this error: ERROR: INSERT has more target columns than expressions. A: This happened to me in a large insert, everything was ok (comma-wise), it took me a while to notice I was inserting in the wrong table of course the DB does not know your intentions. Copy-paste is the root of all evil ... :-) A: I faced the same issue as well.It will be raised, when the count of columns given and column values given is mismatched. A: I have the same error on express js with PostgreSQL I Solved it. This is my answer. error fire at the time of inserting record. error occurred due to invalid column name with values passing error: INSERT has more target columns than expressions ERROR : error: INSERT has more target columns than expressions name: 'error', length: 116, severity: 'ERROR', code: '42601', detail: undefined, hint: undefined, position: '294', internalPosition: undefined, internalQuery: undefined, where: undefined, schema: undefined, table: undefined, column: undefined, dataType: undefined, constraint: undefined, file: 'analyze.c', line: '945', here is my code dome INSERT INTO student( first_name, last_name, email, phone ) VALUES ($1, $2, $3, $4), values : [ first_name, last_name, email, phone ]
PostgreSQL ERROR: INSERT has more target columns than expressions, when it doesn't
So I'm starting with this... SELECT * FROM parts_finishing; ...I get this... id, id_part, id_finish, id_metal, id_description, date, inside_hours_k, inside_rate, outside_material (0 rows) ...so everything looks fine so far so I do this... INSERT INTO parts_finishing ( id_part, id_finish, id_metal, id_description, date, inside_hours_k, inside_rate, outside_material ) VALUES ( ('1013', '6', '30', '1', NOW(), '0', '0', '22.43'), ('1013', '6', '30', '2', NOW(), '0', '0', '32.45')); ...and I get... ERROR: INSERT has more target columns than expressions Now I've done a few things like ensuring numbers aren't in quotes, are in quotes (would love a table guide to that in regards to integers, numeric types, etc) after I obviously counted the number of column names and values being inserted. I also tried making sure that all the commas are commas...really at a loss here. There are no other columns except for id which is the bigserial primary key.
[ "Remove the extra () :\nINSERT INTO parts_finishing \n(\n id_part, id_finish, id_metal, id_description, \n date, inside_hours_k, inside_rate, outside_material\n) VALUES \n ('1013', '6', '30', '1', NOW(), '0', '0', '22.43')\n, ('1013', '6', '30', '2', NOW(), '0', '0', '32.45')\n ;\n\n\nthe (..., ...) in Postgres is the syntax for a tuple literal; The extra set of ( ) would create a tuple of tuples, which makes no sense.\nAlso: for numeric literals you don't want the quotes:\n(1013, 6, 30, 1, NOW(), 0, 0, 22.43)\n, ...\n\n, assuming all these types are numerical.\n", "I had a similar problem when using SQL string composition with psycopg2 in Python, but the problem was slightly different. I was missing a comma after one of the fields.\nINSERT INTO parts_finishing\n(id_part, id_finish, id_metal)\nVALUES (\n %(id_part)s <-------------------- missing comma\n %(id_finish)s,\n %(id_metal)s\n);\n\nThis caused psycopg2 to yield this error:\n\nERROR: INSERT has more target columns than expressions.\n\n", "This happened to me in a large insert, everything was ok (comma-wise), it took me a while to notice I was inserting in the wrong table of course the DB does not know your intentions.\nCopy-paste is the root of all evil ... :-)\n", "I faced the same issue as well.It will be raised, when the count of columns given and column values given is mismatched.\n", "I have the same error on express js with PostgreSQL\nI Solved it. This is my answer.\nerror fire at the time of inserting record. \n\nerror occurred due to invalid column name with values passing \nerror: INSERT has more target columns than expressions\nERROR : error: INSERT has more target columns than expressions\n name: 'error',\n length: 116,\n severity: 'ERROR',\n code: '42601',\n detail: undefined,\n hint: undefined,\n position: '294',\n internalPosition: undefined,\n internalQuery: undefined,\n where: undefined,\n schema: undefined,\n table: undefined,\n column: undefined,\n dataType: undefined,\n constraint: undefined,\n file: 'analyze.c',\n line: '945',\n\nhere is my code dome\nINSERT INTO student(\n first_name, last_name, email, phone\n) \nVALUES \n ($1, $2, $3, $4), \nvalues \n : [ first_name, \n last_name, \n email, \n phone ]\n\n" ]
[ 74, 32, 7, 6, 2 ]
[ "IN my case there was syntax error in sub query.\n" ]
[ -1 ]
[ "postgresql" ]
stackoverflow_0027639239_postgresql.txt
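Since the last answer above hits this from a Node/Express service, here is a rough sketch of the same insert done with the node-postgres (pg) client. It reuses the parts_finishing columns from the question, while the connection string and the wrapper function are placeholders for illustration; the point is simply that each row supplies exactly as many values as there are target columns, with numeric values passed as numbers rather than quoted strings:

import { Pool } from "pg";

// Connection details are placeholders for illustration.
const pool = new Pool({ connectionString: "postgres://user:pass@localhost:5432/mydb" });

// 8 target columns, so every row below carries exactly 8 values.
const rows: [number, number, number, number, Date, number, number, number][] = [
  [1013, 6, 30, 1, new Date(), 0, 0, 22.43],
  [1013, 6, 30, 2, new Date(), 0, 0, 32.45],
];

async function insertFinishing(): Promise<void> {
  for (const row of rows) {
    // one parameterised statement per row keeps columns and values aligned
    await pool.query(
      `INSERT INTO parts_finishing
         (id_part, id_finish, id_metal, id_description, date,
          inside_hours_k, inside_rate, outside_material)
       VALUES ($1, $2, $3, $4, $5, $6, $7, $8)`,
      row
    );
  }
  await pool.end();
}

insertFinishing().catch(console.error);

If a row ever carries fewer values than the column list, Postgres reports exactly the "INSERT has more target columns than expressions" error from the question.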
Q: How to write a PHP Program to display all the $_SERVER elements Just started the class and started learning PHP. Professor asked us to write a PHP Program to display all the $_SERVER elements. I read the chapters and I am confused on what he wants, can someone explain? I tried what I found in the textbook <?php //hw1.php //Write a PHP Program to display all the $_SERVER elements $came_from = htmlentities($_SERVER['HTTP_REFERER']); echo "$_SERVER"; ?> A: If display all of array elements echo '<pre>'; print_r($_SERVER['HTTP_REFERER']); Array convert to string $string='[ '; foreach($_SERVER['HTTP_REFERER'] as $key=>$val) { $string.=$key.'=>'.$val.",\n"; } $string.=']'; echo $string; For example display $_SERVER with echo string='[ '; foreach($_SERVER as $key=>$val) { $string.=$key.'=>'.$val.",<br>"; } $string.=']'; echo $string; A: Simply do this, if you want to see it in a browser. echo "<pre>"; print_r($_SERVER); echo "</pre>";
How to write a PHP Program to display all the $_SERVER elements
Just started the class and started learning PHP. Professor asked us to write a PHP Program to display all the $_SERVER elements. I read the chapters and I am confused on what he wants, can someone explain? I tried what I found in the textbook <?php //hw1.php //Write a PHP Program to display all the $_SERVER elements $came_from = htmlentities($_SERVER['HTTP_REFERER']); echo "$_SERVER"; ?>
[ "If display all of array elements\necho '<pre>';\n\n\nprint_r($_SERVER['HTTP_REFERER']);\n\n\nArray convert to string\n$string='[ ';\nforeach($_SERVER['HTTP_REFERER'] as $key=>$val)\n{\n\n\n$string.=$key.'=>'.$val.\",\\n\";\n\n}\n\n$string.=']';\n\n\necho $string;\n\n\nFor example display $_SERVER with echo\nstring='[ ';\nforeach($_SERVER as $key=>$val)\n{\n\n\n$string.=$key.'=>'.$val.\",<br>\";\n\n}\n\n$string.=']';\n\n\necho $string;\n\n\n", "Simply do this, if you want to see it in a browser.\necho \"<pre>\";\nprint_r($_SERVER);\necho \"</pre>\";\n\n" ]
[ 2, 0 ]
[]
[]
[ "php" ]
stackoverflow_0057877723_php.txt
Q: UiDevice.hasObject method takes too long time in UiAutomator I write a auto testcase to test Tiktok In the testcase, before I switch to next video, I will check the current video type, and do something. Log.i(TAG, "check if it's a vr video") val byRule = By.clazz("android.view.View").descContains("点击体验VR直播,按钮") if(device.hasObject(byRule)){ Log.i(TAG, "vr video") device.findObject(byRule)?.click() SystemClock.sleep(time*1000L) device.pressBack() } else { Log.d(TAG, "no") } the UiDevice.hasObject method will return immediately at first time when this code run, however the second time to run becomes very slow, it's takes about more than 10 secs. Anyone can tell me why? full code is here package com.dvdface.qq.uitestdemo import android.content.Context import android.content.Intent import android.os.SystemClock import android.util.Log import android.view.Surface import androidx.test.core.app.ApplicationProvider import androidx.test.platform.app.InstrumentationRegistry import androidx.test.ext.junit.runners.AndroidJUnit4 import androidx.test.filters.LargeTest import androidx.test.uiautomator.* import org.junit.After import org.junit.Test import org.junit.runner.RunWith import org.junit.Assert.* import org.junit.Before private const val TIMEOUT = 5000L private const val PKG_NAME = "com.ss.android.ugc.aweme" private const val TAG = "TiktokTest" /** * Instrumented test, which will execute on an Android device. * * See [testing documentation](http://d.android.com/tools/testing). */ @RunWith(AndroidJUnit4::class) class DouYinTest { private lateinit var device:UiDevice private lateinit var context:Context @Before fun setUp() { // init device device = UiDevice.getInstance(InstrumentationRegistry.getInstrumentation()) // init context context = ApplicationProvider.getApplicationContext<Context>() assertNotNull(context) } @After fun tearDown() { } @Test @LargeTest fun fastFlingLiveVideos() { // launch launch(PKG_NAME) // click suggestion menu gotoSuggestionMenu() // fling video flingVideo(true, 5, 2*60*60) } /** * watch video by fling gesture * videos have many categories: * short video * fullscreen video * live video * VR video * picture video * ai video * reminder video * Params: * enter - whether to enter play page by click full screen watch / landscape watch / VR watch * time - how long to play in single video, in seconds * duration - how long to test, in seconds * Returns: * None */ private fun flingVideo(enter:Boolean=false, time:Int=3, duration:Long=14400) { val actionsForVideos = listOf<()->Unit>( { // fullscreen video Log.i(TAG, "check if it's a fullscreen video") val byRule = By.clazz("android.widget.LinearLayout").descContains("全屏观看,按钮") if(device.hasObject(byRule)) { Log.i(TAG, "fullscreen video") device.findObject(byRule)?.click() SystemClock.sleep(time*1000L) device.pressBack() } else { Log.d(TAG, "no") } }, { // live video Log.i(TAG, "check if it's a live video") val byRule = By.clazz("android.widget.TextView").text("点击进入直播间") if(device.hasObject(byRule)){ Log.i(TAG, "live video") device.findObject(byRule)?.click() SystemClock.sleep(time*1000L) device.pressBack() } else { Log.d(TAG, "no") } }, { // vr video Log.i(TAG, "check if it's a vr video") val byRule = By.clazz("android.view.View").descContains("点击体验VR直播,按钮") if(device.hasObject(byRule)){ Log.i(TAG, "vr video") device.findObject(byRule)?.click() SystemClock.sleep(time*1000L) device.pressBack() } else { Log.d(TAG, "no") } }, { // picture video Log.i(TAG, "check if it's a picture video") val byRule = 
By.clazz("android.widget.LinearLayout").hasChild(By.clazz("android.widget.TextView").text("图文")) if(device.hasObject(byRule)) { Log.i(TAG, "picture video") SystemClock.sleep(time*1000L) } else { Log.d(TAG, "no") } }, { // ai video Log.i(TAG, "check if it's a ai video") val byRule = By.clazz("android.widget.TextView").textContains("特效") if(device.hasObject(byRule)){ Log.i(TAG, "ai video") SystemClock.sleep(time*1000L) } else { Log.d(TAG, "no") } } ) val startTime = SystemClock.elapsedRealtime() while((SystemClock.elapsedRealtime() - startTime) < duration * 1000L) { // according to video type , do something if(enter) { actionsForVideos.forEach{ it() } } // next fling() Log.i(TAG, "elapse ${(SystemClock.elapsedRealtime() - startTime)/1000}s") } } /** * fling gesture * Params: * step - steps to fling, more steps more slower, default 6 * Returns: * none */ private fun fling(step:Int = 6) { Log.i(TAG, "fling") when(device.displayRotation) { Surface.ROTATION_0, Surface.ROTATION_180 -> { Log.d(TAG, "fling in portrait"); device.swipe(500, 1400, 500, 800, step) } Surface.ROTATION_90, Surface.ROTATION_270 -> { Log.d(TAG, "fling in landscape"); device.swipe(1200, 800, 1200, 300, step) } else -> Log.e(TAG, "unknown direction") } } /** * goto suggestion menu * Params: * None * Returns: * None */ private fun gotoSuggestionMenu() { // click suggestion menu Log.i(TAG, "enter suggestion") Log.d(TAG, "find first page button") device.findObject(By.clazz("android.widget.TextView").textStartsWith("首页").descContains("首页,按钮"))?.let { Log.d(TAG, "click first page button") it.click() } Log.d(TAG, "find suggestion") device.findObject(By.clazz("android.widget.TextView").textStartsWith("推荐").descContains("推荐,按钮"))?.let { Log.d(TAG, "click suggestion button") it.click() it.wait(Until.descContains("已选中"), TIMEOUT) } } /** * launch app by clear Intent.FLAG_ACTIVITY_CLEAR_TASK * Params: * package - package to launch * timeout - launching timeout, default 5000 ms * Returns: * None */ private fun launch(packageName:String, timeout:Long=5000) { Log.i(TAG, "launch app") // get launch intent Log.d(TAG, "get launch intent") var intent = context.packageManager.getLaunchIntentForPackage(packageName) // intent can't be null assertNotNull(intent) intent?.apply { addFlags(Intent.FLAG_ACTIVITY_CLEAR_TASK) } // start app Log.d(TAG, "launch app by intent") context.startActivity(intent) // wait app Log.d(TAG, "wait app to launch") device.wait(Until.hasObject(By.pkg(packageName).depth(0)), timeout) } } A: by looking into the logs, I found it's caused by this code: My Testcases use androidx.test.uiautomator androidx.test.uiautomator is a test library, which source code is https://androidx.tech/artifacts/test.uiautomator/uiautomator/2.2.0 there is a Configurator class in the androidx.test.uiautomator , can be used to configure running parameters in this library: this Configurator is singleton, we can get its instance by Configurator.getInstance() we can change wait time by call setWaitForXXXXTimeout method to change waiting time. /** * Sets the timeout for waiting for the user interface to go into an idle * state before starting a uiautomator action. * * By default, all core uiautomator objects except {@link UiDevice} will perform * this wait before starting to search for the widget specified by the * object's {@link UiSelector}. Once the idle state is detected or the * timeout elapses (whichever occurs first), the object will start to wait * for the selector to find a match. 
* See {@link #setWaitForSelectorTimeout(long)} * * @param timeout Timeout value in milliseconds * @return self * @since API Level 18 */ public Configurator setWaitForIdleTimeout(long timeout) { mWaitForIdleTimeout = timeout; return this; } /** * Sets the timeout for waiting for a widget to become visible in the user * interface so that it can be matched by a selector. * * Because user interface content is dynamic, sometimes a widget may not * be visible immediately and won't be detected by a selector. This timeout * allows the uiautomator framework to wait for a match to be found, up until * the timeout elapses. * * @param timeout Timeout value in milliseconds. * @return self * @since API Level 18 */ public Configurator setWaitForSelectorTimeout(long timeout) { mWaitForSelector = timeout; return this; } /** * Sets the timeout for waiting for an acknowledgement of an * uiautomtor scroll swipe action. * * The acknowledgment is an <a href="http://developer.android.com/reference/android/view/accessibility/AccessibilityEvent.html">AccessibilityEvent</a>, * corresponding to the scroll action, that lets the framework determine if * the scroll action was successful. Generally, this timeout should not be modified. * See {@link UiScrollable} * * @param timeout Timeout value in milliseconds * @return self * @since API Level 18 */ public Configurator setScrollAcknowledgmentTimeout(long timeout) { mScrollEventWaitTimeout = timeout; return this; } /** * Sets the timeout for waiting for an acknowledgment of generic uiautomator * actions, such as clicks, text setting, and menu presses. * * The acknowledgment is an <a href="http://developer.android.com/reference/android/view/accessibility/AccessibilityEvent.html">AccessibilityEvent</a>, * corresponding to an action, that lets the framework determine if the * action was successful. Generally, this timeout should not be modified. * See {@link UiObject} * * @param timeout Timeout value in milliseconds * @return self * @since API Level 18 */ public Configurator setActionAcknowledgmentTimeout(long timeout)
UiDevice.hasObject method takes too long time in UiAutomator
I write a auto testcase to test Tiktok In the testcase, before I switch to next video, I will check the current video type, and do something. Log.i(TAG, "check if it's a vr video") val byRule = By.clazz("android.view.View").descContains("点击体验VR直播,按钮") if(device.hasObject(byRule)){ Log.i(TAG, "vr video") device.findObject(byRule)?.click() SystemClock.sleep(time*1000L) device.pressBack() } else { Log.d(TAG, "no") } the UiDevice.hasObject method will return immediately at first time when this code run, however the second time to run becomes very slow, it's takes about more than 10 secs. Anyone can tell me why? full code is here package com.dvdface.qq.uitestdemo import android.content.Context import android.content.Intent import android.os.SystemClock import android.util.Log import android.view.Surface import androidx.test.core.app.ApplicationProvider import androidx.test.platform.app.InstrumentationRegistry import androidx.test.ext.junit.runners.AndroidJUnit4 import androidx.test.filters.LargeTest import androidx.test.uiautomator.* import org.junit.After import org.junit.Test import org.junit.runner.RunWith import org.junit.Assert.* import org.junit.Before private const val TIMEOUT = 5000L private const val PKG_NAME = "com.ss.android.ugc.aweme" private const val TAG = "TiktokTest" /** * Instrumented test, which will execute on an Android device. * * See [testing documentation](http://d.android.com/tools/testing). */ @RunWith(AndroidJUnit4::class) class DouYinTest { private lateinit var device:UiDevice private lateinit var context:Context @Before fun setUp() { // init device device = UiDevice.getInstance(InstrumentationRegistry.getInstrumentation()) // init context context = ApplicationProvider.getApplicationContext<Context>() assertNotNull(context) } @After fun tearDown() { } @Test @LargeTest fun fastFlingLiveVideos() { // launch launch(PKG_NAME) // click suggestion menu gotoSuggestionMenu() // fling video flingVideo(true, 5, 2*60*60) } /** * watch video by fling gesture * videos have many categories: * short video * fullscreen video * live video * VR video * picture video * ai video * reminder video * Params: * enter - whether to enter play page by click full screen watch / landscape watch / VR watch * time - how long to play in single video, in seconds * duration - how long to test, in seconds * Returns: * None */ private fun flingVideo(enter:Boolean=false, time:Int=3, duration:Long=14400) { val actionsForVideos = listOf<()->Unit>( { // fullscreen video Log.i(TAG, "check if it's a fullscreen video") val byRule = By.clazz("android.widget.LinearLayout").descContains("全屏观看,按钮") if(device.hasObject(byRule)) { Log.i(TAG, "fullscreen video") device.findObject(byRule)?.click() SystemClock.sleep(time*1000L) device.pressBack() } else { Log.d(TAG, "no") } }, { // live video Log.i(TAG, "check if it's a live video") val byRule = By.clazz("android.widget.TextView").text("点击进入直播间") if(device.hasObject(byRule)){ Log.i(TAG, "live video") device.findObject(byRule)?.click() SystemClock.sleep(time*1000L) device.pressBack() } else { Log.d(TAG, "no") } }, { // vr video Log.i(TAG, "check if it's a vr video") val byRule = By.clazz("android.view.View").descContains("点击体验VR直播,按钮") if(device.hasObject(byRule)){ Log.i(TAG, "vr video") device.findObject(byRule)?.click() SystemClock.sleep(time*1000L) device.pressBack() } else { Log.d(TAG, "no") } }, { // picture video Log.i(TAG, "check if it's a picture video") val byRule = By.clazz("android.widget.LinearLayout").hasChild(By.clazz("android.widget.TextView").text("图文")) 
if(device.hasObject(byRule)) { Log.i(TAG, "picture video") SystemClock.sleep(time*1000L) } else { Log.d(TAG, "no") } }, { // ai video Log.i(TAG, "check if it's a ai video") val byRule = By.clazz("android.widget.TextView").textContains("特效") if(device.hasObject(byRule)){ Log.i(TAG, "ai video") SystemClock.sleep(time*1000L) } else { Log.d(TAG, "no") } } ) val startTime = SystemClock.elapsedRealtime() while((SystemClock.elapsedRealtime() - startTime) < duration * 1000L) { // according to video type , do something if(enter) { actionsForVideos.forEach{ it() } } // next fling() Log.i(TAG, "elapse ${(SystemClock.elapsedRealtime() - startTime)/1000}s") } } /** * fling gesture * Params: * step - steps to fling, more steps more slower, default 6 * Returns: * none */ private fun fling(step:Int = 6) { Log.i(TAG, "fling") when(device.displayRotation) { Surface.ROTATION_0, Surface.ROTATION_180 -> { Log.d(TAG, "fling in portrait"); device.swipe(500, 1400, 500, 800, step) } Surface.ROTATION_90, Surface.ROTATION_270 -> { Log.d(TAG, "fling in landscape"); device.swipe(1200, 800, 1200, 300, step) } else -> Log.e(TAG, "unknown direction") } } /** * goto suggestion menu * Params: * None * Returns: * None */ private fun gotoSuggestionMenu() { // click suggestion menu Log.i(TAG, "enter suggestion") Log.d(TAG, "find first page button") device.findObject(By.clazz("android.widget.TextView").textStartsWith("首页").descContains("首页,按钮"))?.let { Log.d(TAG, "click first page button") it.click() } Log.d(TAG, "find suggestion") device.findObject(By.clazz("android.widget.TextView").textStartsWith("推荐").descContains("推荐,按钮"))?.let { Log.d(TAG, "click suggestion button") it.click() it.wait(Until.descContains("已选中"), TIMEOUT) } } /** * launch app by clear Intent.FLAG_ACTIVITY_CLEAR_TASK * Params: * package - package to launch * timeout - launching timeout, default 5000 ms * Returns: * None */ private fun launch(packageName:String, timeout:Long=5000) { Log.i(TAG, "launch app") // get launch intent Log.d(TAG, "get launch intent") var intent = context.packageManager.getLaunchIntentForPackage(packageName) // intent can't be null assertNotNull(intent) intent?.apply { addFlags(Intent.FLAG_ACTIVITY_CLEAR_TASK) } // start app Log.d(TAG, "launch app by intent") context.startActivity(intent) // wait app Log.d(TAG, "wait app to launch") device.wait(Until.hasObject(By.pkg(packageName).depth(0)), timeout) } }
[ "by looking into the logs, I found it's caused by this code:\n\nMy Testcases use androidx.test.uiautomator\n\nandroidx.test.uiautomator is a test library, which source code is https://androidx.tech/artifacts/test.uiautomator/uiautomator/2.2.0\n\nthere is a Configurator class in the androidx.test.uiautomator , can be used to configure running parameters in this library:\n\n\nthis Configurator is singleton, we can get its instance by\nConfigurator.getInstance()\n\nwe can change wait time by call setWaitForXXXXTimeout method to change waiting time.\n\n /**\n * Sets the timeout for waiting for the user interface to go into an idle\n * state before starting a uiautomator action.\n *\n * By default, all core uiautomator objects except {@link UiDevice} will perform\n * this wait before starting to search for the widget specified by the\n * object's {@link UiSelector}. Once the idle state is detected or the\n * timeout elapses (whichever occurs first), the object will start to wait\n * for the selector to find a match.\n * See {@link #setWaitForSelectorTimeout(long)}\n *\n * @param timeout Timeout value in milliseconds\n * @return self\n * @since API Level 18\n */\n public Configurator setWaitForIdleTimeout(long timeout) {\n mWaitForIdleTimeout = timeout;\n return this;\n }\n\n\n /**\n * Sets the timeout for waiting for a widget to become visible in the user\n * interface so that it can be matched by a selector.\n *\n * Because user interface content is dynamic, sometimes a widget may not\n * be visible immediately and won't be detected by a selector. This timeout\n * allows the uiautomator framework to wait for a match to be found, up until\n * the timeout elapses.\n *\n * @param timeout Timeout value in milliseconds.\n * @return self\n * @since API Level 18\n */\n public Configurator setWaitForSelectorTimeout(long timeout) {\n mWaitForSelector = timeout;\n return this;\n }\n\n /**\n * Sets the timeout for waiting for an acknowledgement of an\n * uiautomtor scroll swipe action.\n *\n * The acknowledgment is an <a href=\"http://developer.android.com/reference/android/view/accessibility/AccessibilityEvent.html\">AccessibilityEvent</a>,\n * corresponding to the scroll action, that lets the framework determine if\n * the scroll action was successful. Generally, this timeout should not be modified.\n * See {@link UiScrollable}\n *\n * @param timeout Timeout value in milliseconds\n * @return self\n * @since API Level 18\n */\n public Configurator setScrollAcknowledgmentTimeout(long timeout) {\n mScrollEventWaitTimeout = timeout;\n return this;\n }\n\n\n\n /**\n * Sets the timeout for waiting for an acknowledgment of generic uiautomator\n * actions, such as clicks, text setting, and menu presses.\n *\n * The acknowledgment is an <a href=\"http://developer.android.com/reference/android/view/accessibility/AccessibilityEvent.html\">AccessibilityEvent</a>,\n * corresponding to an action, that lets the framework determine if the\n * action was successful. Generally, this timeout should not be modified.\n * See {@link UiObject}\n *\n * @param timeout Timeout value in milliseconds\n * @return self\n * @since API Level 18\n */\n public Configurator setActionAcknowledgmentTimeout(long timeout)\n\n\n" ]
[ 0 ]
[]
[]
[ "android", "android_uiautomator" ]
stackoverflow_0074630986_android_android_uiautomator.txt
Q: How to update state shared between Parent and Child React components within the Parent I've built a simple interface for selecting items from a menu, contained in a component called Sodas, and it's displayed in the Parent component: VendingMachine. As it is, Sodas is able to successfully change the state in the VendingMachine, however, the state cannot be changed within the VendingMachine itself. The following code represents the VendingMachine component: import Sodas from './Sodas'; const VendingMachine = () => { // Track selected soda const [selectedSoda, setSoda] = useState({ sodaName: '', stock: 0 }); const handleSodaSelection = (soda) => setSoda(soda); // Reset the selected soda const sellSoda = () => setSoda({ sodaName: '', stock: 0 }); return ( <div> Soda to Purchase: {selectedSoda.sodaName} <Sodas handleSodaSelection={handleSodaSelection} /> <div onClick={sellSoda}>Buy Selected Soda</div> </div } The following code represents the Sodas Component function Sodas({ handleSodaSelection }) { // Tracks soda selected, and returns to Parent component const [sodaSelected, setSodaSelected] = useState({ sodaName: '', stock: 0 }); React.useEffect(() => handleSodaSelection(sodaSelected), [sodaSelected, handleSodaSelection]); return ( <div className='soda_container'> <div onClick={() => setSodaSelected({ sodaName: 'Cola', stock: 7 })}>Soda</div> </div>) } Specifically, the issue is that setSoda does not work within VendingMachine and only works when passed to the Sodas component. I'm not sure if this can only work as a one way relationship or if there is something I'm missing in the syntax. Any help or references to relevant documentation would be greatly appreciated. A: You should pass the state and the function to update the state from the parent component, VendingMachine, to the child component, Sodas, as props. Then, the child component, Sodas, can use the function passed as a prop to update the state in the parent component. VendingMachine component: import Sodas from './Sodas'; const VendingMachine = () => { // Track selected soda const [selectedSoda, setSoda] = useState({ sodaName: '', stock: 0 }); // Reset the selected soda const sellSoda = () => setSoda({ sodaName: '', stock: 0 }); return ( <div> Soda to Purchase: {selectedSoda.sodaName} <Sodas selectedSoda={selectedSoda} setSoda={setSoda} /> <div onClick={sellSoda}>Buy Selected Soda</div> </div> ); } Sodas component: function Sodas({ selectedSoda, setSoda }) { return ( <div className='soda_container'> <div onClick={() => setSoda({ sodaName: 'Cola', stock: 7 })}>Soda</div> </div> ); } the VendingMachine component passes the selectedSoda state and the setSoda function to the Sodas component as props. The Sodas component then uses the setSoda function passed as a prop to update the selectedSoda state in the VendingMachine component. A: In the code you provided, the setSoda function is only passed to the Sodas component as the handleSodaSelection prop. This means that setSoda can only be called within the Sodas component. If you want to be able to call setSoda in the VendingMachine component as well, you can pass it to the Sodas component as a prop. 
Here is how you could modify the Sodas component to accept the setSoda function as a prop and pass it down to any child components that need to be able to update the selectedSoda state: function Sodas({ handleSodaSelection, setSoda }) { // Tracks soda selected, and returns to Parent component const [sodaSelected, setSodaSelected] = useState({ sodaName: '', stock: 0 }); React.useEffect(() => handleSodaSelection(sodaSelected), [sodaSelected, handleSodaSelection]); // Pass setSoda down to any child components that need to be able to update the selectedSoda state const handleSodaSelection = (soda) => setSoda(soda); return ( <div className='soda_container'> <div onClick={() => setSodaSelected({ sodaName: 'Cola', stock: 7 })}>Soda</div> </div>) } Then, in the VendingMachine component, you can pass the setSoda function to the Sodas component as a prop: const VendingMachine = () => { // Track selected soda const [selectedSoda, setSoda] = useState({ sodaName: '', stock: 0 }); // Reset the selected soda const sellSoda = () => setSoda({ sodaName: '', stock: 0 }); return ( <div> Soda to Purchase: {selectedSoda.sodaName} <Sodas handleSodaSelection={handleSodaSelection} setSoda={setSoda} /> <div onClick={sellSoda}>Buy Selected Soda</div> </div } This way, both the Sodas component and the VendingMachine component will be able to call the setSoda function to update the selectedSoda state.
How to update state shared between Parent and Child React components within the Parent
I've built a simple interface for selecting items from a menu, contained in a component called Sodas, and it's displayed in the Parent component: VendingMachine. As it is, Sodas is able to successfully change the state in the VendingMachine, however, the state cannot be changed within the VendingMachine itself. The following code represents the VendingMachine component: import Sodas from './Sodas'; const VendingMachine = () => { // Track selected soda const [selectedSoda, setSoda] = useState({ sodaName: '', stock: 0 }); const handleSodaSelection = (soda) => setSoda(soda); // Reset the selected soda const sellSoda = () => setSoda({ sodaName: '', stock: 0 }); return ( <div> Soda to Purchase: {selectedSoda.sodaName} <Sodas handleSodaSelection={handleSodaSelection} /> <div onClick={sellSoda}>Buy Selected Soda</div> </div } The following code represents the Sodas Component function Sodas({ handleSodaSelection }) { // Tracks soda selected, and returns to Parent component const [sodaSelected, setSodaSelected] = useState({ sodaName: '', stock: 0 }); React.useEffect(() => handleSodaSelection(sodaSelected), [sodaSelected, handleSodaSelection]); return ( <div className='soda_container'> <div onClick={() => setSodaSelected({ sodaName: 'Cola', stock: 7 })}>Soda</div> </div>) } Specifically, the issue is that setSoda does not work within VendingMachine and only works when passed to the Sodas component. I'm not sure if this can only work as a one way relationship or if there is something I'm missing in the syntax. Any help or references to relevant documentation would be greatly appreciated.
[ "You should pass the state and the function to update the state from the parent component, VendingMachine, to the child component, Sodas, as props.\nThen, the child component, Sodas, can use the function passed as a prop to update the state in the parent component.\nVendingMachine component:\nimport Sodas from './Sodas';\n\nconst VendingMachine = () => {\n// Track selected soda\nconst [selectedSoda, setSoda] = useState({ sodaName: '', stock: 0 });\n\n// Reset the selected soda\nconst sellSoda = () => setSoda({ sodaName: '', stock: 0 });\n\nreturn (\n<div>\nSoda to Purchase: {selectedSoda.sodaName}\n<Sodas selectedSoda={selectedSoda} setSoda={setSoda} />\n<div onClick={sellSoda}>Buy Selected Soda</div>\n</div>\n);\n}\n\nSodas component:\nfunction Sodas({ selectedSoda, setSoda }) {\nreturn (\n<div className='soda_container'>\n<div onClick={() => setSoda({ sodaName: 'Cola', stock: 7 })}>Soda</div>\n</div>\n);\n}\n\nthe VendingMachine component passes the selectedSoda state and the setSoda function to the Sodas component as props. The Sodas component then uses the setSoda function passed as a prop to update the selectedSoda state in the VendingMachine component.\n", "In the code you provided, the setSoda function is only passed to the Sodas component as the handleSodaSelection prop. This means that setSoda can only be called within the Sodas component. If you want to be able to call setSoda in the VendingMachine component as well, you can pass it to the Sodas component as a prop.\nHere is how you could modify the Sodas component to accept the setSoda function as a prop and pass it down to any child components that need to be able to update the selectedSoda state:\nfunction Sodas({ handleSodaSelection, setSoda })\n{\n // Tracks soda selected, and returns to Parent component\n const [sodaSelected, setSodaSelected] = useState({ sodaName: '', stock: 0 });\n React.useEffect(() => handleSodaSelection(sodaSelected), [sodaSelected, handleSodaSelection]);\n\n // Pass setSoda down to any child components that need to be able to update the selectedSoda state\n const handleSodaSelection = (soda) => setSoda(soda);\n\nreturn (\n <div className='soda_container'>\n <div onClick={() => setSodaSelected({ sodaName: 'Cola', stock: 7 })}>Soda</div>\n </div>)\n}\n\nThen, in the VendingMachine component, you can pass the setSoda function to the Sodas component as a prop:\n\nconst VendingMachine = () =>\n{\n // Track selected soda\n const [selectedSoda, setSoda] = useState({ sodaName: '', stock: 0 });\n\n // Reset the selected soda\n const sellSoda = () => setSoda({ sodaName: '', stock: 0 });\n\n return (\n <div>\n Soda to Purchase: {selectedSoda.sodaName}\n <Sodas handleSodaSelection={handleSodaSelection} setSoda={setSoda} />\n <div onClick={sellSoda}>Buy Selected Soda</div>\n </div\n}\n\nThis way, both the Sodas component and the VendingMachine component will be able to call the setSoda function to update the selectedSoda state.\n" ]
[ 0, 0 ]
[]
[]
[ "javascript", "parent_child", "reactjs" ]
stackoverflow_0074673168_javascript_parent_child_reactjs.txt
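Both answers above converge on the same arrangement: the parent owns the selection state and the child only reports choices through a callback, so nothing needs to be mirrored with useEffect. A compact TypeScript version of that pattern, keeping the VendingMachine and Sodas names from the thread; the Soda type and the onSelect prop name are illustrative additions:

import { useState } from "react";

type Soda = { sodaName: string; stock: number };
const emptySoda: Soda = { sodaName: "", stock: 0 };

// Child: keeps no copy of the selection, it only reports choices upward.
function Sodas({ onSelect }: { onSelect: (soda: Soda) => void }) {
  return (
    <div className="soda_container">
      <div onClick={() => onSelect({ sodaName: "Cola", stock: 7 })}>Soda</div>
    </div>
  );
}

// Parent: single owner of the selection state, so it can both set and reset it.
export function VendingMachine() {
  const [selectedSoda, setSoda] = useState<Soda>(emptySoda);

  return (
    <div>
      Soda to Purchase: {selectedSoda.sodaName}
      <Sodas onSelect={setSoda} />
      <div onClick={() => setSoda(emptySoda)}>Buy Selected Soda</div>
    </div>
  );
}

Because the child holds no state of its own, calling setSoda from the parent's Buy Selected Soda handler is not overwritten by a child effect afterwards, which is the behaviour the asker was missing.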
Q: Bottom overloaded by 213 pixels in flutter Hi I am trying to create login screen. It is working fine for me. When I open the keyboard then it is giving me an error Bottom overloaded by 213 pixels. Widget LoginPage() { return new Scaffold(body: Container( height: MediaQuery.of(context).size.height, decoration: BoxDecoration( color: Colors.white, image: DecorationImage( colorFilter: new ColorFilter.mode( Colors.black.withOpacity(0.05), BlendMode.dstATop), image: AssetImage('assets/images/mountains.jpg'), fit: BoxFit.cover, ), ), child: new Column( children: <Widget>[ Container( padding: EdgeInsets.all(120.0), child: Center( child: Icon( Icons.headset_mic, color: Colors.redAccent, size: 50.0, ), ), ), new Row( children: <Widget>[ new Expanded( child: new Padding( padding: const EdgeInsets.only(left: 40.0), child: new Text( "EMAIL", style: TextStyle( fontWeight: FontWeight.bold, color: Colors.redAccent, fontSize: 15.0, ), ), ), ), ], ), new Container( width: MediaQuery.of(context).size.width, margin: const EdgeInsets.only(left: 40.0, right: 40.0, top: 10.0), alignment: Alignment.center, decoration: BoxDecoration( border: Border( bottom: BorderSide( color: Colors.redAccent, width: 0.5, style: BorderStyle.solid), ), ), padding: const EdgeInsets.only(left: 0.0, right: 10.0), child: new Row( crossAxisAlignment: CrossAxisAlignment.center, mainAxisAlignment: MainAxisAlignment.start, children: <Widget>[ new Expanded( child: TextField( obscureText: true, textAlign: TextAlign.left, decoration: InputDecoration( border: InputBorder.none, hintText: '[email protected]', hintStyle: TextStyle(color: Colors.grey), ), ), ), ], ), ), Divider( height: 24.0, ), new Row( children: <Widget>[ new Expanded( child: new Padding( padding: const EdgeInsets.only(left: 40.0), child: new Text( "PASSWORD", style: TextStyle( fontWeight: FontWeight.bold, color: Colors.redAccent, fontSize: 15.0, ), ), ), ), ], ), new Container( width: MediaQuery.of(context).size.width, margin: const EdgeInsets.only(left: 40.0, right: 40.0, top: 10.0), alignment: Alignment.center, decoration: BoxDecoration( border: Border( bottom: BorderSide( color: Colors.redAccent, width: 0.5, style: BorderStyle.solid), ), ), padding: const EdgeInsets.only(left: 0.0, right: 10.0), child: new Row( crossAxisAlignment: CrossAxisAlignment.center, mainAxisAlignment: MainAxisAlignment.start, children: <Widget>[ new Expanded( child: TextField( obscureText: true, textAlign: TextAlign.left, decoration: InputDecoration( border: InputBorder.none, hintText: '*********', hintStyle: TextStyle(color: Colors.grey), ), ), ), ], ), ), Divider( height: 24.0, ), new Row( mainAxisAlignment: MainAxisAlignment.end, children: <Widget>[ Padding( padding: const EdgeInsets.only(right: 20.0), child: new FlatButton( child: new Text( "Forgot Password?", style: TextStyle( fontWeight: FontWeight.bold, color: Colors.redAccent, fontSize: 15.0, ), textAlign: TextAlign.end, ), onPressed: () => {}, ), ), ], ), new Container( width: MediaQuery.of(context).size.width, margin: const EdgeInsets.only(left: 30.0, right: 30.0, top: 20.0), alignment: Alignment.center, child: new Row( children: <Widget>[ new Expanded( child: new FlatButton( shape: new RoundedRectangleBorder( borderRadius: new BorderRadius.circular(30.0), ), color: Colors.redAccent, onPressed: () => {}, child: new Container( padding: const EdgeInsets.symmetric( vertical: 20.0, horizontal: 20.0, ), child: new Row( mainAxisAlignment: MainAxisAlignment.center, children: <Widget>[ new Expanded( child: Text( "LOGIN", textAlign: 
TextAlign.center, style: TextStyle( color: Colors.white, fontWeight: FontWeight.bold), ), ), ], ), ), ), ), ], ), ), new Container( width: MediaQuery.of(context).size.width, margin: const EdgeInsets.only(left: 30.0, right: 30.0, top: 20.0), alignment: Alignment.center, child: Row( children: <Widget>[ new Expanded( child: new Container( margin: EdgeInsets.all(8.0), decoration: BoxDecoration(border: Border.all(width: 0.25)), ), ), Text( "OR CONNECT WITH", style: TextStyle( color: Colors.grey, fontWeight: FontWeight.bold, ), ), new Expanded( child: new Container( margin: EdgeInsets.all(8.0), decoration: BoxDecoration(border: Border.all(width: 0.25)), ), ), ], ), ), new Container( width: MediaQuery.of(context).size.width, margin: const EdgeInsets.only(left: 30.0, right: 30.0, top: 20.0), child: new Row( children: <Widget>[ new Expanded( child: new Container( margin: EdgeInsets.only(right: 8.0), alignment: Alignment.center, child: new Row( children: <Widget>[ new Expanded( child: new FlatButton( shape: new RoundedRectangleBorder( borderRadius: new BorderRadius.circular(30.0), ), color: Color(0Xff3B5998), onPressed: () => {}, child: new Container( child: new Row( mainAxisAlignment: MainAxisAlignment.center, children: <Widget>[ new Expanded( child: new FlatButton( padding: EdgeInsets.only( top: 20.0, bottom: 20.0, ), child: new Row( mainAxisAlignment: MainAxisAlignment.spaceEvenly, children: <Widget>[ Icon( const IconData(0xea90, fontFamily: 'icomoon'), color: Colors.white, size: 15.0, ), Text( "FACEBOOK", textAlign: TextAlign.center, style: TextStyle( color: Colors.white, fontWeight: FontWeight.bold), ), ], ), ), ), ], ), ), ), ), ], ), ), ), new Expanded( child: new Container( margin: EdgeInsets.only(left: 8.0), alignment: Alignment.center, child: new Row( children: <Widget>[ new Expanded( child: new FlatButton( shape: new RoundedRectangleBorder( borderRadius: new BorderRadius.circular(30.0), ), color: Color(0Xffdb3236), onPressed: () => {}, child: new Container( child: new Row( mainAxisAlignment: MainAxisAlignment.center, children: <Widget>[ new Expanded( child: new FlatButton( padding: EdgeInsets.only( top: 20.0, bottom: 20.0, ), child: new Row( mainAxisAlignment: MainAxisAlignment.spaceEvenly, children: <Widget>[ Icon( const IconData(0xea88, fontFamily: 'icomoon'), color: Colors.white, size: 15.0, ), Text( "GOOGLE", textAlign: TextAlign.center, style: TextStyle( color: Colors.white, fontWeight: FontWeight.bold), ), ], ), ), ), ], ), ), ), ), ], ), ), ), ], ), ) ], ), )); } Does anyone know what could be the issue ? A: I would suggest replacing the top Column widget with a ListView, that automatically resizes on keyboard input, whilst also supporting scrolling. If you really want this setup as it is, you can edit your Scaffold with the parameter resizeToAvoidBottomPadding: false That should make the error disappear A: Scaffold( resizeToAvoidBottomInset: false, // set it to false ... ) As Andrey said, you may have issues with scrolling, so you may try Scaffold( resizeToAvoidBottomInset: false, // set it to false body: SingleChildScrollView(child: YourBody()), ) A: you usually need to provide a scroll widget on top of your widgets because if you try to open the keyboard or change the orientation of your phone, flutter needs to know how to handle the distribution of the widgets on the screen. Please review this resource, you can check the different options that flutter provide Out of the box, and choose the best option for your scenario. 
https://flutter.io/widgets/scrolling/ A: With resizeToAvoidBottomPadding: false in Scaffold, You don't see all the widgets that are below the open textfield. The solution is to insert a container with a SingleChildScrollView inside. Example: Container( alignment: Alignment.center, width: double.infinity, height: double.infinity, color: viewModel.color, child: SingleChildScrollView(child:"Your widgets")); A: wrap your child view into ListView will solve the prob. Please check this class _LoginScreenState extends State<LoginScreen> { @override Widget build(BuildContext context) { return new Scaffold( body: new Container( child: ListView( children: <Widget>[ Padding( padding: const EdgeInsets.all(8.0), child: new Column( crossAxisAlignment: CrossAxisAlignment.start, children: <Widget>[ new Padding( padding: EdgeInsets.only(left: 1,top: 50,right: 1,bottom: 1), child: new Text("jk", style: TextStyle(fontFamily: "mono_bold")), ), new Padding( padding: EdgeInsets.only(left: 1,top: 50,right: 1,bottom: 1), child: new TextField( style: new TextStyle(), decoration: InputDecoration( labelText: "Email", contentPadding: EdgeInsets.all(8.0) ), keyboardType: TextInputType.emailAddress, ) ), new Padding( padding: EdgeInsets.only(left: 1,top: 50,right: 1,bottom: 1), child: new TextField( style: new TextStyle( ), decoration: InputDecoration( labelText: "Password" ), keyboardType: TextInputType.text, obscureText: true, ), ), ], ), ), ], ) ), ); } A: wrap your column into SingleChildScrollView will solve the problem. Please check this: Widget build(BuildContext context) { return Scaffold( body: Container( child: Center( child: SingleChildScrollView( child: Column( mainAxisAlignment: MainAxisAlignment.center, children: <Widget>[ TextField(), TextField(), ], ), )))); } And You also use for removing yellow black overflow line resizeToAvoidBottomPadding: false but in this case, TextField does not move up on click time. A: wrap with SingleChildScrollView Widget here's my code to solve this situation: it is best and easiest method SingleChildScrollView( child: Column( mainAxisAlignment: MainAxisAlignment.center, crossAxisAlignment: CrossAxisAlignment.stretch, children: <Widget>[ Hero( tag: 'logo', child: Container( height: 200.0, child: Image.asset('images/logo.png'), ), ), SizedBox( height: 48.0, ), TextField( onChanged: (value) { //Do something with the user input. }, decoration: kTextFieldDecoration.copyWith(hintText: 'Enter username'), ), SizedBox( height: 8.0, ), TextField( onChanged: (value) { //Do something with the user input. }, decoration: kTextFieldDecoration.copyWith(hintText: 'Enter password'), ), SizedBox( height: 24.0, ), RoundedButton( colour: Colors.blueAccent, text: 'Register', onPressed: () { //later todo }, ), ], ), ), A: You can set resizeToAvoidBottomInset: false for avoiding overflow, but you can't reach fields in bottom of page, which can be covered by keyboard. Or you can wrap body of Scaffold inside SingleChildScrollView A: You can enclose all the widgets within the ListView. So you can scroll it and the overloaded will disappear. A: you should add resizeToAvoidBottomInset: false, and put your button in child:SingleChildScrollView() like the following code below : Widget build(BuildContext context) { return Scaffold( resizeToAvoidBottomInset: false, appBar: PreferredSize( preferredSize:const Size(double.infinity,100), child:(ResponsiveLayout.isTinyLimit(context) || ResponsiveLayout.isTinyHeightLimit(context)) ? 
Container() : const AppBarWidget(), ), body: Center( child:SingleChildScrollView( child: Padding( padding: const EdgeInsets.all(8.0), child: Center( child: Column( mainAxisAlignment: MainAxisAlignment.center , children: [ Text("Temperature"), LineChartSample2(), Text("Gas Level"), LineChartSample2(), on?Icon(Icons.lightbulb, size:100, color: Colors.lightBlue.shade700 , ):const Icon(Icons.lightbulb_outline, size:100, ), ElevatedButton( style: TextButton.styleFrom( backgroundColor: on? Colors.green: Colors.white10), onPressed: (){ dbR.child("movement").set({"Switch":!on}); setState(() { on = !on; }); }, child:on ? const Text("On"):const Text("Off")) ], ), ), ) ), ), ); A: Put your content in to SingleChildScrollView and add a ConstrainedBox and try https://api.flutter.dev/flutter/widgets/SingleChildScrollView-class.html A: Wrap your top column inside SingleChildScrollView. Make you layout scrollable. A: You can wrap your Fields in single child ScrollView of Flutter. A: This align items bottom to top, Try this.. child: SizedBox( height: MediaQuery.of(context).size.height, child: SingleChildScrollView( reverse: true, A: Remove unwanted padding top and bottom child: Container( height: size.height, width: size.width, padding: EdgeInsets.only( left: size.width * 0.15, right: size.width * 0.15, top: size.width * 0.15, bottom: size.width * 0.15, ), to change by this child: Container( height: size.height, width: size.width, padding: EdgeInsets.only( left: size.width * 0.15, right: size.width * 0.15, ), This worked for me. A: please set resizeToAvoidBottomPadding: false and set scrollPadding in textField A: Instead of column widget try to use flex widget, which might solve your problem.
Bottom overloaded by 213 pixels in flutter
Hi I am trying to create login screen. It is working fine for me. When I open the keyboard then it is giving me an error Bottom overloaded by 213 pixels. Widget LoginPage() { return new Scaffold(body: Container( height: MediaQuery.of(context).size.height, decoration: BoxDecoration( color: Colors.white, image: DecorationImage( colorFilter: new ColorFilter.mode( Colors.black.withOpacity(0.05), BlendMode.dstATop), image: AssetImage('assets/images/mountains.jpg'), fit: BoxFit.cover, ), ), child: new Column( children: <Widget>[ Container( padding: EdgeInsets.all(120.0), child: Center( child: Icon( Icons.headset_mic, color: Colors.redAccent, size: 50.0, ), ), ), new Row( children: <Widget>[ new Expanded( child: new Padding( padding: const EdgeInsets.only(left: 40.0), child: new Text( "EMAIL", style: TextStyle( fontWeight: FontWeight.bold, color: Colors.redAccent, fontSize: 15.0, ), ), ), ), ], ), new Container( width: MediaQuery.of(context).size.width, margin: const EdgeInsets.only(left: 40.0, right: 40.0, top: 10.0), alignment: Alignment.center, decoration: BoxDecoration( border: Border( bottom: BorderSide( color: Colors.redAccent, width: 0.5, style: BorderStyle.solid), ), ), padding: const EdgeInsets.only(left: 0.0, right: 10.0), child: new Row( crossAxisAlignment: CrossAxisAlignment.center, mainAxisAlignment: MainAxisAlignment.start, children: <Widget>[ new Expanded( child: TextField( obscureText: true, textAlign: TextAlign.left, decoration: InputDecoration( border: InputBorder.none, hintText: '[email protected]', hintStyle: TextStyle(color: Colors.grey), ), ), ), ], ), ), Divider( height: 24.0, ), new Row( children: <Widget>[ new Expanded( child: new Padding( padding: const EdgeInsets.only(left: 40.0), child: new Text( "PASSWORD", style: TextStyle( fontWeight: FontWeight.bold, color: Colors.redAccent, fontSize: 15.0, ), ), ), ), ], ), new Container( width: MediaQuery.of(context).size.width, margin: const EdgeInsets.only(left: 40.0, right: 40.0, top: 10.0), alignment: Alignment.center, decoration: BoxDecoration( border: Border( bottom: BorderSide( color: Colors.redAccent, width: 0.5, style: BorderStyle.solid), ), ), padding: const EdgeInsets.only(left: 0.0, right: 10.0), child: new Row( crossAxisAlignment: CrossAxisAlignment.center, mainAxisAlignment: MainAxisAlignment.start, children: <Widget>[ new Expanded( child: TextField( obscureText: true, textAlign: TextAlign.left, decoration: InputDecoration( border: InputBorder.none, hintText: '*********', hintStyle: TextStyle(color: Colors.grey), ), ), ), ], ), ), Divider( height: 24.0, ), new Row( mainAxisAlignment: MainAxisAlignment.end, children: <Widget>[ Padding( padding: const EdgeInsets.only(right: 20.0), child: new FlatButton( child: new Text( "Forgot Password?", style: TextStyle( fontWeight: FontWeight.bold, color: Colors.redAccent, fontSize: 15.0, ), textAlign: TextAlign.end, ), onPressed: () => {}, ), ), ], ), new Container( width: MediaQuery.of(context).size.width, margin: const EdgeInsets.only(left: 30.0, right: 30.0, top: 20.0), alignment: Alignment.center, child: new Row( children: <Widget>[ new Expanded( child: new FlatButton( shape: new RoundedRectangleBorder( borderRadius: new BorderRadius.circular(30.0), ), color: Colors.redAccent, onPressed: () => {}, child: new Container( padding: const EdgeInsets.symmetric( vertical: 20.0, horizontal: 20.0, ), child: new Row( mainAxisAlignment: MainAxisAlignment.center, children: <Widget>[ new Expanded( child: Text( "LOGIN", textAlign: TextAlign.center, style: TextStyle( color: Colors.white, 
fontWeight: FontWeight.bold), ), ), ], ), ), ), ), ], ), ), new Container( width: MediaQuery.of(context).size.width, margin: const EdgeInsets.only(left: 30.0, right: 30.0, top: 20.0), alignment: Alignment.center, child: Row( children: <Widget>[ new Expanded( child: new Container( margin: EdgeInsets.all(8.0), decoration: BoxDecoration(border: Border.all(width: 0.25)), ), ), Text( "OR CONNECT WITH", style: TextStyle( color: Colors.grey, fontWeight: FontWeight.bold, ), ), new Expanded( child: new Container( margin: EdgeInsets.all(8.0), decoration: BoxDecoration(border: Border.all(width: 0.25)), ), ), ], ), ), new Container( width: MediaQuery.of(context).size.width, margin: const EdgeInsets.only(left: 30.0, right: 30.0, top: 20.0), child: new Row( children: <Widget>[ new Expanded( child: new Container( margin: EdgeInsets.only(right: 8.0), alignment: Alignment.center, child: new Row( children: <Widget>[ new Expanded( child: new FlatButton( shape: new RoundedRectangleBorder( borderRadius: new BorderRadius.circular(30.0), ), color: Color(0Xff3B5998), onPressed: () => {}, child: new Container( child: new Row( mainAxisAlignment: MainAxisAlignment.center, children: <Widget>[ new Expanded( child: new FlatButton( padding: EdgeInsets.only( top: 20.0, bottom: 20.0, ), child: new Row( mainAxisAlignment: MainAxisAlignment.spaceEvenly, children: <Widget>[ Icon( const IconData(0xea90, fontFamily: 'icomoon'), color: Colors.white, size: 15.0, ), Text( "FACEBOOK", textAlign: TextAlign.center, style: TextStyle( color: Colors.white, fontWeight: FontWeight.bold), ), ], ), ), ), ], ), ), ), ), ], ), ), ), new Expanded( child: new Container( margin: EdgeInsets.only(left: 8.0), alignment: Alignment.center, child: new Row( children: <Widget>[ new Expanded( child: new FlatButton( shape: new RoundedRectangleBorder( borderRadius: new BorderRadius.circular(30.0), ), color: Color(0Xffdb3236), onPressed: () => {}, child: new Container( child: new Row( mainAxisAlignment: MainAxisAlignment.center, children: <Widget>[ new Expanded( child: new FlatButton( padding: EdgeInsets.only( top: 20.0, bottom: 20.0, ), child: new Row( mainAxisAlignment: MainAxisAlignment.spaceEvenly, children: <Widget>[ Icon( const IconData(0xea88, fontFamily: 'icomoon'), color: Colors.white, size: 15.0, ), Text( "GOOGLE", textAlign: TextAlign.center, style: TextStyle( color: Colors.white, fontWeight: FontWeight.bold), ), ], ), ), ), ], ), ), ), ), ], ), ), ), ], ), ) ], ), )); } Does anyone know what could be the issue ?
[ "I would suggest replacing the top Column widget with a ListView, that automatically resizes on keyboard input, whilst also supporting scrolling.\nIf you really want this setup as it is, you can edit your Scaffold with the parameter\nresizeToAvoidBottomPadding: false \n\nThat should make the error disappear\n", "Scaffold(\n resizeToAvoidBottomInset: false, // set it to false\n ... \n)\n\n\nAs Andrey said, you may have issues with scrolling, so you may try\nScaffold(\n resizeToAvoidBottomInset: false, // set it to false\n body: SingleChildScrollView(child: YourBody()),\n)\n\n", "you usually need to provide a scroll widget on top of your widgets because if you try to open the keyboard or change the orientation of your phone, flutter needs to know how to handle the distribution of the widgets on the screen.\nPlease review this resource, you can check the different options that flutter provide Out of the box, and choose the best option for your scenario.\nhttps://flutter.io/widgets/scrolling/\n", "With resizeToAvoidBottomPadding: false in Scaffold, You don't see all the widgets that are below the open textfield. The solution is to insert a container with a SingleChildScrollView inside. Example:\nContainer(\n alignment: Alignment.center,\n width: double.infinity,\n height: double.infinity,\n color: viewModel.color,\n child: SingleChildScrollView(child:\"Your widgets\"));\n\n", "wrap your child view into ListView will solve the prob. Please check this \n class _LoginScreenState extends State<LoginScreen> {\n @override\n Widget build(BuildContext context) {\n return new Scaffold(\n body: new Container(\n child: ListView(\n children: <Widget>[\n Padding(\n padding: const EdgeInsets.all(8.0),\n child: new Column(\n crossAxisAlignment: CrossAxisAlignment.start,\n children: <Widget>[\n\n new Padding(\n padding: EdgeInsets.only(left: 1,top: 50,right: 1,bottom: 1),\n child: new Text(\"jk\", style: TextStyle(fontFamily: \"mono_bold\")),\n ),\n\n new Padding(\n padding: EdgeInsets.only(left: 1,top: 50,right: 1,bottom: 1),\n child: new TextField(\n style: new TextStyle(),\n decoration: InputDecoration(\n labelText: \"Email\",\n contentPadding: EdgeInsets.all(8.0)\n ),\n keyboardType: TextInputType.emailAddress,\n )\n ),\n new Padding(\n padding: EdgeInsets.only(left: 1,top: 50,right: 1,bottom: 1),\n child: new TextField(\n style: new TextStyle(\n\n ),\n decoration: InputDecoration(\n labelText: \"Password\"\n\n ),\n keyboardType: TextInputType.text,\n obscureText: true,\n ),\n ),\n ],\n ),\n ),\n ],\n )\n ),\n);\n}\n\n", "wrap your column into SingleChildScrollView will solve the problem. 
Please check this:\n Widget build(BuildContext context) {\n return Scaffold(\n body: Container(\n child: Center(\n child: SingleChildScrollView(\n child: Column(\n mainAxisAlignment: MainAxisAlignment.center,\n children: <Widget>[\n TextField(),\n TextField(),\n ],\n ),\n ))));\n }\n\nAnd You also use for removing yellow black overflow line \nresizeToAvoidBottomPadding: false \n\nbut in this case, TextField does not move up on click time.\n", "wrap with SingleChildScrollView Widget here's my code to solve this situation:\nit is best and easiest method\nSingleChildScrollView(\n child: Column(\n mainAxisAlignment: MainAxisAlignment.center,\n crossAxisAlignment: CrossAxisAlignment.stretch,\n children: <Widget>[\n Hero(\n tag: 'logo',\n child: Container(\n height: 200.0,\n child: Image.asset('images/logo.png'),\n ),\n ),\n SizedBox(\n height: 48.0,\n ),\n TextField(\n onChanged: (value) {\n //Do something with the user input.\n },\n decoration: kTextFieldDecoration.copyWith(hintText: 'Enter username'),\n ),\n SizedBox(\n height: 8.0,\n ),\n TextField(\n onChanged: (value) {\n //Do something with the user input.\n },\n decoration: kTextFieldDecoration.copyWith(hintText: 'Enter password'),\n ),\n SizedBox(\n height: 24.0,\n ),\n\n RoundedButton(\n colour: Colors.blueAccent,\n text: 'Register',\n onPressed: () {\n //later todo\n },\n ),\n\n ],\n ),\n ),\n\n", "You can set resizeToAvoidBottomInset: false for avoiding overflow, but you can't reach fields in bottom of page, which can be covered by keyboard.\nOr you can wrap body of Scaffold inside SingleChildScrollView\n", "You can enclose all the widgets within the ListView.\nSo you can scroll it and the overloaded will disappear.\n", "you should add resizeToAvoidBottomInset: false, and put your button in child:SingleChildScrollView() like the following code below :\nWidget build(BuildContext context) {\n return Scaffold(\n resizeToAvoidBottomInset: false,\n appBar: PreferredSize(\n preferredSize:const Size(double.infinity,100),\n child:(ResponsiveLayout.isTinyLimit(context) ||\n ResponsiveLayout.isTinyHeightLimit(context))\n ? Container()\n : const AppBarWidget(),\n),\n\n body: Center(\n child:SingleChildScrollView(\n child: Padding(\n\n padding: const EdgeInsets.all(8.0),\n\n child: Center(\n\n\n child: Column(\n\n mainAxisAlignment: MainAxisAlignment.center ,\n children: [\n Text(\"Temperature\"),\n LineChartSample2(),\n\n Text(\"Gas Level\"),\n LineChartSample2(),\n on?Icon(Icons.lightbulb,\n size:100,\n color: Colors.lightBlue.shade700 ,\n ):const Icon(Icons.lightbulb_outline,\n size:100,\n\n ),\n ElevatedButton(\n style: TextButton.styleFrom(\n backgroundColor: on? Colors.green: Colors.white10),\n onPressed: (){\n dbR.child(\"movement\").set({\"Switch\":!on});\n setState(() {\n on = !on;\n });\n },\n child:on ? 
const Text(\"On\"):const Text(\"Off\"))\n ],\n ),\n ),\n ) ),\n ),\n);\n\n", "Put your content in to SingleChildScrollView and add a ConstrainedBox and try\nhttps://api.flutter.dev/flutter/widgets/SingleChildScrollView-class.html\n", "Wrap your top column inside SingleChildScrollView.\nMake you layout scrollable.\n", "You can wrap your Fields in single child ScrollView of Flutter.\n", "This align items bottom to top,\nTry this..\nchild: SizedBox(\n height: MediaQuery.of(context).size.height,\n child: SingleChildScrollView(\n reverse: true,\n\n", "Remove unwanted padding top and bottom\n child: Container(\n\n height: size.height,\n width: size.width,\n padding: EdgeInsets.only(\n left: size.width * 0.15,\n right: size.width * 0.15,\n top: size.width * 0.15,\n bottom: size.width * 0.15,\n ),\n\nto change by this\n child: Container(\n \n height: size.height,\n width: size.width,\n padding: EdgeInsets.only(\n left: size.width * 0.15,\n right: size.width * 0.15,\n ),\n\nThis worked for me.\n", "please set resizeToAvoidBottomPadding: false and set scrollPadding in textField\n", "Instead of column widget try to use flex widget, which might solve your problem.\n" ]
[ 44, 39, 13, 13, 10, 10, 7, 3, 1, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ "Try wrapping your main Column with \n1.(ListView and give property shrinkWrap: true,) List view has property children: [], or\n2.( SingleChildScrollView() )but it has only child: , .\nSomething like:\n child: ListView(shrinkWrap: true, children: <Widget>[\n new Column(children: <Widget>[\n Container(\n padding: EdgeInsets.all(120.0),\n child: Center(\n child: Icon(\n Icons.headset_mic,\n color: Colors.redAccent,\n size: 50.0,\n ),\n ),\n ),\n new Row(\n children: <Widget>[\n new Expanded(\n child: new Padding(\n padding: const EdgeInsets.only(left: 40.0),\n child: new Text(\n \"EMAIL\",\n style: TextStyle(\n fontWeight: FontWeight.bold,\n color: Colors.redAccent,\n fontSize: 15.0,\n ),\n ),\n ),\n ),\n ],\n ),\n new Container(\n width: MediaQuery.of(context).size.width,\n margin: const EdgeInsets.only(\n left: 40.0, right: 40.0, top: 10.0),\n alignment: Alignment.center,\n decoration: BoxDecoration(\n border: Border(\n bottom: BorderSide(\n color: Colors.redAccent,\n width: 0.5,\n style: BorderStyle.solid),\n ),\n ),\n padding: const EdgeInsets.only(left: 0.0, right: 10.0),\n child: new Row(\n crossAxisAlignment: CrossAxisAlignment.center,\n mainAxisAlignment: MainAxisAlignment.start,\n children: <Widget>[\n new Expanded(\n child: TextField(\n obscureText: true,\n textAlign: TextAlign.left,\n decoration: InputDecoration(\n border: InputBorder.none,\n hintText: '[email protected]',\n hintStyle: TextStyle(color: Colors.grey),\n ),\n ),\n ),\n ],\n ),\n ),\n Divider(\n height: 24.0,\n ),\n new Row(\n children: <Widget>[\n new Expanded(\n child: new Padding(\n padding: const EdgeInsets.only(left: 40.0),\n child: new Text(\n \"PASSWORD\",\n style: TextStyle(\n fontWeight: FontWeight.bold,\n color: Colors.redAccent,\n fontSize: 15.0,\n ),\n ),\n ),\n ),\n ],\n ),\n new Container(\n width: MediaQuery.of(context).size.width,\n margin: const EdgeInsets.only(\n left: 40.0, right: 40.0, top: 10.0),\n alignment: Alignment.center,\n decoration: BoxDecoration(\n border: Border(\n bottom: BorderSide(\n color: Colors.redAccent,\n width: 0.5,\n style: BorderStyle.solid),\n ),\n ),\n padding: const EdgeInsets.only(left: 0.0, right: 10.0),\n child: new Row(\n crossAxisAlignment: CrossAxisAlignment.center,\n mainAxisAlignment: MainAxisAlignment.start,\n children: <Widget>[\n new Expanded(\n child: TextField(\n obscureText: true,\n textAlign: TextAlign.left,\n decoration: InputDecoration(\n border: InputBorder.none,\n hintText: '*********',\n hintStyle: TextStyle(color: Colors.grey),\n ),\n ),\n ),\n ],\n ),\n ),\n ])\n ]),\n\n" ]
[ -1 ]
[ "android", "flutter", "ios", "material_design" ]
stackoverflow_0051774252_android_flutter_ios_material_design.txt
Q: Select column value based on other row values I tried for hours and read many posts but I still can't figure out how to handle this request: I have a table like this : Gender Marks M 75 F 88 M 93 M 88 F 98 I'd like to select all boys from the table and set the sameMarks column to 1 when the boy marks match the girl marks, otherwise it should be 0. The output should look like this: Gender Marks Same_Marks M 75 0 M 93 0 M 88 1 A: One possible approach would be aggregation: SELECT MAX(Gender) AS Gender, Marks, CASE WHEN MIN(Gender) = MAX(Gender) THEN 0 ELSE 1 END AS Same_Marks FROM yourTable GROUP BY Marks; A: One option is using EXISTS to check if women with the same mark exist and applying a CASE WHEN on the result: SELECT y.gender, y.marks, CASE WHEN EXISTS(SELECT 1 FROM yourtable WHERE gender <> 'M' and marks = y.marks) THEN 1 ELSE 0 END AS Same_Marks FROM yourtable y WHERE y.gender = 'M'; Note: This answer assumes you really want to get boys only, no women (according to your description). If this is incorrect, please review and improve your question. Like Tim already mentioned, it would be much better to use 'B' for both genders.
Select column value based on other row values
I tried for hours and read many posts but I still can't figure out how to handle this request: I have a table like this : Gender Marks M 75 F 88 M 93 M 88 F 98 I'd like to select all boys from the table and set the sameMarks column to 1 when the boy marks match the girl marks, otherwise it should be 0. The output should look like this: Gender Marks Same_Marks M 75 0 M 93 0 M 88 1
[ "One possible approach would be aggregation:\nSELECT MAX(Gender) AS Gender,\n Marks,\n CASE WHEN MIN(Gender) = MAX(Gender) THEN 0 ELSE 1 END AS Same_Marks\nFROM yourTable\nGROUP BY Marks;\n\n", "One option is using EXISTS to check if women with the same mark exist and applying a CASE WHEN on the result:\nSELECT \ny.gender, y.marks,\nCASE WHEN \n EXISTS(SELECT 1 FROM yourtable WHERE gender <> 'M' and marks = y.marks) \n THEN 1 ELSE 0 END AS Same_Marks\nFROM yourtable y\nWHERE y.gender = 'M';\n\nNote: This answer assumes you really want to get boys only, no women (according to your description). If this is incorrect, please review and improve your question.\nLike Tim already mentioned, it would be much better to use 'B' for both genders.\n" ]
[ 1, 0 ]
[]
[]
[ "sql" ]
stackoverflow_0074673445_sql.txt
Q: How to position selected item at first while category selection with listview I am creating a demo where I have used listview builder to show all available categories from a list, now when I tap on item, it should be placed at the first place.... I have attached an image to make it more clear... I tried with swiping but looks like not a proper way as it is affecting original list order; here is my coding class _Stack17State extends State<Stack17> { @override List<String> names = ['Shoes','Trousers','Jeans','Jacket','Belt','Others']; int selectedindex = 0; @override Widget build(BuildContext context) { return Scaffold( body: Column( children: [ Container( height: 50, color: Colors.white, child: ListView.builder( scrollDirection: Axis.horizontal, itemCount: names.length, itemBuilder: (context, index) { return InkWell( onTap: (){ setState(() { // I applied this logic, but looks like not proper way as it is spoiling or original list's order; selectedindex=index; swap(selectedindex); selected index=0; }); }, child: Padding( padding: EdgeInsets.symmetric(horizontal: 4), child: Container( padding: EdgeInsets.symmetric(horizontal: 30,vertical: 10), decoration: BoxDecoration( color: selectedindex==index?Colors.blue:Colors.blue.shade100, borderRadius: BorderRadius.circular(20)), child: Center(child: Text(names[index])), ), ), ); }), ), SizedBox(height: 10,), Center(child: Text('Selected Category : '+names[selectedindex]),) ], ), ); } void swapitems(int index) { String temp; temp=names[index]; names[index]=names[0]; names[0]=temp; } } A: Try this: onTap: () { var item = names[index]; names.removeAt(index); names.insert(0, item); setState(() {}); }, names[0]=temp; just replace first Item with selected one. you need to use insert which shift the list and insert the selected item to the first place.
How to position selected item at first while category selection with listview
I am creating a demo where I have used listview builder to show all available categories from a list, now when I tap on item, it should be placed at the first place.... I have attached an image to make it more clear... I tried with swiping but looks like not a proper way as it is affecting original list order; here is my coding class _Stack17State extends State<Stack17> { @override List<String> names = ['Shoes','Trousers','Jeans','Jacket','Belt','Others']; int selectedindex = 0; @override Widget build(BuildContext context) { return Scaffold( body: Column( children: [ Container( height: 50, color: Colors.white, child: ListView.builder( scrollDirection: Axis.horizontal, itemCount: names.length, itemBuilder: (context, index) { return InkWell( onTap: (){ setState(() { // I applied this logic, but looks like not proper way as it is spoiling or original list's order; selectedindex=index; swap(selectedindex); selected index=0; }); }, child: Padding( padding: EdgeInsets.symmetric(horizontal: 4), child: Container( padding: EdgeInsets.symmetric(horizontal: 30,vertical: 10), decoration: BoxDecoration( color: selectedindex==index?Colors.blue:Colors.blue.shade100, borderRadius: BorderRadius.circular(20)), child: Center(child: Text(names[index])), ), ), ); }), ), SizedBox(height: 10,), Center(child: Text('Selected Category : '+names[selectedindex]),) ], ), ); } void swapitems(int index) { String temp; temp=names[index]; names[index]=names[0]; names[0]=temp; } }
[ "Try this:\nonTap: () {\n var item = names[index];\n names.removeAt(index);\n names.insert(0, item);\n setState(() {});\n},\n\nnames[0]=temp; just replace first Item with selected one. you need to use insert which shift the list and insert the selected item to the first place.\n" ]
[ 1 ]
[]
[]
[ "flutter" ]
stackoverflow_0074673475_flutter.txt
Q: Why is my nested loop only finding one duplicate character in a string? My apologies if this is a duplicate, I couldn't find an answer after searching for a while on Stackoverflow. I am trying to use a nested loop to find any duplicate characters in a string. So far, all I can manage to do is to find one duplicate the string. For example, when I try the string "aabbcde", the function returns ['a', 'a'], whereas I was expecting ['a', 'a', 'b', 'b']. I obviously have an error in my code, can anybody help point me towards what it could be? const myStr = "aabbcde"; function duplicateCount(text){ const duplicates = []; for (let i = 0; i < text.length; i++) { for (let j = 0; j < text[i].length; j++) { if (text[i] === text[j]) { duplicates.push(text[i]); } } } return duplicates; } duplicateCount(myStr); A: It should be something like this. issues in this loop for (let j = 0; j < text[i].length; j++) const myStr = "aabbcde"; function duplicateCount(text){ const duplicates = []; for (let i = 0; i < text.length; i++) { for (let j = i+1; j < text.length; j++) { if (text[i] === text[j]) { duplicates.push(text[i]); } } } return duplicates; } console.log(duplicateCount(myStr)); A: Using nested loop will make it very hard to do it,we can use a Object to store the appear count,and then filter the count const myStr1 = "aabbcde"; const myStr2 = "ffeddbaa"; const duplicateCount = str => { let map = {} for(c of str){ map[c] = (map[c]??0) + 1 } let result = [] for(m in map){ if(map[m] <= 1){ continue } result.push(...Array(map[m]).fill(m)) } return result } console.log(duplicateCount(myStr1)) console.log(duplicateCount(myStr2)) A: You can simply achieve the result you're looking for by creating an object map of the string (meaning each key of the object will be each unique character of the string and their associated values will be the number of times each character is repeated in the string). After you create an object map of the string, you can loop through the object and check if each value is greater than one or not. If they're you would push that item into a result array by the number of times the character is repeated. Please find my code here: const myStr = 'aabbcde'; const duplicateCount = (str) => { const result = []; const obj = {}; str.split('').map((char) => { obj[char] = obj[char] + 1 || 1; }); for (key in obj) { if (obj[key] > 1) { for (let i = 0; i < obj[key]; i++) { result.push(key); } } } return result; }; console.log(duplicateCount(myStr));
Why is my nested loop only finding one duplicate character in a string?
My apologies if this is a duplicate, I couldn't find an answer after searching for a while on Stackoverflow. I am trying to use a nested loop to find any duplicate characters in a string. So far, all I can manage to do is to find one duplicate the string. For example, when I try the string "aabbcde", the function returns ['a', 'a'], whereas I was expecting ['a', 'a', 'b', 'b']. I obviously have an error in my code, can anybody help point me towards what it could be? const myStr = "aabbcde"; function duplicateCount(text){ const duplicates = []; for (let i = 0; i < text.length; i++) { for (let j = 0; j < text[i].length; j++) { if (text[i] === text[j]) { duplicates.push(text[i]); } } } return duplicates; } duplicateCount(myStr);
[ "It should be something like this.\nissues in this loop for (let j = 0; j < text[i].length; j++) \n\n\nconst myStr = \"aabbcde\";\n\nfunction duplicateCount(text){\n const duplicates = [];\n for (let i = 0; i < text.length; i++) {\n for (let j = i+1; j < text.length; j++) {\n\n if (text[i] === text[j]) {\n duplicates.push(text[i]);\n }\n }\n }\n return duplicates;\n\n}\n\nconsole.log(duplicateCount(myStr));\n\n\n\n", "Using nested loop will make it very hard to do it,we can use a Object to store the appear count,and then filter the count\n\n\nconst myStr1 = \"aabbcde\";\nconst myStr2 = \"ffeddbaa\";\n\nconst duplicateCount = str => {\n let map = {}\n for(c of str){\n map[c] = (map[c]??0) + 1 \n }\n let result = []\n for(m in map){\n if(map[m] <= 1){\n continue \n }\n result.push(...Array(map[m]).fill(m))\n }\n return result\n}\n\nconsole.log(duplicateCount(myStr1))\nconsole.log(duplicateCount(myStr2))\n\n\n\n", "You can simply achieve the result you're looking for by creating an object map of the string (meaning each key of the object will be each unique character of the string and their associated values will be the number of times each character is repeated in the string).\nAfter you create an object map of the string, you can loop through the object and check if each value is greater than one or not. If they're you would push that item into a result array by the number of times the character is repeated. Please find my code here:\n\n\nconst myStr = 'aabbcde';\n\nconst duplicateCount = (str) => {\n const result = [];\n const obj = {};\n str.split('').map((char) => {\n obj[char] = obj[char] + 1 || 1;\n });\n for (key in obj) {\n if (obj[key] > 1) {\n for (let i = 0; i < obj[key]; i++) {\n result.push(key);\n }\n }\n }\n return result;\n};\n\nconsole.log(duplicateCount(myStr));\n\n\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "javascript" ]
stackoverflow_0074673358_javascript.txt
Q: Does refresh_token make sense with client credentials oauth flow? I am testing keycloak for learning purposes. I am testing the client credentials flow token endpoint to return a jwt for rest api use. The endpoint returns an access_token and a refresh_token (refresh token is disabled by default unless I enable it in console for the client). I can call the same token endpoint with a refresh token generated from the first client credentials call but it still requires a client secret. Is it not possible to regenerate an access token in the client credentials flow with just a refresh token? If not why would I ever bother to pass a grant_type of refresh_token - wouldn't I just call the client_credential flow again since they both require a client secret? I have to guess the answer will be that refresh tokens don't make sense to be used with client_credential flows? token parameters: refresh token parameters: A: You guess right, refresh tokens don't make sense for the client_credentials grant type. Refresh tokens are used for interactive clients i.e a person. The idea of the refresh token is to remove the requirement for them to have to frequently re-authenticate e.g re-enter their username and password, whilst still allowing the token expiry time to be kept short. The reason you want to keep the expiry time short is that once it is issued it is usually not possible to revoke it. On the other hand if an account has been suspended or the password has been changed and a refresh token has been presented the reissuing of the token can be refused by the identity provider. As the client credentials flow is used for machine to machine authentication frequently re-authenticating is not a problem. The OAuth RFC specifically states "refresh token SHOULD NOT be included." in the response for the client_credentials grant type. A: @James Adcock's answer is right on the spot, aside from a minor detail that I will hopefully clarify with my answer since I have seen this inaccuracy a few times already on stack overflow: The OAuth2 documentation link states explicitly that "A refresh token SHOULD NOT be included" for client_credentials grant type. First, one needs to clarify the meaning of "SHOULD NOT" in that context. According to rfc2119: The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119. Note that the force of these words is modified by the requirement level of the document in which they are used. MUST This word, or the terms "REQUIRED" or "SHALL", mean that the definition is an absolute requirement of the specification. MUST NOT This phrase, or the phrase "SHALL NOT", mean that the definition is an absolute prohibition of the specification. SHOULD This word, or the adjective "RECOMMENDED", mean that there may exist valid reasons in particular circumstances to ignore a particular item, but the full implications must be understood and carefully weighed before choosing a different course. SHOULD NOT This phrase, or the phrase "NOT RECOMMENDED" mean that there may exist valid reasons in particular circumstances when the particular behavior is acceptable or even useful, but the full implications should be understood and the case carefully weighed before implementing any behavior described with this label. As you can read, "SHOULD NOT" does not mean that the specification prohibits it but rather that it does not recommend it. 
I have to guess the answer will be that refresh tokens don't make sense to be used with client_credential flows? As you have seen, the specification does not explicitly prohibit it. Notwithstanding, conceptually (as already pointed out by James), it does not make sense to have a refresh token when using the client credentials flow since the backend already has the client secret stored locally anyway, and can simply acquire a new access token without the hassle of either having to store user credentials or request the user to enter them again. A bit of historical reasoning on why Keycloak allows you to use the refresh token, from this thread of the keycloak mailing list: Hi, Currently, every time a confidential client tries to get a new access token from the token endpoint a new session is created on the server. This can lead to multiple active sessions for a single client/service account when doing multiple requests to token endpoint. To avoid that the client should store the access token/refresh token and use a refresh token when appropriate in case the access token has expired. That is fine. One can infer that the reason why Keycloak had (at that time) the refresh token available (by default) for the client credential flows was to be used as a workaround to a technical issue with their client credential flow implementation (i.e., creating too many sessions).
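A minimal sketch of what the client_credentials call looks like in practice (my own illustration, not taken from the original answers -- the base URL, realm name, client id and secret are placeholder assumptions; recent Keycloak versions serve the token endpoint under /realms/<realm>/protocol/openid-connect/token, older ones under /auth/realms/...):

import requests

TOKEN_URL = "https://keycloak.example.com/realms/myrealm/protocol/openid-connect/token"  # placeholder endpoint

def get_access_token():
    # One POST with the locally stored client credentials returns a fresh access token.
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "client_credentials",
        "client_id": "my-service",      # placeholder client id
        "client_secret": "my-secret",   # placeholder secret
    })
    resp.raise_for_status()
    return resp.json()["access_token"]

# When the token expires, the service simply calls get_access_token() again;
# since the secret is already stored locally, a refresh_token grant would not save anything here.
token = get_access_token()

Because the whole exchange is this cheap and needs no user interaction, repeating it on expiry is simpler than persisting and rotating refresh tokens.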
Does refresh_token make sense with client credentials oauth flow?
I am testing keycloak for learning purposes. I am testing the client credentials flow token endpoint to return a jwt for rest api use. The endpoint returns an access_token and a refresh_token (refresh token is disabled by default unless I enable it in console for the client). I can call the same token endpoint with a refresh token generated from the first client credentials call but it still requires a client secret. Is it not possible to regenerate an access token in the client credentials flow with just a refresh token? If not why would I ever bother to pass a grant_type of refresh_token - wouldn't I just call the client_credential flow again since they both require a client secret? I have to guess the answer will be that refresh tokens don't make sense to be used with client_credential flows? token parameters: refresh token parameters:
[ "You guess right, refresh tokens don't make sense for the client_credentials grant type. Refresh tokens are used for interactive clients i.e a person. The idea of the refresh token is to remove the requirement for them to have to frequently re-authenticate e.g re-enter their username and password, whilst still allowing the token expiry time to be kept short. The reason you want to keep the expiry time short is that once it is issued it is usually not possible to revoke it. On the other hand if an account has been suspended or the password has been changed and a refresh token has been presented the reissuing of the token can be refused by the identity provider.\nAs the client credentials flow is used for machine to machine authentication frequently re-authenticating is not a problem. The OAuth RFC specifically states \"refresh token SHOULD NOT be included.\" in the response for the client_credentials grant type.\n", "@James Adcock's answer is right on the spot, aside from a minor detail that I will hopefully clarify with my answer since I have seen this inaccuracy a few times already on stack overflow:\n\nThe OAuth2 documentation link states explicitly that \"A refresh token SHOULD NOT be included\" for client_credentials grant type.\n\nFirst, one needs to clarify the meaning of \"SHOULD NOT\" in that context. According to rfc2119:\n\n The key words \"MUST\", \"MUST NOT\", \"REQUIRED\", \"SHALL\", \"SHALL\n NOT\", \"SHOULD\", \"SHOULD NOT\", \"RECOMMENDED\", \"MAY\", and\n \"OPTIONAL\" in this document are to be interpreted as described in\n RFC 2119.\n\nNote that the force of these words is modified by the requirement\nlevel of the document in which they are used.\n\nMUST This word, or the terms \"REQUIRED\" or \"SHALL\", mean that the definition is an absolute requirement of the specification.\n\nMUST NOT This phrase, or the phrase \"SHALL NOT\", mean that the definition is an absolute prohibition of the specification.\n\nSHOULD This word, or the adjective \"RECOMMENDED\", mean that there may exist valid reasons in particular circumstances to ignore a particular item, but the full implications must be understood and carefully weighed before choosing a different course.\n\nSHOULD NOT This phrase, or the phrase \"NOT RECOMMENDED\" mean that there may exist valid reasons in particular circumstances when the particular behavior is acceptable or even useful, but the full implications should be understood and the case carefully weighed before implementing any behavior described with this label.\n\n\n\nAs you can read, \"SHOULD NOT\" does not mean that the specification prohibits it but rather that it does not recommend it.\n\nI have to guess the answer will be that refresh tokens dont make sense to be used with client_credential flows?\n\nAs you have seen, the specification does not explicitly prohibits it. Notwithstanding, conceptually, (as already point out by James) it does not make sense to have a refresh token when using the client credentials flow since the backend already has the client secret stored locally anyway, and can simply acquire a new access token without the hassle of either having to store user credentials or request the user to enter them again.\nA bit of historical reasoning on why Keycloak allows you to use the refresh token. From this thread of the keycloak mailing list:\n\nHi,\n\nCurrently, every time a confidential client tries to get a new access token\nfrom the token endpoint a new session is created on the server. 
This can\nlead to multiple active sessions for a single client/service account when\ndoing multiple requests to token endpoint.\nTo avoid that the client should store the access token/refresh token and\nuse a refresh token when appropriate in case the access token has expired.\nThat is fine.\n\n\none can infer that the reason why Keycloak had (at that time) the refresh token available (by default) for the client credential flows was to be used as a workaround to a technical issue with their client credential flow implementation (i.e., creating too many sessions).\n" ]
[ 1, 0 ]
[]
[]
[ "jwt", "keycloak", "oauth_2.0" ]
stackoverflow_0074670852_jwt_keycloak_oauth_2.0.txt
Q: Creating a site programmatically in Netlify with NuxtJS using NetlifyAPI I am trying to automate website deploys in Netlify using the Netlify REST API and I keep getting this error: These dependencies were not found: friendly-errors 07:27:07 friendly-errors 07:27:07 * node:buffer in ./node_modules/netlify/node_modules/node-fetch/src/index.js * node:http in ./node_modules/netlify/node_modules/node-fetch/src/index.js * node:https in ./node_modules/netlify/node_modules/node-fetch/src/index.js * node:net in ./node_modules/netlify/node_modules/node-fetch/src/utils/referrer.js * node:stream in ./node_modules/netlify/node_modules/node-fetch/src/index.js * node:url in ./node_modules/netlify/node_modules/node-fetch/src/request.js * node:util in ./node_modules/netlify/node_modules/node-fetch/src/body.js * node:zlib in ./node_modules/netlify/node_modules/node-fetch/src/index.js friendly-errors 07:27:07 To install them, you can run: npm install --save node:buffer node:http node:https node:net node:stream node:url node:util node:zlib Tried both: import { NetlifyAPI } from 'netlify' and import NetlifyAPI from 'netlify' A: Had to downgrade the "netlify" package version to v.9.0.0
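A concrete way to apply that fix (my own sketch, not from the original post): pin the dependency explicitly, e.g. run npm install netlify@9.0.0, or set "netlify": "9.0.0" in package.json and reinstall. The likely reason the downgrade helps is that newer major versions of the netlify client pull in an ESM-only node-fetch that uses node: scheme imports, which the webpack 4 build used by Nuxt 2 cannot resolve in a client bundle -- treat that explanation as an educated guess rather than something confirmed by the package authors.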
Creating a site programmatically in Netlify with NuxtJS using NetlifyAPI
I am trying to automate website deploys in Netlify using the Netlify REST API and I keep getting this error: These dependencies were not found: friendly-errors 07:27:07 friendly-errors 07:27:07 * node:buffer in ./node_modules/netlify/node_modules/node-fetch/src/index.js * node:http in ./node_modules/netlify/node_modules/node-fetch/src/index.js * node:https in ./node_modules/netlify/node_modules/node-fetch/src/index.js * node:net in ./node_modules/netlify/node_modules/node-fetch/src/utils/referrer.js * node:stream in ./node_modules/netlify/node_modules/node-fetch/src/index.js * node:url in ./node_modules/netlify/node_modules/node-fetch/src/request.js * node:util in ./node_modules/netlify/node_modules/node-fetch/src/body.js * node:zlib in ./node_modules/netlify/node_modules/node-fetch/src/index.js friendly-errors 07:27:07 To install them, you can run: npm install --save node:buffer node:http node:https node:net node:stream node:url node:util node:zlib Tried both: import { NetlifyAPI } from 'netlify' and import NetlifyAPI from 'netlify'
[ "Had to downgrade the \"netlify\" package version to v.9.0.0\n" ]
[ 1 ]
[]
[]
[ "netlify", "nuxt.js" ]
stackoverflow_0074673366_netlify_nuxt.js.txt
Q: How to pad sequences with variable length in more than 1 dimension in pytorch? Is there any clean way to create a batch of 3D sequences in pytorch? I have 3D sequences with the shape of (sequence_length_lvl1, sequence_length_lvl2, D), the sequences have different values for sequence_length_lvl1 and sequence_length_lvl2 but all of them have the same value for D, and I want to pad these sequences in the first and second dimensions and create a batch of them, but I can't use pytorch pad_sequence function, because it works only if the sequences have variable length in only one dimension. I wanted to ask if anyone knows any clean way to do this? To be more clear, I provide an example. Assume the input sequence is something like: input1 = [ [[1, 1, 1], [2, 2, 2], [3, 3, 3]], [[4, 4, 4], [5, 5, 5]] ] input2 = [ [[1, 1, 1], [2, 2, 2], [3, 3, 3]], [[6, 6, 6]], [[4, 4, 4], [5, 5, 5]] ] And I want to pad [input1, input2]. The desired output would be: output = [ [[[1, 1, 1], [2, 2, 2], [3, 3, 3]], [[4, 4, 4], [5, 5, 5], [0, 0, 0]], [[0, 0, 0], [0, 0, 0], [0, 0, 0]]], [[[1, 1, 1], [2, 2, 2], [3, 3, 3]], [[6, 6, 6], [0, 0, 0], [0, 0, 0]], [[4, 4, 4], [5, 5, 5], [0, 0, 0]]] ] So the desired output has the shape of (2, 3, 3, 3). A: this works with your example, maybe there is a faster way. input1 = [ [[1, 1, 1], [2, 2, 2], [3, 3, 3]], [[4, 4, 4], [5, 5, 5]] ] input2 = [ [[1, 1, 1], [2, 2, 2], [3, 3, 3]], [[6, 6, 6]], [[4, 4, 4], [5, 5, 5]] ] len_max = max(len(input1), len(input2)) output_val = [[], []] no_val = [[0, 0, 0], [0, 0, 0], [0, 0, 0]] for i in range(len_max): try: a = [] a = input1[i] except Exception: a = no_val add_empty = 3 - len(a) for j in range(add_empty): a += [[0, 0, 0]] try: b = [] b = input2[i] except Exception: b = no_val add_empty = 3 - len(b) for j in range(add_empty): b += [[0, 0, 0]] output_val[0] += [a] output_val[1] += [b] print('-------------\n', output_val) A: You can use text2array library that can perform such padding no matter how deeply nested the sequences are (disclaimer: I'm the author). Install with pip install text2array, then: from text2array import Batch arr = Batch([{'x': input1}, {'x': input2}]).to_array() print(arr['x']) will print array([[[[1, 1, 1], [2, 2, 2], [3, 3, 3]], [[4, 4, 4], [5, 5, 5], [0, 0, 0]], [[0, 0, 0], [0, 0, 0], [0, 0, 0]]], [[[1, 1, 1], [2, 2, 2], [3, 3, 3]], [[6, 6, 6], [0, 0, 0], [0, 0, 0]], [[4, 4, 4], [5, 5, 5], [0, 0, 0]]]]) as desired. The output is a NumPy array, but you can easily convert it to PyTorch tensor with torch.from_numpy. A: I am not sure about the pytorch data structure but if they are list-like data, you can use my solution. This function is to fill a missing value in every dimension (i.e., width, height, and depth) with 0 to adjust the dimension to be the same as the max one. This can be applied to any number of inputs, not just 2. At first, find the maximum width, maximum height, and maximum depth across all inputs (e.g., input1 and input2). After that, fill a missing cell with 0 for each input and then concatenate them together. This method doesn't require any additional libraries. 
def fill_missing_dimension(inputs): output = [] # find max width, height, depth among all inputs max_width = max([len(i) for i in inputs]) max_height = max([len(j) for i in inputs for j in i]) max_depth = max([len(k) for i in inputs for j in i for k in j]) print(max_width, max_height, max_depth) # fill missing dimension with 0 for all inputs for input in inputs: for i in range(len(input)): for j in range(len(input[i])): for k in range(len(input[i][j]), max_depth): input[i][j].append(0) for j in range(len(input[i]), max_height): input[i].append([0] * max_depth) for i in range(len(input), max_width): input.append([[0] * max_depth] * max_height) # concate all inputs output.append(input) return output If you think that the code above is too long, below is the shorter and cleaner (list comprehension) version (but hard to read and understand) of the function above: # comprehension version of fill_missing_dimension def fill_missing_dimension(inputs): max_width = max([len(i) for i in inputs]) max_height = max([len(j) for i in inputs for j in i]) max_depth = max([len(k) for i in inputs for j in i for k in j]) return [[[[[input[i][j][k] if k < len(input[i][j]) else 0 for k in range(max_depth)] if j < len(input[i]) else [0] * max_depth for j in range(max_height)] if i < len(input) else [[0] * max_depth] * max_height for i in range(max_width)] for input in inputs]] EXAMPLE input1 = [ [[1, 1, 1], [2, 2, 2], [3, 3, 3]], [[4, 4, 4], [5, 5, 5]] ] input2 = [ [[1, 1, 1], [2, 2, 2], [3, 3, 3]], [[6, 6, 6]], [[4, 4, 4], [5, 5, 5]] ] output = fill_missing_dimension([input1, input2]) output: > output [[[[1, 1, 1], [2, 2, 2], [3, 3, 3]], [[4, 4, 4], [5, 5, 5], [0, 0, 0]], [[0, 0, 0], [0, 0, 0], [0, 0, 0]]], [[[1, 1, 1], [2, 2, 2], [3, 3, 3]], [[6, 6, 6], [0, 0, 0], [0, 0, 0]], [[4, 4, 4], [5, 5, 5], [0, 0, 0]]]] If you would like to use the output as a numpy array, you can use np.array() as shown below: import numpy as np # convert to numpy array output = np.array(output) print(output.shape) # (2, 3, 3, 3)
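If you prefer to stay inside PyTorch itself, here is a minimal sketch (not part of the original answers; it assumes each input is a plain nested Python list whose innermost vectors all have the same length D) that pads both variable dimensions by copying into a zero-initialized tensor:

import torch

def pad_3d_batch(sequences, D=3):
    # sequences: list of inputs, each shaped [len_lvl1][len_lvl2][D] as nested lists
    max_lvl1 = max(len(seq) for seq in sequences)
    max_lvl2 = max(len(inner) for seq in sequences for inner in seq)
    out = torch.zeros(len(sequences), max_lvl1, max_lvl2, D)
    for b, seq in enumerate(sequences):
        for i, inner in enumerate(seq):
            # copy the (len_lvl2_i, D) slice; everything else stays zero padding
            out[b, i, :len(inner)] = torch.tensor(inner, dtype=out.dtype)
    return out

batch = pad_3d_batch([input1, input2])  # input1/input2 as defined in the example above
print(batch.shape)  # torch.Size([2, 3, 3, 3])

This avoids the extra dependency and returns a tensor directly, at the cost of a Python-level loop over the outer two dimensions.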
How to pad sequences with variable length in more than 1 dimension in pytorch?
Is there any clean way to create a batch of 3D sequences in pytorch? I have 3D sequences with the shape of (sequence_length_lvl1, sequence_length_lvl2, D), the sequences have different values for sequence_length_lvl1 and sequence_length_lvl2 but all of them have the same value for D, and I want to pad these sequences in the first and second dimensions and create a batch of them, but I can't use pytorch pad_sequence function, because it works only if the sequences have variable length in only one dimension. I wanted to ask if anyone knows any clean way to do this? To be more clear, I provide an example. Assume the input sequence is something like: input1 = [ [[1, 1, 1], [2, 2, 2], [3, 3, 3]], [[4, 4, 4], [5, 5, 5]] ] input2 = [ [[1, 1, 1], [2, 2, 2], [3, 3, 3]], [[6, 6, 6]], [[4, 4, 4], [5, 5, 5]] ] And I want to pad [input1, input2]. The desired output would be: output = [ [[[1, 1, 1], [2, 2, 2], [3, 3, 3]], [[4, 4, 4], [5, 5, 5], [0, 0, 0]], [[0, 0, 0], [0, 0, 0], [0, 0, 0]]], [[[1, 1, 1], [2, 2, 2], [3, 3, 3]], [[6, 6, 6], [0, 0, 0], [0, 0, 0]], [[4, 4, 4], [5, 5, 5], [0, 0, 0]]] ] So the desired output has the shape of (2, 3, 3, 3).
[ "this works with your example, maybe there is a faster way.\ninput1 = [\n [[1, 1, 1], [2, 2, 2], [3, 3, 3]],\n [[4, 4, 4], [5, 5, 5]]\n ]\n\ninput2 = [\n [[1, 1, 1], [2, 2, 2], [3, 3, 3]],\n [[6, 6, 6]],\n [[4, 4, 4], [5, 5, 5]]\n ]\n\nlen_max = max(len(input1), len(input2))\noutput_val = [[], []]\nno_val = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]\n\nfor i in range(len_max):\n try:\n a = []\n a = input1[i]\n except Exception:\n a = no_val\n\n add_empty = 3 - len(a)\n for j in range(add_empty):\n a += [[0, 0, 0]]\n\n try:\n b = []\n b = input2[i]\n except Exception:\n b = no_val\n\n add_empty = 3 - len(b)\n for j in range(add_empty):\n b += [[0, 0, 0]]\n\n output_val[0] += [a]\n output_val[1] += [b]\n\nprint('-------------\\n', output_val)\n\n", "You can use text2array library that can perform such padding no matter how deeply nested the sequences are (disclaimer: I'm the author). Install with pip install text2array, then:\nfrom text2array import Batch\n\narr = Batch([{'x': input1}, {'x': input2}]).to_array()\nprint(arr['x'])\n\nwill print\narray([[[[1, 1, 1],\n [2, 2, 2],\n [3, 3, 3]],\n\n [[4, 4, 4],\n [5, 5, 5],\n [0, 0, 0]],\n\n [[0, 0, 0],\n [0, 0, 0],\n [0, 0, 0]]],\n\n\n [[[1, 1, 1],\n [2, 2, 2],\n [3, 3, 3]],\n\n [[6, 6, 6],\n [0, 0, 0],\n [0, 0, 0]],\n\n [[4, 4, 4],\n [5, 5, 5],\n [0, 0, 0]]]])\n\nas desired. The output is a NumPy array, but you can easily convert it to PyTorch tensor with torch.from_numpy.\n", "I am not sure about the pytorch data structure but if they are list-like data, you can use my solution.\nThis function is to fill a missing value in every dimension (i.e., width, height, and depth) with 0 to adjust the dimension to be the same as the max one. This can be applied to any number of inputs, not just 2. At first, find the maximum width, maximum height, and maximum depth across all inputs (e.g., input1 and input2). 
After that, fill a missing cell with 0 for each input and then concatenate them together.\nThis method doesn't require any additional libraries.\ndef fill_missing_dimension(inputs):\n output = []\n\n # find max width, height, depth among all inputs\n max_width = max([len(i) for i in inputs])\n max_height = max([len(j) for i in inputs for j in i])\n max_depth = max([len(k) for i in inputs for j in i for k in j])\n\n print(max_width, max_height, max_depth)\n\n # fill missing dimension with 0 for all inputs\n for input in inputs:\n for i in range(len(input)):\n for j in range(len(input[i])):\n for k in range(len(input[i][j]), max_depth):\n input[i][j].append(0)\n for j in range(len(input[i]), max_height):\n input[i].append([0] * max_depth)\n for i in range(len(input), max_width):\n input.append([[0] * max_depth] * max_height)\n\n # concate all inputs\n output.append(input)\n\n return output\n\nIf you think that the code above is too long, below is the shorter and cleaner (list comprehension) version (but hard to read and understand) of the function above:\n# comprehension version of fill_missing_dimension\ndef fill_missing_dimension(inputs):\n max_width = max([len(i) for i in inputs])\n max_height = max([len(j) for i in inputs for j in i])\n max_depth = max([len(k) for i in inputs for j in i for k in j])\n return [[[[[input[i][j][k] if k < len(input[i][j]) else 0 for k in range(max_depth)] if j < len(input[i]) else [0] * max_depth for j in range(max_height)] if i < len(input) else [[0] * max_depth] * max_height for i in range(max_width)] for input in inputs]]\n\n\nEXAMPLE\ninput1 = [\n[[1, 1, 1], [2, 2, 2], [3, 3, 3]], \n[[4, 4, 4], [5, 5, 5]]\n]\n\ninput2 = [\n[[1, 1, 1], [2, 2, 2], [3, 3, 3]], \n[[6, 6, 6]],\n[[4, 4, 4], [5, 5, 5]]\n]\n\noutput = fill_missing_dimension([input1, input2])\n\noutput:\n> output\n\n[[[[1, 1, 1], [2, 2, 2], [3, 3, 3]],\n [[4, 4, 4], [5, 5, 5], [0, 0, 0]],\n [[0, 0, 0], [0, 0, 0], [0, 0, 0]]],\n [[[1, 1, 1], [2, 2, 2], [3, 3, 3]],\n [[6, 6, 6], [0, 0, 0], [0, 0, 0]],\n [[4, 4, 4], [5, 5, 5], [0, 0, 0]]]]\n\nIf you would like to use the output as a numpy array, you can use np.array() as shown below:\nimport numpy as np\n# convert to numpy array\noutput = np.array(output)\nprint(output.shape) # (2, 3, 3, 3)\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "deep_learning", "lstm", "python", "pytorch" ]
stackoverflow_0072488665_deep_learning_lstm_python_pytorch.txt
Q: TDengine 3.0.1.8 timestamp I am testing a downsampling query on TDengine 3.0.1.8 by following the official documentation, but I cannot get the timestamp in the result: select max(v)-min(v) as v from u14_1759 interval(1h) limit 24; I want to add a time column to the query results A: You can try this for time computation in a TDengine database: select elapsed(ts) as v from u14_1759 interval(1h) limit 24;
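Side note that is not part of the original answer: if what you want is literally the start time of each 1-hour window as a column, TDengine 3.0 window queries also provide pseudo-columns for that purpose -- to the best of my knowledge something like select _wstart, max(v)-min(v) as v from u14_1759 interval(1h) limit 24; returns the window start timestamp alongside the aggregate, but please verify the _wstart pseudo-column name against the 3.0 documentation before relying on it.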
TDengine 3.0.1.8 timestamp
I am testing a downsampling query on TDengine 3.0.1.8 by following the official documentation, but I cannot get the timestamp in the result: select max(v)-min(v) as v from u14_1759 interval(1h) limit 24; I want to add a time column to the query results
[ "You can try this for time computing in TDengine database\nselect elasped(ts) as v from u14_1759 interval(1h) limit 24;\n" ]
[ 0 ]
[]
[]
[ "tdengine" ]
stackoverflow_0074626520_tdengine.txt
Q: Display text file using node js from a remote server Good day! I'm having a hard time fixing this issue. I'm currently using node js webserver (http). I'm a beginner in using node js so any help would be appreciated. What I'm hoping to achieve is to display a string 'Hello World!' in the browser while accessing it through the URL. The problem is I'm running the script from a remote server and unfortunately I can't access it through the URL. The script is running fine but for the browser it returns an error saying: host didn’t send any data. ERR_EMPTY_RESPONSE Here is the script I'm running from the remote server: var http = require('http'); http.createServer(function (request, response) { response.writeHead(200, {'Content-Type': 'text/plain'}); response.write('Hello World!'); response.end(); }).listen(2000); I think my script doesn't have a problem. So I'm guessing it's from the setup of the server, but I don't have any idea in which part it's causing not to display it. I'm currently using a Linux Server. Thanks in advance! A: From what I can see you are listening on port 2000, are you sure that the url you are requesting the data from also contains the port e.g http://localhost:2000/ ?? Browsers by default tries to connect using port 80 on http and 443 for https, if you are listening on a different port than those, you have to define it in the url, by using a ":" after the domain/ip address Anyway, have a look at the express module for server side rest APIs, will make request handling so much easier: const express = require('express'); const app = express().listen(80); app.get("/",function(request,response){ response.send('Hello World'); }); A: Express allows you to handle the creation of web servers better. But node or express, do make sure that the URL you have entered consists of the port number that you have asked the server to listen to. Another possibility is that the port you have asked to display your response is already being used by another server. You can try using a different port number. You might have gotten the answer, but this is for the folks out there who are new to node at present and have stumbled upon this stackoverflow question! Good day :)
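One extra check worth doing here (my own suggestion, not something the answers above cover): confirm the port is actually reachable from outside the server. Node's http server accepts connections on all interfaces by default, but you can be explicit with server.listen(2000, '0.0.0.0'), then from another machine run something like curl http://<server-ip>:2000/. If that hangs while curl http://localhost:2000/ on the server itself returns Hello World!, the ERR_EMPTY_RESPONSE is almost certainly a firewall or hosting-provider rule blocking port 2000 rather than a problem in the script.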
Display text file using node js from a remote server
Good day! I'm having a hard time fixing this issue. I'm currently using node js webserver (http). I'm a beginner in using node js so any help would be appreciated. What I'm hoping to achieve is to display a string 'Hello World!' in the browser while accessing it through the URL. The problem is I'm running the script from a remote server and unfortunately I can't access it through the URL. The script is running fine but for the browser it returns an error saying: host didn’t send any data. ERR_EMPTY_RESPONSE Here is the script I'm running from the remote server: var http = require('http'); http.createServer(function (request, response) { response.writeHead(200, {'Content-Type': 'text/plain'}); response.write('Hello World!'); response.end(); }).listen(2000); I think my script doesn't have a problem. So I'm guessing it's from the setup of the server, but I don't have any idea in which part it's causing not to display it. I'm currently using a Linux Server. Thanks in advance!
[ "From what I can see you are listening on port 2000, are you sure that the url you are requesting the data from also contains the port e.g http://localhost:2000/ ?? Browsers by default tries to connect using port 80 on http and 443 for https, if you are listening on a different port than those, you have to define it in the url, by using a \":\" after the domain/ip address\nAnyway, have a look at the express module for server side rest APIs, will make request handling so much easier:\nconst express = require('express');\nconst app = express().listen(80);\n\napp.get(\"/\",function(request,response){\n response.send('Hello World');\n});\n\n", "Express allows you to handle the creation of web servers better.\nBut node or express, do make sure that the URL you have entered consists of the port number that you have asked the server to listen to.\nAnother possibility is that the port you have asked to display your response is already being used by another server. You can try using a different port number.\nYou might have gotten the answer, but this is for the folks out there who are new to node at present and have stumbled upon this stackoverflow question! Good day :)\n" ]
[ 0, 0 ]
[]
[]
[ "http", "linux", "node.js", "remote_server", "webserver" ]
stackoverflow_0051902408_http_linux_node.js_remote_server_webserver.txt
Q: I can't get a Sql Server localdb connection to work on a computer that does not have SqlServer Express installed I have a C# console application written using Visual Studio 2012. In the application I am using a Sql Server localdb connection to a database to store information. This is working fine on several computers, all of which have Visual Studio installed. I would like to deploy a program that only has to install the Sql Server Express LocalDB, and not the larger Sql Server Express. However, my application is not running on the target computers. I have installed Sql Server Express LocalDB 2014 on a target computer. I can, using a command line, run commands using sqllocaldb to verify that it is installed and running. C:\Users\someuser\Desktop\Debug>sqllocaldb v Microsoft SQL Server 2014 (12.0.2000.8) When I run my application on that same target computer, however, I get the following error. C:\Users\someuser\Desktop\Debug>Testing_Console 11:21:07,912 [1] INFO TestingConsole.Program - Current Directory is C:\Users\someuser\Desktop\Debug Extra Info: (null) Unhandled Exception: System.Data.SqlClient.SqlException: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 50 - Local Database Runtime error occurred. Cannot create an automatic instance. See the Windows Application event log for error details. The following is the beginning of my app.config file, where I am defining the connection string. I have tried putting in the direct file path to the LM file, but that didn't fix the issue. That was to be expected, however, as the program works from any directory on the computers with Visual Studio installed. <?xml version="1.0" encoding="utf-8" ?> <configuration> <configSections> <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" /> </configSections> <connectionStrings> <add name="KomoLM_Console.Properties.Settings.LMConnectionString" providerName="System.Data.SqlClient" connectionString="Data Source=(LocalDB)\v11.0;AttachDbFilename=|DataDirectory|\LM.mdf;Integrated Security=True;MultipleActiveResultSets=True" /> </connectionStrings> <startup> <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" /> </startup> I don't know if the issue is related to only having SQL Server Express LocalDB 2014 installed. Can anyone tell me what my problem might be? A: The problem was related to having Sql Server Express LocalDB 2014 installed instead of 2012. With that version MS has changed the connection string requirements. Instead of Data Source=(LocalDB)\V11.0, the connection string is Data Source=(LocalDB)\MSSQLLocalDB. After changing my connection string the program is running correctly on a computer that only has LocalDB 2014 installed. Here is a link to an article about it: https://connect.microsoft.com/SQLServer/feedback/details/845278/sql-server-2014-express-localdb-does-not-create-automatic-instance-v12-0 also http://msdn.microsoft.com/en-us/library/hh510202(v=sql.120).aspx A: Using "Data Source=(LocalDB)\MSSQLLocalDB" also did not work for me. I had to access the database using "Data Source=(LocalDB)\V12.0", and for that to work I needed to run this command first: sqllocaldb create "v12.0". 
More details on this link: https://dyball.wordpress.com/2014/04/28/sql-2014-localdb-error-cannot-connect-to-locaidbv12-o/ A: You'll want to make sure that you've installed .NET Framework 4.0 and, equally as importantly, the .NET Framework 4.0.2 update (KB #2544514). Once your system is up to date, you can download the SqlLocalDb installer from: http://www.microsoft.com/en-us/download/details.aspx?id=29062 A: The build having a connection string of (LocalDB)\v11.0 will work with the LocalDB ENU\x64\SqlLocalDB.MSI given on this link: Download SqlLocalDB I tried this on a target system where no Visual Studio is installed. This build will connect with the database with only SqlLocalDB.msi installed. There is no need to install SqlExpress on the target system. A: The problem is that when you have both Visual Studio 2013 and Visual Studio 2017 there are two versions of the local database installed. Visual Studio 2013 - (localdb)\v11.0 Microsoft SQL Server 2012 (SP1) - 11.0.3000.0 (X64) Oct 19 2012 13:38:57 Copyright (c) Microsoft Corporation Express Edition (64-bit) on Windows NT 6.2 (Build 9200: ) Visual Studio 2017 - (localdb)\MSSQLLOCALDB Microsoft SQL Server 2016 (SP1) (KB3182545) - 13.0.4001.0 (X64) Oct 28 2016 18:17:30 Copyright (c) Microsoft Corporation Express Edition (64-bit) on Windows 10 Pro 6.3 (Build 17134: ) Before installing Visual Studio 2017 I was able to connect to (localdb)\v11.0, but after installing Visual Studio 2017 I am not able to connect to the previous version of SQL Express, (localdb)\v11.0, though I am able to connect to (localdb)\MSSQLLOCALDB using C#. I am able to connect to both of them from SQL Server Management Studio without any issues. A: For me, deleting and re-creating the MSSQLLocalDB solved the issue: Locate the most recent SqlLocalDB version: DIR "C:\Program Files\Microsoft SQL Server\sqllocaldb.exe" /S /B Move into the directory with the highest version number, e.g. CD "C:\Program Files\Microsoft SQL Server\150\Tools\Binn\" Delete the default instance of LocalDB: SqlLocalDB.exe delete MSSQLLocalDB Re-create the default instance of LocalDB: SqlLocalDB.exe create MSSQLLocalDB A: I took the following steps to connect to the SQL Local DB and it works perfectly for me: Download and install Microsoft SQL Server Management Studio (SSMS) 19 from the official Microsoft site: https://learn.microsoft.com/en-us/sql/ssms/download-sql-server-management-studio-ssms-19?view=sql-server-ver16 Download and install SQL Server 2022 from the Microsoft site: https://www.microsoft.com/en-us/sql-server/sql-server-downloads?rtc=1 [After SQL Server installation you will get the server name for the connection string: in my case, it was "localhost\MSSQLSERVER01".] Open Microsoft SQL Server Management Studio and follow the steps: Connect > Database Engine > server name: <server name from step 3> > Connect. Hope this will fix the issue.
I can't get a Sql Server localdb connection to work on a computer that does not have SqlServer Express installed
I have a C# console application written using Visual Studio 2012. In the application I am using a Sql Server localdb connection to a database to store information. This is working fine on several computers, all of which have Visual Studio installed. I would like to deploy a program that only has to install the Sql Server Express LocalDB, and not the larger Sql Server Express. However, my application is not running on the target computers. I have installed Sql Server Express LocalDB 2014 on a target computer. I can, using a command line, run commands using sqllocaldb to verify that it is installed and running. C:\Users\someuser\Desktop\Debug>sqllocaldb v Microsoft SQL Server 2014 (12.0.2000.8) When I run my application on that same target computer, however, I get the following error. C:\Users\someuser\Desktop\Debug>Testing_Console 11:21:07,912 [1] INFO TestingConsole.Program - Current Directory is C:\Users\someuser\Desktop\Debug Extra Info: (null) Unhandled Exception: System.Data.SqlClient.SqlException: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 50 - Local Database Runtime error occurred. Cannot create an automatic instance. See the Windows Application event log for error details. The following is the beginning of my app.config file, where I am defining the connection string. I have tried putting in the direct file path to the LM file, but that didn't fix the issue. That was to be expected, however, as the program works from any directory on the computers with Visual Studio installed. <?xml version="1.0" encoding="utf-8" ?> <configuration> <configSections> <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" /> </configSections> <connectionStrings> <add name="KomoLM_Console.Properties.Settings.LMConnectionString" providerName="System.Data.SqlClient" connectionString="Data Source=(LocalDB)\v11.0;AttachDbFilename=|DataDirectory|\LM.mdf;Integrated Security=True;MultipleActiveResultSets=True" /> </connectionStrings> <startup> <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" /> </startup> I don't know if the issue is related to only having SQL Server Express LocalDB 2014 installed. Can anyone tell me what my problem might be?
[ "The problem was related to having Sql Server Express LocalDB 2014 installed instead of 2012. With that version MS has changed the connection string requirements. Instead of Data Source=(LocalDB)\\V11.0, the connection string is Data Source=(LocalDB)\\MSSQLLocalDB. After changing my connection string the program is running correctly on a computer that only has the LocalDB 2014 installed. Here is a link to an article about it: https://connect.microsoft.com/SQLServer/feedback/details/845278/sql-server-2014-express-localdb-does-not-create-automatic-instance-v12-0 \nalso\nhttp://msdn.microsoft.com/en-us/library/hh510202(v=sql.120).aspx\n", "Using \"Data Source=(LocalDB)\\MSSQLLocalDB\" also not worked form me. I had to access databasseusing \"Data Source=(LocalDB)\\V12.0\" and for working that access to work I needed to run this command first \"sqllocaldb create \"v12.0\". More details on this link https://dyball.wordpress.com/2014/04/28/sql-2014-localdb-error-cannot-connect-to-locaidbv12-o/\n", "You'll want to make sure that you've installed .NET Framework 4.0 and, equally as importantly, the .NET Framework 4.0.2 update (KB #2544514).\nOnce your system is up to date, you can download the SqlLocalDb installer from:\nhttp://www.microsoft.com/en-us/download/details.aspx?id=29062\n", "The build having connection string of (LocalDB)\\v11.0 will work with the localDB ENU\\x64\\SqlLocalDB.MSI given on this link Download SqlLocalDB\nI tried this on target system where no Visual Studio is installed. This build will connect with the database with only SqlLocalDB.msi installed. There is no need to install SqlExpress on target system.\n", "The problem is when you have both Visual Studio 2013 and Visual Studio 2017 there are two versions of loacal database installed.\nVisual Studio 2013 - (localdb)\\v11.0 Microsoft SQL Server 2012 (SP1) - 11.0.3000.0 (X64) \n Oct 19 2012 13:38:57 \n Copyright (c) Microsoft Corporation\n Express Edition (64-bit) on Windows NT 6.2 (Build 9200: )\nVisual Studio 2017 - (localdb)\\MSSQLLOCALDB Microsoft SQL Server 2016 (SP1) (KB3182545) - 13.0.4001.0 (X64) \n Oct 28 2016 18:17:30 \n Copyright (c) Microsoft Corporation\n Express Edition (64-bit) on Windows 10 Pro 6.3 (Build 17134: )\nBefore installing Visual Studio 2017 I was able to connect to (localdb)\\v11.0, but after installing Visual Studio 2017 I am not able to connect to previous version of SQL Express (localdb)\\v11.0, but I am able to connect to (localdb)\\MSSQLLOCALDB using C#.\nI am able to connect to both of them from SQL Server Management Studio without any issues.\n", "For me deleting and re-creating the MSSQLLocalDB solved the issue:\n\nLocate the most recent SqlLocalDB version:\nDIR \"C:\\Program Files\\Microsoft SQL Server\\sqllocaldb.exe\" /S /B\n\nMove into the directory with the highest version number, e.g.\nCD \"C:\\Program Files\\Microsoft SQL Server\\150\\Tools\\Binn\\\"\n\nDelete default instance of LocalDB:\nSqlLocalDB.exe delete MSSQLLocalDB\n\nRe-create default instance of LocalDB:\nSqlLocalDB.exe create MSSQLLocalDB\n\n\n", "I took the following steps to connect to the SQL Local DB and it perfectly works for me:\n\nDownload and install Download Microsoft SQL Server Management\nStudio (SSMS) 19: from the Microsoft official Site:\nhttps://learn.microsoft.com/en-us/sql/ssms/download-sql-server-\nmanagement-studio-ssms-19?view=sql-server-ver16\n\nDownload and install SQL Server 2022 from Microsoft Site:\nhttps://www.microsoft.com/en-us/sql-server/sql-server-downloads?\nrtc=1\n\n[After SQL Server 
Installation you will get the server name from the\nconnection string: in my case, it was: \" localhost\\MSSQLSERVER01 \"]\n1\n\n\n\nOpen Microsoft SQL Server Management Studio and follow the steps:\nconnect> Database Engine > server name: <server name from step 3>\nconnect.\n\nHope this will fix the issue.\n" ]
[ 35, 8, 4, 2, 2, 0, 0 ]
[]
[]
[ "c#", "localdb", "sql_server", "sql_server_2014_express", "visual_studio_2012" ]
stackoverflow_0027826245_c#_localdb_sql_server_sql_server_2014_express_visual_studio_2012.txt
Q: How to create dynamic hierarchy(nested key value dictionary) based on sub data customer_det = [ { "Customer": "A", "country_name": "USA", "region_name": "North", "state_name": "Florida", "subregion_name": "South Atlantic", "store": "Store1" }, { "Customer": "A", "country_name": "USA", "region_name": "North", "state_name": "Albama", "subregion_name": "Carribean", "store": "Store2" }, { "Customer": "A", "country_name": "USA", "region_name": "North", "state_name": "Albama", "subregion_name": "Carribean", "store": "Store2" }, { "Customer": "A", "country_name": "India", "region_name": "South East", "state_name": "Hyderabad", "subregion_name": "South-West", "store": "Store4" } ] I have the above list of dictionaries, but I want to create a hierarchy based on the sub data (country, region, state, etc.) in a nested key-value dictionary format using Python, as shown below. I have attached a sample of my required output: { "A": { "USA": { "North": { "South Atlantic": { "Florida": [ "Store1" ] }, "Carribean": { "Albama": [ "Store2", "Store3" ] } } }, "India": { "South": { "South-West": { "Telangana": [ "Store4" ] } } } } }
How to create dynamic hierarchy(nested key value dictionary) based on sub data
customer_det = [ { "Customer": "A", "country_name": "USA", "region_name": "North", "state_name": "Florida", "subregion_name": "South Atlantic", "store": "Store1" }, { "Customer": "A", "country_name": "USA", "region_name": "North", "state_name": "Albama", "subregion_name": "Carribean", "store": "Store2" }, { "Customer": "A", "country_name": "USA", "region_name": "North", "state_name": "Albama", "subregion_name": "Carribean", "store": "Store2" }, { "Customer": "A", "country_name": "India", "region_name": "South East", "state_name": "Hyderabad", "subregion_name": "South-West", "store": "Store4" } ] I have the above list of dictionaries, but I want to create a hierarchy based on the sub data (country, region, state, etc.) in a nested key-value dictionary format using Python, as shown below. I have attached a sample of my required output: { "A": { "USA": { "North": { "South Atlantic": { "Florida": [ "Store1" ] }, "Carribean": { "Albama": [ "Store2", "Store3" ] } } }, "India": { "South": { "South-West": { "Telangana": [ "Store4" ] } } } } }
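No working answer was recorded for this question, so the following is only a sketch of one possible approach. It assumes the nesting order implied by the sample output (Customer -> country -> region -> subregion -> state -> list of stores) and that duplicate store rows should be collapsed; the function name build_hierarchy and the trimmed sample list are illustrative choices, not part of the original post.

import json

def build_hierarchy(records):
    # Assumed nesting order, taken from the sample output:
    # Customer -> country -> region -> subregion -> state -> [stores]
    tree = {}
    for rec in records:
        node = tree
        for key in ("Customer", "country_name", "region_name", "subregion_name"):
            node = node.setdefault(rec[key], {})
        stores = node.setdefault(rec["state_name"], [])
        if rec["store"] not in stores:  # collapse duplicate store rows
            stores.append(rec["store"])
    return tree

if __name__ == "__main__":
    sample = [
        {"Customer": "A", "country_name": "USA", "region_name": "North",
         "state_name": "Florida", "subregion_name": "South Atlantic", "store": "Store1"},
        {"Customer": "A", "country_name": "USA", "region_name": "North",
         "state_name": "Albama", "subregion_name": "Carribean", "store": "Store2"},
    ]
    print(json.dumps(build_hierarchy(sample), indent=2))

Running it against the full customer_det list from the question produces the same nested shape, with each state mapping to a de-duplicated list of stores.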
[]
[]
[ "The question is a bit vague. But to get data from your database, you may first need to query it and then save the results. In python, you can utilize json.dumps() for getting your desired output.\nimport json\n\n# Example data\ndata = [\n {\n 'id': 1,\n 'name': 'John Doe',\n 'age': 30\n },\n {\n 'id': 2,\n 'name': 'Jane Doe',\n 'age': 25\n }\n]\n\n# Convert the data into JSON format\njson_data = json.dumps(data)\n\n# Print the JSON data\nprint(json_data)\n\nResult\n[\n {\n \"id\": 1,\n \"name\": \"John Doe\",\n \"age\": 30\n },\n {\n \"id\": 2,\n \"name\": \"Jane Doe\",\n \"age\": 25\n }\n]\n\n" ]
[ -1 ]
[ "django", "json", "python" ]
stackoverflow_0074673511_django_json_python.txt