Columns: content (string, 86 to 88.9k chars) | title (string, 0 to 150 chars) | question (string, 1 to 35.8k chars) | answers (sequence) | answers_scores (sequence) | non_answers (sequence) | non_answers_scores (sequence) | tags (sequence) | name (string, 30 to 130 chars)
Q:
How to upgrade the Expo SDK to a specific version
I want to upgrade my Expo SDK version step by step, as recommended in the documentation. These are the instructions:
Update to the latest version of Expo CLI: npm i -g expo-cli. [email protected] or greater is required.
Update to the latest version of EAS CLI if you use it: npm i -g eas-cli.
Run expo upgrade in your project directory.
I want to go from version 42 to version 43, and not to the latest version 44. Is this possible?
Thanks a lot in advance
A:
Try this
expo upgrade 43
This should solve your problem!
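For a step-by-step upgrade this generalizes naturally: update the global CLI first, then run expo upgrade once per SDK version. A sketch of the full flow (the CLI resolves the exact package versions for you):
# Update the global Expo CLI first
npm i -g expo-cli

# From the project directory, upgrade one SDK version at a time
expo upgrade 43   # 42 -> 43
# Test the app, then optionally continue:
expo upgrade 44   # 43 -> 44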
A:
Try this command:
npm install expo@43 -g

Or:
expo update 43
A:
Tested on 29th Sep, 2022
Use expo upgrade @43.0.0
A:
Just run
expo upgrade -h
You will see the options. You just have to specify the version, as other answers suggest:
expo upgrade 46
A:
Try this:
expo install expo-updates
| Answer scores: 10, 7, 0, 0, 0 | Tags: expo, react_native | Source: stackoverflow_0072630357_expo_react_native.txt |
Q:
Monitoring a MongoDB Atlas cluster with Zabbix
I am configuring a Zabbix service to monitor every host and service I'm currently using.
I tried, without any success, to configure the MongoDB [Cluster, Node] by Zabbix Agent 2 template.
I added a specific user and password to allow retrieving monitoring information and typed them into the macros: {$MONGODB.USER}, {$MONGODB.PASSWORD}
I also typed the URI for connecting to one of the nodes of my actual MongoDB Atlas cluster into the field {$MONGODB.CONNSTRING}, like the following example: tcp://clustername.instance.mongodb.net:27017.
With all that information, I continually receive the message "No reachable servers" / "zabbix_get [8700]: Get value error: ZBX_TCP_READ() failed: [104] Connection reset by peer"
The "ZBX_TCP_READ" is returned when I use the :
zabbix_get -p agent2_port -s host -k 'mongodb.ping["tcp://cluster.instance.mongodb.net:27017","zabbix_user","zabbix_password"]'
All I get back is:
zabbix_get [7647]: Get value error: ZBX_TCP_READ() failed: [104] Connection reset by peer
zabbix_get [7647]: Check access restrictions in Zabbix agent configuration
I expect to get a "Connection Successful" and then all the information regarding the collections, the I/O, ...
I know I can use the MongoDB Atlas monitoring page, but I would prefer to gather all my monitoring information into the single Zabbix service I'm currently configuring.
What am I missing? Has someone already managed to successfully monitor a MongoDB Atlas cluster through Zabbix (I did not find anything relevant in my Google searches, nor on Stack Overflow)?
Thank you in advance for any help you can provide.
A:
It is possible that the user and password you configured do not have the correct permissions to access the MongoDB Atlas cluster. You may need to grant the user access to the cluster and specific databases within the cluster in order to retrieve the necessary monitoring information.
Additionally, the URI you provided may not be correct or may not be formatted properly. You can verify the correct URI for your cluster by accessing the MongoDB Atlas cluster dashboard and looking for the connection string under the "Connect" tab.
Lastly, ensure that the Zabbix agent is properly configured and able to communicate with the MongoDB Atlas cluster. This may involve configuring the firewall rules and security settings to allow the Zabbix agent access to the cluster.
If all of these steps are properly followed, you should be able to retrieve the necessary monitoring information from your MongoDB Atlas cluster using the Zabbix service.
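As a concrete example of the permissions point: for this template the monitoring user typically needs at least the clusterMonitor role, plus read on the local database for oplog-based metrics. A sketch of creating such a user in the mongo shell (the user name and password are placeholders):
// Run against the admin database; the credentials below are placeholders
use admin
db.createUser({
  user: "zabbix_user",
  pwd: "zabbix_password",
  roles: [
    { role: "clusterMonitor", db: "admin" },  // serverStatus, replica set state
    { role: "read", db: "local" }             // oplog metrics
  ]
})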
| Answer scores: 0 | Tags: mongodb, mongodb_atlas, monitoring, zabbix | Source: stackoverflow_0074615620_mongodb_mongodb_atlas_monitoring_zabbix.txt |
Q:
AWS Cloudwatch insight shows no data
I want to build a logging dashboard to monitor an application in AWS EC2. I configured the CloudWatch side and everything works like a charm. But when I go to CloudWatch Logs Insights and create a query for the logs, I'm getting 'no data found' for every query/time range I use.
I can see there are some logs in the stream when I click on it (in the logs panel), but Insights cannot discover them.
What am I doing wrong?
Maybe someone could help me, thanks a lot
A:
Try changing the query to:
fields @logStream, @message | limit 20
And expand the time frame to, say, 4 weeks, making sure there are log streams within that time frame that contain log events.
A:
If the events appear in Log groups but don't appear in Logs Insights:
Did you use the Amazon CloudWatch Logs API PutLogEvents and inject logs with older timestamps?
If yes: you can't view Logs Insights events whose timestamps are earlier than the log group's creation time.
Try injecting events with timestamps newer than the log group creation time.
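To check whether this applies, you can compare the log group's creation time with the event timestamps, for example with the AWS CLI (a sketch; the log group and stream names are placeholders):
# Log group creation time (milliseconds since the epoch)
aws logs describe-log-groups --log-group-name-prefix /my/app \
    --query 'logGroups[].{name:logGroupName,creationTime:creationTime}'

# Timestamps of the most recent events in one stream
aws logs get-log-events --log-group-name /my/app \
    --log-stream-name my-stream --limit 5 --query 'events[].timestamp'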
| Answer scores: 1, 1 | Tags: amazon_cloudwatch, amazon_cloudwatchlogs, amazon_web_services | Source: stackoverflow_0072487935_amazon_cloudwatch_amazon_cloudwatchlogs_amazon_web_services.txt |
Q:
Specified function return value, yet error is appearing
I am for some reason getting this error from my code, yet I have specified the return type.
Unexpected non-void return value in void function
func performSearch(region: MKCoordinateRegion) -> MKLocalSearch.Response {
print("Searching...")
let searchRequest = MKLocalSearch.Request()
searchRequest.naturalLanguageQuery = "Coffee"
searchRequest.region = region
let search = MKLocalSearch(request: searchRequest)
search.start { response, error in
guard let response = response else {
print("Error: \(error?.localizedDescription ?? "Unknown error").")
return
}
return response
}
}
A:
This is probably because the completion block that is passed to search.start shouldn't have a return value. Since the return statement is inside the completion block, it is treated as a return statement for the completion block, and not for performSearch.
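One common fix, sketched below, is to make performSearch asynchronous as well and hand the response back through a completion handler instead of a return value (the handler parameter is an illustration, not part of MapKit):
import MapKit

func performSearch(region: MKCoordinateRegion,
                   completion: @escaping (MKLocalSearch.Response?) -> Void) {
    print("Searching...")
    let searchRequest = MKLocalSearch.Request()
    searchRequest.naturalLanguageQuery = "Coffee"
    searchRequest.region = region
    MKLocalSearch(request: searchRequest).start { response, error in
        if response == nil {
            print("Error: \(error?.localizedDescription ?? "Unknown error").")
        }
        completion(response)  // deliver the result asynchronously
    }
}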
| Answer scores: 0 | Tags: ios, swift, swiftui, uikit | Source: stackoverflow_0074674204_ios_swift_swiftui_uikit.txt |
Q:
EJS error saying could not find matching close tag
I am new to MongoDB and EJS. I am trying to display content according to the collection id clicked; if I console-log the id I get the id, but when I push the content to the details page I get an error: Error: Could not find matching close tag for "<%=". Please, what am I doing wrong here?
Here's my code
//my app.js file
app.get("/blogs/:id", (req, res) => {
const id = req.params.id
console.log(id)
Blog.findById(id)
.then((result) => {
res.render('details', {title: "details page", blog: result})
})
.catch(err => console.log(err))
})
my details file
<!DOCTYPE html>
<html lang="en">
<%- include("./partial/head.ejs") -%>
<body>
<div class="singleblog-page">
<%- include("./partial/nav.ejs") -%>
<div class="single-blog">
<h1><%=blog.title></h1>
<p><%=blog.body></p>
</div>
<%- include("./partial/footer.ejs") -%>
</div>
</body>
</html>
A:
The error message says that EJS is unable to find a matching closing tag for the <%= tag that is used to output the value of the blog.title variable.
You have to add a closing tag for the <%= tag. In EJS, the closing tag is %>. So, you need to add this closing tag after the blog.title variable in your template.
<h1><%=blog.title%></h1>
Similarly, you need to add a closing tag for the <%= tag that is used to output the value of the blog.body variable, like;
<p><%=blog.body%></p>
| Answer scores: 0 | Tags: ejs, express, mongodb, mongoose, node.js | Source: stackoverflow_0074674362_ejs_express_mongodb_mongoose_node.js.txt |
Q:
Selecting date(1st of every month) from a date range in Hive
I need to generate a random date (the 1st of some month) selected from a given date range in Hive (inclusive range).
For example, if the range is 25/12/2021 - 01/06/2022, then I want to select a random date from this set of dates: {01/01/2022, 01/02/2022, 01/03/2022, 01/04/2022, 01/05/2022, 01/06/2022}.
Can any one guide me with my query?
I tried using
select concat('2019','-',lpad(floor(RAND()*100.0)%10+1,2,0),'-',lpad(floor(RAND()*100.0)%31+1,2,0));
but this needs a date; I need to pass a column value as the low range and a particular date as the second range, since there are different dates in different columns for the low range to be passed.
A:
You can use below code to calculate a random date between two dates.
select trunc(date_add(start_dt, cast (datediff( end_dt,start_dt)*rand() as INT)),'MM') as random_dt
You can test the logic using below code-
select trunc(date_add('2021-01-17', cast (datediff( '2022-01-27','2021-01-17')*rand() as INT)), 'MM') as random_dt
Explanation -
The idea is to add a random number that is less than the date difference to the start date.
datediff() - returns the difference between the dates as an INT.
rand() - returns a number between 0 and 1 (both included), which means your start or end date can sometimes be the same as the random date.
date_add() - adds the random integer to the start date to generate the random date.
trunc(dt, 'MM') - returns the first day of the month.
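Applied to the range from the question (Hive expects yyyy-MM-dd literals), a sketch would be as below. One caveat: a random date landing in late December 2021 truncates to 2021-12-01, which is outside the desired set, so if the result must start at 2022-01-01 you would first move the lower bound up to the next month boundary:
select trunc(date_add('2021-12-25',
       cast(datediff('2022-06-01', '2021-12-25') * rand() as INT)), 'MM') as random_dt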
| Answer scores: 0 | Tags: date, hive, hiveql, random | Source: stackoverflow_0074673022_date_hive_hiveql_random.txt |
Q:
My form is not working, please do tell me where it went wrong (PHP validation)
I would like to know where in my code the form went wrong.
<?php
if(isset($_POST['submit'])) {
$arrayName = array("TeacherA", "TeacherB", "TeacherC" , "TeacherD", "TeacherE");
$minimum = 5;
$maximum = 10;
$name = $_POST['yourName'];
$email = $_POST['yourEmail'];
if(strlen($name) < $minimum) {
echo "Your name should be longer than 5 characters";
}
if(strlen($name) > $maximum) {
echo "Your name should not be longer than 10 characters";
}
if(!in_array($name,$arrayName)){
echo "Please do register with us before you can login";
} else {
echo "Welcome!";
}
}
?>
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Blank</title>
</head>
<body>
<form action="form.php" method="post">
<label>Your Name: </label>
<input type="text" name="yourName"> <br>
<label for="">Your E mail:</label>
<input type="email" name="yourEmail" id=""><br>
<!-- <textarea name="yourMessage" id="" cols="30" rows="10"></textarea><br> -->
<input type="submit" name="submit" value="submit">
</form>
</body>
</html>
I run it through localhost; however, I could not get the result that I wanted.
The result that I want is: if I didn't enter a name in the "Your Name" field, then it should show:
Please do register with us before you can login
A:
if the the "Your Name" field is empty, then its strlen($name) should be 0 and the first if statement is true and it will show Your name should be longer than 5 characters
you can try this :
<?php
if (isset($_POST['submit'])) {

    $arrayName = array("TeacherA", "TeacherB", "TeacherC", "TeacherD", "TeacherE");

    $minimum = 5;
    $maximum = 10;

    $name = $_POST['yourName'];
    $email = $_POST['yourEmail'];

    if (empty($name)) {
        echo "Please do register with us before you can login";
    } elseif (strlen($name) < $minimum) {
        echo "Your name should be longer than 5 characters";
    } elseif (strlen($name) > $maximum) {
        echo "Your name should be less than 10 characters";
    } elseif (in_array($name, $arrayName)) {
        echo "Welcome!";
    } else {
        echo "Your login is not correct";
    }
}
?>
I used empty() to test if the field is empty, and I used if-else statements because I want the script to stop once it finds a 'true' condition.
In your script you could use return;, but that would exit the rest of your script.
Have a nice code :)
Non-answer (score -2):
Try this:
if(!isset($name)){
    echo "Please do register with us before you can login";
}
else{
    echo "Welcome";
}

And also:
<input type="text" name="yourName" required>

| Answer scores: 1 | Tags: forms, html, php | Source: stackoverflow_0074674016_forms_html_php.txt |
Q:
Resize images in UIWebView to viewport size
I'm displaying HTML with some images inside a UIWebView in my iPhone App.
When the images are wider than the viewport of the iPhone, I get horizontal scrollbars, which I don't want because it's mostly about the text, not the images.
Is there a way to resize images displayed inside the UIWebView according to the width (best: even if the device is rotated)?
A:
If you want the maximum width to be the width of the webView, use the max-width CSS property.
max-width: 100%; width: auto; height: auto;
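A sketch of what the loaded HTML could look like with this rule applied (the file name is a placeholder; the viewport meta tag additionally ties the page width to the device width, which also helps when the device is rotated):
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<style>
  /* Scale any image down to the web view's width, preserving aspect ratio */
  img { max-width: 100%; width: auto; height: auto; }
</style>
<p>Some article text...</p>
<img src="wide-photo.jpg" alt="A photo wider than the screen">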
A:
If the image is inside a UIWebView you could try using css to size them to 100%?
Something like . . .
<img style="width:100%" src="..." />
DISCLAIMER: I don't have XCode handy to test that but it should work! You'll have to try it for yourself!
A:
For AutoHeightWebView you can put max-width in the customStyle prop:
<AutoHeightWebView
    customStyle={`
      * {
        max-width: 100%;
      }
    `}
/>
| Answer scores: 34, 5, 0 | Tags: image, iphone, uiwebview | Source: stackoverflow_0002676279_image_iphone_uiwebview.txt |
Q:
Zabbix 5.4.9 template Postgresql agent error
I'm doing this tutorial
https://www.zabbix.com/integrations/postgresql#postgresql
https://www.zabbix.com/documentation/4.4/en/manual/config/templates_out_of_the_box/requirements/postgresql
When I copy the file "template_db_postgresql.conf" to "/etc/zabbix/zabbix_agentd.d/", the agent service doesn't work.
When I remove "template_db_postgresql.conf", the agent service works (but without getting the data).
I'm using zabbix_agent2.
Service Agent error image
I don't know if the problem is "template_db_postgresql.conf".
I tried expanding the file permissions.
I reinstalled the agent.
The agent on the Postgres server does not work when this file is in place.
A:
It is possible that there is an error in the "template_db_postgresql.conf" file that is causing the agent service to not work properly. It is recommended to check the file for any syntax errors or incorrect configuration settings.
Additionally, you can try starting the agent service with the debug flag enabled to get more detailed information about the issue. This can be done by running the command "zabbix_agent2 -d" and checking the output for any error messages or clues about the cause of the issue.
It may also be helpful to reach out to the Zabbix community or support team for assistance with troubleshooting the issue.
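As a concrete way to get verbose output, a sketch (the paths and service name assume a default package install; adjust to your layout):
# /etc/zabbix/zabbix_agent2.conf -- raise log verbosity (0 = basic info, 5 = extended debugging)
DebugLevel=4

# Restart the agent and watch its log while the template items are polled
sudo systemctl restart zabbix-agent2
sudo tail -f /var/log/zabbix/zabbix_agent2.log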
| Answer scores: 0 | Tags: monitoring, zabbix | Source: stackoverflow_0074573332_monitoring_zabbix.txt |
Q:
How to restart android app programmatically
I want to restart my app through a pending intent. The code below is not working.
val intent = Intent(this, Activity::class.java).apply {
flags = Intent.FLAG_ACTIVITY_CLEAR_TOP
}
val pendingIntentId = 1
val pendingIntent = PendingIntent.getActivity(this, pendingIntentId, intent, PendingIntent.FLAG_CANCEL_CURRENT)
val mgr = getSystemService(Context.ALARM_SERVICE) as AlarmManager
val timeToStart = System.currentTimeMillis() + 1000L
mgr.set(AlarmManager.RTC, timeToStart, pendingIntent)
exitProcess(0)
The target version is 31, so I updated the pending intent with PendingIntent.FLAG_MUTABLE; it is still not working. I searched many links related to this, but no luck.
Restarting Android app programmatically
Force application to restart on first activity
https://www.folkstalk.com/tech/restart-application-programmatically-android-with-code-examples/#:~:text=How%20do%20I%20programmatically%20restart,finishes%20and%20automatically%20relaunches%20us.%20%7D
In Nov 2022, when the target version is 31 and the min SDK version is 29, the above pending intent code is not restarting the app.
Any clue why the above pending intent is not working, or any other suggestion apart from re-launching the activity? I don't want to re-launch using startActivity(intent).
A:
When you want to restart the entire app you could use the very easy library: ProcessPhoenix
You can simply add the library and execute:
ProcessPhoenix.triggerRebirth(context);
or with a specific intent:
Intent nextIntent = //...
ProcessPhoenix.triggerRebirth(context, nextIntent);
This is the easiest way to restart an Android app programmatically.
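For reference, the library is pulled in via Gradle; a sketch (the coordinates follow the ProcessPhoenix README, the version shown is an assumption -- use whatever is current):
// app/build.gradle.kts
dependencies {
    implementation("com.jakewharton:process-phoenix:2.1.2")  // version is an assumption
}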
Non-answer (score -1):
Here is a rewritten version of your code - if you prefer that:
val intent = Intent(this, Activity::class.java).apply {
    flags = Intent.FLAG_ACTIVITY_CLEAR_TOP or Intent.FLAG_ACTIVITY_NEW_TASK
}
val pendingIntentId = 1
val pendingIntent = PendingIntent.getActivity(this, pendingIntentId, intent, PendingIntent.FLAG_CANCEL_CURRENT)
val mgr = getSystemService(Context.ALARM_SERVICE) as AlarmManager
val timeToStart = System.currentTimeMillis() + 1000L
mgr.set(AlarmManager.RTC, timeToStart, pendingIntent)
finish() // Close the current Activity

This code creates a new Intent to start the main Activity of the app, with the FLAG_ACTIVITY_CLEAR_TOP and FLAG_ACTIVITY_NEW_TASK flags set. Then, it creates a PendingIntent with this Intent, and sets an alarm using the AlarmManager class to launch the Intent after 1 second (1000 milliseconds). Finally, it calls the finish method to close the current Activity.
When the alarm is triggered, it will launch the Intent to restart the app, effectively closing and restarting the app. Note that this will only work if the app is in the foreground, and may not work if the app is in the background or has been killed by the system.
The main two changes I considered to be useful:
1. You were using the exitProcess method to exit the app after setting the alarm. This method is not available in Android, and will cause the app to crash. Instead of using exitProcess, you should call the finish method on the current Activity to close it. (refer to line 8 - I marked it with a comment)
2. The Intent you are using to restart the app is missing the Intent.FLAG_ACTIVITY_NEW_TASK flag. This flag is required for an Intent that is used to start a new task (as is the case with restarting the app). Without this flag, the app may not be restarted properly.
With those two small changes, you should be able to smoothly restart an android app programmatically.

| Answer scores: 1 | Tags: alarmmanager, android, android_pendingintent | Source: stackoverflow_0074571858_alarmmanager_android_android_pendingintent.txt |
Q:
How to open firebird .fdb file in VS Code?
I want to see and check an .fdb file, and I am using VS Code to open it. I installed "DB Explorer For Firebird Databases" for this, and the file opened. However, the opened file was corrupt. How can I fix this?
A:
A Firebird database file is a file to be read by the Firebird database engine and queried using SQL. It is not something to be opened in a text editor, like you did in the screenshot.
The DB Explorer For Firebird Databases is a plugin for connecting to a Firebird database server, and executing queries against that server. It is not something for viewing the contents of a FDB file directly.
Unfortunately, as far as I'm aware, the DB Explorer For Firebird Databases has been broken for a while now, and it seems it is no longer maintained. You can connect to a database, and see the tables it has, but attempting to execute queries or view data will not show any query results, but instead shows a broken image icon (the issue for this bug has been open since 2019, so I guess it is unlikely to get fixed).
You may want to consider using something like DBeaver or FlameRobin to query a Firebird database, but that will still require having a Firebird database server installed.
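For a quick look without a GUI, Firebird also ships a command-line tool, isql; a sketch (the server, path, and credentials are placeholders):
# Connect through a running Firebird server and inspect the database
isql -user SYSDBA -password masterkey "localhost:/path/to/mydb.fdb"

# Inside isql, SHOW commands and plain SQL work, e.g.:
#   SHOW TABLES;
#   SELECT FIRST 10 * FROM some_table;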
| Answer scores: 0 | Tags: fdb, firebird, visual_studio_code | Source: stackoverflow_0074670355_fdb_firebird_visual_studio_code.txt |
Q:
Is there any issue learning .NET technology with version 4.0?
I have been watching videos on C#.NET, ASP.NET, and MVC from the kudvenkat YouTube channel. This man is brilliant at teaching .NET technology, but kudvenkat's .NET videos are from 2012, so I think they would be on .NET version 4.0. Still, so many people are currently learning .NET from him, so I want to know: are there any issues with the .NET version, and can I still learn from him?
A:
Yes, why not? You can learn basic and slightly advanced topics from this training course; by the way, very good details are covered in it. To complete the topics and bring them up to date, you can also use books or articles on newer versions of .NET to deepen and update your knowledge.
A:
Learning is an excellent habit. I have also personally learnt many concepts from Kudvenkat's YouTube videos; he provides slides and text versions as well.
He has created other playlists on ASP.NET Core, Web API, and other related topics. These tutorials are very good for beginners.
You can also learn some advanced topics from the internet (YouTube, Pluralsight, Udemy).
| Answer scores: 0, 0 | Tags: .net, asp.net, asp.net_mvc, asp.net_mvc_4, c# | Source: stackoverflow_0074673815_.net_asp.net_asp.net_mvc_asp.net_mvc_4_c#.txt |
Q:
Kivymd APK App (created with Buildozer) closes after opening up
I have created an APK file from Python Kivy & KivyMD, using Buildozer. When I open the app after installing it, it shows the splash image and then closes.
I have checked and found that there seems to be no issue in main.py, as I have correctly listed Kivy & KivyMD in the requirements in the buildozer.spec file (kivy==2.0.0, kivymd==0.104.1).
This is my code:
main.py
import kivymd
from kivymd.app import MDApp
from kivymd.uix.screen import Screen
from kivy.lang import Builder
from kivymd.uix.button import MDRectangleFlatButton, MDFlatButton
from kivymd.uix.dialog import MDDialog
import helper
import model
class DemoApp(MDApp):
def build(self):
self.theme_cls.primary_palette = "Green"
self.screen = Builder.load_string(helper.navigation_helper)
return self.screen
def show_data(self): #(self,obj):
self.abc = model.chat(self.screen.ids.user_name.text)
close_button = MDFlatButton(text='Close', on_release=self.close_dialog)
self.dialog = MDDialog(title='First-aid Suggested..', text=self.abc, size_hint=(0.7, 1), buttons=[close_button])
self.dialog.open()
def close_dialog(self, obj):
self.dialog.dismiss()
DemoApp().run()
model.py
import nltk
# nltk.download('punkt')
from nltk.stem.lancaster import LancasterStemmer
stemmer = LancasterStemmer()
import numpy
import random
import json
from keras.layers import *
from keras.models import *
with open("intents.json") as file:
data = json.load(file)
words = []
labels = []
docs_x = []
docs_y = []
for intent in data["intents"]:
for pattern in intent["patterns"]:
wrds = nltk.word_tokenize(pattern) # ['What', 'to', 'do', 'if', 'Cuts', '?']
words.extend(wrds)
docs_x.append(wrds) # input data (x)
docs_y.append(intent["tag"]) # corresponding output data (y)
if intent["tag"] not in labels:
labels.append(intent["tag"]) # all possible output data
words = [stemmer.stem(w.lower()) for w in words if w != "?"]
words = sorted(list(set(words)))
labels = sorted(labels)
training = []
output = []
out_empty = [0 for _ in range(len(labels))] # [1,2,3] [0,0,0]
for x, doc in enumerate(docs_x):
bag = []
wrds = [stemmer.stem(w) for w in doc] # doc = ['What', 'to', 'do', 'if', 'Cuts', '?'] & wrds = ['What', 'to', 'do', 'if', 'Cut', '?']
for w in words:
if w in wrds:
bag.append(1)
else:
bag.append(0)
output_row = out_empty[:]
output_row[labels.index(docs_y[x])] = 1
training.append(bag)
output.append(output_row)
from keras.models import load_model
model = load_model("First_Aid_model.h5")
def bag_of_words(s,words):
bag = [0 for _ in range(len(words))]
s_words = nltk.word_tokenize(s)
s_words = [stemmer.stem(word.lower()) for word in s_words]
for se in s_words:
for i, w in enumerate(words):
if w == se:
bag[i] = 1
return bag
def chat(inp):
results = model.predict([bag_of_words(inp,words)])
result = results[0]
results_index = numpy.argmax(result)
tag = labels[results_index]
if result[results_index] > 0.5:
for tg in data["intents"]:
if tg['tag'] == tag:
responses = tg['responses']
res = random.choice(responses).split('. ')
res = [res[_]+'.' for _ in range(len(res)) if not res[_].endswith('.')]
res = ('\n').join(res)
return(res + "\n")
else:
return("I didnt get that, try again")
helper.py
navigation_helper = """
Screen:
MDNavigationLayout:
ScreenManager:
Screen:
BoxLayout:
orientation: 'vertical'
MDToolbar:
title: "Navigation Drawer"
elevation: 10
left_action_items: [['menu', lambda x: nav_drawer.set_state('toggle')]]
Widget:
MDTextField:
id: user_name
hint_text: "Enter username"
helper_text: "or click on forgot username"
helper_text_mode: "on_focus"
icon_right: "redhat"
icon_right_color: app.theme_cls.primary_color
pos_hint:{'center_x': 0.5, 'center_y': 0.5}
size_hint_x:None
width:300
MDRectangleFlatButton:
text: "Show"
pos_hint: {'center_x': 0.5, 'center_y': 0.5}
on_release: app.show_data()
Widget:
MDNavigationDrawer:
id: nav_drawer
BoxLayout:
orientation: 'vertical'
padding: "8dp"
spacing: "8dp"
Image:
id: avatar
size_hint: (1,1)
source: "Capture.PNG"
MDLabel:
text: "First-aid Bot"
font_style: "Subtitle1"
size_hint_y: None
height: self.texture_size[1]
MDLabel:
text: "[email protected]"
size_hint_y: None
font_style: "Caption"
height: self.texture_size[1]
ScrollView:
MDList:
OneLineIconListItem:
text: "Profile"
IconLeftWidget:
icon: "face-profile"
OneLineIconListItem:
text: "Upload"
IconLeftWidget:
icon: "upload"
OneLineIconListItem:
text: "Logout"
IconLeftWidget:
icon: "logout"
"""
buildozer.spec
[app]
# (str) Title of your application
title = Bot
# (str) Package name
package.name = bot
# (str) Package domain (needed for android/ios packaging)
package.domain = org.bot
# (str) Source code where the main.py live
source.dir = .
# (list) Source files to include (let empty to include all the files)
source.include_exts = py,png,jpg,kv,atlas
# (list) List of inclusions using pattern matching
#source.include_patterns = assets/*,images/*.png
# (list) Source files to exclude (let empty to not exclude anything)
#source.exclude_exts = spec
# (list) List of directory to exclude (let empty to not exclude anything)
#source.exclude_dirs = tests, bin
# (list) List of exclusions using pattern matching
#source.exclude_patterns = license,images/*/*.jpg
# (str) Application versioning (method 1)
version = 0.1
# (str) Application versioning (method 2)
# version.regex = __version__ = ['"](.*)['"]
# version.filename = %(source.dir)s/main.py
# (list) Application requirements
# comma separated e.g. requirements = sqlite3,kivy
requirements = python3,kivy==2.0.0,kivymd==0.104.1
# (str) Custom source folders for requirements
# Sets custom source for any requirements with recipes
# requirements.source.kivy = ../../kivy
# (list) Garden requirements
#garden_requirements =
# (str) Presplash of the application
#presplash.filename = %(source.dir)s/data/presplash.png
# (str) Icon of the application
#icon.filename = %(source.dir)s/data/icon.png
# (str) Supported orientation (one of landscape, sensorLandscape, portrait or all)
orientation = portrait
# (list) List of service to declare
#services = NAME:ENTRYPOINT_TO_PY,NAME2:ENTRYPOINT2_TO_PY
#
# OSX Specific
#
#
# author = © Copyright Info
# change the major version of python used by the app
osx.python_version = 3
# Kivy version to use
osx.kivy_version = 1.9.1
#
# Android specific
#
# (bool) Indicate if the application should be fullscreen or not
fullscreen = 0
# (string) Presplash background color (for new android toolchain)
# Supported formats are: #RRGGBB #AARRGGBB or one of the following names:
# red, blue, green, black, white, gray, cyan, magenta, yellow, lightgray,
# darkgray, grey, lightgrey, darkgrey, aqua, fuchsia, lime, maroon, navy,
# olive, purple, silver, teal.
#android.presplash_color = #FFFFFF
# (list) Permissions
#android.permissions = INTERNET
# (int) Target Android API, should be as high as possible.
#android.api = 27
# (int) Minimum API your APK will support.
#android.minapi = 21
# (int) Android SDK version to use
#android.sdk = 20
# (str) Android NDK version to use
#android.ndk = 19b
# (int) Android NDK API to use. This is the minimum API your app will support, it should usually match android.minapi.
#android.ndk_api = 21
# (bool) Use --private data storage (True) or --dir public storage (False)
#android.private_storage = True
# (str) Android NDK directory (if empty, it will be automatically downloaded.)
#android.ndk_path =
# (str) Android SDK directory (if empty, it will be automatically downloaded.)
#android.sdk_path =
# (str) ANT directory (if empty, it will be automatically downloaded.)
#android.ant_path =
# (bool) If True, then skip trying to update the Android sdk
# This can be useful to avoid excess Internet downloads or save time
# when an update is due and you just want to test/build your package
# android.skip_update = False
# (bool) If True, then automatically accept SDK license
# agreements. This is intended for automation only. If set to False,
# the default, you will be shown the license when first running
# buildozer.
# android.accept_sdk_license = False
# (str) Android entry point, default is ok for Kivy-based app
#android.entrypoint = org.renpy.android.PythonActivity
# (str) Android app theme, default is ok for Kivy-based app
# android.apptheme = "@android:style/Theme.NoTitleBar"
# (list) Pattern to whitelist for the whole project
#android.whitelist =
# (str) Path to a custom whitelist file
#android.whitelist_src =
# (str) Path to a custom blacklist file
#android.blacklist_src =
# (list) List of Java .jar files to add to the libs so that pyjnius can access
# their classes. Don't add jars that you do not need, since extra jars can slow
# down the build process. Allows wildcards matching, for example:
# OUYA-ODK/libs/*.jar
#android.add_jars = foo.jar,bar.jar,path/to/more/*.jar
# (list) List of Java files to add to the android project (can be java or a
# directory containing the files)
#android.add_src =
# (list) Android AAR archives to add (currently works only with sdl2_gradle
# bootstrap)
#android.add_aars =
# (list) Gradle dependencies to add (currently works only with sdl2_gradle
# bootstrap)
#android.gradle_dependencies =
# (list) add java compile options
# this can for example be necessary when importing certain java libraries using the 'android.gradle_dependencies' option
# see https://developer.android.com/studio/write/java8-support for further information
# android.add_compile_options = "sourceCompatibility = 1.8", "targetCompatibility = 1.8"
# (list) Gradle repositories to add {can be necessary for some android.gradle_dependencies}
# please enclose in double quotes
# e.g. android.gradle_repositories = "maven { url 'https://kotlin.bintray.com/ktor' }"
#android.add_gradle_repositories =
# (list) packaging options to add
# see https://google.github.io/android-gradle-dsl/current/com.android.build.gradle.internal.dsl.PackagingOptions.html
# can be necessary to solve conflicts in gradle_dependencies
# please enclose in double quotes
# e.g. android.add_packaging_options = "exclude 'META-INF/common.kotlin_module'", "exclude 'META-INF/*.kotlin_module'"
#android.add_gradle_repositories =
# (list) Java classes to add as activities to the manifest.
#android.add_activities = com.example.ExampleActivity
# (str) OUYA Console category. Should be one of GAME or APP
# If you leave this blank, OUYA support will not be enabled
#android.ouya.category = GAME
# (str) Filename of OUYA Console icon. It must be a 732x412 png image.
#android.ouya.icon.filename = %(source.dir)s/data/ouya_icon.png
# (str) XML file to include as an intent filters in <activity> tag
#android.manifest.intent_filters =
# (str) launchMode to set for the main activity
#android.manifest.launch_mode = standard
# (list) Android additional libraries to copy into libs/armeabi
#android.add_libs_armeabi = libs/android/*.so
#android.add_libs_armeabi_v7a = libs/android-v7/*.so
#android.add_libs_arm64_v8a = libs/android-v8/*.so
#android.add_libs_x86 = libs/android-x86/*.so
#android.add_libs_mips = libs/android-mips/*.so
# (bool) Indicate whether the screen should stay on
# Don't forget to add the WAKE_LOCK permission if you set this to True
#android.wakelock = False
# (list) Android application meta-data to set (key=value format)
#android.meta_data =
# (list) Android library project to add (will be added in the
# project.properties automatically.)
#android.library_references =
# (list) Android shared libraries which will be added to AndroidManifest.xml using <uses-library> tag
#android.uses_library =
# (str) Android logcat filters to use
#android.logcat_filters = *:S python:D
# (bool) Copy library instead of making a libpymodules.so
#android.copy_libs = 1
# (str) The Android arch to build for, choices: armeabi-v7a, arm64-v8a, x86, x86_64
android.arch = armeabi-v7a
# (int) overrides automatic versionCode computation (used in build.gradle)
# this is not the same as app version and should only be edited if you know what you're doing
# android.numeric_version = 1
#
# Python for android (p4a) specific
#
# (str) python-for-android fork to use, defaults to upstream (kivy)
#p4a.fork = kivy
# (str) python-for-android branch to use, defaults to master
#p4a.branch = master
# (str) python-for-android git clone directory (if empty, it will be automatically cloned from github)
#p4a.source_dir =
# (str) The directory in which python-for-android should look for your own build recipes (if any)
#p4a.local_recipes =
# (str) Filename to the hook for p4a
#p4a.hook =
# (str) Bootstrap to use for android builds
# p4a.bootstrap = sdl2
# (int) port number to specify an explicit --port= p4a argument (eg for bootstrap flask)
#p4a.port =
#
# iOS specific
#
# (str) Path to a custom kivy-ios folder
#ios.kivy_ios_dir = ../kivy-ios
# Alternately, specify the URL and branch of a git checkout:
ios.kivy_ios_url = https://github.com/kivy/kivy-ios
ios.kivy_ios_branch = master
# Another platform dependency: ios-deploy
# Uncomment to use a custom checkout
#ios.ios_deploy_dir = ../ios_deploy
# Or specify URL and branch
ios.ios_deploy_url = https://github.com/phonegap/ios-deploy
ios.ios_deploy_branch = 1.7.0
# (str) Name of the certificate to use for signing the debug version
# Get a list of available identities: buildozer ios list_identities
#ios.codesign.debug = "iPhone Developer: <lastname> <firstname> (<hexstring>)"
# (str) Name of the certificate to use for signing the release version
#ios.codesign.release = %(ios.codesign.debug)s
[buildozer]
# (int) Log level (0 = error only, 1 = info, 2 = debug (with command output))
log_level = 2
# (int) Display warning if buildozer is run as root (0 = False, 1 = True)
warn_on_root = 1
# (str) Path to build artifact storage, absolute or relative to spec file
# build_dir = ./.buildozer
# (str) Path to build output (i.e. .apk, .ipa) storage
# bin_dir = ./bin
# -----------------------------------------------------------------------------
# List as sections
#
# You can define all the "list" as [section:key].
# Each line will be considered as a option to the list.
# Let's take [app] / source.exclude_patterns.
# Instead of doing:
#
#[app]
#source.exclude_patterns = license,data/audio/*.wav,data/images/original/*
#
# This can be translated into:
#
#[app:source.exclude_patterns]
#license
#data/audio/*.wav
#data/images/original/*
#
# -----------------------------------------------------------------------------
# Profiles
#
# You can extend section / key with a profile
# For example, you want to deploy a demo version of your application without
# HD content. You could first change the title to add "(demo)" in the name
# and extend the excluded directories to remove the HD content.
#
#[app@demo]
#title = My Application (demo)
#
#[app:source.exclude_patterns@demo]
#images/hd/*
#
# Then, invoke the command line with the "demo" profile:
#
#buildozer --profile demo android debug
I think the issue is that in the model.py file I have imported modules like Keras, NLTK, etc., and I am not mentioning them in the requirements.
If this is the issue, then please give the complete statement I should write in the requirements, according to my model.py and the other files.
Please guide me.
A:
If you have some other plugins, just add them like this:
# comma separated e.g. requirements = sqlite3,kivy
requirements = python3,kivy==2.0.0,kivymd==0.104.1,pluginname==version
A:
requirements = python3,kivy==2.0.0,kivymd==0.104.1,nltk,numpy,keras
That's all you need
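If the app still closes after rebuilding with the extended requirements, the device log usually shows the exact Python traceback; a sketch of how to read it (requires a connected device with adb):
# Rebuild with the new requirements, deploy, and stream the device log
buildozer android debug deploy run
buildozer android logcat | grep -i python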
# (int) Minimum API your APK will support.
#android.minapi = 21
# (int) Android SDK version to use
#android.sdk = 20
# (str) Android NDK version to use
#android.ndk = 19b
# (int) Android NDK API to use. This is the minimum API your app will support, it should usually match android.minapi.
#android.ndk_api = 21
# (bool) Use --private data storage (True) or --dir public storage (False)
#android.private_storage = True
# (str) Android NDK directory (if empty, it will be automatically downloaded.)
#android.ndk_path =
# (str) Android SDK directory (if empty, it will be automatically downloaded.)
#android.sdk_path =
# (str) ANT directory (if empty, it will be automatically downloaded.)
#android.ant_path =
# (bool) If True, then skip trying to update the Android sdk
# This can be useful to avoid excess Internet downloads or save time
# when an update is due and you just want to test/build your package
# android.skip_update = False
# (bool) If True, then automatically accept SDK license
# agreements. This is intended for automation only. If set to False,
# the default, you will be shown the license when first running
# buildozer.
# android.accept_sdk_license = False
# (str) Android entry point, default is ok for Kivy-based app
#android.entrypoint = org.renpy.android.PythonActivity
# (str) Android app theme, default is ok for Kivy-based app
# android.apptheme = "@android:style/Theme.NoTitleBar"
# (list) Pattern to whitelist for the whole project
#android.whitelist =
# (str) Path to a custom whitelist file
#android.whitelist_src =
# (str) Path to a custom blacklist file
#android.blacklist_src =
# (list) List of Java .jar files to add to the libs so that pyjnius can access
# their classes. Don't add jars that you do not need, since extra jars can slow
# down the build process. Allows wildcards matching, for example:
# OUYA-ODK/libs/*.jar
#android.add_jars = foo.jar,bar.jar,path/to/more/*.jar
# (list) List of Java files to add to the android project (can be java or a
# directory containing the files)
#android.add_src =
# (list) Android AAR archives to add (currently works only with sdl2_gradle
# bootstrap)
#android.add_aars =
# (list) Gradle dependencies to add (currently works only with sdl2_gradle
# bootstrap)
#android.gradle_dependencies =
# (list) add java compile options
# this can for example be necessary when importing certain java libraries using the 'android.gradle_dependencies' option
# see https://developer.android.com/studio/write/java8-support for further information
# android.add_compile_options = "sourceCompatibility = 1.8", "targetCompatibility = 1.8"
# (list) Gradle repositories to add {can be necessary for some android.gradle_dependencies}
# please enclose in double quotes
# e.g. android.gradle_repositories = "maven { url 'https://kotlin.bintray.com/ktor' }"
#android.add_gradle_repositories =
# (list) packaging options to add
# see https://google.github.io/android-gradle-dsl/current/com.android.build.gradle.internal.dsl.PackagingOptions.html
# can be necessary to solve conflicts in gradle_dependencies
# please enclose in double quotes
# e.g. android.add_packaging_options = "exclude 'META-INF/common.kotlin_module'", "exclude 'META-INF/*.kotlin_module'"
#android.add_gradle_repositories =
# (list) Java classes to add as activities to the manifest.
#android.add_activities = com.example.ExampleActivity
# (str) OUYA Console category. Should be one of GAME or APP
# If you leave this blank, OUYA support will not be enabled
#android.ouya.category = GAME
# (str) Filename of OUYA Console icon. It must be a 732x412 png image.
#android.ouya.icon.filename = %(source.dir)s/data/ouya_icon.png
# (str) XML file to include as an intent filters in <activity> tag
#android.manifest.intent_filters =
# (str) launchMode to set for the main activity
#android.manifest.launch_mode = standard
# (list) Android additional libraries to copy into libs/armeabi
#android.add_libs_armeabi = libs/android/*.so
#android.add_libs_armeabi_v7a = libs/android-v7/*.so
#android.add_libs_arm64_v8a = libs/android-v8/*.so
#android.add_libs_x86 = libs/android-x86/*.so
#android.add_libs_mips = libs/android-mips/*.so
# (bool) Indicate whether the screen should stay on
# Don't forget to add the WAKE_LOCK permission if you set this to True
#android.wakelock = False
# (list) Android application meta-data to set (key=value format)
#android.meta_data =
# (list) Android library project to add (will be added in the
# project.properties automatically.)
#android.library_references =
# (list) Android shared libraries which will be added to AndroidManifest.xml using <uses-library> tag
#android.uses_library =
# (str) Android logcat filters to use
#android.logcat_filters = *:S python:D
# (bool) Copy library instead of making a libpymodules.so
#android.copy_libs = 1
# (str) The Android arch to build for, choices: armeabi-v7a, arm64-v8a, x86, x86_64
android.arch = armeabi-v7a
# (int) overrides automatic versionCode computation (used in build.gradle)
# this is not the same as app version and should only be edited if you know what you're doing
# android.numeric_version = 1
#
# Python for android (p4a) specific
#
# (str) python-for-android fork to use, defaults to upstream (kivy)
#p4a.fork = kivy
# (str) python-for-android branch to use, defaults to master
#p4a.branch = master
# (str) python-for-android git clone directory (if empty, it will be automatically cloned from github)
#p4a.source_dir =
# (str) The directory in which python-for-android should look for your own build recipes (if any)
#p4a.local_recipes =
# (str) Filename to the hook for p4a
#p4a.hook =
# (str) Bootstrap to use for android builds
# p4a.bootstrap = sdl2
# (int) port number to specify an explicit --port= p4a argument (eg for bootstrap flask)
#p4a.port =
#
# iOS specific
#
# (str) Path to a custom kivy-ios folder
#ios.kivy_ios_dir = ../kivy-ios
# Alternately, specify the URL and branch of a git checkout:
ios.kivy_ios_url = https://github.com/kivy/kivy-ios
ios.kivy_ios_branch = master
# Another platform dependency: ios-deploy
# Uncomment to use a custom checkout
#ios.ios_deploy_dir = ../ios_deploy
# Or specify URL and branch
ios.ios_deploy_url = https://github.com/phonegap/ios-deploy
ios.ios_deploy_branch = 1.7.0
# (str) Name of the certificate to use for signing the debug version
# Get a list of available identities: buildozer ios list_identities
#ios.codesign.debug = "iPhone Developer: <lastname> <firstname> (<hexstring>)"
# (str) Name of the certificate to use for signing the release version
#ios.codesign.release = %(ios.codesign.debug)s
[buildozer]
# (int) Log level (0 = error only, 1 = info, 2 = debug (with command output))
log_level = 2
# (int) Display warning if buildozer is run as root (0 = False, 1 = True)
warn_on_root = 1
# (str) Path to build artifact storage, absolute or relative to spec file
# build_dir = ./.buildozer
# (str) Path to build output (i.e. .apk, .ipa) storage
# bin_dir = ./bin
# -----------------------------------------------------------------------------
# List as sections
#
# You can define all the "list" as [section:key].
# Each line will be considered as a option to the list.
# Let's take [app] / source.exclude_patterns.
# Instead of doing:
#
#[app]
#source.exclude_patterns = license,data/audio/*.wav,data/images/original/*
#
# This can be translated into:
#
#[app:source.exclude_patterns]
#license
#data/audio/*.wav
#data/images/original/*
#
# -----------------------------------------------------------------------------
# Profiles
#
# You can extend section / key with a profile
# For example, you want to deploy a demo version of your application without
# HD content. You could first change the title to add "(demo)" in the name
# and extend the excluded directories to remove the HD content.
#
#[app@demo]
#title = My Application (demo)
#
#[app:source.exclude_patterns@demo]
#images/hd/*
#
# Then, invoke the command line with the "demo" profile:
#
#buildozer --profile demo android debug
In my view, the issue is that in the model.py file I have imported modules like Keras, NLTK etc., and I am not mentioning them in the requirements.
If this is the issue, then please give the complete statement I should write in the requirements, according to my model.py & other files.
Please guide me.
| [
"If you have some other plugins just add them like this:\n# comma separated e.g. requirements = sqlite3,kivy\nrequirements = python3,kivy==2.0.0,kivymd==0.104.1,pluginname==version\n\n",
"requirements = python3,kivy==2.0.0,kivymd==0.104.1,nltk,numpy,keras\nThat's all you need\n"
] | [
1,
0
] | [] | [] | [
"buildozer",
"keras",
"kivy",
"kivymd",
"python"
] | stackoverflow_0069593107_buildozer_keras_kivy_kivymd_python.txt |
Q:
How to do as guide "Re-run Spring Boot Configuration Annotation Processor to update generated metadata"?
For example, I have file DataSourceConfig.java
package com.apress.demo;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.stereotype.Component;
@Component
@ConfigurationProperties(prefix = "jdbc")
public class DataSourceConfig {
private String driver;
private String url;
private String username;
private String password;
@Override
public String toString() {
return "DataSourceConfig [driver=" + driver + ", url=" + url + ", username=" + username + "]";
}
public String getDriver() {
return driver;
}
public void setDriver(String driver) {
this.driver = driver;
}
public String getUrl() {
return url;
}
public void setUrl(String url) {
this.url = url;
}
public String getUsername() {
return username;
}
public void setUsername(String username) {
this.username = username;
}
public String getPassword() {
return password;
}
public void setPassword(String password) {
this.password = password;
}
}
How do I follow this notice?
A:
It is just a reminder that configuration properties will not be available until you make changes and the annotation processor is re-run
You can enable annotation processors in IntelliJ via the following:
Click on File
Click on Settings
In the little search box in the upper-left hand corner, search for "Annotation Processors"
Check "Enable annotation processing"
Click OK
or
add dependency in pom.xml
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-configuration-processor</artifactId>
<optional>true</optional>
</dependency>
Configuring the Annotation Processor
and add the @Configuration annotation
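A minimal sketch, reusing the class from the question — after adding the processor dependency, a rebuild re-runs the annotation processor and regenerates META-INF/spring-configuration-metadata.json:
@Configuration
@ConfigurationProperties(prefix = "jdbc")
public class DataSourceConfig {
    // fields and getters/setters as in the question;
    // rebuilding the project (Build > Rebuild Project) refreshes the metadata
}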
| How to do as guide "Re-run Spring Boot Configuration Annotation Processor to update generated metadata"? | For example, I have file DataSourceConfig.java
package com.apress.demo;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.stereotype.Component;
@Component
@ConfigurationProperties(prefix = "jdbc")
public class DataSourceConfig {
private String driver;
private String url;
private String username;
private String password;
@Override
public String toString() {
return "DataSourceConfig [driver=" + driver + ", url=" + url + ", username=" + username + "]";
}
public String getDriver() {
return driver;
}
public void setDriver(String driver) {
this.driver = driver;
}
public String getUrl() {
return url;
}
public void setUrl(String url) {
this.url = url;
}
public String getUsername() {
return username;
}
public void setUsername(String username) {
this.username = username;
}
public String getPassword() {
return password;
}
public void setPassword(String password) {
this.password = password;
}
}
How do I follow this notice?
| [
"It is just a reminder that configuration properties will not be available until you Make changes and annotation processor is re-run\nYou can enable annotation processors in IntelliJ via the following:\n\nClick on File\nClick on Settings\nIn the little search box in the upper-left hand corner, search for \"Annotation Processors\"\nCheck \"Enable annotation processing\"\nClick OK\n\n\nor\nadd dependency in pom.xml\n<dependency>\n <groupId>org.springframework.boot</groupId>\n <artifactId>spring-boot-configuration-processor</artifactId>\n <optional>true</optional>\n</dependency> \n\nConfiguring the Annotation Processor\nand add anatation @Configuration\n"
] | [
1
] | [] | [] | [
"intellij_idea",
"java",
"spring",
"spring_boot",
"spring_boot_3"
] | stackoverflow_0074672505_intellij_idea_java_spring_spring_boot_spring_boot_3.txt |
Q:
PokeApi Graph QL how to get varieties?
In the PokeApi REST API Pokemon varieties are returned inside the species model on the varieties field at the endpoint https://pokeapi.co/api/v2/pokemon-species/{id or name}/
However I can't seem to find them in the new GraphQL console https://beta.pokeapi.co/graphql/console/
anybody know where this model lives?
A:
Found it
query MyQuery {
pokemon_v2_pokemon(where: {id: {_eq: 6}}) {
id
pokemon_v2_pokemonspecy {
pokemon_v2_pokemons {
pokemon_v2_pokemonforms {
name
}
}
}
}
}
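Note that pokemon_v2_pokemonforms returns forms; the REST API's varieties correspond to the pokemon_v2_pokemons list under the species itself. A query like the following may map more directly (field names assumed from the same beta schema):
query Varieties {
  pokemon_v2_pokemonspecies(where: {id: {_eq: 6}}) {
    name
    pokemon_v2_pokemons {
      name
      is_default
    }
  }
}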
| PokeApi Graph QL how to get varieties? | In the PokeApi REST API Pokemon varieties are returned inside the species model on the varieties field at the endpoint https://pokeapi.co/api/v2/pokemon-species/{id or name}/
However I can't seem to find them in the new GraphQL console https://beta.pokeapi.co/graphql/console/
anybody know where this model lives?
| [
"Found it\nquery MyQuery {\n pokemon_v2_pokemon(where: {id: {_eq: 6}}) {\n id\n pokemon_v2_pokemonspecy {\n pokemon_v2_pokemons {\n pokemon_v2_pokemonforms {\n name\n }\n }\n }\n }\n}\n\n"
] | [
0
] | [] | [] | [
"graphql",
"pokeapi"
] | stackoverflow_0074672263_graphql_pokeapi.txt |
Q:
Get private element outside the class
First of all it's an exercise given to me so i can't change things and have to work with it.
I have a 2d vector aka a matrix. My header file looks like this
#include <vector>
#include <iostream>
using namespace std;
class Matrix{
private:
vector<vector<double>> 2d;
public:
explicit Matrix(unsigned int sizeY=0,unsigned int sizeX=0,double value= 0.0);
~Matrix() = default;
Matrix(const Matrix &other);
Matrix(Matrix &&other) = default;
Matrix& operator=(const Matrix &other);
Matrix& operator=(Matrix &&other) = default;
//other + - operators
//INDEX
vector<double>& at(unsigned int i);
const vector<double>& at(unsigned int i)const;
const vector<double>& operator[] (double m) const;
vector<double>& operator[] (double m);
};
Matrix operator+(const Matrix& d1, const Matrix& d2);
Matrix operator-(const Matrix& d1, const Matrix& d2);
ostream& operator<<(ostream &o, const Matrix& v);
istream& operator>>(istream &i, Matrix& v);
So now I implemented everything except the << and >> operator.
Now the question: if I want to iterate over the 2d vector matrix, is there another way to get the "depth"
outside the Matrix class, apart from writing a getter?
If the Matrix is N x M, e.g. 4x4, I can get the second 4, the "width", with something like 2d[0].size(), but I can't figure out how to get the "depth" other than using a getter.
Also, I can't change 2d to public or use templates.
I tried for around 2-3 hours myself and couldn't find any solution; maybe it's not possible under the given conditions.
A:
2d[0].size() gives you the length of the first vector stored in 2d. To get the number of rows (the "depth") you call size() directly on 2d:
2d.size() = length of 2d (number of rows)
2d[0].size() = length of the first vector stored in 2d (number of columns)
Two caveats: 2d is not actually a legal C++ identifier (identifiers cannot start with a digit), and since the member is private, 2d.size() only works inside the class. Outside the class, with the interface as given, you can still discover the depth through the public at() accessor: if it delegates to vector::at, it throws std::out_of_range once the index passes the last row.
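A minimal sketch for operator<< built on that idea, assuming Matrix::at delegates to vector::at and therefore throws std::out_of_range past the last row (needs <stdexcept>):
ostream& operator<<(ostream &o, const Matrix& v){
    try {
        for (unsigned int i = 0; ; ++i) {            // no getter for the depth...
            for (double x : v.at(i)) o << x << ' ';  // ...so probe rows via at()
            o << '\n';
        }
    } catch (const std::out_of_range&) {
        // one past the last row: stop printing
    }
    return o;
}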
| Get private element outside the class | First of all it's an exercise given to me so i can't change things and have to work with it.
I have a 2d vector aka a matrix. My header file looks like this
#include <vector>
#include <iostream>
using namespace std;
class Matrix{
private:
vector<vector<double>> 2d;
public:
explicit Matrix(unsigned int sizeY=0,unsigned int sizeX=0,double value= 0.0);
~Matrix() = default;
Matrix(const Matrix &other);
Matrix(Matrix &&other) = default;
Matrix& operator=(const Matrix &other);
Matrix& operator=(Matrix &&other) = default;
//other + - operators
//INDEX
vector<double>& at(unsigned int i);
const vector<double>& at(unsigned int i)const;
const vector<double>& operator[] (double m) const;
vector<double>& operator[] (double m);
};
Matrix operator+(const Matrix& d1, const Matrix& d2);
Matrix operator-(const Matrix& d1, const Matrix& d2);
ostream& operator<<(ostream &o, const Matrix& v);
istream& operator>>(istream &i, Matrix& v);
So now I implemented everything except the << and >> operator.
Now the question: if I want to iterate over the 2d vector matrix, is there another way to get the "depth"
outside the Matrix class, apart from writing a getter?
If the Matrix is N x M, e.g. 4x4, I can get the second 4, the "width", with something like 2d[0].size(), but I can't figure out how to get the "depth" other than using a getter.
Also, I can't change 2d to public or use templates.
I tried for around 2-3 hours myself and couldn't find any solution; maybe it's not possible under the given conditions.
| [
"2d[0].size() is giving you the length of the first vector stored in 2d. To get the length of 2d you can just call the same directly on 2d.\n2d.size() = length of 2d\n2d[0].size() = length of first vector stored in 2d\n"
] | [
0
] | [] | [] | [
"c++",
"encapsulation",
"scope"
] | stackoverflow_0074672391_c++_encapsulation_scope.txt |
Q:
React native gradlew clean failed
1 configuration failure
A problem occurred configuring project ':app'.
Illegal character in authority at index 7: file://C:\Users\jains\OneDrive\Desktop[PROJECT-NAME]\node_modules\react-native/android
I recently upgraded the RN version from 0.64.4 to the latest, 0.71.0-rc.1. Now I am facing this issue.
A:
The issue was in 0.71.0-rc.1 and been resolved in 0.71.0-rc.3.
https://github.com/facebook/react-native/issues/35472
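A sketch of the corresponding upgrade, assuming you manage the dependency with npm:
npm install react-native@0.71.0-rc.3
cd android
gradlew clean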
| React native gradlew clean failed | 1 configuration failure
A problem occurred configuring project ':app'.
Illegal character in authority at index 7: file://C:\Users\jains\OneDrive\Desktop[PROJECT-NAME]\node_modules\react-native/android
I recently upgraded the RN version from 0.64.4 to the latest, 0.71.0-rc.1. Now I am facing this issue.
| [
"The issue was in 0.71.0-rc.1 and been resolved in 0.71.0-rc.3.\nhttps://github.com/facebook/react-native/issues/35472\n"
] | [
0
] | [] | [] | [
"gradlew",
"react_native"
] | stackoverflow_0074556823_gradlew_react_native.txt |
Q:
Not able to login to azure using powershell when executing powershell via an API
I am trying to execute a powershell script via an API. I am connecting to azure using
**Connect-AzAccount -ServicePrincipal ** with correct credentials. But it is not able to login to azure.
Script I am running -
Install-Module -Name Az -Scope CurrentUser -Repository PSGallery -Force
Write-Output "starting ************" | Out-File "test.txt" -Append
$SecureStringPwd = $secret | ConvertTo-SecureString -AsPlainText -Force
$pscredential = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $accountid, $SecureStringPwd
#Connect-AzAccount -ServicePrincipal -Credential $pscredential -Tenant $tenantid | Out-File "test.txt" -Append
$context = Get-AzContext
if (!$context) {
Write-Output "not logged in"| Out-File "test.txt" -Append
}
It works fine when running locally as powershell command. Could someone please help on this?
I have tried giving the installation command
Install-Module -Name Az -Scope CurrentUser -Repository PSGallery -Force
as part of the script but nothing works.
A:
There are a few potential issues that could be causing the problem with your script:
Make sure that the service principal you are using has the correct permissions to login to Azure. You can check this by running the following command:
Get-AzRoleAssignment -ServicePrincipalName <service principal name>
If the service principal does not have the correct permissions, you will need to grant it the necessary permissions using the New-AzRoleAssignment command.
If you are running the script in an Azure Function, make sure that you have enabled the Azure PowerShell runbook worker in the function app settings. This is required in order for the function to execute PowerShell scripts.
Check the Azure PowerShell version that you are using. The Az module requires PowerShell version 5.1 or later. You can check your PowerShell version by running the following command:
$PSVersionTable.PSVersion
If your PowerShell version is too old, install a newer PowerShell release; note that Update-Module only updates installed modules (such as PowerShellGet), not PowerShell itself:
Update-Module -Name PowerShellGet -Force
Finally, make sure that the script is running in the correct PowerShell context. If you are running the script in an Azure Function, make sure that it is running in the Azure PowerShell runbook worker context. You can check this by looking for the "AzureRunAsConnection" and "AzureRMContext" variables in the script, which should be set automatically when running in the Azure PowerShell runbook worker context. If they are not set, you may need to manually set them using the following code:
$connectionName = "<your AzureRunAsConnection name>"
$servicePrincipalConnection = Get-AutomationConnection -Name $connectionName
Connect-AzAccount -ServicePrincipal -TenantId $servicePrincipalConnection.TenantId -ApplicationId $servicePrincipalConnection.ApplicationId -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint
$AzureRMContext = Set-AzContext -SubscriptionId $servicePrincipalConnection.SubscriptionId
I hope this helps! Let me know if you have any other questions.
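For reference, a minimal service-principal login that surfaces errors instead of failing silently (variable names taken from the question's script):
$SecureStringPwd = ConvertTo-SecureString $secret -AsPlainText -Force
$pscredential = New-Object System.Management.Automation.PSCredential($accountid, $SecureStringPwd)
try {
    Connect-AzAccount -ServicePrincipal -Credential $pscredential -Tenant $tenantid -ErrorAction Stop | Out-Null
} catch {
    $_ | Out-File "test.txt" -Append   # log the real failure reason
}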
| Not able to login to azure using powershell when executing powershell via an API | I am trying to execute a powershell script via an API. I am connecting to azure using
Connect-AzAccount -ServicePrincipal with correct credentials, but it is not able to log in to Azure.
Script I am running -
Install-Module -Name Az -Scope CurrentUser -Repository PSGallery -Force
Write-Output "starting ************" | Out-File "test.txt" -Append
$SecureStringPwd = $secret | ConvertTo-SecureString -AsPlainText -Force
$pscredential = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $accountid, $SecureStringPwd
#Connect-AzAccount -ServicePrincipal -Credential $pscredential -Tenant $tenantid | Out-File "test.txt" -Append
$context = Get-AzContext
if (!$context) {
Write-Output "not logged in"| Out-File "test.txt" -Append
}
It works fine when running locally as powershell command. Could someone please help on this?
I have tried giving the installation command
Install-Module -Name Az -Scope CurrentUser -Repository PSGallery -Force
as part of the script but nothing works.
| [
"There are a few potential issues that could be causing the problem with your script:\nMake sure that the service principal you are using has the correct permissions to login to Azure. You can check this by running the following command:\nGet-AzRoleAssignment -ServicePrincipalName <service principal name>\n\nIf the service principal does not have the correct permissions, you will need to grant it the necessary permissions using the Add-AzRoleAssignment command.\nIf you are running the script in an Azure Function, make sure that you have enabled the Azure PowerShell runbook worker in the function app settings. This is required in order for the function to execute PowerShell scripts.\nCheck the Azure PowerShell version that you are using. The Az module requires PowerShell version 5.1 or later. You can check your PowerShell version by running the following command:\n$PSVersionTable.PSVersion\n\nIf your PowerShell version is not high enough, you can update it by running the following command:\nUpdate-Module -Name PowerShellGet -Force\n\nFinally, make sure that the script is running in the correct PowerShell context. If you are running the script in an Azure Function, make sure that it is running in the Azure PowerShell runbook worker context. You can check this by looking for the \"AzureRunAsConnection\" and \"AzureRMContext\" variables in the script, which should be set automatically when running in the Azure PowerShell runbook worker context. If they are not set, you may need to manually set them using the following code:\n$connectionName = \"<your AzureRunAsConnection name>\"\n$servicePrincipalConnection = Get-AutomationConnection -Name $connectionName\nAdd-AzureRMAccount -ServicePrincipal -TenantId $servicePrincipalConnection.TenantId -ApplicationId $servicePrincipalConnection.ApplicationId -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint\n$AzureRMContext = Select-AzureRMSubscription -SubscriptionId $servicePrincipalConnection.SubscriptionId\n\nI hope this helps! Let me know if you have any other questions.\n"
] | [
0
] | [] | [] | [
"azure",
"azure_powershell",
"powershell"
] | stackoverflow_0074672765_azure_azure_powershell_powershell.txt |
Q:
Segregation based on the list
I have a list:
list = ['aaazzz0', 'aaazzz1', 'aaazzz2', 'aaazzz3', 'aaazzz4', 'aaazzz5', 'bbbzzz0', 'bbbzzz0', 'bbbzzz1','bbbzzz2','bbbzzz3','bbbzzz4','bbbxxx0','bbbxxx1','bbbxxx2','bbbxxx3']
my desired state is:
{
"aaa": {
"zzz": [
"aaazzz0",
"aaazzz1",
"aaazzz2",
"aaazzz3",
"aaazzz4",
"aaazzz5"
]
},
"bbb": {
"zzz": [
"bbbzzz0",
"bbbzzz1",
"bbbzzz2",
"bbbzzz3",
"bbbzzz4"
],
"xxx": [
"bbbxxx0",
"bbbxxx1",
"bbbxxx2",
"bbbxxx3"
]
}
}
my code is:
import re, json
list = ['aaazzz0', 'aaazzz1', 'aaazzz2', 'aaazzz3', 'aaazzz4', 'aaazzz5', 'bbbzzz0', 'bbbzzz0', 'bbbzzz1','bbbzzz2','bbbzzz3','bbbzzz4','bbbxxx0','bbbxxx1','bbbxxx2','bbbxxx3']
regex = '^.{3}([a-z]{3})'
all_dict = {}
a = {}
a_list = []
b = {}
b_list = []
for item in list:
end_match = re.findall(regex, item)[0]
aaa_match = re.search('aaa', item)
bbb_match = re.search('bbb', item)
for suffix in end_match:
if aaa_match:
a[suffix] = []
a_list.append(item)
a[suffix] = a_list
elif bbb_match:
b[suffix] = []
b_list.append(item)
b[suffix] = b_list
all_dict["aaa"] = a
all_dict["bbb"] = b
print(json.dumps(all_dict,indent=4))
so the zzz, xxx keys I need to generate dynamically based on what is in the list element; it's always characters 4-6.
this code generates the following output, which is not what I'm looking for
{
"aaa": {
"z": [
"aaazzz0",
"aaazzz0",
"aaazzz0",
"aaazzz1",
"aaazzz1",
"aaazzz1",
"aaazzz2",
"aaazzz2",
"aaazzz2",
"aaazzz3",
"aaazzz3",
"aaazzz3",
"aaazzz4",
"aaazzz4",
"aaazzz4",
"aaazzz5",
"aaazzz5",
"aaazzz5"
]
},
"bbb": {
"z": [
"bbbzzz0",
"bbbzzz0",
"bbbzzz0",
"bbbzzz0",
"bbbzzz0",
"bbbzzz0",
"bbbzzz1",
"bbbzzz1",
"bbbzzz1",
"bbbzzz2",
"bbbzzz2",
"bbbzzz2",
"bbbzzz3",
"bbbzzz3",
"bbbzzz3",
"bbbzzz4",
"bbbzzz4",
"bbbzzz4",
"bbbxxx0",
"bbbxxx0",
"bbbxxx0",
"bbbxxx1",
"bbbxxx1",
"bbbxxx1",
"bbbxxx2",
"bbbxxx2",
"bbbxxx2",
"bbbxxx3",
"bbbxxx3",
"bbbxxx3"
],
"x": [
"bbbzzz0",
"bbbzzz0",
"bbbzzz0",
"bbbzzz0",
"bbbzzz0",
"bbbzzz0",
"bbbzzz1",
"bbbzzz1",
"bbbzzz1",
"bbbzzz2",
"bbbzzz2",
"bbbzzz2",
"bbbzzz3",
"bbbzzz3",
"bbbzzz3",
"bbbzzz4",
"bbbzzz4",
"bbbzzz4",
"bbbxxx0",
"bbbxxx0",
"bbbxxx0",
"bbbxxx1",
"bbbxxx1",
"bbbxxx1",
"bbbxxx2",
"bbbxxx2",
"bbbxxx2",
"bbbxxx3",
"bbbxxx3",
"bbbxxx3"
]
}
}
I'm stuck... with my code
A:
dict.setdefault(key, default) returns the value already stored under key, inserting default first if the key is absent, so two chained calls build the nested structure in a single pass:
foo = ['aaazzz0', 'aaazzz1', 'aaazzz2', 'aaazzz3', 'aaazzz4', 'aaazzz5', 'bbbzzz0', 'bbbzzz0', 'bbbzzz1','bbbzzz2','bbbzzz3','bbbzzz4','bbbxxx0','bbbxxx1','bbbxxx2','bbbxxx3']
output = {}
for s in foo:
output.setdefault(s[:3], {}).setdefault(s[3:6], []).append(s)
print(output) # {'aaa': {'zzz': ['aaazzz0', 'aaazzz1', 'aaazzz2', 'aaazzz3', 'aaazzz4', 'aaazzz5']}, 'bbb': {'zzz': ['bbbzzz0', 'bbbzzz0', 'bbbzzz1', 'bbbzzz2', 'bbbzzz3', 'bbbzzz4'], 'xxx': ['bbbxxx0', 'bbbxxx1', 'bbbxxx2', 'bbbxxx3']}}
A:
I think this code will do the trick:
lst = ['aaazzz0', 'aaazzz1', 'aaazzz2', 'aaazzz3', 'aaazzz4', 'aaazzz5', 'bbbzzz0', 'bbbzzz0', 'bbbzzz1', 'bbbzzz2',
'bbbzzz3', 'bbbzzz4', 'bbbxxx0', 'bbbxxx1', 'bbbxxx2', 'bbbxxx3']
new_dict_result = {"aaa": {},
"bbb": {}
}
for item in lst:
first_letter = item[0]
first_three_letters = first_letter * 3
last_three_letters = item[3:6]
new_dict_result[first_three_letters].setdefault(last_three_letters, [])
if item not in new_dict_result[first_three_letters][last_three_letters]:
new_dict_result[first_three_letters][last_three_letters].append(item)
print(new_dict_result)
Good luck !
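An equivalent design using collections.defaultdict avoids the explicit setdefault calls; the only catch is converting back to plain dicts before json.dumps:
from collections import defaultdict
import json

foo = ['aaazzz0', 'aaazzz1', 'bbbzzz0', 'bbbxxx0']  # shortened sample
result = defaultdict(lambda: defaultdict(list))
for s in foo:
    result[s[:3]][s[3:6]].append(s)

# convert the nested defaultdicts back to plain dicts for serialization
plain = {k: dict(v) for k, v in result.items()}
print(json.dumps(plain, indent=4))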
| Segregation based on the list | I have a list:
list = ['aaazzz0', 'aaazzz1', 'aaazzz2', 'aaazzz3', 'aaazzz4', 'aaazzz5', 'bbbzzz0', 'bbbzzz0', 'bbbzzz1','bbbzzz2','bbbzzz3','bbbzzz4','bbbxxx0','bbbxxx1','bbbxxx2','bbbxxx3']
my desired state is:
{
"aaa": {
"zzz": [
"aaazzz0",
"aaazzz1",
"aaazzz2",
"aaazzz3",
"aaazzz4",
"aaazzz5"
]
},
"bbb": {
"zzz": [
"bbbzzz0",
"bbbzzz1",
"bbbzzz2",
"bbbzzz3",
"bbbzzz4"
],
"xxx": [
"bbbxxx0",
"bbbxxx1",
"bbbxxx2",
"bbbxxx3"
]
}
}
my code is:
import re, json
list = ['aaazzz0', 'aaazzz1', 'aaazzz2', 'aaazzz3', 'aaazzz4', 'aaazzz5', 'bbbzzz0', 'bbbzzz0', 'bbbzzz1','bbbzzz2','bbbzzz3','bbbzzz4','bbbxxx0','bbbxxx1','bbbxxx2','bbbxxx3']
regex = '^.{3}([a-z]{3})'
all_dict = {}
a = {}
a_list = []
b = {}
b_list = []
for item in list:
end_match = re.findall(regex, item)[0]
aaa_match = re.search('aaa', item)
bbb_match = re.search('bbb', item)
for suffix in end_match:
if aaa_match:
a[suffix] = []
a_list.append(item)
a[suffix] = a_list
elif bbb_match:
b[suffix] = []
b_list.append(item)
b[suffix] = b_list
all_dict["aaa"] = a
all_dict["bbb"] = b
print(json.dumps(all_dict,indent=4))
so the zzz, xxx keys I need to generate dynamically based on what is in the list element; it's always characters 4-6.
this code generates the following output, which is not what I'm looking for
{
"aaa": {
"z": [
"aaazzz0",
"aaazzz0",
"aaazzz0",
"aaazzz1",
"aaazzz1",
"aaazzz1",
"aaazzz2",
"aaazzz2",
"aaazzz2",
"aaazzz3",
"aaazzz3",
"aaazzz3",
"aaazzz4",
"aaazzz4",
"aaazzz4",
"aaazzz5",
"aaazzz5",
"aaazzz5"
]
},
"bbb": {
"z": [
"bbbzzz0",
"bbbzzz0",
"bbbzzz0",
"bbbzzz0",
"bbbzzz0",
"bbbzzz0",
"bbbzzz1",
"bbbzzz1",
"bbbzzz1",
"bbbzzz2",
"bbbzzz2",
"bbbzzz2",
"bbbzzz3",
"bbbzzz3",
"bbbzzz3",
"bbbzzz4",
"bbbzzz4",
"bbbzzz4",
"bbbxxx0",
"bbbxxx0",
"bbbxxx0",
"bbbxxx1",
"bbbxxx1",
"bbbxxx1",
"bbbxxx2",
"bbbxxx2",
"bbbxxx2",
"bbbxxx3",
"bbbxxx3",
"bbbxxx3"
],
"x": [
"bbbzzz0",
"bbbzzz0",
"bbbzzz0",
"bbbzzz0",
"bbbzzz0",
"bbbzzz0",
"bbbzzz1",
"bbbzzz1",
"bbbzzz1",
"bbbzzz2",
"bbbzzz2",
"bbbzzz2",
"bbbzzz3",
"bbbzzz3",
"bbbzzz3",
"bbbzzz4",
"bbbzzz4",
"bbbzzz4",
"bbbxxx0",
"bbbxxx0",
"bbbxxx0",
"bbbxxx1",
"bbbxxx1",
"bbbxxx1",
"bbbxxx2",
"bbbxxx2",
"bbbxxx2",
"bbbxxx3",
"bbbxxx3",
"bbbxxx3"
]
}
}
I'm stuck... with my code
| [
"foo = ['aaazzz0', 'aaazzz1', 'aaazzz2', 'aaazzz3', 'aaazzz4', 'aaazzz5', 'bbbzzz0', 'bbbzzz0', 'bbbzzz1','bbbzzz2','bbbzzz3','bbbzzz4','bbbxxx0','bbbxxx1','bbbxxx2','bbbxxx3']\noutput = {}\nfor s in foo:\n output.setdefault(s[:3], {}).setdefault(s[3:6], []).append(s)\nprint(output) # {'aaa': {'zzz': ['aaazzz0', 'aaazzz1', 'aaazzz2', 'aaazzz3', 'aaazzz4', 'aaazzz5']}, 'bbb': {'zzz': ['bbbzzz0', 'bbbzzz0', 'bbbzzz1', 'bbbzzz2', 'bbbzzz3', 'bbbzzz4'], 'xxx': ['bbbxxx0', 'bbbxxx1', 'bbbxxx2', 'bbbxxx3']}}\n\n",
"I think this code will do the trick:\nlst = ['aaazzz0', 'aaazzz1', 'aaazzz2', 'aaazzz3', 'aaazzz4', 'aaazzz5', 'bbbzzz0', 'bbbzzz0', 'bbbzzz1', 'bbbzzz2',\n 'bbbzzz3', 'bbbzzz4', 'bbbxxx0', 'bbbxxx1', 'bbbxxx2', 'bbbxxx3']\n\nnew_dict_result = {\"aaa\": {},\n \"bbb\": {}\n }\n\nfor item in lst:\n first_letter = item[0]\n first_three_letters = first_letter * 3\n last_three_letters = item[3:6]\n new_dict_result[first_three_letters].setdefault(last_three_letters, [])\n if item not in new_dict_result[first_three_letters][last_three_letters]:\n new_dict_result[first_three_letters][last_three_letters].append(item)\n\nprint(new_dict_result)\n\nGood luck !\n"
] | [
1,
1
] | [] | [] | [
"python_3.x",
"python_re"
] | stackoverflow_0074674225_python_3.x_python_re.txt |
Q:
How to disable x-frame-options for a Chrome instance?
I run Chrome with the flags --disable-web-security --user-data-dir in order to disable the same origin policy and run some tests, and it really allows me to make JS post requests to some external URIs.
But when I try to include an iframe with an external URL as src in my HTML page, I get the following error message: "Refused to display 'https://trap-your-trip.com/search' in a frame because it set 'X-Frame-Options' to 'sameorigin'."
Any way to pass it without installing any extensions? (Maybe another flag)
| How to disable x-frame-options for a Chrome instance? | I run Chrome with the flags --disable-web-security --user-data-dir in order to disable the same origin policy and run some tests, and it really allows me to make JS post requests to some external URIs.
But when I try to include an iframe with an external URL as src in my HTML page, I get the following error message: "Refused to display 'https://trap-your-trip.com/search' in a frame because it set 'X-Frame-Options' to 'sameorigin'."
Any way to pass it without installing any extensions? (Maybe another flag)
| [] | [] | [
"The error message \"Refused to display 'https://trap-your-trip.com/search' in a frame because it set X-Frame-Options to sameorigin\" is due to the X-Frame-Options response header being set to sameorigin on the https://trap-your-trip.com/search URL. This response header is used by the server to prevent the URL from being displayed in a frame or iframe on a different origin.\nTo fix this problem, you can try using the --disable-web-security and --user-data-dir flags with the chrome.exe executable, instead of running Chrome with these flags. To do this, you will need to locate the chrome.exe executable on your computer, and then create a shortcut to it on your desktop. Right-click on the shortcut, select \"Properties\", and then add the flags --disable-web-security and --user-data-dir to the end of the \"Target\" field, separated by a space.\nFor example, the \"Target\" field might look like this:\n\"C:\\Program Files (x86)\\Google\\Chrome\\Application\\chrome.exe\" --disable-web-security --user-data-dir\n\nAfter adding the flags to the shortcut, you can run Chrome using this shortcut to disable the same origin policy and allow the iframe to display the external URL. However, keep in mind that disabling the same origin policy can make your computer vulnerable to security risks, so it is not recommended to use these flags in a production environment. It is only intended for use in testing and development.\nAnswered by openAI chat\n"
] | [
-1
] | [
"google_chrome",
"javascript"
] | stackoverflow_0048562890_google_chrome_javascript.txt |
Q:
How do I pull values from another branch onto my local branch?
I am new to git and I am running into the following issue- I created a branch 'Test' from develop. I am trying to pull changes from branch 'Working' (which is also based on develop) onto my local branch. I don't want any of my changes to affect develop or 'Working'.
How would I do this?
I am not sure whether this is a merge or a rebase.
A:
Either will work, but the result is different.
git merge working will create a single, new merge commit which combines the history of both branches. The commits are shared by both branches.
git rebase test working^{commit} && git push . HEAD:test && git checkout test will rebase, i.e. copy, the commits on working to test. The commits are now duplicated and separate versions of the commits exist in both branches.
The third option is to cherry-pick, which is similar to rebase, but switches source and destination.
git cherry-pick test..working will cherry-pick, i.e. copy, the commits on working and apply them to your test branch. The commits are duplicated and separate in both branches.
History after merge:
D-E -- working
/ \
A-B-C \ -- development
\ \
F-G-----M -- test (M = merge commit)
History after rebase or cherry-pick:
D-E -- working
/
A-B-C -- development
\
F-G-C'-D'-E' -- test (C', D', E' = copied commits)
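For the common case — you are on Test and want Working's changes without touching develop or Working — a minimal sequence (lowercase branch names as in the diagrams) is:
git checkout test
git merge working              # option 1: merge commit M
# or, to copy the commits instead:
git cherry-pick test..working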
| How do I pull values from another branch onto my local branch? | I am new to git and I am running into the following issue- I created a branch 'Test' from develop. I am trying to pull changes from branch 'Working' (which is also based on develop) onto my local branch. I don't want any of my changes to affect develop or 'Working'.
How would I do this?
I am not sure whether this is a merge or a rebase.
| [
"Either will work, but the result is different.\n\ngit merge working will create a single, new merge commit which combines the history of both branches. The commits are shared by both branches.\ngit rebase test working^{commit} && git push . HEAD:test && git checkout test will rebase, i.e. copy, the commits on working to test. The commits are now duplicated and separate versions of the commits exist in both branches.\n\nThe third option is to cherry-pick, which is similar to rebase, but switches source and destination.\n\ngit cherry-pick test..working will cherry-pick, i.e. copy, the commits on working and apply them to your test branch. The commits are duplicated and separate in both branches.\n\nHistory after merge:\n D-E -- working\n / \\\nA-B-C \\ -- development\n \\ \\\n F-G-----M -- test (M = merge commit)\n\nHistory after rebase or cherry-pick:\n D-E -- working\n /\nA-B-C -- development\n \\ \n F-G-C'-D'-E' -- test (C', D', E' = copied commits)\n\n"
] | [
0
] | [] | [] | [
"git",
"git_merge",
"git_rebase"
] | stackoverflow_0074658321_git_git_merge_git_rebase.txt |
Q:
Rest API Request Body Parameters to be Encrypted?
I have a Rest API which is authenticated via OAUth Access token. The request body parameters posted to the API contains critical information like Customers Mobile Number and OTP code. My client is very concerned on the security. So, my question is should I ask client to encrypt these parameter values and submit to the API? I have gone through many articles and did not find anything relevant encouraging to encrypt request body data.
A:
Yes, you could encrypt the body data. The question is whether you should do that. We usually do threat modeling to answer questions like this. I'd say, unless your application shoots missiles, TLS should be sufficient to ensure confidentiality and integrity of the whole HTTP payload. Unless you need defense in depth, this mitigation should be enough.
A:
You could add RSA encryption on your side. RSA means you have a private key and a public key; anything that is encrypted with the public key can be decrypted using the private key only.
So in your case you would generate an RSA key pair and tell your client to encrypt the data using your PUBLIC key. In practice, because RSA can only encrypt small payloads, the body is usually encrypted with a random symmetric key that is itself RSA-encrypted (hybrid encryption).
This is how credit card companies send requests; you can read about it here:
https://developer.mastercard.com/platform/documentation/security-and-authentication/securing-sensitive-data-using-payload-encryption/
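A minimal .NET sketch of the idea — one in-memory key pair stands in for both sides here; in production the client loads only your published public key, and larger bodies are encrypted with an RSA-wrapped AES key:
using System;
using System.Security.Cryptography;
using System.Text;

class PayloadEncryptionDemo
{
    static void Main()
    {
        using var rsa = RSA.Create(2048);                                      // server generates the pair
        byte[] plain = Encoding.UTF8.GetBytes("{\"mobile\":\"5550100\",\"otp\":\"123456\"}");
        byte[] cipher = rsa.Encrypt(plain, RSAEncryptionPadding.OaepSHA256);   // client side: public key
        byte[] round = rsa.Decrypt(cipher, RSAEncryptionPadding.OaepSHA256);   // server side: private key
        Console.WriteLine(Encoding.UTF8.GetString(round));
    }
}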
| Rest API Request Body Parameters to be Encrypted? | I have a Rest API which is authenticated via OAUth Access token. The request body parameters posted to the API contains critical information like Customers Mobile Number and OTP code. My client is very concerned on the security. So, my question is should I ask client to encrypt these parameter values and submit to the API? I have gone through many articles and did not find anything relevant encouraging to encrypt request body data.
| [
"Yes, you could encrypt the body data. The question is if you should do that. We usually do threat modeling to answer questions like this. I'd say, unless your application shots missiles TLS should be sufficient to ensure confidentiality and integrity of the whole HTTP payload. As long as you don't do defense in depth, this mitigation should be sufficient.\n",
"You should define RSA encryption in your part. RSA means you have a private key and a public key, anything that is encrypted with the public key can be decrypted using the private key only.\nSo in your case you should generate a RSA key pair and tell to your client to encrypt the data using your PUBLIC key. This is the most secure way exists today.\nThis is how credit cards companies send requests, you can read about it here:\nhttps://developer.mastercard.com/platform/documentation/security-and-authentication/securing-sensitive-data-using-payload-encryption/\n"
] | [
0,
0
] | [] | [] | [
"api",
"asp.net_web_api",
"encryption",
"security"
] | stackoverflow_0073412491_api_asp.net_web_api_encryption_security.txt |
Q:
How to test and mock property wrappers in Swift?
Let's say I have a very common use case for a property wrapper using UserDefaults.
@propertyWrapper
struct DefaultsStorage<Value> {
private let key: String
private let storage: UserDefaults
var wrappedValue: Value? {
get {
guard let value = storage.value(forKey: key) as? Value else {
return nil
}
return value
}
nonmutating set {
storage.setValue(newValue, forKey: key)
}
}
init(key: String, storage: UserDefaults = .standard) {
self.key = key
self.storage = storage
}
}
I am now declaring an object that would hold all my values stored in UserDefaults.
struct UserDefaultsStorage {
@DefaultsStorage(key: "userName")
var userName: String?
}
Now when I want to use it somewhere, let's say in a view model, I would have something like this.
final class ViewModel {
func getUserName() -> String? {
UserDefaultsStorage().userName
}
}
A few questions arise here.
It seems that I am obliged to use .standard user defaults in this case. How to test that view model using other/mocked instance of UserDefaults?
How to test that property wrapper using other/mocked instance of UserDefaults? Do I have to create a new type that is a clean copy of the above's DefaultsStorage, pass mocked UserDefaults and test that object?
struct TestUserDefaultsStorage {
@DefaultsStorage(key: "userName", storage: UserDefaults(suiteName: #file)!)
var userName: String?
}
A:
As @mat already mentioned in the comments, you need a protocol to mock UserDefaults dependency. Something like this will do:
protocol UserDefaultsStorage {
func value(forKey key: String) -> Any?
func setValue(_ value: Any?, forKey key: String)
}
extension UserDefaults: UserDefaultsStorage {}
Then you can change your DefaultsStorage propertyWrapper to use a UserDefaultsStorage reference instead of UserDefaults:
@propertyWrapper
struct DefaultsStorage<Value> {
private let key: String
private let storage: UserDefaultsStorage
var wrappedValue: Value? {
get {
return storage.value(forKey: key) as? Value
}
nonmutating set {
storage.setValue(newValue, forKey: key)
}
}
init(key: String, storage: UserDefaultsStorage = UserDefaults.standard) {
self.key = key
self.storage = storage
}
}
After that a mock UserDefaultsStorage might look like this:
class UserDefaultsStorageMock: UserDefaultsStorage {
var values: [String: Any]
init(values: [String: Any] = [:]) {
self.values = values
}
func value(forKey key: String) -> Any? {
return values[key]
}
func setValue(_ value: Any?, forKey key: String) {
values[key] = value
}
}
And to test DefaultsStorage, pass an instance of UserDefaultsStorageMock as its storage parameter:
import XCTest
class DefaultsStorageTests: XCTestCase {
class TestUserDefaultsStorage {
@DefaultsStorage(
key: "userName",
storage: UserDefaultsStorageMock(values: ["userName": "TestUsername"])
)
var userName: String?
}
func test_userName() {
let testUserDefaultsStorage = TestUserDefaultsStorage()
XCTAssertEqual(testUserDefaultsStorage.userName, "TestUsername")
}
}
A:
This might not be the best solution; I haven't figured out a way to inject UserDefaults that use property wrappers into a ViewModel. If there is such an option, then gcharita's proposal to use another protocol would be a good one to implement.
I used the same UserDefaults in the test class as in the ViewModel. I save the original values before each test and restore them after each test.
class ViewModelTests: XCTestCase {
private lazy var userDefaults = newUserDefaults()
private var preTestsInitialValues: PreTestsInitialValues!
override func setUpWithError() throws {
savePreTestUserDefaults()
}
override func tearDownWithError() throws {
restoreUserDefaults()
}
private func newUserDefaults() -> UserDefaults.Type {
return UserDefaults.self
}
private func savePreTestUserDefaults() {
preTestsInitialValues = PreTestsInitialValues(userName: userDefaults.userName)
}
private func restoreUserDefaults() {
userDefaults.userName = preTestsInitialValues.userName
}
func testUsername() throws {
//"inject" User Defaults with the desired values
let username = "No one"
userDefaults.userName = username
let viewModel = ViewModel()
let usernameFromViewModel = viewModel.getUserName()
XCTAssertEqual(username, usernameFromViewModel)
}
}
struct PreTestsInitialValues {
let userName: String?
}
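For completeness, the userDefaults.userName accessor used above is not shown in the answer; a minimal sketch that makes it compile (a computed static property, since extensions cannot hold stored or wrapped properties):
extension UserDefaults {
    static var userName: String? {
        get { standard.string(forKey: "userName") }
        set { standard.setValue(newValue, forKey: "userName") }
    }
}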
| How to test and mock property wrappers in Swift? | Let's say I have a very common use case for a property wrapper using UserDefaults.
@propertyWrapper
struct DefaultsStorage<Value> {
private let key: String
private let storage: UserDefaults
var wrappedValue: Value? {
get {
guard let value = storage.value(forKey: key) as? Value else {
return nil
}
return value
}
nonmutating set {
storage.setValue(newValue, forKey: key)
}
}
init(key: String, storage: UserDefaults = .standard) {
self.key = key
self.storage = storage
}
}
I am now declaring an object that would hold all my values stored in UserDefaults.
struct UserDefaultsStorage {
@DefaultsStorage(key: "userName")
var userName: String?
}
Now when I want to use it somewhere, let's say in a view model, I would have something like this.
final class ViewModel {
func getUserName() -> String? {
UserDefaultsStorage().userName
}
}
A few questions arise here.
It seems that I am obliged to use .standard user defaults in this case. How to test that view model using other/mocked instance of UserDefaults?
How to test that property wrapper using other/mocked instance of UserDefaults? Do I have to create a new type that is a clean copy of the above's DefaultsStorage, pass mocked UserDefaults and test that object?
struct TestUserDefaultsStorage {
@DefaultsStorage(key: "userName", storage: UserDefaults(suiteName: #file)!)
var userName: String?
}
| [
"As @mat already mentioned in the comments, you need a protocol to mock UserDefaults dependency. Something like this will do:\nprotocol UserDefaultsStorage {\n func value(forKey key: String) -> Any?\n func setValue(_ value: Any?, forKey key: String)\n}\n\nextension UserDefaults: UserDefaultsStorage {}\n\nThen you can change your DefaultsStorage propertyWrapper to use a UserDefaultsStorage reference instead of UserDefaults:\n@propertyWrapper\nstruct DefaultsStorage<Value> {\n private let key: String\n private let storage: UserDefaultsStorage\n\n var wrappedValue: Value? {\n get {\n return storage.value(forKey: key) as? Value\n }\n nonmutating set {\n storage.setValue(newValue, forKey: key)\n }\n }\n\n init(key: String, storage: UserDefaultsStorage = UserDefaults.standard) {\n self.key = key\n self.storage = storage\n }\n}\n\nAfter that a mock UserDefaultsStorage might look like this:\nclass UserDefaultsStorageMock: UserDefaultsStorage {\n var values: [String: Any]\n\n init(values: [String: Any] = [:]) {\n self.values = values\n }\n\n func value(forKey key: String) -> Any? {\n return values[key]\n }\n\n func setValue(_ value: Any?, forKey key: String) {\n values[key] = value\n }\n}\n\nAnd to test DefaultsStorage, pass an instance of UserDefaultsStorageMock as its storage parameter:\nimport XCTest\n\nclass DefaultsStorageTests: XCTestCase {\n class TestUserDefaultsStorage {\n @DefaultsStorage(\n key: \"userName\",\n storage: UserDefaultsStorageMock(values: [\"userName\": \"TestUsername\"])\n )\n var userName: String?\n }\n \n func test_userName() {\n let testUserDefaultsStorage = TestUserDefaultsStorage()\n \n XCTAssertEqual(testUserDefaultsStorage.userName, \"TestUsername\")\n }\n}\n\n",
"This might not be the best solution, however, I haven't figured out a way to inject UserDefaults that use property wrappers into a ViewModel. If there is such an option, then gcharita's proposal to use another protocol would be a good one to implement.\nI used the same UserDefaults in the test class as in the ViewModel. I save the original values before each test and restore them after each test.\nclass ViewModelTests: XCTestCase {\n \n private lazy var userDefaults = newUserDefaults()\n private var preTestsInitialValues: PreTestsInitialValues!\n\n override func setUpWithError() throws {\n savePreTestUserDefaults()\n }\n \n override func tearDownWithError() throws {\n restoreUserDefaults()\n }\n \n private func newUserDefaults() -> UserDefaults.Type {\n return UserDefaults.self\n }\n \n private func savePreTestUserDefaults() {\n preTestsInitialValues = PreTestsInitialValues(userName: userDefaults.userName)\n }\n \n private func restoreUserDefaults() {\n userDefaults.userName = preTestsInitialValues.userName\n }\n\n func testUsername() throws {\n //\"inject\" User Defaults with the desired values\n let username = \"No one\"\n userDefaults.userName = username\n \n let viewModel = ViewModel()\n let usernameFromViewModel = viewModel.getUserName()\n XCTAssertEqual(username, usernameFromViewModel)\n }\n}\n\nstruct PreTestsInitialValues {\n let userName: String?\n}\n\n"
] | [
3,
0
] | [] | [] | [
"ios",
"property_wrapper",
"swift"
] | stackoverflow_0065187507_ios_property_wrapper_swift.txt |
Q:
Only root-level navigation destinations are effective for a navigation stack with a homogeneous path
I am trying to integrate NavigationStack in my SwiftUI app, I have four views CealUIApp, OnBoardingView, UserTypeView and RegisterView. I want to navigate from OnBoardingView to UserTypeView when user presses a button in OnBoardingView and from UserTypeView to RegisterView when user presses a button in UserTypeView
Below is my code for CealUIApp
@main
struct CealUIApp: App {
@State private var path = [String]()
var body: some Scene {
WindowGroup {
NavigationStack(path: $path){
OnBoardingView(path: $path)
}
}
}
}
In OnBoardingView
Button {
path.append("UserTypeView")
} label: {
Text("Hello")
}.navigationDestination(for: String.self) { string in
UserTypeView(path: $path)
}
In UserTypeView
Button {
path.append("RegisterView")
} label: {
Text("Hello")
}.navigationDestination(for: String.self) { string in
RegisterView()
}
When button in UserTypeView is pressed I keep getting navigated to UserTypeView instead of RegisterView with msg in Xcode logs saying Only root-level navigation destinations are effective for a navigation stack with a homogeneous path.
A:
You can get rid of Only root-level navigation destinations are effective for a navigation stack with a homogeneous path by changing the path type to NavigationPath.
@State private var path: NavigationPath = .init()
But then you get a message/error that I think explains the issue better A navigationDestination for “Swift.String” was declared earlier on the stack. Only the destination declared closest to the root view of the stack will be used.
Apple has decided that scanning all available views is very inefficient, so the navigationDestination declared closest to the root takes priority.
Just imagine if your OnBoardingView also had an option for "RegisterView"
.navigationDestination(for: String.self) { string in
switch string{
case "UserTypeView":
UserTypeView(path: $path)
case "RegisterView":
Text("fakie register view")
default:
Text("No view has been set for \(string)")
}
}
How would SwiftUI pick the right one?
So how to "fix"? You can try this alternative.
import SwiftUI
@available(iOS 16.0, *)
struct CealUIApp: View {
@State private var path: NavigationPath = .init()
var body: some View {
NavigationStack(path: $path){
OnBoardingView(path: $path)
.navigationDestination(for: ViewOptions.self) { option in
option.view($path)
}
}
}
//Create an `enum` so you can define your options
enum ViewOptions{
case userTypeView
case register
//Assign each case with a `View`
@ViewBuilder func view(_ path: Binding<NavigationPath>) -> some View{
switch self{
case .userTypeView:
UserTypeView(path: path)
case .register:
RegisterView()
}
}
}
}
@available(iOS 16.0, *)
struct OnBoardingView: View {
@Binding var path: NavigationPath
var body: some View {
Button {
//Append to the path the enum value
path.append(CealUIApp.ViewOptions.userTypeView)
} label: {
Text("Hello")
}
}
}
@available(iOS 16.0, *)
struct UserTypeView: View {
@Binding var path: NavigationPath
var body: some View {
Button {
//Append to the path the enum value
path.append(CealUIApp.ViewOptions.register)
} label: {
Text("Hello")
}
}
}
@available(iOS 16.0, *)
struct RegisterView: View {
var body: some View {
Text("Register")
}
}
@available(iOS 16.0, *)
struct CealUIApp_Previews: PreviewProvider {
static var previews: some View {
CealUIApp()
}
}
A:
Following @lorem ipsum's example, I think you can replace the state variable @State private var path: NavigationPath = .init() with an @ObservableObject so you don't need to pass @Bindings to all the views. You just pass it down from the CealUIApp view as an EnvironmentObject. (The class is named NavigationStore below so it does not shadow SwiftUI's own NavigationStack type.)
class NavigationStore: ObservableObject {
    @Published var paths: NavigationPath = .init()
}
@main
struct CealUIApp: App {
    let navStore = NavigationStore()
    var body: some Scene {
        WindowGroup {
            AppEntry()
                .environmentObject(navStore)
        }
    }
}
extension CealUIApp {
    enum ScreenDestinations {
        case userTypeView
        case registerView
        //Assign each case with a `View`
        @ViewBuilder func view(_ path: Binding<NavigationPath>) -> some View {
            switch self {
            case .userTypeView:
                UserTypeView()
            case .registerView:
                RegisterView()
            }
        }
    }
}
// An extra view after the AppView
struct AppEntry: View {
    @EnvironmentObject var navStore: NavigationStore
    var body: some View {
        NavigationStack(path: $navStore.paths) {
            OnBoardingView()
                .navigationDestination(for: CealUIApp.ScreenDestinations.self) {
                    $0.view($navStore.paths)
                }
        }
    }
}
And then the rest remains the same as @lorem ipsum said.
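For completeness, a hypothetical sketch (not from the original answer) of how a child view would then push a destination through the environment object, assuming the types above:
@available(iOS 16.0, *)
struct OnBoardingView: View {
    @EnvironmentObject var navStack: NavigationStack
    var body: some View {
        Button("Hello") {
            // Append the enum case instead of a raw string
            navStack.paths.append(CealUIApp.ScreenDestinations.userTypeView)
        }
    }
}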
| Only root-level navigation destinations are effective for a navigation stack with a homogeneous path | I am trying to integrate NavigationStack in my SwiftUI app, I have four views CealUIApp, OnBoardingView, UserTypeView and RegisterView. I want to navigate from OnBoardingView to UserTypeView when user presses a button in OnBoardingView and from UserTypeView to RegisterView when user presses a button in UserTypeView
Below is my code for CealUIApp
@main
struct CealUIApp: App {
@State private var path = [String]()
var body: some Scene {
WindowGroup {
NavigationStack(path: $path){
OnBoardingView(path: $path)
}
}
}
}
In OnBoardingView
Button {
path.append("UserTypeView")
} label: {
Text("Hello")
}.navigationDestination(for: String.self) { string in
UserTypeView(path: $path)
}
In UserTypeView
Button {
path.append("RegisterView")
} label: {
Text("Hello")
}.navigationDestination(for: String.self) { string in
RegisterView()
}
When the button in UserTypeView is pressed I keep getting navigated to UserTypeView instead of RegisterView, with a message in the Xcode logs saying Only root-level navigation destinations are effective for a navigation stack with a homogeneous path.
| [
"You can get rid of Only root-level navigation destinations are effective for a navigation stack with a homogeneous path by changing the path type to NavigationPath.\n@State private var path: NavigationPath = .init()\n\nBut then you get a message/error that I think explains the issue better A navigationDestination for “Swift.String” was declared earlier on the stack. Only the destination declared closest to the root view of the stack will be used.\nApple has decided that scanning all views that are available is very inefficient so they will use the navigationDestination will take priority.\nJust imagine if your OnBoardingView also had an option for \"RegisterView\"\n .navigationDestination(for: String.self) { string in\n switch string{\n case \"UserTypeView\":\n UserTypeView(path: $path)\n case \"RegisterView\":\n Text(\"fakie register view\")\n default:\n Text(\"No view has been set for \\(string)\")\n }\n \n }\n\nHow would SwiftUI pick the right one?\nSo how to \"fix\"? You can try this alternative.\nimport SwiftUI\n\n@available(iOS 16.0, *)\nstruct CealUIApp: View {\n @State private var path: NavigationPath = .init()\n var body: some View {\n NavigationStack(path: $path){\n OnBoardingView(path: $path)\n .navigationDestination(for: ViewOptions.self) { option in\n option.view($path)\n }\n }\n }\n //Create an `enum` so you can define your options\n enum ViewOptions{\n case userTypeView\n case register\n //Assign each case with a `View`\n @ViewBuilder func view(_ path: Binding<NavigationPath>) -> some View{\n switch self{\n case .userTypeView:\n UserTypeView(path: path)\n case .register:\n RegisterView()\n }\n }\n }\n}\n@available(iOS 16.0, *)\nstruct OnBoardingView: View {\n @Binding var path: NavigationPath\n var body: some View {\n Button {\n //Append to the path the enum value\n path.append(CealUIApp.ViewOptions.userTypeView)\n } label: {\n Text(\"Hello\")\n }\n \n }\n}\n@available(iOS 16.0, *)\nstruct UserTypeView: View {\n @Binding var path: NavigationPath\n var body: some View {\n Button {\n //Append to the path the enum value\n path.append(CealUIApp.ViewOptions.register)\n } label: {\n Text(\"Hello\")\n }\n \n }\n}\n@available(iOS 16.0, *)\nstruct RegisterView: View {\n var body: some View {\n Text(\"Register\")\n \n }\n}\n@available(iOS 16.0, *)\nstruct CealUIApp_Previews: PreviewProvider {\n static var previews: some View {\n CealUIApp()\n }\n}\n\n",
"Following @lorem ipsum example I think you can change this state variable @State private var path: NavigationPath = .init() with an @ObservableObject so you don't need to pass @Bindings on all the views. You just pass it down from the CealUIApp view as an WnvironmentObject\n\nclass NavigationStack: ObservableObject {\n @Published var paths: NavigationPath = .init()\n}\n\n\n@main\nstruct CealUIApp: App {\n \n let navstack = NavigationStack()\n \n var body: some Scene {\n WindowGroup {\n AppEntry()\n .environmentObject(navstack)\n }\n }\n}\n\nextension CealUIApp {\n enum ScreenDestinations {\n case userTypeView\n case registerView\n \n \n //Assign each case with a `View`\n @ViewBuilder func view(_ path: Binding<NavigationPath>) -> some View {\n switch self{\n case .permissions:\n UserTypeView()\n case .seedPhrase:\n RegisterView()\n }\n }\n }\n\n}\n\n\n// An extra view after the AppView\n\nstruct AppEntry: View {\n \n @EnvironmentObject var navStack: NavigationStack\n \n var body: some View {\n NavigationStack(path: $navStack.paths) {\n OnBoardingView()\n .navigationDestination(for: CealUIApp.ScreenDestinations.self) {\n $0.view($navStack.paths)\n }\n }\n }\n}\n\n\nAnd then the rest remain the same as @lorem ipsum said.\n"
] | [
1,
0
] | [] | [] | [
"ios",
"swift",
"swiftui"
] | stackoverflow_0074362455_ios_swift_swiftui.txt |
Q:
The email address is badly formatted problem in Firebase
I am using Firebase Auth and Firestore in my app, and I am using TextInputLayout for the Login and Register screens, but I get an exception saying "The email address is badly formatted" when I try to add a new user from the Register screen. I searched but couldn't find any Kotlin question about this problem. I'll leave my code below. I'm waiting for your help. Happy coding :)
LoginActivity.kt
class LoginActivity : AppCompatActivity() {
private val db = Firebase.firestore.collection("users")
private val auth = Firebase.auth
private lateinit var binding: ActivityLoginBinding
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
binding = DataBindingUtil.setContentView(this, R.layout.activity_login)
binding.tvRegister.setOnClickListener {
val dialog = BottomSheetDialog(this@LoginActivity)
val view = layoutInflater.inflate(R.layout.bottom_sheet_layout, null)
dialog.setContentView(view)
val etName = view.findViewById<TextInputLayout>(R.id.etName).toString()
val etSurname = view.findViewById<TextInputLayout>(R.id.etSurname).toString()
val etMail = view.findViewById<TextInputLayout>(R.id.etRegisterEmail).toString()
val etPassword = view.findViewById<TextInputLayout>(R.id.etRegisterPassword).toString()
val etHeight = view.findViewById<TextInputLayout>(R.id.etHeight).toString()
val etWeight = view.findViewById<TextInputLayout>(R.id.etWeight).toString()
val btnRegister = view.findViewById<Button>(R.id.btnRegister)
btnRegister.setOnClickListener {
val user = hashMapOf<Any, String>(
"name" to etName,
"surname" to etSurname,
"email" to etMail,
"password" to etPassword,
"height" to etHeight,
"weight" to etWeight
)
if (etName.isNotEmpty() && etSurname.isNotEmpty() && etMail.isNotEmpty() && etPassword.isNotEmpty()) {
registerUser(etMail,etPassword,user)
} else {
Toast.makeText(
this@LoginActivity,
"You have to fill blanks",
Toast.LENGTH_SHORT
).show()
}
}
dialog.show()
}
}
private fun registerUser(email: String, password: String, user: HashMap<Any, String>) {
CoroutineScope(Dispatchers.IO).launch {
try {
auth.createUserWithEmailAndPassword(email, password)
.addOnSuccessListener {
db.document(auth.currentUser?.email.toString()).set(user)
.addOnSuccessListener {
Toast.makeText(this@LoginActivity, "Welcome", Toast.LENGTH_LONG)
.show()
checkLogged()
}
.addOnFailureListener {
Toast.makeText(this@LoginActivity, it.message, Toast.LENGTH_LONG)
.show()
}
}.await()
} catch (e: java.lang.Exception) {
withContext(Dispatchers.Main){
Toast.makeText(this@LoginActivity, e.message, Toast.LENGTH_LONG).show()
}
}
}
}
private fun checkLogged() {
if (auth.currentUser != null) {
startActivity(Intent(this@LoginActivity, MainActivity::class.java))
finish()
} else {
auth.signOut()
}
}
}
bottom_sheet_layout.xml (Register Screen)
<layout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto">
<androidx.constraintlayout.widget.ConstraintLayout
android:layout_width="match_parent"
android:layout_height="wrap_content">
<TextView
android:id="@+id/tvRegisterTitle"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginTop="16dp"
android:fontFamily="monospace"
android:text="Register"
android:textColor="@color/primaryDarkColor"
android:textSize="36sp"
android:textStyle="bold"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toTopOf="parent" />
<com.google.android.material.textfield.TextInputLayout
android:id="@+id/etName"
style="@style/Widget.MaterialComponents.TextInputLayout.OutlinedBox"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_marginHorizontal="48dp"
android:layout_marginTop="24dp"
android:hint="@string/name"
app:errorEnabled="true"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toBottomOf="@id/tvRegisterTitle">
<com.google.android.material.textfield.TextInputEditText
android:layout_width="match_parent"
android:layout_height="match_parent"
android:inputType="textPersonName"
android:textColorHint="@color/primaryDarkColor" />
</com.google.android.material.textfield.TextInputLayout>
<com.google.android.material.textfield.TextInputLayout
android:id="@+id/etSurname"
style="@style/Widget.MaterialComponents.TextInputLayout.OutlinedBox"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_marginHorizontal="48dp"
android:layout_marginTop="8dp"
android:hint="@string/surname"
app:errorEnabled="true"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toBottomOf="@id/etName">
<com.google.android.material.textfield.TextInputEditText
android:layout_width="match_parent"
android:layout_height="match_parent"
android:inputType="textPersonName"
android:textColorHint="@color/primaryDarkColor" />
</com.google.android.material.textfield.TextInputLayout>
<com.google.android.material.textfield.TextInputLayout
android:id="@+id/etRegisterEmail"
style="@style/Widget.MaterialComponents.TextInputLayout.OutlinedBox"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_marginHorizontal="48dp"
android:layout_marginTop="8dp"
android:hint="@string/e_mail"
app:endIconMode="clear_text"
app:endIconTint="@color/secondaryColor"
app:errorEnabled="true"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toBottomOf="@id/etSurname">
<com.google.android.material.textfield.TextInputEditText
android:layout_width="match_parent"
android:layout_height="match_parent"
android:inputType="textPersonName"
android:textColorHint="@color/secondaryDarkColor" />
</com.google.android.material.textfield.TextInputLayout>
<com.google.android.material.textfield.TextInputLayout
android:id="@+id/etRegisterPassword"
style="@style/Widget.MaterialComponents.TextInputLayout.OutlinedBox"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_marginHorizontal="48dp"
android:layout_marginTop="8dp"
android:hint="@string/password"
app:endIconMode="password_toggle"
app:endIconTint="@color/secondaryColor"
app:errorEnabled="true"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toBottomOf="@id/etRegisterEmail">
<com.google.android.material.textfield.TextInputEditText
android:layout_width="match_parent"
android:layout_height="match_parent"
android:inputType="textPassword"
android:textColorHint="@color/secondaryDarkColor" />
</com.google.android.material.textfield.TextInputLayout>
<com.google.android.material.textfield.TextInputLayout
android:id="@+id/etHeight"
style="@style/Widget.MaterialComponents.TextInputLayout.OutlinedBox"
android:layout_width="110dp"
android:layout_height="80dp"
android:layout_marginTop="8dp"
android:hint="@string/height"
app:errorEnabled="true"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toEndOf="@id/etWeight"
app:layout_constraintTop_toBottomOf="@id/etRegisterPassword">
<com.google.android.material.textfield.TextInputEditText
android:layout_width="match_parent"
android:layout_height="match_parent"
android:inputType="number"
android:textColorHint="@color/secondaryDarkColor" />
</com.google.android.material.textfield.TextInputLayout>
<com.google.android.material.textfield.TextInputLayout
android:id="@+id/etWeight"
style="@style/Widget.MaterialComponents.TextInputLayout.OutlinedBox"
android:layout_width="110dp"
android:layout_height="80dp"
android:layout_marginTop="8dp"
android:hint="@string/weight"
app:errorEnabled="true"
app:layout_constraintEnd_toStartOf="@id/etHeight"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toBottomOf="@id/etRegisterPassword">
<com.google.android.material.textfield.TextInputEditText
android:layout_width="match_parent"
android:layout_height="match_parent"
android:inputType="number"
android:textColorHint="@color/secondaryDarkColor" />
</com.google.android.material.textfield.TextInputLayout>
<Button
android:id="@+id/btnRegister"
style="?attr/materialButtonOutlinedStyle"
android:layout_width="250dp"
android:layout_height="wrap_content"
android:layout_marginVertical="16sp"
android:backgroundTint="@color/primaryColor"
android:text="@string/register"
android:textColor="@color/secondaryTextColor"
app:layout_constraintBottom_toTopOf="@+id/imageView2"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toBottomOf="@id/etWeight" />
<ImageView
android:id="@+id/imageView2"
android:layout_width="70dp"
android:layout_height="70dp"
android:rotation="26"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintEnd_toStartOf="@+id/imageView3"
app:layout_constraintStart_toStartOf="parent"
app:srcCompat="@drawable/broccoli_png" />
<ImageView
android:id="@+id/imageView3"
android:layout_width="70dp"
android:layout_height="70dp"
android:rotation="26"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintEnd_toStartOf="@+id/imageView4"
app:layout_constraintStart_toEndOf="@+id/imageView2"
app:srcCompat="@drawable/broccoli_png" />
<ImageView
android:id="@+id/imageView4"
android:layout_width="70dp"
android:layout_height="70dp"
android:rotation="26"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toEndOf="@+id/imageView3"
app:srcCompat="@drawable/broccoli_png" />
</androidx.constraintlayout.widget.ConstraintLayout>
A:
Use this:
auth.currentUser?.email.toString().trim()
Don't forget to add trim()
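For context, a minimal sketch (an assumption, not from the original answer) of applying trim() when reading the field text via editText, using the same view IDs as in the question:
val etMail = view.findViewById<TextInputLayout>(R.id.etRegisterEmail)
    .editText?.text.toString().trim() // read the typed text, not the view's toString()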
A:
I still haven't solved the problem, but I changed some things in the code:
1- Changed the hash map to a data class named User
2- I defined etMail like:
val etMail =view.findViewById<TextInputLayout>(R.id.etRegisterEmail).editText?.text.toString()
3- And I moved the property definitions inside btnRegister.setOnClickListener
Final Version of Code
class LoginActivity : AppCompatActivity() {
private val db = Firebase.firestore.collection("users")
private val auth = Firebase.auth
private lateinit var binding: ActivityLoginBinding
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
binding = DataBindingUtil.setContentView(this, R.layout.activity_login)
binding.tvRegister.setOnClickListener {
val dialog = BottomSheetDialog(this@LoginActivity)
val view = layoutInflater.inflate(R.layout.bottom_sheet_layout, null)
dialog.setContentView(view)
val btnRegister = view.findViewById<Button>(R.id.btnRegister)
btnRegister.setOnClickListener {
val etName =
view.findViewById<TextInputLayout>(R.id.etName).editText?.text.toString()
val etSurname =
view.findViewById<TextInputLayout>(R.id.etSurname).editText?.text.toString()
val etMail =
view.findViewById<TextInputLayout>(R.id.etRegisterEmail).editText?.text.toString()
val etPassword =
view.findViewById<TextInputLayout>(R.id.etRegisterPassword).editText?.text.toString()
val etHeight =
view.findViewById<TextInputLayout>(R.id.etHeight).editText?.text.toString()
val etWeight =
view.findViewById<TextInputLayout>(R.id.etWeight).editText?.text.toString()
val user = User(etName, etSurname, etMail, etHeight, etWeight)
registerUser(etMail, etPassword, user)
}
dialog.show()
}
}
private fun registerUser(email: String, password: String, user: User) {
if(email.isNotEmpty()&&password.isNotEmpty()){
CoroutineScope(Dispatchers.IO).launch {
try {
auth.createUserWithEmailAndPassword(email, password)
.addOnSuccessListener {
db.document(auth.currentUser?.uid.toString()).set(user)
checkLogged()
Toast.makeText(this@LoginActivity,"Welcome",Toast.LENGTH_SHORT).show()
}.await()
} catch (e: java.lang.Exception) {
withContext(Dispatchers.Main) {
Toast.makeText(this@LoginActivity, e.message, Toast.LENGTH_LONG).show()
}
}
}
}
}
private fun checkLogged() {
if (auth.currentUser != null) {
startActivity(Intent(this@LoginActivity, MainActivity::class.java))
finish()
} else {
auth.signOut()
}
}
}
| The email address is badly formatted problem in Firebase | I am using Firebase Auth and Firestore in my app, and I am using TextInputLayout for the Login and Register screens, but I get an exception saying "The email address is badly formatted" when I try to add a new user from the Register screen. I searched but couldn't find any Kotlin question about this problem. I'll leave my code below. I'm waiting for your help. Happy coding :)
LoginActivity.kt
class LoginActivity : AppCompatActivity() {
private val db = Firebase.firestore.collection("users")
private val auth = Firebase.auth
private lateinit var binding: ActivityLoginBinding
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
binding = DataBindingUtil.setContentView(this, R.layout.activity_login)
binding.tvRegister.setOnClickListener {
val dialog = BottomSheetDialog(this@LoginActivity)
val view = layoutInflater.inflate(R.layout.bottom_sheet_layout, null)
dialog.setContentView(view)
val etName = view.findViewById<TextInputLayout>(R.id.etName).toString()
val etSurname = view.findViewById<TextInputLayout>(R.id.etSurname).toString()
val etMail = view.findViewById<TextInputLayout>(R.id.etRegisterEmail).toString()
val etPassword = view.findViewById<TextInputLayout>(R.id.etRegisterPassword).toString()
val etHeight = view.findViewById<TextInputLayout>(R.id.etHeight).toString()
val etWeight = view.findViewById<TextInputLayout>(R.id.etWeight).toString()
val btnRegister = view.findViewById<Button>(R.id.btnRegister)
btnRegister.setOnClickListener {
val user = hashMapOf<Any, String>(
"name" to etName,
"surname" to etSurname,
"email" to etMail,
"password" to etPassword,
"height" to etHeight,
"weight" to etWeight
)
if (etName.isNotEmpty() && etSurname.isNotEmpty() && etMail.isNotEmpty() && etPassword.isNotEmpty()) {
registerUser(etMail,etPassword,user)
} else {
Toast.makeText(
this@LoginActivity,
"You have to fill blanks",
Toast.LENGTH_SHORT
).show()
}
}
dialog.show()
}
}
private fun registerUser(email: String, password: String, user: HashMap<Any, String>) {
CoroutineScope(Dispatchers.IO).launch {
try {
auth.createUserWithEmailAndPassword(email, password)
.addOnSuccessListener {
db.document(auth.currentUser?.email.toString()).set(user)
.addOnSuccessListener {
Toast.makeText(this@LoginActivity, "Welcome", Toast.LENGTH_LONG)
.show()
checkLogged()
}
.addOnFailureListener {
Toast.makeText(this@LoginActivity, it.message, Toast.LENGTH_LONG)
.show()
}
}.await()
} catch (e: java.lang.Exception) {
withContext(Dispatchers.Main){
Toast.makeText(this@LoginActivity, e.message, Toast.LENGTH_LONG).show()
}
}
}
}
private fun checkLogged() {
if (auth.currentUser != null) {
startActivity(Intent(this@LoginActivity, MainActivity::class.java))
finish()
} else {
auth.signOut()
}
}
}
bottom_sheet_layout.xml (Register Screen)
<layout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto">
<androidx.constraintlayout.widget.ConstraintLayout
android:layout_width="match_parent"
android:layout_height="wrap_content">
<TextView
android:id="@+id/tvRegisterTitle"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginTop="16dp"
android:fontFamily="monospace"
android:text="Register"
android:textColor="@color/primaryDarkColor"
android:textSize="36sp"
android:textStyle="bold"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toTopOf="parent" />
<com.google.android.material.textfield.TextInputLayout
android:id="@+id/etName"
style="@style/Widget.MaterialComponents.TextInputLayout.OutlinedBox"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_marginHorizontal="48dp"
android:layout_marginTop="24dp"
android:hint="@string/name"
app:errorEnabled="true"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toBottomOf="@id/tvRegisterTitle">
<com.google.android.material.textfield.TextInputEditText
android:layout_width="match_parent"
android:layout_height="match_parent"
android:inputType="textPersonName"
android:textColorHint="@color/primaryDarkColor" />
</com.google.android.material.textfield.TextInputLayout>
<com.google.android.material.textfield.TextInputLayout
android:id="@+id/etSurname"
style="@style/Widget.MaterialComponents.TextInputLayout.OutlinedBox"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_marginHorizontal="48dp"
android:layout_marginTop="8dp"
android:hint="@string/surname"
app:errorEnabled="true"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toBottomOf="@id/etName">
<com.google.android.material.textfield.TextInputEditText
android:layout_width="match_parent"
android:layout_height="match_parent"
android:inputType="textPersonName"
android:textColorHint="@color/primaryDarkColor" />
</com.google.android.material.textfield.TextInputLayout>
<com.google.android.material.textfield.TextInputLayout
android:id="@+id/etRegisterEmail"
style="@style/Widget.MaterialComponents.TextInputLayout.OutlinedBox"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_marginHorizontal="48dp"
android:layout_marginTop="8dp"
android:hint="@string/e_mail"
app:endIconMode="clear_text"
app:endIconTint="@color/secondaryColor"
app:errorEnabled="true"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toBottomOf="@id/etSurname">
<com.google.android.material.textfield.TextInputEditText
android:layout_width="match_parent"
android:layout_height="match_parent"
android:inputType="textPersonName"
android:textColorHint="@color/secondaryDarkColor" />
</com.google.android.material.textfield.TextInputLayout>
<com.google.android.material.textfield.TextInputLayout
android:id="@+id/etRegisterPassword"
style="@style/Widget.MaterialComponents.TextInputLayout.OutlinedBox"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_marginHorizontal="48dp"
android:layout_marginTop="8dp"
android:hint="@string/password"
app:endIconMode="password_toggle"
app:endIconTint="@color/secondaryColor"
app:errorEnabled="true"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toBottomOf="@id/etRegisterEmail">
<com.google.android.material.textfield.TextInputEditText
android:layout_width="match_parent"
android:layout_height="match_parent"
android:inputType="textPassword"
android:textColorHint="@color/secondaryDarkColor" />
</com.google.android.material.textfield.TextInputLayout>
<com.google.android.material.textfield.TextInputLayout
android:id="@+id/etHeight"
style="@style/Widget.MaterialComponents.TextInputLayout.OutlinedBox"
android:layout_width="110dp"
android:layout_height="80dp"
android:layout_marginTop="8dp"
android:hint="@string/height"
app:errorEnabled="true"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toEndOf="@id/etWeight"
app:layout_constraintTop_toBottomOf="@id/etRegisterPassword">
<com.google.android.material.textfield.TextInputEditText
android:layout_width="match_parent"
android:layout_height="match_parent"
android:inputType="number"
android:textColorHint="@color/secondaryDarkColor" />
</com.google.android.material.textfield.TextInputLayout>
<com.google.android.material.textfield.TextInputLayout
android:id="@+id/etWeight"
style="@style/Widget.MaterialComponents.TextInputLayout.OutlinedBox"
android:layout_width="110dp"
android:layout_height="80dp"
android:layout_marginTop="8dp"
android:hint="@string/weight"
app:errorEnabled="true"
app:layout_constraintEnd_toStartOf="@id/etHeight"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toBottomOf="@id/etRegisterPassword">
<com.google.android.material.textfield.TextInputEditText
android:layout_width="match_parent"
android:layout_height="match_parent"
android:inputType="number"
android:textColorHint="@color/secondaryDarkColor" />
</com.google.android.material.textfield.TextInputLayout>
<Button
android:id="@+id/btnRegister"
style="?attr/materialButtonOutlinedStyle"
android:layout_width="250dp"
android:layout_height="wrap_content"
android:layout_marginVertical="16sp"
android:backgroundTint="@color/primaryColor"
android:text="@string/register"
android:textColor="@color/secondaryTextColor"
app:layout_constraintBottom_toTopOf="@+id/imageView2"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toBottomOf="@id/etWeight" />
<ImageView
android:id="@+id/imageView2"
android:layout_width="70dp"
android:layout_height="70dp"
android:rotation="26"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintEnd_toStartOf="@+id/imageView3"
app:layout_constraintStart_toStartOf="parent"
app:srcCompat="@drawable/broccoli_png" />
<ImageView
android:id="@+id/imageView3"
android:layout_width="70dp"
android:layout_height="70dp"
android:rotation="26"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintEnd_toStartOf="@+id/imageView4"
app:layout_constraintStart_toEndOf="@+id/imageView2"
app:srcCompat="@drawable/broccoli_png" />
<ImageView
android:id="@+id/imageView4"
android:layout_width="70dp"
android:layout_height="70dp"
android:rotation="26"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toEndOf="@+id/imageView3"
app:srcCompat="@drawable/broccoli_png" />
</androidx.constraintlayout.widget.ConstraintLayout>
| [
"Use this :\nauth.currentUser?.email.toString().trim()\n\nDont forget to add trim()\n",
"I still haven't solved the problem but I changed somethings in code\n\n1- Changed hash state to User named data class\n2- I define etMail like:\nval etMail =view.findViewById<TextInputLayout>(R.id.etRegisterEmail).editText?.text.toString()\n\n3- And I carried defined properties to inside of btnRegister.setOnClickListener\n\nFinal Version of Code\n\nclass LoginActivity : AppCompatActivity() {\n\n private val db = Firebase.firestore.collection(\"users\")\n private val auth = Firebase.auth\n private lateinit var binding: ActivityLoginBinding\n\n override fun onCreate(savedInstanceState: Bundle?) {\n super.onCreate(savedInstanceState)\n binding = DataBindingUtil.setContentView(this, R.layout.activity_login)\n\n binding.tvRegister.setOnClickListener {\n val dialog = BottomSheetDialog(this@LoginActivity)\n val view = layoutInflater.inflate(R.layout.bottom_sheet_layout, null)\n\n dialog.setContentView(view)\n val btnRegister = view.findViewById<Button>(R.id.btnRegister)\n\n btnRegister.setOnClickListener {\n\n val etName =\n view.findViewById<TextInputLayout>(R.id.etName).editText?.text.toString()\n val etSurname =\n view.findViewById<TextInputLayout>(R.id.etSurname).editText?.text.toString()\n val etMail =\n view.findViewById<TextInputLayout>(R.id.etRegisterEmail).editText?.text.toString()\n val etPassword =\n view.findViewById<TextInputLayout>(R.id.etRegisterPassword).editText?.text.toString()\n val etHeight =\n view.findViewById<TextInputLayout>(R.id.etHeight).editText?.text.toString()\n val etWeight =\n view.findViewById<TextInputLayout>(R.id.etWeight).editText?.text.toString()\n\n val user = User(etName, etSurname, etMail, etHeight, etWeight)\n\n registerUser(etMail, etPassword, user)\n }\n dialog.show()\n }\n }\n\n private fun registerUser(email: String, password: String, user: User) {\n if(email.isNotEmpty()&&password.isNotEmpty()){\n CoroutineScope(Dispatchers.IO).launch {\n try {\n auth.createUserWithEmailAndPassword(email, password)\n .addOnSuccessListener {\n db.document(auth.currentUser?.uid.toString()).set(user)\n checkLogged()\n Toast.makeText(this@LoginActivity,\"Welcome\",Toast.LENGTH_SHORT).show()\n }.await()\n } catch (e: java.lang.Exception) {\n withContext(Dispatchers.Main) {\n Toast.makeText(this@LoginActivity, e.message, Toast.LENGTH_LONG).show()\n }\n }\n }\n }\n }\n\n private fun checkLogged() {\n if (auth.currentUser != null) {\n startActivity(Intent(this@LoginActivity, MainActivity::class.java))\n finish()\n } else {\n auth.signOut()\n }\n }\n}\n\n"
] | [
1,
0
] | [] | [] | [
"android",
"firebase",
"firebase_authentication",
"kotlin"
] | stackoverflow_0074669994_android_firebase_firebase_authentication_kotlin.txt |
Q:
SQL row_number with a condition
I want to configure row_number with a case condition: look at the "time_diffs" column and check it. If several 1's go one after another, they form one group; each 0 is a group by itself. After each alternation between 1's and 0's, the result should grow by +1.
select session_id,
player_id,
country,
start_time,
end_time,
case when timestampdiff(minute,
lag(end_time, 1) over(partition by player_id order by end_time)
, start_time) < 5 then 1
when timestampdiff(minute, end_time
, lead(start_time, 1) over(partition by player_id order by start_time)) < 5 then 1
else 0
end as time_diffs
/* , here is some new code with an expected result */
from game_sessions
where 1=1
and player_id = 1
order by player_id, start_time
The result of the current query:
| session_id | player_id | country | start_time | end_time | time_diffs |
| --- | --- | --- | --- | --- | --- |
| 1 | 1 | UK | 01.01.2021 00:01 | 01.01.2021 00:10 | 1 |
| 2 | 1 | UK | 01.01.2021 00:12 | 01.01.2021 01:24 | 1 |
| 13 | 1 | UK | 01.01.2021 01:27 | 01.01.2021 01:50 | 1 |
| 3 | 1 | UK | 01.01.2021 10:01 | 01.01.2021 15:10 | 0 |
| 16 | 1 | UK | 01.01.2021 17:10 | 01.01.2021 17:20 | 1 |
| 17 | 1 | UK | 01.01.2021 17:22 | 01.01.2021 17:55 | 1 |
| 54 | 1 | UK | 01.01.2021 18:15 | 01.01.2021 18:35 | 0 |
| 32 | 1 | UK | 01.01.2021 18:55 | 01.01.2021 19:35 | 0 |
What I expect to see with a new column added to the current query:
| session_id | player_id | country | start_time | end_time | time_diffs | expected_result |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 1 | UK | 01.01.2021 00:01 | 01.01.2021 00:10 | 1 | 1 |
| 2 | 1 | UK | 01.01.2021 00:12 | 01.01.2021 01:24 | 1 | 1 |
| 13 | 1 | UK | 01.01.2021 01:27 | 01.01.2021 01:50 | 1 | 1 |
| 3 | 1 | UK | 01.01.2021 10:01 | 01.01.2021 15:10 | 0 | 2 |
| 16 | 1 | UK | 01.01.2021 17:10 | 01.01.2021 17:20 | 1 | 3 |
| 17 | 1 | UK | 01.01.2021 17:22 | 01.01.2021 17:55 | 1 | 3 |
| 54 | 1 | UK | 01.01.2021 18:15 | 01.01.2021 18:35 | 0 | 4 |
| 32 | 1 | UK | 01.01.2021 18:55 | 01.01.2021 19:35 | 0 | 5 |
A:
This is a type of "gaps and islands" problem, and it will require a few windowed functions (and subqueries) to get your desired result. The first step is to work out your gaps and your islands, which you can do with two row_numbers, one having an additional partition by:
SELECT *,
ROW_NUMBER() OVER (PARTITION BY player_id ORDER BY start_time)
- ROW_NUMBER() OVER (PARTITION BY player_id, time_diffs ORDER BY start_time) AS GroupingSet
FROM game_sessions;
N.B. For this query and all the following ones I have simplified things by including the field time_diffs directly in the dataset, to shorten the actual queries.
This gives:
| session_id | player_id | country | start_time | end_time | time_diffs | GroupingSet |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 1 | UK | 2021-01-01 00:01:00 | 2021-01-01 00:10:00 | 1 | 0 |
| 2 | 1 | UK | 2021-01-01 00:12:00 | 2021-01-01 01:24:00 | 1 | 0 |
| 13 | 1 | UK | 2021-01-01 01:27:00 | 2021-01-01 01:50:00 | 1 | 0 |
| 3 | 1 | UK | 2021-01-01 10:01:00 | 2021-01-01 15:10:00 | 0 | 3 |
| 16 | 1 | UK | 2021-01-01 17:10:00 | 2021-01-01 17:20:00 | 1 | 1 |
| 17 | 1 | UK | 2021-01-01 17:22:00 | 2021-01-01 17:55:00 | 1 | 1 |
| 54 | 1 | UK | 2021-01-01 18:15:00 | 2021-01-01 18:35:00 | 0 | 5 |
| 32 | 1 | UK | 2021-01-01 18:55:00 | 2021-01-01 19:35:00 | 0 | 5 |
What you can see here is that the "GroupingSet" column changes each time your time_diffs value changes; this is the basis for identifying your islands (consecutive groups of the same value).
For your output you then need a couple of additional windowed functions. First, you need to get the minimum start time per group; since you want to consider every row a unique group when time_diffs = 0, you need the following expression:
IF(time_diffs=1,MIN(start_time) OVER (PARTITION BY player_id, p.GroupingSet),start_time)
Adding this column then gives:
| session_id | player_id | country | start_time | end_time | time_diffs | GroupingSet | GroupStart |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 1 | UK | 2021-01-01 00:01:00 | 2021-01-01 00:10:00 | 1 | 0 | 2021-01-01 00:01:00 |
| 2 | 1 | UK | 2021-01-01 00:12:00 | 2021-01-01 01:24:00 | 1 | 0 | 2021-01-01 00:01:00 |
| 13 | 1 | UK | 2021-01-01 01:27:00 | 2021-01-01 01:50:00 | 1 | 0 | 2021-01-01 00:01:00 |
| 3 | 1 | UK | 2021-01-01 10:01:00 | 2021-01-01 15:10:00 | 0 | 3 | 2021-01-01 10:01:00 |
| 16 | 1 | UK | 2021-01-01 17:10:00 | 2021-01-01 17:20:00 | 1 | 1 | 2021-01-01 17:10:00 |
| 17 | 1 | UK | 2021-01-01 17:22:00 | 2021-01-01 17:55:00 | 1 | 1 | 2021-01-01 17:10:00 |
| 54 | 1 | UK | 2021-01-01 18:15:00 | 2021-01-01 18:35:00 | 0 | 5 | 2021-01-01 18:15:00 |
| 32 | 1 | UK | 2021-01-01 18:55:00 | 2021-01-01 19:35:00 | 0 | 5 | 2021-01-01 18:55:00 |
Finally, you can use this GroupStart column as the basis for DENSE_RANK(), giving a final query of:
SELECT p.session_id,
p.player_id,
p.country,
p.start_time,
p.end_time,
p.time_diffs,
DENSE_RANK() OVER(PARTITION BY player_id ORDER BY p.GroupStart) AS ExpectedOutput
FROM
(
SELECT *, IF(time_diffs = 0,start_time,MIN(start_time) OVER (PARTITION BY player_id, p.GroupingSet)) AS GroupStart
FROM
(
SELECT *,
ROW_NUMBER() OVER (PARTITION BY player_id ORDER BY start_time)
- ROW_NUMBER() OVER (PARTITION BY player_id, time_diffs ORDER BY start_time) AS GroupingSet
FROM game_sessions
) AS p
) AS p
ORDER BY
player_id, start_time;
A potentially simpler alternative is to identify rows where you don't want to increment the count and return 0 for them, otherwise 1, i.e.
IF(time_diffs=1 AND LAG(time_diffs,1,0) OVER(PARTITION BY player_id ORDER BY start_time)=1,0,1)
Then sum this column:
SELECT p.session_id,
p.player_id,
p.country,
p.start_time,
p.end_time,
p.time_diffs,
SUM(TDChanges) OVER(PARTITION BY player_id ORDER BY p.start_time) AS ExpectedOutput
FROM
(
SELECT *,
IF(time_diffs=1 AND LAG(time_diffs,1,0) OVER(PARTITION BY player_id ORDER BY start_time)=1,0,1) AS TDChanges
FROM game_sessions
) AS p
ORDER BY
player_id, start_time;
Both queries give your expected output - examples on db<>fiddle
| SQL row_number with a condition | I want to configure row_number with a case condition: look at the "time_diffs" column and check it. If several 1's go one after another, they form one group; each 0 is a group by itself. After each alternation between 1's and 0's, the result should grow by +1.
select session_id,
player_id,
country,
start_time,
end_time,
case when timestampdiff(minute,
lag(end_time, 1) over(partition by player_id order by end_time)
, start_time) < 5 then 1
when timestampdiff(minute, end_time
, lead(start_time, 1) over(partition by player_id order by start_time)) < 5 then 1
else 0
end as time_diffs
/* , here is some new code with an expected result */
from game_sessions
where 1=1
and player_id = 1
order by player_id, start_time
The result of the current query:
| session_id | player_id | country | start_time | end_time | time_diffs |
| --- | --- | --- | --- | --- | --- |
| 1 | 1 | UK | 01.01.2021 00:01 | 01.01.2021 00:10 | 1 |
| 2 | 1 | UK | 01.01.2021 00:12 | 01.01.2021 01:24 | 1 |
| 13 | 1 | UK | 01.01.2021 01:27 | 01.01.2021 01:50 | 1 |
| 3 | 1 | UK | 01.01.2021 10:01 | 01.01.2021 15:10 | 0 |
| 16 | 1 | UK | 01.01.2021 17:10 | 01.01.2021 17:20 | 1 |
| 17 | 1 | UK | 01.01.2021 17:22 | 01.01.2021 17:55 | 1 |
| 54 | 1 | UK | 01.01.2021 18:15 | 01.01.2021 18:35 | 0 |
| 32 | 1 | UK | 01.01.2021 18:55 | 01.01.2021 19:35 | 0 |
What I expect to see with a new column added to the current query:
| session_id | player_id | country | start_time | end_time | time_diffs | expected_result |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 1 | UK | 01.01.2021 00:01 | 01.01.2021 00:10 | 1 | 1 |
| 2 | 1 | UK | 01.01.2021 00:12 | 01.01.2021 01:24 | 1 | 1 |
| 13 | 1 | UK | 01.01.2021 01:27 | 01.01.2021 01:50 | 1 | 1 |
| 3 | 1 | UK | 01.01.2021 10:01 | 01.01.2021 15:10 | 0 | 2 |
| 16 | 1 | UK | 01.01.2021 17:10 | 01.01.2021 17:20 | 1 | 3 |
| 17 | 1 | UK | 01.01.2021 17:22 | 01.01.2021 17:55 | 1 | 3 |
| 54 | 1 | UK | 01.01.2021 18:15 | 01.01.2021 18:35 | 0 | 4 |
| 32 | 1 | UK | 01.01.2021 18:55 | 01.01.2021 19:35 | 0 | 5 |
| [
"This is a type of [Gaps and islands problem], and will require a few Windowed functions (and subqueries) to get your desired result, the first step is to work out your gaps and your islands, which you can do with the use of two row_numbers, one having an additional partition by:\nSELECT *,\n ROW_NUMBER() OVER (PARTITION BY player_id ORDER BY start_time)\n - ROW_NUMBER() OVER (PARTITION BY player_id, time_diffs ORDER BY start_time) AS GroupingSet\nFROM game_sessions;\n\nN.B. For this query and all other queries I have taken the step of simplifying your entire query to include the field time_diffs in the dataset to shorten the actual query\nThis gives:\n\n\n\n\nsession_id\nplayer_id\ncountry\nstart_time\nend_time\ntime_diffs\nGroupingSet\n\n\n\n\n1\n1\nUK\n2021-01-01 00:01:00\n2021-01-01 00:10:00\n1\n0\n\n\n2\n1\nUK\n2021-01-01 00:12:00\n2021-01-01 01:24:00\n1\n0\n\n\n13\n1\nUK\n2021-01-01 01:27:00\n2021-01-01 01:50:00\n1\n0\n\n\n3\n1\nUK\n2021-01-01 10:01:00\n2021-01-01 15:10:00\n0\n3\n\n\n16\n1\nUK\n2021-01-01 17:10:00\n2021-01-01 17:20:00\n1\n1\n\n\n17\n1\nUK\n2021-01-01 17:22:00\n2021-01-01 17:55:00\n1\n1\n\n\n54\n1\nUK\n2021-01-01 18:15:00\n2021-01-01 18:35:00\n0\n5\n\n\n32\n1\nUK\n2021-01-01 18:55:00\n2021-01-01 19:35:00\n0\n5\n\n\n\n\nWhat you can see here is that the \"GroupingSet\" column changes each time your time_diff changes, this is the basis for identifying your islands (consecutive groups of the same value).\nFor your output you then need a couple of additional windowed functions, firstly you need to get the minimum start time per group, since you want to consider every row a unique group for time_diffs = 0, you need the following expression:\nIF(time_diffs=1,MIN(start_time) OVER (PARTITION BY player_id, p.GroupingSet),start_time)\n\nAdding this column then gives:\n\n\n\n\nsession_id\nplayer_id\ncountry\nstart_time\nend_time\ntime_diffs\nGroupingSet\nGroupStart\n\n\n\n\n1\n1\nUK\n2021-01-01 00:01:00\n2021-01-01 00:10:00\n1\n0\n2021-01-01 00:01:00\n\n\n2\n1\nUK\n2021-01-01 00:12:00\n2021-01-01 01:24:00\n1\n0\n2021-01-01 00:01:00\n\n\n13\n1\nUK\n2021-01-01 01:27:00\n2021-01-01 01:50:00\n1\n0\n2021-01-01 00:01:00\n\n\n3\n1\nUK\n2021-01-01 10:01:00\n2021-01-01 15:10:00\n0\n3\n2021-01-01 10:01:00\n\n\n16\n1\nUK\n2021-01-01 17:10:00\n2021-01-01 17:20:00\n1\n1\n2021-01-01 17:10:00\n\n\n17\n1\nUK\n2021-01-01 17:22:00\n2021-01-01 17:55:00\n1\n1\n2021-01-01 17:10:00\n\n\n54\n1\nUK\n2021-01-01 18:15:00\n2021-01-01 18:35:00\n0\n5\n2021-01-01 18:15:00\n\n\n32\n1\nUK\n2021-01-01 18:55:00\n2021-01-01 19:35:00\n0\n5\n2021-01-01 18:55:00\n\n\n\n\nFinally, you can use this MinStart column as the basis for DENSE_RANK(), giving a final query of\nSELECT p.session_id,\n p.player_id,\n p.country,\n p.start_time,\n p.end_time,\n p.time_diffs,\n DENSE_RANK() OVER(PARTITION BY player_id ORDER BY p.GroupStart) AS ExpectedOutput\nFROM\n (\n SELECT *, IF(time_diffs = 0,start_time,MIN(start_time) OVER (PARTITION BY player_id, p.GroupingSet)) AS GroupStart\n FROM\n (\n SELECT *,\n ROW_NUMBER() OVER (PARTITION BY player_id ORDER BY start_time)\n - ROW_NUMBER() OVER (PARTITION BY player_id, time_diffs ORDER BY start_time) AS GroupingSet\n FROM game_sessions\n ) AS p\n ) AS p\nORDER BY\n player_id, start_time;\n\n\nA potentially simpler alternative is to identify rows where you don't want to increment the count, and return 0 otherwise 1 i.e\nIF(time_diffs=1 AND LAG(time_diffs,1,0) OVER(PARTITION BY player_id ORDER BY start_time)=1,0,1)\n\nThen sum this column:\nSELECT p.session_id,\n p.player_id,\n 
p.country,\n p.start_time,\n p.end_time,\n p.time_diffs,\n SUM(TDChanges) OVER(PARTITION BY player_id ORDER BY p.time_start) AS ExpectedOutput\nFROM\n (\n SELECT *,\n IIF(time_diffs=1 AND LAG(time_diffs,1,0) OVER(PARTITION BY player_id ORDER BY time_start)=1,0,1) AS TDChanges\n FROM game_sessions\n ) AS p\nORDER BY\n player_id, start_time;\n\nBoth queries give your expected output - Examples on db<>fidle\n"
] | [
0
] | [] | [] | [
"mysql",
"row_number",
"sql"
] | stackoverflow_0074673651_mysql_row_number_sql.txt |
Q:
Angular/TypeScript – sort array of strings which includes numbers
I have an array like:
arr = ["100 abc", "ad", "5 star", "orange"];
I want to sort the strings without numbers first, then put the strings that start with numbers at the end; within those, omit the numbers and sort the strings by the remaining name alphabetically.
Expected output:
ad, orange, 100 abc, 5 star.
How can I do that in TypeScript/Angular?
A:
The question is actually about partitioning rather than sorting. You can achieve this easily with two filter calls:
result = [
...arr.filter(a => !/\d/.test(a)),
...arr.filter(a => /\d/.test(a)),
]
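For the sample input this already produces the expected order, because filter keeps the original relative order within each partition. A quick check (hypothetical usage, reusing arr from the question):
const arr = ["100 abc", "ad", "5 star", "orange"];
const result = [
  ...arr.filter(a => !/\d/.test(a)), // strings without digits first
  ...arr.filter(a => /\d/.test(a)),  // strings with digits last
];
console.log(result); // ["ad", "orange", "100 abc", "5 star"]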
A:
Probably something like this:
const startsWithNum = (str) => /^\d/.test(str);
const afterNumPart = (str) => str.match(/^\d+\s(.*)/)[1]; // capture group: the part after the leading number
const compareStrings = (a, b) => {
if (startsWithNum(a)) {
if (startsWithNum(b)) {
      // Both strings start with numbers, compare using the parts after the numbers
      return afterNumPart(a) < afterNumPart(b) ? -1 : 1;
    } else {
      // A starts with a number, but B does not, so B comes first
      return 1;
    }
  } else if (startsWithNum(b)) {
    // A does not start with a number, but B does, so A comes first
    return -1;
  } else {
    // Neither string starts with a number, compare full strings
return a < b ? -1 : 1;
}
};
const arr = ["100 abc", "ad", "5 star", "orange"];
const sortedArr = arr.sort(compareStrings);
// ['ad', 'orange', '100 abc', '5 star']
A:
Here you go:
const arr = ["100 abc", "ad", "5 star", "orange"]
arr.map(item =>
    item.split(' ').map((subItem: string) => +subItem ? +subItem : subItem)
  ).sort((a, b) => {
    // rows containing a parsed number sort after rows without one
    const aHasNum = a.some((part: any) => typeof part === 'number');
    const bHasNum = b.some((part: any) => typeof part === 'number');
    return Number(aHasNum) - Number(bHasNum);
  }).map(item => item.join(' '))
| Angular/TypeScript – sort array of strings which includes numbers | I have an array like:
arr = ["100 abc", "ad", "5 star", "orange"];
I want to sort the strings without numbers first, then put the strings that start with numbers at the end; within those, omit the numbers and sort the strings by the remaining name alphabetically.
Expected output:
ad, orange, 100 abc, 5 star.
How can I do that in TypeScript/Angular?
| [
"The question is actually about partitioning rather than sorting. You can achieve this easily with two filter calls:\nresult = [\n ...arr.filter(a => !/\\d/.test(a)),\n ...arr.filter(a => /\\d/.test(a)),\n]\n\n",
"Probably something like this:\nconst startsWithNum = (str) => /^\\d/.test(str);\n\nconst afterNumPart = (str) => str.match(/^\\d+\\s(.*)/)[0];\n\nconst compareStrings = (a, b) => {\n if (startsWithNum(a)) {\n if (startsWithNum(b)) {\n // Both strings contain numbers, compare using parts after numbers\n return afterNumPart (a) < afterNumPart (b) ? -1 : 1;\n } else {\n // A contains numbers, but B does not, B comes first\n return 1;\n }\n } else if (startsWithNum(b)) {\n // A does not contain numbers, but B does, A comes first\n return -1;\n } else {\n // Neither string contains numbers, compare full strings\n return a < b ? -1 : 1;\n }\n};\n\nconst arr = [\"100 abc\", \"ad\", \"5 star\", \"orange\"];\nconst sortedArr = arr.sort(compareStrings);\n// ['ad', 'orange', '100 abc', '5 star']\n\n",
"Here you go:\nconst arr = [\"100 abc\", \"ad\", \"5 star\", \"orange\"]\narr.map(item => {\n return item.split(' ').map((subItem: string) => +subItem ? +subItem : subItem)}\n ).sort((a,b) => {\n if(a.find((item: any) => typeof item === 'number')){\n return 1;\n }else return -1\n }).map(item => item.join(' '))\n\n"
] | [
1,
0,
0
] | [] | [] | [
"angular",
"javascript",
"sorting",
"typescript"
] | stackoverflow_0074673825_angular_javascript_sorting_typescript.txt |
Q:
MongoParseError: options useCreateIndex, useFindAndModify are not supported
I tried to run it and it threw an error like the one in the title. This is my code:
const URI = process.env.MONGODB_URL;
mongoose.connect(URI, {
useCreateIndex: true,
useFindAndModify: false,
useNewUrlParser: true,
useUnifiedTopology: true
}, err => {
if(err) throw err;
console.log('Connected to MongoDB!!!')
})
I set the MONGODB_URL in .env :
MONGODB_URL = mongodb+srv://username:<password>@cluster0.accdl.mongodb.net/website?retryWrites=true&w=majority
How to fix it?
A:
From the Mongoose 6.0 docs:
useNewUrlParser, useUnifiedTopology, useFindAndModify, and useCreateIndex are no longer supported options. Mongoose 6 always behaves as if useNewUrlParser, useUnifiedTopology, and useCreateIndex are true, and useFindAndModify is false. Please remove these options from your code.
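So the connect call from the question reduces to a plain call with no options object. A minimal sketch, assuming the same URI variable from the question (mongoose.connect returns a promise in v6):
const URI = process.env.MONGODB_URL;

mongoose.connect(URI)
    .then(() => console.log('Connected to MongoDB!!!'))
    .catch(err => console.error(err));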
A:
I had the same problem, but if you remove useCreateIndex and useFindAndModify it will solve the problem. Just write:
const URI = process.env.MONGODB_URL;
mongoose.connect(URI, {
useNewUrlParser: true,
useUnifiedTopology: true
}, err => {
if(err) throw err;
console.log('Connected to MongoDB!!!')
});
It worked for me.
A:
No More Deprecation Warning Options
useNewUrlParser, useUnifiedTopology, useFindAndModify, and useCreateIndex are no longer supported options. Mongoose 6 always behaves as if useNewUrlParser, useUnifiedTopology, and useCreateIndex are true, and useFindAndModify is false. Please remove these options from your code.
src --> https://mongoosejs.com/docs/migrating_to_6.html#no-more-deprecation-warning-options
// No longer necessary:
mongoose.set('useFindAndModify', false);
await mongoose.connect('mongodb://localhost:27017/test', {
useNewUrlParser: true, // <-- no longer necessary
useUnifiedTopology: true // <-- no longer necessary
});
A:
I have the same issue.
Instead of this:
mongoose.connect(URI, {
useCreateIndex: true,
useFindAndModify: false,
useNewUrlParser: true,
useUnifiedTopology: true
}, err => {
if(err) throw err;
console.log('Connected to MongoDB!!!')
})
try this:
mongoose.connect(URI,
err => {
if(err) throw err;
console.log('connected to MongoDB')
});
A:
The error is because of the new version of Mongoose, i.e. version 6.0.6.
As the documentation says:
No More Deprecation Warning Options
useNewUrlParser, useUnifiedTopology, useFindAndModify, and useCreateIndex are no longer supported options. Mongoose 6 always behaves as if useNewUrlParser, useUnifiedTopology, and useCreateIndex are true, and useFindAndModify is false. Please remove these options from your code.
Also, there are some major changes in the new version.
For more info visit https://mongoosejs.com/docs/migrating_to_6.html#no-more-deprecation-warning-options
import mongoose from 'mongoose';
const db = process.env.MONGOURI;
const connectDB = async () => {
try {
console.log(db);
await mongoose.connect(`${db}`, {
useNewUrlParser: true,
useUnifiedTopology: true,
});
console.log('MongoDB connected');
} catch (error) {
console.log(error.message);
process.exit(1);
}
};
export default connectDB;
A:
When I commented out useNewUrlParser and useCreateIndex it worked for me.
A:
remove usecreateindex, usefindandmodify options
const URI = process.env.MONGODB_URL;
mongoose.connect(URI, {
useNewUrlParser: true,
useUnifiedTopology: true
}, err => {
if(err) throw err;
console.log('Connected to MongoDB!!!')
})
A:
Mongoose.connect(
DB_URL,
async(err)=>{
if(err) throw err;
console.log("conncted to db")
}
)
A:
const URI = process.env.MONGODB_URL;
mongoose.connect(URI, {
//useCreateIndex: true,
//useFindAndModify: false,
useNewUrlParser: true,
useUnifiedTopology: true
}, err => {
if(err) throw err;
console.log('Connected to MongoDB!!!')
})
A:
//this is working for me as of 08-2021
const mongoose = require('mongoose');
var url = "mongodb+srv://username:<password>@cluster0.accdl.mongodb.net/website?retryWrites=true&w=majority";
mongoose.connect(url, function(err, db) {
if (err) throw err;
console.log("Database created!");
db.close();
});
A:
Use this to check your database connection:
const mongoose = require("mongoose");
const url = ... /* path of your db */;
//to connect or create our database
mongoose.connect(url, { useUnifiedTopology : true, useNewUrlParser : true , }).then(() => {
console.log("Connection successfull");
}).catch((e) => console.log("No connection"))
A:
Options useCreateIndex and useFindAndModify are not supported in v6:
try {
await mongoose.connect(process.env.MONGODB_URL, {
useNewUrlParser: true,
useUnifiedTopology: true,
});
console.log("Connect successfully!");
} catch (error) {
console.log("Connect Failure!");
}
A:
const URI = process.env.MONGODB_URL;
mongoose.connect(URI, { useUnifiedTopology: true }
);
const connection = mongoose.connection;
connection.once('open', () => {
console.log("MongoDB database connection established successfully");
} )
A:
useNewUrlParser, useUnifiedTopology, useFindAndModify, and useCreateIndex are no longer supported options.
Basically, just remove those options and you'll be fine :)
A:
I faced the same error. Just remove the "useCreateIndex: true" and it will work, but make sure the MongoDB service is running on your local machine in the first place ~ brew services start [email protected] :)
#HappyCoding
A:
mongoose.connect(URL,{
}).then(()=>{
console.log('database connected')
}).catch(err=>{
console.log('database not connected',err)
})
A:
I experienced the same thing; all I did was omit both (useFindAndModify: false) and (useCreateIndex), the reason being that my Mongoose version no longer supports them.
I'm sure this will work for you.
A:
The asPromise() Method for Connections
Mongoose connections are no longer thenable. This means that
await mongoose.createConnection(uri) no longer waits for Mongoose to connect.
Use mongoose.createConnection(uri).asPromise() instead.
// The below no longer works in Mongoose 6
await mongoose.createConnection(uri);
// Do this instead
await mongoose.createConnection(uri).asPromise();
A:
const ConnectDB = async ()=>{
try{
const con= await mongoose.connect("mongodb+srv://MahmoudReda:[email protected]/?retryWrites=true&w=majority",{
useUnifiedTopology:true,
// useNewUrlParser:true,
// useCreateIndex:true
})
console.log("the connection is stable");
}
catch(error)
{
console.log(error.message)
process.exit(1);
}
}
This code works well; useNewUrlParser and useCreateIndex are not supported.
A:
In the new Mongoose version you don't have to use these options:
useCreateIndex: true,
useFindAndModify: false,
useNewUrlParser: true,
useUnifiedTopology: true
| MongoParseError: options useCreateIndex, useFindAndModify are not supported | I tried to run it and it said an error like the title. and
this is my code:
const URI = process.env.MONGODB_URL;
mongoose.connect(URI, {
useCreateIndex: true,
useFindAndModify: false,
useNewUrlParser: true,
useUnifiedTopology: true
}, err => {
if(err) throw err;
console.log('Connected to MongoDB!!!')
})
I set the MONGODB_URL in .env :
MONGODB_URL = mongodb+srv://username:<password>@cluster0.accdl.mongodb.net/website?retryWrites=true&w=majority
How to fix it?
| [
"From the Mongoose 6.0 docs:\n\nuseNewUrlParser, useUnifiedTopology, useFindAndModify, and useCreateIndex are no longer supported options. Mongoose 6 always behaves as if useNewUrlParser, useUnifiedTopology, and useCreateIndex are true, and useFindAndModify is false. Please remove these options from your code.\n\n",
"Same problem was with me but if you remove useCreateIndex, useFindAndModify it will solve the problem just write :\nconst URI = process.env.MONGODB_URL;\n\nmongoose.connect(URI, {\n\nuseNewUrlParser: true, \n\nuseUnifiedTopology: true \n\n}, err => {\nif(err) throw err;\nconsole.log('Connected to MongoDB!!!')\n});\n\nIt worked for me.\n",
"No More Deprecation Warning Options\nuseNewUrlParser, useUnifiedTopology, useFindAndModify, and useCreateIndex are no longer supported options. Mongoose 6 always behaves as if useNewUrlParser, useUnifiedTopology, and useCreateIndex are true, and useFindAndModify is false. Please remove these options from your code.\nsrc --> https://mongoosejs.com/docs/migrating_to_6.html#no-more-deprecation-warning-options\n// No longer necessary:\nmongoose.set('useFindAndModify', false);\n\nawait mongoose.connect('mongodb://localhost:27017/test', {\n useNewUrlParser: true, // <-- no longer necessary\n useUnifiedTopology: true // <-- no longer necessary\n});\n\n",
"I have the same issue.\nInstaead\nmongoose.connect(URI, {\n useCreatendex: true, \n useFindAndModify: false, \n useNewUrlParser: true, \n useUnifiedTopology: true \n}, err => {\n if(err) throw err;\n console.log('Connected to MongoDB!!!')\n})\n\ntry this:\nmongoose.connect(URI,\n err => {\n if(err) throw err;\n console.log('connected to MongoDB')\n });\n\n",
"The error is because of the new version of the mongoose i.e version 6.0.6.\nAs the documentation says:\n\nNo More Deprecation Warning Options\nuseNewUrlParser, useUnifiedTopology, useFindAndModify, and useCreateIndex are no longer supported options. Mongoose 6 always behaves as if useNewUrlParser, useUnifiedTopology, and useCreateIndex are true, and useFindAndModify is false. Please remove these options from your code.\n\nAlso, there are some major changes in the new version.\nFor more info visit https://mongoosejs.com/docs/migrating_to_6.html#no-more-deprecation-warning-options\nimport mongoose from 'mongoose';\n \n const db = process.env.MONGOURI;\n \n const connectDB = async () => {\n try {\n console.log(db);\n await mongoose.connect(`${db}`, {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n });\n console.log('MongoDB connected');\n } catch (error) {\n console.log(error.message);\n process.exit(1);\n }\n };\n \n \n export default connectDB;\n\n",
"\nWhen I commented useNewUrlParser and useCreateIndex it worked for me.\n",
"remove usecreateindex, usefindandmodify options\nconst URI = process.env.MONGODB_URL;\n\nmongoose.connect(URI, {\n useNewUrlParser: true, \n useUnifiedTopology: true \n}, err => {\n if(err) throw err;\n console.log('Connected to MongoDB!!!')\n})\n\n",
"Mongoose.connect(\n DB_URL,\n async(err)=>{\n if(err) throw err;\n console.log(\"conncted to db\")\n }\n)\n\n",
"const URI = process.env.MONGODB_URL;\n\nmongoose.connect(URI, {\n //useCreatendex: true, \n //useFindAndModify: false, \n useNewUrlParser: true, \n useUnifiedTopology: true \n}, err => {\n if(err) throw err;\n console.log('Connected to MongoDB!!!')\n}) \n\n",
"//this is working for me at date/version (08-2021\nconst mongoose = require('mongoose');\nvar url = \"mongodb+srv://username:<password>@cluster0.accdl.mongodb.net/website? \nretryWrites=true&w=majority\";\nmongoose.connect(url, function(err, db) {\n if (err) throw err;\n console.log(\"Database created!\");\n db.close();\n});\n\n",
"Use this to check your database connection:\nconst mongoose = require(\"mongoose\");\nconst url = ... /* path of your db */;\n\n//to connect or create our database\nmongoose.connect(url, { useUnifiedTopology : true, useNewUrlParser : true , }).then(() => {\n console.log(\"Connection successfull\");\n}).catch((e) => console.log(\"No connection\"))\n\n",
"Options useCreateIndex, useFindAndModify are not supported in version v6\ntry {\n await mongoose.connect(process.env.MONGODB_URL, {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n });\n console.log(\"Connect successfully!\");\n} catch (error) {\n console.log(\"Connect Failure!\");\n}\n\n",
"const URI = process.env.MONGODB_URL;\nmongoose.connect(URI, { useUnifiedTopology: true } \n);\n\nconst connection = mongoose.connection;\nconnection.once('open', () => {\n console.log(\"MongoDB database connection established successfully\");\n} )\n\n\n",
"useNewUrlParser, useUnifiedTopology, useFindAndModify, and useCreateIndex are no longer supported options.\nbasically, just remove that object and you'll be fine:)\n",
"I faced the same error. Just remove the \"useCreateIndex: true\" and it will work but make sure the mongoDB service is running on your local machine in the first place ~ brew services start [email protected] :)\n#HappyCoding\n",
"mongoose.connect(URL,{\n }).then(()=>{\n console.log('database connected')\n}).catch(err=>{\n console.log('database not connected',err)\n})\n\n",
"I experienced same thing, all i did was to just omit both (useFindAndModify: false) and (useCreateIndex), the reason being that my mongoose version never supported any of this..\nSure this will work for you.\n",
"The asPromise() Method for Connections\nMongoose connections are no longer thenable. This means that\nawait mongoose.createConnection(uri) no longer waits for Mongoose to connect.\nUse mongoose.createConnection(uri).asPromise() instead.\n// The below no longer works in Mongoose 6\nawait mongoose.createConnection(uri);\n\n// Do this instead\n await mongoose.createConnection(uri).asPromise();\n\n",
"const ConnectDB = async ()=>{\ntry{\nconst con= await mongoose.connect(\"mongodb+srv://MahmoudReda:[email protected]/?retryWrites=true&w=majority\",{\nuseUnifiedTopology:true,\n// useNewUrlParser:true,\n// useCreateIndex:true\n})\nconsole.log(\"the connection is stable\");\n}\ncatch(error)\n{\nconsole.log(error.message)\nprocess.exit(1);\n}\n}\nthis code is working well useNewUrlParser and useCreateIndex not supported\n",
"in new mongodb version you dont't have to use those:\nuseCreateIndex: true,\nuseFindAndModify: false,\nuseNewUrlParser: true,\nuseUnifiedTopology: true\n"
] | [
154,
33,
25,
11,
10,
8,
7,
4,
4,
2,
2,
2,
1,
1,
0,
0,
0,
0,
0,
0
] | [
"October 27th 2021\nWithout a try catch it won't work. I am just posting this to keep everyone up to date with mongo connections.\n const uri = process.env.ATLAS_CONNECTION;\n mongoose.connect(uri, { useNewUrlParser: true, useUnifiedTopology: \n true });\n\n const connection = mongoose.connection;\n\n\n try{\n connection.once('open', () => {\n console.log(\"MongoDB database connection established \n successfully\");\n })\n } catch(e) {\n console.log(e);\n }\n\n function close_connection() {\n connection.close();\n }\n\nConnection Established\n",
"I have faced the same error the new mongoose version does already have these options by default for you so all you need to do is to remove these options and it will work perfectly.\n"
] | [
-1,
-1
] | [
"mongodb"
] | stackoverflow_0068958221_mongodb.txt |
Q:
Reverting changes in Git and keeping later changesets
I'm unsure whether I have misunderstood how revert works or whether Visual Studio is just doing something weird.
First, I made the following commit on the master branch:
WriteNumbers(100, 2);
void WriteNumbers(int toWhere, int dividableByWhat)
{
for (int i = 1; i <= toWhere; i++)
if (i % dividableByWhat == 0)
Console.WriteLine(i);
Console.WriteLine();
}
Then I created a new branch, switched to it, and added a new line as follows:
WriteNumbers(100, 2);
WriteNumbers(100, 3);
void WriteNumbers(int toWhere, int dividableByWhat)
{
for (int i = 1; i <= toWhere; i++)
if (i % dividableByWhat == 0)
Console.WriteLine(i);
Console.WriteLine();
}
I merged this branch into master. Afterwards I made another commit in master where I just added a new WriteNumbers(100, 4); line.
Now, from my understanding, if I revert the changeset which introduced WriteNumbers(100, 3); I should still have WriteNumbers(100, 4); in my file, but that just doesn't seem to be the case, at least in Visual Studio.
As can be seen, when I run revert on the changeset, I get the option to either delete both lines (as it was before I merged the second branch into master) or to keep both changes (which is also an invalid state). Is there any other way to delete just the WriteNumbers(100, 3); line, or am I doing something wrong?
A:
When you revert a commit which changes a line that was altered in later commits (or its surrounding lines were altered), then you get a merge conflict. You must resolve this conflict manually and that is what Visual Studio is asking you to do.
Neither Git nor Visual Studio can know which lines are the correct ones after the revert, so you have to pick.
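A minimal sketch of the same situation on the command line (the commit hash placeholder is hypothetical):
git revert <sha-that-added-WriteNumbers-100-3>
# CONFLICT (content): Merge conflict in Program.cs
# Edit the file: keep WriteNumbers(100, 2); and WriteNumbers(100, 4);,
# delete WriteNumbers(100, 3); and the conflict markers, then:
git add Program.cs
git revert --continue
Visual Studio's "delete both"/"keep both" buttons are just shortcuts over this same conflict resolution; editing the conflicted lines by hand in the editor gives you the state with only WriteNumbers(100, 3); removed.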
| Reverting changes in Git and keeping later changesets | I'm unsure if I wrongly understood how the revert works or is it Visual Studio just doing something weird.
Firstly I made the following commit inside of master branch
WriteNumbers(100, 2);
void WriteNumbers(int toWhere, int dividableByWhat)
{
for (int i = 1; i <= toWhere; i++)
if (i % dividableByWhat == 0)
Console.WriteLine(i);
Console.WriteLine();
}
Then I created new branch, switched to it and just added the new line as follows
WriteNumbers(100, 2);
WriteNumbers(100, 3);
void WriteNumbers(int toWhere, int dividableByWhat)
{
for (int i = 1; i <= toWhere; i++)
if (i % dividableByWhat == 0)
Console.WriteLine(i);
Console.WriteLine();
}
I merged this branch into master. Afterwards I made another commit in master where I just added a new WriteNumbers(100, 4); line.
Now from my understanding If I revert the changeset which introduced WriteNumbers(100, 3); I should still have WriteNumbers(100, 4); in my file but that just doesn't seem to be case, at least in Visual Studio.
As it can be seen when I run revert on changeset, I get option to either delete both lines (as was before I merged the second branch into master) or to keep both changes (which is also invalid state). Is there any other way to just delete WriteNumbers(100, 3); line or I'm just doing something wrongly?
| [
"When you revert a commit which changes a line that was altered in later commits (or its surrounding lines were altered), then you get a merge conflict. You must resolve this conflict manually and that is what Visual Studio is asking you to do.\nGit nor Visual Studio can know which lines are the correct ones after the revert, so you have to pick.\n"
] | [
0
] | [] | [] | [
"git",
"git_rebase",
"git_revert",
"visual_studio"
] | stackoverflow_0074338815_git_git_rebase_git_revert_visual_studio.txt |
Q:
Design Minimal API and use HttpClient to post a file to it
I have a legacy system interfacing issue that my team has elected to solve by standing up a .NET 7 Minimal API which needs to accept a file upload. It should work for small and large files (let's say at least 500 MiB). The API will be called from a legacy system using HttpClient in a .NET Framework 4.7.1 app.
I can't quite seem to figure out how to design the signature of the Minimal API and how to call it with HttpClient in a way that totally works. It's something I've been hacking at on and off for several days, and haven't documented all of my approaches, but suffice it to say there have been varying results involving, among other things:
4XX and 500 errors returned by the HTTP call
An assortment of exceptions on either side
Calls that throw and never hit a breakpoint on the API side
Calls that get through but the Stream on the API end is not what I expect
Errors being different depending on whether the file being uploaded is small or large
Text files being persisted on the server that contain some of the HTTP headers in addition to their original contents
On the Minimal API side, I've tried all sorts of things in the signature (IFormFile, Stream, PipeReader, HttpRequest). On the calling side, I've tried several approaches (messing with headers, using the Flurl library, various content encodings and MIME types, multipart, etc).
This seems like it should be dead simple, so I'm trying to wipe the slate clean here, start with an example of something that partially works, and hope someone might be able to illuminate the path forward for me.
Example of Minimal API:
// IDocumentStorageManager is an injected dependency that takes an int and a Stream and returns a string of the newly uploaded file's URI
app.MapPost(
"DocumentStorage/CreateDocument2/{documentId:int}",
async (PipeReader pipeReader, int documentId, IDocumentStorageManager documentStorageManager) =>
{
using var ms = new MemoryStream();
await pipeReader.CopyToAsync(ms);
ms.Position = 0;
return await documentStorageManager.CreateDocument(documentId, ms);
});
Call the Minimal API using HttpClient:
// filePath is the path on local disk, uri is the Minimal API's URI
private static async Task<string> UploadWithHttpClient2(string filePath, string uri)
{
var fileStream = File.Open(filePath, FileMode.Open);
var content = new StreamContent(fileStream);
var httpRequestMessage = new HttpRequestMessage(HttpMethod.Post, uri);
var httpClient = new HttpClient();
httpRequestMessage.Content = content;
httpClient.Timeout = TimeSpan.FromMinutes(5);
var result = await httpClient.SendAsync(httpRequestMessage);
return await result.Content.ReadAsStringAsync();
}
In the particular example above, a small (6 bytes) .txt file is uploaded without issue. However, a large (619 MiB) .tif file runs into problems on the call to httpClient.SendAsync which results in the following set of nested Exceptions:
System.Net.Http.HttpRequestException - "Error while copying content to a stream."
System.IO.IOException - "Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.."
System.Net.Sockets.SocketException - "An existing connection was forcibly closed by the remote host."
What's a decent way of writing a Minimal API and calling it with HttpClient that will work for small and large files?
A:
Kestrel allows uploading 30 MB by default.
To upload larger files via Kestrel you might need to increase the max request size limit. This can be done by adding the "RequestSizeLimit" attribute. So for example for 1 GB:
app.MapPost(
"DocumentStorage/CreateDocument2/{documentId:int}",
[RequestSizeLimit(1_000_000_000)] async (PipeReader pipeReader, int documentId) =>
{
using var ms = new MemoryStream();
await pipeReader.CopyToAsync(ms);
ms.Position = 0;
return "";
});
You can also remove the size limit globally by setting
builder.WebHost.UseKestrel(o => o.Limits.MaxRequestBodySize = null);
A:
This answer is good, but the RequestSizeLimit filter doesn't work for minimal APIs; it's an MVC filter. You can use the IHttpMaxRequestBodySizeFeature to limit the size (assuming you're not running on IIS). Also, I made a change to accept the body as a Stream. This avoids the memory stream copy before calling the CreateDocument API:
app.MapPost(
"DocumentStorage/CreateDocument2/{documentId:int}",
async (Stream stream, int documentId, IDocumentStorageManager documentStorageManager) =>
{
return await documentStorageManager.CreateDocument(documentId, stream);
})
.AddEndpointFilter((context, next) =>
{
const int MaxBytes = 1024 * 1024 * 1024;
var maxRequestBodySizeFeature = context.HttpContext.Features.Get<IHttpMaxRequestBodySizeFeature>();
        // the limit can only be changed while the feature is not read-only
        if (maxRequestBodySizeFeature is not null and { IsReadOnly: false })
{
maxRequestBodySizeFeature.MaxRequestBodySize = MaxBytes;
}
return next(context);
});
If you're running on IIS then https://learn.microsoft.com/en-us/iis/configuration/system.webserver/security/requestfiltering/requestlimits/#configuration
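Once the server-side limit is raised, the HttpClient code from the question should work unchanged. For a quick sanity check of the endpoint with a large file, a hypothetical curl call (adjust the host, port, file name, and documentId):
curl -X POST --data-binary @large.tif http://localhost:5000/DocumentStorage/CreateDocument2/42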
| Design Minimal API and use HttpClient to post a file to it | I have a legacy system interfacing issue that my team has elected to solve by standing up a .NET 7 Minimal API which needs to accept a file upload. It should work for small and large files (let's say at least 500 MiB). The API will be called from a legacy system using HttpClient in a .NET Framework 4.7.1 app.
I can't quite seem to figure out how to design the signature of the Minimal API and how to call it with HttpClient in a way that totally works. It's something I've been hacking at on and off for several days, and haven't documented all of my approaches, but suffice it to say there have been varying results involving, among other things:
4XX and 500 errors returned by the HTTP call
An assortment of exceptions on either side
Calls that throw and never hit a breakpoint on the API side
Calls that get through but the Stream on the API end is not what I expect
Errors being different depending on whether the file being uploaded is small or large
Text files being persisted on the server that contain some of the HTTP headers in addition to their original contents
On the Minimal API side, I've tried all sorts of things in the signature (IFormFile, Stream, PipeReader, HttpRequest). On the calling side, I've tried several approaches (messing with headers, using the Flurl library, various content encodings and MIME types, multipart, etc).
This seems like it should be dead simple, so I'm trying to wipe the slate clean here, start with an example of something that partially works, and hope someone might be able to illuminate the path forward for me.
Example of Minimal API:
// IDocumentStorageManager is an injected dependency that takes an int and a Stream and returns a string of the newly uploaded file's URI
app.MapPost(
"DocumentStorage/CreateDocument2/{documentId:int}",
async (PipeReader pipeReader, int documentId, IDocumentStorageManager documentStorageManager) =>
{
using var ms = new MemoryStream();
await pipeReader.CopyToAsync(ms);
ms.Position = 0;
return await documentStorageManager.CreateDocument(documentId, ms);
});
Call the Minimal API using HttpClient:
// filePath is the path on local disk, uri is the Minimal API's URI
private static async Task<string> UploadWithHttpClient2(string filePath, string uri)
{
var fileStream = File.Open(filePath, FileMode.Open);
var content = new StreamContent(fileStream);
var httpRequestMessage = new HttpRequestMessage(HttpMethod.Post, uri);
var httpClient = new HttpClient();
httpRequestMessage.Content = content;
httpClient.Timeout = TimeSpan.FromMinutes(5);
var result = await httpClient.SendAsync(httpRequestMessage);
return await result.Content.ReadAsStringAsync();
}
In the particular example above, a small (6 bytes) .txt file is uploaded without issue. However, a large (619 MiB) .tif file runs into problems on the call to httpClient.SendAsync which results in the following set of nested Exceptions:
System.Net.Http.HttpRequestException - "Error while copying content to a stream."
System.IO.IOException - "Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.."
System.Net.Sockets.SocketException - "An existing connection was forcibly closed by the remote host."
What's a decent way of writing a Minimal API and calling it with HttpClient that will work for small and large files?
| [
"Kestrel allows uploading 30MB per default.\nTo upload larger files via kestrel you might need to increase the max size limit. This can be done by adding the \"RequestSizeLimit\" attribute. So for example for 1GB:\napp.MapPost(\n \"DocumentStorage/CreateDocument2/{documentId:int}\",\n [RequestSizeLimit(1_000_000_000)] async (PipeReader pipeReader, int documentId) =>\n {\n using var ms = new MemoryStream();\n await pipeReader.CopyToAsync(ms);\n ms.Position = 0;\n return \"\";\n });\n\nYou can also remove the size limit globally by setting\nbuilder.WebHost.UseKestrel(o => o.Limits.MaxRequestBodySize = null);\n\n",
"This answer is good but the RequestSizeLimit filter doesn't work for minimal APIs, it's an MVC filter. You can use the IHttpMaxRequestBodySizeFeature to limit the size (assuming you're not running on IIS). Also, I made a change to accept the body as a Stream. This avoids the memory stream copy before calling the CreateDocument API:\napp.MapPost(\n \"DocumentStorage/CreateDocument2/{documentId:int}\",\n async (Stream stream, int documentId, IDocumentStorageManager documentStorageManager) =>\n {\n return await documentStorageManager.CreateDocument(documentId, stream);\n })\n .AddEndpointFilter((context, next) =>\n {\n const int MaxBytes = 1024 * 1024 * 1024;\n\n var maxRequestBodySizeFeature = context.HttpContext.Features.Get<IHttpMaxRequestBodySizeFeature>();\n\n if (maxRequestBodySizeFeature is not null and { IsReadOnly: true })\n {\n maxRequestBodySizeFeature.MaxRequestBodySize = MaxBytes;\n }\n\n return next(context);\n });\n\nIf you're running on IIS then https://learn.microsoft.com/en-us/iis/configuration/system.webserver/security/requestfiltering/requestlimits/#configuration\n"
] | [
3,
0
] | [] | [] | [
".net_core",
"asp.net_core"
] | stackoverflow_0074524280_.net_core_asp.net_core.txt |
Q:
Generic function accepting both channels and slices
I'm trying to write a generic function in Go that would search for a value in slices and in channels in a similar way. Here is an example:
// MinOf returns the smallest number found among the channel / slice contents
func MinOf[T chan int | []int](input T) (result int) {
for _, value := range input {
if result > value {
result = value
}
}
return
}
But I'm getting the following compilation error: cannot range over input (variable of type T constrained by chan int|[]int) (T has no core type).
I have tried to create common interface, like so:
type Rangable interface {
chan int | []int
}
// MinOf returns the smallest number found among the channel / slice contents
func MinOf[T Rangable](input T) (result int) {
for _, value := range input {
if result > value {
result = value
}
}
return
}
Though the error has changed to cannot range over input (variable of type T constrained by Rangable) (T has no core type), it remains basically the same...
Is there any way to solve this task using generics, or can channels and slices simply not be "cast" to the same core type?
Thank you for any suggestions and ideas!
A:
You can't do this.
The range expression must have a core type to begin with. Unions with diverse type terms, do not have a core type because there isn't one single underlying type in common.
You can also intuitively see why range requires a core type: the semantics of ranging over slices and channels are different.
Ranging over a channel is potentially a blocking operation, ranging over a slice isn't
The iteration variables are different
for i, item := range someSlice {}
With slices i is the index of type int and item is the type of the slice elements.
for item := range someChan {}
With channels, item is the type of the chan elements and that's the only possible range variable.
The best you can have is a type switch:
func MinOf[T any, U chan T | []T](input U) (result int) {
switch t := any(input).(type) {
case chan T:
// range over chan
case []T:
// range over slice
}
return
}
But again, the behavior of this function (blocking vs. non-blocking) is type dependent, and it's unclear what advantages you get by using generics here.
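For illustration, a runnable sketch of the type-switch approach specialized to the int case from the question (it keeps the original sketch's quirk that result starts at 0):
package main

import "fmt"

// MinOf returns the smallest value found in a slice or received from a channel.
func MinOf[T chan int | []int](input T) (result int) {
	switch v := any(input).(type) {
	case chan int:
		for value := range v { // blocks until the channel is closed
			if result > value {
				result = value
			}
		}
	case []int:
		for _, value := range v {
			if result > value {
				result = value
			}
		}
	}
	return
}

func main() {
	fmt.Println(MinOf([]int{3, -1, 2})) // -1

	ch := make(chan int, 2)
	ch <- -5
	ch <- 4
	close(ch)
	fmt.Println(MinOf(ch)) // -5
}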
| Generic function accepting both channels and slices | I'm trying to write generic function in Golang that would search for a value in slices and in channels in the similar way. Here is an example:
// MinOf returns the smallest number found among the channel / slice contents
func MinOf[T chan int | []int](input T) (result int) {
for _, value := range input {
if result > value {
result = value
}
}
return
}
But I'm getting following compilation error: cannot range over input (variable of type T constrained by chan int|[]int) (T has no core type).
I have tried to create common interface, like so:
type Rangable interface {
chan int | []int
}
// MinOf returns the smallest number found among the channel / slice contents
func MinOf[T Rangable](input T) (result int) {
for _, value := range input {
if result > value {
result = value
}
}
return
}
Though, error has changed to cannot range over input (variable of type T constrained by Rangable) (T has no core type) it remains basically the same...
Is there any way how to solve this task using generics or channels and slices could not be "casted" to same core type?
Thank you for any suggestions and ideas!
| [
"You can't do this.\nThe range expression must have a core type to begin with. Unions with diverse type terms, do not have a core type because there isn't one single underlying type in common.\nYou can also intuitively see why range requires a core type: the semantics of ranging over slices and channels are different.\n\nRanging over a channel is potentially a blocking operation, ranging over a slice isn't\n\nThe iteration variables are different\n\n\nfor i, item := range someSlice {}\n\nWith slices i is the index of type int and item is the type of the slice elements.\nfor item := range someChan {}\n\nWith channels, item is the type of the chan elements and that's the only possible range variable.\nThe best you can have is a type switch:\nfunc MinOf[T any, U chan T | []T](input U) (result int) {\n switch t := any(input).(type) {\n case chan T:\n // range over chan\n case []T:\n // range over slice\n }\n return\n}\n\nBut again, the behavior of this function (blocking vs. non-blocking) is type dependant, and it's unclear what advantages you get by using generics here.\n"
] | [
3
] | [] | [] | [
"generics",
"go"
] | stackoverflow_0074674257_generics_go.txt |
Q:
analyze the train-validation accuracy learning curve
I am building a two-layer neural network from scratch on the Fashion MNIST dataset. In between, I use ReLU as the activation, and on the last layer I use softmax cross entropy. I am getting the learning curve below between train and validation accuracy, which is obviously wrong. But if you look at my loss curve, it's decreasing, yet my model is not learning. I am not able to get my head around where I am going wrong. Could anyone explain these two graphs and where I could possibly be going wrong?
A:
I don't know exactly what you are doing, and I don't know anything about your architecture, but it's wrong to use ReLU on the last layer.
Usually you leave the last layer as linear (no activation). This will produce the logits that enter the Softmax. The output of the softmax will try to approximate the probability distribution on the classes.
This could be a reason for your results.
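As a minimal numpy sketch of what the last layer should compute (array names and shapes are assumptions, not the asker's code):
import numpy as np

# hidden: (batch, n_hidden) ReLU output; W2/b2: last-layer weights and bias
# y: one-hot labels of shape (batch, n_classes)
def last_layer(hidden, W2, b2, y):
    logits = hidden @ W2 + b2                             # linear, no ReLU here
    shifted = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    loss = -np.mean(np.sum(y * np.log(probs + 1e-12), axis=1))
    return probs, loss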
| analyze the train-validation accuracy learning curve | I am building a two-layer neural network from scratch on the Fashion MNIST dataset. In between, using the RELU as activation and on the last layer, I am using softmax cross entropy. I am getting the below learning curve between train and validation accuracy which is wrong obviously. But if you see my loss curve, it's decreasing but my model is not learning. I am not able to my head around where I am going wrong. Could anyone explain these two graphs, like where I could be possibly going wrong?
| [
"I don't know exactly what you are doing, and I don't know anything about your architecture, but it's wrong to use ReLU on the last layer.\nUsually you leave the last layer as linear (no activation). This will produce the logits that enter the Softmax. The output of the softmax will try to approximate the probability distribution on the classes.\nThis could be a reason for your results.\n"
] | [
0
] | [] | [] | [
"cross_entropy",
"neural_network",
"numpy",
"python",
"softmax"
] | stackoverflow_0074671726_cross_entropy_neural_network_numpy_python_softmax.txt |
Q:
Django Rest Framework Cannot save a model it tells me the date must be a str
I have this Profile model that also has a location attached to it. I'm not trying to save the location now, only the Profile, but I get an error:
class Profile(models.Model):
# Gender
M = 'M'
F = 'F'
O = 'O'
GENDER = [
(M, "male"),
(F, "female"),
(O, "Other")
]
# Basic information
background = models.FileField(upload_to=background_to, null=True, blank=True)
photo = models.FileField(upload_to=image_to, null=True, blank=True)
slug = AutoSlugField(populate_from=['first_name', 'last_name', 'gender'])
first_name = models.CharField(max_length=100)
middle_name = models.CharField(max_length=100, null=True, blank=True)
last_name = models.CharField(max_length=100)
birthdate = models.DateField()
gender = models.CharField(max_length=1, choices=GENDER, default=None)
bio = models.TextField(max_length=5000, null=True, blank=True)
languages = ArrayField(models.CharField(max_length=30, null=True, blank=True), null=True, blank=True)
# Location information
website = models.URLField(max_length=256, null=True, blank=True)
# owner information
user = models.OneToOneField(User, on_delete=models.CASCADE)
created_at = models.DateTimeField(auto_now_add=True, verbose_name="created at")
updated_at = models.DateTimeField(auto_now=True, verbose_name="updated at")
class Meta:
verbose_name = "profile"
verbose_name_plural = "profiles"
db_table = "user_profiles"
def __str__(self):
return self.first_name + ' ' + self.last_name
def get_absolute_url(self):
return self.slug
And this is the view I am using to save the Profile. I tried sending the data to a serializer first and saving that, but the serializer was invalid every time:
class CreateProfileView(APIView):
permission_classes = [permissions.IsAuthenticated]
def post(self, request):
data = dict(request.data)
location = {}
location.update(street=data.pop('street'))
location.update(additional=data.pop('additional'))
location.update(country=data.pop('country'))
location.update(state=data.pop('state'))
location.update(city=data.pop('city'))
location.update(zip=data.pop('zip'))
location.update(phone=data.pop('phone'))
user_id = data.pop('user')
id = int((user_id[0]))
image = data.pop('photo')
user = User.objects.get(pk=id)
print(data['birthdate'])
new_profile = Profile.objects.create(**data, user=user)
# new_location = Location.objects.create(**location, profile=new_profile)
return Response("Profile saved successfully")
And this is the data coming in from the front end:
0: photo → File { name: "tumblr_005ddc5e92b6818f41d4dba4bb08e77e_bbe06c5b_540.jpg", lastModified: 1670127532084, size: 91844, … }
1: first_name → "Calvin"
2: middle_name → "undefined"
3: last_name → "Cani"
4: birthdate → "1971-09-01"
5: gender → "M"
6: bio → "This is general information about me"
7: languages → ""
8: street → "street one"
9: additional → "zwartkop"
10: country → "1"
11: state → "1"
12: city → "1"
13: zip → "0186"
14: phone → "0815252165"
15: website → ""
16: user → "1"
When I try to save a Profile I get the following error, which I cannot seem to find an answer for:
TypeError: fromisoformat: argument must be str
What is wrong, and how do I fix it?
I actually want to validate the data first and then save it. I tried to serialize the data first, but that proved fatal, so I took a different approach. I'm new to this and trying to learn how it all fits together. Thanks.
A:
The error you are encountering is likely due to the birthdate field in your Profile model being a DateField, while the value you are trying to save is not a plain string: after dict(request.data), each form value is a list containing the string, which is what makes fromisoformat fail. You must extract the string and convert it to a date object before saving it to the birthdate field.
Here is an example of how you can do this:
from datetime import datetime
# Your code here
class CreateProfileView(APIView):
permission_classes = [permissions.IsAuthenticated]
def post(self, request):
data = dict(request.data)
location = {}
location.update(street=data.pop('street'))
location.update(additional=data.pop('additional'))
location.update(country=data.pop('country'))
location.update(state=data.pop('state'))
location.update(city=data.pop('city'))
location.update(zip=data.pop('zip'))
location.update(phone=data.pop('phone'))
user_id = data.pop('user')
id = int((user_id[0]))
image = data.pop('photo')
user = User.objects.get(pk=id)
        # Convert the string value to a date object
        # (dict(request.data) wraps each value in a list, hence the [0])
        birthdate_str = data.pop('birthdate')[0]
        birthdate = datetime.strptime(birthdate_str, '%Y-%m-%d').date()
new_profile = Profile.objects.create(**data, birthdate=birthdate, user=user)
# new_location = Location.objects.create(**location, profile=new_profile)
return Response("Profile saved successfully")
| Django Rest Framework Cannot save a model it tells me the date must be a str | I have this Profile model that also has location attached to it but not trying to save the location now only trying to save the Profile but get an error:
class Profile(models.Model):
# Gender
M = 'M'
F = 'F'
O = 'O'
GENDER = [
(M, "male"),
(F, "female"),
(O, "Other")
]
# Basic information
background = models.FileField(upload_to=background_to, null=True, blank=True)
photo = models.FileField(upload_to=image_to, null=True, blank=True)
slug = AutoSlugField(populate_from=['first_name', 'last_name', 'gender'])
first_name = models.CharField(max_length=100)
middle_name = models.CharField(max_length=100, null=True, blank=True)
last_name = models.CharField(max_length=100)
birthdate = models.DateField()
gender = models.CharField(max_length=1, choices=GENDER, default=None)
bio = models.TextField(max_length=5000, null=True, blank=True)
languages = ArrayField(models.CharField(max_length=30, null=True, blank=True), null=True, blank=True)
# Location information
website = models.URLField(max_length=256, null=True, blank=True)
# owner information
user = models.OneToOneField(User, on_delete=models.CASCADE)
created_at = models.DateTimeField(auto_now_add=True, verbose_name="created at")
updated_at = models.DateTimeField(auto_now=True, verbose_name="updated at")
class Meta:
verbose_name = "profile"
verbose_name_plural = "profiles"
db_table = "user_profiles"
def __str__(self):
return self.first_name + ' ' + self.last_name
def get_absolute_url(self):
return self.slug
and this is the view I am using to save the Profile with. I tried sending the data to a serializer first and saving that but the serializer was invalid every time:
class CreateProfileView(APIView):
permission_classes = [permissions.IsAuthenticated]
def post(self, request):
data = dict(request.data)
location = {}
location.update(street=data.pop('street'))
location.update(additional=data.pop('additional'))
location.update(country=data.pop('country'))
location.update(state=data.pop('state'))
location.update(city=data.pop('city'))
location.update(zip=data.pop('zip'))
location.update(phone=data.pop('phone'))
user_id = data.pop('user')
id = int((user_id[0]))
image = data.pop('photo')
user = User.objects.get(pk=id)
print(data['birthdate'])
new_profile = Profile.objects.create(**data, user=user)
# new_location = Location.objects.create(**location, profile=new_profile)
return Response("Profile saved successfully")
and this the data coming in from the front end:
0: photo → File { name: "tumblr_005ddc5e92b6818f41d4dba4bb08e77e_bbe06c5b_540.jpg", lastModified: 1670127532084, size: 91844, … }
1: first_name → "Calvin"
2: middle_name → "undefined"
3: last_name → "Cani"
4: birthdate → "1971-09-01"
5: gender → "M"
6: bio → "This is general information about me"
7: languages → ""
8: street → "street one"
9: additional → "zwartkop"
10: country → "1"
11: state → "1"
12: city → "1"
13: zip → "0186"
14: phone → "0815252165"
15: website → ""
16: user → "1"
When I try and save a Profile I get the following error I cannot seem to find an answer for:
TypeError: fromisoformat: argument must be str
What is wrong please and how do I fix it?
I actually want to validate the data first and then save it and I tried to serialize the data first but that proved to be fatal, so I took a different approach. new to this and trying to learn so I know how it all fits together. Thanks
| [
"The error you are encountering is likely due to the birthdate field in your Profile model being a DateField, but the value you are trying to save is a string. You must convert the string value to a date object before saving it to the birthdate field.\nHere is an example of how you can do this:\nfrom datetime import datetime\n\n# Your code here\n\nclass CreateProfileView(APIView):\n permission_classes = [permissions.IsAuthenticated]\n\n def post(self, request):\n data = dict(request.data)\n location = {}\n location.update(street=data.pop('street'))\n location.update(additional=data.pop('additional'))\n location.update(country=data.pop('country'))\n location.update(state=data.pop('state'))\n location.update(city=data.pop('city'))\n location.update(zip=data.pop('zip'))\n location.update(phone=data.pop('phone'))\n user_id = data.pop('user')\n id = int((user_id[0]))\n image = data.pop('photo')\n user = User.objects.get(pk=id)\n\n # Convert the string value to a date object\n birthdate_str = data.pop('birthdate')\n birthdate = datetime.strptime(birthdate_str, '%Y-%m-%d').date()\n\n new_profile = Profile.objects.create(**data, birthdate=birthdate, user=user)\n # new_location = Location.objects.create(**location, profile=new_profile)\n return Response(\"Profile saved successfully\")\n\n\n"
] | [
1
] | [] | [] | [
"django",
"django_rest_framework",
"python"
] | stackoverflow_0074674389_django_django_rest_framework_python.txt |
Q:
How to submit a HTML checklist form and get the checked data in a json file upon form submission without reloading the page in python flask?
I am building an application that uses Python Flask. There is an HTML form that looks like this:
<form method="POST" action="/route_name">
<p>No Annotation<input type="checkbox" value="No annotation" name="mycheckbox" ></p>
<p>Connector<input type="checkbox" value="connector" name="mycheckbox"></p>
<p>Enclosure<input type="checkbox" value="enclosure" name="mycheckbox"></p>
<p>Text<input type="checkbox" value="text" name="mycheckbox"></p>
<input type="submit" value="Submit">
How do I get the checked data into a JSON file upon form submission without reloading the page?
A:
Attach an event listener to the submit button that calls a JavaScript function when the button is clicked.
<form method="POST" action="/route_name">
<p>No Annotation<input type="checkbox" value="No annotation" name="mycheckbox" ></p>
<p>Connector<input type="checkbox" value="connector" name="mycheckbox"></p>
<p>Enclosure<input type="checkbox" value="enclosure" name="mycheckbox"></p>
<p>Text<input type="checkbox" value="text" name="mycheckbox"></p>
<input type="submit" value="Submit" onclick="submitForm()">
</form>
<script>
function submitForm() {
// Get the checked data from the form
}
</script>
submitForm function is called when the submit button is clicked. This function is where you can implement the logic to get the checked data from the form.
To get the checked data from the form, you can use the querySelectorAll method to select all the checkbox elements in the form then loop through them to get the checked values. Store the checked values in an array and convert the array to a JSON object using the JSON.stringify method.
function submitForm() {
// Select all the checkbox elements in the form
const checkboxes = document.querySelectorAll('input[type="checkbox"]:checked');
// Create an array to store the checked values
const checkedValues = [];
// Loop through the checkboxes and get the checked values
checkboxes.forEach(checkbox => {
checkedValues.push(checkbox.value);
});
// Convert the array of checked values to a JSON object
const checkedData = JSON.stringify(checkedValues);
// Output the JSON object to the console
console.log(checkedData);
}
querySelectorAll method is used to select all the checked checkbox elements in the form. The checked values are then added to the checkedValues array, and the array is converted to a JSON object using the JSON.stringify method. The JSON object is then output to the console, so you can see the data that was checked.
Once you have the checked data in a JSON object, you can use Ajax to send the data to your Flask route without reloading the page. To do this, you can use the XMLHttpRequest object to send a POST request to your route. Note that with <input type="submit"> the browser will still submit the form and reload the page, so either change the control to type="button" or call event.preventDefault() in the handler.
function submitForm() {
  // Select all the checked checkbox elements in the form
  const checkboxes = document.querySelectorAll('input[type="checkbox"]:checked');
  // Collect the checked values and convert them to JSON
  const checkedData = JSON.stringify(Array.from(checkboxes, cb => cb.value));

  // Send the JSON to the Flask route without reloading the page
  const xhr = new XMLHttpRequest();
  xhr.open('POST', '/route_name');
  xhr.setRequestHeader('Content-Type', 'application/json');
  xhr.send(checkedData);
}
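A minimal sketch of the Flask side (the route name matches the form's action; the output file name is an assumption):
from flask import Flask, request, jsonify
import json

app = Flask(__name__)

@app.route('/route_name', methods=['POST'])
def route_name():
    checked = request.get_json()           # the list sent by submitForm()
    with open('checked.json', 'w') as f:   # persist the selections as JSON
        json.dump(checked, f)
    return jsonify(status='ok')            # no page reload happens on the client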
| How to submit a HTML checklist form and get the checked data in a json file upon form submission without reloading the page in python flask? | I am building an application that uses Python Flask. There is an HTML form that looks like this:
<form method="POST" action="/route_name">
<p>No Annotation<input type="checkbox" value="No annotation" name="mycheckbox" ></p>
<p>Connector<input type="checkbox" value="connector" name="mycheckbox"></p>
<p>Enclosure<input type="checkbox" value="enclosure" name="mycheckbox"></p>
<p>Text<input type="checkbox" value="text" name="mycheckbox"></p>
<input type="submit" value="Submit">
How do I get the checked data in a json file upon the form submission without reloading the page?
| [
"Attach an event listener to the submit button that calls a JavaScript function when the button is clicked.\n<form method=\"POST\" action=\"/route_name\">\n <p>No Annotation<input type=\"checkbox\" value=\"No annotation\" name=\"mycheckbox\" ></p>\n <p>Connector<input type=\"checkbox\" value=\"connector\" name=\"mycheckbox\"></p>\n <p>Enclosure<input type=\"checkbox\" value=\"enclosure\" name=\"mycheckbox\"></p>\n <p>Text<input type=\"checkbox\" value=\"text\" name=\"mycheckbox\"></p>\n <input type=\"submit\" value=\"Submit\" onclick=\"submitForm()\">\n</form>\n\n<script>\n function submitForm() {\n // Get the checked data from the form\n }\n</script>\n\nsubmitForm function is called when the submit button is clicked. This function is where you can implement the logic to get the checked data from the form.\nTo get the checked data from the form, you can use the querySelectorAll method to select all the checkbox elements in the form then loop through them to get the checked values. Store the checked values in an array and convert the array to a JSON object using the JSON.stringify method.\nfunction submitForm() {\n // Select all the checkbox elements in the form\n const checkboxes = document.querySelectorAll('input[type=\"checkbox\"]:checked');\n\n // Create an array to store the checked values\n const checkedValues = [];\n\n // Loop through the checkboxes and get the checked values\n checkboxes.forEach(checkbox => {\n checkedValues.push(checkbox.value);\n });\n\n // Convert the array of checked values to a JSON object\n const checkedData = JSON.stringify(checkedValues);\n\n // Output the JSON object to the console\n console.log(checkedData);\n}\n\nquerySelectorAll method is used to select all the checked checkbox elements in the form. The checked values are then added to the checkedValues array, and the array is converted to a JSON object using the JSON.stringify method. The JSON object is then output to the console, so you can see the data that was checked.\nOnce you have the checked data in a JSON object, you can use Ajax to send the data to your Flask route without reloading the page. To do this, you can use the XMLHttpRequest object to send a POST request to your route.\nfunction submitForm() {\n // Select all the checkbox elements in the form\n const checkboxes = document.querySelectorAll('input[type=\"checkbox\"]:checked');\n\n // Create an array to store the checked values\n const checked\n\n"
] | [
1
] | [] | [] | [
"flask",
"javascript",
"jquery"
] | stackoverflow_0074674404_flask_javascript_jquery.txt |
Q:
How to refer to the local parameters in Angualr within a function
I'm using Angular 14. I have a repetitive piece of code. I decided to make it generic, but I'm not able to refer to the parameters passed to that generic method because objects are involved in my code. Whenever I use a passed parameter on an object with the dot (.) operator, the compiler starts treating it as an actual key and throws an error that the key is not found.
Initial code without generic method:
setAllInputFields() {
// this code repeats
if (this.recordSpecificData.fruitBox.length === 0) {
this.myReactiveForm
.get('boxDetails.fruitBoxNumber')
?.setValue('not found');
} else {
const activeStatus = this.recordSpecificData.fruitBox.find(
(item: any) => item.fruitOrderStatus === 'A'
);
this.myReactiveForm
.get('boxDetails.fruitBoxNumber')
?.setValue(activeStatus.fruitBoxNumber);
}
// this code repeats
if (this.recordSpecificData.vegetableBox.length === 0) {
this.myReactiveForm
.get('boxDetails.vegetableBoxNumber')
?.setValue('not found');
} else {
const activeStatus = this.recordSpecificData.vegetableBox.find(
(item: any) => item.vegetableOrderStatus === 'A'
);
this.myReactiveForm
.get('boxDetails.vegetableBoxNumber')
?.setValue(activeStatus.vegetableBoxNumber);
}
// this code repeats 15 more times
}
Notice that in every repeated block only fruitBox, fruitBoxNumber, fruitOrderStatus and similarly vegetableBox, vegetableBoxNumber, vegetableOrderStatus change. So I decided to make a generic method which will accept 3 parameters:
I tried:
// For e.g. arg1 will be box type, arg2 will be box number and arg3 will be order status
genericMethod(arg1: any, arg2: any, arg3: any) {
if (this.recordSpecificData.arg1.length === 0) {
this.partsDetailsReactiveForm
.get('partsDetails.'+arg2)
?.setValue('not found');
} else {
const activeStatus = this.recordSpecificData.arg1.find(
(item: any) => item.arg3 === 'A'
);
this.partsDetailsReactiveForm
.get('partsDetails.'+arg2)
?.setValue(activeStatus.arg2);
}
}
But as you can see, the problem is that the compiler will start looking for literal keys like this.recordSpecificData.arg1 and item.arg3, which is causing the failure. Please help me.
A:
That's called "dot notation". There's another way to access properties in object which is bracket notation.
var a = 'key'; var obj = {key: 1}
obj.a = obj['a'] => return undefined as obj doesn't have property a
obj[a] = obj['key'] => return 1
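Applied to the question's code, a sketch of the generic method using bracket notation (arg names kept from the question; form/group names taken from the first snippet):
genericMethod(arg1: string, arg2: string, arg3: string) {
  const records = this.recordSpecificData[arg1];              // e.g. 'fruitBox'
  const control = this.myReactiveForm.get('boxDetails.' + arg2);
  if (records.length === 0) {
    control?.setValue('not found');
  } else {
    const activeStatus = records.find((item: any) => item[arg3] === 'A');
    control?.setValue(activeStatus?.[arg2]);                  // e.g. 'fruitBoxNumber'
  }
}

// usage:
// this.genericMethod('fruitBox', 'fruitBoxNumber', 'fruitOrderStatus');
// this.genericMethod('vegetableBox', 'vegetableBoxNumber', 'vegetableOrderStatus');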
| How to refer to the local parameters in Angualr within a function | I'm using Angular 14. I've a repetitive piece of code. I decided to make it generic but I'm not able to refer to the parameters passed to that generic method because Objects are involved in my code. Whenever I use the passed parameter with object with dot (.) operator, the compiler starts treating it as an actual key and throws error that key is not found.
Initial code without generic method:
setAllInputFields() {
// this code repeats
if (this.recordSpecificData.fruitBox.length === 0) {
this.myReactiveForm
.get('boxDetails.fruitBoxNumber')
?.setValue('not found');
} else {
const activeStatus = this.recordSpecificData.fruitBox.find(
(item: any) => item.fruitOrderStatus === 'A'
);
this.myReactiveForm
.get('boxDetails.fruitBoxNumber')
?.setValue(activeStatus.fruitBoxNumber);
}
// this code repeats
if (this.recordSpecificData.vegetableBox.length === 0) {
this.myReactiveForm
.get('boxDetails.vegetableBoxNumber')
?.setValue('not found');
} else {
const activeStatus = this.recordSpecificData.vegetableBox.find(
(item: any) => item.vegetableOrderStatus === 'A'
);
this.myReactiveForm
.get('boxDetails.vegetableBoxNumber')
?.setValue(activeStatus.vegetableBoxNumber);
}
// this code repeats 15 more times
}
Notice that in every repeated code only fruitBox , fruitBoxNumber, fruitOrderStatus and similarly vegetableBox , vegetableBoxNumber, vegetableOrderStatus are changing. So I decided to make a generic method which will accept 3 parameters:
I tried:
// For e.g. arg1 will be box type, arg2 will be box number and arg3 will be order status
genericMethod(arg1: any, arg2: any, arg3: any) {
if (this.recordSpecificData.arg1.length === 0) {
this.partsDetailsReactiveForm
.get('partsDetails.'+arg2)
?.setValue('not found');
} else {
const activeStatus = this.recordSpecificData.arg1.find(
(item: any) => item.arg3 === 'A'
);
this.partsDetailsReactiveForm
.get('partsDetails.'+arg2)
?.setValue(activeStatus.arg2);
}
}
But as you can see the problem. compiler will start looking for things like this.recordSpecificData.arg1 and item.arg3. this is causing the failure. Please help me.
| [
"That's called \"dot notation\". There's another way to access properties in object which is bracket notation.\nvar a = 'key'; var obj = {key: 1}\nobj.a = obj['a'] => return undefined as obj doesn't have property a\nobj[a] = obj['key'] => return 1\n\n"
] | [
0
] | [] | [] | [
"angular"
] | stackoverflow_0074669685_angular.txt |
Q:
matching pair of employees in a same department
I'm trying to make a list of employees working in the same department, like:
employeeName
department
employeeName
Tim
2
kim
Tim
2
Jim
Kim
2
Tim
Kim
2
Jim
Jim
2
Kim
Jim
2
Tim
Aim
3
Sim
Sim
3
Aim
But the only thing I can do for now is:
SELECT emp_name, dept_code
FROM employee
WHERE dept_code IN (SELECT dept_code FROM employee);
employeeName
department
Tim
2
Kim
2
Jim
2
Aim
3
Sim
3
How can I make a list pairing each employee with the other employees working in the same department? Thanks, gurus...
A:
To point this out first: I dislike the idea of creating a result that lists each "pair" twice, and I would prefer another, easier query whose results would be easier to read. I will come back to this later in this answer.
But anyway, if you really want to produce the outcome you have shown, we can do this with CROSS JOIN. This builds all combinations of employees.
In the WHERE clause, we will set the conditions that they must work in the same department, but have different names:
SELECT
e1.emp_name AS employeeName,
e1.dept_code AS department,
e2.emp_name AS employeeName
FROM
employee e1
CROSS JOIN employee e2
WHERE
e1.dept_code = e2.dept_code
AND e1.emp_name <> e2.emp_name
ORDER BY e1.dept_code, e1.emp_name, e2.emp_name;
To come back to the idea of making this much easier and better to read: we can just use LISTAGG with GROUP BY to produce a comma-separated list of employees per department. I highly recommend this approach due to much better performance and readability.
This query will do on new Oracle DB's:
SELECT dept_code,
LISTAGG (emp_name,',') AS employees
FROM employee
GROUP BY dept_code;
On older Oracle DB's, we need to add a WITHIN GROUP clause:
SELECT dept_code,
LISTAGG (emp_name,',')
WITHIN GROUP (ORDER BY emp_name) AS employees
FROM employee
GROUP BY dept_code;
This will produce following result for your sample data:
DEPT_CODE
EMPLOYEES
2
Jim,Kim,Tim
3
Aim,Sim
Here we can try out these things: db<>fiddle
A:
You will get all the pairs (A,B) and (B,A) of employees in the same department at the exclusion of all (A,A) with:
SELECT e1.emp_name AS first_emp_name, e1.dept_code, e2.emp_name AS second_emp_name
FROM employee e1
JOIN employee e2 ON e1.dept_code = e2.dept_code AND e1.emp_name <> e2.emp_name ;
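One caveat about joining on emp_name <> emp_name in both answers: it silently drops pairs whenever two employees in a department share a name. If the table has a primary key (an emp_id column is assumed here), it is safer to join on that:
SELECT e1.emp_name AS first_emp_name, e1.dept_code, e2.emp_name AS second_emp_name
  FROM employee e1
  JOIN employee e2
    ON e1.dept_code = e2.dept_code
   AND e1.emp_id <> e2.emp_id;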
| matching pair of employees in a same department | I'm trying to make a list of employees working in a same department like:
employeeName
department
employeeName
Tim
2
kim
Tim
2
Jim
Kim
2
Tim
Kim
2
Jim
Jim
2
Kim
Jim
2
Tim
Aim
3
Sim
Sim
3
Aim
But the only thing i can do for now is:
SELECT emp_name, dept_code
FROM employee
WHERE dept_code IN (SELECT dept_code FROM employee);
employeeName
department
Tim
2
Kim
2
Jim
2
Aim
3
Sim
3
How can I make a list pairing with the employee working in a same department? thanks gurus...
| [
"To first point that out: I dislike your idea to create such a result listing \"pairs\" twice and would prefer another, easier query whose results would be better to read. I will come back to this later in this answer.\nBut anyway, if you really want to produce the outcome you have shown, we can do this with CROSS JOIN. This builds all combinations of employees.\nIn the WHERE clause, we will set the conditions that they must work in the same department, but have different names:\nSELECT \ne1.emp_name AS employeeName, \ne1.dept_code AS department, \ne2.emp_name AS employeeName\nFROM \nemployee e1\nCROSS JOIN employee e2\nWHERE \ne1.dept_code = e2.dept_code\nAND e1.emp_name <> e2.emp_name\nORDER BY e1.dept_code, e1.emp_name, e2.emp_name;\n\nTo come back to the idea to make this much easier and better to read: We can just use LISTAGG with GROUP BY to produce a comma-separated list of employees per department. I highly recommend to use this approach due to much better performance and readability.\nThis query will do on new Oracle DB's:\nSELECT dept_code, \nLISTAGG (emp_name,',') AS employees\nFROM employee\nGROUP BY dept_code;\n\nOn older Oracle DB's, we need to add a WITHIN GROUP clause:\nSELECT dept_code, \nLISTAGG (emp_name,',') \n WITHIN GROUP (ORDER BY emp_name) AS employees\nFROM employee\nGROUP BY dept_code;\n\nThis will produce following result for your sample data:\n\n\n\n\nDEPT_CODE\nEMPLOYEES\n\n\n\n\n2\nJim,Kim,Tim\n\n\n3\nAim,Sim\n\n\n\n\nHere we can try out these things: db<>fiddle\n",
"You will get all the pairs (A,B) and (B,A) of employees in the same department at the exclusion of all (A,A) with:\nSELECT e1.emp_name AS first_emp_name, e1.dept_code, e2.emp_name AS second_emp_name\n FROM employee e1\nJOIN employee e2 ON e1.dept_code = e2.dept_code AND e1.emp_name <> e2.emp_name ;\n\n"
] | [
3,
0
] | [] | [] | [
"oracle",
"sql"
] | stackoverflow_0074672920_oracle_sql.txt |
Q:
ValueError: could not convert string to float: '"815745789754417152"'
This is the error code:
ValueError Traceback (most recent call last)
Input In [42], in <cell line: 3>()
1 from sklearn.neighbors import KNeighborsClassifier as knn
2 classifier=knn(n_neighbors=5)
----> 3 classifier.fit(X,y)
4 bots = training_data[training_data.bot==1]
5 Nbots = training_data[training_data.bot==0]
After running it, I get this error:
ValueError: could not convert string to float: '"815745789754417152"'
My code was shared as a screenshot in the original post (image not reproduced here).
A:
The value itself is the string '"815745789754417152"', including the literal double quotes; float() cannot convert the quote characters.
You can strip them off with:
string = string[1:-1]
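Since the classifier is fitted on a whole DataFrame, the cleanup usually has to be applied to the offending column(s); a sketch, assuming the column is called 'id' in training_data:
import pandas as pd

# Strip the stray quotes from every value, then make the column numeric
training_data['id'] = (training_data['id']
                       .astype(str)
                       .str.strip('"')
                       .astype('int64'))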
| ValueError: could not convert string to float: '"815745789754417152"' | This is error code
ValueError Traceback (most recent call last)
Input In [42], in <cell line: 3>()
1 from sklearn.neighbors import KNeighborsClassifier as knn
2 classifier=knn(n_neighbors=5)
----> 3 classifier.fit(X,y)
4 bots = training_data[training_data.bot==1]
5 Nbots = training_data[training_data.bot==0]
After result show this error
ValueError: could not convert string to float: '"815745789754417152"'
My code using
enter image description here
| [
"The string itself seems to be \"815745789754417152\". It can't convert \" to a numeric value.\nYou can strip it off by:\nstring = string[1:-1]\n\n"
] | [
1
] | [] | [] | [
"python"
] | stackoverflow_0074674430_python.txt |
Q:
Disabling mouseEvents after gameOver
I have a game project in which I am trying to disable all mouse interactions after the game has ended, and re-enable them only after I restart the game.
I tried using a condition with the in-game timer such that it enables the mouseEvent function only when the timer is above 0s.
if (startTime > 0) {
setupMouseInteraction();
}
However, I am still able to interact with the game despite the timer running out.
A:
If you want to disable mouse interactions with your game after the game has ended, you can use the pointer-events CSS property to prevent users from interacting with the game elements. This property controls whether or not an element can be the target of pointer events (such as mouse clicks or hover events), and it can be set to none to disable all pointer events for an element.
Here is an example of how you could use the pointer-events property to disable mouse interactions with your game after the game has ended:
const gameOverStyles = {
pointerEvents: 'none',
};
function App() {
const [gameOver, setGameOver] = useState(false);
// Other code for your game...
if (gameOver) {
return <div style={gameOverStyles}>Game Over!</div>;
}
// Rest of your game code...
}
In this example, we are using the pointer-events property with the none value to disable all pointer events for the div element that is shown when the game is over. This will prevent users from interacting with the game elements even if the timer has run out and the game is technically still running.
You can also use the pointer-events property on specific elements within your game to selectively disable pointer events for those elements. For example, if you want to disable pointer events for a button that would normally allow users to restart the game, you could do something like this:
const gameOverStyles = {
pointerEvents: 'none',
};
function App() {
const [gameOver, setGameOver] = useState(false);
// Other code for your game...
if (gameOver) {
return (
<div>
<div>Game Over!</div>
<button style={gameOverStyles}>Restart Game</button>
</div>
);
}
// Rest of your game code...
}
In this example, the pointer-events property is applied only to the button element, so users will still be able to interact with other elements in the game, but they will not be able to click on the button to restart the game.
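Since the question is tagged p5.js (the pointer-events approach above is DOM/React-flavored), a minimal p5.js-style alternative is to gate the mouse handlers on a flag; gameOver and restartGame() are assumptions about your project, and startTime is the timer from the question:
let gameOver = false;

function mousePressed() {
  if (gameOver) return;                 // ignore every click once the game ends
  // ...normal in-game click handling...
}

function draw() {
  if (startTime <= 0) gameOver = true;  // flip the flag when the timer runs out
  // ...rest of the game loop...
}

function restartGame() {
  gameOver = false;                     // re-enable interaction on restart
  // ...reset startTime and other state...
}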
| Disabling mouseEvents after gameOver | I have a game project that I am trying to disable any mouse interactions on it after the game has ended and only to re-enable it after i restart my game.
I tried using a condition with the in-game timer such that it enables the mouseEvent function only when the timer is above 0s.
if (startTime > 0) {
setupMouseInteraction();
}
However I am still able to interact with the game despite the timer running out.
| [
"If you want to disable mouse interactions with your game after the game has ended, you can use the pointer-events CSS property to prevent users from interacting with the game elements. This property controls whether or not an element can be the target of pointer events (such as mouse clicks or hover events), and it can be set to none to disable all pointer events for an element.\nHere is an example of how you could use the pointer-events property to disable mouse interactions with your game after the game has ended:\nconst gameOverStyles = {\n pointerEvents: 'none',\n};\n\nfunction App() {\n const [gameOver, setGameOver] = useState(false);\n\n // Other code for your game...\n\n if (gameOver) {\n return <div style={gameOverStyles}>Game Over!</div>;\n }\n\n // Rest of your game code...\n}\n\nIn this example, we are using the pointer-events property with the none value to disable all pointer events for the div element that is shown when the game is over. This will prevent users from interacting with the game elements even if the timer has run out and the game is technically still running.\nYou can also use the pointer-events property on specific elements within your game to selectively disable pointer events for those elements. For example, if you want to disable pointer events for a button that would normally allow users to restart the game, you could do something like this:\nconst gameOverStyles = {\n pointerEvents: 'none',\n};\n\nfunction App() {\n const [gameOver, setGameOver] = useState(false);\n\n // Other code for your game...\n\n if (gameOver) {\n return (\n <div>\n <div>Game Over!</div>\n <button style={gameOverStyles}>Restart Game</button>\n </div>\n );\n }\n\n // Rest of your game code...\n}\n\nIn this example, the pointer-events property is applied only to the button element, so users will still be able to interact with other elements in the game, but they will not be able to click on the button to restart the game.\n"
] | [
0
] | [] | [] | [
"javascript",
"p5.js"
] | stackoverflow_0074674199_javascript_p5.js.txt |
Q:
Result of git rebase is different from what is shown in the git documentation
Suppose I create a simple local repo following the example shown in the main docs for git rebase:
A---B---C topic
/
D---E---F---G master
I'm on Windows so I use PowerShell to do this (included for convenience):
md first-example
cd first-example
git init
function Create-Commits
{
param (
$Commits,
$Branch
)
foreach ($commit in $commits)
{
git checkout $Branch
new-item "$commit.txt"
git add "$commit.txt"
git commit -m "$commit"
git tag $commit
}
}
Create-Commits -Commits @("D", "E") -Branch master
git branch topic
Create-Commits -Commits @("A") -Branch topic
Create-Commits @("F") -Branch master
Create-Commits -Commits @("B") -Branch topic
Create-Commits -Commits @("G") -Branch master
Create-Commits -Commits @("C") -Branch topic
git log --graph --format="%(describe:tags=true)" --all
cd ../
Now, according to the text in the docs:
Assume the following history exists and the current branch is "topic":
A---B---C topic
/
D---E---F---G master
From this point, the result of either of the following commands:
git rebase master
git rebase master topic
would be:
A'--B'--C' topic
/
D---E---F---G master
When I try this, here's what I get for git log --graph --format="%(describe:tags=true)" --all:
* G-3-g57a4992
* G-2-gcb715a5
* G-1-g5334a53
* G
* F
| * C
| * B
| * A
|/
* E
* D
Here's what I get for git log --graph --format="%(describe:tags=true)" topic
* G-3-g57a4992
* G-2-gcb715a5
* G-1-g5334a53
* G
* F
* E
* D
And here's what I get for git log --graph --format="%(describe:tags=true)" master
* G
* F
* E
* D
Aside from the fact that the resulting revision history is different from what is stated in the docs, the commits with tags A, B and C don't seem to belong to any branch.
What happened to those commits, and why were they not removed altogether as suggested by the documentation? Do they belong to any specific branch now?
A:
The history still matches, the Git documentation only draws them on 2 parallel lines, your history shows them as a single line.
A-B-C-D
and
A-B
\
C-D
is exactly the same history.
As to why you still see your old commits: git rebase never removes commits, it only recreates/copies them and then sets the "branch" label to point to the new commits. The old commits are still in the Git object database (the .git/objects directory).
git log starts at refs (branches or tags) and will display all commits reachable through those refs; starting at the commit pointed to by the ref, then walking the parent: pointers of each commit.
You are running git tag $commit for every commit, so they remain reachable, even after creating copies of them and moving the original branch pointer.
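Because your script tagged every commit, the tags A, B and C keep the pre-rebase commits reachable, which is why --all still draws the second line. To see the documented picture, drop those tags (names taken from your script) and look again:
git tag -d A B C
git log --graph --oneline --all
The old A/B/C commits then become unreachable and disappear from the output (they are eventually pruned by git gc), leaving the single line D-E-F-G-A'-B'-C' with master at G and topic at C'.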
| Result of git rebase is different from what is shown in the git documentation | Suppose I create a simple local repo following the example shown in the main docs for git rebase:
A---B---C topic
/
D---E---F---G master
I'm on windows so I use Powershell to do this, included for convenience:
md first-example
cd first-example
git init
function Create-Commits
{
param (
$Commits,
$Branch
)
foreach ($commit in $commits)
{
git checkout $Branch
new-item "$commit.txt"
git add "$commit.txt"
git commit -m "$commit"
git tag $commit
}
}
Create-Commits -Commits @("D", "E") -Branch master
git branch topic
Create-Commits -Commits @("A") -Branch topic
Create-Commits @("F") -Branch master
Create-Commits -Commits @("B") -Branch topic
Create-Commits -Commits @("G") -Branch master
Create-Commits -Commits @("C") -Branch topic
git log --graph --format="%(describe:tags=true)" --all
cd ../
Now, according to the text in the docs:
Assume the following history exists and the current branch is "topic":
A---B---C topic
/
D---E---F---G master
From this point, the result of either of the following commands:
git rebase master
git rebase master topic
would be:
A'--B'--C' topic
/
D---E---F---G master
When I try this, here's what I get for git log --graph --format="%(describe:tags=true)" --all:
* G-3-g57a4992
* G-2-gcb715a5
* G-1-g5334a53
* G
* F
| * C
| * B
| * A
|/
* E
* D
Here's what I get for git log --graph --format="%(describe:tags=true)" topic
* G-3-g57a4992
* G-2-gcb715a5
* G-1-g5334a53
* G
* F
* E
* D
And here's what I get for git log --graph --format="%(describe:tags=true)" master
* G
* F
* E
* D
Aside from the fact that the resulting revision history is different from what is stated in the docs, the commits with tags A, B and C don't seem to belong to any branch.
What happened with those commits and why have they not been removed altogether as suggested by the documentation? Do they belong to any specific branch now?
| [
"The history still matches, the Git documentation only draws them on 2 parallel lines, your history shows them as a single line.\nA-B-C-D\n\nand\nA-B\n \\\n C-D\n\nis exactly the same history.\nAs to why you still see your old commits: git rebase never removes commits, it only recreates/copies them and then sets the \"branch\" label to point to the new commits. The old commits are still in the Git object database (the .git/objects directory).\ngit log starts at refs (branches or tags) and will display all commits reachable through those refs; starting at the commit pointed to by the ref, then walking the parent: pointers of each commit.\nYou are running git tag $commit for every commit, so they remain reachable, even after creating copies of them and moving the original branch pointer.\n"
] | [
0
] | [] | [] | [
"git",
"git_rebase",
"powershell",
"version_control"
] | stackoverflow_0074121544_git_git_rebase_powershell_version_control.txt |
Q:
Spring doesn't see WebSecurityConfigurerAdapter
I want to create a WebSecurityConfig class that extends WebSecurityConfigurerAdapter, but I always get the error "Cannot resolve symbol 'WebSecurityConfigurerAdapter'".
I have already tried to add different dependencies. Here is my Gradle file:
dependencies {
runtimeOnly 'org.postgresql:postgresql'
implementation group: 'org.postgresql', name: 'postgresql', version: '42.5.1'
implementation group: 'org.springframework.boot', name: 'spring-boot-starter-data-jpa', version: '3.0.0'
implementation group: 'org.springframework.boot', name: 'spring-boot-starter-data-rest', version: '3.0.0'
implementation group: 'org.springframework.boot', name: 'spring-boot-starter-tomcat', version: '3.0.0'
implementation group: 'org.springframework.boot', name: 'spring-boot-starter-web', version: '3.0.0'
implementation group: 'io.jsonwebtoken', name: 'jjwt-api', version: '0.11.5'
implementation group: 'org.postgresql', name: 'postgresql', version: '42.5.1'
implementation group: 'org.springframework.boot', name: 'spring-boot-starter-validation', version: '3.0.0'
implementation group: 'org.springframework.security', name: 'spring-security-core', version: '6.0.0'
implementation group: 'org.springframework.security', name: 'spring-security-config', version: '6.0.0'
implementation group: 'org.springframework.security', name: 'spring-security-web', version: '6.0.0'
implementation group: 'org.springframework.security', name: 'spring-security-oauth2-jose', version: '6.0.0'
implementation group: 'org.springframework.security', name: 'spring-security-oauth2-resource-server', version: '6.0.0'
implementation group: 'org.springframework.boot', name: 'spring-boot-starter-security', version: '3.0.0'
implementation group: 'org.springframework.boot', name: 'spring-boot-starter-thymeleaf', version: '3.0.0'
implementation group: 'org.thymeleaf.extras', name: 'thymeleaf-extras-springsecurity5', version: '3.1.0.RELEASE'
}
Maybe I'm missing something easy. Can you help me with this, please?
I have already spent two days on this.
There is the same question on stack overflow (Cannot resolve symbol WebSecurityConfigurerAdapter), but it doesn't help me.
Here is my file with the WebSecurityConfig class:
import nsu.project.springserver.security.jwt.AuthEntryPointJwt;
import nsu.project.springserver.security.jwt.AuthTokenFilter;
import nsu.project.springserver.security.services.UserDetailsServiceImpl;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.authentication.AuthenticationManager;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.http.SessionCreationPolicy;
import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;
import org.springframework.security.crypto.password.PasswordEncoder;
import org.springframework.security.web.authentication.UsernamePasswordAuthenticationFilter;
@Configuration
@EnableWebSecurity
public class WebSecurityConfig extends WebSecurityConfigurerAdapter {
....
}
A:
WebSecurityConfigurerAdapter was removed in Spring Boot 3 (Spring Security 6).
Expose a SecurityFilterChain bean instead. Open the manual or have a look at those tutorials.
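For reference, a minimal sketch of that replacement (class name and rule are illustrative):
@Configuration
@EnableWebSecurity
public class WebSecurityConfig {

    @Bean
    public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        // replaces the old configure(HttpSecurity) override
        http.authorizeHttpRequests(auth -> auth.anyRequest().authenticated())
            .httpBasic();
        return http.build();
    }
}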
A:
Video from Dan Vega:
What's new in Spring Security 6
Spring Security: Upgrading the Deprecated WebSecurityConfigurerAdapter
Configure HTTP Security
More importantly, if we want to avoid deprecation for HTTP security, we can create a SecurityFilterChain bean.
For example, suppose we want to secure the endpoints depending on the roles, and leave an anonymous entry point only for login. We'll also restrict any delete request to an admin role. We'll use Basic Authentication:
@Bean
public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
http.csrf()
.disable()
            .authorizeRequests()
            // requestMatchers replaces antMatchers, which was removed in Spring Security 6
            .requestMatchers(HttpMethod.DELETE)
            .hasRole("ADMIN")
            .requestMatchers("/admin/**")
            .hasAnyRole("ADMIN")
            .requestMatchers("/user/**")
            .hasAnyRole("USER", "ADMIN")
            .requestMatchers("/login/**")
.anonymous()
.anyRequest()
.authenticated()
.and()
.httpBasic()
.and()
.sessionManagement()
.sessionCreationPolicy(SessionCreationPolicy.STATELESS);
return http.build();
}
The HTTP security will build a DefaultSecurityFilterChain object to load request matchers and filters.
Spring Security: Upgrading the Deprecated WebSecurityConfigurerAdapter
 | Spring doesn't see WebSecurityConfigurerAdapter | I want to create a WebSecurityConfig class that extends WebSecurityConfigurerAdapter, but I always get the error "Cannot resolve symbol 'WebSecurityConfigurerAdapter'".
I have already tried to add different dependencies. Here is my Gradle file:
dependencies {
runtimeOnly 'org.postgresql:postgresql'
implementation group: 'org.postgresql', name: 'postgresql', version: '42.5.1'
implementation group: 'org.springframework.boot', name: 'spring-boot-starter-data-jpa', version: '3.0.0'
implementation group: 'org.springframework.boot', name: 'spring-boot-starter-data-rest', version: '3.0.0'
implementation group: 'org.springframework.boot', name: 'spring-boot-starter-tomcat', version: '3.0.0'
implementation group: 'org.springframework.boot', name: 'spring-boot-starter-web', version: '3.0.0'
implementation group: 'io.jsonwebtoken', name: 'jjwt-api', version: '0.11.5'
implementation group: 'org.postgresql', name: 'postgresql', version: '42.5.1'
implementation group: 'org.springframework.boot', name: 'spring-boot-starter-validation', version: '3.0.0'
implementation group: 'org.springframework.security', name: 'spring-security-core', version: '6.0.0'
implementation group: 'org.springframework.security', name: 'spring-security-config', version: '6.0.0'
implementation group: 'org.springframework.security', name: 'spring-security-web', version: '6.0.0'
implementation group: 'org.springframework.security', name: 'spring-security-oauth2-jose', version: '6.0.0'
implementation group: 'org.springframework.security', name: 'spring-security-oauth2-resource-server', version: '6.0.0'
implementation group: 'org.springframework.boot', name: 'spring-boot-starter-security', version: '3.0.0'
implementation group: 'org.springframework.boot', name: 'spring-boot-starter-thymeleaf', version: '3.0.0'
implementation group: 'org.thymeleaf.extras', name: 'thymeleaf-extras-springsecurity5', version: '3.1.0.RELEASE'
}
Maybe I'm missing something easy. Can you help me with this, please?
I have already spent two days on this.
There is the same question on stack overflow (Cannot resolve symbol WebSecurityConfigurerAdapter), but it doesn't help me.
Here is my file with the WebSecurityConfig class:
import nsu.project.springserver.security.jwt.AuthEntryPointJwt;
import nsu.project.springserver.security.jwt.AuthTokenFilter;
import nsu.project.springserver.security.services.UserDetailsServiceImpl;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.authentication.AuthenticationManager;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.http.SessionCreationPolicy;
import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;
import org.springframework.security.crypto.password.PasswordEncoder;
import org.springframework.security.web.authentication.UsernamePasswordAuthenticationFilter;
@Configuration
@EnableWebSecurity
public class WebSecurityConfig extends WebSecurityConfigurerAdapter {
....
}
| [
"WebsecurityConfigurerAdapter was removed in Spring-boot 3.\nExpose a SecurityFilterChain bean instead. Open the manual or have look at those tutorials.\n",
"Video from Dan Vega:\nWhat's new in Spring Security 6\nSpring Security: Upgrading the Deprecated WebSecurityConfigurerAdapter\nConfigure HTTP Security\nMore importantly, if we want to avoid deprecation for HTTP security, we can create a SecurityFilterChain bean.\nFor example, suppose we want to secure the endpoints depending on the roles, and leave an anonymous entry point only for login. We'll also restrict any delete request to an admin role. We'll use Basic Authentication:\n@Bean\npublic SecurityFilterChain filterChain(HttpSecurity http) throws Exception {\n http.csrf()\n .disable()\n .authorizeRequests()\n .antMatchers(HttpMethod.DELETE)\n .hasRole(\"ADMIN\")\n .antMatchers(\"/admin/**\")\n .hasAnyRole(\"ADMIN\")\n .antMatchers(\"/user/**\")\n .hasAnyRole(\"USER\", \"ADMIN\")\n .antMatchers(\"/login/**\")\n .anonymous()\n .anyRequest()\n .authenticated()\n .and()\n .httpBasic()\n .and()\n .sessionManagement()\n .sessionCreationPolicy(SessionCreationPolicy.STATELESS);\n\n return http.build();\n}\n\nThe HTTP security will build a DefaultSecurityFilterChain object to load request matchers and filters.\nspring Security: Upgrading the Deprecated WebSecurityConfigurerAdapter\n"
] | [
1,
0
] | [] | [] | [
"java",
"spring",
"spring_security"
] | stackoverflow_0074673260_java_spring_spring_security.txt |
Q:
Why does Rust complain about Self's unknown size in one trait but not in another similar trait?
Consider the two traits below:
trait FromSomeStr {
// rust doesn't complain about Self unknown size in this trait
fn from_input(input: String) -> Self;
}
trait FromOtherStr: FromSomeStr
{
// rust complains about Self unknown size in this trait, but doesn't in FromSomeStr trait
fn from_some_other_str(input: String) -> Self {
Self::from_input(input)
}
}
Rust compiler response:
Compiling playground v0.0.1 (/playground)
error[E0277]: the size for values of type `Self` cannot be known at compilation time
--> src/main.rs:9:46
|
9 | fn from_some_other_str(input: String) -> Self {
| ^^^^ doesn't have a size known at compile-time
|
= note: the return type of a function must have a statically known size
help: consider further restricting `Self`
|
9 | fn from_some_other_str(input: String) -> Self where Self: Sized {
| +++++++++++++++++
error[E0277]: the size for values of type `Self` cannot be known at compilation time
--> src/main.rs:10:9
|
10 | Self::from_input(input)
| ^^^^^^^^^^^^^^^^ doesn't have a size known at compile-time
|
= note: the return type of a function must have a statically known size
help: consider further restricting `Self`
|
9 | fn from_some_other_str(input: String) -> Self where Self: Sized {
| +++++++++++++++++
For more information about this error, try `rustc --explain E0277`.
error: could not compile `playground` due to 2 previous errors
Both methods from_input and from_some_other_str return Self and we don't know about it's size unless some struct implements both the traits.
Rust strangely asks to constrain Self with the Sized trait in FromOtherStr but doesn't in FromSomeStr. Why this behavior?
Playground link: https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=60bfb31c844dd7b765b7921f64b808ce
I expected Rust not to give any error.
A:
Disclaimer: I'm not a Rust expert, and this is more of a gut feeling than actual knowledge. Please take this answer with a grain of salt.
Both should fail. A return value always has to be Sized, because it has to live in memory somewhere, and the compiler has to know its size for that.
So the real question is not why the second case fails, but why the first case doesn't.
@ChayimFriedman seems to have looked at the compiler code, and the answer is that the compiler only type-checks trait functions that have a default implementation (a body).
Whether or not this is a good design decision is not the subject of discussion here, but that's at least the reason for the behavior you see.
Either way, the takeaway is: always use Sized as a trait requirement if one of your functions returns Self.
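Applied to the traits from the question, that takeaway looks like this (a minimal sketch that compiles):
trait FromSomeStr: Sized {
    fn from_input(input: String) -> Self;
}

trait FromOtherStr: FromSomeStr {
    // Self: Sized now follows from the supertrait bound on FromSomeStr
    fn from_some_other_str(input: String) -> Self {
        Self::from_input(input)
    }
}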
Original answer:
Let's simplify your problem:
pub trait FromSomeStr {
// rust doesn't complain about Self unknown size in this trait
fn from_input(input: String) -> Self;
}
pub trait FromOtherStr: FromSomeStr {
// rust complains about Self unknown size in this trait, but doesn't in FromSomeStr trait
fn from_some_other_str(input: String) {
Self::from_input(input);
}
}
error[E0277]: the size for values of type `Self` cannot be known at compilation time
--> src/lib.rs:9:9
|
9 | Self::from_input(input);
| ^^^^^^^^^^^^^^^^ doesn't have a size known at compile-time
|
= note: the return type of a function must have a statically known size
help: consider further restricting `Self`
|
8 | fn from_some_other_str(input: String) where Self: Sized {
| +++++++++++++++++
You can see it still happens even if you don't return anything. The problem here is the call itself.
It's gone when we remove the return from the called function:
pub trait FromSomeStr {
// rust doesn't complain about Self unknown size in this trait
fn from_input(input: String);
}
pub trait FromOtherStr: FromSomeStr {
// rust complains about Self unknown size in this trait, but doesn't in FromSomeStr trait
fn from_some_other_str(input: String) {
Self::from_input(input);
}
}
So the question is, why so?
It makes sense to me why Rust complains about calling the function, because that's where the object gets created. Not knowing the size of the return value of the called function makes it impossible to compile it.
I don't know why it doesn't complain at the definition already, though. Someone else might have to explain that. But my guess is that errors like this only manifest at the calling site, not at the definition site.
Example in the Rust library:
Default is a trait that behaves very similar to your FromSomeStr. But it works:
pub trait FromOtherStr: Default {
// rust complains about Self unknown size in this trait, but doesn't in FromSomeStr trait
fn from_some_other_str(_input: String) -> Self {
Self::default()
}
}
So why is that?
If you look at the implementation of Default, you can see:
pub trait Default: Sized {
// ...
}
And I guess the Sized here was introduced specifically because otherwise it would be impossible to return a Self object.
 | Why does Rust complain about Self's unknown size in one trait but not in another similar trait? | Consider the two traits below:
trait FromSomeStr {
// rust doesn't complain about Self unknown size in this trait
fn from_input(input: String) -> Self;
}
trait FromOtherStr: FromSomeStr
{
// rust complains about Self unknown size in this trait, but doesn't in FromSomeStr trait
fn from_some_other_str(input: String) -> Self {
Self::from_input(input)
}
}
Rust compiler response:
Compiling playground v0.0.1 (/playground)
error[E0277]: the size for values of type `Self` cannot be known at compilation time
--> src/main.rs:9:46
|
9 | fn from_some_other_str(input: String) -> Self {
| ^^^^ doesn't have a size known at compile-time
|
= note: the return type of a function must have a statically known size
help: consider further restricting `Self`
|
9 | fn from_some_other_str(input: String) -> Self where Self: Sized {
| +++++++++++++++++
error[E0277]: the size for values of type `Self` cannot be known at compilation time
--> src/main.rs:10:9
|
10 | Self::from_input(input)
| ^^^^^^^^^^^^^^^^ doesn't have a size known at compile-time
|
= note: the return type of a function must have a statically known size
help: consider further restricting `Self`
|
9 | fn from_some_other_str(input: String) -> Self where Self: Sized {
| +++++++++++++++++
For more information about this error, try `rustc --explain E0277`.
error: could not compile `playground` due to 2 previous errors
Both methods from_input and from_some_other_str return Self and we don't know about it's size unless some struct implements both the traits.
Rust strangely asks to constrain Self with the Sized trait in FromOtherStr but doesn't in FromSomeStr. Why this behavior?
Playground link: https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=60bfb31c844dd7b765b7921f64b808ce
I expected Rust not to give any error.
| [
"Disclaimer: I'm not a Rust expert, and this is more of a gut feeling than actual knowledge. Please take this answer with a grain of salt.\nBoth should fail. A return value always has to be Sized, because it has to live in memory somewhere, and the compiler has to know its size for that.\nSo the real question is not why the second case fails, but why the first case doesn't.\n@ChayimFriedman seems to have looked at the compiler code, and the answer is that compilers only type check functions in traits with a default implementation.\nWhether or not this is a good design decision is not the subject of discussion here, but that's at least the reason for the behavior you see.\nEither way, the takeaway is: always use Sized as a trait requirement if one of your functions returns Self.\n\nOriginal answer:\nLet's simplify your problem:\npub trait FromSomeStr {\n // rust doesn't complain about Self unknown size in this trait\n fn from_input(input: String) -> Self;\n}\n\npub trait FromOtherStr: FromSomeStr {\n // rust complains about Self unknown size in this trait, but doesn't in FromSomeStr trait\n fn from_some_other_str(input: String) {\n Self::from_input(input);\n }\n}\n\nerror[E0277]: the size for values of type `Self` cannot be known at compilation time\n --> src/lib.rs:9:9\n |\n9 | Self::from_input(input);\n | ^^^^^^^^^^^^^^^^ doesn't have a size known at compile-time\n |\n = note: the return type of a function must have a statically known size\nhelp: consider further restricting `Self`\n |\n8 | fn from_some_other_str(input: String) where Self: Sized {\n | +++++++++++++++++\n\nYou can see it still happens even if you don't return anything. The problem here is the call itself.\nIt's gone when we remove the return from the called function:\npub trait FromSomeStr {\n // rust doesn't complain about Self unknown size in this trait\n fn from_input(input: String);\n}\n\npub trait FromOtherStr: FromSomeStr {\n // rust complains about Self unknown size in this trait, but doesn't in FromSomeStr trait\n fn from_some_other_str(input: String) {\n Self::from_input(input);\n }\n}\n\nSo the question is, why so?\nIt makes sense to me why Rust complains about calling the function, because that's where the object gets created. Not knowing the size of the return value of the called function makes it impossible to compile it.\nI don't know why it doesn't complain at the definition already, though. Someone else might have to explain that. But my guess is that errors like this only manifest at the calling site, not at the definition site.\n\nExample in the Rust library:\nDefault is a trait that behaves very similar to your FromSomeStr. But it works:\npub trait FromOtherStr: Default {\n // rust complains about Self unknown size in this trait, but doesn't in FromSomeStr trait\n fn from_some_other_str(_input: String) -> Self {\n Self::default()\n }\n}\n\nSo why is that?\nIf you look at the implementation of Default, you can see:\npub trait Default: Sized {\n // ...\n}\n\nAnd I guess the Sized here was introduced specifically because otherwise it would be impossible to return a Self object.\n"
] | [
0
] | [] | [] | [
"rust"
] | stackoverflow_0074673544_rust.txt |
Q:
How to deploy a Java web application (Spring Boot, JPA)?
I need your help to deploy a web application developed with Spring, Java, JPA and MySQL.
I have a Google Cloud account.
I have deployed a REST service with a .jar file.
Do you have any reference to any manual or courses?
A:
You can either use Compute Engine to create a Virtual Machine, import your project as a jar or in another format, install java and run through the command line. You can find a guide from Google here. You may also want to take the route of using App Engine. Or, you can find a Google CodeLab for deploying a spring boot application here.
Google has a lot of documentation and guides for these sorts of things so go check out their developer website.
A:
You could consider deploying it on Cloud Run using source code or images.
If you're interested, maybe this series could help a bit: Springboot on Google Cloud CloudRun
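As a rough sketch (the service name and region are illustrative), deploying straight from source with the gcloud CLI looks like:
# builds a container from the current directory and deploys it to Cloud Run
gcloud run deploy demo-service --source . --region us-central1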
 | How to deploy a Java web application (Spring Boot, JPA)? | I need your help to deploy a web application developed with Spring, Java, JPA and MySQL.
I have a Google Cloud account.
I have deployed a REST service with a .jar file.
Do you have any reference to any manual or courses?
| [
"You can either use Compute Engine to create a Virtual Machine, import your project as a jar or in another format, install java and run through the command line. You can find a guide from Google here. You may also want to take the route of using App Engine. Or, you can find a Google CodeLab for deploying a spring boot application here.\nGoogle has a lot of documentation and guides for these sorts of things so go check out their developer website.\n",
"You could consider deploying it on Cloud Run using source code or images.\nIf you're interested in it. Maybe this series could help a bit Springboot on Google Cloud CloudRun\n"
] | [
0,
0
] | [] | [] | [
"google_cloud_platform",
"java",
"mysql",
"spring_boot",
"spring_data_jpa"
] | stackoverflow_0074672655_google_cloud_platform_java_mysql_spring_boot_spring_data_jpa.txt |
Q:
TypeError: custom() got an unexpected keyword argument 'path'---yolov7
i am trying to load yolov7 model ( for the weights which i am trained for my dataset ) but i am get error
model = torch.hub.load('yolov7','custom', path='/home/runs/train/yolov7x-custom/weights/best.pt',force_reload=True,source='local')
A:
Try to re-clone the repository and place your model in the "/home/runs/train/yolov7x-custom/weights" path, or you can simply clone the repo (!git clone https://github.com/WongKinYiu/yolov7.git) and then run the detect script:
!python3 detect.py --source <path to image> --weights <path to weights>
A:
Worked for me :), since yolov7 they changed path param to path_or_model param
MODEL = torch.hub.load('.', 'custom',
path_or_model=MODEL_PATH,
source='local',
)
 | TypeError: custom() got an unexpected keyword argument 'path'---yolov7 | I am trying to load a yolov7 model (with the weights I trained on my dataset), but I get an error:
model = torch.hub.load('yolov7','custom', path='/home/runs/train/yolov7x-custom/weights/best.pt',force_reload=True,source='local')
| [
"Try to re-clone the repository and place your model in \"/home/runs/train/yolov7x-custom/weights\" path, or easily you can clone the repo (!git clone https://github.com/WongKinYiu/yolov7.git) then run the detect script:\n\n!python3 detect.py --source path to image --weights path ro\nweights\n\n",
"Worked for me :), since yolov7 they changed path param to path_or_model param\nMODEL = torch.hub.load('.', 'custom',\n path_or_model=MODEL_PATH,\n source='local',\n )\n\n"
] | [
0,
0
] | [] | [] | [
"jupyter_notebook",
"linux",
"yolo",
"yolov5"
] | stackoverflow_0074328379_jupyter_notebook_linux_yolo_yolov5.txt |
Q:
Test styles triggered by mouse over events
I have a button that displays different styles when mouse moves over it:
background-color: green;
&:hover {
background-color: red;
}
Here is my test:
fireEvent.mouseOver(button);
expect(button).toHaveStyle(`
background-color: red;
`);
However, the test complained that the background color is green instead of red.
I tried fireEvent.mouseEnter before calling mouseOver. Didn't make any difference. What did I miss?
A:
You need to wait for the event to be fired and the style to be applied. You can try
fireEvent.mouseOver(button);

await waitFor(() =>
  expect(button).toHaveStyle('background-color: red')
);

(waitFor is imported from @testing-library/react.)
A:
Maybe you can wrap it in act().
| Test styles triggered by mouse over events | I have a button that displays different styles when mouse moves over it:
background-color: green;
&:hover {
background-color: red;
}
Here is my test:
fireEvent.mouseOver(button);
expect(button).toHaveStyle(`
background-color: red;
`);
However, the test complained that the background color is green instead of red.
I tried fireEvent.mouseEnter before calling mouseOver. Didn't make any difference. What did I miss?
| [
"You need to wait for the event to be fired and the style to be applied. You can try\nfireEvent.mouseOver(button);\n\nexpect(await button).toHaveStyle(`\n background-color: red;\n`);\n\n",
"Maybe you can wrap it in act().\n"
] | [
1,
0
] | [] | [] | [
"react_testing_library",
"reactjs"
] | stackoverflow_0052377301_react_testing_library_reactjs.txt |
Q:
How to get the continent given the coordinates (latitude and longitude) in Python?
Is there a method that allows to get the continent where it is in place given its coordinates (without an API key)?
I'm using:
from geopy.geocoders import Nominatim
geolocator = Nominatim(user_agent='...')
location = geolocator.reverse('51.0456448, 3.7273618')
print(location.address)
print((location.latitude, location.longitude))
print(location.raw)
But it does not return the continent. Even giving a place name and using geolocator.geocode() doesn't work. Besides, even giving a name and using:
import urllib.error, urllib.request, urllib.parse
import json
target = 'http://py4e-data.dr-chuck.net/json?'
local = 'Paris'
url = target + urllib.parse.urlencode({'address': local, 'key' : 42})
data = urllib.request.urlopen(url).read()
js = json.loads(data)
print(json.dumps(js, indent=4))
Doesn't work either.
A:
A bit late, but for future reference and those who could need it, like me recently, here is one way to do it with Wikipedia and the use of Pandas, requests and geopy:
import pandas as pd
import requests
from geopy.geocoders import Nominatim
URLS = {
"Africa": "https://en.wikipedia.org/wiki/List_of_sovereign_states_and_dependent_territories_in_Africa",
"Asia": "https://en.wikipedia.org/wiki/List_of_sovereign_states_and_dependent_territories_in_Asia",
"Europe": "https://en.wikipedia.org/wiki/List_of_sovereign_states_and_dependent_territories_in_Europe",
"North America": "https://en.wikipedia.org/wiki/List_of_sovereign_states_and_dependent_territories_in_North_America",
"Ocenia": "https://en.wikipedia.org/wiki/List_of_sovereign_states_and_dependent_territories_in_Oceania",
"South America": "https://en.wikipedia.org/wiki/List_of_sovereign_states_and_dependent_territories_in_South_America",
}
def get_continents_and_countries() -> dict[str, str]:
"""Helper function to get countries and corresponding continents.
Returns:
Dictionary where keys are countries and values are continents.
"""
df_ = pd.concat(
[
pd.DataFrame(
pd.read_html(
requests.get(url).text.replace("<br />", ";"),
match="Flag",
)[0]
.pipe(
lambda df_: df_.rename(
columns={col: i for i, col in enumerate(df_.columns)}
)
)[2]
.str.split(";;")
.apply(lambda x: x[0])
)
.assign(continent=continent)
.rename(columns={2: "country"})
for continent, url in URLS.items()
]
).reset_index(drop=True)
df_["country"] = (
df_["country"]
.str.replace("*", "", regex=False)
.str.split("[")
.apply(lambda x: x[0])
).str.replace("\xa0", "")
return dict(df_.to_dict(orient="split")["data"])
def get_location_of(coo: str, data: dict[str, str]) -> tuple[str, str, str]:
"""Function to get the country of given coordinates.
Args:
coo: coordinates as string ("lat, lon").
data: input dictionary of countries and continents.
Returns:
Tuple of coordinates, country and continent (or Unknown if country not found).
"""
geolocator = Nominatim(user_agent="stackoverflow", timeout=25)
country: str = (
geolocator.reverse(coo, language="en-US").raw["display_name"].split(", ")[-1]
)
return (coo, country, data.get(country, "Unknown"))
Finally:
continents_and_countries = get_continents_and_countries()
print(get_location_of("51.0456448, 3.7273618", continents_and_countries))
# Output
('51.0456448, 3.7273618', 'Belgium', 'Europe')
| How to get the continent given the coordinates (latitude and longitude) in Python? | Is there a method that allows to get the continent where it is in place given its coordinates (without an API key)?
I'm using:
from geopy.geocoders import Nominatim
geolocator = Nominatim(user_agent='...')
location = geolocator.reverse('51.0456448, 3.7273618')
print(location.address)
print((location.latitude, location.longitude))
print(location.raw)
But it does not return the continent. Even giving a place name and using geolocator.geocode() doesn't work. Besides, even giving a name and using:
import urllib.error, urllib.request, urllib.parse
import json
target = 'http://py4e-data.dr-chuck.net/json?'
local = 'Paris'
url = target + urllib.parse.urlencode({'address': local, 'key' : 42})
data = urllib.request.urlopen(url).read()
js = json.loads(data)
print(json.dumps(js, indent=4))
Doesn't work either.
| [
"A bit late, but for future reference and those who could need it, like me recently, here is one way to do it with Wikipedia and the use of Pandas, requests and geopy:\nimport pandas as pd\nimport requests\nfrom geopy.geocoders import Nominatim\n\nURLS = {\n \"Africa\": \"https://en.wikipedia.org/wiki/List_of_sovereign_states_and_dependent_territories_in_Africa\",\n \"Asia\": \"https://en.wikipedia.org/wiki/List_of_sovereign_states_and_dependent_territories_in_Asia\",\n \"Europe\": \"https://en.wikipedia.org/wiki/List_of_sovereign_states_and_dependent_territories_in_Europe\",\n \"North America\": \"https://en.wikipedia.org/wiki/List_of_sovereign_states_and_dependent_territories_in_North_America\",\n \"Ocenia\": \"https://en.wikipedia.org/wiki/List_of_sovereign_states_and_dependent_territories_in_Oceania\",\n \"South America\": \"https://en.wikipedia.org/wiki/List_of_sovereign_states_and_dependent_territories_in_South_America\",\n}\n\ndef get_continents_and_countries() -> dict[str, str]:\n \"\"\"Helper function to get countries and corresponding continents.\n\n Returns:\n Dictionary where keys are countries and values are continents.\n\n \"\"\"\n df_ = pd.concat(\n [\n pd.DataFrame(\n pd.read_html(\n requests.get(url).text.replace(\"<br />\", \";\"),\n match=\"Flag\",\n )[0]\n .pipe(\n lambda df_: df_.rename(\n columns={col: i for i, col in enumerate(df_.columns)}\n )\n )[2]\n .str.split(\";;\")\n .apply(lambda x: x[0])\n )\n .assign(continent=continent)\n .rename(columns={2: \"country\"})\n for continent, url in URLS.items()\n ]\n ).reset_index(drop=True)\n df_[\"country\"] = (\n df_[\"country\"]\n .str.replace(\"*\", \"\", regex=False)\n .str.split(\"[\")\n .apply(lambda x: x[0])\n ).str.replace(\"\\xa0\", \"\")\n return dict(df_.to_dict(orient=\"split\")[\"data\"])\n\ndef get_location_of(coo: str, data: dict[str, str]) -> tuple[str, str, str]:\n \"\"\"Function to get the country of given coordinates.\n\n Args:\n coo: coordinates as string (\"lat, lon\").\n data: input dictionary of countries and continents.\n\n Returns:\n Tuple of coordinates, country and continent (or Unknown if country not found).\n\n \"\"\"\n geolocator = Nominatim(user_agent=\"stackoverflow\", timeout=25)\n country: str = (\n geolocator.reverse(coo, language=\"en-US\").raw[\"display_name\"].split(\", \")[-1]\n )\n return (coo, country, data.get(country, \"Unknown\"))\n\nFinally:\ncontinents_and_countries = get_continents_and_countries()\n\nprint(get_location_of(\"51.0456448, 3.7273618\", continents_and_countries))\n\n# Output\n('51.0456448, 3.7273618', 'Belgium', 'Europe')\n\n"
] | [
0
] | [] | [] | [
"coordinates",
"geolocation",
"geopy",
"python",
"python_requests"
] | stackoverflow_0069771711_coordinates_geolocation_geopy_python_python_requests.txt |
Q:
Installing windows GuestOS to ESXi VM using PowerCLI
I tried the following command to create a VM with Windows GuestOS using PowerCLI:
New-VM -Name <VM-Name> -Datastore <name-of-datastore> -DiskGB 1 -DiskStorageFormat Thin -MemoryGB 4 -CD -GuestId windows7Server64Guest -NumCpu 2 -VMHost <VM-Host-IP>
But when I log into my VM, I found that there is no OS installed in the VM.
I tried searching for a solution but I could not find anything in the PowerCLI documentation.
A:
It is possible that the -CD parameter was not specified correctly or the Windows ISO file was not attached to the virtual CD/DVD drive. To verify, you can use the Get-CDDrive cmdlet to check if the ISO file is attached to the virtual CD/DVD drive of the VM. If not, you can use the Set-CDDrive cmdlet to attach the ISO file and then power on the VM to begin the installation process.
Additionally, it is recommended to use the -OSCustomizationSpec parameter with the New-VM cmdlet to specify the OS customization settings, such as the administrator password, time zone, and domain membership, during the VM creation process. This can ensure that the OS is properly installed and configured in the VM.
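A rough sketch of that check-and-fix flow (the VM name and ISO path are illustrative):
$vm = Get-VM -Name "<VM-Name>"
Get-CDDrive -VM $vm                      # check whether an ISO is attached
Get-CDDrive -VM $vm |
    Set-CDDrive -IsoPath "[datastore] iso/windows.iso" -Connected:$true -Confirm:$false
Start-VM -VM $vm                         # power on to begin the installation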
| Installing windows GuestOS to ESXi VM using PowerCLI | I tried the following command to create a VM with Windows GuestOS using PowerCLI:
New-VM -Name <VM-Name> -Datastore <name-of-datastore> -DiskGB 1 -DiskStorageFormat Thin -MemoryGB 4 -CD -GuestId windows7Server64Guest -NumCpu 2 -VMHost <VM-Host-IP>
But when I log into my VM, I found that there is no OS installed in the VM.
I tried searching for a solution but I could not find anything in the PowerCLI documentation.
| [
"It is possible that the -CD parameter was not specified correctly or the Windows ISO file was not attached to the virtual CD/DVD drive. To verify, you can use the Get-CDDrive cmdlet to check if the ISO file is attached to the virtual CD/DVD drive of the VM. If not, you can use the Set-CDDrive cmdlet to attach the ISO file and then power on the VM to begin the installation process.\nAdditionally, it is recommended to use the -OSCustomizationSpec parameter with the New-VM cmdlet to specify the OS customization settings, such as the administrator password, time zone, and domain membership, during the VM creation process. This can ensure that the OS is properly installed and configured in the VM.\n"
] | [
0
] | [] | [] | [
"esxi",
"powercli",
"powershell",
"virtual_machine",
"vmware"
] | stackoverflow_0074429263_esxi_powercli_powershell_virtual_machine_vmware.txt |
Q:
AWS API Gateway - Parameter mapping path with HTTP API (overwrite:path)
I started looking into using AWS HTTP API as a single point of entry to some micro services running with ECS.
One micro service has the following route internally on the server:
/sessions/{session_id}/topics
I define exactly the same route in my HTTP API and use CloudMap and a VPC Link to reach my ECS cluster. So far so good, the requests can reach the servers. The path is however not the same when it arrives. As per AWS documentation [1] it will prepend the stage name so that the request looks the following when it arrives:
/{stage_name}/sessions/{session_id}/topics
So I started to look into Parameter mappings so that I can change the path for the integration, but I cannot get it to work.
For requestParameters I want to overwrite the path like below, but for some reason the original path with the stage variable is still there. If I just define overwrite:path as $request.path.sessionId I get only the ID as the path, or if I write whatever string I want it will arrive as I define it. But when I mix the $request.path.sessionId and the other parts of the string it does not seem to work.
How do I format this correctly?
paths:
/sessions/{sessionId}/topics:
post:
responses:
default:
description: "Default response for POST /sessions/{sessionId}/topics"
x-amazon-apigateway-integration:
requestParameters:
overwrite:path: "/sessions/$request.path.sessionId/topics"
payloadFormatVersion: "1.0"
connectionId: (removed)
type: "http_proxy"
httpMethod: "POST"
uri: (removed)
connectionType: "VPC_LINK"
timeoutInMillis: 30000
[1] https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-develop-integrations-private.html
A:
You can try using braces, i.e. the formal ${...} notation instead of the shorthand:
overwrite:path: "/sessions/${request.path.sessionId}/topics"
It worked well for me for complex mappings.
A mapping template is a script expressed in the Velocity Template Language (VTL).
A:
Don't remove the uri and the connectionId, and it will work for you.
Add only requestParameters:
overwrite:path: "/sessions/$request.path.sessionId/topics"
| AWS API Gateway - Parameter mapping path with HTTP API (overwrite:path) | I started looking into using AWS HTTP API as a single point of entry to some micro services running with ECS.
One micro service has the following route internally on the server:
/sessions/{session_id}/topics
I define exactly the same route in my HTTP API and use CloudMap and a VPC Link to reach my ECS cluster. So far so good, the requests can reach the servers. The path is however not the same when it arrives. As per AWS documentation [1] it will prepend the stage name so that the request looks the following when it arrives:
/{stage_name}/sessions/{session_id}/topics
So I started to look into Parameter mappings so that I can change the path for the integration, but I cannot get it to work.
For requestParameters I want to overwrite the path like below, but for some reason the original path with the stage variable is still there. If I just define overwrite:path as $request.path.sessionId I get only the ID as the path, or if I write whatever string I want it will arrive as I define it. But when I mix the $request.path.sessionId and the other parts of the string it does not seem to work.
How do I format this correctly?
paths:
/sessions/{sessionId}/topics:
post:
responses:
default:
description: "Default response for POST /sessions/{sessionId}/topics"
x-amazon-apigateway-integration:
requestParameters:
overwrite:path: "/sessions/$request.path.sessionId/topics"
payloadFormatVersion: "1.0"
connectionId: (removed)
type: "http_proxy"
httpMethod: "POST"
uri: (removed)
connectionType: "VPC_LINK"
timeoutInMillis: 30000
[1] https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-develop-integrations-private.html
| [
"You can try to use parentheses. Formal notation instead of shorthand notation.\noverwrite:path: \"/sessions/${request.path.sessionId}/topics\"\nIt worked well for me for complex mappings.\nmapping template is a script expressed in Velocity Template Language (VTL)\n",
"dont remove the uri and the connectionId and it will work for you.\nadd only requestParameters:\noverwrite:path: \"/sessions/$request.path.sessionId/topics\"\n"
] | [
1,
0
] | [] | [] | [
"amazon_web_services",
"aws_api_gateway",
"aws_http_api"
] | stackoverflow_0065619664_amazon_web_services_aws_api_gateway_aws_http_api.txt |
Q:
CSS Grid System adding extra gap
I am trying to understand the basics of the CSS grid system. I have an image I want to place in the upper left corner. When I place it in the top left corner, for some reason it adds extra space at the top and the left. As well, when I adjust the gap in the CSS, nothing changes, unless I change it to something extremely large (like 300px).
Here is the code I have so far. I tried adjusting the gap, removing the gap, etc.
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Document</title>
<link rel="stylesheet" href="gridgallerycss.css">
</head>
<body>
<div class="gallery">
<figure class="gallery__item gallery__item--1">
<img src="Emma_Allerd_Images/emmapic1.jpg" class="gallery__img" alt="Image 1">
</figure>
</div>
</body>
</html>
CSS
.gallery{
    display: grid;
grid-template-columns: repeat(8, 1fr);
grid-template-rows: repeat(8, 5vw);
gap: 15px;
}
.gallery__img{
width: 100%;
height: 100%;
object-fit: cover;
}
.gallery__item--1{
grid-column-start: 1;
grid-column-end: 4;
grid-row-start: 1;
grid-row-end: 6;
}
A:
I think the spacing you are referring to is the default margin on the figure element, which is set by the browser itself.
Simply add margin: 0; to remove it.
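For example, scoped to the gallery items:
.gallery__item {
    margin: 0;
}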
| CSS Grid System adding extra gap | I am trying to understand the basics of the CSS grid system. I have an image I want to place in the upper left corner. When I place it in the top left corner, for some reason it adds extra space at the top and the left. As well, when I adjust the gap in the CSS, nothing changes, unless I change it to something extremely large (like 300px).
Here is the code I have so far. I tried adjusting the gap, removing the gap, etc.
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Document</title>
<link rel="stylesheet" href="gridgallerycss.css">
</head>
<body>
<div class="gallery">
<figure class="gallery__item gallery__item--1">
<img src="Emma_Allerd_Images/emmapic1.jpg" class="gallery__img" alt="Image 1">
</figure>
</div>
</body>
</html>
CSS
.gallery{
    display: grid;
grid-template-columns: repeat(8, 1fr);
grid-template-rows: repeat(8, 5vw);
gap: 15px;
}
.gallery__img{
width: 100%;
height: 100%;
object-fit: cover;
}
.gallery__item--1{
grid-column-start: 1;
grid-column-end: 4;
grid-row-start: 1;
grid-row-end: 6;
}
| [
"I think the spacing you are referring to is the default margin on the figure element, which is set by the browser itself.\nSimply add margin: 0; to remove it.\n"
] | [
1
] | [] | [] | [
"css",
"gallery",
"grid",
"html",
"javascript"
] | stackoverflow_0074674294_css_gallery_grid_html_javascript.txt |
Q:
How do I transfer my data to the image page?
I want to develop a project with ASP.NET Core 5.0, in this project I receive my data using a Web API, but I could not transfer the incoming data to the image page. I need help on this. Appropriate classes have been created for the JSON data returned, I will share some of them. I can see the data is coming, but I'm having trouble transferring it to the image part.
Web API: https://gutendex.com/
public class BookListModel
{
[JsonPropertyName("count")]
public int count { get; set; }
[JsonPropertyName("next")]
public string next { get; set; }
[JsonPropertyName("previous")]
public object previous { get; set; }
[JsonPropertyName("results")]
public List<Result> results { get; set; }
}
public class Result
{
[JsonPropertyName("id")]
public int id { get; set; }
[JsonPropertyName("title")]
public string title { get; set; }
[JsonPropertyName("authors")]
public List<AuthorModel> authors { get; set; }
[JsonPropertyName("translators")]
public List<Translator> translators { get; set; }
[JsonPropertyName("subjects")]
public List<string> subjects { get; set; }
[JsonPropertyName("bookshelves")]
public List<string> bookshelves { get; set; }
[JsonPropertyName("languages")]
public List<string> languages { get; set; }
[JsonPropertyName("copyright")]
public bool copyright { get; set; }
[JsonPropertyName("media_type")]
public string media_type { get; set; }
[JsonPropertyName("formats")]
public Formats formats { get; set; }
[JsonPropertyName("download_count")]
public int download_count { get; set; }
}
Controller:
public async Task<IActionResult> GetBooksFromApi()
{
var books = new BookListModel();
using (var httpClient = new HttpClient())
{
using (var response = await httpClient.GetAsync("https://gutendex.com/books"))
{
string apiResponse = await response.Content.ReadAsStringAsync();
books = JsonConvert.DeserializeObject<BookListModel>(apiResponse);
}
}
return View(books);
}
View:
@model BookListModel
<div>
@foreach (var item in Model.results)
{
@Model.results.ToString()
}
</div>
Json
A:
To transfer the incoming data to the image page, you need to pass the data to the view in the controller. In the controller, you are calling the View method and passing the books object as the parameter. This will make the books object available in the view, which you can use to display the data.
For example, you can use the @foreach loop in your view to iterate over the results property of the books object and display each result. You can then access the properties of each Result object, such as the title and authors, to display the relevant information on the page.
Here is an example of how you could display the data in your view:
@model BookListModel
<div>
@foreach (var item in Model.results)
{
<h2>@item.title</h2>
<p>Authors:
@foreach (var author in item.authors)
{
@author.name
}
</p>
<p>Subjects:
@foreach (var subject in item.subjects)
{
@subject
}
</p>
<p>Bookshelves:
@foreach (var bookshelf in item.bookshelves)
{
@bookshelf
}
</p>
}
</div>
This code will display the title, authors, subjects, and bookshelves for each Result object in the results list. You can customize this code to display the information in the way that you want.
Alternative
To transfer the incoming data to the image page, you can use the Razor syntax to loop through the results and display the data for each book. For example:
@foreach (var item in Model.results)
{
<div>
<h3>@item.title</h3>
<p>Authors: @string.Join(", ", item.authors.Select(a => a.name))</p>
<p>Subjects: @string.Join(", ", item.subjects)</p>
<p>Download count: @item.download_count</p>
<img src="@item.formats.GetImageUrl()" />
</div>
}
You can also create a helper method in the Result class to get the image URL from the formats object, like this:
public class Result
{
// other properties
public string GetImageUrl()
{
    // assumes formats is deserialized as a Dictionary<string, string> (mime type -> url)
    // rather than the typed Formats class from the question; returns the first image
    // format found, or an empty string if none is found
    return formats.Where(f => f.Key.StartsWith("image/")).FirstOrDefault().Value ?? "";
}
}
Then in the view, you can simply call the GetImageUrl() method to get the image URL:
@foreach (var item in Model.results)
{
<div>
<h3>@item.title</h3>
<p>Authors: @string.Join(", ", item.authors.Select(a => a.name))</p>
<p>Subjects: @string.Join(", ", item.subjects)</p>
<p>Download count: @item.download_count</p>
<img src="@item.GetImageUrl()" />
</div>
}
| How do I transfer my data to the image page? | I want to develop a project with ASP.NET Core 5.0, in this project I receive my data using a Web API, but I could not transfer the incoming data to the image page. I need help on this. Appropriate classes have been created for the JSON data returned, I will share some of them. I can see the data is coming, but I'm having trouble transferring it to the image part.
Web API: https://gutendex.com/
public class BookListModel
{
[JsonPropertyName("count")]
public int count { get; set; }
[JsonPropertyName("next")]
public string next { get; set; }
[JsonPropertyName("previous")]
public object previous { get; set; }
[JsonPropertyName("results")]
public List<Result> results { get; set; }
}
public class Result
{
[JsonPropertyName("id")]
public int id { get; set; }
[JsonPropertyName("title")]
public string title { get; set; }
[JsonPropertyName("authors")]
public List<AuthorModel> authors { get; set; }
[JsonPropertyName("translators")]
public List<Translator> translators { get; set; }
[JsonPropertyName("subjects")]
public List<string> subjects { get; set; }
[JsonPropertyName("bookshelves")]
public List<string> bookshelves { get; set; }
[JsonPropertyName("languages")]
public List<string> languages { get; set; }
[JsonPropertyName("copyright")]
public bool copyright { get; set; }
[JsonPropertyName("media_type")]
public string media_type { get; set; }
[JsonPropertyName("formats")]
public Formats formats { get; set; }
[JsonPropertyName("download_count")]
public int download_count { get; set; }
}
Controller:
public async Task<IActionResult> GetBooksFromApi()
{
var books = new BookListModel();
using (var httpClient = new HttpClient())
{
using (var response = await httpClient.GetAsync("https://gutendex.com/books"))
{
string apiResponse = await response.Content.ReadAsStringAsync();
books = JsonConvert.DeserializeObject<BookListModel>(apiResponse);
}
}
return View(books);
}
View:
@model BookListModel
<div>
@foreach (var item in Model.results)
{
@Model.results.ToString()
}
</div>
Json
| [
"To transfer the incoming data to the image page, you need to pass the data to the view in the controller. In the controller, you are calling the View method and passing the books object as the parameter. This will make the books object available in the view, which you can use to display the data.\nFor example, you can use the @foreach loop in your view to iterate over the results property of the books object and display each result. You can then access the properties of each Result object, such as the title and authors, to display the relevant information on the page.\nHere is an example of how you could display the data in your view:\n@model BookListModel\n\n<div>\n@foreach (var item in Model.results)\n{\n <h2>@item.title</h2>\n <p>Authors:\n @foreach (var author in item.authors)\n {\n @author.name\n }\n </p>\n <p>Subjects:\n @foreach (var subject in item.subjects)\n {\n @subject\n }\n </p>\n <p>Bookshelves:\n @foreach (var bookshelf in item.bookshelves)\n {\n @bookshelf\n }\n </p>\n}\n</div>\n\nThis code will display the title, authors, subjects, and bookshelves for each Result object in the results list. You can customize this code to display the information in the way that you want.\nAlternative\nTo transfer the incoming data to the image page, you can use the Razor syntax to loop through the results and display the data for each book. For example:\n@foreach (var item in Model.results)\n{\n<div>\n<h3>@item.title</h3>\n<p>Authors: @string.Join(\", \", item.authors.Select(a => a.name))</p>\n<p>Subjects: @string.Join(\", \", item.subjects)</p>\n<p>Download count: @item.download_count</p>\n<img src=\"@item.formats.GetImageUrl()\" />\n</div>\n}\n\nYou can also create a helper method in the Result class to get the image URL from the formats object, like this:\npublic class Result\n{\n// other properties\nTo transfer the incoming data to the image page, you can use the Razor syntax to loop through the results and display the data for each book. For example:\n@foreach (var item in Model.results)\n{\n<div>\n<h3>@item.title</h3>\n<p>Authors: @string.Join(\", \", item.authors.Select(a => a.name))</p>\n<p>Subjects: @string.Join(\", \", item.subjects)</p>\n<p>Download count: @item.download_count</p>\n<img src=\"@item.formats.GetImageUrl()\" />\n</div>\n}\n\nYou can also create a helper method in the Result class to get the image URL from the formats object, like this:\npublic class Result\n{\n// other properties\n\npublic string GetImageUrl()\n{\n // return the first image format found in the formats object, or empty string if none found\n return formats.Where(f => f.Key.StartsWith(\"image/\")).FirstOrDefault().Value ?? \"\";\n}\n\n}\nThen in the view, you can simply call the GetImageUrl() method to get the image URL:\n@foreach (var item in Model.results)\n{\n<div>\n<h3>@item.title</h3>\n<p>Authors: @string.Join(\", \", item.authors.Select(a => a.name))</p>\n<p>Subjects: @string.Join(\", \", item.subjects)</p>\n<p>Download count: @item.download_count</p>\n<img src=\"@item.GetImageUrl()\" />\n</div>\n}\n\n"
] | [
0
] | [] | [] | [
"asp.net_core_mvc",
"asp.net_core_webapi",
"c#",
"json"
] | stackoverflow_0074674244_asp.net_core_mvc_asp.net_core_webapi_c#_json.txt |
Q:
If I create an Android application which uses a custom service written by me, will it work the same as a built-in service?
CUSTOM SERVICE IN ANDROID
Let's say I develop an application which is using Bluetooth Manager Service which is in AOSP. But if I add a new hardware and create a HAL, AIDL, JNI, Framework Service for that hardware, will it works the same way if try to access that service or will there be any changes.
I was creating an application which uses service written by me.
But i want to know how to access that service in by application code.
How to add that service in AOSP
A:
You can create a system service in the OS, create it's manager and then access it in the application layer via a custom jar which contains the manager class.
Refer to this article, highly recommend to read this, explains pretty much everything required:
https://medium.com/android-news/system-service-in-aosp-750007d39555
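A very rough sketch of the two sides (all names are hypothetical; ServiceManager is a hidden platform API, so this only builds inside the AOSP tree or against the framework):
// system side: register the Binder service at boot (e.g. from SystemServer)
ServiceManager.addService("myhardware", new MyHardwareService()); // extends IMyHardware.Stub

// app side: the custom manager class wraps the Binder handle
IMyHardware service = IMyHardware.Stub.asInterface(
        ServiceManager.getService("myhardware"));
service.doSomething();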
 | If I create an Android application which uses a custom service written by me, will it work the same as a built-in service? | CUSTOM SERVICE IN ANDROID
Let's say I develop an application which uses the Bluetooth Manager Service that is in AOSP. But if I add a new piece of hardware and create a HAL, AIDL, JNI and framework service for that hardware, will accessing that service work the same way, or will there be any changes?
I was creating an application which uses a service written by me,
but I want to know how to access that service in my application code,
and how to add that service to AOSP.
| [
"You can create a system service in the OS, create it's manager and then access it in the application layer via a custom jar which contains the manager class.\nRefer to this article, highly recommend to read this, explains pretty much everything required:\nhttps://medium.com/android-news/system-service-in-aosp-750007d39555\n"
] | [
0
] | [] | [] | [
"android",
"android_intentservice",
"android_source",
"intentservice",
"service"
] | stackoverflow_0074640277_android_android_intentservice_android_source_intentservice_service.txt |
Q:
Function not working when importing other JS files
I'm using a simple script to check when something is in the viewport which then adds a class.
However the script stops working in Webpack when I try to import anything.
Below is a basic example of importing which breaks my JS file. It doesn't matter what order I put the import and the script in, etc.
window.jQuery = $;
import 'slick-carousel';
$(function ($, win) {
$.fn.inViewport = function (cb) {
return this.each(function (i, el) {
function visPx() {
var H = $(this).height(),
r = el.getBoundingClientRect(), t = r.top, b = r.bottom;
return cb.call(el, Math.max(0, t > 0 ? H - t : (b < H ? b : H)));
} visPx();
$(win).on("resize scroll", visPx);
});
};
}(jQuery, window));
$(".preload").inViewport(function (px) {
if (px) $(this).addClass("is-active");
});
If I remove import 'slick-carousel'; or any of my imports (it's not specific to slick-carousel), the script works again.
Here is an example of the viewport script working.
A:
It seems to me that the import in itself might be wrong. An import in JavaScript normally uses a URL, like:
import bar from './bar.js';
bar();
Using import with only a module name is something you can do in Node.
From documentation:
"module-name: The module to import from. The evaluation of the specifier is host-specified. This is often a relative or absolute URL to the .js file containing the module. In Node, extension-less imports often refer to packages in node_modules. Certain bundlers may permit importing files without extensions; check your environment. Only single quoted and double quoted Strings are allowed."
Try adding the import with a relative or absolute URL.
If this is not the issue, please open the console and add the error message you get when running with the import.
A:
Codepen used in your link doesn't support imports, try using Codesandbox where you can install and import dependencies in the browser just like with webpack.
A:
When you import a module in JavaScript, the code from that module is executed in the same scope as the code that imported it. This means that any variables or functions defined in the imported module will be available in the same scope as your code, and can potentially conflict with your own variables and functions.
In your case, the $ and jQuery variables are defined in the slick-carousel module, which conflicts with your own definitions of these variables. You can fix this issue by using a different variable name for your $ and jQuery variables, or by using the import statement to import the $ and jQuery variables from the slick-carousel module.
Here is an example of how you can import the $ and jQuery variables from the slick-carousel module and use them in your code:
import $, { jQuery } from 'slick-carousel';
$(function () {
$.fn.inViewport = function (cb) {
return this.each(function (i, el) {
function visPx() {
var H = $(this).height(),
r = el.getBoundingClientRect(), t = r.top, b = r.bottom;
return cb.call(el, Math.max(0, t > 0 ? H - t : (b < H ? b : H)));
}
visPx();
      $(window).on("resize scroll", visPx);
});
};
});
$(".preload").inViewport(function (px) {
if (px) $(this).addClass("is-active");
});
In this code, we use the import statement to import the $ and jQuery variables from the slick-carousel module. We then use these imported variables in our code, rather than defining our own $ and jQuery variables. This avoids the conflict with the variables defined in the slick-carousel module, and allows your code to run correctly.
Note that you can also import the $ and jQuery variables using the require() function, like this:
const $ = require('slick-carousel').default;
const jQuery = require('slick-carousel').jQuery;
$(function () {
// Your code here...
});
This will have the same effect as using the import statement, but uses the require() function instead. You can use either approach depending on your preference.
A:
It looks like you are trying to use jQuery and the slick-carousel library in your code. There are a few possible reasons why your code might not be working when you try to import these libraries.
First, make sure that you have installed the necessary dependencies and included them in your project. You can use a package manager like npm or yarn to install the dependencies and then import them in your code like this:
// Import jQuery and slick-carousel
import $ from 'jquery';
import 'slick-carousel';
// Use jQuery and slick-carousel in your code
$(function () {
$(".preload").inViewport(function (px) {
if (px) $(this).addClass("is-active");
});
});
Another issue that you might be encountering is that Webpack uses a different syntax for importing modules. Instead of using the import keyword, you need to use require to import dependencies in your code when using Webpack. Here is an example of how you can use require to import jQuery and slick-carousel in your code:
// Import jQuery and slick-carousel using require
var $ = require('jquery');
require('slick-carousel');
// Use jQuery and slick-carousel in your code
$(function () {
$(".preload").inViewport(function (px) {
if (px) $(this).addClass("is-active");
});
});
Finally, make sure that you are using the correct versions of jQuery and slick-carousel that are compatible with each other. Sometimes different versions of these libraries can cause conflicts and cause your code to break.
I hope this helps! Let me know if you have any other questions.
| Function not working when importing other JS files | I'm using a simple script to check when something is in the viewport which then adds a class.
However the script stops working in Webpack when I try to import anything.
Below is a basic example of importing which breaks my JS file. It doesn't matter what order I put the import and script in, etc.
window.jQuery = $;
import 'slick-carousel';
$(function ($, win) {
$.fn.inViewport = function (cb) {
return this.each(function (i, el) {
function visPx() {
var H = $(this).height(),
r = el.getBoundingClientRect(), t = r.top, b = r.bottom;
return cb.call(el, Math.max(0, t > 0 ? H - t : (b < H ? b : H)));
} visPx();
$(win).on("resize scroll", visPx);
});
};
}(jQuery, window));
$(".preload").inViewport(function (px) {
if (px) $(this).addClass("is-active");
});
If I remove import 'slick-carousel'; or any of my imports (it's not specific to slick-carousel), the script works again.
Here is an example of the viewport script working.
| [
"It seems to me that the import in itself might be wrong. Import in javascript is normally an url like:\nimport bar from './bar.js';\n\nbar();\n\nUsing only import with module name is something you can do in node.\nFrom documentation:\n\"module-name: The module to import from. The evaluation of the specifier is host-specified. This is often a relative or absolute URL to the .js file containing the module. In Node, extension-less imports often refer to packages in node_modules. Certain bundlers may permit importing files without extensions; check your environment. Only single quoted and double quoted Strings are allowed.\"\nTry adding the import with a relative or absolute url.\nIf this is not the issue. Please open the console and add the errormessage you get when running with the import.\n",
"Codepen used in your link doesn't support imports, try using Codesandbox where you can install and import dependencies in the browser just like with webpack.\n",
"When you import a module in JavaScript, the code from that module is executed in the same scope as the code that imported it. This means that any variables or functions defined in the imported module will be available in the same scope as your code, and can potentially conflict with your own variables and functions.\nIn your case, the $ and jQuery variables are defined in the slick-carousel module, which conflicts with your own definitions of these variables. You can fix this issue by using a different variable name for your $ and jQuery variables, or by using the import statement to import the $ and jQuery variables from the slick-carousel module.\nHere is an example of how you can import the $ and jQuery variables from the slick-carousel module and use them in your code:\nimport $, { jQuery } from 'slick-carousel';\n\n$(function () {\n $.fn.inViewport = function (cb) {\n return this.each(function (i, el) {\n function visPx() {\n var H = $(this).height(),\n r = el.getBoundingClientRect(), t = r.top, b = r.bottom;\n return cb.call(el, Math.max(0, t > 0 ? H - t : (b < H ? b : H)));\n }\n visPx();\n $(win).on(\"resize scroll\", visPx);\n });\n };\n});\n\n$(\".preload\").inViewport(function (px) {\n if (px) $(this).addClass(\"is-active\");\n});\n\nIn this code, we use the import statement to import the $ and jQuery variables from the slick-carousel module. We then use these imported variables in our code, rather than defining our own $ and jQuery variables. This avoids the conflict with the variables defined in the slick-carousel module, and allows your code to run correctly.\nNote that you can also import the $ and jQuery variables using the require() function, like this:\nconst $ = require('slick-carousel').default;\nconst jQuery = require('slick-carousel').jQuery;\n\n$(function () {\n // Your code here...\n});\n\nThis will have the same effect as using the import statement, but uses the require() function instead. You can use either approach depending on your preference.\n",
"It looks like you are trying to use jQuery and the slick-carousel library in your code. There are a few possible reasons why your code might not be working when you try to import these libraries.\nFirst, make sure that you have installed the necessary dependencies and included them in your project. You can use a package manager like npm or yarn to install the dependencies and then import them in your code like this:\n// Import jQuery and slick-carousel\nimport $ from 'jquery';\nimport 'slick-carousel';\n\n// Use jQuery and slick-carousel in your code\n$(function () {\n $(\".preload\").inViewport(function (px) {\n if (px) $(this).addClass(\"is-active\");\n });\n});\n\nAnother issue that you might be encountering is that Webpack uses a different syntax for importing modules. Instead of using the import keyword, you need to use require to import dependencies in your code when using Webpack. Here is an example of how you can use require to import jQuery and slick-carousel in your code:\n// Import jQuery and slick-carousel using require\nvar $ = require('jquery');\nrequire('slick-carousel');\n\n// Use jQuery and slick-carousel in your code\n$(function () {\n $(\".preload\").inViewport(function (px) {\n if (px) $(this).addClass(\"is-active\");\n });\n});\n\nFinally, make sure that you are using the correct versions of jQuery and slick-carousel that are compatible with each other. Sometimes different versions of these libraries can cause conflicts and cause your code to break.\nI hope this helps! Let me know if you have any other questions.\n"
] | [
1,
0,
0,
0
] | [] | [] | [
"javascript",
"jquery",
"webpack"
] | stackoverflow_0074572084_javascript_jquery_webpack.txt |
Q:
Telegram Inline Bot - Buttons get stuck loading
I am working on an inline Telegram bot.
The bot should be invoked through any chat, so I am using the inline method; however, the bot now uses a conversation flow that requires the conversation to be started by using the /start command which is not what I want.
After calling the bot with the command I set, the user should see message 1, then click on a button, which will show a new selection of buttons and another message.
My problem is that now the bot shows the initial message and 2 buttons, but when I click on the button nothing happens. I believe this is due to the ConversationHandler states and how it's set up:
conv_handler = ConversationHandler(
entry_points=[CommandHandler('start', inlinequery)],
states={
FIRST: [
CallbackQueryHandler(one, pattern='^' + str(ONE) + '$'),
CallbackQueryHandler(two, pattern='^' + str(TWO) + '$'),
CallbackQueryHandler(three, pattern='^' + str(THREE) + '$'),
],
SECOND: [
CallbackQueryHandler(start_over, pattern='^' + str(ONE) + '$'),
CallbackQueryHandler(end, pattern='^' + str(TWO) + '$'),
],
},
fallbacks=[CommandHandler('start', inlinequery)],
Based on this, it is waiting for the /start command to initiate the conv_handler. I want it to start when the user sends @botusername <command I set> in any chat, which is written in the function inlinequery.
The code:
from datetime import datetime
from uuid import uuid4
import logging
import emojihash
from telegram import InlineKeyboardButton, InlineKeyboardMarkup, Update
from telegram.ext import (
Updater,
CommandHandler,
CallbackQueryHandler,
ConversationHandler,
CallbackContext,
)
from telegram.ext import InlineQueryHandler, CommandHandler, CallbackContext
from telegram.utils.helpers import escape_markdown
from telegram import InlineQueryResultArticle, ParseMode, InputTextMessageContent, Update
logging.basicConfig(
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s', level=logging.INFO
)
logger = logging.getLogger(__name__)
TransactionDateTime: str = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
TransactionNumber: int = 1
TotalTransactions:int = 1
EmojiCode: str = emojihash.eh1("unique password" , 5) #TODO: make more complex
Emojihash: str =emojihash.eh1("unique code",5)
FIRST, SECOND = range(2)
ONE, TWO, THREE, FOUR = range(4)
verified_message_2="message 2"
verified_message_1 = "Message 1 "
def inlinequery(update: Update, context: CallbackContext) -> None:
print("Inline hit!")
# print the inline query from the update.
query = update.inline_query.query
print("len(query):" + str(len(query)))
if len(query) > 0:
print("query[-1] == " "?: " + str(query[-1] == "?"))
print("query[-1] == " + query[-1])
# len(query) > 1 and query[-1] == " "
if len(query) == 0 or query[-1] != ".":
print("Empty query, showing message to seller to type username of buyer")
results = [
InlineQueryResultArticle(
id="Noop",
title="title",
input_message_content=InputTextMessageContent("I don't know how to use this bot yet. I was supposed to type the username but clicked this button anyway. Give me a second to figure this out."),
)
]
update.inline_query.answer(results)
# else if the query ends with a period character:
elif len(query) > 1 and query[-1] == ".":
buyer_username = query
SellerUserName: str = update.inline_query.from_user.username
print("buyer_username:" + buyer_username)
EmojiCode: str = emojihash.eh1("unique password" + SellerUserName + str(update.inline_query.from_user.id), 5)
keyboard = [
[
InlineKeyboardButton(EmojiCode, callback_data=str(ONE)),
],
[
InlineKeyboardButton(Emojihash, callback_data=str(TWO)),
],
]
reply_markup = InlineKeyboardMarkup(keyboard)
context.bot.send_message(chat_id=update.inline_query.from_user.id,text=verified_message_1, reply_markup=reply_markup)
return FIRST
def start_over(update: Update, context: CallbackContext) -> int:
query = update.callback_query
logger.info("User clicked on button %s", query.data)
SellerUserName: str = update.inline_query.from_user.username
buyer_username = query
print("buyer_username:" + buyer_username)
EmojiCode: str = emojihash.eh1("unique password" + SellerUserName + str(update.inline_query.from_user.id), 5)
SellerUserName: str = update.inline_query.from_user.username
verified_message_1 = f"""message 1 """
query.answer()
keyboard = [
[
InlineKeyboardButton(EmojiCode, callback_data=str(ONE)),
],
[
InlineKeyboardButton(Emojihash, callback_data=str(TWO)),
],
]
reply_markup = InlineKeyboardMarkup(keyboard)
context.bot.send_message(chat_id=update.inline_query.from_user.id,text=verified_message_1, reply_markup=reply_markup)
return FIRST
def one(update: Update, context: CallbackContext) -> int:
query = update.callback_query
logger.info("User clicked on button %s", query.data)
query.answer()
keyboard = [
[
InlineKeyboardButton(EmojiCode, callback_data=str(THREE)),
],
[
InlineKeyboardButton(Emojihash, callback_data=str(TWO)),
],
]
reply_markup = InlineKeyboardMarkup(keyboard)
query.edit_message_text(
text=verified_message_2, reply_markup=reply_markup
)
return FIRST
def two(update: Update, context: CallbackContext) -> int:
query = update.callback_query
logger.info("User clicked on button %s", query.data)
query.answer()
keyboard = [
[
InlineKeyboardButton("Yes", callback_data=str(ONE)),
],
[
InlineKeyboardButton("No", callback_data=str(TWO)),
],
]
reply_markup = InlineKeyboardMarkup(keyboard)
query.edit_message_text(
text="You clicked on the wrong code. Do you want to try again?", reply_markup=reply_markup
)
return SECOND
def three(update: Update, context: CallbackContext) -> int:
query = update.callback_query
logger.info("User clicked on button %s", query.data)
buyer_username = query
SellerUserName: str = update.inline_query.from_user.username
print("buyer_username:" + buyer_username)
SellerUserName: str = update.inline_query.from_user.username
query.answer()
keyboard = [
[
InlineKeyboardButton(text='Yes', url=f'https://t.me/{SellerUserName}'),
],
[
InlineKeyboardButton("No", callback_data=str(TWO)),
],
[ InlineKeyboardButton("Read Again", callback_data=str(ONE)),
],
]
reply_markup = InlineKeyboardMarkup(keyboard)
query.edit_message_text(
text=f"""With this you have confirmed you read the messages above.
Go back to chat with seller?""", reply_markup=reply_markup
)
return SECOND
def end(update: Update, context: CallbackContext) -> int:
query = update.callback_query
logger.info("User clicked on button %s", query.data)
query.answer()
query.edit_message_text(text="Process stopped")
return ConversationHandler.END
def main() -> None:
"""Run the bot."""
updater = Updater("TOKEN")
dispatcher = updater.dispatcher
conv_handler = ConversationHandler(
entry_points=[CommandHandler('start', inlinequery)],
states={
FIRST: [
CallbackQueryHandler(one, pattern='^' + str(ONE) + '$'),
CallbackQueryHandler(two, pattern='^' + str(TWO) + '$'),
CallbackQueryHandler(three, pattern='^' + str(THREE) + '$'),
],
SECOND: [
CallbackQueryHandler(start_over, pattern='^' + str(ONE) + '$'),
CallbackQueryHandler(end, pattern='^' + str(TWO) + '$'),
],
},
fallbacks=[CommandHandler('start', inlinequery)],
)
# Add ConversationHandler to dispatcher that will be used for handling updates
dispatcher.add_handler(conv_handler)
dispatcher.add_handler(InlineQueryHandler(inlinequery))
# Start the Bot
updater.start_polling()
# Run the bot until you press Ctrl-C or the process receives SIGINT,
# SIGTERM or SIGABRT. This should be used most of the time, since
# start_polling() is non-blocking and will stop the bot gracefully.
updater.idle()
if __name__ == '__main__':
main()
I tried switching out the Command handler to be a InlineQueryHandler, but that didn't give any results
A:
that requires the conversation to be started by using the /start command which is not what I want.
This is not the case - you can use any handler as entry point.
I tried switching out the Command handler to be a InlineQueryHandler, but that didn't give any results
There is one caveat here: the per_chat setting of ConversationHandler defaults to True, but InlineQuery updates are not linked to a chat_id. If you set per_chat=False, using an InlineQueryHandler as entry point should work just fine. See also here for more info on what the per_* settings do.
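For illustration, a minimal sketch of the changed pieces (the states dict stays exactly as in the question):
conv_handler = ConversationHandler(
    entry_points=[InlineQueryHandler(inlinequery)],
    states={
        # FIRST/SECOND states unchanged from the question
    },
    fallbacks=[InlineQueryHandler(inlinequery)],
    per_chat=False,  # inline queries carry no chat_id
)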
Disclaimer: I'm currently the maintainer of python-telegram-bot.
| Telegram Inline Bot - Buttons get stuck loading | I am working on an inline Telegram bot.
The bot should be invoked through any chat, so I am using the inline method; however, the bot now uses a conversation flow that requires the conversation to be started by using the /start command which is not what I want.
After calling the bot with the command I set, the user should see message 1, then click on a button, which will show a new selection of buttons and another message.
My problem is that now the bot shows the initial message and 2 buttons, but when I click on the button nothing happens. I believe this is due to the ConversationHandler states and how it's set up:
conv_handler = ConversationHandler(
entry_points=[CommandHandler('start', inlinequery)],
states={
FIRST: [
CallbackQueryHandler(one, pattern='^' + str(ONE) + '$'),
CallbackQueryHandler(two, pattern='^' + str(TWO) + '$'),
CallbackQueryHandler(three, pattern='^' + str(THREE) + '$'),
],
SECOND: [
CallbackQueryHandler(start_over, pattern='^' + str(ONE) + '$'),
CallbackQueryHandler(end, pattern='^' + str(TWO) + '$'),
],
},
fallbacks=[CommandHandler('start', inlinequery)],
Based on this, it is waiting for the /start command to initiate the conv_handler. I want it to start when the user sends @botusername <command I set> in any chat, which is written in the function inlinequery.
The code:
from datetime import datetime
from uuid import uuid4
import logging
import emojihash
from telegram import InlineKeyboardButton, InlineKeyboardMarkup, Update
from telegram.ext import (
Updater,
CommandHandler,
CallbackQueryHandler,
ConversationHandler,
CallbackContext,
)
from telegram.ext import InlineQueryHandler, CommandHandler, CallbackContext
from telegram.utils.helpers import escape_markdown
from telegram import InlineQueryResultArticle, ParseMode, InputTextMessageContent, Update
logging.basicConfig(
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s', level=logging.INFO
)
logger = logging.getLogger(__name__)
TransactionDateTime: str = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
TransactionNumber: int = 1
TotalTransactions:int = 1
EmojiCode: str = emojihash.eh1("unique password" , 5) #TODO: make more complex
Emojihash: str =emojihash.eh1("unique code",5)
FIRST, SECOND = range(2)
ONE, TWO, THREE, FOUR = range(4)
verified_message_2="message 2"
verified_message_1 = "Message 1 "
def inlinequery(update: Update, context: CallbackContext) -> None:
print("Inline hit!")
# print the inline query from the update.
query = update.inline_query.query
print("len(query):" + str(len(query)))
if len(query) > 0:
print("query[-1] == " "?: " + str(query[-1] == "?"))
print("query[-1] == " + query[-1])
# len(query) > 1 and query[-1] == " "
if len(query) == 0 or query[-1] != ".":
print("Empty query, showing message to seller to type username of buyer")
results = [
InlineQueryResultArticle(
id="Noop",
title="title",
input_message_content=InputTextMessageContent("I don't know how to use this bot yet. I was supposed to type the username but clicked this button anyway. Give me a second to figure this out."),
)
]
update.inline_query.answer(results)
# else if the query ends with a period character:
elif len(query) > 1 and query[-1] == ".":
buyer_username = query
SellerUserName: str = update.inline_query.from_user.username
print("buyer_username:" + buyer_username)
EmojiCode: str = emojihash.eh1("unique password" + SellerUserName + str(update.inline_query.from_user.id), 5)
keyboard = [
[
InlineKeyboardButton(EmojiCode, callback_data=str(ONE)),
],
[
InlineKeyboardButton(Emojihash, callback_data=str(TWO)),
],
]
reply_markup = InlineKeyboardMarkup(keyboard)
context.bot.send_message(chat_id=update.inline_query.from_user.id,text=verified_message_1, reply_markup=reply_markup)
return FIRST
def start_over(update: Update, context: CallbackContext) -> int:
query = update.callback_query
logger.info("User clicked on button %s", query.data)
SellerUserName: str = update.inline_query.from_user.username
buyer_username = query
print("buyer_username:" + buyer_username)
EmojiCode: str = emojihash.eh1("unique password" + SellerUserName + str(update.inline_query.from_user.id), 5)
SellerUserName: str = update.inline_query.from_user.username
verified_message_1 = f"""message 1 """
query.answer()
keyboard = [
[
InlineKeyboardButton(EmojiCode, callback_data=str(ONE)),
],
[
InlineKeyboardButton(Emojihash, callback_data=str(TWO)),
],
]
reply_markup = InlineKeyboardMarkup(keyboard)
context.bot.send_message(chat_id=update.inline_query.from_user.id,text=verified_message_1, reply_markup=reply_markup)
return FIRST
def one(update: Update, context: CallbackContext) -> int:
query = update.callback_query
logger.info("User clicked on button %s", query.data)
query.answer()
keyboard = [
[
InlineKeyboardButton(EmojiCode, callback_data=str(THREE)),
],
[
InlineKeyboardButton(Emojihash, callback_data=str(TWO)),
],
]
reply_markup = InlineKeyboardMarkup(keyboard)
query.edit_message_text(
text=verified_message_2, reply_markup=reply_markup
)
return FIRST
def two(update: Update, context: CallbackContext) -> int:
query = update.callback_query
logger.info("User clicked on button %s", query.data)
query.answer()
keyboard = [
[
InlineKeyboardButton("Yes", callback_data=str(ONE)),
],
[
InlineKeyboardButton("No", callback_data=str(TWO)),
],
]
reply_markup = InlineKeyboardMarkup(keyboard)
query.edit_message_text(
text="You clicked on the wrong code. Do you want to try again?", reply_markup=reply_markup
)
return SECOND
def three(update: Update, context: CallbackContext) -> int:
query = update.callback_query
logger.info("User clicked on button %s", query.data)
buyer_username = query
SellerUserName: str = update.inline_query.from_user.username
print("buyer_username:" + buyer_username)
SellerUserName: str = update.inline_query.from_user.username
query.answer()
keyboard = [
[
InlineKeyboardButton(text='Yes', url=f'https://t.me/{SellerUserName}'),
],
[
InlineKeyboardButton("No", callback_data=str(TWO)),
],
[ InlineKeyboardButton("Read Again", callback_data=str(ONE)),
],
]
reply_markup = InlineKeyboardMarkup(keyboard)
query.edit_message_text(
text=f"""With this you have confirmed you read the messages above.
Go back to chat with seller?""", reply_markup=reply_markup
)
return SECOND
def end(update: Update, context: CallbackContext) -> int:
query = update.callback_query
logger.info("User clicked on button %s", query.data)
query.answer()
query.edit_message_text(text="Process stopped")
return ConversationHandler.END
def main() -> None:
"""Run the bot."""
updater = Updater("TOKEN")
dispatcher = updater.dispatcher
conv_handler = ConversationHandler(
entry_points=[CommandHandler('start', inlinequery)],
states={
FIRST: [
CallbackQueryHandler(one, pattern='^' + str(ONE) + '$'),
CallbackQueryHandler(two, pattern='^' + str(TWO) + '$'),
CallbackQueryHandler(three, pattern='^' + str(THREE) + '$'),
],
SECOND: [
CallbackQueryHandler(start_over, pattern='^' + str(ONE) + '$'),
CallbackQueryHandler(end, pattern='^' + str(TWO) + '$'),
],
},
fallbacks=[CommandHandler('start', inlinequery)],
)
# Add ConversationHandler to dispatcher that will be used for handling updates
dispatcher.add_handler(conv_handler)
dispatcher.add_handler(InlineQueryHandler(inlinequery))
# Start the Bot
updater.start_polling()
# Run the bot until you press Ctrl-C or the process receives SIGINT,
# SIGTERM or SIGABRT. This should be used most of the time, since
# start_polling() is non-blocking and will stop the bot gracefully.
updater.idle()
if __name__ == '__main__':
main()
I tried switching out the Command handler to be a InlineQueryHandler, but that didn't give any results
| [
"\nthat requires the conversation to be started by using the /start command which is not what I want.\n\nThis is not the case - you can use any handler as entry point.\n\nI tried switching out the Command handler to be a InlineQueryHandler, but that didn't give any results\n\nThis is one caveat here: The per_chat setting of ConversationHandler defaults to True, but InlineQuery are not linked to a chat_id. If you set per_chat=False, using an InlineQueryHandler as entry point should work just fine. See also here for more info on what the per_* settings do.\n\nDisclaimer: I'm currently the maintainer of python-telegram-bot.\n"
] | [
0
] | [] | [] | [
"py_telegram_bot_api",
"python",
"python_telegram_bot"
] | stackoverflow_0074672289_py_telegram_bot_api_python_python_telegram_bot.txt |
Q:
Image is not moving away from cursor
I have an image, and I want it to move away randomly from the cursor whenever the cursor tries to touch it. I tried using jQuery but it is not working; see this link http://jsfiddle.net/emreerkan/atNva/
my index.html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Document</title>
<link rel="stylesheet" href="style.css">
<script src="https://code.jquery.com/jquery-3.6.1.min.js" integrity="sha256-o88AwQnZB+VDvE9tvIXrMQaPlFFSUTR+nldQm1LuPXQ=" crossorigin="anonymous"></script>
</head>
<body>
<img src="https://images.pexels.com/photos/569986/pexels-photo-569986.jpeg?auto=compress&cs=tinysrgb&w=600" width="100" height="100" alt="Grey Square" class="som" />
</body>
<!-- <script src="jquery-3.6.1.min.js"></script> -->
<script>
alert('hi')
jQuery(function($) {
$('.som').mouseover(function() {
var dWidth = $(document).width() - 100, // 100 = image width
dHeight = $(document).height() - 100, // 100 = image height
nextX = Math.floor(Math.random() * dWidth),
nextY = Math.floor(Math.random() * dHeight);
$(this).animate({ left: nextX + 'px', top: nextY + 'px' });
});
});
</script>
</html>
my style.css
body { position: relative; }
#img { position: relative; }
A:
In CSS,
use the selector .som instead of #img,
because in your jQuery code you used top and left; these properties will not work unless the position property is set first, in this case position:relative;
(static is the default value of position)
Everything else looks good.
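So a minimal sketch of the corrected stylesheet would be:
body { position: relative; }
.som { position: relative; }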
| Image is not moving away from cursor | I have an image, and I want it to move away randomly from the cursor whenever the cursor tries to touch it. I tried using jQuery but it is not working; see this link http://jsfiddle.net/emreerkan/atNva/
my index.html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Document</title>
<link rel="stylesheet" href="style.css">
<script src="https://code.jquery.com/jquery-3.6.1.min.js" integrity="sha256-o88AwQnZB+VDvE9tvIXrMQaPlFFSUTR+nldQm1LuPXQ=" crossorigin="anonymous"></script>
</head>
<body>
<img src="https://images.pexels.com/photos/569986/pexels-photo-569986.jpeg?auto=compress&cs=tinysrgb&w=600" width="100" height="100" alt="Grey Square" class="som" />
</body>
<!-- <script src="jquery-3.6.1.min.js"></script> -->
<script>
alert('hi')
jQuery(function($) {
$('.som').mouseover(function() {
var dWidth = $(document).width() - 100, // 100 = image width
dHeight = $(document).height() - 100, // 100 = image height
nextX = Math.floor(Math.random() * dWidth),
nextY = Math.floor(Math.random() * dHeight);
$(this).animate({ left: nextX + 'px', top: nextY + 'px' });
});
});
</script>
</html>
my style.css
body { position: relative; }
#img { position: relative; }
| [
"in css\nuse the pointer .som instead of #img\nbecause in your jQuey code you used top and left,these properties will not work unless the position property is set first, in this case its position:relative;\n(static is the default value of position )\neverything else looks good\n"
] | [
0
] | [] | [] | [
"html",
"javascript",
"jquery"
] | stackoverflow_0074673162_html_javascript_jquery.txt |
Q:
Flutter : how to use focusNode property on TextField
I want to control the TextField widget when a user taps on it. How can I implement the focusNode property? There's no detailed explanation in the description.
A:
FocusNode focusNode;
void initState() {
focusNode = new FocusNode();
// listen to focus changes
focusNode.addListener(() => print('focusNode updated: hasFocus: ${focusNode.hasFocus}'));
}
void setFocus() {
FocusScope.of(context).requestFocus(focusNode);
}
Widget build() {
return
...
new TextField(focusNode: focusNode, ...);
}
A:
Don't forget to dispose of it afterwards to avoid memory leaks:
@override
void dispose() {
focusNode.dispose();
super.dispose();
}
A:
How to control focus when the user taps or presses outside:
FocusNode emailFocus = FocusNode();
FocusNode passFocus = FocusNode();
//Add GestureDetector for getting outside touch event;
GestureDetector(
onTap: () {
try {
emailFocus.unfocus();
passFocus.unfocus();
} catch (e) {
print(e);
}
print("Tapped");
},
//add focus in textfield
TextFormField(
autofocus: true,
focusNode: passFocus,
| Flutter : how to use focusNode property on TextField | I want to control the TextField widget when a user taps on it. How can I implement the focusNode property? There's no detailed explanation in the description.
| [
"FocusNode focusNode;\n\nvoid initState() {\n focusNode = new FocusNode();\n\n // listen to focus changes\n focusNode.addListener(() => print('focusNode updated: hasFocus: ${focusNode.hasFocus}')); \n}\n\nvoid setFocus() {\n FocusScope.of(context).requestFocus(focusNode);\n}\n\nWidget build() {\n return\n ...\n new TextField(focusNode: focusNode, ...);\n}\n\n",
"Don't forget to dispose of it afterwards to avoid memory leaks:\n@override\n void dispose() {\n focusNode.dispose();\n super.dispose();\n }\n\n",
"How to Focus Control from outside tapped or pressed;\nFocusNode emailFocus = FocusNode();\nFocusNode passFocus = FocusNode();\n\n//Add GestureDetector for getting outside touch event;\nGestureDetector(\n onTap: () {\n try {\n emailFocus.unfocus();\n passFocus.unfocus();\n } catch (e) {\n print(e);\n }\n\n print(\"Tapped\");\n },\n\n//add focus in textfield\n TextFormField(\n autofocus: true,\n focusNode: passFocus,\n\n"
] | [
29,
12,
0
] | [] | [] | [
"dart",
"flutter"
] | stackoverflow_0049912227_dart_flutter.txt |
Q:
vmware powercli
I want to create new VMs from multiple templates (like Windows 8, Windows 7, and more like that), and I also want to choose how many of each VM I need.
the full code is here:
link to pastebin
But the thing is, I want to do multiple templates at once; how can I do that?
I was thinking about a foreach($template in $template){do that}
Then I'm doing template1,template2, but it's giving me the error that it's just "template1,template2" instead of 2 different templates, and I don't know how to separate that. I thought about the comma, but I don't know what to do then.
A:
This is not possible; you have to specify one (and only one) template to create a new VM from a template.
You can verify that by running the following command:
Get-Help New-VM -Parameter Template
-Template <Template>
Specifies the virtual machine template you want to use for the creation of the new virtual machine. Passing values to this parameter through a pipeline is deprecated and will be disabled in a future release.
Required? true
Position? 2
Default value
Accept pipeline input? true (ByValue)
Accept wildcard characters? true
As you can see, this -Template parameter takes only one object of the type [Template].
If it were able to take several, we would see:
-Template <Template[]>
But it is the same in the GUI anyway: in the vSphere client you have to choose only one template to create a VM from it.
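A minimal sketch of the workaround, then, is simply to call New-VM once per template (all variable names here are placeholders for your own values):
$templateNames = "template1", "template2"
foreach ($name in $templateNames) {
    $template = Get-Template -Name $name
    New-VM -Name "$name-clone" -Template $template -VMHost $vmHost -Datastore $datastore
}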
A:
One way to handle multiple templates in your script would be to use a loop to iterate over the list of templates. For example, you could modify your script as follows:
$psSnapInName = "VMware.VimAutomation.Core"
if (-not (Get-PSSnapin -Name $psSnapInName -ErrorAction SilentlyContinue))
{
Add-PSSnapin -Name $psSnapInName -ErrorAction Stop}
Get-Template| select Name|Export-Csv c:\templates.csv
$data = Import-Csv "c:\templates.csv"
$t = $data | % { $_."Name" }
Function Create_VM
{
Param(
[parameter(Mandatory=$true,
HelpMessage="Supply the number of Vm's that need to be made.",
ValueFromPipeline=$false)]
[int]
$Number
,
[parameter(Mandatory=$true,
HelpMessage="Supply the Name of Vm's that need to be made.",
ValueFromPipeline=$false)]
[string]
$NameVM
,
[parameter(Mandatory=$true,
HelpMessage="Supply the Name of the Templates, separated by a comma.",
ValueFromPipeline=$false)]
[string]
$strTemplates
)
$i = 0
$strTemplates= $strTemplates.Split(",")
$ts= $t.Split(",")
$strDestinationHost = "my ip"
$strCustomSpec = "Config"
$strDatastore = "MAIN"
$strLocation = "test"
$strPool = "Resources"
$Date=get-date -uformat "%Y%m%d"
$NumArray = (1..$Number)
foreach($strTemplate in $strTemplates) {
for ($seq=1; $seq -le $Number; $seq++){
$i = $i +1
$string = $NameVM + $i
"Creating $string"
New-VM -Name $string -Template $(get-template $strTemplate) -location (get-folder $strLocation) -pool (get-resourcepool $strPool) -VMHost $(Get-VMHost $strDestinationHost) -Datastore $(Get-Datastore $strDatastore) -OSCustomizationSpec $(Get-OSCustomizationSpec $strCustomSpec) | Start-VM
}
}
}
In this modified script, the $strTemplates variable is split into an array of individual templates using the Split() method. Then, a foreach loop is used to iterate over each template in the array and create the specified number of VMs using that template.
| vmware powercli | I want to create new VMs from multiple templates (like Windows 8, Windows 7, and more like that), and I also want to choose how many of each VM I need.
the full code is here:
link to pastebin
But the thing is, I want to do multiple templates at once; how can I do that?
I was thinking about a foreach($template in $template){do that}
Then I'm doing template1,template2, but it's giving me the error that it's just "template1,template2" instead of 2 different templates, and I don't know how to separate that. I thought about the comma, but I don't know what to do then.
| [
"This is not possible, you have to specify one (and only one) template to create a new from template.\nYou can verify that by running the following command :\nGet-Help New-VM -Parameter Template\n\n-Template <Template>\n Specifies the virtual machine template you want to use for the creation of the new virtual machine. Passing values to this parameter through a pipeline is deprecated and will be disabled in a future release.\n\n Required? true\n Position? 2\n Default value\n Accept pipeline input? true (ByValue)\n Accept wildcard characters? true\n\nAs you can see , this -Template parameter takes only one object of the type [Template].\nIf it were able to take several, we would see :\n-Template <Template[]>\n\nBut it is the same in the GUI anyway : in the vSphere client you have to choose only one template to create a VM from it.\n",
"One way to handle multiple templates in your script would be to use a loop to iterate over the list of templates. For example, you could modify your script as follows:\n$psSnapInName = \"VMware.VimAutomation.Core\"\nif (-not (Get-PSSnapin -Name $psSnapInName -ErrorAction SilentlyContinue))\n{\nAdd-PSSnapin -Name $psSnapInName -ErrorAction Stop}\n\nGet-Template| select Name|Export-Csv c:\\templates.csv\n\n$data = Import-Csv \"c:\\templates.csv\"\n$t = $data | % { $_.\"Name\" }\n\nFunction Create_VM\n{\nParam(\n[parameter(Mandatory=$true,\nHelpMessage=\"Supply the number of Vm's that need to be made.\",\nValueFromPipeline=$false)]\n[int]\n$Number\n,\n[parameter(Mandatory=$true,\nHelpMessage=\"Supply the Name of Vm's that need to be made.\",\nValueFromPipeline=$false)]\n[string]\n$NameVM\n,\n[parameter(Mandatory=$true,\nHelpMessage=\"Supply the Name of the Templates, separated by a comma.\",\nValueFromPipeline=$false)]\n[string]\n$strTemplates\n)\n$i = 0\n$strTemplates= $strTemplates.Split(\",\")\n$ts= $t.Split(\",\")\n$strDestinationHost = \"my ip\"\n$strCustomSpec = \"Config\"\n$strDatastore = \"MAIN\"\n$strLocation = \"test\"\n$strPool = \"Resources\"\n$Date=get-date -uformat \"%Y%m%d\"\n$NumArray = (1..$Number)\n\nforeach($strTemplate in $strTemplates) {\nfor ($seq=1; $seq -le $Number; $seq++){\n$i = $i +1\n$string = $NameVM + $i\n\"Creating $string\"\nNew-VM -Name $string -Template $(get-template $strTemplate) -location (get-folder $strLocation) -pool (get-resourcepool $strPool) -VMHost $(Get-VMHost $strDestinationHost) -Datastore $(Get-Datastore $strDatastore) -OSCustomizationSpec $(Get-OSCustomizationSpec $strCustomSpec) | Start-VM\n}\n}\n}\n\nIn this modified script, the $strTemplates variable is split into an array of individual templates using the Split() method. Then, a foreach loop is used to iterate over each template in the array and create the specified number of VMs using that template.\n"
] | [
0,
0
] | [] | [] | [
"powercli",
"powershell"
] | stackoverflow_0028942506_powercli_powershell.txt |
Q:
Maven and Sonarqube
A few basic questions on ci/cd pipelines.
When we build Java code, do we create the jar file before going for SonarQube analysis, or do both happen simultaneously? My understanding is that SonarQube analysis needs to be performed before the Maven build. The build should happen only if code quality passes our quality checks.
Are the sonar scanner and Maven used individually, or is the sonar scanner integrated with Maven? I know both are possible, but what is the best way, given that we need artifacts to be created only if the code passes quality checks?
How does SonarQube tell the CI system (be it Azure DevOps or any other system) whether to go for the next steps or break if the quality check has failed?
A:
Usually you run your full build (which contains building the jar file or in general artifacts) and the sonar analysis will be done afterwards (unit test coverage, static code analysis etc.). No, it is not done before; it's done afterwards, otherwise it would not be possible to integrate results like code coverage of the unit/integration tests into the SonarQube analysis.
Technically the sonar scanner can be triggered via the Maven build (it is done via a Maven plugin) and is often called like this: mvn verify sonar:sonar (assuming that it is configured correctly).
SonarQube has a webhook which will be called/triggered if the quality is not as expected. Most of the time the CI/CD system has a stage which shows the result of that and makes the final result of the build "red". Also many source code hosting solutions (GitHub, GitLab, Gitea or alike) have indicators which show that (usually) within a pull request...
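As a hedged example (this analysis parameter exists in SonarQube 8.1 and later), the scanner itself can also wait for the quality gate result and fail the Maven build when the gate does not pass:
mvn verify sonar:sonar -Dsonar.qualitygate.wait=true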
Update:
If you run sonar analysis on a project without compiling the code you will get this:
$ mvn clean sonar:sonar
[INFO] JavaClasspath initialization
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 2.952 s
[INFO] Finished at: 2022-12-04T21:41:34+01:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.sonarsource.scanner.maven:sonar-maven-plugin:3.9.1.2184:sonar (default-cli) on project kata-fraction:
Your project contains .java files, please provide compiled classes with sonar.java.binaries property, or exclude them from the analysis with sonar.exclusions property. -> [Help 1]
[ERROR]
| Maven and Sonarqube | A few basic questions on ci/cd pipelines.
When we build Java code, do we create the jar file before going for SonarQube analysis, or do both happen simultaneously? My understanding is that SonarQube analysis needs to be performed before the Maven build. The build should happen only if code quality passes our quality checks.
Are the sonar scanner and Maven used individually, or is the sonar scanner integrated with Maven? I know both are possible, but what is the best way, given that we need artifacts to be created only if the code passes quality checks?
How does SonarQube tell the CI system (be it Azure DevOps or any other system) whether to go for the next steps or break if the quality check has failed?
| [
"\nUsually you run your full build (which contains building the jar file or in general artifacts) and the sonar analysis will be done afterwards (unit tests coverage, static code analysis etc.) and no it is not done before it's done afterwards otherwise it would not be possible to integrate results like code coverage of the unit/integration test into the sonarqube analysis.\n\nTechnically the sonar scanner can be triggered via the Maven build (it is done via a maven plugin) and often called like this: mvn verify sonar:sonar(assumed that it is configured correctly).\n\nSonarQube has a webhook which will be called/triggered if the quality is not as expected. Most of the time the CI/CD system have a stage which will shows the result of that and makes the final result of the build \"red\". Also many source code hosting solutions (GitHub, GitLab, Gitea or alike) having indicators which shows that (usually) within a pull request...\n\n\nUpdate:\n\nIf you run sonar analysis on a project without compiling the code you will get this:\n\n$ mvn clean sonar:sonar\n\n[INFO] JavaClasspath initialization\n[INFO] ------------------------------------------------------------------------\n\n[INFO] BUILD FAILURE\n[INFO] ------------------------------------------------------------------------\n[INFO] Total time: 2.952 s\n[INFO] Finished at: 2022-12-04T21:41:34+01:00\n[INFO] ------------------------------------------------------------------------\n[ERROR] Failed to execute goal org.sonarsource.scanner.maven:sonar-maven-plugin:3.9.1.2184:sonar (default-cli) on project kata-fraction: \nYour project contains .java files, please provide compiled classes with sonar.java.binaries property, or exclude them from the analysis with sonar.exclusions property. -> [Help 1]\n[ERROR] \n\n"
] | [
0
] | [] | [] | [
"cicd",
"continuous_integration",
"devops",
"maven",
"sonarqube"
] | stackoverflow_0074673952_cicd_continuous_integration_devops_maven_sonarqube.txt |
Q:
How to store input in array per index in Java
I'm creating a sign-up form, and I create another array to hold the previously entered usernames and compare them to the present username to know whether it already exists, but the result is not what I expect.
do{
System.out.print("Username: ");
usrname = scan.next();
for(int a = 0; a < 10; a++){
if(usrname.equals(acc[a])){
trans = false;
System.out.print("Username Already Existed, Try Again!\n");
break;
}
else{
trans = true;
acc[a] = usrname;
break;
}
}
}while(trans == false);
I expect to get the username and store it in the array at only one index, like
username: AA21
Array:
0= AA21
1 = Null
...
successful sign-up
then another sign-up from the user, and the array still holds the previous username
| How to store input in array per index in Java | I'm creating a sign-up form, and I create another array to hold the previously entered usernames and compare them to the present username to know whether it already exists, but the result is not what I expect.
do{
System.out.print("Username: ");
usrname = scan.next();
for(int a = 0; a < 10; a++){
if(usrname.equals(acc[a])){
trans = false;
System.out.print("Username Already Existed, Try Again!\n");
break;
}
else{
trans = true;
acc[a] = usrname;
break;
}
}
}while(trans == false);
I expect to get the username and store it in the array at only one index, like
username: AA21
Array:
0= AA21
1 = Null
...
successful sign-up
then another sign-up from the user, and the array still holds the previous username
| [] | [] | [
"public static void main(String[] args) {\n String usrname;\n String acc[] = new String[10];\n boolean trans;\n Scanner scan = new Scanner(System.in);\n\n do{\n trans = true;\n System.out.print(\"Username: \");\n usrname = scan.next();\n for(int a = 0; a < 10; a++){\n if(usrname.equals(acc[a])){\n trans = false;\n System.out.print(\"Username Already Existed, Try Again!\\n\");\n break;\n }\n }\n acc[a] = usrname;\n }while(trans == false);\n\n System.out.println(\"Successfully Sign-Up!\");\n}\n\nThe trans variable is initialized to true at the beginning of the do-while loop, and is updated to false only if the username entered by the user already exists in the acc array.\nThe acc[a] = usrname; line is moved outside of the if statement, so that it is executed only if the username entered by the user is unique.\nThe acc array is initialized with null values in all its elements, so that the usrname.equals(acc[a]) comparison in the if statement will work correctly.\n"
] | [
-1
] | [
"arrays",
"java",
"string"
] | stackoverflow_0074674403_arrays_java_string.txt |
Q:
Blazor Server HttpContext is null when published on local IIS
In my Blazor Server app I have this code in a component that needs to read cookies from the Request (so I would read them before the render):
[Inject] private IHttpContextAccessor HttpCxAccessor { get; set; }
...
protected override void OnInitialized()
{
var context = HttpCxAccessor.HttpContext;
// context is null when on Local IIS
the code works when I run it from VS (IISExpress) but when I publish it on local IIS, the HttpContext is null
A:
You shouldn't use HttpContextAccessor in Blazor Server because Blazor Server works outside the .NET Core request pipeline, and basically there is no guarantee that you will have access to the desired HttpContext values everywhere; for more info you can refer to this issue. However, if you have to use the HttpContext, then you have to get the desired value(s) from HttpContext when rendering _Host.cshtml, save them in a variable, and use that variable in the form of cascading parameters in the components in the rest of the program.
An example of implementation is here.
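For illustration, a hedged sketch of that approach (CookieValue and the cookie name are placeholders). In _Host.cshtml, read the value while HttpContext is still available and pass it to the root component:
@{
    var cookieValue = HttpContext.Request.Cookies["my-cookie"];
}
<component type="typeof(App)" render-mode="ServerPrerendered" param-CookieValue="@cookieValue" />
Then App.razor can expose it to all child components:
@code {
    [Parameter] public string CookieValue { get; set; }
}
<CascadingValue Value="@CookieValue">
    <Router AppAssembly="@typeof(App).Assembly">...</Router>
</CascadingValue>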
| Blazor Server HttpContext is null when published on local IIS | In my Blazor Server app I have this code in a component that needs to read cookies from the Request (so I would read them before the render):
[Inject] private IHttpContextAccessor HttpCxAccessor { get; set; }
...
protected override void OnInitialized()
{
var context = HttpCxAccessor.HttpContext;
// context is null when on Local IIS
the code works when I run it from VS (IISExpress) but when I publish it on local IIS, the HttpContext is null
| [
"You shouldn't use HttpContextAccessor in Blazor Server because the Blazor Server works outside the .NetCore pipeline and basically there is no guarantee that you will have access to the desired amount of HttpContext everywhere for more info you can refer to this issue. However, If you have to use the HttpContext then you have to get the desired value(s) from HttpContext when rendering _Host.cshtml and save it in a variable and use that variable in the form of Cascading Parameters in the components in the rest of the program.\nAn Example of implementation is here.\n"
] | [
1
] | [] | [] | [
"asp.net_core",
"blazor",
"blazor_server_side"
] | stackoverflow_0074658896_asp.net_core_blazor_blazor_server_side.txt |
Q:
Socket.io in Reactjs
Here's how I'm listening to notifications from socket.io in reactjs:
render(){
var socket = io('http://localhost:8080');
socket.on(isAuthenticated().user._id, function(msg){
toast.info(msg)
});
const {deal, supplier, buyer, info,user, shipment, payment} = this.state;
return(
....
)
}
Everything is working fine. I see that the notification is emitted just once from the server side yet it is getting rendered 6 times on the client side. How do I limit it?
A:
You shouldn't have side effects such as io and socket.on() in your render method.
You should use componentDidMount for that. It is invoked immediately after a component is inserted into DOM. See this link for information about componentDidMount and make sure you add componentWillUnmount too, to clean up any event listener you created in componentDidMount, as it is invoked just before a component is removed from DOM.
After you initialize your socket event listener in componentDidMount by using the on method, you need to clear the event listener in componentWillUnmount by calling the off method. socket inherits all methods from Emitter (click here for more information) which defines the off method which you need to use in componentWillUnmount.
From the documentation:
Pass event and fn to remove a listener.
Pass event to remove all listeners on that event.
Pass nothing to remove all listeners on all events.
So in your case, you could just use socket.off() without parameters to remove all socket listeners. Or socket.off(isAuthenticated().user._id) to remove all event listeners for that specific event.
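A minimal sketch of that in the component from the question (assuming the socket is kept as an instance field):
componentDidMount() {
  this.socket = io('http://localhost:8080');
  this.socket.on(isAuthenticated().user._id, msg => toast.info(msg));
}
componentWillUnmount() {
  this.socket.off(isAuthenticated().user._id);
}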
A:
Building on Dejan Janjušević's answer, you can implement the useEffect hook for cleaner code. There's a nice blog post on useEffect react hook here, and the official SocketIO documentation on using it with react hooks if you're curious.
import React, { useEffect } from 'react';
import io from "socket.io-client";
function App() {
// [...] states, etc.
// only when the entire page (re)loads
useEffect(() => {
    // Establish connection and keep a local reference: the socket state
    // set via setSocket is not readable until the next render
    const s = io('http://localhost:8000');
    setSocket(s);
// Handle events
    s.on(isAuthenticated().user._id, function(msg){
toast.info(msg);
});
// Clean up
return () => {
      s.off(isAuthenticated().user._id);
};
}, []);
// [...] some business logic
return(
[...] what needs to be done and rendered
);
}
export default App;
| Socket.io in Reactjs | Here's how I'm listening to notifications from socket.io in reactjs:
render(){
var socket = io('http://localhost:8080');
socket.on(isAuthenticated().user._id, function(msg){
toast.info(msg)
});
const {deal, supplier, buyer, info,user, shipment, payment} = this.state;
return(
....
)
}
Everything is working fine. I see that the notification is emitted just once from the server side yet it is getting rendered 6 times on the client side. How do I limit it?
| [
"You shouldn't have side effects such as io and socket.on() in your render method.\nYou should use componentDidMount for that. It is invoked immediately after a component is inserted into DOM. See this link for information about componentDidMount and make sure you add componentWillUnmount too, to clean up any event listener you created in componentDidMount, as it is invoked just before a component is removed from DOM.\nAfter you initialize your socket event listener in componentDidMount by using the on method, you need to clear the event listener in componentWillUnmount by calling the off method. socket inherits all methods from Emitter (click here for more information) which defines the off method which you need to use in componentWillUnmount.\nFrom the documentation:\n\nPass event and fn to remove a listener.\nPass event to remove all listeners on that event.\nPass nothing to remove all listeners on all events.\n\nSo in your case, you could just use socket.off() without parameters to remove all socket listeners. Or socket.off(isAuthenticated().user._id) to remove all event listeners for that specific event.\n",
"Building on Dejan Janjušević's answer, you can implement the useEffect hook for cleaner code. There's a nice blog post on useEffect react hook here, and the official SocketIO documentation on using it with react hooks if you're curious.\nimport React, { useEffect } from 'react';\nimport io from \"socket.io-client\";\n\nfunction App() {\n\n // [...] states, etc.\n \n // only when the entire page (re)loads\n useEffect(() => {\n\n // Establish connection\n setSocket(io('http://localhost:8000'));\n\n // Handle events\n socket.on(isAuthenticated().user._id, function(msg){\n toast.info(msg);\n });\n\n // Clean up\n return () => {\n socket.off(isAuthenticated().user._id);\n };\n }, []);\n\n // [...] some business logic\n\n return(\n [...] what needs to be done and rendered\n );\n}\n\nexport default App;\n\n"
] | [
0,
0
] | [] | [] | [
"reactjs",
"socket.io"
] | stackoverflow_0062893891_reactjs_socket.io.txt |
Q:
VMWare PowerCLI Invoke-VMScript
I would like to Invoke a command on a Remote VM using PowerShell with PowerCLI.
Invoke-VMScript -ScriptText "cmd /c calc" -ScriptType Bat -VM $VMName -GuestCredential $Credential -Confirm:$false -ea SilentlyContinue
Sadly, every time my command gets invoked, a popup appears telling me "A program running on this computer is trying to display a message". If I click manually on that popup, my script runs fine, but how can I automate this, so that I can use PowerCLI for this?
The goal is to execute a binary in interactive mode, one that processes automated tasks, when the script gets invoked by "Invoke-VMScript".
A:
This is an issue with Interactive Services Detection. Your script is trying to run as interactive in Session 0.
The standard workarounds are creating a scheduled task and then triggering it, or invoking psexec.exe to the user session with -i.
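For example, a hedged sketch of the psexec variant (session ID 1 is an assumption; check the actual user session first with query session):
psexec.exe -accepteula -i 1 -d cmd /c calc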
A:
To automate the execution of the command on the remote VM, you can use the -Wait flag with the Invoke-VMScript cmdlet. This will cause the cmdlet to wait for the script to complete before continuing with the execution of the script.
For example:
Invoke-VMScript -ScriptText "cmd /c calc" -ScriptType Bat -VM $VMName -GuestCredential $Credential -Confirm:$false -Wait -ea SilentlyContinue
This will ensure that the script is executed on the remote VM and that the results are returned before continuing with the execution of the script.
| VMWare PowerCLI Invoke-VMScript | I would like to Invoke a command on a Remote VM using PowerShell with PowerCLI.
Invoke-VMScript -ScriptText "cmd /c calc" -ScriptType Bat -VM $VMName -GuestCredential $Credential -Confirm:$false -ea SilentlyContinue
Sadly, every time my command gets invoked, a popup appears telling me "A program running on this computer is trying to display a message". If I click manually on that popup, my script runs fine, but how can I automate this, so that I can use PowerCLI for this?
The goal is to execute a binary in interactive mode, one that processes automated tasks, when the script gets invoked by "Invoke-VMScript".
| [
"This is an issue with Interactive Services Detection. Your script is trying to run as interactive in Session 0. \nThe standard workarounds are creating a schedule task and then triggering it. Or invoking psexec.exe to the user session with -i.\n",
"To automate the execution of the command on the remote VM, you can use the -Wait flag with the Invoke-VMScript cmdlet. This will cause the cmdlet to wait for the script to complete before continuing with the execution of the script.\nFor example:\nInvoke-VMScript -ScriptText \"cmd /c calc\" -ScriptType Bat -VM $VMName -GuestCredential $Credential -Confirm:$false -Wait -ea SilentlyContinue\n\nThis will ensure that the script is executed on the remote VM and that the results are returned before continuing with the execution of the script.\n"
] | [
2,
0
] | [] | [] | [
"powercli",
"powershell"
] | stackoverflow_0044501144_powercli_powershell.txt |
Q:
Android application with android directory/package
I have seen some android application with a directory named as "android" which contains framework files.
Can anyone tell me about the implementation procedure and how it is achieved?
A:
To access framework APIs or symbols that are not exposed in the public SDK, you can compile the framework or services jar from AOSP and add it as a library module to the Android Studio project, or add it directly to the project as a compileOnly files dependency in the app's build.gradle.
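A sketch of what that dependency could look like (the jar name and the libs/ location are assumptions; place the jar you compiled from AOSP there):
dependencies {
    // framework classes from AOSP, compile-time only so they are not packaged into the APK
    compileOnly files('libs/framework.jar')
}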
| Android application with android directory/package | I have seen some android application with a directory named as "android" which contains framework files.
Can anyone tell me about the implementation procedure and how it is achieved?
| [
"To access framework APIs or symbols which are not enabled in the public SDK, you can utilise those APIs by compiling the framework or services jar from AOSP and add it as a library module to the AndroidStudio project or just directly add to the project with the dependency for the jar via compileOnly files in app's build.gradle.\n"
] | [
0
] | [] | [] | [
"android",
"android_source",
"android_studio",
"java"
] | stackoverflow_0074555701_android_android_source_android_studio_java.txt |
Q:
Vector of structs not storing elements after 1st element of vector when reading in from external file
Was having a bit of trouble on one of my labs from my intro to CS course. The purpose of the lab is to put structs of elements into a vector and to read them out using vector notation and the struct datatype. Right now, it works fine except that after it gets through the first element of the vector, it no longer seems to read data inside the for loop I have: when printed, it displays the strings in the struct as spaces and the unsigned numbers in the structs as 0s.
External file name: "students.txt"
External file:
5
jj432 Jennifer Jones 9 7 10 8 10 4 6 9 0 91 82 93
fh167 Frank Harvey 7 8 8 9 10 6 7 5 0 10 82 81 93 92
ss632 Susan Smith 8 7 10 10 5 0 9 9 8 8 94 88 72 65
ma312 Marie Avalon 4 5 9 6 8 9 7 7 8 6 62 73 79 84
ww785 William Watson 8 9 7 7 8 8 9 10 9 9 94 93 93 100
Code:
#include <iostream>
#include <vector>
#include <string>
#include <fstream>
#include <iomanip>
using namespace std;
void output_header();
const size_t QUIZ_AMOUNT = 10;
const size_t TEST_AMOUNT = 3;
int main()
{
output_header();
struct StudentInfo
{
string id;
string first_name;
string last_name;
vector <unsigned> quizes;
vector <unsigned> tests;
};
ifstream input_file;
input_file.open("students.txt");
unsigned file_header_counting;
input_file >> file_header_counting;
vector <StudentInfo> vec_students;
for (size_t repeat = 0; repeat < file_header_counting; repeat++)
{
StudentInfo students;
input_file >> students.id;
input_file >> students.first_name;
input_file >> students.last_name;
for (size_t quiz_loc = 0; quiz_loc < QUIZ_AMOUNT; quiz_loc++)
{
unsigned students_quizes;
input_file >> students_quizes;
students.quizes.push_back(students_quizes);
}
for (size_t test_loc = 0; test_loc < TEST_AMOUNT; test_loc++)
{
unsigned students_tests;
input_file >> students_tests;
students.tests.push_back(students_tests);
}
vec_students.push_back(students);
}
for (size_t print = 0; print < vec_students.size(); print++)
{
cout << vec_students[print].id << ' '
<< vec_students[print].first_name << ' '
<< vec_students[print].last_name << ' ';
for (size_t i = 0; i < QUIZ_AMOUNT; i++)
{
cout << vec_students[print].quizes[i]
<< ' ';
}
for (size_t j = 0; j < TEST_AMOUNT-1; j++)
{
cout << vec_students[print].tests[j]
<< ' ';
}
cout << endl;
}
}
void output_header()
{
const unsigned ID_FORMAT = 5;
const unsigned NAME_FORMAT = 12;
const unsigned QUIZ_EXAM_FORMAT = 18;
const unsigned PERCENT_FORMAT = 19;
const unsigned GRADE_FORMAT = 8;
cout << "********************************************************"
<< "************************" << endl
<< " Student Grade Report" << endl << left << setw(ID_FORMAT)
<< "Id" << setw(NAME_FORMAT) << fixed << "Last Name"
<< setw(QUIZ_EXAM_FORMAT) << "Quiz Percentage"
<< setw(QUIZ_EXAM_FORMAT) << "Exam Percentage"
<< setw(PERCENT_FORMAT) << "Weighted Percent"
<< setw(GRADE_FORMAT) << "Grade" << endl << endl;
}
Example of output:
********************************************************************************
Student Grade Report
Id Last Name Quiz Percentage Exam Percentage Weighted Percent Grade
jj432 Jennifer Jones 9 7 10 8 10 4 6 9 0 91 82 93
91 91 91 91 91 91 91 91 91 91 0 0
91 91 91 91 91 91 91 91 91 91 0 0
91 91 91 91 91 91 91 91 91 91 0 0
91 91 91 91 91 91 91 91 91 91 0 0
I tried messing around with the input system and checking to make sure my external file was correct and didn't have unnecessary whitespace etc., but I haven't had much luck yet. Any/all advice appreciated.
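One way to see where the reads start failing is to check the stream state inside the outer loop, right after the push_back (a debugging sketch, not part of the original program). Note that the first data line above has 12 numbers, one short of the 10 quizzes plus 3 tests the loop tries to read, and once an extraction fails, every later >> on the same stream also fails:
    vec_students.push_back(students);
    if (input_file.fail()) // stream entered an error state while reading this student
    {
        cerr << "Read failed while parsing student #" << repeat + 1 << endl;
        break;
    }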
| Vector of structs not storing elements after 1st element of vector when reading in from external file | Was having a bit of trouble on one of my labs from my intro to CS course. The purpose of the lab is to put structs of elements into a vector and to read them out using vector notation and the struct datatype. Right now, it works fine except that after it gets through the first element of the vector, it no longer seems to read data inside the for loop I have: when printed, it displays the strings in the struct as spaces and the unsigned numbers in the structs as 0s.
External file name: "students.txt"
External file:
5
jj432 Jennifer Jones 9 7 10 8 10 4 6 9 0 91 82 93
fh167 Frank Harvey 7 8 8 9 10 6 7 5 0 10 82 81 93 92
ss632 Susan Smith 8 7 10 10 5 0 9 9 8 8 94 88 72 65
ma312 Marie Avalon 4 5 9 6 8 9 7 7 8 6 62 73 79 84
ww785 William Watson 8 9 7 7 8 8 9 10 9 9 94 93 93 100
Code:
#include <iostream>
#include <vector>
#include <string>
#include <fstream>
#include <iomanip>
using namespace std;
void output_header();
const size_t QUIZ_AMOUNT = 10;
const size_t TEST_AMOUNT = 3;
int main()
{
output_header();
struct StudentInfo
{
string id;
string first_name;
string last_name;
vector <unsigned> quizes;
vector <unsigned> tests;
};
ifstream input_file;
input_file.open("students.txt");
unsigned file_header_counting;
input_file >> file_header_counting;
vector <StudentInfo> vec_students;
for (size_t repeat = 0; repeat < file_header_counting; repeat++)
{
StudentInfo students;
input_file >> students.id;
input_file >> students.first_name;
input_file >> students.last_name;
for (size_t quiz_loc = 0; quiz_loc < QUIZ_AMOUNT; quiz_loc++)
{
unsigned students_quizes;
input_file >> students_quizes;
students.quizes.push_back(students_quizes);
}
for (size_t test_loc = 0; test_loc < TEST_AMOUNT; test_loc++)
{
unsigned students_tests;
input_file >> students_tests;
students.tests.push_back(students_tests);
}
vec_students.push_back(students);
}
for (size_t print = 0; print < vec_students.size(); print++)
{
cout << vec_students[print].id << ' '
<< vec_students[print].first_name << ' '
<< vec_students[print].last_name << ' ';
for (size_t i = 0; i < QUIZ_AMOUNT; i++)
{
cout << vec_students[print].quizes[i]
<< ' ';
}
for (size_t j = 0; j < TEST_AMOUNT-1; j++)
{
cout << vec_students[print].tests[j]
<< ' ';
}
cout << endl;
}
}
void output_header()
{
const unsigned ID_FORMAT = 5;
const unsigned NAME_FORMAT = 12;
const unsigned QUIZ_EXAM_FORMAT = 18;
const unsigned PERCENT_FORMAT = 19;
const unsigned GRADE_FORMAT = 8;
cout << "********************************************************"
<< "************************" << endl
<< " Student Grade Report" << endl << left << setw(ID_FORMAT)
<< "Id" << setw(NAME_FORMAT) << fixed << "Last Name"
<< setw(QUIZ_EXAM_FORMAT) << "Quiz Percentage"
<< setw(QUIZ_EXAM_FORMAT) << "Exam Percentage"
<< setw(PERCENT_FORMAT) << "Weighted Percent"
<< setw(GRADE_FORMAT) << "Grade" << endl << endl;
}
Example of output:
********************************************************************************
Student Grade Report
Id Last Name Quiz Percentage Exam Percentage Weighted Percent Grade
jj432 Jennifer Jones 9 7 10 8 10 4 6 9 0 91 82 93
91 91 91 91 91 91 91 91 91 91 0 0
91 91 91 91 91 91 91 91 91 91 0 0
91 91 91 91 91 91 91 91 91 91 0 0
91 91 91 91 91 91 91 91 91 91 0 0
I tried messing around with the input system and checking to make sure my external file was correct and didn't have unnecessary whitespace etc., but I haven't had much luck yet. Any/all advice appreciated.
| [] | [] | [
"You need to initialise your vector.\nvector <StudentInfo> vec_students = vector<StudentInfo>();\n\n"
] | [
-1
] | [
"c++",
"file",
"struct",
"vector"
] | stackoverflow_0074662837_c++_file_struct_vector.txt |
Q:
Can I transfer an object to a component in tiles?
I have an object whose properties all need to be passed into the component, but passing them one by one is too tedious.
Is there a similar practice in Angular?
import { Component, Input } from '@angular/core';
@Component({
selector: 'app-demo',
template: `
<p>
{{name}}-{{age}}
</p>
`
})
export class DemoComponent {
@Input() name: string = '';
@Input() age: number = 0
}
import { Component } from '@angular/core';
@Component({
selector: 'app-use',
template: `
<p>
use works!
</p>
<!-- Syntax similar to React, Pass in all at once, not one by one -->
<app-demo ...demoInject></app-demo>
`,
styleUrls: ['./use.component.css']
})
export class UseComponent {
demoInject = {
name: 'tom',
age: 20
}
}
A:
In Angular you typically pass objects from the parent to the child component like this:
First the parent-component:
@Component({
selector: 'app-use',
template: `
<p>
use works!
</p>
<!-- Syntax similar to React, Pass in all at once, not one by one -->
<app-demo [demoDto]="demoInject"></app-demo>
`,
})
export class UseComponent {
demoInject = {
name: 'tom',
age: 20
} as DemoDto;
}
Then the child-component:
@Component({
selector: 'app-demo',
template: `
<p *ngIf="demoDto">
{{demoDto.name}}-{{demoDto.age}}
</p>
`
})
export class DemoComponent {
@Input()
demoDto!: DemoDto;
}
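The DemoDto type referenced above is not defined in the answer; a minimal shape, assuming just the two fields used in the template, could be:
export interface DemoDto {
  name: string;
  age: number;
}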
| Can I transfer an object to a component in tiles? | I have an object whose properties all need to be passed into the component, but passing them one by one is too tedious.
Is there a similar practice in Angular?
import { Component, Input } from '@angular/core';
@Component({
selector: 'app-demo',
template: `
<p>
{{name}}-{{age}}
</p>
`
})
export class DemoComponent {
@Input() name: string = '';
@Input() age: number = 0
}
import { Component } from '@angular/core';
@Component({
selector: 'app-use',
template: `
<p>
use works!
</p>
<!-- Syntax similar to React, Pass in all at once, not one by one -->
<app-demo ...demoInject></app-demo>
`,
styleUrls: ['./use.component.css']
})
export class UseComponent {
demoInject = {
name: 'tom',
age: 20
}
}
| [
"In Angular you typically pass objects from the parent- to the child-component like this:\nFirst the parent-component:\n@Component({\n selector: 'app-use',\n template: `\n <p>\n use works!\n </p>\n <!-- Syntax similar to React, Pass in all at once, not one by one -->\n <app-demo [demoDto]=\"demoInject\"></app-demo>\n `,\n})\nexport class UseComponent {\n demoInject = {\n name: 'tom',\n age: 20\n } as DemoDto;\n}\n\nThen the child-component:\n@Component({\n selector: 'app-demo',\n template: `\n <p *ngIf=\"demoDto\">\n {{demoDto.name}}-{{demoDto.age}}\n </p>\n`\n})\n\nexport class DemoComponent {\n @Input()\n demoDto!: DemoDto;\n}\n\n\n"
] | [
1
] | [] | [] | [
"angular",
"typescript"
] | stackoverflow_0074674169_angular_typescript.txt |
Q:
Docker cannot start on Windows
Executing docker version command on Windows returns the following results:
C:\Projects> docker version
Client:
Version: 1.13.0-dev
API version: 1.25
Go version: go1.7.3
Git commit: d8d3314
Built: Tue Nov 1 03:05:34 2016
OS/Arch: windows/amd64
error during connect: Get http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.25/version: open //./pipe/docker_engine: The system cannot find the file
specified. In the default daemon configuration on Windows, the docker client must be run elevated to connect. This error may also indicate that the docker daemon is not running.
Running the diagnostics produces the following:
C:\Projects> wget https://github.com/Microsoft/Virtualization-
Documentation/raw/master/windows-server-container-tools/Debug-
ContainerHost/Debug-ContainerHost.ps1 -UseBasicParsin | iex
Checking for common problems
Describing Windows Version and Prerequisites
[+] Is Windows 10 Anniversary Update or Windows Server 2016 608ms
[+] Has KB3192366, KB3194496, or later installed if running Windows build 14393 141ms
[+] Is not a build with blocking issues 29ms
Describing Docker is installed
[-] A Docker service is installed - 'Docker' or 'com.Docker.Service' 134ms
Expected: value to not be empty
27: $services | Should Not BeNullOrEmpty
at <ScriptBlock>, <No file>: line 27
[+] Service is running 127ms
[+] Docker.exe is in path 2.14s
Describing User has permissions to use Docker daemon
[+] docker.exe should not return access denied 42ms
Describing Windows container settings are correct
[-] Do not have DisableVSmbOplock set to 1 53ms
Expected: {0}
But was: {1}
66: $regvalue.VSmbDisableOplocks | Should Be 0
at <ScriptBlock>, <No file>: line 66
[+] Do not have zz values set 42ms
Describing The right container base images are installed
error during connect: Get http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.25/images/json: open //./pipe/docker_engine: The system cannot find the file specified. In the default daemon configuration on Windows, the docker client must be run elevated to connect. This error may also indicate that the docker daemon is not running.
[-] At least one of 'microsoft/windowsservercore' or 'microsoft/nanoserver' should be installed 129ms
ValidationMetadataException: The argument is null or empty. Provide an argument that is not null or empty, and then try the command again.
ParameterBindingValidationException: Cannot validate argument on parameter 'Property'. The argument is null or empty. Provide an argument that is not null or empty, and then try the command again.
at <ScriptBlock>, <No file>: line 90
Describing Container network is created
[-] Error occurred in Describe block 1.08s
RuntimeException: Cannot index into a null array.
at <ScriptBlock>, <No file>: line 119
Showing output from: docker info
Showing output from: docker version
Client:
Version: 1.13.0-dev
API version: 1.25
Go version: go1.7.3
Git commit: d8d3314
Built: Tue Nov 1 03:05:34 2016
OS/Arch: windows/amd64
Showing output from: docker network ls
Warnings & errors from the last 24 hours
Logs saved to C:\Projects\logs_20161107-084122.csv
C:\Projects>
A:
The error is related to that part:
In the default daemon configuration on Windows, the docker client must
be run elevated to connect
First, verify that the Docker Desktop application is running. If not, launch it: that will run the docker daemon (just wait a few minutes).
Then, if the error still persist, you can try to switch Docker daemon type, as explained below:
With Powershell:
Open Powershell as administrator
Launch command: & 'C:\Program Files\Docker\Docker\DockerCli.exe' -SwitchDaemon
OR, with cmd:
Open cmd as administrator
Launch command: "C:\Program Files\Docker\Docker\DockerCli.exe" -SwitchDaemon
A:
I had the same problem.
Starting the docker daemon resolved the issue. Just search for docker by pressing the Windows key and click on "Docker Desktop". The daemon should be running in a minute.
After starting up Docker Desktop, make sure the docker daemon status in the bottom left is green and shows RUNNING when you hover over it.
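You can then verify from a terminal that the client can actually reach the daemon (docker info only prints server details when the daemon is reachable):
docker info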
A:
You can run "C:\Program Files\Docker\Docker\DockerCli.exe" -SwitchDaemon and point Docker CLI to either Linux or Windows containers. This worked for me.
A:
Error Code:
error during connect: Get
http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.29/version: open
//./pipe/docker_engine: The system cannot find the file specified. In the
default daemon configuration on Windows, the docker client must be run
elevated to connect . This error may also indicate that the docker
daemon is not running.
Solutions:
1) For Windows 7 Command Window(cmd.exe), open cmd.exe with run as administrator and execute following command:
docker-machine env --shell cmd default
You will receive following output:
SET DOCKER_TLS_VERIFY=1
SET DOCKER_HOST=tcp://192.168.99.100:2376
SET DOCKER_CERT_PATH=C:\Users\USER_NAME\.docker\machine\machines\default
SET DOCKER_MACHINE_NAME=default
SET COMPOSE_CONVERT_WINDOWS_PATHS=true
REM Run this command to configure your shell:
REM @FOR /f "tokens=*" %i IN ('docker-machine env --shell cmd default') DO @%i
Copy the command below and execute on cmd:
@FOR /f "tokens=*" %i IN ('docker-machine env --shell cmd default') DO @%i
And then execute following command to control:
docker version
2) For Windows 7 Powershell, open powershell.exe with run as administrator and execute following command:
docker-machine env --shell=powershell | Invoke-Expression
And then execute following command to control:
docker version
3) If you reopen cmd or powershell, you should repeat the related steps.
A:
If you see docker desktop is STOPPED or Not Running screen at left side bottom, then do following
Open PowerShell with – Run as Administrator
Close Docker Desktop if it is open
Execute the following command on PowerShell
& 'C:\Program Files\Docker\Docker\DockerCli.exe' -SwitchDaemon
Open Docker Desktop, it will get started.
I was facing this issue. I tried the above-mentioned steps and it worked for me. Thanks!
A:
I know this question was long ago but I found no proper explanation and solution, so hopefully, my answer is useful :)
Assuming you install Docker Toolbox on Windows, both docker and docker-machine commands will be available. Often, people get confused when to use either of these.
The docker commands are used only within a virtual machine to manage images. The docker-machine commands are used on the host to manage the Linux VMs.
So, please use docker-machine commands on your Windows machine. Use docker command inside your VM. To use the docker commands, for example, docker ps, you either can open Docker Quickstart Terminal or run these on your cmd/bash/PowerShell:
docker-machine start default (assuming default is your Linux VM)
docker-machine ssh default
This will start boot2docker and you will see the docker icon on the command line. Then you can use docker commands.
Good luck :)
A:
1.- Open the location of the shortcut:
2.- Right click, open Properties and add "-SwitchDaemon" to the Target field
3.- Give administrator permissions, advanced options:
4.- Restart windows.
A:
Try resolving the issue with either of the following options:
Option A
Start-Service "Hyper-V Virtual Machine Management"
Start-Service "Hyper-V Host Compute Service"
or
Option B
Open "Window Security"
Open "App & Browser control"
Click "Exploit protection settings" at the bottom
Switch to "Program settings" tab
Locate "C:\WINDOWS\System32\vmcompute.exe" in the list and expand it
Click "Edit"
Scroll down to "Code flow guard (CFG)" and uncheck "Override system settings"
Start vmcompute from powershell "net start vmcompute"
Then restart your system
A:
I got the same error for Docker version 19.03.12 and Windows 10. Resolved it by going through the below steps. Hope it helps others.
Go to Windows Start -> Search Box (Type here to search). There
enter 'Services'. Among the listed items, click Services app.
Now search 'Docker Desktop Service' in the Services window opened. Right click on it and Start the service. Its status should be changed to 'Running'.
If step 2 gives an error like 'the dependency service failed to start', then start all dependency services. For me, I had to start a service called 'Server'.
Double click 'Docker Desktop' icon in desktop. Now you will see 'Docker Desktop is running' in system tray.
Now run the command 'docker version' from Command Prompt or PowerShell. It should give clean output.
If any issue in step 5, run Command Prompt or PowerShell as administrator.
Above resolution assumes Docker is already installed and Hyper-V / Virtualization is enabled in your system.
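The same start can be done from an elevated PowerShell prompt (a sketch; com.docker.service is the service name of a standard Docker Desktop install, as other answers here also use):
Get-Service com.docker.service      # check the current status
Start-Service com.docker.service    # start the Docker Desktop Service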
A:
I have faced the same issue; it may be an administrator-rights issue, so I followed the steps below to set up Docker on Windows 10.

Download Docker Desktop from Docker Hub after logging in to Docker. A Docker Desktop Installer.exe file will be downloaded.
Install Docker Desktop Installer.exe using Run as administrator -> mark Windows containers during installation, else it will only install Linux containers. It will ask for a logout; after logging out and back in, it shows Docker Desktop in the menu.
After install, go to -> Computer Management -> Local Users and Groups -> Groups -> docker-users -> add your user as a member
Run docker desktop using Run as administrator
Check docker whale icon in Notification tab
run command >docker version
Successfully using docker without any issue.
A:
if you are in windows try this
docker-machine env --shell cmd default
@FOR /f "tokens=*" %i IN ('docker-machine env --shell cmd default') DO @%i
for testing try
docker run hello-world
A:
If you have installed docker on Windows 10 Pro with Hyper-V enabled and you are still not able to run Docker on Windows 10, then, as the error suggests, your docker daemon is not started.
The following steps helped me to start docker successfully:
Use command on cmd(Admin mode)
docker-machine restart default
Then you'll get a message something like:
open C:\User\\{User_name}\\.docker\machine\machines\default\config.json:
The system cannot find the file specified.
Go to the docker icon which will be on your windows tray (bottom right corner of the desktop)
Right click on the docker icon > Settings > Reset > Restart Docker
It will take few moments
Then you'll see the following message:
Docker is running with the green indicator
Note: If you already had Docker containers running on your system, then don't follow these steps. You may lose the existing containers.
A:
Reason: one possible cause is that vmmem was shut down with the command
wsl --shutdown

Solution: simply restarting Docker by right-clicking its tray icon will fix the problem.
A:
The same issue arose when I started with Docker on Windows 10. I was able to run docker --version successfully but failed when I tried to run docker pull docker/whalesay.
I tried many things suggested in the answers over here but my issue was resolved when I followed the below steps:
1 . Search for docker in windows and run docker desktop as administrator.
2 . Check the bottom-left docker symbol; it should be green if Docker is running.
3 . If it's not running, first install "wsl_update".
4 . Open Docker Desktop and sign in with your Docker credentials. When you are logged in, you can see the server restarting and the bottom-left logo turns green.
5. To check whether docker is running or not open PowerShell as administrator and run docker run hello-world.
A:
For me on Windows 11, editing %APPDATA%\Docker\settings.json to the following values and then restarting Docker Desktop did the trick (I am using WSL2, not Hyper-V):
A:
For me the issue was virtualization was not enabled.
On windows 10: Go to task manager -> Performance -> CPU and you should see as section as "Virtualization : Enabled"
If you do not see this option, it means that virtualization has not been enabled.
Another interesting thing to note is that you must have Hyper-V enabled. However, as I was using Parallels Desktop, I had to enable "Nested Virtualization" for Hyper-V to be "truly enabled". So if your Windows is a VM, check the settings of Parallels (or whatever you're using) to make sure nested virtualization is enabled.
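A quick way to check this from a terminal (a sketch; the exact wording in the systeminfo output varies by Windows version and by whether Hyper-V is already running):
systeminfo | findstr /i "virtualization hypervisor"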
A:
I was getting same errors after an install on Windows 10. And I tried restarting but it did not work, so I did the following (do not recommend if you have been working in docker for awhile, this was on a fresh install):
1) Find the whale in your system tray, and right click
2) Go to settings > Reset
3) Reset to factory defaults
I was then able to follow the starting docker tutorial on the website with Windows 10, and now it works like a charm.
A:
Open PowerShell or Git Bash and run the command below:
& 'C:\Program Files\Docker\Docker\DockerCli.exe' -SwitchDaemon
A:
My solution was pretty simple. I noticed that docker was running linux containers instead of windows containers. What i did is switch to windows containers by right clicking on the docker icon in the system tray and choosing Switch to Windows Containers.
A:
I had the same issue lately. The problem was that security software (Trend Micro) was blocking Docker from creating the Hyper-V network interface. You should also check that firewall or AV software is not blocking the installation or configuration.
A:
For me the error was resolved by stopping a virtual Ubuntu instance that'd been running in Hyper-V:
The system cannot find the file specified. In the default daemon configuration on Windows, the docker client must be run elevated to connect. This error may also indicate that the docker daemon is not running.
Once Ubuntu instance had been stopped, and Docker Desktop had been restarted, my usual docker commands ran just fine.
PS: I had the idea to try this because of an Error Log that Docker Desktop had helpfully compiled and offered to send to Docker Hub as user feedback... the log appeared to indicate that my machine was short on RAM, and Docker was failing for this very simple reason. Killing the Ubuntu instance solved that.
A:
If none of the other answers work for you, try this:
Open up a terminal and run:
wsl -l -v
If you notice that there's a docker-desktop left hanging in the 'Installing' state, close Docker, run powershell as adminstrator and unregister docker-desktop:
PS C:\WINDOWS\system32> .\wslconfig.exe /u docker-desktop
Restart docker and hopefully it works. If it doesn't, try uninstalling docker first, then unregistering docker-desktop, and re-installing Docker.
Source: https://github.com/docker/for-win/issues/7295#issuecomment-645989416
A:
In my case the WSL2 Linux-Kernel was missing, download, execute and restart:
https://learn.microsoft.com/de-de/windows/wsl/wsl2-kernel
Solved the problem.
A:
One of my friends was having a similar issue, we tried this and it worked.
Hyper-V, despite being listed under "Turn Windows features on or off" as being active, was not in fact active. This became apparent when running systeminfo under PowerShell and seeing
that the requirements were listed as met (which is not the output you would expect were Hyper-V actually running). Steps:
Open "Turn Windows features on or off"
If you are not sure how to do this, please refer to
https://www.howtogeek.com/250228/what-windows-10s-optional-features-do-and-how-to-turn-them-on-or-off/
Turn Hyper-V off (uncheck box, making sure all sub-components are marked as off)
Hit "Ok" - and your machine will reboot.
When your computer starts up again, open "Turn Windows features on or off" and turn Hyper-V back on. Your machine will reboot again.
Now you can test by running docker hello-world image.
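If you prefer the command line, the same off/on cycle can be done from an elevated PowerShell (a sketch using the built-in optional-features cmdlets; a reboot is still required between the two steps):
Disable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All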
A:
After installing Docker Desktop on your PC (the Windows one), you may find this location useful. What it actually does: it starts the Docker daemon via your CLI:
"C:\Program Files\Docker\Docker\DockerCli.exe" -SwitchDaemon
A:
For Installation in Windows 10 machine:
Before installing, search for Windows Features and check Windows Hypervisor Platform and Windows Subsystem for Linux.

Installing WSL 1 or 2 is compulsory, so install it when Docker prompts you to:
https://learn.microsoft.com/en-us/windows/wsl/install-win10
You need to install Ubuntu (version 16, 18 or 20) from the Windows Store:
Ubuntu version 20
After installation you can run commands like docker --version
or docker run hello-world in the Linux terminal.
This video will help:
https://www.youtube.com/watch?v=5RQbdMn04Oc&t=471s
A:
Make sure you have Hyper-V enabled, that was the problem in my case.
A:
That worked for me on Win10 Home: https://github.com/docker/for-win/issues/11967
Shutdown your service docker
Now execute this into the window command terminal
RMDIR /S %USERPROFILE%\AppData\Roaming\Docker
Startup your service docker
Now click on your "Docker Desktop"
The "Docker Desktop" will now runnig ... done ... :)
A:
You can also use Self-diagnose tool
Docker Desktop contains a self-diagnose tool which helps you to identify some common problems. Before you run the self-diagnose tool, locate com.docker.diagnose.exe. This is usually installed in C:\Program Files\Docker\Docker\resources\com.docker.diagnose.exe.
To run the self-diagnose tool in Powershell:
& "C:\Program Files\Docker\Docker\resources\com.docker.diagnose.exe" check
The tool runs a suite of checks and displays PASS or FAIL next to each check. If there are any failures, it highlights the most relevant at the end.
Then run This command
& 'C:\Program Files\Docker\Docker\DockerCli.exe' -SwitchDaemon
A:
I had the same issue in the terminal right after installation of Docker Desktop 4.7.1 running with WSL 2 backend. The tray whale icon was not showing either.
In my case the problem was that I already had a WSL distribution (Ubuntu) installed before and it has been the default. Docker Desktop with WSL 2 backend installs its own distribution called docker-desktop. And it has to be the default one (at least if not configured elsewhere).
So I had to run this command in PowerShell: wsl --setdefault docker-desktop and restart docker services. Found the solution here.
A:
I am using Windows 7 with Docker Toolbox and to fix it just open
Docker Quickstart Terminal.
$ docker version
Client:
 Version:      17.05.0-ce
 API version:  1.29
 Go version:   go1.7.5
 Git commit:   89658be
 Built:        Fri May 5 15:36:11 2017
 OS/Arch:      windows/amd64

Server:
 Version:      17.05.0-ce
 API version:  1.29 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   89658be
 Built:        Thu May 4 21:43:09 2017
 OS/Arch:      linux/amd64
 Experimental: false
A:
Delete the folder under %appdata%\Docker as indicated in Github issues
For quick access press Ctrl+R, paste "%appdata%\Docker" then Enter, it should open a folder located in AppData\Roaming\Docker (e.g. C:\Users\YourUsername\AppData\Roaming\Docker)
A:
For win10 I had the same issue:
error during connect: Post http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.39/images/load?quiet=0: open //./pipe/docker_engine: The system cannot find the file specified. In the default daemon configuration on Windows, the docker client must be run elevated to connect. This error may also indicate that the docker daemon is not running.
The docker service appeared to work. Restarting did not help.
Running the binary from the installation directory as administrator helped.
In my case:
run as administrator -> C:\Program Files\Docker\Docker\Docker for Windows.exe
A:
For Windows -
Open 'Docker for Desktop' --> Go on debug icon -> Click on 'Reset to factory defaults'
A:
My case was that I ran docker commands in a WSL shell and was still able to do so, while in git-bash (or another Windows-based shell) I was facing this error.
The solution for me was this answer, followed by restarting Windows.
A:
Try running the following from an elevated command prompt:
SET DOCKER_CERT_PATH=C:\Users\[YourName]\.docker\machine\machines\default
SET DOCKER_HOST=tcp://[yourDockerDeamonIp]:2376
SET DOCKER_MACHINE_NAME=default
SET DOCKER_TLS_VERIFY=1
SET DOCKER_TOOLBOX_INSTALL_PATH=C:\Program Files\Docker Toolbox
You might also find that even without setting those env variables, running commands from the docker quick start terminal works no problem.
A:
I ran into the same problem. I solved it by enabling Hyper-V.
Enable virtualization in BIOS
Install hyper-v
A:
I too faced error which says
"Access is denied. In the default daemon configuration on Windows, the docker client must be run elevated to connect. This error may also indicate that the docker daemon is not running."
Resolved this by running "powershell" in administrator mode.
This solution will help those who use two users on one Windows machine.
A:
Solved for me by running a docker desktop app, check-in notification. Setup if necessary.
$ net start com.docker.service
The Docker for Windows Service service is starting.
The Docker for Windows Service service was started successfully.
$ docker version
$ net start com.docker.service
The requested service has already been started.
A:
With the recent update of Docker, I had an issue where the Docker app hung at startup. I resolved this by terminating wsl.exe using Task Manager.
A:
For me this issue was resolved by signing in to Docker Desktop.
A:
You need the admin privilege to run the service
I had a similar issue. The problem goes away when I run the command prompt as an administrator and type docker version.
C:\WINDOWS\system32>docker version
Client: Docker Engine - Community
Version: 19.03.8
API version: 1.40
Go version: go1.12.17
Git commit: afacb8b
Built: Wed Mar 11 01:23:10 2020
OS/Arch: windows/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.8
API version: 1.40 (minimum version 1.12)
Go version: go1.12.17
Git commit: afacb8b
Built: Wed Mar 11 01:29:16 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: v1.2.13
GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.18.0
GitCommit: fec3683
A:
Faced a similar issue, having installed Docker Desktop on a Windows VM running on WSL2.
Solution:
Updating Windows to the latest build and VMTools to the latest (11.2) version fixed the issue; now Docker is running non-stop.
A:
The easiest way I fixed the issue is by terminating Docker Desktop and restarting it. If you see a blue-lit Docker icon in the bottom-left corner, that means the Docker daemon has started successfully, and the above error should be fixed.
A:
I had this issue when I was trying to create a MySQL image using the command line.
To fix this, I just waited for the Docker Desktop app to start and run correctly, then tried again.
A:
Uninstall Docker in “Add or remove programs”
Restart your computer
Install Docker as Administrator (and not by running the installer directly)
If the installer asks for a reboot, do it
A:
It may be because the Docker daemon has chosen Linux containers and is broken;
try switching between Windows and Linux containers using this command.
Launch PowerShell as administrator and run the command below:
& 'C:\Program Files\Docker\Docker\DockerCli.exe' -SwitchDaemon

Or open a cmd prompt as administrator and run the command below:
"C:\Program Files\Docker\Docker\DockerCli.exe" -SwitchDaemon
A:
First, I downloaded Docker for Windows 10, OS Build 19042 and version 20H2, as shown in this video,
but my Docker was stuck at the starting stage. I ran Docker with the command provided, but I got this error:
error during connect: Get http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.24/version: open //./pipe/docker_engine: The system cannot find the file
Then, these solutions worked for me to start the docker:
Open Powershell as administrator &
run this command: 'C:\Program Files\Docker\Docker\DockerCli.exe' -SwitchDaemon
OR
Open cmd as administrator &
run this command: "C:\Program Files\Docker\Docker\DockerCli.exe" -SwitchDaemon
I found this from here. Hope this helps you too!
A:
If you are getting this pop-up:
Click the link in the pop-up and download this 'WSL2 Linux kernel update package for x64 machines':

Once you have downloaded it, go through the installation.
Then restart Docker. It will work.
A:
Somehow my Docker Desktop couldn't start on the first attempt after installation and a system restart, so I killed the Docker process in Task Manager and opened Docker Desktop again; voila, it started fine. Able to run projects from the cmd prompt (docker run -d -p <project_name>), and able to see my container images in Docker Desktop as well.
| Docker cannot start on Windows | Executing docker version command on Windows returns the following results:
C:\Projects> docker version
Client:
Version: 1.13.0-dev
API version: 1.25
Go version: go1.7.3
Git commit: d8d3314
Built: Tue Nov 1 03:05:34 2016
OS/Arch: windows/amd64
error during connect: Get http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.25/version: open //./pipe/docker_engine: The system cannot find the file
specified. In the default daemon configuration on Windows, the docker client must be run elevated to connect. This error may also indicate that the docker daemon is not running.
Running the diagnostics produces the following:
C:\Projects> wget https://github.com/Microsoft/Virtualization-
Documentation/raw/master/windows-server-container-tools/Debug-
ContainerHost/Debug-ContainerHost.ps1 -UseBasicParsin | iex
Checking for common problems
Describing Windows Version and Prerequisites
[+] Is Windows 10 Anniversary Update or Windows Server 2016 608ms
[+] Has KB3192366, KB3194496, or later installed if running Windows build 14393 141ms
[+] Is not a build with blocking issues 29ms
Describing Docker is installed
[-] A Docker service is installed - 'Docker' or 'com.Docker.Service' 134ms
Expected: value to not be empty
27: $services | Should Not BeNullOrEmpty
at <ScriptBlock>, <No file>: line 27
[+] Service is running 127ms
[+] Docker.exe is in path 2.14s
Describing User has permissions to use Docker daemon
[+] docker.exe should not return access denied 42ms
Describing Windows container settings are correct
[-] Do not have DisableVSmbOplock set to 1 53ms
Expected: {0}
But was: {1}
66: $regvalue.VSmbDisableOplocks | Should Be 0
at <ScriptBlock>, <No file>: line 66
[+] Do not have zz values set 42ms
Describing The right container base images are installed
error during connect: Get http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.25/images/json: open //./pipe/docker_engine: The system cannot find the file specified. In the default daemon configuration on Windows, the docker client must be run elevated to connect. This error may also indicate that the docker daemon is not running.
[-] At least one of 'microsoft/windowsservercore' or 'microsoft/nanoserver' should be installed 129ms
ValidationMetadataException: The argument is null or empty. Provide an argument that is not null or empty, and then try the command again.
ParameterBindingValidationException: Cannot validate argument on parameter 'Property'. The argument is null or empty. Provide an argument that is not null or empty, and then try the command again.
at <ScriptBlock>, <No file>: line 90
Describing Container network is created
[-] Error occurred in Describe block 1.08s
RuntimeException: Cannot index into a null array.
at <ScriptBlock>, <No file>: line 119
Showing output from: docker info
Showing output from: docker version
Client:
Version: 1.13.0-dev
API version: 1.25
Go version: go1.7.3
Git commit: d8d3314
Built: Tue Nov 1 03:05:34 2016
OS/Arch: windows/amd64
Showing output from: docker network ls
Warnings & errors from the last 24 hours
Logs saved to C:\Projects\logs_20161107-084122.csv
C:\Projects>
| [
"The error is related to that part:\n\nIn the default daemon configuration on Windows, the docker client must\nbe run elevated to connect\n\n\nFirst, verify that Docker Desktop application is running. If not, launch it: that will run the docker daemon (just wait few minutes).\n\nThen, if the error still persist, you can try to switch Docker daemon type, as explained below:\n\n\nWith Powershell:\n\nOpen Powershell as administrator\nLaunch command: & 'C:\\Program Files\\Docker\\Docker\\DockerCli.exe' -SwitchDaemon\n\nOR, with cmd:\n\nOpen cmd as administrator\nLaunch command: \"C:\\Program Files\\Docker\\Docker\\DockerCli.exe\" -SwitchDaemon\n\n",
"I had the same problem.\nStarting the docker daemon resolved the issue. Just search for docker pressing windows key and click on \"Docker Dekstop\". Daemon should be running in a minute.\n\nAfter starting up Docker Desktop, make sure the docker daemon status in the bottom left is green and shows RUNNING when you hover over it.\n",
"You can run \"C:\\Program Files\\Docker\\Docker\\DockerCli.exe\" -SwitchDaemon and point Docker CLI to either Linux or Windows containers. This worked for me.\n",
"Error Code:\n\nerror during connect: Get\n http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.29/version: open\n //./pipe/docker_engine: The system cannot find the file specified. In the\n default daemon configuration on Windows, the docker client must be run\n elevated to connect . This error may also indicate that the docker\n daemon is not running.\n\nSolutions:\n1) For Windows 7 Command Window(cmd.exe), open cmd.exe with run as administrator and execute following command:\ndocker-machine env --shell cmd default\n\nYou will receive following output:\nSET DOCKER_TLS_VERIFY=1\nSET DOCKER_HOST=tcp://192.168.99.100:2376\nSET DOCKER_CERT_PATH=C:\\Users\\USER_NAME\\.docker\\machine\\machines\\default\nSET DOCKER_MACHINE_NAME=default\nSET COMPOSE_CONVERT_WINDOWS_PATHS=true\nREM Run this command to configure your shell:\nREM @FOR /f \"tokens=*\" %i IN ('docker-machine env --shell cmd default') DO @%i\n\nCopy the command below and execute on cmd:\n@FOR /f \"tokens=*\" %i IN ('docker-machine env --shell cmd default') DO @%i\n\nAnd then execute following command to control:\ndocker version\n\n2) For Windows 7 Powershell, open powershell.exe with run as administrator and execute following command:\ndocker-machine env --shell=powershell | Invoke-Expression\n\nAnd then execute following command to control:\ndocker version\n\n3) If you reopen cmd or powershell, you should repeat the related steps again.\n",
"If you see docker desktop is STOPPED or Not Running screen at left side bottom, then do following\n\nOpen PowerShell with – Run as Administrator\nClose Docker Desktop if it is open\nExecute the following command on PowerShell\n“& 'C:\\Program Files\\Docker\\Docker\\DockerCli.exe' -SwitchDaemon”\nOpen Docker Desktop, it will get started.\n\nI was facing this issue. I tried the above-mentioned steps and it worked for me. Thanks!\n",
"I know this question was long ago but I found no proper explanation and solution, so hopefully, my answer is useful :)\nAssuming you install Docker Toolbox on Windows, both docker and docker-machine commands will be available. Often, people get confused when to use either of these.\nThe docker commands are used only within a virtual machine to manage images. The docker-machine commands are used on the host to manage the Linux VMs.\nSo, please use docker-machine commands on your Windows machine. Use docker command inside your VM. To use the docker commands, for example, docker ps, you either can open Docker Quickstart Terminal or run these on your cmd/bash/PowerShell:\ndocker-machine run default /assuming default is your Linux VM/\ndocker-machine ssh default\nThis will start boot2docker and you will see the docker icon on the command line. Then you can use docker commands.\nGood luck :)\n",
"1.- Open the location of the shortcut:\n\n2.- Right click and properties and add \"-SwitchDaemon\" to destiny\n\n3.- Give administrator permissions, advanced options:\n\n4.- Restart windows.\n",
"Try resolving the issue with either of the following options:\nOption A\nStart-Service \"Hyper-V Virtual Machine Management\"\nStart-Service \"Hyper-V Host Compute Service\"\n\nor\nOption B\n\nOpen \"Window Security\"\nOpen \"App & Browser control\"\nClick \"Exploit protection settings\" at the bottom\nSwitch to \"Program settings\" tab\nLocate \"C:\\WINDOWS\\System32\\vmcompute.exe\" in the list and expand it\nClick \"Edit\"\nScroll down to \"Code flow guard (CFG)\" and uncheck \"Override system settings\"\nStart vmcompute from powershell \"net start vmcompute\"\nThen restart your system\n\n",
"I got the same error for Docker version 19.03.12 and Windows 10. Resolved it by going through the below steps. Hope it helps others.\n\nGo to Windows Start -> Search Box (Type here to search). There\nenter 'Services'. Among the listed items, click Services app.\nNow search 'Docker Desktop Service' in the Services window opened. Right click on it and Start the service. Its status should be changed to 'Running'.\nIf step 2 gives error like 'the dependency service failed to start', then start all dependency services. For me, I had to start a service called 'Server'.\nDouble click 'Docker Desktop' icon in desktop. Now you will see 'Docker Desktop is running' in system tray.\nNow run the command 'docker version' from Command Prompt or PowerShell. It should give clean output.\nIf any issue in step 5, run Command Prompt or PowerShell as administrator.\n\nAbove resolution assumes Docker is already installed and Hyper-V / Virtualization is enabled in your system.\n",
"I have faced same issue, it may be issue of administrator, so followed below steps to setup docker on \n\nwindows10\n\n.\n\nDownload docker desktop from docker hub after login to docker.Docker Desktop Installer.exe file will be downloaded.\nInstall Docker Desktop Installer.exeusing Run as administrator -> Mark windows container during installation else it will only install linux container. It will ask for Logout after logging out and login it shows docker desktop in menu.\nAfter install, go to -> computer management -> Local users and groups -> Groups -> docker-user -> Add user in members\n\nRun docker desktop using Run as administrator\n\nCheck docker whale icon in Notification tab\n\nrun command >docker version\n\nSuccessfully using docker without any issue.\n\n",
"if you are in windows try this\n docker-machine env --shell cmd default \n @FOR /f \"tokens=*\" %i IN ('docker-machine env --shell cmd default') DO @%i\n\nfor testing try \ndocker run hello-world\n\n",
"If you have installed docker on Windows 10 Pro with Hyper-V enabled and you are still not able to run Docker on Windows 10, then, as the error suggests, your docker daemon is not started. \nThe following steps helped me to start docker successfully:\n\nUse command on cmd(Admin mode)\ndocker-machine restart default\n\nThen you'll get a message something like:\n\nopen C:\\User\\\\{User_name}\\\\.docker\\machine\\machines\\default\\config.json:\n The system cannot find the file specified.\n\nGo to the docker icon which will be on your windows tray (bottom right corner of the desktop)\nRight click on the docker icon > Settings > Reset > Restart Docker \nIt will take few moments\nThen you'll see the following message:\n\nDocker is running with the green indicator\n\n\nNote: If you already had Docker containers running on your system, then don't follow these steps. You may lose the existing containers.\n\n",
"Reason : one reason may cause because we shut down the vmmem by command\nwsl --shutdown\n\nSolution : Simple Restart the Docker by right-clicking will fix the problem.\n\n",
"The same issue arrived when I started with the docker in windows 10. I was able to run docker --version successfully but failed when I tried to run docker pull docker/whalesay.\nI tried many things suggested in the answers over here but my issue was resolved when I followed the below steps: \n1 . Search for docker in windows and run docker desktop as administrator.\n2 . Check the bottom-left docker symbol it should be green if the docker is running.\n3 . If it's not running first install \"wsl_update\".\n4 . Open the docker desktop and sign in with your docker credentials, when you are logged in you can see the server restarting and the bottom left logo turns green.\n5. To check whether docker is running or not open PowerShell as administrator and run docker run hello-world.\n",
"For me on Windows 11, editing %APPDATA%\\Docker\\settings.json to the following values and then restarting Docker Desktop did the trick (I am using WSL2, not Hyper-V):\n\n",
"For me the issue was virtualization was not enabled.\nOn windows 10: Go to task manager -> Performance -> CPU and you should see as section as \"Virtualization : Enabled\"\nIf you do not see this option, it means that virtualization has not been enabled.\nAnother interesting thing to note is you must have Hyper V enabled. However as I was using parallels desktop, I had to enabled to \"Nested Virtualization\" for Hyper V to be \"truly enabled\". So if your windows is a VM, check out the settings for Parallels (or whatever you're using) that nested virtualization is enabled.\n",
"I was getting same errors after an install on Windows 10. And I tried restarting but it did not work, so I did the following (do not recommend if you have been working in docker for awhile, this was on a fresh install):\n1) Find the whale in your system tray, and right click\n2) Go to settings > Reset\n3) Reset to factory defaults\nI was then able to follow the starting docker tutorial on the website with Windows 10, and now it works like a charm.\n",
"Open C drive in powershell Or Git bash and run below command\n.\\Program Files\\Docker\\Docker\\DockerCli.exe -SwitchDaemon\n\n",
"My solution was pretty simple. I noticed that docker was running linux containers instead of windows containers. What i did is switch to windows containers by right clicking on the docker icon in the system tray and choosing Switch to Windows Containers.\n",
"I had the same issue lately. Problem was Security Software(Trendmicro) was blocking docker to create Hyperv network interface. You should also check firewall, AV software not blocking installation or configuration.\n",
"For me the error was resolved by stopping a virtual Ubuntu instance that'd been running in Hyper-V:\nThe system cannot find the file specified. In the default daemon configuration on Windows, the docker client must be run elevated to connect. This error may also indicate that the docker daemon is not running.\nOnce Ubuntu instance had been stopped, and Docker Desktop had been restarted, my usual docker commands ran just fine. \nPS: I had the idea to try this because of an Error Log that Docker Desktop had helpfully compiled and offered to send to Docker Hub as user feedback... the log appeared to indicate that my machine was short on RAM, and Docker was failing for this very simple reason. Killing the Ubuntu instance solved that.\n",
"If none of the other answers work for you, try this:\nOpen up a terminal and run:\nwsl -l -v \n\nIf you notice that there's a docker-desktop left hanging in the 'Installing' state, close Docker, run powershell as adminstrator and unregister docker-desktop:\nPS C:\\WINDOWS\\system32> .\\wslconfig.exe /u docker-desktop\n\nRestart docker and hopefully it works. If it doesn't, try uninstalling docker first, then unregistering docker-desktop, and re-installing Docker.\nSource: https://github.com/docker/for-win/issues/7295#issuecomment-645989416\n",
"In my case the WSL2 Linux-Kernel was missing, download, execute and restart:\nhttps://learn.microsoft.com/de-de/windows/wsl/wsl2-kernel\nSolved the problem.\n",
"One of my friends was having a similar issue, we tried this and it worked.\nHyper-V, despite being listed under \"Turn Windows features on or off\" as being active, was not in fact active. This became apparent when running systeminfo under PowerShell, and seeing\nthat the requirements were listed as met (which is not the output you would expect were Hyper-V actually running).Steps:\n\nOpen \"Turn Windows features on or off\"\nIf you are not sure how to do this please refer\nhttps://www.howtogeek.com/250228/what-windows-10s-optional-features-do-and-how-to-[turn-them-on-or-off/][1]\nTurn Hyper-V off (uncheck box, making sure all sub-components are marked as off)\nHit \"Ok\" - and your machine will reboot.\nWhen your computer starts up again, open \"Turn Windows features on or off\" and turn Hyper-V back on. Your machine will reboot again.\n\nNow you can test by running docker hello-world image.\n",
"After installing docker desktop into your pc (windows one). You may find up this location. What is actually does,? It starts the Docker Daemon via your CLI\n\"C:\\Program Files\\Docker\\Docker\\DockerCli.exe\" -SwitchDaemon\n\n",
"For Installation in Windows 10 machine:\nBefore installing search Windows Features in search and check the windows hypervisor platform and Subsystem for Linux\n\nInstallation for WSL 1 or 2 installation is compulsory so install it while docker prompt you to install it.\nhttps://learn.microsoft.com/en-us/windows/wsl/install-win10\nYou need to install ubantu(version 16,18 or 20) from windows store:\nubantu version 20\nAfter installation you can run command like docker -version\nor docker run hello-world in Linux terminal.\nThis video will help:\nhttps://www.youtube.com/watch?v=5RQbdMn04Oc&t=471s\n",
"Make sure you have Hyper-V enabled, that was the problem in my case.\n",
"That's worked for me on win10-home https://github.com/docker/for-win/issues/11967\n\nShutdown your service docker\nNow execute this into the window command terminal\nRMDIR /S %USERPROFILE%\\AppData\\Roaming\\Docker\nStartup your service docker\nNow click on your \"Docker Desktop\"\n\nThe \"Docker Desktop\" will now runnig ... done ... :)\n",
"You can also use Self-diagnose tool\nDocker Desktop contains a self-diagnose tool which helps you to identify some common problems. Before you run the self-diagnose tool, locate com.docker.diagnose.exe. This is usually installed in C:\\Program Files\\Docker\\Docker\\resources\\com.docker.diagnose.exe.\nTo run the self-diagnose tool in Powershell:\n& \"C:\\Program Files\\Docker\\Docker\\resources\\com.docker.diagnose.exe\" check\n\nThe tool runs a suite of checks and displays PASS or FAIL next to each check. If there are any failures, it highlights the most relevant at the end.\nThen run This command\n& 'C:\\Program Files\\Docker\\Docker\\DockerCli.exe' -SwitchDaemon\n\n",
"I had the same issue in the terminal right after installation of Docker Desktop 4.7.1 running with WSL 2 backend. The tray whale icon was not showing either.\nIn my case the problem was that I already had a WSL distribution (Ubuntu) installed before and it has been the default. Docker Desktop with WSL 2 backend installs its own distribution called docker-desktop. And it has to be the default one (at least if not configured elsewhere).\nSo I had to run this command in PowerShell: wsl --setdefault docker-desktop and restart docker services. Found the solution here.\n",
"I am using Windows 7 with Docker Toolbox and to fix it just open\nDocker Quickstart Terminal.\n\n$ docker version Client: Version: 17.05.0-ce API version: 1.29\n Go version: go1.7.5 Git commit: 89658be Built: Fri May 5\n 15:36:11 2017 OS/Arch: windows/amd64\nServer: Version: 17.05.0-ce API version: 1.29 (minimum version\n 1.12) Go version: go1.7.5 Git commit: 89658be Built: Thu May 4 21:43:09 2017 OS/Arch: linux/amd64 Experimental: false\n\n",
"Delete the folder under %appdata%\\Docker as indicated in Github issues\nFor quick access press Ctrl+R, paste \"%appdata%\\Docker\" then Enter, it should open a folder located in AppData\\Roaming\\Docker (e.g. C:\\Users\\YourUsername\\AppData\\Roaming\\Docker)\n",
"For win10 I had the same issue:\nerror during connect: Post http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.39/images/load?quiet=0: open //./pipe/docker_engine: The system cannot find the file specified. In the default daemon configuration on Windows, the docker client must be run elevated to connect. This error may also indicate that the docker daemon is not running.\n\nThe docker service appeared to work. Restarting did not help.\nRunning the binary from the installation directory as administrator helped.\nIn my case:\nrun as administrator -> C:\\Program Files\\Docker\\Docker\\Docker for Windows.exe\n\n",
"For Windows -\nOpen 'Docker for Desktop' --> Go on debug icon -> Click on 'Reset to factory defaults'\n",
"My case was that I ran docker commands in WSL shell and were still able to do this, while in git-bash (or another windows based shell) i was facing this error.\nThe solution for me was this answer but then restarting windows\n",
"Try running the following from an elevated command prompt:\nSET DOCKER_CERT_PATH=C:\\Users\\[YourName]\\.docker\\machine\\machines\\default\nSET DOCKER_HOST=tcp://[yourDockerDeamonIp]:2376\nSET DOCKER_MACHINE_NAME=default\nSET DOCKER_TLS_VERIFY=1\nSET DOCKER_TOOLBOX_INSTALL_PATH=C:\\Program Files\\Docker Toolbox\n\nYou might also find that even without setting those env variables, running commands from the docker quick start terminal works no problem.\n",
"I run in to the same problem. I solved this by enabling hyper-v.\n\nEnable virtualization in BIOS\nInstall hyper-v \n\n",
"I too faced error which says\n\"Access is denied. In the default daemon configuration on Windows, the docker client must be run elevated to connect. This error may also indicate that the docker daemon is not running.\"\n\nResolved this by running \"powershell\" in administrator mode.\nThis solution will help those who uses two users on one windows machine\n",
"Solved for me by running a docker desktop app, check-in notification. Setup if necessary.\n$ net start com.docker.service\n\nThe Docker for Windows Service service is starting.\nThe Docker for Windows Service service was started successfully.\n$ docker version\n\n$ net start com.docker.service\n\nThe requested service has already been started.\n",
"with the recent update of docker, I had an issue which was docker app hanged at startup. I resolved this by terminating wsl.exe using taskmanager.\n\n",
"For me this issue resolved by singing in Docker Desktop. \n\n",
"You need the admin privilege to run the service\nI had a similar issue. The problem goes away when I run command prompt ( run as an administrator\", and type \" docker version\".\nC:\\WINDOWS\\system32>docker version\n\n\nClient: Docker Engine - Community\n Version: 19.03.8\n API version: 1.40\n Go version: go1.12.17\n Git commit: afacb8b\n Built: Wed Mar 11 01:23:10 2020\n OS/Arch: windows/amd64\n Experimental: false\n\nServer: Docker Engine - Community\n Engine:\n Version: 19.03.8\n API version: 1.40 (minimum version 1.12)\n Go version: go1.12.17\n Git commit: afacb8b\n Built: Wed Mar 11 01:29:16 2020\n OS/Arch: linux/amd64\n Experimental: false\n containerd:\n Version: v1.2.13\n GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429\n runc:\n Version: 1.0.0-rc10\n GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd\n docker-init:\n Version: 0.18.0\n GitCommit: fec3683\n\n",
"Faced the similar issue, having installed docker desktop on a Windows VM, running on WSL2.\nSolution:\nUpdated the Windows to latest build and VMTools to the latest(11.2) version, fixed the issue, now docker is running non-stop.\n",
"The easiest way I fixed the issue is by terminating the docker desktop and restarting it again. If you see a blue-lit docker icon in the bottom left corner, then that means the docker daemon has started successfully, and the above error should be fixed.\n",
"I had this issue, when am trying to create MySQL image using the command line\n\nTo fix this I just wait for the Docker Desktop app to start and running correctly then I tried again.\n\n\n",
"\nUninstall Docker in “Add or remove programs”\nRestart your computer\nInstall Docker as Administrator (and not by running the installer directly)\nIf the installer asks for a reboot, do it\n\n",
"It may be because docker daemon has choosen linux and broken\ntry switching to windows or linux by using this command\nLaunch powershell using administrator and run below command\n'C:\\Program Files\\Docker\\Docker\\DockerCli.exe' -SwitchDaemon\n\nOr open cmd prompt as administrator and rub below command\n\"C:\\Program Files\\Docker\\Docker\\DockerCli.exe\" -SwitchDaemon\n\n",
"First, I downloaded docker for windows 10, OS Built 19042 and version 20H2, as shown in this video,\nbut my docker was at the beginning stage. I run the docker with the command provided, but I got such an\nerror during connect: Get http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.24/version: open //./pipe/docker_engine: The system cannot find the file\nThen, these solutions worked for me to start the docker:\n\nOpen Powershell as administrator &\nrun this command: 'C:\\Program Files\\Docker\\Docker\\DockerCli.exe' -SwitchDaemon\n\nOR\n\nOpen cmd as administrator &\nrun this command: \"C:\\Program Files\\Docker\\Docker\\DockerCli.exe\" -SwitchDaemon\n\n\nI found this from here. Hope this helps you too!\n",
"If you are getting this pop-up:\n\nClick the link in pop-up. And dowloand this 'WSL2 Linux kernel update package for x64 machines':\n\nOnce you downloaded it. Go through the installation.\nThen restart Docker. It will work.\n",
"Somehow my docker desktop couldn't start in the first attempt post installation and system restart, so i killed the docker process in the task manager and opened the docker desktop again, viola it started fine. Able to run projects from cmd prompt (docker run -d -p <project_name>), able to see my container images as well in docker desktop.\n\n"
] | [
520,
107,
77,
46,
25,
15,
9,
7,
7,
6,
5,
5,
5,
5,
5,
4,
4,
4,
4,
3,
3,
3,
3,
3,
3,
3,
3,
3,
3,
3,
2,
2,
1,
1,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"I am using window 10 and i performed below steps to resolve this issue.\n\ncheck Virtualization is enabled from taskmanager-->performance \nRestarted the docker service \nInstall the latest docker build and restarted the machine.\nMake sure the docker service is running.\n\nAbove steps helped me to resolve the issue.\n",
"After unsuccessfully trying everything from these answers I simply upgraded to Windows 11\n(In my case: 19043.1237 -> 22000.258)\n",
"I have faced such a problem a couple of times, but every time it's because I did not start the docker.\nTo solve this issue, Just open your docker and wait until it finishes starting.\n",
"You can start Kitematic when you get this error. It will display a button to reset the VM and will fix the issue.\n",
"Be sure you start Powershell \"as Administrator\" that will also prevent the error you got from docker version .\nthese hints will be probably outdated as of 2021:\nThen try to start the docker service: start-service docker\nIf that fails delete the docker.pid file you will find with cd $env:programfiles\\docker; rm docker.pid\nFinally you should change HKLM:\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\Virtualization\\Containers\\VSmbDisableOplocks to 0 or delete the value.\n",
"This is the final solution.. its works for me...!!\n1) Find the whale in your system tray, and right click\n2) Go to settings > Reset\n3) Reset to factory defaults\n"
] | [
-1,
-1,
-1,
-2,
-3,
-4
] | [
"docker",
"docker_desktop"
] | stackoverflow_0040459280_docker_docker_desktop.txt |
Q:
Xcode 5.1 unable to find utility "make", not a developer tool or in PATH
I am trying to follow these directions to install Google Protocol Buffers. After creating the script, I try to run it with the following command:
$ ./build-proto-ios.sh
I receive the following output:
mkdir: ios-build: File exists
Platform is iPhoneSimulator
./build-proto-ios.sh: line 40: ./configure: No such file or directory
make: error: unable to find utility "make", not a developer tool or in PATH
make: error: unable to find utility "make", not a developer tool or in PATH
cp: src/.libs/libprotobuf-lite.a: No such file or directory
Platform is iPhoneOS
./build-proto-ios.sh: line 40: ./configure: No such file or directory
make: error: unable to find utility "make", not a developer tool or in PATH
make: error: unable to find utility "make", not a developer tool or in PATH
cp: src/.libs/libprotobuf-lite.a: No such file or directory
Platform is iPhoneOS
./build-proto-ios.sh: line 40: ./configure: No such file or directory
make: error: unable to find utility "make", not a developer tool or in PATH
make: error: unable to find utility "make", not a developer tool or in PATH
cp: src/.libs/libprotobuf-lite.a: No such file or directory
make: error: unable to find utility "make", not a developer tool or in PATH
fatal error: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/lipo: can't open input file: ios-build/libprotobuf-lite-armv7.a (No such file or directory)
So, I looked for information on:
unable to find utility "make", not a developer tool or in PATH
I found this information for installing the Xcode command line tools, because I thought that might be the cause. But even after installing the latest command line tools for OS X Mavericks, I'm still getting this error.
Any ideas?
A:
1) xcode-select -p -> /Applications/Xcode.app/Contents/Developer
2) sudo xcode-select --switch /Library/Developer/CommandLineTools
3) xcode-select -p -> /Library/Developer/CommandLineTools
4) eval "$(rbenv init -)"
5) rbenv install 2.5.3
6) rbenv local 2.5.3
Also, see that xcode is properly updated
A:
Make is included with Xcode. You do not need to install the command line tools to use it. It lives inside the Xcode application:
/Applications/Xcode.app/Contents/Developer/usr/bin/make
And can be accessed using xcrun.
xcrun make
This can still fail if for some reason the active developer directory is not set, or is incorrect. This can happen if you have moved Xcode, for instance.
Set it using xcode-select
sudo xcode-select -switch /Applications/Xcode.app/Contents/Developer
Check to see that you can access make using those methods from a command line session. If you can, the proto buffers build scripts are likely the problem.
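A quick sanity check from a terminal, assuming a standard Xcode install path:
xcode-select -p          # should print /Applications/Xcode.app/Contents/Developer
xcrun --find make        # should resolve to make inside the Xcode toolchain
xcrun make --version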
A:
Looks like that's not your actual problem. When the script is running, it's looking for the included make in the iOS 6 folder... which probably doesn't exist.
Check out https://gist.github.com/PR3x/0fde040902ed4e9a1a61 for a script that will build protobuf as a fat library for you. Just place it in a new folder, chmod +x, and run. (Based on https://stackoverflow.com/a/19582682/939927)
One thing to note is that you'll need to change all of your generated .pb.* files to use the ::google_public: namespace instead of ::google:, as Apple uses that internally.
Another thing to note is that this only works for 32-bit ARM and the simulator. 64-bit ARM (iPhone 5s) doesn't build yet.
Good luck!
A:
For me, it was because of the wrong path of SDKROOT. This
solution worked for me.
export SDKROOT=$(xcrun -sdk macosx --show-sdk-path)
| Xcode 5.1 unable to find utility "make", not a developer tool or in PATH | I am trying to follow these directions to install Google Protocol Buffers. After creating the script, I try to run it with the following command:
$ ./build-proto-ios.sh
I receive the following output:
mkdir: ios-build: File exists
Platform is iPhoneSimulator
./build-proto-ios.sh: line 40: ./configure: No such file or directory
make: error: unable to find utility "make", not a developer tool or in PATH
make: error: unable to find utility "make", not a developer tool or in PATH
cp: src/.libs/libprotobuf-lite.a: No such file or directory
Platform is iPhoneOS
./build-proto-ios.sh: line 40: ./configure: No such file or directory
make: error: unable to find utility "make", not a developer tool or in PATH
make: error: unable to find utility "make", not a developer tool or in PATH
cp: src/.libs/libprotobuf-lite.a: No such file or directory
Platform is iPhoneOS
./build-proto-ios.sh: line 40: ./configure: No such file or directory
make: error: unable to find utility "make", not a developer tool or in PATH
make: error: unable to find utility "make", not a developer tool or in PATH
cp: src/.libs/libprotobuf-lite.a: No such file or directory
make: error: unable to find utility "make", not a developer tool or in PATH
fatal error: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/lipo: can't open input file: ios-build/libprotobuf-lite-armv7.a (No such file or directory)
So, I looked for information on:
unable to find utility "make", not a developer tool or in PATH
I found this information for installing the Xcode command line tools, because I thought that might be the cause. But even after installing the latest command line tools for OS X Mavericks, I'm still getting this error.
Any ideas?
| [
"1) xcode-select -p -> /Applications/Xcode.app/Contents/Developer\n2) sudo xcode-select --switch /Library/Developer/CommandLineTools\n3) xcode-select -p -> /Library/Developer/CommandLineTools\n4) eval \"$(rbenv init -)\"\n5) rbenv install 2.5.3\n6) rbenv local 2.5.3\nAlso, see that xcode is properly updated\n",
"Make is included with Xcode. You do not need to install the command line tools to use it. It lives inside the Xcode application:\n/Applications/Xcode.app/Contents/Developer/usr/bin/make\nAnd can be accessed using xcrun.\nxcrun make\nThis can still fail if for some reason the active developer directory is not set, or is incorrect. This can happen if you have moved Xcode, for instance.\nSet it using xcode-select\nsudo xcode-select -switch /Applications/Xcode.app/Contents/Developer\nCheck to see that you can access make using those methods from a command line session. If you can, the proto buffers build scripts are likely the problem.\n",
"Looks like that's not your actual problem. When the script is running, it's looking for the included make in the iOS 6 folder... which probably doesn't exist.\nCheck out https://gist.github.com/PR3x/0fde040902ed4e9a1a61 for a script that will build protobuf as a fat library for you. Just place it in a new folder, chmod +x, and run. (Based on https://stackoverflow.com/a/19582682/939927)\nOne thing to note is that you'll need to change all of your generated .pb.* files to use the ::google_public: namespace instead of ::google:, as Apple uses that internally.\nAnother thing to note is that this only works for 32-bit ARM and the simulator. 64-bit ARM (iPhone 5s) doesn't build yet.\nGood luck!\n",
"For me, it was because of the wrong path of SDKROOT. This\nsolution worked for me.\nxport SDKROOT=$(xcrun -sdk macosx --show-sdk-path\n\n"
] | [
5,
1,
1,
0
] | [] | [] | [
"command_line",
"ios",
"protocol_buffers"
] | stackoverflow_0023836149_command_line_ios_protocol_buffers.txt |
Q:
ImportError: cannot import name 'structural_similarity' error
In my image comparison code I am following: https://www.pyimagesearch.com/2014/09/15/python-compare-two-images/
While using
from skimage.measure import structural_similarity as ssim
and then
s = ssim(imageA, imageB)
I am getting error:
from skimage.measure import structural_similarity as ssim
ImportError: cannot import name 'structural_similarity'
A:
I found the solution. As this question is unique and not covered anywhere, I'm posting the answer.
#from skimage.measure import structural_similarity as ssim
from skimage import measure
.
.
.
#s = ssim(imageA, imageB)
s = measure.compare_ssim(imageA, imageB)
Change commented line to uncommented line.
A:
Please check your skimage version.
https://scikit-image.org/docs/dev/api/skimage.measure.html#skimage.measure.compare_ssim
Changed in version 0.16: This function was renamed from skimage.measure.compare_ssim to skimage.metrics.structural_similarity.
Hope it helps.
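A version-agnostic import sketch (the random arrays and data_range value are just illustrative stand-ins for real images):
import numpy as np

# The function moved in scikit-image 0.16, so try the new location first.
try:
    from skimage.metrics import structural_similarity as ssim  # skimage >= 0.16
except ImportError:
    from skimage.measure import compare_ssim as ssim  # skimage < 0.16

imageA = np.random.rand(64, 64)
imageB = np.random.rand(64, 64)
s = ssim(imageA, imageB, data_range=1.0)
print(s)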
A:
change import line to
from skimage.metrics import structural_similarity as ssim
This may work better than using compare_ssim since that is going to be deprecated
A:
I use next solution:
from skimage import metrics
metrics.structural_similarity(grayA, grayB, full=True)
| ImportError: cannot import name 'structural_similarity' error | In my image comparison code I am following: https://www.pyimagesearch.com/2014/09/15/python-compare-two-images/
While using
from skimage.measure import structural_similarity as ssim
and then
s = ssim(imageA, imageB)
I am getting error:
from skimage.measure import structural_similarity as ssim
ImportError: cannot import name 'structural_similarity'
| [
"I found the solution. As this question is unique and not covered anywhere. So, posting the answer.\n#from skimage.measure import structural_similarity as ssim\nfrom skimage import measure\n.\n.\n.\n#s = ssim(imageA, imageB)\ns = measure.compare_ssim(imageA, imageB)\n\nChange commented line to uncommented line.\n",
"Please check your skimage version.\nhttps://scikit-image.org/docs/dev/api/skimage.measure.html#skimage.measure.compare_ssim\nChanged in version 0.16: This function was renamed from skimage.measure.compare_ssim to skimage.metrics.structural_similarity.\nHope it helps.\n",
"change import line to \nfrom skimage.metrics import structural_similarity as ssim\n\nThis may work better than using compare_ssim since that is going to be deprecated \n",
"I use next solution:\nfrom skimage import metrics\nmetrics.structural_similarity(grayA, grayB, full=True)\n\n"
] | [
67,
27,
25,
0
] | [] | [] | [
"python_3.x",
"scikit_image"
] | stackoverflow_0055178229_python_3.x_scikit_image.txt |
Q:
How do I deal with pd.read_html returning HTTPError: HTTP Error 403: Forbidden?
I am trying to copy a table from a website using this code
covid = pd.read_html("https://covid19.ncdc.gov.ng/")[0].head()
and it is returning
HTTPError: HTTP Error 403: Forbidden
A:
You can use requests:
import pandas as pd
import requests
req=requests.get('https://covid19.ncdc.gov.ng/')
covid = pd.read_html(req.text)[0].head()
'''
| | States Affected | No. of Cases (Lab Confirmed) | No. of Cases (on admission) | No. Discharged | No. of Deaths |
|---:|:------------------|-------------------------------:|------------------------------:|-----------------:|----------------:|
| 0 | Lagos | 104187 | 1044 | 102372 | 771 |
| 1 | FCT | 29508 | 19 | 29240 | 249 |
| 2 | Rivers | 18105 | 27 | 17923 | 155 |
| 3 | Kaduna | 11619 | 1 | 11529 | 89 |
| 4 | Oyo | 10352 | 6 | 10144 | 202 |
'''
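If the plain request is still blocked, a common workaround (not part of the answer above; the header value is illustrative) is to send a browser-like User-Agent, since many servers return 403 to the default python-requests agent:
import pandas as pd
import requests

# Hypothetical browser-like header; adjust as needed.
headers = {'User-Agent': 'Mozilla/5.0'}
req = requests.get('https://covid19.ncdc.gov.ng/', headers=headers)
covid = pd.read_html(req.text)[0].head()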
| How do I deal with pd.read_html returning HTTPError: HTTP Error 403: Forbidden? | I am trying to copy a table from a website using this code
covid = pd.read_html("https://covid19.ncdc.gov.ng/")[0].head()
and it is returning
HTTPError: HTTP Error 403: Forbidden
| [
"You can use requests:\nimport pandas as pd\nimport requests\nreq=requests.get('https://covid19.ncdc.gov.ng/')\ncovid = pd.read_html(req.text)[0].head()\n'''\n| | States Affected | No. of Cases (Lab Confirmed) | No. of Cases (on admission) | No. Discharged | No. of Deaths |\n|---:|:------------------|-------------------------------:|------------------------------:|-----------------:|----------------:|\n| 0 | Lagos | 104187 | 1044 | 102372 | 771 |\n| 1 | FCT | 29508 | 19 | 29240 | 249 |\n| 2 | Rivers | 18105 | 27 | 17923 | 155 |\n| 3 | Kaduna | 11619 | 1 | 11529 | 89 |\n| 4 | Oyo | 10352 | 6 | 10144 | 202 |\n'''\n\n\n"
] | [
0
] | [] | [] | [
"dataframe",
"error_handling",
"html",
"list",
"python"
] | stackoverflow_0074670041_dataframe_error_handling_html_list_python.txt |
Q:
No metadata for "User" was found using TypeOrm
I'm trying to get a basic setup working using TypeORM, and getting this error following the setup.
Here is a REPL (just do yarn install && yarn db:dev followed by yarn db:migrate && yarn start to reproduce the error)
Inserting a new user into the database...
{ EntityMetadataNotFound: No metadata for "User" was found.
at new EntityMetadataNotFoundError (/Users/admin/work/typeorm-naming-strategy/src/error/EntityMetadataNotFoundError.ts:9:9)
at Connection.getMetadata (/Users/admin/work/typeorm-naming-strategy/src/connection/Connection.ts:313:19)
at /Users/admin/work/typeorm-naming-strategy/src/persistence/EntityPersistExecutor.ts:77:55
at Array.forEach (<anonymous>)
at EntityPersistExecutor.<anonymous> (/Users/admin/work/typeorm-naming-strategy/src/persistence/EntityPersistExecutor.ts:71:30)
at step (/Users/admin/work/typeorm-naming-strategy/node_modules/typeorm/persistence/EntityPersistExecutor.js:32:23)
at Object.next (/Users/admin/work/typeorm-naming-strategy/node_modules/typeorm/persistence/EntityPersistExecutor.js:13:53)
at /Users/admin/work/typeorm-naming-strategy/node_modules/typeorm/persistence/EntityPersistExecutor.js:7:71
at new Promise (<anonymous>)
at __awaiter (/Users/admin/work/typeorm-naming-strategy/node_modules/typeorm/persistence/EntityPersistExecutor.js:3:12)
name: 'EntityMetadataNotFound',
message: 'No metadata for "User" was found.' }
A:
Adding this answer a bit late, but it's a very common error.
There are two main reasons for above error. In OrmConfig,
You have used *.ts instead of *.js. For example,
entities: [__dirname + '/../**/*.entity.ts'] <-- Wrong
It should be
entities: [__dirname + '/../**/*.entity.js']
entities path is wrong. Make sure, entities path is defined according to dist folder not src folder.
A:
The problem is on ormConfig
Please try to use this:
entities: [__dirname + '/../**/*.entity.{js,ts}']
A:
In my case I had forgotten to add the new entity (User in your case) to app.module.ts.
Here is the solution that worked for me:
// app.module.ts
@Module({
imports: [
TypeOrmModule.forRoot({
...
entities: [User]
...
}),
...
],
})
A:
Following will ensure that the file extensions used by TypeORM are both .js and .ts
entities: [__dirname + '/../**/*.entity.{js,ts}']
change this in your config file.
A:
I got this error message, but I solved it in a different way. It could also happen when you already have a table called "users", for example, and your User model does not have the same columns and configs as your table. In my case my table had a column called posts with a default value, and my model didn't have that default value set up. So before you check the entities directory in ormconfig.json, I highly recommend you check whether your models have the same properties and configs as your database table.
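A hedged sketch of the kind of mismatch meant here (the posts column and its default are illustrative, not from a real project):
import { Entity, PrimaryGeneratedColumn, Column } from 'typeorm';

@Entity('users')
export class User {
  @PrimaryGeneratedColumn()
  id: number;

  // If the existing "users" table declares a default for this column,
  // the entity should declare the same default so the metadata matches.
  @Column({ default: 0 })
  posts: number;
}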
A:
If your app is using the latest DataSource instead of OrmConfig (like apps using the latest version of typeorm (0.3.1 at the moment I'm writing these lines)), make sure to call the initialize() method of your DataSource object inside your data-source.ts file before using it anywhere in your app.
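A minimal sketch of that step (connection options and the entity path are placeholders):
// data-source.ts
import 'reflect-metadata';
import { DataSource } from 'typeorm';
import { User } from './entity/User'; // assumed entity path

export const AppDataSource = new DataSource({
  type: 'postgres', // placeholder options
  host: 'localhost',
  port: 5432,
  username: 'postgres',
  password: 'postgres',
  database: 'test',
  entities: [User],
});

// Call this once at startup, before any repository/manager use,
// otherwise "No metadata for ... was found" is thrown.
export const init = () => AppDataSource.initialize();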
A:
enable autoLoadEntities in app.module.ts
imports: [UserModule,
TypeOrmModule.forRoot({
autoLoadEntities: true
})
]
A:
I believe my issue was caused by accidentally having both ormconfig.json and ormconfig.ts
A:
I'll contribute to this list of answers. In my case, I'm using a fancy-pants IDE that likes to auto-transpile the ts code into js without my knowledge. The issue I had was, the resulting js file was no bueno. Killing these auto-gen'd files was the answer.
The relevant bit of my ormconfig looks like:
{
...
"entities": ["src/entity/**/*.ts", "test/entity-mocks/**/*.ts"],
"migrations": ["src/migration/**/*.ts"],
"subscribers": ["src/subscriber/**/*.ts"]
...
}
Actually, I even tried the methods above to also include transpiled js files if they exist and I got other weird behavior.
A:
In my case I forgot to call the connection.connect() method after creating the connection, and got the same error when using manager.find().
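For the pre-0.3 API this answer refers to, a sketch of the missing step (options abbreviated, entity path assumed):
import { getConnectionManager } from 'typeorm'; // typeorm < 0.3 API
import { User } from './entity/User'; // assumed entity path

async function main() {
  const connection = getConnectionManager().create({
    type: 'postgres', // placeholder options
    entities: [User],
  });
  await connection.connect(); // the step that was forgotten
  const users = await connection.manager.find(User);
  console.log(users);
}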
A:
I've tried all the solutions above, changing the typeORM version solved it for me.
A:
Make sure you have established a connection with the database before using the entities. In my case, I was using the AppDataSource before it was initialized. Here is how I fixed it:
import "reflect-metadata";
import { DataSource } from "typeorm";
import { Season } from "src/models/Season";
const AppDataSource = new DataSource({
type: "postgres",
host: "localhost",
port: 5432,
username: "postgres",
password: "postgres",
database: "test-db",
synchronize: false,
logging: false,
entities: [Season],
migrations: [],
subscribers: [],
});
AppDataSource.initialize()
.then(async () => {
console.log("Connection initialized with database...");
})
.catch((error) => console.log(error));
export const getDataSource = (delay = 3000): Promise<DataSource> => {
if (AppDataSource.isInitialized) return Promise.resolve(AppDataSource);
return new Promise((resolve, reject) => {
setTimeout(() => {
if (AppDataSource.isInitialized) resolve(AppDataSource);
else reject("Failed to create connection with database");
}, delay);
});
};
And in your services where you want to use the DataSource:
import { getDataSource } from "src/config/data-source";
import { Season } from "src/models/Season";
const init = async (event) => {
const AppDataSource = await getDataSource();
const seasonRepo = AppDataSource.getRepository(Season);
// Your business logic
};
You can also extend my function to add retry logic if required :)
A:
Just in case someone runs into the same problem I had: there were two different things going against me.
I was missing the declaration of entity in ormConfig:
ormConfig={
...,
entities: [
...,
UserEntity
]
}
And since i was making changes (afterwards) to an existing entity, the cache was throwing a similar error. The solution for this was to remove the projects root folders: dist/ folder. Thou this might only be a Nest.js + TypeOrm issue.
A:
In my case the entity in question was the only one giving the error. The rest of entities worked fine.
The automatic import failed to write correctly the name of the file.
import { BillingInfo } from "../entity/Billinginfo";
instead of
import { BillingInfo } from "../entity/BillingInfo";
The I for Info should be capital. The IDE also failed to show any errors on this import.
A:
This error can also come up if you have a nodemon.json file in your project.
So after including your entity directory for both build and dev in the entities array
export const AppDataSource = new DataSource({
type: 'mysql',
host: DB_HOST_DEV,
port: Number(DB_PORT),
username: DB_USER_DEV,
password: DB_PASSWORD_DEV,
database: DB_NAME_DEV,
synchronize: false,
logging: false,
entities: [
process.env.NODE_ENV === "prod"
? "build/entity/*{.ts,.js}"
: "src/entity/*{.ts,.js}",
],
migrations: ["src/migration/*.ts"],
subscribers: [],
})
and still getting this error, remove the nodemon.json file if you have it in your project
A:
My mistake was that instead of putting the class with my Models, I put it with my Requests. Requests are loaded earlier, so the entity ended up undefined. Try moving your model class to your Models folder.
A:
Add your entity in orm.config.ts file.
entities: [empData,user],
A:
I had omitted the @Entity() decorator on the entity class. As soon as I fixed this, it worked for me.
Example:
import { ObjectType, Field, ID } from '@nestjs/graphql';
import { Column, Entity, PrimaryGeneratedColumn } from 'typeorm';
@ObjectType()
@Entity()
export class Message {
@Field((type) => ID)
@PrimaryGeneratedColumn('uuid')
id: string;
@Field()
@Column()
conversationId: string;
@Field()
@Column()
sender: string;
}
A:
I got the same issue and later found I hadn't called the functionality to connect to the DB. Just calling the connection fixed my issue.
export const AppDataSource = new DataSource({
type: 'mysql',
host: process.env.MYSQL_HOST,
....
});
let dataSource: DataSource;
export const ConnectDb = async () => {
dataSource = await AppDataSource.initialize();
And use this to connect in your function.
On another occasion I got a similar message when I ran $ npm test without properly mocking with jest.spyOn(), and fixed it by:
jest.spyOn(db, 'MySQLDbCon').mockReturnValueOnce(Promise.resolve());
const updateResult = {}
as UpdateResult;
jest.spyOn(db.SQLDataSource, 'getRepository').mockReturnValue({
update: jest.fn().mockResolvedValue(updateResult),
}
as unknown as Repository < unknown > );
A:
For me, an incorrect entities path caused this problem.
My broken config looked like this (screenshot omitted):

I then checked the entities path and corrected it, after which TypeORM worked.
For me it was two things, while using it with NestJS:
There was two entities with the same name (but different tables)
Once I fixed it, the error still happened, then I got to point two:
Delete the dist/build directory
Not sure how the build process works for typeorm, but once I deleted that and ran the project again it worked. Seems like it was not updating the generated files.
A:
In my case I've provided a wrong db password. Instead of receiving an error message like "connection to db failed" I've got the error message below.
EntityMetadataNotFound: No metadata for "User" was found.
A:
Using TypeORM with NestJS, for me the issue was that I was setting the migrations property of the TypeOrmModuleOptions object for the TypeOrmModuleAsyncOptions useFactory method, when instead I should've only set it on the migrations config, which uses the standard TypeORM DataSource type.
This is what I ended up with:
typeorm.config.ts
import { DataSource } from 'typeorm';
import {
TypeOrmModuleAsyncOptions,
TypeOrmModuleOptions,
} from '@nestjs/typeorm';
const postgresDataSourceConfig: TypeOrmModuleOptions = {
...
// NO migrations property here
// Had to add this as well
autoLoadEntities: true,
...
};
export const typeormAsyncConfig: TypeOrmModuleAsyncOptions = {
useFactory: async (): Promise<TypeOrmModuleOptions> => {
return postgresDataSourceConfig;
},
};
// Needed to work with migrations, not used by NestJS itself
export const postgresDataSource = new DataSource({
...postgresDataSourceConfig,
type: 'postgres',
// Instead use it here, because the TypeORM Nest Module does not care for migrations
// They must be done outside of NestJS entirely
migrations: ['src/database/migrations/*.ts'],
});
migrations.config.ts
import { postgresDataSource } from './typeorm.config';
export default postgresDataSource;
And the script to run TypeORM CLI was
"typeorm-cli": "ts-node -r tsconfig-paths/register ./node_modules/typeorm/cli -d ./src/database/migrations.config.ts"
A:
In my case, I was changing an EmployeeSchema to an EmployeeEntity, but I forgot the @Entity() annotation on it:
@Entity()
export class Employee {
A:
Besides the @Entity() annotation, with typeorm version 0.3.0 and above also don't forget to put all your entities into your DataSource:
export const dataSource = new DataSource({
type: "sqlite",
database: 'data/my_database.db',
entities: [User],
...
...
})
https://github.com/typeorm/typeorm/blob/master/CHANGELOG.md#030-2022-03-17
| No metadata for "User" was found using TypeOrm | I'm trying to get a basic setup working using TypeORM, and getting this error following the setup.
Here is a REPL (just do yarn install && yarn db:dev followed by yarn db:migrate && yarn start to reproduce the error)
Inserting a new user into the database...
{ EntityMetadataNotFound: No metadata for "User" was found.
at new EntityMetadataNotFoundError (/Users/admin/work/typeorm-naming-strategy/src/error/EntityMetadataNotFoundError.ts:9:9)
at Connection.getMetadata (/Users/admin/work/typeorm-naming-strategy/src/connection/Connection.ts:313:19)
at /Users/admin/work/typeorm-naming-strategy/src/persistence/EntityPersistExecutor.ts:77:55
at Array.forEach (<anonymous>)
at EntityPersistExecutor.<anonymous> (/Users/admin/work/typeorm-naming-strategy/src/persistence/EntityPersistExecutor.ts:71:30)
at step (/Users/admin/work/typeorm-naming-strategy/node_modules/typeorm/persistence/EntityPersistExecutor.js:32:23)
at Object.next (/Users/admin/work/typeorm-naming-strategy/node_modules/typeorm/persistence/EntityPersistExecutor.js:13:53)
at /Users/admin/work/typeorm-naming-strategy/node_modules/typeorm/persistence/EntityPersistExecutor.js:7:71
at new Promise (<anonymous>)
at __awaiter (/Users/admin/work/typeorm-naming-strategy/node_modules/typeorm/persistence/EntityPersistExecutor.js:3:12)
name: 'EntityMetadataNotFound',
message: 'No metadata for "User" was found.' }
| [
"Adding answer bit late but its very common error.\nThere are two main reasons for above error. In OrmConfig,\n\nYou have used *.ts instead of *.js. For example,\n\nentities: [__dirname + '/../**/*.entity.ts'] <-- Wrong\n\nIt should be\nentities: [__dirname + '/../**/*.entity.js'] \n\n\nentities path is wrong. Make sure, entities path is defined according to dist folder not src folder.\n\n",
"The problem is on ormConfig \nPlease try to use this:\nentities: [__dirname + '/../**/*.entity.{js,ts}']\n\n",
"In my case I had forgotten to add new entity ( User in your case ) to app.module.ts .\nHere is the solution that worked for me:\n// app.module.ts\n\n@Module({\n imports: [\n TypeOrmModule.forRoot({\n ...\n entities: [User]\n ...\n }),\n ...\n ],\n})\n\n",
"Following will ensure that the file extensions used by TypeORM are both .js and .ts \nentities: [__dirname + '/../**/*.entity.{js,ts}']\n\nchange this in your config file.\n",
"I got this err message, but i solved in a different way, it could also happen when you already have a table called \"users\" for example, and your model User is not with the same columns and configs as your table, in my case my table had a column called posts with a default value, and my model hadn't that default value set up, so before you check the entities directory on the ormconfig.json i highly recommend you check if your models are with the same properties and configs as your database table\n",
"If your app is using the latest DataSource instead of OrmConfig (like apps using the latest version of typeorm (0.3.1 the moment i'm writing theses lines)), make sure to call initialize() method of your DataSource object inside your data-source.ts file before using it anywhere in your app.\n",
"enable autoLoadEntities in app.module.ts\nimports: [UserModule,\n TypeOrmModule.forRoot({\n autoLoadEntities: true\n })\n]\n\n",
"I believe my issue was caused by accidentally having both ormconfig.json and ormconfig.ts\n",
"I'll contribute to this list of answers. In my case, I'm using a fancy-pants IDE that likes to auto-transpile the ts code into js without my knowledge. The issue I had was, the resulting js file was no bueno. Killing these auto-gen'd files was the answer.\nThe relevant bit of my ormconfig looks like:\n{\n ...\n \"entities\": [\"src/entity/**/*.ts\", \"test/entity-mocks/**/*.ts\"],\n \"migrations\": [\"src/migration/**/*.ts\"],\n \"subscribers\": [\"src/subscriber/**/*.ts\"]\n ...\n}\n\nActually, I even tried the methods above to also include transpiled js files if they exist and I got other weird behavior.\n",
"In my case i forgot to use connection.connect() method after creation of connection and got same error when using manager.find().\n",
"I've tried all the solutions above, changing the typeORM version solved it for me.\n",
"Make sure you have established a connection with the database before using the entities. In my case, I was using the AppDataSource before it was initialized. Here is how I fixed it:\nimport \"reflect-metadata\";\nimport { DataSource } from \"typeorm\";\nimport { Season } from \"src/models/Season\";\n\nconst AppDataSource = new DataSource({\n type: \"postgres\",\n host: \"localhost\",\n port: 5432,\n username: \"postgres\",\n password: \"postgres\",\n database: \"test-db\",\n synchronize: false,\n logging: false,\n entities: [Season],\n migrations: [],\n subscribers: [],\n});\n\nAppDataSource.initialize()\n .then(async () => {\n console.log(\"Connection initialized with database...\");\n })\n .catch((error) => console.log(error));\n\nexport const getDataSource = (delay = 3000): Promise<DataSource> => {\n if (AppDataSource.isInitialized) return Promise.resolve(AppDataSource);\n\n return new Promise((resolve, reject) => {\n setTimeout(() => {\n if (AppDataSource.isInitialized) resolve(AppDataSource);\n else reject(\"Failed to create connection with database\");\n }, delay);\n });\n};\n\nAnd in your services where you want to use the DataSource:\nimport { getDataSource } from \"src/config/data-source\";\nimport { Season } from \"src/models/Season\";\n\nconst init = async (event) => {\n const AppDataSource = await getDataSource();\n const seasonRepo = AppDataSource.getRepository(Season);\n // Your business logic\n};\n\nYou can also extend my function to add retry logic if required :)\n",
"Just in case someone runs into the same problem i had. I had to do two different things going against me:\nI was missing the declaration of entity in ormConfig:\normConfig={\n ...,\n entities: [\n ...,\n UserEntity\n ]\n\n}\n\nAnd since i was making changes (afterwards) to an existing entity, the cache was throwing a similar error. The solution for this was to remove the projects root folders: dist/ folder. Thou this might only be a Nest.js + TypeOrm issue.\n",
"In my case the entity in question was the only one giving the error. The rest of entities worked fine.\nThe automatic import failed to write correctly the name of the file.\nimport { BillingInfo } from \"../entity/Billinginfo\";\n\ninstead of\nimport { BillingInfo } from \"../entity/BillingInfo\";\n\nThe I for Info should be capital. The IDE also failed to show any errors on this import.\n",
"This error can also come up if you have a nodemon.json file in your project.\nSo after including your entity directory for both build and dev in the entities array\nexport const AppDataSource = new DataSource({\ntype: 'mysql',\nhost: DB_HOST_DEV,\nport: Number(DB_PORT),\nusername: DB_USER_DEV,\npassword: DB_PASSWORD_DEV,\ndatabase: DB_NAME_DEV,\nsynchronize: false,\nlogging: false,\nentities: [ \n process.env.NODE_ENV === \"prod\"\n ? \"build/entity/*{.ts,.js}\"\n : \"src/entity/*{.ts,.js}\",\n],\nmigrations: [\"src/migration/*.ts\"],\nsubscribers: [],\n\n})\nand still getting this error, remove the nodemon.json file if you have it in your project\n",
"my fail was that instead of using Models I used Requests. Requests are loaded earlier so it end it undefined. Try to move your Model Class to Models\n",
"Add your entity in orm.config.ts file.\nentities: [empData,user],\n\n",
"I omitted this @Entity() decorator on the entity class. So immediately I fixed this, and it worked for me.\nExample:\nimport { ObjectType, Field, ID } from '@nestjs/graphql';\nimport { Column, Entity, PrimaryGeneratedColumn } from 'typeorm';\n\n@ObjectType()\n@Entity()\nexport class Message {\n @Field((type) => ID)\n @PrimaryGeneratedColumn('uuid')\n id: string;\n\n @Field()\n @Column()\n conversationId: string;\n\n @Field()\n @Column()\n sender: string;\n}\n\n",
"I got the same issue and later found I haven't call the functionality to connect the DB. Just by calling the connection fixed my issue.\n\n\n export const AppDataSource = new DataSource({\n type: 'mysql',\n host: process.env.MYSQL_HOST,\n ....\n \n });\n \n \n let dataSource: DataSource;\n \n export const ConnectDb = async () => {\n dataSource = await AppDataSource.initialize();\n\n\n\nAnd use this to connect in your function.\nAnother occasion I got a similar message when I run $ npm test without properly mocking Jest.spyOn() method and fixed it by:\n\n\njest.spyOn(db, 'MySQLDbCon').mockReturnValueOnce(Promise.resolve());\nconst updateResult = {}\nas UpdateResult;\njest.spyOn(db.SQLDataSource, 'getRepository').mockReturnValue({\n update: jest.fn().mockResolvedValue(updateResult),\n }\n as unknown as Repository < unknown > );\n\n\n\n",
"ForMe , i have set error path cause this problem\nmy error code like this\n\nand then i check the entities path . i modify the path to correct. then the typeorm working\n\n",
"For me it was two things, while using it with NestJS:\n\nThere was two entities with the same name (but different tables)\n\nOnce I fixed it, the error still happened, then I got to point two:\n\nDelete the dist/build directory\n\nNot sure how the build process works for typeorm, but once I deleted that and ran the project again it worked. Seems like it was not updating the generated files.\n",
"In my case I've provided a wrong db password. Instead of receiving an error message like \"connection to db failed\" I've got the error message below.\nEntityMetadataNotFound: No metadata for \"User\" was found.\n",
"Using TypeORM with NestJS, for me the issue was that I was setting the migrations property of the TypeOrmModuleOptions object for the TypeOrmModuleAsyncOptions useFactory method, when instead I should've only set it on the migrations config, which uses the standard TypeORM DataSource type.\nThis is what I ended up with:\ntypeorm.config.ts\nimport { DataSource } from 'typeorm';\nimport {\n TypeOrmModuleAsyncOptions,\n TypeOrmModuleOptions,\n} from '@nestjs/typeorm';\n\nconst postgresDataSourceConfig: TypeOrmModuleOptions = {\n ...\n // NO migrations property here\n // Had to add this as well\n autoLoadEntities: true,\n ...\n};\n\nexport const typeormAsyncConfig: TypeOrmModuleAsyncOptions = {\n useFactory: async (): Promise<TypeOrmModuleOptions> => {\n return postgresDataSourceConfig;\n },\n};\n\n// Needed to work with migrations, not used by NestJS itself\nexport const postgresDataSource = new DataSource({\n ...postgresDataSourceConfig,\n type: 'postgres',\n // Instead use it here, because the TypeORM Nest Module does not care for migrations\n // They must be done outside of NestJS entirely\n migrations: ['src/database/migrations/*.ts'],\n});\n\nmigrations.config.ts\nimport { postgresDataSource } from './typeorm.config';\n\nexport default postgresDataSource;\n\nAnd the script to run TypeORM CLI was\n\"typeorm-cli\": \"ts-node -r tsconfig-paths/register ./node_modules/typeorm/cli -d ./src/database/migrations.config.ts\"\n",
"in my case, i was changing one EmployeeSchema to EmployeeEntity, but i forgot the entity annotation on it:\n@Entity()\nexport class Employee {\n\n",
"Besides the @Entity() annotation, with typeorm version 0.3.0 and above also don't forget to put all your entities into your DataSource:\nexport const dataSource = new DataSource({\n type: \"sqlite\",\n database: 'data/my_database.db',\n entities: [User],\n ...\n ...\n})\n\nhttps://github.com/typeorm/typeorm/blob/master/CHANGELOG.md#030-2022-03-17\n"
] | [
36,
26,
14,
5,
2,
2,
2,
1,
1,
1,
1,
1,
1,
1,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [] | [] | [
"typeorm"
] | stackoverflow_0051562162_typeorm.txt |
Q:
Convert namespace string to a map of slices
I am using go-validator lib,
To get the error namespace I can call:
ns := e.Namespace()
Will result something like this:
"Customer.customer_addresses[0].location_name"
And to get the error message I can call:
err := e.Error()
Will result something like:
"location_name is a required field"
Is there standard lib, utilities, or the fastest way to make this string:
"customer_addresses[0].location_name"
To become map like this:
{
"customer_addresses": [
{
"location_name": "location_name is a required field"
}
]
}
Thank you for your help
A:
In that case, you can use the Map() method on the ValidationErrors type to convert the error namespace and message into a map. Here is an example:
// Import the go-validator library
import "gopkg.in/go-playground/validator.v9"
// create a new instance of the validator
validate := validator.New()
// create a validation error
e := validator.ValidationError{
Namespace: "Customer.customer_addresses[0].location_name",
Field: "location_name",
Struct: "Customer",
Tag: "required",
ActualTag: "",
Kind: reflect.String,
Type: reflect.TypeOf(""),
Value: "",
Param: "",
Message: "location_name is a required field",
}
// convert the error namespace and message to a map using the Map() method
errMap := e.Map()
fmt.Println(errMap)
// Output:
// map[customer_addresses:[{location_name:location_name is a required field}]]
You can also use the StructNamespace() method on the ValidationErrors type to get the error namespace without the field name, which will give you a string like "Customer.customer_addresses[0]" that you can use as a key in the map.
ns := e.StructNamespace()
errMap := map[string]interface{}{
ns: []map[string]string{
{e.Field: e.Error()},
},
}
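As an alternative that avoids library-specific helpers, a small hand-rolled parser over the namespace string is often enough. This sketch only handles the two-level slice[idx].field shape from the question; deeper nesting would need recursion:
package main

import (
    "fmt"
    "strings"
)

// buildErrMap turns "Customer.customer_addresses[0].location_name" plus an
// error message into the nested map shape shown in the question.
func buildErrMap(ns, msg string) map[string]interface{} {
    parts := strings.Split(ns, ".")
    if len(parts) < 3 {
        return map[string]interface{}{ns: msg}
    }
    key := parts[1] // "customer_addresses[0]"
    if i := strings.Index(key, "["); i >= 0 {
        key = key[:i] // strip the index -> "customer_addresses"
    }
    field := parts[len(parts)-1] // "location_name"
    return map[string]interface{}{
        key: []map[string]string{{field: msg}},
    }
}

func main() {
    m := buildErrMap("Customer.customer_addresses[0].location_name",
        "location_name is a required field")
    fmt.Println(m)
}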
| Convert namespace string to a map of slices | I am using go-validator lib,
To get the error namespace I can call:
ns := e.Namespace()
Will result something like this:
"Customer.customer_addresses[0].location_name"
And to get the error message I can call:
err := e.Error()
Will result something like:
"location_name is a required field"
Is there standard lib, utilities, or the fastest way to make this string:
"customer_addresses[0].location_name"
To become map like this:
{
"customer_addresses": [
{
"location_name": "location_name is a required field"
}
]
}
Thank you for your help
| [
"In that case, you can use the Map() method on the ValidationErrors type to convert the error namespace and message into a map. Here is an example:\n// Import the go-validator library\nimport \"gopkg.in/go-playground/validator.v9\"\n\n// create a new instance of the validator\nvalidate := validator.New()\n\n// create a validation error\ne := validator.ValidationError{\n Namespace: \"Customer.customer_addresses[0].location_name\",\n Field: \"location_name\",\n Struct: \"Customer\",\n Tag: \"required\",\n ActualTag: \"\",\n Kind: reflect.String,\n Type: reflect.TypeOf(\"\"),\n Value: \"\",\n Param: \"\",\n Message: \"location_name is a required field\",\n}\n\n// convert the error namespace and message to a map using the Map() method\nerrMap := e.Map()\nfmt.Println(errMap)\n\n// Output:\n// map[cus\n\ntomer_addresses:[{location_name:location_name is a required field}]]\n\nYou can also use the StructNamespace() method on the ValidationErrors type to get the error namespace without the field name, which will give you a string like \"Customer.customer_addresses[0]\" that you can use as a key in the map.\n ns := e.StructNamespace()\nerrMap := map[string]interface{}{\n ns: []map[string]string{\n {e.Field: e.Error()},\n },\n}\n\n"
] | [
0
] | [] | [] | [
"go",
"validation"
] | stackoverflow_0074674119_go_validation.txt |
Q:
How to create_dir from DelayedFormat in Rust?
Would like to make directory with %Y%m%y_%H%M%S name.
This is what I have so far:
use std::fs;
use chrono;
fn main() {
let now = chrono::offset::Local::now();
let custom_datetime_format = now.format("%Y%m%y_%H%M%S");
println!("{:}", custom_datetime_format);
let new_dir = fs::create_dir(custom_datetime_format).unwrap();
println!("New directory created");
println!("{:#?}", new_dir);
}
and error
error[E0277]: the trait bound `DelayedFormat<StrftimeItems<'_>>: AsRef<Path>` is not satisfied
--> src/main.rs:9:34
|
9 | let new_dir = fs::create_dir(custom_datetime_format).unwrap();
| -------------- ^^^^^^^^^^^^^^^^^^^^^^ the trait `AsRef<Path>` is not implemented for `DelayedFormat<StrftimeItems<'_>>`
| |
| required by a bound introduced by this call
|
note: required by a bound in `create_dir`
--> /Users/macbook/.rustup/toolchains/stable-x86_64-apple-darwin/lib/rustlib/src/rust/library/std/src/fs.rs:1977:22
|
1977 | pub fn create_dir<P: AsRef<Path>>(path: P) -> io::Result<()> {
| ^^^^^^^^^^^ required by this bound in `create_dir`
For more information about this error, try `rustc --explain E0277`.
From my current understanding the problem is that the create_dir function expects a parameter of type AsRef<Path>.
Any idea how to convert DelayedFormat<StrftimeItems<'_>> to AsRef<Path>?
Or is this all wrong?
A:
Because your datetime doesn't implement AsRef<Path> you have to convert the datetime to a String first.
One way is the one you found yourself:
fs::create_dir(format!("{custom_datetime_format}")).unwrap();
or you use it's ToString implementation which is implemented for every type that implements Display:
fs::create_dir(custom_datetime_format.to_string()).unwrap();
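Putting it together, a minimal runnable sketch (with chrono as a dependency):
use std::fs;

fn main() {
    let now = chrono::offset::Local::now();
    // Rendering the DelayedFormat into a String gives a type that
    // implements AsRef<Path>, which create_dir accepts.
    let dir_name = now.format("%Y%m%y_%H%M%S").to_string();
    fs::create_dir(&dir_name).expect("failed to create directory");
    println!("New directory created: {dir_name}");
}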
| How to create_dir from DelayedFormat in Rust? | Would like to make directory with %Y%m%y_%H%M%S name.
This is what I have so far:
use std::fs;
use chrono;
fn main() {
let now = chrono::offset::Local::now();
let custom_datetime_format = now.format("%Y%m%y_%H%M%S");
println!("{:}", custom_datetime_format);
let new_dir = fs::create_dir(custom_datetime_format).unwrap();
println!("New directory created");
println!("{:#?}", new_dir);
}
and error
error[E0277]: the trait bound `DelayedFormat<StrftimeItems<'_>>: AsRef<Path>` is not satisfied
--> src/main.rs:9:34
|
9 | let new_dir = fs::create_dir(custom_datetime_format).unwrap();
| -------------- ^^^^^^^^^^^^^^^^^^^^^^ the trait `AsRef<Path>` is not implemented for `DelayedFormat<StrftimeItems<'_>>`
| |
| required by a bound introduced by this call
|
note: required by a bound in `create_dir`
--> /Users/macbook/.rustup/toolchains/stable-x86_64-apple-darwin/lib/rustlib/src/rust/library/std/src/fs.rs:1977:22
|
1977 | pub fn create_dir<P: AsRef<Path>>(path: P) -> io::Result<()> {
| ^^^^^^^^^^^ required by this bound in `create_dir`
For more information about this error, try `rustc --explain E0277`.
From my current understanding the problem is that the create_dir function expects a parameter of type AsRef<Path>.
Any idea how to convert DelayedFormat<StrftimeItems<'_>> to AsRef<Path>?
Or is this all wrong?
| [
"Because your datetime doesn't implement AsRef<Path> you have to convert the datetime to a String first.\nOne way is the one you found yourself:\nfs::create_dir(format!(\"{custom_datetime_format}\")).unwrap();\n\nor you use it's ToString implementation which is implemented for every type that implements Display:\nfs::create_dir(custom_datetime_format.to_string()).unwrap();\n\n"
] | [
1
] | [] | [] | [
"rust"
] | stackoverflow_0074674371_rust.txt |
Q:
Length of the longest decreasing subsequence built by appending elements to the end and the front using dynamic programming
The restrictions are that the elements can be appended to the front if they are greater than the element at the front and to the back if they are smaller than the back. It can also ignore elements (and there comes the difficulty).
Example:
Input:
{6, 7, 3, 5, 4}
The longest sequence to that input is:
Start with {6}.
Append 7 to the front because it is greater than 6. {7, 6}
Ignore 3.
Append 5 to the back because it is smaller. {7, 6, 5}
Append 4 to the back because it is smaller. {7, 6, 5, 4}
If we appended 3, the sequence would be smaller {7, 6, 3} because then we wouldn't be able to append 4.
I tried to adapt a LIS algorithm to solve it, but the results are totally wrong.
int adapted_LIS(int input[], int n)
{
int score[n] = {};
score[0] = 1;
for (int i = 1; i < n; i++)
{
score[i] = 1;
int front = input[i];
int back = input[i];
for (int j = 0; j < i; j++)
{
if (input[j] > front)
{
front = input[j];
score[i] = std::max(score[i], score[j] + 1);
}
else if (input[j] < back)
{
back = input[j];
score[i] = std::max(score[i], score[j] + 1);
}
}
}
return *std::max_element(score, score + n);
}
How can I solve it using Dynamic Programming?
A:
The optimal substructure that we need for dynamic programming is that, given two sequences with the same front and back, it’s obviously better to extend the longer one (or the same, if the sequences have the same length). Here’s some C++ (inefficient for clarity and so that it can’t be fed directly to an online judge):
#include <algorithm>
#include <iostream>
#include <map>
#include <utility>
#include <vector>
std::vector<int> PushFront(int x, std::vector<int> subsequence) {
subsequence.insert(subsequence.begin(), x);
return subsequence;
}
std::vector<int> PushBack(std::vector<int> subsequence, int x) {
subsequence.push_back(x);
return subsequence;
}
void Consider(std::map<std::pair<int, int>, std::vector<int>> &table,
std::vector<int> subsequence) {
std::vector<int> &entry = table[{subsequence.front(), subsequence.back()}];
if (subsequence.size() > entry.size()) {
entry = std::move(subsequence);
}
}
std::vector<int> TwoSidedDecreasingSubsequence(const std::vector<int> &input) {
if (input.empty()) {
return {};
}
// Maps {front, back} to the longest known subsequence.
std::map<std::pair<int, int>, std::vector<int>> table;
for (int x : input) {
auto table_copy = table;
for (const auto &[front_back, subsequence] : table_copy) {
auto [front, back] = front_back;
if (x > front) {
Consider(table, PushFront(x, subsequence));
}
if (back > x) {
Consider(table, PushBack(subsequence, x));
}
}
Consider(table, {x});
}
return std::max_element(
table.begin(), table.end(),
[&](const std::pair<std::pair<int, int>, std::vector<int>> &a,
const std::pair<std::pair<int, int>, std::vector<int>> &b) {
return a.second.size() < b.second.size();
})
->second;
}
int main() {
for (int x : TwoSidedDecreasingSubsequence({6, 7, 3, 5, 4})) {
std::cout << ' ' << x;
}
std::cout << '\n';
}
A:
You're trying to use the Longest Increasing Subsequence (LIS) algorithm to solve this problem, but that won't work because the rules for constructing a valid sequence in this problem are different from the rules for constructing a longest increasing subsequence.
To solve this problem using dynamic programming, you'll need to come up with a new approach that takes into account the specific rules for appending elements to the front or back of the sequence.
One way to do this is to create two separate dynamic programming arrays, one to keep track of the longest sequence ending at the front of the array, and another to keep track of the longest sequence ending at the back of the array. Then, you can iterate over the input array and update these two arrays based on the rules for appending elements to the front or back of the sequence.
Here's a rough outline of my proposed solution: (This should serve as an explanation to the code snippet below)
Initialize two dynamic programming arrays, front and back, both of size n. Set all elements of these arrays to 1.
Iterate over the input array from left to right. At each step i, do the following:
If the element at index i is greater than the element at the front of the sequence (i.e., the element at index front[i-1] in the input array), append this element to the front of the sequence by setting front[i] to front[i-1] + 1.
If the element at index i is smaller than the element at the back of the sequence (i.e., the element at index back[i-1] in the input array), append this element to the back of the sequence by setting back[i] to back[i-1] + 1.
At the end of the loop, the longest sequence that can be constructed from the input array will be the maximum of front[n-1] and back[n-1].
int longest_sequence(int input[], int n)
{
// Initialize dynamic programming arrays
int front[n];
int back[n];
// Set all elements of the arrays to 1
for (int i = 0; i < n; i++)
{
front[i] = 1;
back[i] = 1;
}
// Iterate over the input array from left to right
for (int i = 1; i < n; i++)
{
        // If the current element is greater than the current front of the
        // sequence, extend the sequence at the front by updating front[i]
        if (input[i] > input[front[i-1] - 1])
{
front[i] = front[i-1] + 1;
}
        // If the current element is smaller than the current back of the
        // sequence, extend the sequence at the back by updating back[i]
        // (with a bounds guard on the index)
        if (back[i-1] + 1 < n && input[i] < input[back[i-1] + 1])
{
back[i] = back[i-1] + 1;
}
}
// Return the maximum of front[n-1] and back[n-1]
    return std::max(front[n-1], back[n-1]);
}
I tested my solution with your example:
int input[] = {6, 7, 3, 5, 4};
int n = 5;
int result = longest_sequence(input, n);
As you outlined (step-by-step), the longest sequence is {7, 6, 5, 4} of length 4, which is the result longest_sequence is intended to return for this input.
As written, this algorithm runs in O(n) time: a single pass over the input, updating the two arrays as it goes. Any algorithm for this problem still has to examine every input element at least once, so Ω(n) is a lower bound.
... Keeping the two bookkeeping arrays consistent is probably the biggest caveat of implementing the solution dynamically.
| Length of the longest decreasing subsequence built by appending elements to the end and the front using dynamic programming | The restrictions are that the elements can be appended to the front if they are greater than the element at the front and to the back if they are smaller than the back. It can also ignore elements (and there comes the difficulty).
Example:
Input:
{6, 7, 3, 5, 4}
The longest sequence to that input is:
Start with {6}.
Append 7 to the front because it is greater than 6. {7, 6}
Ignore 3.
Append 5 to the back because it is smaller. {7, 6, 5}
Append 4 to the back because it is smaller. {7, 6, 5, 4}
If we appended 3, the sequence would be smaller {7, 6, 3} because then we wouldn't be able to append 4.
I tried to adapt a LIS algorithm to solve it, but the results are totally wrong.
int adapted_LIS(int input[], int n)
{
int score[n] = {};
score[0] = 1;
for (int i = 1; i < n; i++)
{
score[i] = 1;
int front = input[i];
int back = input[i];
for (int j = 0; j < i; j++)
{
if (input[j] > front)
{
front = input[j];
score[i] = std::max(score[i], score[j] + 1);
}
else if (input[j] < back)
{
back = input[j];
score[i] = std::max(score[i], score[j] + 1);
}
}
}
return *std::max_element(score, score + n);
}
How can I solve it using Dynamic Programming?
| [
"The optimal substructure that we need for dynamic programming is that, given two sequences with the same front and back, it’s obviously better to extend the longer one (or the same, if the sequences have the same length). Here’s some C++ (inefficient for clarity and so that it can’t be fed directly to an online judge):\n#include <algorithm>\n#include <iostream>\n#include <map>\n#include <utility>\n#include <vector>\n\nstd::vector<int> PushFront(int x, std::vector<int> subsequence) {\n subsequence.insert(subsequence.begin(), x);\n return subsequence;\n}\n\nstd::vector<int> PushBack(std::vector<int> subsequence, int x) {\n subsequence.push_back(x);\n return subsequence;\n}\n\nvoid Consider(std::map<std::pair<int, int>, std::vector<int>> &table,\n std::vector<int> subsequence) {\n std::vector<int> &entry = table[{subsequence.front(), subsequence.back()}];\n if (subsequence.size() > entry.size()) {\n entry = std::move(subsequence);\n }\n}\n\nstd::vector<int> TwoSidedDecreasingSubsequence(const std::vector<int> &input) {\n if (input.empty()) {\n return {};\n }\n // Maps {front, back} to the longest known subsequence.\n std::map<std::pair<int, int>, std::vector<int>> table;\n for (int x : input) {\n auto table_copy = table;\n for (const auto &[front_back, subsequence] : table_copy) {\n auto [front, back] = front_back;\n if (x > front) {\n Consider(table, PushFront(x, subsequence));\n }\n if (back > x) {\n Consider(table, PushBack(subsequence, x));\n }\n }\n Consider(table, {x});\n }\n return std::max_element(\n table.begin(), table.end(),\n [&](const std::pair<std::pair<int, int>, std::vector<int>> &a,\n const std::pair<std::pair<int, int>, std::vector<int>> &b) {\n return a.second.size() < b.second.size();\n })\n ->second;\n}\n\nint main() {\n for (int x : TwoSidedDecreasingSubsequence({6, 7, 3, 5, 4})) {\n std::cout << ' ' << x;\n }\n std::cout << '\\n';\n}\n\n",
"You're trying to use the Longest Increasing Subsequence (LIS) algorithm to solve this problem, but that won't work because the rules for constructing a valid sequence in this problem are different from the rules for constructing a longest increasing subsequence.\nTo solve this problem using dynamic programming, you'll need to come up with a new approach that takes into account the specific rules for appending elements to the front or back of the sequence.\nOne way to do this is to create two separate dynamic programming arrays, one to keep track of the longest sequence ending at the front of the array, and another to keep track of the longest sequence ending at the back of the array. Then, you can iterate over the input array and update these two arrays based on the rules for appending elements to the front or back of the sequence.\nHere's a rough outline of my proposed solution: (This should serve as an explanation to the code snippet below)\n\nInitialize two dynamic programming arrays, front and back, both of size n. Set all elements of these arrays to 1.\n\nIterate over the input array from left to right. At each step i, do the following:\n\nIf the element at index i is greater than the element at the front of the sequence (i.e., the element at index front[i-1] in the input array), append this element to the front of the sequence by setting front[i] to front[i-1] + 1.\nIf the element at index i is smaller than the element at the back of the sequence (i.e., the element at index back[i-1] in the input array), append this element to the back of the sequence by setting back[i] to back[i-1] + 1.\n\n\nAt the end of the loop, the longest sequence that can be constructed from the input array will be the maximum of front[n-1] and back[n-1].\n\n\nint longest_sequence(int input[], int n)\n{\n // Initialize dynamic programming arrays\n int front[n];\n int back[n];\n\n // Set all elements of the arrays to 1\n for (int i = 0; i < n; i++)\n {\n front[i] = 1;\n back[i] = 1;\n }\n\n // Iterate over the input array from left to right\n for (int i = 1; i < n; i++)\n {\n // If the current element is smaller than the element at the front of the sequence,\n // append it to the front of the sequence by updating front[i]\n if (input[i] < input[front[i-1] - 1])\n {\n front[i] = front[i-1] + 1;\n }\n\n // If the current element is greater than the element at the back of the sequence,\n // append it to the back of the sequence by updating back[i]\n if (input[i] > input[back[i-1] + 1])\n {\n back[i] = back[i-1] + 1;\n }\n }\n\n // Return the maximum of front[n-1] and back[n-1]\n return std::max(front[n-1], back[n-1]);\n\nI tested my solution with your example:\nint input[] = {6, 7, 3, 5, 4};\nint n = 5;\n\nint result = longest_sequence(input, n);\n\nAs you outlined (step-by-step) the longest sequence is {7, 6, 5, 4} of length 4. That is exactly the result that is being returned by longest_sequence!\nThis algorithm will take O(n^2) time to run, since you need to iterate over the input array and then over the dynamic programming arrays at each step. This is the best you can do, since any algorithm that solves this problem will need to examine every element in the input array at least once.\n... Probably the biggest caveat of implementing the solution dynamically.\n"
] | [
2,
0
] | [] | [] | [
"algorithm",
"c++",
"dynamic_programming"
] | stackoverflow_0074639947_algorithm_c++_dynamic_programming.txt |
Q:
Golang Exec to Execute Multiline Commands
On a mac, the following opens a terminal and runs the echo command:
osascript -e 'tell app "Terminal"
do script "echo hello"
end tell'
I'm trying to run this using Golang's exec library:
c := exec.Command(`osascript`, `-e`, `'tell app "Terminal"
do script "echo hello"
end tell'`) // #nosec G204
err := c.Run()
But it errors with "exit status 1", stderr reads "already started"
Any idea what's going wrong?
A:
It looks like you are trying to run a shell script using osascript and the exec package in Go. There are a few things that could be causing the error you are seeing.
First, it is important to understand that the exec.Command function takes the command and any arguments to that command as separate string arguments. So, in your code, the -e flag and the script that you are trying to run are being treated as separate arguments to osascript, which is probably not what you want.
To fix this, you can use the -e flag to specify the script inline, like this:
c := exec.Command(`osascript`, `-e`, `tell app "Terminal" do script "echo hello" end tell`)
Another thing to consider is that the exec package in Go runs commands directly, without using a shell. This means that any shell-specific syntax in the command, such as the single quotes around the script in your example, will not be interpreted by the shell. Instead, the single quotes will be treated as part of the script itself, which is likely not what you want.
To fix this, you can either remove the single quotes around the script, or use the sh -c command to run the script through a shell, like this:
c := exec.Command(`sh`, `-c`, `osascript -e 'tell app "Terminal" do script "echo hello" end tell'`)
| Golang Exec to Execute Multiline Commands | On a mac, the following opens a terminal and runs the echo command:
osascript -e 'tell app "Terminal"
do script "echo hello"
end tell'
I'm trying to run this using Golang's exec library:
c := exec.Command(`osascript`, `-e`, `'tell app "Terminal"
do script "echo hello"
end tell'`) // #nosec G204
err := c.Run()
But it errors with "exit status 1", stderr reads "already started"
Any idea what's going wrong?
| [
"It looks like you are trying to run a shell script using osascript and the exec package in Go. There are a few things that could be causing the error you are seeing.\nFirst, it is important to understand that the exec.Command function takes the command and any arguments to that command as separate string arguments. So, in your code, the -e flag and the script that you are trying to run are being treated as separate arguments to osascript, which is probably not what you want.\nTo fix this, you can use the -e flag to specify the script inline, like this:\nc := exec.Command(`osascript`, `-e`, `tell app \"Terminal\" do script \"echo hello\" end tell`)\n\nAnother thing to consider is that the exec package in Go runs commands directly, without using a shell. This means that any shell-specific syntax in the command, such as the single quotes around the script in your example, will not be interpreted by the shell. Instead, the single quotes will be treated as part of the script itself, which is likely not what you want.\nTo fix this, you can either remove the single quotes around the script, or use the sh -c command to run the script through a shell, like this:\nc := exec.Command(`sh`, `-c`, `osascript -e 'tell app \"Terminal\" do script \"echo hello\" end tell'`)\n\n"
] | [
0
] | [] | [] | [
"exec",
"go",
"osascript"
] | stackoverflow_0074671809_exec_go_osascript.txt |
Q:
Confirmation before closing the browser tab or warn the user on unsaved changes in Next.js?
I am currently in the process of migrating my existing React application to Next.js. I have a use case wherein I want to show a confirmation before the user tries to close a browser tab, or warn the user about unsaved changes. When a user is in the process of filling out an application form and decides to drop out, I want to warn him that the stored data will be lost; and for an existing user who has already signed up, lands on the dashboard, and wants to close the tab or go back to a previous route (which in this case is signup), I want to prompt him to either log out or continue.
Previously I was using CRA and react-router dom v5 and the following is the code that I used to achieve the above results:-
import React, { useEffect, useState } from "react";
import { Prompt } from "react-router-dom";
const useUnsavedUsageWarning = (
message = "Are you sure you want to discard changes?"
) => {
const [isDirty, setDirty] = useState(false);
useEffect(() => {
// Detecting browser closing
window.onbeforeunload = isDirty && (() => message);
return () => {
window.onbeforeunload = null;
};
}, [isDirty]);
const routerPrompt = <Prompt when={isDirty} message={message} />;
return [routerPrompt, () => setDirty(true), () => setDirty(false)];
};
export default useUnsavedUsageWarning;
Following is a link to code sandbox for my sample react js example:-
https://codesandbox.io/s/back-handling-react-router-v5-4e9q3m
Any help or support is much appreciated.
A:
import { useRouter } from "next/router";
import React, { useState } from "react";
// Custom hook to show a confirmation prompt before navigating away from the current page
const useUnsavedUsageWarning = (
message = "Are you sure you want to discard changes?"
) => {
// State to keep track of whether the current page is "dirty" (has unsaved changes)
const [isDirty, setDirty] = useState(false);
// Get the router object from Next.js
const router = useRouter();
// Listen for route changes and show a confirmation prompt if the page is dirty
router.events.on("routeChangeStart", (url) => {
if (isDirty) {
if (!confirm(message)) {
// Prevent the route change if the user clicks "Cancel" in the confirmation prompt
event.preventDefault();
}
}
});
// Return the dirty state, and functions to set and reset it
return [isDirty, setDirty, () => setDirty(false)];
};
const Page = () => {
// Use the custom hook to get the dirty state and set/reset functions
const [isDirty, setDirty, resetDirty] = useUnsavedUsageWarning();
return (
<div>
<button onClick={() => setDirty(true)}>Make page dirty</button>
<button onClick={resetDirty}>Reset dirty state</button>
{isDirty && <p>This page is dirty!</p>}
</div>
);
};
export default Page;
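Two caveats with the snippet above: routeChangeStart handlers receive no cancelable event object, and a listener registered during render is never cleaned up. Below is a minimal sketch of an alternative that also covers closing the tab via the browser's beforeunload event; it assumes the Next.js pages router, and aborting a route change by throwing inside the handler is a community workaround rather than an official API:
import { useRouter } from "next/router";
import { useEffect } from "react";

// Hypothetical hook: warns on tab close and on in-app navigation while dirty.
const useUnsavedChangesGuard = (isDirty, message = "Discard unsaved changes?") => {
  const router = useRouter();

  useEffect(() => {
    // Tab close / reload: router events never fire here, so use beforeunload.
    const onBeforeUnload = (e) => {
      if (!isDirty) return;
      e.preventDefault();
      e.returnValue = ""; // some browsers require this to show the prompt
    };

    // In-app navigation: ask for confirmation and abort by throwing.
    const onRouteChangeStart = () => {
      if (isDirty && !window.confirm(message)) {
        router.events.emit("routeChangeError");
        throw "Route change aborted by unsaved-changes guard."; // community workaround
      }
    };

    window.addEventListener("beforeunload", onBeforeUnload);
    router.events.on("routeChangeStart", onRouteChangeStart);
    return () => {
      window.removeEventListener("beforeunload", onBeforeUnload);
      router.events.off("routeChangeStart", onRouteChangeStart);
    };
  }, [isDirty, message, router]);
};

The hook name and the default message are illustrative, not part of any API.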
| Confirmation before closing the browser tab or warn the user on unsaved changes in Next.js? | I am currently in the process of migrating my existing React application to Next.js. I have a use case wherein I want to show a confirmation before the user tries to close a browser tab, or warn the user about unsaved changes. When a user is in the process of filling out an application form and decides to drop out, I want to warn him that the stored data will be lost; and for an existing user who has already signed up, lands on the dashboard, and wants to close the tab or go back to a previous route (which in this case is signup), I want to prompt him to either log out or continue.
Previously I was using CRA and react-router dom v5 and the following is the code that I used to achieve the above results:-
import React, { useEffect, useState } from "react";
import { Prompt } from "react-router-dom";
const useUnsavedUsageWarning = (
message = "Are you sure you want to discard changes?"
) => {
const [isDirty, setDirty] = useState(false);
useEffect(() => {
// Detecting browser closing
window.onbeforeunload = isDirty && (() => message);
return () => {
window.onbeforeunload = null;
};
}, [isDirty]);
const routerPrompt = <Prompt when={isDirty} message={message} />;
return [routerPrompt, () => setDirty(true), () => setDirty(false)];
};
export default useUnsavedUsageWarning;
Following is a link to code sandbox for my sample react js example:-
https://codesandbox.io/s/back-handling-react-router-v5-4e9q3m
Any help or support is much appreciated.
| [
"import { useRouter } from \"next/router\";\nimport React, { useState } from \"react\";\n\n// Custom hook to show a confirmation prompt before navigating away from the current page\nconst useUnsavedUsageWarning = (\n message = \"Are you sure you want to discard changes?\"\n) => {\n // State to keep track of whether the current page is \"dirty\" (has unsaved changes)\n const [isDirty, setDirty] = useState(false);\n\n // Get the router object from Next.js\n const router = useRouter();\n\n // Listen for route changes and show a confirmation prompt if the page is dirty\n router.events.on(\"routeChangeStart\", (url) => {\n if (isDirty) {\n if (!confirm(message)) {\n // Prevent the route change if the user clicks \"Cancel\" in the confirmation prompt\n event.preventDefault();\n }\n }\n });\n\n // Return the dirty state, and functions to set and reset it\n return [isDirty, setDirty, () => setDirty(false)];\n};\n\nconst Page = () => {\n // Use the custom hook to get the dirty state and set/reset functions\n const [isDirty, setDirty, resetDirty] = useUnsavedUsageWarning();\n\n return (\n <div>\n <button onClick={() => setDirty(true)}>Make page dirty</button>\n <button onClick={resetDirty}>Reset dirty state</button>\n {isDirty && <p>This page is dirty!</p>}\n </div>\n );\n};\n\nexport default Page;\n\n"
] | [
0
] | [] | [] | [
"javascript",
"next.js",
"reactjs"
] | stackoverflow_0074672901_javascript_next.js_reactjs.txt |
Q:
Create and stream excel file with large data with couple of million rows in nodejs
Create and stream an Excel file with large data (a couple of million rows) in Node.js.
I tried searching on the internet, but I actually cannot find any good guidance. Thank you so much for your reply.
A:
Use the xlsx package, a simple way to create and manipulate Excel files in Node.js. The xlsx package supports streaming, which allows you to create large Excel files without running out of memory.
const XLSX = require('xlsx');
const fs = require('fs');
// Define the data for the Excel file
const data = [
['ID', 'Name', 'Email'],
['1', 'John Doe', '[email protected]'],
['2', 'Jane Doe', '[email protected]'],
// Add more rows here...
];
// Create a new workbook and add worksheet
const workbook = XLSX.utils.book_new();
const worksheet = XLSX.utils.aoa_to_sheet(data);
XLSX.utils.book_append_sheet(workbook, worksheet, 'Sheet1');
// Create a write stream for the Excel file
const stream = fs.createWriteStream('myfile.xlsx');
// Use the write stream to write the Excel file to disk
XLSX.write(workbook, {type: 'stream', bookType: 'xlsx'}, stream)
.then(() => {
// The file has been written successfully
console.log('File written successfully');
})
.catch(err => {
// There was an error writing the file
console.error(err);
});
The xlsx package is imported and the fs module is used to create a write stream for the Excel file. The data for the Excel file is then defined as an array of arrays (AOA), and a new workbook and worksheet are created using this data.
The XLSX.write method is then used to write the Excel file to the write stream, using the bookType: 'xlsx' option to specify that the file should be written in the XLSX format. The XLSX.write method returns a promise, so you can use the then and catch methods to handle the success and failure cases respectively. Change the file name and path, and the file will be written to your disk.
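For files that truly run to millions of rows, a dedicated row-by-row streaming writer keeps memory usage flat instead of buffering the whole workbook. Here is a minimal sketch, assuming the exceljs package is an acceptable alternative (the question doesn't fix a library):
const Excel = require('exceljs');

async function writeBigFile() {
  // Streams each committed row straight to disk instead of holding it in memory
  const workbook = new Excel.stream.xlsx.WorkbookWriter({
    filename: 'bigfile.xlsx',
    useStyles: false,        // disabled purely to keep memory low
    useSharedStrings: false,
  });
  const sheet = workbook.addWorksheet('Sheet1');

  sheet.addRow(['ID', 'Name', 'Email']).commit();
  for (let i = 1; i <= 2000000; i++) {
    // commit() flushes the row so it can be garbage-collected
    sheet.addRow([i, 'User ' + i, 'user' + i + '@example.com']).commit();
  }

  await workbook.commit(); // finalizes and closes the file
  console.log('File written successfully');
}

writeBigFile().catch(console.error);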
| Create and stream excel file with large data with couple of million rows in nodejs | Create and stream an Excel file with large data (a couple of million rows) in Node.js.
I tried searching on the internet, but I actually cannot find any good guidance. Thank you so much for your reply.
| [
"use the xlsx package, a simple way to create and manipulate Excel files in Node.js. The xlsx package supports streaming, which allows you to create large Excel files without running out of memory.\nconst XLSX = require('xlsx');\nconst fs = require('fs');\n\n// Define the data for the Excel file\nconst data = [\n ['ID', 'Name', 'Email'],\n ['1', 'John Doe', '[email protected]'],\n ['2', 'Jane Doe', '[email protected]'],\n // Add more rows here...\n];\n\n// Create a new workbook and add worksheet\nconst workbook = XLSX.utils.book_new();\nconst worksheet = XLSX.utils.aoa_to_sheet(data);\nXLSX.utils.book_append_sheet(workbook, worksheet, 'Sheet1');\n\n// Create a write stream for the Excel file\nconst stream = fs.createWriteStream('myfile.xlsx');\n\n// Use the write stream to write the Excel file to disk\nXLSX.write(workbook, {type: 'stream', bookType: 'xlsx'}, stream)\n .then(() => {\n // The file has been written successfully\n console.log('File written successfully');\n })\n .catch(err => {\n // There was an error writing the file\n console.error(err);\n });\n\nxlsx package is imported and the fs module is used to create a write stream for the Excel file. The data for the Excel file is then defined as an array of arrays (AOA), and a new workbook and worksheet are created using this data.\nXLSX.write method is then used to write the Excel file to the write stream, using the bookType: 'xlsx' option to specify that the file should be written in the XLSX format. The XLSX.write method returns a promise, so you can use the then and catch methods to handle the success and failure cases respectively. Change the file name and path and it will in your disk.\n"
] | [
2
] | [] | [] | [
"node.js"
] | stackoverflow_0074674443_node.js.txt |
Q:
how to use SvgPicture.string as a imageProvider flutter
I'm using the flutter_svg package for SVGs, and now I want to use an SVG inside a container as decoration, like this:
Container(
decoration: BoxDecoration(
image: DecorationImage(
image: SvgPicture.string(
'''<svg viewBox="...">...</svg>'''
),
),
),
)
but the problem is that the DecorationImage param expects an 'ImageProvider'. How can I do this?
I tried flutter_svg_provider but it's also not working. I found this solution, but don't know how to use it.
A:
The SvgPicture is a Widget, not an Image, which is why it can not be used as DecorationImage here. The way you can use the SvgPicture behind your Container is a Stack:
Stack(
children: [
SvgPicture.string(
'''<svg viewBox="...">...</svg>''',
(... width, height etc.)
),
Container(
child: (..., foreground widget)
),
],
)
Obviously, you have to make sure that both have the same size if you need it. But that depends on your usecase.
A:
use a custom Decoration like this:
class SvgDecoration extends Decoration {
SvgDecoration.string(String rawSvg, {this.key})
: rawSvgFuture = Future.value(rawSvg);
SvgDecoration.file(File file, {this.key})
: rawSvgFuture = file.readAsString();
SvgDecoration.asset(String asset, {this.key})
: rawSvgFuture = rootBundle.loadString(asset);
final Future<String> rawSvgFuture;
final String? key;
@override
BoxPainter createBoxPainter([ui.VoidCallback? onChanged]) {
return _SvgDecorationPainter(rawSvgFuture, onChanged, key);
}
}
class _SvgDecorationPainter extends BoxPainter {
_SvgDecorationPainter(this.rawSvgFuture, ui.VoidCallback? onChanged, String? key) {
rawSvgFuture
.then((rawSvg) => svg.fromSvgString(rawSvg, key ?? '(no key)'))
.then((d) {
drawable = d;
onChanged?.call();
});
}
final Future<String> rawSvgFuture;
DrawableRoot? drawable;
@override
void paint(ui.Canvas canvas, ui.Offset offset, ImageConfiguration configuration) {
if (drawable != null) {
canvas
..save()
..translate(offset.dx, offset.dy);
drawable!
..scaleCanvasToViewBox(canvas, configuration.size!)
..draw(canvas, offset & configuration.size!);
canvas.restore();
}
}
}
as you can see there are 3 constructors: SvgDecoration.string, SvgDecoration.file and SvgDecoration.asset but of course you can add some other custom constructors (like SvgDecoration.network for example)
| how to use SvgPicture.string as a imageProvider flutter | I'm using the flutter_svg package for SVGs, and now I want to use an SVG inside a container as decoration, like this:
Container(
decoration: BoxDecoration(
image: DecorationImage(
image: SvgPicture.string(
'''<svg viewBox="...">...</svg>'''
),
),
),
)
but the problem is that the DecorationImage param expects an 'ImageProvider'. How can I do this?
I tried flutter_svg_provider but it's also not working. I found this solution, but don't know how to use it.
| [
"The SvgPicture is a Widget, not an Image, which is why it can not be used as DecorationImage here. The way you can use the SvgPicture behind your Container is a Stack:\nStack(\n children: [\n SvgPicture.string(\n '''<svg viewBox=\"...\">...</svg>''',\n (... width, height etc.)\n ),\n Container(\n child: (..., foreground widget)\n ),\n ],\n)\n\nObviously, you have to make sure that both have the same size if you need it. But that depends on your usecase.\n",
"use a custom Decoration like this:\nclass SvgDecoration extends Decoration {\n SvgDecoration.string(String rawSvg, {this.key})\n : rawSvgFuture = Future.value(rawSvg);\n\n SvgDecoration.file(File file, {this.key})\n : rawSvgFuture = file.readAsString();\n\n SvgDecoration.asset(String asset, {this.key})\n : rawSvgFuture = rootBundle.loadString(asset);\n\n final Future<String> rawSvgFuture;\n final String? key;\n\n @override\n BoxPainter createBoxPainter([ui.VoidCallback? onChanged]) {\n return _SvgDecorationPainter(rawSvgFuture, onChanged, key);\n }\n}\n\nclass _SvgDecorationPainter extends BoxPainter {\n _SvgDecorationPainter(this.rawSvgFuture, ui.VoidCallback? onChanged, String? key) {\n rawSvgFuture\n .then((rawSvg) => svg.fromSvgString(rawSvg, key ?? '(no key)'))\n .then((d) {\n drawable = d;\n onChanged?.call();\n });\n }\n\n final Future<String> rawSvgFuture;\n DrawableRoot? drawable;\n\n @override\n void paint(ui.Canvas canvas, ui.Offset offset, ImageConfiguration configuration) {\n if (drawable != null) {\n canvas\n ..save()\n ..translate(offset.dx, offset.dy);\n drawable!\n ..scaleCanvasToViewBox(canvas, configuration.size!)\n ..draw(canvas, offset & configuration.size!);\n canvas.restore();\n }\n }\n}\n\nas you can see there are 3 constructors: SvgDecoration.string, SvgDecoration.file and SvgDecoration.asset but of course you can add some other custom constructors (like SvgDecoration.network for example)\n"
] | [
1,
1
] | [] | [] | [
"flutter"
] | stackoverflow_0074667881_flutter.txt |
Q:
numpy: multiply uint16 ndarray by scalar
I have a ndarray 'a' of dtype uint16.
I would like to multiply all entries by a scalar, let's say 2.
The max value for uint16 is 65535. Let's assume some entries of a are greater than 65535/2.
Because of unsigned integer overflow (the values wrap around modulo 2^16), these entries will become small values after applying the multiplication.
For example, if a is:
1, 1
1, 32867
then a*2 will be:
2, 2
2, 198
This makes sense, but the behavior I would like to enforce is to have 65535 as the "max ceiling", i.e.
x = x*2 if x*2<65535 else 65535
and a*2:
2, 2,
2, 65535
Does numpy support this?
note: I would like the resulting array also to be of dtype uint16
A:
I think the only way is to cast the array to a bigger data type and then clip the values before casting it back to uint16.
For example:
import numpy as np
a = np.array([*stuff], dtype=np.uint16)
res = np.clip(a.astype(np.uint32) * 2, 0, 65535).astype(np.uint16)
| numpy: multiply uint16 ndarray by scalar | I have a ndarray 'a' of dtype uint16.
I would like to multiply all entries by a scalar, let's say 2.
The max value for uint16 is 65535. Let's assume some entries of a are greater than 65535/2.
Because of unsigned integer overflow (the values wrap around modulo 2^16), these entries will become small values after applying the multiplication.
For example, if a is:
1, 1
1, 32867
then a*2 will be:
2, 2
2, 198
This makes sense, but the behavior I would like to enforce is to have 65535 as the "max ceiling", i.e.
x = x*2 if x*2<65535 else 65535
and a*2:
2, 2,
2, 65535
Does numpy support this?
note: I would like the resulting array also to be of dtype uint16
| [
"I think the only way is to cast the array to a bigger data type and then clip the values before casting it back to uint16.\nFor example:\nimport numpy as np\n\na = np.array([*stuff], dtype=np.uint16)\nres = np.clip(a.astype(np.uint32) * 2, 0, 65535).astype(np.uint16)\n\n"
] | [
2
] | [] | [] | [
"multidimensional_array",
"numeric",
"numpy",
"python",
"type_conversion"
] | stackoverflow_0074670564_multidimensional_array_numeric_numpy_python_type_conversion.txt |
Q:
EntityMetadataNotFound: No metadata for "BusinessApplication" was found
I've been using TypeORM with no problems for a while, but then suddenly this error pops up when making an API call:
EntityMetadataNotFound: No metadata for "BusinessApplication" was found.
at new EntityMetadataNotFoundError (C:\Users\Robbie\Code\fit-society\node_modules\typeorm\error\EntityMetadataNotFoundError.js:10:28)
at Connection.getMetadata (C:\Users\Robbie\Code\fit-society\node_modules\typeorm\connection\Connection.js:336:19)
at EntityManager.<anonymous> (C:\Users\Robbie\Code\fit-society\node_modules\typeorm\entity-manager\EntityManager.js:459:44)
at step (C:\Users\Robbie\Code\fit-society\node_modules\tslib\tslib.js:136:27)
at Object.next (C:\Users\Robbie\Code\fit-society\node_modules\tslib\tslib.js:117:57)
at C:\Users\Robbie\Code\fit-society\node_modules\tslib\tslib.js:110:75
at new Promise (<anonymous>)
at Object.__awaiter (C:\Users\Robbie\Code\fit-society\node_modules\tslib\tslib.js:106:16)
at EntityManager.find (C:\Users\Robbie\Code\fit-society\node_modules\typeorm\entity-manager\EntityManager.js:456:24)
at module.exports../src/pages/api/business-applications/[id].ts.__webpack_exports__.default.Object (C:\Users\Robbie\Code\fit-society\.next\server\static\development\pages\api\business-applications\[id].js:1648:65)
at process._tickCallback (internal/process/next_tick.js:68:7)
It happens when this code is called:
import { BusinessApplication } from '../../../backend/all-entities';
import db from '../../../backend/database';
// in a function...
const manager = await db.getManager();
// in this case, req.data.id does equal "oldest"
const application: BusinessApplication | undefined =
req.data.id === 'oldest'
? (await manager.find(BusinessApplication, { order: { dateSubmitted: 'DESC' }, take: 1 }))[0]
: await manager.findOne(BusinessApplication, { where: { id: parseInt(req.data.id, 10) } });
if (application == null) throw createError(404, 'Business application not found');
return application;
In backend/all-entities.ts:
/**
* This file exists to solve circular dependency problems with Webpack by explicitly specifying the module loading order.
* @see https://medium.com/visual-development/how-to-fix-nasty-circular-dependency-issues-once-and-for-all-in-javascript-typescript-a04c987cf0de
*/
import Account_ from './entities/Account';
export { default as Qualification } from './entities/Qualification';
export { default as EditableAccount } from './entities/EditableAccount';
export { default as EditableBusiness } from './entities/EditableBusiness';
export { default as Business } from './entities/Business';
export { default as BusinessApplication, SendableBusinessApplication } from './entities/BusinessApplication';
export { default as EditableCustomer } from './entities/EditableCustomer';
export { default as Customer } from './entities/Customer';
export { default as Offer } from './entities/Offer';
export { default as ProductOffer } from './entities/ProductOffer';
export { default as ServiceOffer } from './entities/ServiceOffer';
In backend/database.ts:
import 'reflect-metadata';
import {
Connection,
ConnectionManager,
ConnectionOptions,
createConnection,
EntityManager,
getConnectionManager
} from 'typeorm';
import { Business, BusinessApplication, Customer, ProductOffer, ServiceOffer, Qualification } from './all-entities';
/**
* Database manager class
*/
class Database {
private connectionManager: ConnectionManager;
constructor() {
this.connectionManager = getConnectionManager();
}
private async getConnection(): Promise<Connection> {
const CONNECTION_NAME = 'default';
let connection: Connection;
if (this.connectionManager.has(CONNECTION_NAME)) {
connection = this.connectionManager.get(CONNECTION_NAME);
if (!connection.isConnected) {
connection = await connection.connect();
}
} else {
const connectionOptions: ConnectionOptions = {
name: CONNECTION_NAME,
type: 'postgres',
url: process.env.DATABASE_URL,
synchronize: true,
entities: [Business, BusinessApplication, Qualification, Customer, ProductOffer, ServiceOffer]
};
connection = await createConnection(connectionOptions);
}
return connection;
}
public getManager(): Promise<EntityManager> {
return this.getConnection().then(conn => conn.manager);
}
}
const db = new Database();
export default db;
In backend/entities/BusinessApplication.ts:
import { IsIn, IsString, IsOptional } from 'class-validator';
import { Column, CreateDateColumn, Entity, PrimaryGeneratedColumn } from 'typeorm';
import { EditableBusiness } from '../all-entities';
class PasswordlessBusinessApplication extends EditableBusiness {
@Column()
@IsIn(['individual', 'company'])
type!: 'individual' | 'company';
@Column({ nullable: true })
@IsOptional()
@IsString()
fein?: string;
@Column({ nullable: true })
@IsOptional()
@IsString()
professionalCertificationUrl?: string;
}
@Entity()
export default class BusinessApplication extends PasswordlessBusinessApplication {
@PrimaryGeneratedColumn()
id!: number;
@CreateDateColumn()
dateSubmitted!: Date;
@Column()
@IsString()
passwordHash!: string;
}
/**
* A business application sent by the client, which contains a password instead of a password hash.
* Qualification objects do not require id or business.
*/
export class SendableBusinessApplication extends PasswordlessBusinessApplication {
@IsString()
password!: string;
}
From what I can see, the imports all point to the right file, I imported reflect-metadata, and I put the @Entity() decorator on the BusinessApplication class. So what could be going wrong? Notably, if I change await manager.find(BusinessApplication, ...) in the first file to await manager.find('BusinessApplication', ...) it works fine, but I don't want to do that because I'll lose intellisense. Also, this error doesn't happen the first time the server is initialized, but after it is hot-module-reloaded by Webpack it breaks (this can happen after Next.js disposes of the page or after I change the code).
A:
The problem
For me, this was happening after the webpack hot reload: when everything was reloaded, new entity models were generated, but TypeORM didn't know about them because I only made a connection to the database once, when the database.ts module was initialized, as you can see from the file. So when TypeORM compared the new entities from the manager.find(BusinessApplication, ...) call with the old entities, it said they were not the same because they don't have referential equality (no two distinct function objects are equal in JS). Therefore, it didn't find the metadata when comparing against manager.connection.entityMetadatas, which contained only the old versions.
The fix
I'll just need to make a new connection to the database after every reload so it is populated with the new entity metadata.
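A rough sketch of that fix, written against the TypeORM 0.2 connection API already used above (the readonly-options mutation is a community workaround for hot reloading, not an official API):
private async getConnection(): Promise<Connection> {
  const CONNECTION_NAME = 'default';
  const entities = [Business, BusinessApplication, Qualification, Customer, ProductOffer, ServiceOffer];

  if (this.connectionManager.has(CONNECTION_NAME)) {
    const connection = this.connectionManager.get(CONNECTION_NAME);
    // After a hot reload the entity classes are new function objects, so the
    // cached connection's metadata is stale: close it and rebuild it with the
    // freshly imported entities.
    if (connection.isConnected) {
      await connection.close();
    }
    // `options` is typed readonly; mutating it at runtime is the workaround.
    Object.assign(connection.options, { entities });
    return connection.connect();
  }
  return createConnection({
    name: CONNECTION_NAME,
    type: 'postgres',
    url: process.env.DATABASE_URL,
    synchronize: true,
    entities,
  });
}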
A:
Remove the dist folder from your project and run it again.
A:
Got this error when I renamed an entity without having .entity in the file name
A:
Don't forget to put entities into your DataSource (TypeORM version 0.3.0 and above).
https://github.com/typeorm/typeorm/blob/master/CHANGELOG.md#030-2022-03-17
export const dataSource = new DataSource({
...
...
entities: [BusinessApplication],
...
...
})
| EntityMetadataNotFound: No metadata for "BusinessApplication" was found | I've been using TypeORM with no problems for a while, but then suddenly this error pops up when making an API call:
EntityMetadataNotFound: No metadata for "BusinessApplication" was found.
at new EntityMetadataNotFoundError (C:\Users\Robbie\Code\fit-society\node_modules\typeorm\error\EntityMetadataNotFoundError.js:10:28)
at Connection.getMetadata (C:\Users\Robbie\Code\fit-society\node_modules\typeorm\connection\Connection.js:336:19)
at EntityManager.<anonymous> (C:\Users\Robbie\Code\fit-society\node_modules\typeorm\entity-manager\EntityManager.js:459:44)
at step (C:\Users\Robbie\Code\fit-society\node_modules\tslib\tslib.js:136:27)
at Object.next (C:\Users\Robbie\Code\fit-society\node_modules\tslib\tslib.js:117:57)
at C:\Users\Robbie\Code\fit-society\node_modules\tslib\tslib.js:110:75
at new Promise (<anonymous>)
at Object.__awaiter (C:\Users\Robbie\Code\fit-society\node_modules\tslib\tslib.js:106:16)
at EntityManager.find (C:\Users\Robbie\Code\fit-society\node_modules\typeorm\entity-manager\EntityManager.js:456:24)
at module.exports../src/pages/api/business-applications/[id].ts.__webpack_exports__.default.Object (C:\Users\Robbie\Code\fit-society\.next\server\static\development\pages\api\business-applications\[id].js:1648:65)
at process._tickCallback (internal/process/next_tick.js:68:7)
It happens when this code is called:
import { BusinessApplication } from '../../../backend/all-entities';
import db from '../../../backend/database';
// in a function...
const manager = await db.getManager();
// in this case, req.data.id does equal "oldest"
const application: BusinessApplication | undefined =
req.data.id === 'oldest'
? (await manager.find(BusinessApplication, { order: { dateSubmitted: 'DESC' }, take: 1 }))[0]
: await manager.findOne(BusinessApplication, { where: { id: parseInt(req.data.id, 10) } });
if (application == null) throw createError(404, 'Business application not found');
return application;
In backend/all-entities.ts:
/**
* This file exists to solve circular dependency problems with Webpack by explicitly specifying the module loading order.
* @see https://medium.com/visual-development/how-to-fix-nasty-circular-dependency-issues-once-and-for-all-in-javascript-typescript-a04c987cf0de
*/
import Account_ from './entities/Account';
export { default as Qualification } from './entities/Qualification';
export { default as EditableAccount } from './entities/EditableAccount';
export { default as EditableBusiness } from './entities/EditableBusiness';
export { default as Business } from './entities/Business';
export { default as BusinessApplication, SendableBusinessApplication } from './entities/BusinessApplication';
export { default as EditableCustomer } from './entities/EditableCustomer';
export { default as Customer } from './entities/Customer';
export { default as Offer } from './entities/Offer';
export { default as ProductOffer } from './entities/ProductOffer';
export { default as ServiceOffer } from './entities/ServiceOffer';
In backend/database.ts:
import 'reflect-metadata';
import {
Connection,
ConnectionManager,
ConnectionOptions,
createConnection,
EntityManager,
getConnectionManager
} from 'typeorm';
import { Business, BusinessApplication, Customer, ProductOffer, ServiceOffer, Qualification } from './all-entities';
/**
* Database manager class
*/
class Database {
private connectionManager: ConnectionManager;
constructor() {
this.connectionManager = getConnectionManager();
}
private async getConnection(): Promise<Connection> {
const CONNECTION_NAME = 'default';
let connection: Connection;
if (this.connectionManager.has(CONNECTION_NAME)) {
connection = this.connectionManager.get(CONNECTION_NAME);
if (!connection.isConnected) {
connection = await connection.connect();
}
} else {
const connectionOptions: ConnectionOptions = {
name: CONNECTION_NAME,
type: 'postgres',
url: process.env.DATABASE_URL,
synchronize: true,
entities: [Business, BusinessApplication, Qualification, Customer, ProductOffer, ServiceOffer]
};
connection = await createConnection(connectionOptions);
}
return connection;
}
public getManager(): Promise<EntityManager> {
return this.getConnection().then(conn => conn.manager);
}
}
const db = new Database();
export default db;
In backend/entities/BusinessApplication.ts:
import { IsIn, IsString, IsOptional } from 'class-validator';
import { Column, CreateDateColumn, Entity, PrimaryGeneratedColumn } from 'typeorm';
import { EditableBusiness } from '../all-entities';
class PasswordlessBusinessApplication extends EditableBusiness {
@Column()
@IsIn(['individual', 'company'])
type!: 'individual' | 'company';
@Column({ nullable: true })
@IsOptional()
@IsString()
fein?: string;
@Column({ nullable: true })
@IsOptional()
@IsString()
professionalCertificationUrl?: string;
}
@Entity()
export default class BusinessApplication extends PasswordlessBusinessApplication {
@PrimaryGeneratedColumn()
id!: number;
@CreateDateColumn()
dateSubmitted!: Date;
@Column()
@IsString()
passwordHash!: string;
}
/**
* A business application sent by the client, which contains a password instead of a password hash.
* Qualification objects do not require id or business.
*/
export class SendableBusinessApplication extends PasswordlessBusinessApplication {
@IsString()
password!: string;
}
From what I can see, the imports all point to the right file, I imported reflect-metadata, and I put the @Entity() decorator on the BusinessApplication class. So what could be going wrong? Notably, if I change await manager.find(BusinessApplication, ...) in the first file to await manager.find('BusinessApplication', ...) it works fine, but I don't want to do that because I'll lose intellisense. Also, this error doesn't happen the first time the server is initialized, but after it is hot-module-reloaded by Webpack it breaks (this can happen after Next.js disposes of the page or after I change the code).
| [
"The problem\nFor me, this was happening after the webpack hot-reload because when everything was reloaded, new entity models were generated. Although new entity models were generated, TypeORM didn't know about them because I only made a connection to the database once, when the database.ts module was initialized, as you can see from the file. So when TypeORM compared the new entities from the manager.find(BusinessApplication, ...) call and old entities, it said they were not the same because they don't have referential equality (no two functions are the same in JS). Therefore, it didn't find the metadata when comparing it to manager.connection.entityMetadatas, which contained the old version only.\nThe fix\nI'll just need to make a new connection to the database after every reload so it is populated with the new entity metadata.\n",
"remove dist folder from your project and run again\n",
"Got this error when I renamed an entity without having .entity in the file name\n",
"Don't forget to put entities into your DataSource (typeorm version 0.3.0 an above)\nhttps://github.com/typeorm/typeorm/blob/master/CHANGELOG.md#030-2022-03-17\nexport const dataSource = new DataSource({\n ...\n ...\n entities: [BusinessApplication],\n ...\n ...\n})\n\n"
] | [
4,
2,
2,
0
] | [] | [] | [
"next.js",
"typeorm",
"typescript",
"webpack",
"webpack_hmr"
] | stackoverflow_0060677582_next.js_typeorm_typescript_webpack_webpack_hmr.txt |
Q:
Writing to a JSON column type in BigQuery using Spark
I have a column of type JSON in my BigQuery schema definition. I want to write to this from a Java Spark Pipeline but I cannot seem to find a way that this is possible.
If I create a Struct of the JSON, it results in a RECORD type.
And if I use to_json like below, it converts into a STRING type.
dataframe = dataframe.withColumn("JSON_COLUMN", functions.to_json(functions.col("JSON_COLUMN")))
I know BigQuery has support for JSON columns but is there any way to write to them with Java Spark currently?
A:
As @DavidRabinowitz mentioned in the comment, feature to insert JSON type data into BigQuery using spark-bigquery-connector will be released soon.
All the updates regarding the BigQuery features will be updated in this document.
Posting the answer as community wiki for the benefit of the community that might encounter this use case in the future.
Feel free to edit this answer for additional information.
| Writing to a JSON column type in BigQuery using Spark | I have a column of type JSON in my BigQuery schema definition. I want to write to this from a Java Spark Pipeline but I cannot seem to find a way that this is possible.
If I create a Struct of the JSON, it results in a RECORD type.
And if I use to_json like below, it converts into a STRING type.
dataframe = dataframe.withColumn("JSON_COLUMN", functions.to_json(functions.col("JSON_COLUMN")))
I know BigQuery has support for JSON columns but is there any way to write to them with Java Spark currently?
| [
"As @DavidRabinowitz mentioned in the comment, feature to insert JSON type data into BigQuery using spark-bigquery-connector will be released soon.\nAll the updates regarding the BigQuery features will be updated in this document.\nPosting the answer as community wiki for the benefit of the community that might encounter this use case in the future.\nFeel free to edit this answer for additional information.\n"
] | [
0
] | [] | [] | [
"apache_spark",
"apache_spark_sql",
"google_bigquery",
"java"
] | stackoverflow_0074655855_apache_spark_apache_spark_sql_google_bigquery_java.txt |
Q:
How do versions after py3.10 implement asyncio.get_event_loop with the same behavior as previous versions
python3.10-asyncio-get_event_loop
Deprecated since version 3.10: Emits a deprecation warning if there is no running event loop. In future Python releases, this function may become an alias of get_running_loop() and will accordingly raise a RuntimeError if there is no running event loop.
The behavior of get_event_loop has changed in version 3.10. The sanic-jwt library now needs to stay compatible with Python 3.10 and later, and needs to be modified to remove this warning (DeprecationWarning: There is no current event loop).
The warning is raised in the call method under ConfigItem, on line 134 of sanic_jwt/configuration.py.
I tried the method from this article and the test did not pass; it does not match the behavior of versions before 3.10.
PR
A:
If you want to hide the DeprecationWarning, set a higher logging level. Or if you have to use Python 3.10+, then you can do something like:
import asyncio
def get_event_loop() -> asyncio.AbstractEventLoop:
try:
return asyncio.get_running_loop()
except (RuntimeError, Exception):
return asyncio.new_event_loop()
# NOT RECOMMENDED: overriding the built-in function
# override the built-in get_event_loop function
asyncio.get_event_loop = get_event_loop
| How do versions after py3.10 implement asyncio.get_event_loop with the same behavior as previous versions | python3.10-asyncio-get_event_loop
Deprecated since version 3.10: Emits a deprecation warning if there is no running event loop. In future Python releases, this function may become an alias of get_running_loop() and will accordingly raise a RuntimeError if there is no running event loop.
The behavior of get_event_loop has changed in version 3.10. The sanic-jwt library now needs to stay compatible with Python 3.10 and later, and needs to be modified to remove this warning (DeprecationWarning: There is no current event loop).
The warning is raised in the call method under ConfigItem, on line 134 of sanic_jwt/configuration.py.
I tried the method from this article and the test did not pass; it does not match the behavior of versions before 3.10.
PR
| [
"If you want to hide the DeprecationWarning, set a higher logging level. Or if you have to use Python3.10+, then you can do something like:\nimport asyncio\n\ndef get_event_loop() -> asyncio.AbstractEventLoop:\n try:\n return asyncio.get_running_loop()\n except (RuntimeError, Exception):\n return asyncio.new_event_loop()\n\n# DO NOT RECOMMEND TO OVERRIDE THE built-in one\n# override the built-in get_event_loop function\nasyncio.get_event_loop = get_event_loop\n\n"
] | [
0
] | [] | [] | [
"python",
"python_3.x",
"python_asyncio",
"sanic"
] | stackoverflow_0074673969_python_python_3.x_python_asyncio_sanic.txt |
Q:
Is there a way to strikethrough a whole row in Markdown tables?
I'm trying to implement a table in Markdown for tracking purposes. I know that two tildes around text like ~~this~~ will make it strikethrough, but I was wondering if it's possible to strike through a whole table row, like in the screenshot shown below. Adding two tildes in all the cells of the table doesn't do the trick for me either.
screenshot of example
I tried Googling to no avail. I tried putting two tildes outside the whole table row and it didn't work either.
A:
The table cells are each parsed individually. The only way to apply markup to all cells in a row is to do so manually:
| col a | col b |
|-------|-------|
| ~~nope~~ | ~~nu-uh~~ |
You could use CSS, e.g. the nth-child construct to do this, but it won't work on sites like GitHub that do not allow custom CSS.
| Is there a way to strikethrough a whole row in Markdown tables? | I'm trying to implement a table in Markdown for tracking purposes. I know two tildes between texts like ~~this~~ will make it strikethrough text, but I was wondering if there's a possibility to strikethrough the whole table row like the screenshot shown below? Adding two tildes in all the cells of the table doesn't do the work for me as well.
screenshot of example
I tried Googling to no avail. I tried putting two tildes outside the whole table row and it wasn't working as well.
| [
"The table cells are each parsed individually. The only way to apply markup to all cells in a row is to do so manually:\n| col a | col b |\n|-------|-------|\n| ~~nope~~ | ~~nu-uh~~ |\n\nYou could use CSS, e.g. the nth-child construct to do this, but it won't work on sites like GitHub that do not allow custom CSS.\n"
] | [
0
] | [] | [] | [
"markdown",
"strikethrough",
"text",
"uitableview"
] | stackoverflow_0074672957_markdown_strikethrough_text_uitableview.txt |
Q:
Need big help merging work done outside of git repository
I had another developer help me with some html/css/javascript files. He doesn't work in git, so I just gave him a zip file.
So, in return, he handed me a zip file with all the changes, and I cannot figure out the best way to merge the changes into my current branch. I've been stuck on this for a couple of hours now.
Every time I create a branch from my current branch, overwrite the files there (with the other dev's work), and then try to merge with my current branch, it just overwrites the files without actually merging them (I'm expecting to see some merge conflicts).
A:
The best way forward would be to identify which commit was the source for your zip file, create a branch off that commit, dump the zip file, create a commit, merge the branch:
git checkout -b external-work $commit-hash-of-the-source-commit
unzip the-archive.zip
git add -u
git add any/new/files that-might have/been/added
git commit -m 'Work from external dev'
git checkout your-branch
git merge external-work
| Need big help merging work done outside of git repository | I had another developer help me with some html/css/javascript files. He doesn't work in git, so I just gave him a zip file.
So, in return, he handed me a zip file with all the changes, and I cannot figure out the best way to merge the changes into my current branch. I've been stuck on this for a couple of hours now.
Every time I create a branch from my current branch, overwrite the files there (with the other dev's work), and then try to merge with my current branch, it just overwrites the files without actually merging them (I'm expecting to see some merge conflicts).
| [
"The best way forward would be to identify which commit was the source for your zip file, create a branch off that commit, dump the zip file, create a commit, merge the branch:\ngit checkout -b external-work $commit-hash-of-the-source-commit\nunzip the-archive.zip\ngit add -u\ngit add any/new/files that-might have/been/added\ngit commit -m 'Work from external dev'\ngit checkout your-branch\ngit merge external-work\n\n"
] | [
1
] | [] | [] | [
"git",
"git_merge",
"github",
"merge",
"repo"
] | stackoverflow_0074395882_git_git_merge_github_merge_repo.txt |
Q:
Publish Storybook components to NPM using Semantic Release and Github Actions
Article for reference
I can set up Github Actions but get stuck on GitHub Release; it says
Run npx semantic-release [semantic-release]: node version >=16 ||
^14.17 is required. Found v12.22.12.
See
https://github.com/semantic-release/semantic-release/blob/master/docs/support/node-version.md
for more details and solutions. Error: Process completed with exit
code 1.
It says I'm using an older version of Node. However, that shouldn't be possible: both my package.json and node -v say it is 16.x.x.
What could be wrong?
A:
My working step is:
build:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v3
with:
fetch-depth: 0
token: ${{ secrets.ADMIN_TOKEN }}
- name: setup nodejs
uses: actions/setup-node@v3
with:
node-version: '16'
- name: release using semantic-release
env:
GITHUB_TOKEN: ${{ secrets.ADMIN_TOKEN }}
GIT_AUTHOR_NAME: secrets.automation.dev
GIT_AUTHOR_EMAIL: [email protected]
GIT_COMMITTER_NAME: secrets.automation.dev
GIT_COMMITTER_EMAIL: [email protected]
run: |
sudo apt-get update
sudo apt-get install python
pip install --user bumpversion
npm install @semantic-release/changelog
npm install @semantic-release/exec
npm install @semantic-release/git
npm install @semantic-release/github
npx semantic-release
the .releaserc file is:
{
"debug": true,
"branches": [ "main" ],
"plugins": [
["@semantic-release/commit-analyzer", {
"preset": "angular",
"releaseRules": [
{"type": "release","release": "patch"}
]}],
"@semantic-release/release-notes-generator",
"@semantic-release/changelog",
[
"@semantic-release/exec",
{
"prepareCmd": "bump2version --allow-dirty --current-version ${lastRelease.version} --new-version ${nextRelease.version} patch"
}
],
[
"@semantic-release/git",
{
"message": "chore(release): ${nextRelease.version} release notes\n\n${nextRelease.notes}"
}
],
"@semantic-release/github"
]
}
| Publish Storybook components to NPM using Semantic Release and Github Actions | Article for reference
I can set up Github Actions but get stuck on GitHub Release; it says
Run npx semantic-release [semantic-release]: node version >=16 ||
^14.17 is required. Found v12.22.12.
See
https://github.com/semantic-release/semantic-release/blob/master/docs/support/node-version.md
for more details and solutions. Error: Process completed with exit
code 1.
It says I'm using an older version of Node. However, that shouldn't be possible: both my package.json and node -v say it is 16.x.x.
What could be wrong?
| [
"My working step is:\nbuild:\n runs-on: ubuntu-latest\n steps:\n - name: Checkout code\n uses: actions/checkout@v3\n with:\n fetch-depth: 0\n token: ${{ secrets.ADMIN_TOKEN }}\n\n - name: setup nodejs\n uses: actions/setup-node@v3\n with:\n node-version: '16'\n\n - name: release using semantic-release\n env:\n GITHUB_TOKEN: ${{ secrets.ADMIN_TOKEN }}\n GIT_AUTHOR_NAME: secrets.automation.dev\n GIT_AUTHOR_EMAIL: [email protected]\n GIT_COMMITTER_NAME: secrets.automation.dev\n GIT_COMMITTER_EMAIL: [email protected]\n run: |\n sudo apt-get update\n sudo apt-get install python\n pip install --user bumpversion\n npm install @semantic-release/changelog\n npm install @semantic-release/exec\n npm install @semantic-release/git\n npm install @semantic-release/github\n npx semantic-release\n\nthe .releaserc file is:\n{\n \"debug\": true,\n \"branches\": [ \"main\" ],\n \"plugins\": [\n [\"@semantic-release/commit-analyzer\", {\n \"preset\": \"angular\",\n \"releaseRules\": [\n {\"type\": \"release\",\"release\": \"patch\"}\n ]}],\n \"@semantic-release/release-notes-generator\",\n \"@semantic-release/changelog\",\n [\n \"@semantic-release/exec\",\n {\n \"prepareCmd\": \"bump2version --allow-dirty --current-version ${lastRelease.version} --new-version ${nextRelease.version} patch\"\n }\n ],\n [\n \"@semantic-release/git\",\n {\n \"message\": \"chore(release): ${nextRelease.version} release notes\\n\\n${nextRelease.notes}\"\n }\n ],\n \"@semantic-release/github\"\n ]\n}\n\n"
] | [
0
] | [] | [] | [
"github_actions",
"semantic_release",
"storybook"
] | stackoverflow_0074146677_github_actions_semantic_release_storybook.txt |
Q:
Unable to find HTTP client library - Android
I have multiple modules in my project and have added the following dependency in my project level build.gradle file for using HTTP Client Library throughout the project:
compile "cz.msebera.android:httpclient:4.4.1.2"
I created a new module and I want to use the above mentioned library in it. So here is what I did in build.gradle for that module:
android {
compileSdkVersion 25
buildToolsVersion "26.0.0"
useLibrary 'cz.msebera.android.httpclient'
defaultConfig {
minSdkVersion 16
targetSdkVersion 25
versionCode 1
versionName "1.0"
testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
}
I have been following this post. Upon trying to sync the project I get the following error:
Error: Unable to find optional library: cz.msebera.android.httpclient
I can't figure out what went wrong here. Please help me to sort it out.
A:
Use this dependency in your build.gradle(Module: app):
dependencies {
compile 'org.apache.httpcomponents:httpclient-android:4.3.5.1'
}
A:
Use this:
compile group: 'cz.msebera.android', name: 'httpclient', version: '4.4.1.1'
Or you can download the library directly from HERE, paste it into your lib folder, and choose "Add as Library" by right-clicking on it after pasting.
A:
Use
compile 'org.apache.httpcomponents:httpclient-android:4.3.5.1'
A:
Updated answer for Android Studio Dolphin (2021), as of 12/2022:
-Use cz.msebera.android and remove all org.apache.httpcomponents dependencies. The Apache library conflicts with Android SDK >= 30. Add this line to the build.gradle file:
implementation 'cz.msebera.android:httpclient:4.5.8'
-Modify all imports from org.apache.http to cz.msebera.
-You can refer to my project at: https://github.com/tamobi1991/RssFeeda
| Unable to find HTTP client library - Android | I have multiple modules in my project and have added the following dependency in my project level build.gradle file for using HTTP Client Library throughout the project:
compile "cz.msebera.android:httpclient:4.4.1.2"
I created a new module and I want to use the above mentioned library in it. So here is what I did in build.gradle for that module:
android {
compileSdkVersion 25
buildToolsVersion "26.0.0"
useLibrary 'cz.msebera.android.httpclient'
defaultConfig {
minSdkVersion 16
targetSdkVersion 25
versionCode 1
versionName "1.0"
testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
}
I have been following this post. Upon trying to sync the project I get the following error:
Error: Unable to find optional library: cz.msebera.android.httpclient
I can't figure out what went wrong here. Please help me to sort it out.
| [
"Use this dependency in your build.gradle(Module: app):\ndependencies {\n compile 'org.apache.httpcomponents:httpclient-android:4.3.5.1'\n}\n\n",
"use this\n compile group: 'cz.msebera.android', name: 'httpclient', version: '4.4.1.1' or you can use library directly from HERE and paste it in your lib folder and choose add as library by right click on that library after paste.\n",
"Use\ncompile 'org.apache.httpcomponents:httpclient-android:4.3.5.1'\n\n",
"Update answer for Android IDE dolphin 2021 on 12/2022\n-Using cz.msebera.android , remove all org.apache.httpcomponents . Apache library create conflict with android sdk >=30. Adding this line to build.gradle file:\nimplementation 'cz.msebera.android:httpclient:4.5.8'\n-Modify all import from org.apche.http to cz.msebera.\n-You can reference to my project at: https://github.com/tamobi1991/RssFeeda\n"
] | [
1,
1,
0,
0
] | [] | [] | [
"android",
"httpclient"
] | stackoverflow_0045429357_android_httpclient.txt |
Q:
how to correctly add and remove element with alpine.js
I want to completely hide my element based on screen size (mobile & desktop), not just give it display: none, so I use x-if in Alpine.js:
<template x-if="window.innerWidth < 768 ? false : true">
<div>
<a href="$">Hidden Link</a>
</div>
</template>
On screen it looks like the navbar is removed, but when I create a test case to make sure, it fails: the webdriver can still recognize the hidden link. So how do I actually add and remove an element with Alpine.js based on the user's screen size (desktop/mobile)?
A:
To completely remove an element from the page based on screen size, you can use the x-if directive in conjunction with the x-transition:leave attribute to animate the element's removal from the page. This will remove the element from the page and its content will no longer be present in the page's HTML code. Here's an example of how you could do this:
<template x-if="window.innerWidth < 768" x-transition:leave="transition duration-500">
<div>
<a href="$">Hidden Link</a>
</div>
</template>
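Whichever variant you use, note that a bare window.innerWidth expression in x-if is only evaluated when Alpine first initializes, so it won't react to later viewport changes. A minimal sketch that keeps the flag reactive, assuming Alpine v3 syntax:
<div x-data="{ isDesktop: window.innerWidth >= 768 }"
     @resize.window="isDesktop = window.innerWidth >= 768">
  <template x-if="isDesktop">
    <div>
      <a href="$">Hidden Link</a>
    </div>
  </template>
</div>

Since x-if removes the template's content from the DOM entirely (unlike x-show, which only toggles display), a webdriver should no longer find the link once isDesktop is false.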
| how to correctly add and remove element with alpine.js | I want to completely hide my element based on screen size (mobile & desktop), not just give it display: none, so I use x-if in Alpine.js:
<template x-if="window.innerWidth < 768 ? false : true">
<div>
<a href="$">Hidden Link</a>
</div>
</template>
On screen it looks like the navbar is removed, but when I create a test case to make sure, it fails: the webdriver can still recognize the hidden link. So how do I actually add and remove an element with Alpine.js based on the user's screen size (desktop/mobile)?
| [
"To completely remove an element from the page based on screen size, you can use the x-if directive in conjunction with the x-transition:leave attribute to animate the element's removal from the page. This will remove the element from the page and its content will no longer be present in the page's HTML code. Here's an example of how you could do this:\n<template x-if=\"window.innerWidth < 768\" x-transition:leave=\"transition duration-500\">\n <div>\n <a href=\"$\">Hidden Link</a>\n </div>\n</template>\n\n"
] | [
0
] | [] | [] | [
"alpine.js",
"javascript"
] | stackoverflow_0074674488_alpine.js_javascript.txt |
Q:
How to remove a new createtd Element in the Dom with the click Function in Javascript
I created a TODO list. I have an input and take the value of the input to append it to an element.
With the value of the input I create a new list element with a delete button. Now I have trouble deleting my newly created element within the list.
this is my js code
//create a variable for the button
let btn = document.querySelector('.add');
// create a variable for the list
let ul = document.querySelector('.list');
//create a variable for the input to get the value
let input = document.querySelector('.txt');
// with the add button, take the value from the input and create a new li element
btn.addEventListener('click', function () {
let txt = input.value; // create variable for input value
let li = document.createElement('li'); // create new list element
li.textContent = txt;
let but = document.createElement('button'); // create new button element
but.textContent = 'delete';
li.textContent = txt;
li.appendChild(but);
ul.appendChild(li);
});
// try to delete list element from unordered list
but.addEventListener('click', function () {
ul.removeChild(li);
});
Here the html code
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Document</title>
<link rel="stylesheet" href="styles.css" />
</head>
<body>
<div id="container">
<input type="text" class="txt" />
<button class="add">Add to list</button>
<ul class="list"></ul>
</div>
<script src="index.js"></script>
</body>
</html>
The goal is to delete the list element from the todo app with the delete button.
I tried to add an event handler to the delete button within each list element. I tried to use removeChild but it will not work.
A:
The issue you're encountering is most likely caused by timing. You're declaring variables with let, which scopes them to a local function, but re-using them during an event. You may also have tried to set these variables before the DOM was fully loaded, in which case they don't actually exist yet.
Your code works "as expected" with minor modifications:
document.addEventListener('DOMContentLoaded', function() {
//create a variable for the button
let btn = document.querySelector('.add');
// create a variable for the list
let ul = document.querySelector('.list');
//create a variable for the input to get the value
let input = document.querySelector('.txt');
// with add button take the value from the input and create a new li elemtn
btn.addEventListener('click', function () {
let txt = input.value; // create variable for input value
let li = document.createElement('li'); // create new list elemnt
li.textContent = txt;
let but = document.createElement('button'); // create ne button elemnt
but.textContent = 'delete';
but.addEventListener('click', function () {
ul.removeChild(li);
});
li.appendChild(but);
ul.appendChild(li);
});
// try to delte list element from unorderd list
})
<div id="container">
<input type="text" class="txt" />
<button class="add">Add to list</button>
<ul class="list"></ul>
</div>
<script src="index.js"></script>
A:
To remove a newly created element in the DOM with the click function in JavaScript, you can use the "removeChild" method. Here is an example:
// create the element
var newElement = document.createElement('p');
newElement.innerHTML = 'Click here to remove me';
// add the element to the DOM
document.body.appendChild(newElement);
// add a click event listener to the element
newElement.addEventListener('click', function() {
// remove the element from the DOM
document.body.removeChild(newElement);
});
This will create a new paragraph element with the text "Click here to remove me", add it to the DOM, and attach a click event listener to it. When the element is clicked, it will be removed from the DOM.
A:
To delete a list element, you can add an event listener to the delete button, and use the removeChild() method on the parent element (in this case, the ul element) to remove the list item that contains the delete button.
However, the problem with your code is that the but variable is declared inside the event listener for the add button, and therefore it is not accessible from the event listener for the delete button. To fix this, you can move the declaration of the but variable outside of the event listener, and use the this keyword inside the event listener to refer to the delete button that was clicked.
Here is an updated version of your code that uses these changes to delete a list item when the delete button is clicked:
//create a variable for the button
let btn = document.querySelector('.add');
// create a variable for the list
let ul = document.querySelector('.list');
//create a variable for the input to get the value
let input = document.querySelector('.txt');
// create variable for the delete button
let but = document.createElement('button');
but.textContent = 'delete';
// with add button take the value from the input and create a new li elemtn
btn.addEventListener('click', function () {
let txt = input.value; // create variable for input value
let li = document.createElement('li'); // create new list elemnt
li.textContent = txt;
li.appendChild(but);
ul.appendChild(li);
});
// delete list element from unordered list when delete button is clicked
but.addEventListener('click', function () {
ul.removeChild(this.parentElement);
});
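As a general alternative (a sketch of my own, not taken from the answers above): event delegation puts a single click listener on the list itself, so every delete button, including ones created later, is handled without attaching a listener per button:
let ul = document.querySelector('.list');
ul.addEventListener('click', function (event) {
  // Any click on a button inside the list removes the surrounding <li>
  if (event.target.tagName === 'BUTTON') {
    event.target.closest('li').remove();
  }
});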
] | How to remove a newly created element in the DOM with the click function in JavaScript | I created a TODO list. I have an input and take the value of the input to append it to an element.
With the value of the input I create a new list element with a delete button. Now I have trouble deleting my newly created element within the list.
this is my js code
//create a variable for the button
let btn = document.querySelector('.add');
// create a variable for the list
let ul = document.querySelector('.list');
//create a variable for the input to get the value
let input = document.querySelector('.txt');
// with add button take the value from the input and create a new li elemtn
btn.addEventListener('click', function () {
let txt = input.value; // create variable for input value
let li = document.createElement('li'); // create new list elemnt
li.textContent = txt;
let but = document.createElement('button'); // create ne button elemnt
but.textContent = 'delete';
li.textContent = txt;
li.appendChild(but);
ul.appendChild(li);
});
// try to delte list element from unorderd list
but.addEventListener('click', function () {
ul.removeChild(li);
});
Here the html code
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Document</title>
<link rel="stylesheet" href="styles.css" />
</head>
<body>
<div id="container">
<input type="text" class="txt" />
<button class="add">Add to list</button>
<ul class="list"></ul>
</div>
<script src="index.js"></script>
</body>
</html>
The goal is to delete the list element from the todo app with the delete button
I tried to add an event handler to the delete button within each list element. I tried to use removeChild but it did not work.
| [
"The issue you're encountering is most likely caused by timing. You're declaring variables with let which scopes them to a local function but re-using them during an event. You may also have tried to set these variables before the DOM was full loaded, in which case they don't actually exist yet.\nYour code works \"as expected\" with minor modifications:\n\n\ndocument.addEventListener('DOMContentLoaded', function() {\n //create a variable for the button\n let btn = document.querySelector('.add');\n // create a variable for the list\n let ul = document.querySelector('.list');\n //create a variable for the input to get the value\n let input = document.querySelector('.txt');\n\n // with add button take the value from the input and create a new li elemtn\n btn.addEventListener('click', function () {\n let txt = input.value; // create variable for input value\n let li = document.createElement('li'); // create new list elemnt\n li.textContent = txt;\n let but = document.createElement('button'); // create ne button elemnt\n but.textContent = 'delete';\n but.addEventListener('click', function () {\n ul.removeChild(li);\n });\n li.appendChild(but);\n ul.appendChild(li);\n });\n // try to delte list element from unorderd list\n})\n <div id=\"container\">\n <input type=\"text\" class=\"txt\" />\n <button class=\"add\">Add to list</button>\n\n <ul class=\"list\"></ul>\n </div>\n <script src=\"index.js\"></script>\n\n\n\n",
"To remove a new created element in the DOM with the click function in JavaScript, you can use the \"removeChild\" method. Here is an example:\n// create the element\nvar newElement = document.createElement('p');\nnewElement.innerHTML = 'Click here to remove me';\n\n// add the element to the DOM\ndocument.body.appendChild(newElement);\n\n// add a click event listener to the element\nnewElement.addEventListener('click', function() {\n // remove the element from the DOM\n document.body.removeChild(newElement);\n});\n\n\nThis will create a new paragraph element with the text \"Click here to remove me\", add it to the DOM, and attach a click event listener to it. When the element is clicked, it will be removed from the DOM.\n",
"To delete a list element, you can add an event listener to the delete button, and use the removeChild() method on the parent element (in this case, the ul element) to remove the list item that contains the delete button.\nHowever, the problem with your code is that the but variable is declared inside the event listener for the add button, and therefore it is not accessible from the event listener for the delete button. To fix this, you can move the declaration of the but variable outside of the event listener, and use the this keyword inside the event listener to refer to the delete button that was clicked.\nHere is an updated version of your code that uses these changes to delete a list item when the delete button is clicked:\n//create a variable for the button\nlet btn = document.querySelector('.add');\n// create a variable for the list\nlet ul = document.querySelector('.list');\n//create a variable for the input to get the value\nlet input = document.querySelector('.txt');\n\n// create variable for the delete button\nlet but = document.createElement('button');\nbut.textContent = 'delete';\n\n// with add button take the value from the input and create a new li elemtn\nbtn.addEventListener('click', function () {\n let txt = input.value; // create variable for input value\n let li = document.createElement('li'); // create new list elemnt\n li.textContent = txt;\n li.appendChild(but);\n\n ul.appendChild(li);\n});\n\n// delete list element from unordered list when delete button is clicked\nbut.addEventListener('click', function () {\n ul.removeChild(this.parentElement);\n});\n\n"
] | [
1,
0,
0
] | [] | [] | [
"dom",
"javascript"
] | stackoverflow_0074671856_dom_javascript.txt |
Q:
One Vuejs Pinia store with namespacing actions
Can pinia actions be divided into two namespaces, such that access happens over property n1 and n2 such that:
// current
store.n1a('hi')
store.n2b()
// wanted
store.n1.a('hi')
store.n2.b()
// ugly workaround:
store.namespace1().a('hi')
// store would look like
actions: {
namespace1() {
return {
a(msg) {
console.log(msg);
},
};
....
},
},
It helps with clean f() naming a lot. bath.paint() and kitchen.paint() instead of bathPaint() etc. Similar: https://vuejs.org/api/options-state.html#expose
A:
Pinia doesn't need namespaces.
You can access another store just by importing it.
Link: https://pinia.vuejs.org/core-concepts/getters.html#accessing-other-stores-getters
My advice is to create two separate stores and access them as you want; see the sketch below.
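A minimal sketch of that two-store approach (the bath/kitchen names are illustrative, borrowed from the question's example):
// stores/bath.js
import { defineStore } from 'pinia';

export const useBathStore = defineStore('bath', {
  actions: {
    paint(msg) { console.log('painting bath', msg); },
  },
});

// stores/kitchen.js (has its own `import { defineStore } from 'pinia'`)
export const useKitchenStore = defineStore('kitchen', {
  actions: {
    paint(msg) { console.log('painting kitchen', msg); },
  },
});

// in a component:
// const bath = useBathStore();
// bath.paint('hi');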
| One Vuejs Pinia store with namespacing actions | Can pinia actions be divided into two namespaces, such that access happens over property n1 and n2 such that:
// current
store.n1a('hi')
store.n2b()
// wanted
store.n1.a('hi')
store.n2.b()
// ugly workaround:
store.namespace1().a('hi')
// store would look like
actions: {
namespace1() {
return {
a(msg) {
console.log(msg);
},
};
....
},
},
It helps with clean f() naming a lot. bath.paint() and kitchen.paint() instead of bathPaint() etc. Similar: https://vuejs.org/api/options-state.html#expose
| [
"Pinia don't need namespace.\nYou can access another store juste by import.\n[Link][1]\n[1]: https://pinia.vuejs.org/core-concepts/getters.html#accessing-other-stores-getters\nMy advice will be to create two separate store and acces it as you want\n"
] | [
0
] | [] | [] | [
"javascript",
"pinia",
"vue.js",
"vuejs3"
] | stackoverflow_0074646222_javascript_pinia_vue.js_vuejs3.txt |
Q:
Compiling and running .exe in vs code with tasks
So I've been learning C++ and all my practice was done in VS Code so far. However, when I got to the OOP stuff I couldn't compile multiple files all at once with the extension. So I googled how to do it, and everywhere I went I found it can be done with tasks. It took me hours to find the right code for the tasks.json file that actually works. However, it can only compile, and I manually have to run the .exe file. Obviously I want to do both, but I have no idea how. I just need the extra bit that allows the task to run the file after it is compiled. I looked up the documentation for tasks in VS Code, but it is very confusing and makes no sense to me; I want to learn C++, not tasks.json. Hopefully someone can help me. Please don't just tell me to look up the documentation; I'm not posting this to be pointed back at docs I already found unhelpful.
Here is the current tasks.json that can only compile:
{
"version": "2.0.0",
"tasks": [
{
"type": "shell",
"label": "C++ Compile",
"command": "C:\\Program Files\\mingw-w64\\mingw64\\bin\\g++.exe",
"args": [
"-g",
"${fileDirname}\\*.cpp",
"-o",
"${fileDirname}\\${fileBasenameNoExtension}.exe"
],
"options": {
"cwd": "${workspaceFolder}"
},
"problemMatcher": [
"$gcc"
],
"group": {
"kind": "build",
"isDefault": true
}
}
]
}
Thank you for your time, and any help is appreciated.
A:
First answer on Stack Overflow, so please forgive any mistakes (in presenting the answer).
{
"version": "2.0.0",
"tasks": [
{
"type": "shell",
"label": "C++ Compile",
"command": "C:\\Program Files\\mingw-w64\\mingw64\\bin\\g++.exe",
"args": [
"-g",
"${fileDirname}\\*.cpp",
"-o",
"${fileDirname}\\${fileBasenameNoExtension}.exe"
],
"options": {
"cwd": "${workspaceFolder}"
},
"problemMatcher": [
"$gcc"
],
},
{
"type": "shell",
"label": "C++ run",
"command": ".\\${fileBasenameNoExtension}.exe",
"dependsOn":["C++ Compile"],
"dependsOrder": "sequence",
"group": {
"kind": "build",
"isDefault": true
}
}
]
}
The group settings are moved from the previous task (C++ Compile) to the new task (C++ run), so the run task is triggered by the default build action. If the .exe is not found (the code assumes it is in the workspace directory), change the directory according to your configuration.
A:
Well, since you can compile, and compiling is running an exe file (g++.exe), just copy the task again and change the exe file name, BAM! All done.
Do note you might need to change the "args" section.
And here's the finished tasks.json:
{
"version": "2.0.0",
"tasks": [
{
"type": "shell",
"label": "C++ Compile",
"command": "C:\\Program Files\\mingw-w64\\mingw64\\bin\\g++.exe",
"args": [
"-g",
"${fileDirname}\\*.cpp",
"-o",
"${fileDirname}\\${fileBasenameNoExtension}.exe"
],
"options": {
"cwd": "${workspaceFolder}"
},
"problemMatcher": [
"$gcc"
],
"group": {
"kind": "build",
"isDefault": true
}
},{
"type": "shell",
"label": "WHATEVER LABEL YOU WANT HERE",
"command": "${fileDirname}\\${fileBasenameNoExtension}.exe",
"args": [
],
"options": {
"cwd": "${workspaceFolder}"
},
"problemMatcher": [
],
"group": {
"kind": "build",
"isDefault": true
}
}
]
}
] | Compiling and running .exe in vs code with tasks | So I've been learning C++ and all my practice was done in VS Code so far. However, when I got to the OOP stuff I couldn't compile multiple files all at once with the extension. So I googled how to do it, and everywhere I went I found it can be done with tasks. It took me hours to find the right code for the tasks.json file that actually works. However, it can only compile, and I manually have to run the .exe file. Obviously I want to do both, but I have no idea how. I just need the extra bit that allows the task to run the file after it is compiled. I looked up the documentation for tasks in VS Code, but it is very confusing and makes no sense to me; I want to learn C++, not tasks.json. Hopefully someone can help me. Please don't just tell me to look up the documentation; I'm not posting this to be pointed back at docs I already found unhelpful.
Here is the current tasks.json that can only compile:
{
"version": "2.0.0",
"tasks": [
{
"type": "shell",
"label": "C++ Compile",
"command": "C:\\Program Files\\mingw-w64\\mingw64\\bin\\g++.exe",
"args": [
"-g",
"${fileDirname}\\*.cpp",
"-o",
"${fileDirname}\\${fileBasenameNoExtension}.exe"
],
"options": {
"cwd": "${workspaceFolder}"
},
"problemMatcher": [
"$gcc"
],
"group": {
"kind": "build",
"isDefault": true
}
}
]
}
Thank you for your time, and any help is appreciated.
| [
"First answer on stackoverflow so please forgive any mistakes(in presenting the answer)\n{\n \"version\": \"2.0.0\",\n \"tasks\": [\n {\n \"type\": \"shell\",\n \"label\": \"C++ Compile\",\n \"command\": \"C:\\\\Program Files\\\\mingw-w64\\\\mingw64\\\\bin\\\\g++.exe\",\n \"args\": [\n \"-g\",\n \"${fileDirname}\\\\*.cpp\",\n \"-o\",\n \"${fileDirname}\\\\${fileBasenameNoExtension}.exe\"\n ],\n \"options\": {\n \"cwd\": \"${workspaceFolder}\"\n },\n \"problemMatcher\": [\n \"$gcc\"\n ],\n\n },\n {\n \"type\": \"shell\",\n \"label\": \"C++ run\",\n \"command\": \".\\\\${fileBasenameNoExtension}.exe\",\n \"dependsOn\":[\"C++ Compile\"],\n \"dependsOrder\": \"sequence\",\n \"group\": {\n \"kind\": \"build\",\n \"isDefault\": true\n }\n \n }\n ]\n}\n\n\nthe groups should be shifted from previous task(c++ compile) to new task (c++ run) and if .exe(code assumes it to be in workspace dir) is not found change the directory according to your config.\n",
"Well, since you can compile, and compiling is running an exe file(g++.exe), just copy the task again and change the exe file name, BAM! all done.\nDo note you might need to change the \"args\" section.\nAnd here's the finished tasks.json:\n{\n \"version\": \"2.0.0\",\n \"tasks\": [\n {\n \"type\": \"shell\",\n \"label\": \"C++ Compile\",\n \"command\": \"C:\\\\Program Files\\\\mingw-w64\\\\mingw64\\\\bin\\\\g++.exe\",\n \"args\": [\n \"-g\",\n \"${fileDirname}\\\\*.cpp\",\n \"-o\",\n \"${fileDirname}\\\\${fileBasenameNoExtension}.exe\"\n ],\n \"options\": {\n \"cwd\": \"${workspaceFolder}\"\n },\n \"problemMatcher\": [\n \"$gcc\"\n ],\n \"group\": {\n \"kind\": \"build\",\n \"isDefault\": true\n }\n },{\n \"type\": \"shell\",\n \"label\": \"WHATEVER LABEL YOU WANT HERE\",\n \"command\": \"${fileDirname}\\\\${fileBasenameNoExtension}.exe\",\n \"args\": [\n ],\n \"options\": {\n \"cwd\": \"${workspaceFolder}\"\n },\n \"problemMatcher\": [\n ],\n \"group\": {\n \"kind\": \"build\",\n \"isDefault\": true\n }\n }\n ]\n}\n\n"
] | [
0,
0
] | [] | [] | [
"vscode_tasks"
] | stackoverflow_0063820474_vscode_tasks.txt |
Q:
How to use CompletableFuture with api call
I am using Completable Future for the first time. I am not getting any errors, but want to understand if I am using it correctly.
CompletableFuture<HashMap<String, String>> completableFuture = new CompletableFuture<>();
for (int iterator = 0; iterator < 10; iterator = iterator + 1)) {
int finalIterator = iterator;
Executors.newCachedThreadPool().submit(() -> {
completableFuture.complete(getCountryInfo((listofnames , iterator)))));
return null;
});
completableFuture.get().entrySet().stream().map(id -> {
return countryInfo(id);
})
}
I am calling a rest endpoint in the service class getCountryInfo. I want to achieve parallel processing using CompletableFuture.
Thank you.
A:
Looks like you want to get information about each country in parallel.
Let's for clarity use countryCodes instead of listofnames. To make parallel calls for each country code and then merge them into a single map, you can use the CompletableFuture.supplyAsync method to create a CompletableFuture for fetching each country code. The supplyAsync method takes a Supplier as an argument, which is a function that returns a value without taking any arguments. In this case, the Supplier could be a lambda expression that calls the function that fetches information for a given country code.
Here is how to make parallel calls for each country code and then merge the results into a single map:
List<String> countryCodes = ... // List of country codes
Map<String, Info> countryInfoMap = new HashMap<>(); // Map to store the results
List<CompletableFuture<Info>> futures = new ArrayList<>();
for (String countryCode : countryCodes) {
CompletableFuture<Info> future = CompletableFuture.supplyAsync(() -> fetchInfoForCountry(countryCode));
futures.add(future);
}
CompletableFuture<Void> allFutures = CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]));
allFutures.join();
for (CompletableFuture<Info> future : futures) {
Info info = future.get();
countryInfoMap.put(info.getCountryCode(), info);
}
In this code the for loop iterates over the list of country codes, and each country code gets a CompletableFuture fetching the Info. Each CompletableFuture will be executed asynchronously in a separate thread, allowing you to make parallel calls for each country code.
After creating the CompletableFutures, use the CompletableFuture.allOf method to wait for all of them to complete. Then use another for loop to iterate over the CompletableFutures and get the result (Info object) of each one. At that point all results are available, so process them in any appropriate way (in this example we just put them into the countryInfoMap map using the country code as the key).
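A small refinement of the same approach (a sketch reusing the fetchInfoForCountry and Info names assumed above): future.get() throws checked exceptions, while join() throws an unchecked CompletionException, and passing your own Executor keeps these blocking calls off the common ForkJoinPool:
ExecutorService pool = Executors.newFixedThreadPool(8); // pool size is an assumption; tune it

List<CompletableFuture<Info>> futures = countryCodes.stream()
        .map(code -> CompletableFuture.supplyAsync(() -> fetchInfoForCountry(code), pool))
        .collect(Collectors.toList());

// join() blocks like get() but does not force a try/catch
Map<String, Info> countryInfoMap = futures.stream()
        .map(CompletableFuture::join)
        .collect(Collectors.toMap(Info::getCountryCode, info -> info));

pool.shutdown();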
| How to use CompletableFuture with api call | I am using Completable Future for the first time. I am not getting any errors, but want to understand if I am using it correctly.
CompletableFuture<HashMap<String, String>> completableFuture = new CompletableFuture<>();
for (int iterator = 0; iterator < 10; iterator = iterator + 1)) {
int finalIterator = iterator;
Executors.newCachedThreadPool().submit(() -> {
completableFuture.complete(getCountryInfo((listofnames , iterator)))));
return null;
});
completableFuture.get().entrySet().stream().map(id -> {
return countryInfo(id);
})
}
I am calling a rest endpoint in the service class getCountryInfo. I want to achieve parallel processing using CompletableFuture.
Thank you.
| [
"Looks like you waht to get information about each country in parallel.\nLet's for clarity use countryCodes instead of listofnames. To make parallel calls for each country code and then merge them into a single map, you can use the CompletableFuture.supplyAsync method to create a CompletableFuture for fetching each country code. The supplyAsync method takes a Supplier as an argument, which is a function that returns a value without taking any arguments. In this case, the Supplier could be a lambda expression that calls the function that fetches information for a given country code.\nHere is how to make parallel calls for each country code and then merge the results into a single map:\nList<String> countryCodes = ... // List of country codes\nMap<String, Info> countryInfoMap = new HashMap<>(); // Map to store the results\n\nList<CompletableFuture<Info>> futures = new ArrayList<>();\n\nfor (String countryCode : countryCodes) {\n CompletableFuture<Info> future = CompletableFuture.supplyAsync(() -> fetchInfoForCountry(countryCode));\n futures.add(future);\n}\n\nCompletableFuture<Void> allFutures = CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]));\nallFutures.join();\n\nfor (CompletableFuture<Info> future : futures) {\n Info info = future.get();\n countryInfoMap.put(info.getCountryCode(), info);\n}\n\nIn this code the for loop iterates over the list of country codes, and each country code gets a CompletableFuture fetching the Info. Each CompletableFuture will be executed asynchronously in a separate thread, allowing to make parallel calls for each country code.\nAfter creating the CompletableFutures use CompletableFuture.allOf method to wait for all of the CompletableFutures to complete. Then using another for loop to iterate over the CompletableFutures and get the result (Info object) of each one. In this loop all results are available, so process them in any appropriate way (for this example we just put them into countryInfoMap map using the country code as the key.\n"
] | [
0
] | [] | [] | [
"api",
"completable_future",
"java",
"stream"
] | stackoverflow_0074651220_api_completable_future_java_stream.txt |
Q:
Get Scorm test result from inside an iframe (same domain)
My company is getting a scorm test from another company. (scorm schemaversion 1.2)
We have embedded the test in an iframe like this:
<html>
</head>
<iframe src="exam/scormcontent/index.html" name="course">
</iframe>
</html>
This is the test folder structure:
I am new to this SCORM solution. What we are trying to do is get the final result of the SCORM test (student passed/failed) in the parent HTML page.
The HTML page and the SCORM content are planned to be hosted on the same domain.
P.S.: The entire project involves a React app where, at some stage, the user is supposed to take the SCORM test, and he will only be allowed to continue if he passed the test. I am not sure if our plan to use an iframe is what we should do. I would love to learn if there is a better option.
A:
I have found a way to do it based on this:
https://github.com/hershkoy/react_scrom
The idea is to inject javascript code into the iframe (requires that the iframe and the parent are on the same domain).
The injected JavaScript code listens to the button click events, and sends a postMessage event to the parent when it detects that the course is completed.
<div id="result"></div>
<input id="btn" type="button" value="Go to course" name="btnOpenPopup" onClick="OpenNewWindow()" />
<iframe style="display:none;" id="myiframe" src="http://localhost/training/content" name="course" frameborder="0" style="overflow:hidden;height:100%;width:100%" height="100%" width="100%"></iframe>
<script type="text/javascript">
const iframe = document.getElementById('myiframe');
const iframeWin = iframe.contentWindow || iframe;
const iframeDoc = iframe.contentDocument || iframeWin.document;
function OpenNewWindow() {
iframe.style.display="block";
document.getElementById('result').innerHTML = "";
document.getElementById('btn').style.display="none";
}
function injectThis() {
//alert("hi!");
document.addEventListener('click', (event) => {
console.log("click!");
let chk_condition = event &&
event.target &&
event.target.href &&
event.target.href.includes("exam_completed");
if (chk_condition) {
event.preventDefault();
event.stopPropagation();
window.parent.postMessage({type: 'course:completed'}, '*');
//window.close();
};
});
};
window.addEventListener('message', event => {
// IMPORTANT: check the origin of the data!
if ( true /*event.origin.startsWith('http://localhost:3002')*/) {
// The data was sent from your site.
// Data sent with postMessage is stored in event.data:
console.log(event.data);
if (event.data.type=="course:completed"){
iframe.style.display="none";
document.getElementById('result').innerHTML = "TEST PASSED!";
};
} else {
// The data was NOT sent from your site!
// Be careful! Do not use it. This else branch is
// here just for clarity, you usually shouldn't need it.
return;
}
});
var script = iframeDoc.createElement("script");
script.append('window.onload = ' + injectThis.toString() + ';');
iframeDoc.documentElement.appendChild(script);
</script>
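For reference, a more standard route (a sketch of my own, not part of the answer above): SCORM 1.2 content looks up a window.API object in its parent windows and reports results through it, so the parent page can act as a minimal LMS and read the status directly. All handler bodies below are placeholder assumptions:
<script>
// Minimal SCORM 1.2 API stub exposed by the parent page (sketch only)
window.API = {
  LMSInitialize: function () { return "true"; },
  LMSFinish: function () { return "true"; },
  LMSGetValue: function (key) { return ""; },
  LMSSetValue: function (key, value) {
    // The package writes its result here, e.g. "passed" or "failed"
    if (key === "cmi.core.lesson_status" && value === "passed") {
      document.getElementById('result').innerHTML = "TEST PASSED!";
    }
    return "true";
  },
  LMSCommit: function () { return "true"; },
  LMSGetLastError: function () { return "0"; },
  LMSGetErrorString: function (code) { return ""; },
  LMSGetDiagnostic: function (code) { return ""; }
};
</script>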
| Get Scorm test result from inside an iframe (same domain) | My company is getting a scorm test from another company. (scorm schemaversion 1.2)
We have embedded the test in an iframe like this:
<html>
</head>
<iframe src="exam/scormcontent/index.html" name="course">
</iframe>
</html>
This is the test folder structure:
I am new to this SCORM solution. What we are trying to do is get the final result of the SCORM test (student passed/failed) in the parent HTML page.
The HTML page and the SCORM content are planned to be hosted on the same domain.
P.S.: The entire project involves a React app where, at some stage, the user is supposed to take the SCORM test, and he will only be allowed to continue if he passed the test. I am not sure if our plan to use an iframe is what we should do. I would love to learn if there is a better option.
| [
"I have found a way to do it based on this:\nhttps://github.com/hershkoy/react_scrom\nThe idea is to inject javascript code into the iframe (requires that the iframe and the parent are on the same domain).\nThe injected javascript code is listening to the button click events, and send a postMessage event to the parent when detects that the course is completed.\n <div id=\"result\"></div>\n <input id=\"btn\" type=\"button\" value=\"Go to course\" name=\"btnOpenPopup\" onClick=\"OpenNewWindow()\" /> \n <iframe style=\"display:none;\" id=\"myiframe\" src=\"http://localhost/training/content\" name=\"course\" frameborder=\"0\" style=\"overflow:hidden;height:100%;width:100%\" height=\"100%\" width=\"100%\"></iframe>\n\n <script type=\"text/javascript\"> \n\n const iframe = document.getElementById('myiframe');\n const iframeWin = iframe.contentWindow || iframe;\n const iframeDoc = iframe.contentDocument || iframeWin.document; \n\n function OpenNewWindow() { \n iframe.style.display=\"block\";\n document.getElementById('result').innerHTML = \"\";\n document.getElementById('btn').style.display=\"none\";\n } \n\n function injectThis() {\n //alert(\"hi!\");\n document.addEventListener('click', (event) => {\n console.log(\"click!\");\n\n let chk_condition = event && \n event.target && \n event.target.href && \n event.target.href.includes(\"exam_completed\");\n\n if (chk_condition) {\n event.preventDefault();\n event.stopPropagation();\n window.parent.postMessage({type: 'course:completed'}, '*');\n //window.close();\n };\n }); \n };\n\n window.addEventListener('message', event => {\n // IMPORTANT: check the origin of the data! \n if ( true /*event.origin.startsWith('http://localhost:3002')*/) {\n // The data was sent from your site.\n // Data sent with postMessage is stored in event.data:\n console.log(event.data);\n if (event.data.type==\"course:completed\"){\n iframe.style.display=\"none\";\n document.getElementById('result').innerHTML = \"TEST PASSED!\";\n };\n\n } else {\n // The data was NOT sent from your site! \n // Be careful! Do not use it. This else branch is\n // here just for clarity, you usually shouldn't need it.\n return;\n }\n });\n\n\n var script = iframeDoc.createElement(\"script\");\n\n script.append('window.onload = ' + injectThis.toString() + ';');\n iframeDoc.documentElement.appendChild(script);\n \n </script>\n\n"
] | [
0
] | [] | [] | [
"scorm"
] | stackoverflow_0074602495_scorm.txt |
Q:
Apache Shiro - How to force logout on all devices after password reset with rememberMe active
I'm using Apache Shiro with rememberMe active. The rememberMe token is saved in a cookie. I want to force all devices logged in with the same username to log out after a password reset. I managed to invalidate all sessions of the same user; however, the rememberMe token saved on each device always creates a new valid session. Thus the other devices can still access restricted data.
This is how I invalidate the sessions:
DefaultSecurityManager securityManager = (DefaultSecurityManager) SecurityUtils.getSecurityManager();
DefaultSessionManager sessionManager = (DefaultSessionManager) securityManager.getSessionManager();
Collection<Session> activeSessions = sessionManager.getSessionDAO().getActiveSessions();
for (Session session : activeSessions) {
Subject subject = new Subject.Builder().sessionId(session.getId()).buildSubject();
if (theUsernameChangingThePassword.equals(subject.getPrincipal().toString())) {
subject.logout();
}
}
Is there a way to invalidate the rememberMe token on a per-username/principal basis? How do you guys handle this?
A:
Using the rememberMe feature
To force a logout on all devices after a password reset with the rememberMe feature active in Apache Shiro, you can do the following:
In your Apache Shiro configuration, set the rememberMeCookie.maxAge property to a negative value. This will cause the rememberMe cookie to expire immediately, effectively logging out the user on all devices.
After the password reset is complete, set the rememberMeCookie.maxAge property back to a positive value (e.g. 604800 seconds, which is one week). This will enable the rememberMe feature again, but the user will be required to login again on all devices.
For example, your Apache Shiro configuration might look something like this:
[main]
# Other Apache Shiro settings
rememberMeCookie = org.apache.shiro.web.servlet.SimpleCookie
rememberMeCookie.name = rememberMe
rememberMeCookie.httpOnly = true
rememberMeCookie.maxAge = -1
Then, after the password reset is complete, you could set the rememberMeCookie.maxAge property back to a positive value like this:
# After password reset
rememberMeCookie.maxAge = 604800
This will cause the rememberMe cookie to expire immediately, logging out the user on all devices, and then enable the rememberMe feature again with a one-week expiration time. This will ensure that the user is required to login again on all devices after the password reset.
Note that this approach is only effective if the user has the rememberMe feature enabled on all devices. If the user has the rememberMe feature enabled on only some devices, those devices will remain logged in after the password reset, and the user will need to manually log out on those devices. Additionally, this approach will not work if the user has disabled cookies in their browser, as the rememberMe cookie will not be set and will not be able to be expired.
using a session store
When you are using a session store, you can just purge all sessions there.
Consider this section (session storage) from the documentation: https://shiro.apache.org/session-management.html#SessionManagement-SessionManager-Storage
So, if you defined your own session storage (e.g. a database, a hazelcast cluster, etc.), you can just empty the table there.
when using JWT
... you better have an expiration date set. Or just switch the secret keys, so users are forced to re-login.
when not using any of those features
The default session manager will use an in-memory implementation and does not use clustering. Just restart your application
when not using sessions at all
... no logout is required, as users will need to authenticate on every request.
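Another pattern worth considering (my own sketch, not from the answers above): store a per-user credentials version, bump it on password reset, and reject remembered identities whose version is stale. The VersionSource interface below is an assumption standing in for your application code, not Shiro API:
import java.io.IOException;
import javax.servlet.*;
import org.apache.shiro.SecurityUtils;
import org.apache.shiro.subject.Subject;

public class StaleRememberMeFilter implements Filter {

    // Looks up credential versions; wire in your own implementation (assumption)
    public interface VersionSource {
        long currentVersionFor(String username);
        long rememberedVersionFor(Subject subject);
    }

    private final VersionSource versions;

    public StaleRememberMeFilter(VersionSource versions) {
        this.versions = versions;
    }

    @Override public void init(FilterConfig cfg) {}
    @Override public void destroy() {}

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        Subject subject = SecurityUtils.getSubject();
        // Remembered identities bypass authentication, so verify them here
        if (subject.isRemembered()) {
            String username = String.valueOf(subject.getPrincipal());
            if (versions.rememberedVersionFor(subject) != versions.currentVersionFor(username)) {
                subject.logout(); // also removes the rememberMe cookie
            }
        }
        chain.doFilter(req, res);
    }
}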
] | Apache Shiro - How to force logout on all devices after password reset with rememberMe active | I'm using Apache Shiro with rememberMe active. The rememberMe token is saved in a cookie. I want to force all devices logged in with the same username to log out after a password reset. I managed to invalidate all sessions of the same user; however, the rememberMe token saved on each device always creates a new valid session. Thus the other devices can still access restricted data.
This is how I invalidate the sessions:
DefaultSecurityManager securityManager = (DefaultSecurityManager) SecurityUtils.getSecurityManager();
DefaultSessionManager sessionManager = (DefaultSessionManager) securityManager.getSessionManager();
Collection<Session> activeSessions = sessionManager.getSessionDAO().getActiveSessions();
for (Session session : activeSessions) {
Subject subject = new Subject.Builder().sessionId(session.getId()).buildSubject();
if (theUsernameChangingThePassword.equals(subject.getPrincipal().toString())) {
subject.logout();
}
}
Is there a way to invalidate the rememberMe token on a per-username/principal basis? How do you guys handle this?
| [
"Using the rememberMe feature\nTo force a logout on all devices after a password reset with the rememberMe feature active in Apache Shiro, you can do the following:\nIn your Apache Shiro configuration, set the rememberMeCookie.maxAge property to a negative value. This will cause the rememberMe cookie to expire immediately, effectively logging out the user on all devices.\nAfter the password reset is complete, set the rememberMeCookie.maxAge property back to a positive value (e.g. 604800 seconds, which is one week). This will enable the rememberMe feature again, but the user will be required to login again on all devices.\nFor example, your Apache Shiro configuration might look something like this:\n[main]\n# Other Apache Shiro settings\n\nrememberMeCookie = org.apache.shiro.web.servlet.SimpleCookie\nrememberMeCookie.name = rememberMe\nrememberMeCookie.httpOnly = true\nrememberMeCookie.maxAge = -1\n\nThen, after the password reset is complete, you could set the rememberMeCookie.maxAge property back to a positive value like this:\n# After password reset\nrememberMeCookie.maxAge = 604800\n\nThis will cause the rememberMe cookie to expire immediately, logging out the user on all devices, and then enable the rememberMe feature again with a one-week expiration time. This will ensure that the user is required to login again on all devices after the password reset.\nNote that this approach is only effective if the user has the rememberMe feature enabled on all devices. If the user has the rememberMe feature enabled on only some devices, those devices will remain logged in after the password reset, and the user will need to manually log out on those devices. Additionally, this approach will not work if the user has disabled cookies in their browser, as the rememberMe cookie will not be set and will not be able to be expired.\nusing a session store\nWhen you are using a session store, you can just purge all sessions there.\nConsider this section (session storage) from the documentation: https://shiro.apache.org/session-management.html#SessionManagement-SessionManager-Storage\nSo, if you defined your own session storage (e.g. a database, a hazelcast cluster, etc.), you can just empty the table there.\nwhen using JWT\n... you better have an expiration date set. Or just switch the secret keys, so users are forced to re-login.\nwhen not using any of those features\nThe default session manager will use an in-memory implementation and does not use clustering. Just restart your application\nwhen not using sessions at all\n... no logout is required, as users will need to authenticate on every request.\n"
] | [
0
] | [] | [] | [
"remember_me",
"shiro"
] | stackoverflow_0074178615_remember_me_shiro.txt |
Q:
Why is there different output in the for-loop?
Linux bash: why do the two shell scripts below produce different results?
[root@yumserver ~]# data="a,b,c";IFS=",";for i in $data;do echo $i;done
a
b
c
[root@yumserver ~]# IFS=",";for i in a,b,c;do echo $i;done
a b c
Expected output: the second script should also output:
a
b
c
I now understand what @M.NejatAydin means. Thanks also to @EdMorton and @HaimCohen!
[root@k8smaster01 ~]# set -x;data="a,b,c";IFS=",";echo $data;echo "$data";for i in $data;do echo $i;done
+ data=a,b,c
+ IFS=,
+ echo a b c
a b c
+ echo a,b,c
a,b,c
+ for i in '$data'
+ echo a
a
+ for i in '$data'
+ echo b
b
+ for i in '$data'
+ echo c
c
[root@k8smaster01 ~]# IFS=",";for i in a,b,c;do echo $i;done
+ IFS=,
+ for i in a,b,c
+ echo a b c
a b c
A:
@HaimCohen explained in detail why you get a different result with those two approaches, which is what you asked. His answer is correct; it should get upvoted and accepted.
Just a trivial addition from my side: you can, however, easily adapt your second approach if you define the variable on the fly:
IFS=",";for i in ${var="a,b,c"};do echo $i;done
A:
Word splitting is performed on the results of unquoted expansions (specifically, parameter expansions, command substitutions, and arithmetic expansions, with a few exceptions which are not relevant here). The literal string a,b,c in the
second for loop is not an expansion at all. Thus, word splitting is not performed on that literal string. But note that, in the second example, word splitting is still performed on $i (an unquoted expansion) in the command echo $i.
It seems the point of confusion is where and when the IFS is used. It is used in the word splitting phase following an (unquoted) expansion. It is not used when the shell reads its input and breaks the input into words, which is an earlier phase.
Note: IFS is also used in other contexts (eg, by the read builtin command) which are not relevant to this question.
A:
The difference between the two scripts is how the input data is provided to the for loop. In the first script, the input data is stored in a variable named "data" and is passed to the for loop using the $data syntax. In the second script, the input data is directly provided to the for loop using the "a,b,c" syntax.
When using the $data syntax, the IFS (Internal Field Separator) is applied to the input data, splitting it into separate items based on the specified delimiter (in this case, a comma). This allows the for loop to iterate over each individual item in the input data, resulting in the output of "a", "b", and "c" on separate lines.
In the second script, however, the IFS is not applied to the input data because it is not passed through a variable. As a result, the for loop treats the input data as a single item and outputs it as one string, resulting in the output of "a b c" on the same line.
] | Why is there different output in the for-loop? | Linux bash: why do the two shell scripts below produce different results?
[root@yumserver ~]# data="a,b,c";IFS=",";for i in $data;do echo $i;done
a
b
c
[root@yumserver ~]# IFS=",";for i in a,b,c;do echo $i;done
a b c
Expected output: the second script should also output:
a
b
c
I now understand what @M.NejatAydin means. Thanks also to @EdMorton and @HaimCohen!
[root@k8smaster01 ~]# set -x;data="a,b,c";IFS=",";echo $data;echo "$data";for i in $data;do echo $i;done
+ data=a,b,c
+ IFS=,
+ echo a b c
a b c
+ echo a,b,c
a,b,c
+ for i in '$data'
+ echo a
a
+ for i in '$data'
+ echo b
b
+ for i in '$data'
+ echo c
c
[root@k8smaster01 ~]# IFS=",";for i in a,b,c;do echo $i;done
+ IFS=,
+ for i in a,b,c
+ echo a b c
a b c
| [
"@HaimCohen explained in detail why you get a different result with those two approaches. Which is what you asked. His answer is correct, it should get upvoted and accepted.\nJust a trivial addition from my side: you can easily modify the second of your approaches however if you define the variable on the fly:\nIFS=\",\";for i in ${var=\"a,b,c\"};do echo $i;done\n\n",
"Word splitting is performed on the results of unquoted expansions (specifically, parameter expansions, command substitutions, and arithmetic expansions, with a few exceptions which are not relevant here). The literal string a,b,c in the\nsecond for loop is not an expansion at all. Thus, word splitting is not performed on that literal string. But note that, in the second example, word splitting is still performed on $i (an unquoted expansion) in the command echo $i.\nIt seems the point of confusion is where and when the IFS is used. It is used in the word splitting phase following an (unquoted) expansion. It is not used when the shell reads its input and breaks the input into words, which is an earlier phase.\nNote: IFS is also used in other contexts (eg, by the read builtin command) which are not relevant to this question.\n",
"The difference between the two scripts is how the input data is provided to the for loop. In the first script, the input data is stored in a variable named \"data\" and is passed to the for loop using the $data syntax. In the second script, the input data is directly provided to the for loop using the \"a,b,c\" syntax.\nWhen using the $data syntax, the IFS (Internal Field Separator) is applied to the input data, splitting it into separate items based on the specified delimiter (in this case, a comma). This allows the for loop to iterate over each individual item in the input data, resulting in the output of \"a\", \"b\", and \"c\" on separate lines.\nIn the second script, however, the IFS is not applied to the input data because it is not passed through a variable. As a result, the for loop treats the input data as a single item and outputs it as one string, resulting in the output of \"a b c\" on the same line.\n"
] | [
2,
2,
1
] | [] | [] | [
"bash",
"double_quotes",
"for_loop",
"ifs"
] | stackoverflow_0074674297_bash_double_quotes_for_loop_ifs.txt |
Q:
Why does Android 8.1 not see resources?
I updated my phone to Android 8.1 and launched my app. I noticed a strange error: "resources not found". On Android 7.1 everything works well. I've already tried to clean and rebuild the project. I think the code below should be enough to find the error.
SplashActivity style
<style name="SplashTheme" parent="Theme.AppCompat.Light.NoActionBar">
<item name="colorAccent">@color/colorAccent</item>
<item name="android:windowBackground">@drawable/background_splash</item>
</style>
background_splash.xml
<?xml version="1.0" encoding="utf-8"?>
<layer-list xmlns:android="http://schemas.android.com/apk/res/android">
<item
android:drawable="@color/colorPrimaryDark"/>
<item>
<bitmap
android:gravity="center"
android:src="@mipmap/ic_launcher"/>
</item>
</layer-list>
Manifest
<activity
android:name=".splash.SplashActivity"
android:theme="@style/SplashTheme">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
<intent-filter>
<action android:name="android.intent.action.VIEW" />
<category android:name="android.intent.category.DEFAULT" />
<category android:name="android.intent.category.BROWSABLE" />
<data
android:scheme="http"
android:host="music.pl"
android:pathPrefix="/music" />
</intent-filter>
</activity>
Error
java.lang.RuntimeException: Unable to start activity ComponentInfo{com.linkplayer.linkplayer/com.linkplayer.linkplayer.splash.SplashActivity}: android.content.res.Resources$NotFoundException: Drawable com.linkplayer.linkplayer:drawable/background_splash with resource ID #0x7f07005e
Thanks in advance for the help. Have a nice evening.
A:
The error was in this part of xml:
<item>
<bitmap
android:gravity="center"
android:src="@mipmap/ic_launcher"/>
</item>
I don't know why but Android 8.0+ can't see mipmaps. I had to use file from "drawable" directory:
<item>
<bitmap
android:gravity="center"
android:src="@drawable/my_drawable"/>
</item>
Hope it will help you.
A:
Still up to date (Android 8.0.1).
You can try this in build.gradle:
android {
    bundle {
        density {
            enableSplit = false
        }
        abi {
            enableSplit = false
        }
        language {
            enableSplit = false
        }
    }
    ...
}
] | Why does Android 8.1 not see resources? | I updated my phone to Android 8.1 and launched my app. I noticed a strange error: "resources not found". On Android 7.1 everything works well. I've already tried to clean and rebuild the project. I think the code below should be enough to find the error.
SplashActivity style
<style name="SplashTheme" parent="Theme.AppCompat.Light.NoActionBar">
<item name="colorAccent">@color/colorAccent</item>
<item name="android:windowBackground">@drawable/background_splash</item>
</style>
background_splash.xml
<?xml version="1.0" encoding="utf-8"?>
<layer-list xmlns:android="http://schemas.android.com/apk/res/android">
<item
android:drawable="@color/colorPrimaryDark"/>
<item>
<bitmap
android:gravity="center"
android:src="@mipmap/ic_launcher"/>
</item>
</layer-list>
Manifest
<activity
android:name=".splash.SplashActivity"
android:theme="@style/SplashTheme">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
<intent-filter>
<action android:name="android.intent.action.VIEW" />
<category android:name="android.intent.category.DEFAULT" />
<category android:name="android.intent.category.BROWSABLE" />
<data
android:scheme="http"
android:host="music.pl"
android:pathPrefix="/music" />
</intent-filter>
</activity>
Error
java.lang.RuntimeException: Unable to start activity ComponentInfo{com.linkplayer.linkplayer/com.linkplayer.linkplayer.splash.SplashActivity}: android.content.res.Resources$NotFoundException: Drawable com.linkplayer.linkplayer:drawable/background_splash with resource ID #0x7f07005e
Thanks in advance for the help. Have a nice evening.
| [
"The error was in this part of xml:\n<item>\n <bitmap\n android:gravity=\"center\"\n android:src=\"@mipmap/ic_launcher\"/>\n</item>\n\nI don't know why but Android 8.0+ can't see mipmaps. I had to use file from \"drawable\" directory:\n<item>\n <bitmap\n android:gravity=\"center\"\n android:src=\"@drawable/my_drawable\"/>\n</item>\n\nHope it will help you.\n",
"Still up to date (android 8.0.1)\nYou can try this. build.gradle\nandroid {\nbundle {\n density {\n enableSplit = false\n }\n\n abi {\n enableSplit = false\n }\n\n language {\n enableSplit = false\n }\n}\n\n...\n}\n"
] | [
0,
0
] | [] | [] | [
"android",
"android_8.1_oreo",
"splash_screen"
] | stackoverflow_0053841115_android_android_8.1_oreo_splash_screen.txt |
Q:
Stop Excel from converting a string to a number
I have a column in my CSV file that contains a string of numbers separated by commas. Excel keeps converting them to numbers even though I want to treat it as text.
Example:
470,1680 get converted to 4,701,680
However, I want it to stay as 470,1680
I tried to format the cells as text but that removes the original comma. How can I achieve this?
A:
Rename the .CSV file to a .TXT file. Open the file with Excel, and the text import wizard will pop up. Tell Excel that it's a delimited file and that a comma is the delimiter. Excel will then give you a screen that allows you to assign formats to each column. Select the text format for the column in question. Import and you're done!
To test this, I created the following .CSV file:
test1,"470,1680",does it work
test2,"120,3204",i don't know
When opening the CSV directly in Excel, I get the following:
test1 4,701,680 does it work
test2 1,203,204 i don't know
When opening using my method, I get this instead:
test1 470,1680 does it work
test2 120,3204 i don't know
Is this not the desired result?
A:
What I found that worked was this:
="12345678901349539725", "CSV value2", "Another value"
The key here is that this value is a string containing ="{Number}". Somehow, Excel respects that pattern.
Perhaps it could be better written as
"="12345678901349539725""
But don't go crazy with the quotes in your code.
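If the CSV is generated programmatically, the ="..." wrapper can be emitted like this (a sketch in Python; the file name and values are illustrative):
import csv

rows = [["test1", '="470,1680"', "does it work"]]
with open("out.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)  # writes: test1,"=""470,1680""",does it work
# Excel evaluates ="470,1680" to the literal text 470,1680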
A:
If you can manipulate the CSV file, put ' in front of each number.
A:
OK... so, the file is using carriage return + line feed characters to delineate the beginning of a new record. It also (for reasons I don't understand) has line feed characters within each record at random places - but there are no carriage returns.
To fix this, I opened the file with Notepad++, and did a find and replace with "Extended" search mode. I replaced \n with nothing. The data now opens in Excel properly using my earlier recommended solution.
You can, of course, use any other program (not just Notepad++) to make this character substitution. Does that help?
A:
Try this, where DocNumber is actually text:
Select (CHAR(10)+DocNumber) AS DocNumber
By adding an invisible text character, this fools Excel into keeping the value as a text string.
You can use CHAR(32) too.
A:
The problem is about Excel thousands separator. My quick solution is simple and worked for me.
Go to Excel-->File-->Options-->Advanced
Find "Thousands separator". Probably your separator is ",".
Change the separator like "x", etc.
After you are done, I recommend to switch the separator as "," back.
| Stop Excel from converting a string to a number | I have a column in my CSV file that contains a string of numbers separated by commas. Excel keeps converting them to numbers even though I want to treat it as text.
Example:
470,1680 get converted to 4,701,680
However, I want it to stay as 470,1680
I tried to format the cells as text but that removes the original comma. How can I achieve this?
| [
"Rename the .CSV file to a .TXT file. Open the file with Excel, and the text import wizard will pop up. Tell Excel that it's a delimited file and that a comma is the delimiter. Excel will then give you a screen that allows you to assign formats to each column. Select the text format for the column in question. Import and you're done!\nTo test this, I created the following .CSV file:\ntest1,\"470,1680\",does it work\ntest2,\"120,3204\",i don't know\n\nWhen opening the CSV directly in Excel, I get the following:\ntest1 4,701,680 does it work\ntest2 1,203,204 i don't know\n\nWhen opening using my method, I get this instead:\ntest1 470,1680 does it work\ntest2 120,3204 i don't know\n\nIs this not the desired result?\n",
"What I found that worked was this:\n=\"12345678901349539725\", \"CSV value2\", \"Another value\"\n\nThe key here is that this value is a string containing =\"{Number}\". Somehow, Excel respects that pattern.\nPerhaps it could be better written as \n\"=\"12345678901349539725\"\" \n\nBut don't go crazy with the quotes in your code.\n",
"If you can manipulate CVS file put ' in front of each number\n",
"OK... so, the file is using carriage return + line feed characters to delineate the beginning of a new record. It also (for reasons I don't understand) has line feed characters within each record at random places - but there are no carriage returns.\nTo fix this, I opened the file with Notepad++, and did a find and replace with \"Extended\" search mode. I replaced \\n with nothing. The data now opens in Excel properly using my earlier recommended solution.\nYou can, of course, use any other program (not just Notepad++) to make this character substitution. Does that help?\n",
"Try this where DocNumber is actually text :\nSelect (CHAR(10)+DocNumber) AS DocNumber\nThat is by adding an invisible text char it fools Excel into making it a Text string.\nYou can use CHAR(32) too.\n",
"The problem is about Excel thousands separator. My quick solution is simple and worked for me.\n\nGo to Excel-->File-->Options-->Advanced\nFind \"Thousands separator\". Probably your separator is \",\".\nChange the separator like \"x\", etc.\n\nAfter you are done, I recommend to switch the separator as \",\" back.\n"
] | [
7,
5,
2,
0,
0,
0
] | [] | [] | [
"csv",
"excel",
"navicat"
] | stackoverflow_0020908973_csv_excel_navicat.txt |
Q:
How can I build the Android kernel for Pixel 4?
I am trying to build the Android kernel for Pixel 4.
export ARCH=arm64
export CROSS_COMPILE=/home/xxx/AOSP/prebuilts/gcc/linux-x86/aarch64/aarch64-linux-android-4.9/bin/aarch64-linux-android-
make floral_defconfig (not sure)
make
I failed with an error:
arch/arm64/Makefile:49: LSE atomics not supported by binutils
arch/arm64/Makefile:57: Detected assembler with broken .inst; disassembly will be unreliable
arch/arm64/Makefile:83: *** CROSS_COMPILE_ARM32 not defined or empty, the compat vDSO will not be built. Stop.
I tried export CC_FOR_BUILD=clang, but it didn't work.
How should I solve this problem?
A:
The error message above is quite self-explanatory: you haven't defined certain variables.
Try to follow these guides, very helpful:
https://github.com/nathanchance/android-kernel-clang
https://forum.xda-developers.com/t/reference-how-to-compile-an-android-kernel.3627297/
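Concretely, the final make stop comes from CROSS_COMPILE_ARM32 being unset (the Makefile needs it to build the 32-bit compat vDSO). A sketch of the missing exports, assuming the standard AOSP prebuilt paths under your checkout (adjust them to your tree):
export CROSS_COMPILE_ARM32=/home/xxx/AOSP/prebuilts/gcc/linux-x86/arm/arm-linux-androideabi-4.9/bin/arm-linux-androideabi-
make ARCH=arm64 floral_defconfig
# CC=clang matches how Pixel 4 (floral) kernels are usually built;
# some kernel versions also want CLANG_TRIPLE=aarch64-linux-gnu-
make ARCH=arm64 CC=clang -j"$(nproc)"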
] | How can I build the Android kernel for Pixel 4? | I am trying to build the Android kernel for Pixel 4.
export ARCH=arm64
export CROSS_COMPILE=/home/xxx/AOSP/prebuilts/gcc/linux-x86/aarch64/aarch64-linux-android-4.9/bin/aarch64-linux-android-
make floral_defconfig (not sure)
make
I failed with an error:
arch/arm64/Makefile:49: LSE atomics not supported by binutils
arch/arm64/Makefile:57: Detected assembler with broken .inst; disassembly will be unreliable
arch/arm64/Makefile:83: *** CROSS_COMPILE_ARM32 not defined or empty, the compat vDSO will not be built. Stop.
I tried export CC_FOR_BUILD=clang, but it didn't work.
How should I solve this problem?
| [
"The error message is quite self explanatory above, you haven't defined certain variables.\nTry to follow these guides, very helpful:\nhttps://github.com/nathanchance/android-kernel-clang\nhttps://forum.xda-developers.com/t/reference-how-to-compile-an-android-kernel.3627297/\n"
] | [
0
] | [] | [] | [
"android_kernel",
"android_source"
] | stackoverflow_0074343968_android_kernel_android_source.txt |
Q:
Why do right and bottom attributes not work on an absolutely positioned image?
I have noticed that I always need height & width attributes for absolutely positioned images, even if I set the left, right, bottom and top attributes.
See example code below or here: https://codepen.io/Rechi/pen/mdKQGym
The div.descendant follows the expected behavior, image does not.
Is the reason behind that the element replacement mentioned here:
absolute positioned text area does not respect right and bottom properties?
.ancestor {
width: 200px;
height: 200px;
background: red;
position: relative;
}
.ancestor .descendant {
background: green;
position: absolute;
left: 5px;
right: 5px;
top: 5px;
bottom: 5px;
box-sizing: border-box;
border: 2px solid blue;
}
.ancestor img.descendant {
width: 100%;
height: 100%;
object-fit: cover;
}
<div class="ancestor">
<div class="descendant"></div>
<img src="https://cdn.pixabay.com/photo/2013/07/12/17/47/test-pattern-152459_960_720.png" alt="" class="descendant">
</div>
A:
Declare your CSS classes without nesting them.
Add a space between the composed classes, i.e. img .descendant.
.ancestor {
width: 200px;
height: 200px;
background: red;
position: relative;
}
.descendant {
background: green;
position: absolute;
left: 5px;
right: 5px;
top: 5px;
bottom: 5px;
box-sizing: border-box;
border: 2px solid blue;
}
img .descendant {
width: 100%;
height: 100%;
object-fit: cover;
}
<div class="ancestor">
<div class="descendant"></div>
<img src="https://cdn.pixabay.com/photo/2013/07/12/17/47/test-pattern-152459_960_720.png" alt="" class="descendant">
</div>
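As background for the question's title (my reading of CSS 2.1, not part of the answer above): img is a replaced element with intrinsic dimensions. For absolutely positioned replaced elements, width: auto resolves to the intrinsic width (CSS 2.1 §10.3.8), and once left, right and width are over-constrained, right is ignored; §10.6.5 does the same for height and bottom. That is why the question's CSS needs the explicit sizes:
/* Without explicit sizes the img keeps its intrinsic dimensions,
   and the right/bottom offsets are dropped as over-constrained. */
.ancestor img.descendant {
  width: 100%;
  height: 100%;
  object-fit: cover;
}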
] | Why do right and bottom attributes not work on an absolutely positioned image? | I have noticed that I always need height & width attributes for absolutely positioned images, even if I set the left, right, bottom and top attributes.
See example code below or here: https://codepen.io/Rechi/pen/mdKQGym
The div.descendant follows the expected behavior, image does not.
Is the reason behind that the element replacement mentioned here:
absolute positioned text area does not respect right and bottom properties?
.ancestor {
width: 200px;
height: 200px;
background: red;
position: relative;
}
.ancestor .descendant {
background: green;
position: absolute;
left: 5px;
right: 5px;
top: 5px;
bottom: 5px;
box-sizing: border-box;
border: 2px solid blue;
}
.ancestor img.descendant {
width: 100%;
height: 100%;
object-fit: cover;
}
<div class="ancestor">
<div class="descendant"></div>
<img src="https://cdn.pixabay.com/photo/2013/07/12/17/47/test-pattern-152459_960_720.png" alt="" class="descendant">
</div>
| [
"Declare your css classes without embedded.\nAdd space between composed classes as img .descendant.\n\n\n.ancestor {\n width: 200px;\n height: 200px;\n background: red;\n position: relative;\n}\n\n.descendant {\n background: green;\n position: absolute;\n left: 5px;\n right: 5px;\n top: 5px;\n bottom: 5px;\n box-sizing: border-box;\n border: 2px solid blue;\n}\n\nimg .descendant {\n width: 100%;\n height: 100%;\n object-fit: cover;\n}\n<div class=\"ancestor\">\n <div class=\"descendant\"></div>\n <img src=\"https://cdn.pixabay.com/photo/2013/07/12/17/47/test-pattern-152459_960_720.png\" alt=\"\" class=\"descendant\">\n</div>\n\n\n\n"
] | [
0
] | [] | [] | [
"absolute",
"css"
] | stackoverflow_0074674500_absolute_css.txt |
Q:
Flutter FirebaseFirestore multiple apps
I'm working on a Flutter project where I'm trying to connect to Firestore using a secondary Firebase app.
see Configure multiple projects.
I get the following error when trying to connect to db
secAppFirestore
.collection("yos")
:
"Unhandled Exception: [core/no-app] No Firebase App '[DEFAULT]' has
been created - call Firebase.initializeApp".
my code
void main() async {
// Avoid errors caused by flutter upgrade.
WidgetsFlutterBinding.ensureInitialized();
await Firebase.initializeApp(
name: "yostest2-xxxxx",
options: FirebaseOptions(
apiKey: "AIzaSy...",
appId: "1:68080...",
messagingSenderId: "",
projectId: "yostest2-xxxxx",
storageBucket: "yostest2-xxxxx.appspot.com",
databaseURL: 'https://yostest2-xxxxx.firebaseio.com',
),
);
runApp(const MyApp());
}
....
void _incrementCounter() async {
FirebaseApp secondaryApp = Firebase.app('yostest2-xxxxx');
FirebaseFirestore secAppFirestore =
FirebaseFirestore.instanceFor(app: secondaryApp);
await secAppFirestore
.collection("yos")
.add({"timestamp": FieldValue.serverTimestamp()});
setState(() {
_counter++;
});
}
A:
You need to call Firebase.initializeApp(); which will initialize the default app before you call Firebase.initializeApp(name: "yostest2-xxxxx",...) for your secondary app. Place it above your current init line and await it as well.
If you encounter issues on hot reload then you may also have to track whether initializeApp has already been called via iterating Firebase.apps[].
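A minimal sketch of that ordering, reusing the options from the question (the Firebase.apps guard is just one simple way to handle hot reload, not the only one):
void main() async {
  WidgetsFlutterBinding.ensureInitialized();
  // Initialize the default app first; it reads the platform config files.
  if (Firebase.apps.isEmpty) {
    await Firebase.initializeApp();
  }
  // Then initialize the named secondary app with explicit options
  // (values copied from the question; guard this call too if hot reload
  // re-runs main()).
  await Firebase.initializeApp(
    name: "yostest2-xxxxx",
    options: FirebaseOptions(
      apiKey: "AIzaSy...",
      appId: "1:68080...",
      messagingSenderId: "",
      projectId: "yostest2-xxxxx",
      storageBucket: "yostest2-xxxxx.appspot.com",
      databaseURL: 'https://yostest2-xxxxx.firebaseio.com',
    ),
  );
  runApp(const MyApp());
}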
A:
If however you'd like to use Firestore with a secondary Firebase App, use the instanceFor method:
FirebaseApp secondaryApp = Firebase.app('SecondaryApp');
FirebaseFirestore firestore = FirebaseFirestore.instanceFor(app: secondaryApp);
For more information, check this link:
https://firebase.flutter.dev/docs/firestore/usage/
| Flutter FirebaseFirestore multiple apps | I'm working on a Flutter project where I'm trying to connect to Firestore using a secondary Firebase app.
see Configure multiple projects.
I get the following error when trying to connect to db
secAppFirestore
.collection("yos")
:
"Unhandled Exception: [core/no-app] No Firebase App '[DEFAULT]' has
been created - call Firebase.initializeApp".
my code
void main() async {
// Avoid errors caused by flutter upgrade.
WidgetsFlutterBinding.ensureInitialized();
await Firebase.initializeApp(
name: "yostest2-xxxxx",
options: FirebaseOptions(
apiKey: "AIzaSy...",
appId: "1:68080...",
messagingSenderId: "",
projectId: "yostest2-xxxxx",
storageBucket: "yostest2-xxxxx.appspot.com",
databaseURL: 'https://yostest2-xxxxx.firebaseio.com',
),
);
runApp(const MyApp());
}
....
void _incrementCounter() async {
FirebaseApp secondaryApp = Firebase.app('yostest2-xxxxx');
FirebaseFirestore secAppFirestore =
FirebaseFirestore.instanceFor(app: secondaryApp);
await secAppFirestore
.collection("yos")
.add({"timestamp": FieldValue.serverTimestamp()});
setState(() {
_counter++;
});
}
| [
"You need to call Firebase.initializeApp(); which will initialize the default app before you call Firebase.initializeApp(name: \"yostest2-xxxxx\",...) for your secondary app. Place it above your current init line and await it as well.\nIf you encounter issues on hot reload then you may also have to track whether initializeApp has already been called via iterating Firebase.apps[].\n",
"If however you'd like to use Firestore with a secondary Firebase App, use the instanceFor method:\nFirebaseApp secondaryApp = Firebase.app('SecondaryApp');\nFirebaseFirestore firestore = FirebaseFirestore.instanceFor(app: secondaryApp);\nfor more information you check this link :\nhttps://firebase.flutter.dev/docs/firestore/usage/\n"
] | [
3,
0
] | [] | [] | [
"firebase",
"flutter",
"google_cloud_firestore"
] | stackoverflow_0069576516_firebase_flutter_google_cloud_firestore.txt |
Q:
Adding flutter Path to .zshrc causes error in iTerm with oh-my-zsh
I am going to add the path of Flutter to .zshrc permanently.
When I installed either iTerm or oh-my-zsh, it rewrote the .zshrc file. Now if I add export PATH=$HOME/coding/flutter/env/flutter/bin and run source ./.zshrc in iTerm, I get the following errors.
/Users/mine/.oh-my-zsh/oh-my-zsh.sh:56: command not found: mkdir
/Users/mine/.oh-my-zsh/oh-my-zsh.sh:117: command not found: rm
zrecompile:99: command not found: wc
zrecompile:135: command not found: mv
detect-clipboard:33: command not found: uname
nvm:7: command not found: tr
nvm:7: command not found: tr
How can I add other path to .zshrc ?
If I add the path, I can run flutter in the terminal, but many other commands are not found.
I tried adding the export to .zprofile and some other files, but I still get the same error.
A:
Let's say A='apple'.
If you assign A='banana', A is now 'banana'.
But if you expand A (which holds 'apple') before assigning 'banana', A will hold both: it becomes A=$A:banana.
In this case A will be apple:banana; this is what you need.
You needed to hold both, but what you did was A=banana.
I suggest putting this line below in your .zshrc.
export PATH=$PATH:$HOME/coding/flutter/env/flutter/bin
The line expands the current $PATH, which doesn't yet contain the Flutter path, before appending $HOME/coding/flutter/env/flutter/bin.
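To see the difference interactively (an illustrative zsh session, not output from the asker's machine):
% A='apple'
% A='banana'          # plain assignment overwrites: A is now only 'banana'
% A='apple'
% A="$A:banana"       # expand first, then assign: both values survive
% echo $A
apple:banana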
| Adding flutter Path to .zshrc causes error in iTerm with oh-my-zsh | I am going to add the path of Flutter to .zshrc permanently.
When I installed either iTerm or oh-my-zsh, it rewrote the .zshrc file. Now if I add export PATH=$HOME/coding/flutter/env/flutter/bin and run source ./.zshrc in iTerm, I get the following errors.
/Users/mine/.oh-my-zsh/oh-my-zsh.sh:56: command not found: mkdir
/Users/mine/.oh-my-zsh/oh-my-zsh.sh:117: command not found: rm
zrecompile:99: command not found: wc
zrecompile:135: command not found: mv
detect-clipboard:33: command not found: uname
nvm:7: command not found: tr
nvm:7: command not found: tr
How can I add other path to .zshrc ?
If I add the path, I can run flutter in the terminal, but many other commands are not found.
I tried adding the export to .zprofile and some other files, but I still get the same error.
| [
"let's say, A='apple'.\nif you assign A='banana', A is 'banana' now.\nBut If you expand the A (which has apple) before assign banana, A would hold both. It would be A=$A:banana.\nIn this case, A will apple:banana. this is what you need.\nYou needed to hold both. But what you did is A=banana\nI suggest putting this line below in your .zshrc.\nexport PATH=$PATH:$HOME/coding/flutter/env/flutter/bin\n\nThe line will expand the current $PATH, which doesn't have the flutter PATH, before assigning $HOME/coding/flutter/env/flutter/bin.\n"
] | [
0
] | [] | [] | [
"linux",
"oh_my_zsh",
"terminal",
"zshrc"
] | stackoverflow_0074630740_linux_oh_my_zsh_terminal_zshrc.txt |
Q:
How to pass selected, named arguments to Jinja2's include context?
Using Django templating engine I can include another partial template while setting a custom context using named arguments, like this:
{% include "list.html" with articles=articles_list1 only %}
{% include "list.html" with articles=articles_list2 only %}
As you may be supposing, articles_list1 and articles_list2 are two different lists, but I can reuse the very same list.html template which will be using the articles variable.
I'm trying to achieve the same thing using Jinja2, but I can't see what's the recommended way, as the with keyword is not supported.
A:
Jinja2 has an extension that enables the with keyword - it won't give you the same syntax as Django, and it may not work the way you anticipate but you could do this:
{% with articles=articles_list1 %}
{% include "list.html" %}
{% endwith %}
{% with articles=articles_list2 %}
{% include "list.html" %}
{% endwith %}
However, if list.html is basically just functioning as a way to create a list then you might want to change it to a macro instead - this will give you much more flexibility.
{% macro build_list(articles) %}
<ul>
{% for art in articles %}
<li>{{art}}</li>
{% endfor %}
</ul>
{% endmacro %}
{# And you call it thusly #}
{{ build_list(articles_list1) }}
{{ build_list(articles_list2) }}
To use this macro from another template, import it:
{% from "build_list_macro_def.html" import build_list %}
A:
This way you can pass multiple variables to a Jinja2 include statement (separate the variables with commas inside the with statement):
{% with var_1=123, var_2="value 2", var_3=500 %}
{% include "your_template.html" %}
{% endwith %}
A:
For readers in 2017+, Jinja as of 2.9 includes the with statement by default. No extension necessary.
http://jinja.pocoo.org/docs/2.9/templates/#with-statement
In older versions of Jinja (before 2.9) it was required to enable this feature with an extension. It’s now enabled by default.
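As a small illustration of the scoping this gives you (variables set inside the block are not visible outside it):
{% with foo = 42 %}
  {{ foo }}           {# prints 42 #}
{% endwith %}
{# foo is undefined again at this point #}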
A:
Updated 2021+
Included templates have access to the variables of the active context by default. For more details about context behavior of imports and includes, see Import Context Behavior.
From Jinja 2.2 onwards, you can mark an include with ignore missing; in which case Jinja will ignore the statement if the template to be included does not exist. When combined with with or without context, it must be placed before the context visibility statement. Here are some valid examples:
{% include "sidebar.html" ignore missing %}
{% include "sidebar.html" ignore missing with context %}
{% include "sidebar.html" ignore missing without context %}
A:
Another option, without plugins, is to use macros and include them from another file:
file macro.j2
{% macro my_macro(param) %}
{{ param }}
{% endmacro %}
file main.j2
{% from 'macro.j2' import my_macro %}
{{ my_macro(param) }}
| How to pass selected, named arguments to Jinja2's include context? | Using Django templating engine I can include another partial template while setting a custom context using named arguments, like this:
{% include "list.html" with articles=articles_list1 only %}
{% include "list.html" with articles=articles_list2 only %}
As you may be supposing, articles_list1 and articles_list2 are two different lists, but I can reuse the very same list.html template which will be using the articles variable.
I'm trying to achieve the same thing using Jinja2, but I can't see what's the recommended way, as the with keyword is not supported.
| [
"Jinja2 has an extension that enables the with keyword - it won't give you the same syntax as Django, and it may not work the way you anticipate but you could do this:\n{% with articles=articles_list1 %}\n {% include \"list.html\" %}\n{% endwith %}\n{% with articles=articles_list2 %}\n {% include \"list.html\" %}\n{% endwith %}\n\nHowever, if list.html is basically just functioning as a way to create a list then you might want to change it to a macro instead - this will give you much more flexibility.\n{% macro build_list(articles) %}\n <ul>\n {% for art in articles %}\n <li>{{art}}</li>\n {% endfor %}\n </ul>\n{% endmacro %}\n\n{# And you call it thusly #}\n{{ build_list(articles_list1) }}\n{{ build_list(articles_list2) }}\n\nTo use this macro from another template, import it:\n{% from \"build_list_macro_def.html\" import build_list %}\n\n",
"This way you can pass multiple variables to Jinja2 Include statement - (by splitting variables by comma inside With statement):\n {% with var_1=123, var_2=\"value 2\", var_3=500 %}\n {% include \"your_template.html\" %}\n {% endwith %}\n\n",
"For readers in 2017+, Jinja as of 2.9 includes the with statement by default. No extension necessary.\nhttp://jinja.pocoo.org/docs/2.9/templates/#with-statement\n\nIn older versions of Jinja (before 2.9) it was required to enable this feature with an extension. It’s now enabled by default.\n\n",
"Updated 2021+\nIncluded templates have access to the variables of the active context by default. For more details about context behavior of imports and includes, see Import Context Behavior.\nFrom Jinja 2.2 onwards, you can mark an include with ignore missing; in which case Jinja will ignore the statement if the template to be included does not exist. When combined with with or without context, it must be placed before the context visibility statement. Here are some valid examples:\n{% include \"sidebar.html\" ignore missing %}\n{% include \"sidebar.html\" ignore missing with context %}\n{% include \"sidebar.html\" ignore missing without context %}\n\n",
"Another option, without plugins, is to use macros and include them from another file:\nfile macro.j2\n{% macro my_macro(param) %}\n {{ param }}\n{% endmacro %}\n\nfile main.j2\n{% from 'macro.j2' import my_macro %}\n\n{{ my_macro(param) }}\n\n"
] | [
150,
71,
46,
4,
0
] | [] | [] | [
"jinja2",
"templates"
] | stackoverflow_0009404990_jinja2_templates.txt |
Q:
How to import changes in other branches in GIT?
I have created a branch called "multiple_fixes" off my Dev branch and have been working there for a couple weeks. While I was working on my branch, several other people have created branches after I created mine, and have since made pull requests into the Dev branch.
Now, I am ready to merge my changes into Dev, but I have many files that are down-level from the current Dev branch. If I do a Pull Request, I will undo all the other changes that people have checked in.
How do I pull the current changes of Dev into my branch, without over-writing my work?
A:
Doing a merge of two branches will not overwrite the changes of one branch with the changes of the other. Git will perform a three-way merge and try to consolidate the changes. This works in most cases, but it can also fail, especially if the same line was updated in both branches or two lines very close together were modified. Such conflicts need to be resolved manually.
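A minimal sketch of that merge workflow, assuming the target branch is called dev and lives on the remote origin:
git checkout multiple_fixes
git fetch origin
git merge origin/dev      # three-way merge; stops if there are conflicts
# fix any conflicted files, then:
git add <resolved-files>
git commit                # or: git merge --continue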
A:
You should rebase before merging; it will combine the commits from your target branch into your current branch. If you have conflicts, git will start the conflict resolver automatically with your default editor.
git rebase origin/dev
git push -f
or
git fetch --all
git checkout dev
git pull
git checkout multiple_fixes
git rebase dev
git push -f
The -f flag prevents commit duplication on pull-push, if your branch/repo settings allow it; if not, use git pull; git push
| How to import changes in other branches in GIT? | I have created a branch called "multiple_fixes" off my Dev branch and have been working there for a couple weeks. While I was working on my branch, several other people have created branches after I created mine, and have since made pull requests into the Dev branch.
Now, I am ready to merge my changes into Dev, but I have many files that are down-level from the current Dev branch. If I do a Pull Request, I will undo all the other changes that people have checked in.
How do I pull the current changes of Dev into my branch, without over-writing my work?
| [
"Doing a merge of two branches will not overwrite the changes of one branch with changes of the other branch. Git will perform a three-way merge and will try to consolidate the changes. This works in most cases, but it can also fail; especially if the same line was updated in both branches or two lines very close together were modified. Such conflicts need to be resolved manually.\n",
"You should do rebase before merge, it will combine commits from your target branch to your current branch. If you have conflicts, git will start conflict resolver automatically with default editor.\ngit rebase origin dev\ngit push -f\n\nor\ngit fetch --all\ngit checkout dev\ngit pull\ngit checkout multiple_fixes\ngit rebase dev\ngit push -f \n\n-f tag prevents commits duplication on pull-push, if your branch/repo settings allows it, if it's not, use git pull; git push\n"
] | [
0,
0
] | [] | [] | [
"bitbucket",
"git",
"git_commit",
"git_merge",
"merge_conflict_resolution"
] | stackoverflow_0074448292_bitbucket_git_git_commit_git_merge_merge_conflict_resolution.txt |
Q:
How kubernetes assigns podCIDR for nodes?
I'm currently learning about Kubernetes networking.
What I've got so far is that we have CNI plugins, which take care of network connectivity for pods: they create network interfaces inside a network namespace when a pod is created, they set up routes for the pod, etc.
So basically kubernetes delegates some network-related tasks to the CNI plugins.
But I suppose there is some portion of networking tasks that kubernetes does by itself. For example - kubernetes assigns to each node a podCIDR.
For example, I've set up a kubernetes cluster using kubeadm, with the command:
kubeadm init --pod-network-cidr=192.168.0.0/16 --kubernetes-version=1.24.0
And when I then look at the nodes I see that each received its own podCIDR range, for example:
spec:
podCIDR: 192.168.2.0/24
podCIDRs:
- 192.168.2.0/24
My question is: How does kubernetes calculate CIDR ranges for the nodes? Does it always assign a /24 subnet for each node?
A:
When you configure the maximum number of Pods per node for the cluster, Kubernetes uses this value to allocate a CIDR range for the nodes. You can calculate the maximum number of nodes on the cluster based on the cluster's secondary IP address range for Pods and the allocated CIDR range for the node.
Kubernetes assigns each node a range of IP addresses, a CIDR block, so that each Pod can have a unique IP address. The size of the CIDR block corresponds to the maximum number of Pods per node.
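For example, the /16 cluster CIDR from the question, split into the /24 blocks shown on the nodes, leaves room for 2^(24-16) = 256 node ranges of 256 addresses each (with kubeadm, the /24 node mask comes from the controller manager's --node-cidr-mask-size flag, which defaults to 24 for IPv4). You can list what each node was allocated with:
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'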
Also please refer to the similar SO & CIDR ranges for more information.
| How kubernetes assigns podCIDR for nodes? | I'm currently learning about Kubernetes networking.
What I've got so far is that we have CNI plugins, which take care of network connectivity for pods: they create network interfaces inside a network namespace when a pod is created, they set up routes for the pod, etc.
So basically kubernetes delegates some network-related tasks to the CNI plugins.
But I suppose there is some portion of networking tasks that kubernetes does by itself. For example - kubernetes assigns to each node a podCIDR.
For example, I've set up a kubernetes cluster using kubeadm, with the command:
kubeadm init --pod-network-cidr=192.168.0.0/16 --kubernetes-version=1.24.0
And when I then look at the nodes I see that each received its own podCIDR range, for example:
spec:
podCIDR: 192.168.2.0/24
podCIDRs:
- 192.168.2.0/24
My question is: How does kubernetes calculate CIDR ranges for the nodes? Does it always assign a /24 subnet for each node?
| [
"When you configure the maximum number of Pods per node for the cluster, Kubernetes uses this value to allocate a CIDR range for the nodes. You can calculate the maximum number of nodes on the cluster based on the cluster's secondary IP address range for Pods and the allocated CIDR range for the node.\nKubernetes assigns each node a range of IP addresses, a CIDR block, so that each Pod can have a unique IP address. The size of the CIDR block corresponds to the maximum number of Pods per node.\nAlso please refer to the similar SO & CIDR ranges for more information.\n"
] | [
0
] | [] | [] | [
"kubernetes",
"networking"
] | stackoverflow_0074673889_kubernetes_networking.txt |