Q:
How to handle ERROR while Populating document With Mongoose 6.6.3 and Next js
So I populate game from the product like this
const getHandler = async (req: NextApiRequest, res: NextApiResponse) => {
  await db.connect();
  const products = await Product.find({})
    .populate('game')
    .populate('category');
  res.send(products);
  await db.disconnect();
};
It works, but sometimes I get an error like this:
MissingSchemaError: Schema hasn't been registered for model "Game". Use mongoose.model(name, schema)
And I assume this is because I have to call the model first in my populate code, like this:
const games = await Game.find({});
// calling the model
const category = await Category.find({});
const products = await Product.find({})
  .populate('game', 'status')
  .populate('category', 'name');
After this, I never get the error again. Is there a better way to handle this error?
A:
It is not clear from your code what is causing the error, but there are a few things you can try to avoid it. First, you can check if the Game model has been registered with Mongoose before using it by calling mongoose.modelNames() and checking if "Game" is in the returned array. If it is not, you can register it by calling mongoose.model(name, schema), where name is the name of the model and schema is the schema for the model.
Another potential cause of the error is that you are trying to access the Game model before it has been fully loaded. In this case, you can try to add an await statement before calling the Game model, to make sure that it has been loaded before you use it.
const mongoose = require('mongoose');

// Check if the "Game" model has been registered
if (!mongoose.modelNames().includes('Game')) {
  // Register the "Game" model if it has not been registered yet
  mongoose.model('Game', gameSchema);
}

const getHandler = async (req, res) => {
  // Connect to the database
  await mongoose.connect();

  // Make sure the "Game" model is loaded before using it
  await mongoose.model('Game').ensureIndexes();

  // Use the "Game" model to find all games
  const games = await mongoose.model('Game').find({});

  // Use the populated data to return the products
  const products = await Product.find({})
    .populate('game', 'status')
    .populate('category', 'name');

  res.send(products);

  // Disconnect from the database
  await mongoose.disconnect();
};
I hope this helps!
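The underlying idea can be illustrated without a database: register a model only on the first call and return the cached one afterwards. The snippet below is a stand-in registry, not the Mongoose API; the real-world equivalent for Next.js is the common idiom `mongoose.models.Game || mongoose.model('Game', gameSchema)`, which survives module hot reloads.

```javascript
// Illustration only: a tiny stand-in for mongoose.models showing the
// "define once, reuse afterwards" pattern that avoids redefinition and
// missing-schema problems when modules are reloaded (as Next.js does in dev).
const registry = {}; // stands in for mongoose.models

function defineModel(name, schema) {
  if (!registry[name]) {
    // register only on the first call; later calls reuse the cached model
    registry[name] = { name, schema };
  }
  return registry[name];
}

const first = defineModel('Game', { status: String });
const second = defineModel('Game', { status: String }); // e.g. after a hot reload
console.log(first === second); // -> true
```

Wrapping model definitions this way means every import path gets the same registered instance, regardless of module evaluation order.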
Q:
Assign group number for each row, based on columns value ranges
I have some data that needs to be clustered into groups. That should be done according to a few predefined conditions.
Suppose we have the following table:
d = {'ID': [100, 101, 102, 103, 104, 105],
     'col_1': [12, 3, 7, 13, 19, 25],
     'col_2': [3, 1, 3, 3, 2, 4]
     }
df = pd.DataFrame(data=d)
df.head()
Here, I want to group ID based on the following ranges/conditions on col_1 and col_2.
For col_1 I divide values into following groups: [0, 10], [11, 15], [16, 20], [20, +inf]
For col_2 just use the df['col_2'].unique() values: [1], [2], [3], [4].
The desired grouping is in the group_num column:
Notice that rows 0 and 3 have the same group number, and note the order in which group numbers are assigned.
For now, I have only come up with an if-elif function to predefine all the groups. That is not a workable solution, because in my real task there are far more ranges and conditions.
My code snippet, if it's relevant:
# This logic is not working because here I have to predefine all the group
# configurations, aka numbers, but I want to create groups "dynamically":
# first group created, and if the next row is not in that group -> create a new one
def groupping(val_1, val_2):
    # not using match-case here, because my Python < 3.10
    if ((val_1 >= 0) and (val_1 < 10)) and (val_2 == 1):
        return 1
    elif ((val_1 >= 0) and (val_1 < 10)) and (val_2 == 2):
        return 2
    elif ...
    ...

df['group_num'] = df.apply(lambda x: groupping(x.col_1, x.col_2), axis=1)
A:
Not sure I understand the full logic, can't you use pandas.cut:
bins = [0, 10, 15, 20, np.inf]
df['group_num'] = pd.cut(df['col_1'], bins=bins,
                         labels=range(1, len(bins)))
Output:
    ID  col_1  col_2 group_num
0  100     12      3         2
1  101      3      1         1
2  102      7      3         1
3  103     13      2         2
4  104     19      3         3
5  105     25      4         4
A:
Make a dataframe for checking groups:
bins = [0, 10, 15, 20, float('inf')]
df1 = df[['col_1', 'col_2']].assign(col_1=pd.cut(df['col_1'], bins=bins, right=False)).sort_values(['col_1', 'col_2'])
df1
   col_1         col_2
1  [0.0, 10.0)       1
2  [0.0, 10.0)       3
0  [10.0, 15.0)      3
3  [10.0, 15.0)      3
4  [15.0, 20.0)      2
5  [20.0, inf)       4
Check groups using df1:
df1.ne(df1.shift(1)).any(axis=1).cumsum()
output:
1 1
2 2
0 3
3 3
4 4
5 5
dtype: int32
Assign the output to a group_num column:
df.assign(group_num=df1.ne(df1.shift(1)).any(axis=1).cumsum())
result:
ID col_1 col_2 group_num
0 100 12 3 3
1 101 3 1 1
2 102 7 3 2
3 103 13 3 3
4 104 19 2 4
5 105 25 4 5
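Both ideas can be combined into a short sketch that numbers groups dynamically, with no predefined mapping. One assumption here: group numbers follow the order of first appearance (no sorting), which matches the question's "first group created, and if the next row is not in that group -> create a new one". `pd.factorize` does exactly that for the (bin, col_2) pairs:

```python
import numpy as np
import pandas as pd

d = {'ID': [100, 101, 102, 103, 104, 105],
     'col_1': [12, 3, 7, 13, 19, 25],
     'col_2': [3, 1, 3, 3, 2, 4]}
df = pd.DataFrame(data=d)

bins = [0, 10, 15, 20, np.inf]
# bin col_1, then build a combined key with col_2; factorize numbers the
# unique keys in order of first appearance, so no groups need predefining
binned = pd.cut(df['col_1'], bins=bins).astype(str)
key = binned + '|' + df['col_2'].astype(str)
df['group_num'] = pd.factorize(key)[0] + 1
print(df['group_num'].tolist())  # -> [1, 2, 3, 1, 4, 5]
```

Rows 0 and 3 land in the same group (same bin, same col_2), as the question requires, and new combinations automatically get the next number.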
Q:
I have this node.js cloud function but it does not work?
I have this cloud function, written in Node.js, that listens for a child being added to a specific node and then sends a notification to the users. However, when I add something to the database, it does not send anything. I am working in Android Studio with Java. Do I need to connect the function to Android Studio, if it only listens to the database and then sends FCM messages to the device tokens?
Also, how do I debug this? I am using VS Code.
This is my code:
const functions = require("firebase-functions");
const admin = require("firebase-admin");
admin.initializeApp();

exports.listen = functions.database.ref("/Emergencies/{pushId}")
    .onCreate(async (change, context) => {
      change.after.val();
      context.params.pushId;

      // Get the list of device notification tokens. Note: There are more than 1 users in here
      const getDeviceTokensPromise = admin.database()
          .ref("/Registered Admins/{uid}/Token").once("value");

      // The snapshot to the user's tokens.
      let tokensSnapshot;

      // The array containing all the user's tokens.
      let tokens;

      const results = await Promise.all([getDeviceTokensPromise]);
      tokensSnapshot = results[0];

      // Check if there are any device tokens.
      if (!tokensSnapshot.hasChildren()) {
        return functions.logger.log(
            'There are no notification tokens to send to.'
        );
      }
      functions.logger.log(
          'There are',
          tokensSnapshot.numChildren(),
          'tokens to send notifications to.'
      );

      // Notification details.
      const payload = {
        notification: {
          title: "New Emergency Request!",
          body: "Someone needs help check Emergenie App now!",
        }
      };

      // Listing all tokens as an array.
      tokens = Object.keys(tokensSnapshot.val());

      // Send notifications to all tokens.
      const response = await admin.messaging().sendToDevice(tokens, payload);

      // For each message check if there was an error.
      const tokensToRemove = [];
      response.results.forEach((result, index) => {
        const error = result.error;
        if (error) {
          functions.logger.error(
              'Failure sending notification to',
              tokens[index],
              error
          );
          // Cleanup the tokens who are not registered anymore.
          if (error.code === 'messaging/invalid-registration-token' ||
              error.code === 'messaging/registration-token-not-registered') {
            tokensToRemove.push(tokensSnapshot.ref.child(tokens[index]).remove());
          }
        }
      });
      return Promise.all(tokensToRemove);
    });
A:
This seems wrong:
const getDeviceTokensPromise = admin.database()
    .ref("/Registered Admins/{uid}/Token").once("value");
The {uid} in this string is not defined anywhere, and is also going to be treated as just a string, rather than the ID of a user - which is what I expect you want.
More likely, you'll need to:
Load all of /Registered Admins
Loop over the results you get from that
Get the Token value for each of them
If you are new to JavaScript, Cloud Functions for Firebase is not the easiest way to learn it. I recommend first using the Admin SDK in a local Node.js process or with the emulator suite, which can be debugged with a local debugger. After those you'll be much better equipped to port that code to your Cloud Functions.
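The load-then-loop the answer describes can be sketched without the Firebase SDK by working on a plain object shaped like the /Registered Admins node. The node layout and the Token key come from the question; collectTokens is a hypothetical helper name, not part of any SDK:

```javascript
// Sketch (not the Firebase Admin SDK): collect the Token value of every
// registered admin from a plain object shaped like the /Registered Admins node.
function collectTokens(registeredAdmins) {
  return Object.values(registeredAdmins || {})
    .map((adminEntry) => adminEntry.Token)
    .filter(Boolean); // skip admins without a token
}

// sample data mimicking the Realtime Database JSON
const sample = {
  uid1: { Token: 'tok-a' },
  uid2: { Token: 'tok-b' },
  uid3: {}, // admin without a token
};
console.log(collectTokens(sample)); // -> ['tok-a', 'tok-b']
```

With the real SDK, the same shape comes from something like reading the whole `/Registered Admins` node once and iterating over the snapshot's children, instead of interpolating a single `{uid}`.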
Q:
how can I connect laravel with xampp mysql
I was trying to connect my Laravel project to the database, but I could not connect with the XAMPP MySQL. I can connect with my locally downloaded MySQL, but not with the XAMPP MySQL.
I wanted to connect my project with XAMPP MySQL and was expecting that I would be able to work in phpMyAdmin.
A:
Maybe this article can help you.
Remember to edit the .env file in your project folder to set the environment variable values:
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=database_name
DB_USERNAME=root
DB_PASSWORD=
By default, the XAMPP user is root, with no password.
Q:
Delete a Container with an Anchor
I'm new to React and I'm creating a Wiki of my repositories on GitHub.
So, I want to remove an item from my list when I click "Remover", and open the repository in a new page when I click "Ver repositório".
But here is my problem!
When I click the red anchor, it removes the item, as I expected. But when I click the blue anchor, it opens the repository page and ALSO removes the item. What should I do?
This is the function that I created to remove the repository:
const handleRemoveRepo = (id) => {
  const remove = repos.filter((repo) => repo.id !== repo.id);
  setRepos(remove);
  if ((repo) => repo.id === repo.id) {
    return null;
  }
}
And this is the container to map everything:
<Container>
  {repos.map(repo => <ItemRepo handleRemoveRepo={handleRemoveRepo} repo={repo} />)}
</Container>
My container comes from this index.js:
<ItemContainer onClick={handleRemove}>
  <h3>{repo.name}</h3>
  <p>{repo.full_name}</p>
  <a href={repo.html_url} target="_blank">Ver repositório</a> <br />
  <a href="#" className="remove">Remover</a>
  <hr />
</ItemContainer>
Thanks in advance.
React App page for example
I tried switching the conditional in my function and exporting a new function from ItemRepo, but neither worked.
A:
The "remove" click handler is assigned to the container, so it will be invoked if you click anywhere in the container:
<ItemContainer onClick={handleRemove}>
If it should only be invoked when clicking a specific link, assign the click handler to that specific link instead:
<a href="#" className="remove" onClick={handleRemove}>Remover</a>
As an aside... this shouldn't be a link. It's not navigating the user anywhere; it just performs an action in code. A <button> is a more semantically appropriate element for that, and it can be styled to not have the default button background, or styled however you want.
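The fix can also be sketched framework-free: the remove handler calls stopPropagation so the click never reaches the container's onClick, and preventDefault so the "#" anchor doesn't navigate. makeRemoveHandler and the mock event below are illustrations, not part of the question's code:

```javascript
// Sketch: a remove handler that stops the click from bubbling up to the
// container and prevents the anchor's default navigation.
function makeRemoveHandler(onRemove, id) {
  return function handleClick(event) {
    event.preventDefault();  // keep the '#' anchor from navigating
    event.stopPropagation(); // keep the click from reaching parent handlers
    onRemove(id);            // only now perform the actual removal
  };
}

// usage with a mock event object (a browser would pass a real MouseEvent):
let removedId = null;
const mockEvent = { preventDefault() {}, stopPropagation() {} };
makeRemoveHandler((id) => { removedId = id; }, 42)(mockEvent);
console.log(removedId); // -> 42
```

In JSX, this handler would be attached directly to the "Remover" element, leaving the "Ver repositório" anchor free to open its URL without side effects.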
Q:
Cassandra Optimization
I have a huge Cassandra table: over 2 billion rows, and it keeps growing. Systems are only supposed to write into this table, but if anyone updates or deletes any values, I want to be notified. No notification is needed for inserts.
How can we achieve this? Both batch and real-time streaming approaches are fine.
I know there are Cassandra triggers, but I am not sure whether there are performance issues or other disadvantages. Please check this link for an implementation:
https://medium.com/rahasak/publish-events-from-cassandra-to-kafka-via-cassandra-triggers-59818dcf7eed
Another approach is to use the Cassandra Kafka connectors, but I have never used them and am not sure how to architect a solution around one.
A:
Having written a trigger before, I can say that I would not recommend that path.
Cassandra has recently significantly improved its change data capture (CDC) feature, which I think is a better approach for what you're trying to do. Basically, any changes to a table produce events that end up on a streaming topic. And then you could use those however you need to.
DataStax (my employer) has a CDC Agent which is designed to run on each Cassandra node. The agent:
Watches the commitlog/cdc_raw directory for mutations.
Fetches the changed rows.
Writes the data rows onto an Apache Pulsar topic.
Removes the corresponding file(s) from the commitlog/cdc_raw directory.
More documentation on this process can be found here: About CDC for Cassandra
You'll want to make sure that cdc_enabled is set to true in each cassandra.yaml file. The rest of the process is laid out in the repo's quickstart guide.
The CDC Agent is designed to work with Apache Pulsar, not Kafka. Standing up a Pulsar instance isn't terribly difficult. But I'm sure if you wanted to use Kafka, you could have a look at the repo and figure out how to make it work with Kafka.
FWIW, here's how WalMart handled Cassandra CDC -> Kafka: WalMart's Cassandra CDC Solution
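As a reminder of the knobs involved, CDC is enabled per node in cassandra.yaml, and it must also be enabled per table (for example `ALTER TABLE my_ks.my_table WITH cdc = true;`). A config sketch follows; the commented paths and values are assumptions that vary by version and distribution, so check your own cassandra.yaml:

```yaml
# cassandra.yaml fragment: enable CDC on this node
cdc_enabled: true

# optional tuning (values below are illustrative defaults, not prescriptions):
# cdc_raw_directory: /var/lib/cassandra/cdc_raw
# cdc_total_space_in_mb: 4096
```

Once both switches are on, mutations to the table land as segments in the cdc_raw directory, where an agent like the one described above can pick them up.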
Q:
Best Way to approach a 3D customisable Globe in Flutter?
I have the following question:
I'm currently working on a travel app and had the idea of implementing a 3D globe that can be rotated by the user. I picture the globe as a basic white sphere with the country borders visible as strokes. All countries that have been visited should be filled with a color.
I thought of implementing it with the unity widget, but I'd like to maintain a lightweight feeling.
Is there a way to emulate js or WebGL?
What do you think is the best way to approach this?
Thank you for your time
Linus
A:
You can use webview_flutter. I played with it a little to display planet models with the JavaScript library three.js. Thanks to it, you can use three.js like a standard JS instance, with only one drawback that I didn't try to solve: I was unable to run it in browser mode, as Flutter needs to emulate JS itself in a platform-specific way. There is an option to load JS/HTML code from local assets, but then you will not be able to load models in JS directly, as you will face CORS policy restrictions; the only way then is to pass models as JSON from Flutter code. It works pretty well and supports two-way communication between JS and Flutter, so you can also add events to the model, etc.
Q:
how to use type as key of another type?
As shown in the example below, I'm trying to implement this behavior, as I want to pass the component name dynamically alongside its props.
Any suggestions? The current implementation is not working as expected.
type AllowedComponents = 'A' | 'B' | 'C'

type StepProps = {
  A: AProps,
  B: BProps,
  C: CProps
}

function someFn(componentName: AllowedComponents, props: stepProps[typeof componentName]) {
  ....
}
A:
You want someFn() to be generic, as follows:
function someFn<K extends AllowedComponents>(componentName: K, props: StepProps[K]) { }
The generic type parameter K is constrained to the AllowedComponents type, so it can only be one of the allowed component names. And the componentName parameter is of type K. The props parameter is of the indexed access type StepProps[K], so its type will be the props type for the given componentName.
In this way, the someFn() function can be used to pass a component name and its corresponding props to a function in a type-safe way. Here's an example of how you might use it:
declare const aProps: AProps;
declare const bProps: BProps;
someFn("A", aProps); // okay!
someFn("A", bProps); // error!
Playground link to code
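To see the generic signature in action, here is a runnable sketch that dispatches to per-component handlers. AProps/BProps/CProps are made-up placeholder shapes, since the question doesn't define them, and the handlers record is an illustration of one way to consume the typed pair:

```typescript
// Placeholder prop shapes (assumptions; the question leaves these undefined)
type AProps = { a: number };
type BProps = { b: string };
type CProps = { c: boolean };

type StepProps = { A: AProps; B: BProps; C: CProps };
type AllowedComponents = keyof StepProps;

// One handler per component; the mapped type keeps each handler's
// parameter type in sync with StepProps.
const handlers: { [K in AllowedComponents]: (props: StepProps[K]) => string } = {
  A: (p) => `A:${p.a}`,
  B: (p) => `B:${p.b}`,
  C: (p) => `C:${p.c}`,
};

function someFn<K extends AllowedComponents>(componentName: K, props: StepProps[K]): string {
  return handlers[componentName](props); // K ties the name and props together
}

console.log(someFn('A', { a: 1 })); // -> "A:1"
```

Passing mismatched pairs, such as someFn('A', { b: 'x' }), is rejected at compile time, which is exactly the safety the generic buys.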
Q:
gcc: warn if macro was redefined (regardless of previous definition)
gcc's manual says the following:
If a macro is redefined with a definition that is not effectively the same as the old one, the preprocessor issues a warning and changes the macro to use the new definition. If the new definition is effectively the same, the redefinition is silently ignored. This allows, for instance, two different headers to define a common macro. The preprocessor will only complain if the definitions do not match.
(emphasis mine)
Is there a way to make gcc more strict and issue a warning when a macro is redefined, regardless of definition?
Example:
#define TEST 1
#define TEST 1
int main(void) {
return 0;
}
Compiling with gcc -Wall -Wextra -Wpedantic does not generate any warning whatsoever.
A:
Technically speaking, if one header defines an apple as being red, and another wants to make sure everybody knows the apple is red, this should not be an issue. This is the reason behind it, and also to not compromise linking between multiple libraries if they have the same macro definition and the same value.
some h/hxx/hpp header
#define apples red
It's the usual attitude when you see some people wanting to make sure everyone knows they know (we all have these friends or co-workers, don't we? :) ) that apples are red so they state the obvious.
Preprocessor definitions are "compiled" (so-to-speak, rather said, interpreted and replaced or evaluated accordingly) at, well, compile-time. So having the same token defined multiple times is no real overhead on the app, it might just add a bit of compilation time.
The problem is when some wise-guy wants to let you know apples can also be green.
some other h/hxx/hpp header
#define apples green
Then, when you need to use some apples, you end up with:
some c/cxx/cpp file
#include "some_header.h/hxx/hpp"
#include "some_other_header.h/hxx/hpp"
And you end up with "apples" redefined.
Putting aside the daunting task of seeing where the conflict comes from (usually when combining multiple third-party libs/frameworks that might use similar names in macros or have the same acronyms prefixing a macro), this should be enough for you to know there is a conflict.
Keep in mind, this is a warning, so it will not stop the compilation. To treat warnings as errors, use -Werror.
I wouldn't worry about duplicate definitions, to be honest. They won't harm the code. If you really wanna go overkill-mode, you can always do some testing:
#if defined(apples) ...
... or ...
#ifdef apples ...
| gcc: warn if macro was redefined (regardless of previous definition) | gcc's manual says the following:
If a macro is redefined with a definition that is not effectively the same as the old one, the preprocessor issues a warning and changes the macro to use the new definition. If the new definition is effectively the same, the redefinition is silently ignored. This allows, for instance, two different headers to define a common macro. The preprocessor will only complain if the definitions do not match.
(emphasis mine)
Is there a way to make gcc more strict and issue a warning when a macro is redefined, regardless of definition?
Example:
#define TEST 1
#define TEST 1
int main(void) {
return 0;
}
Compiling with gcc -Wall -Wextra -Wpedantic does not generate any warning whatsoever.
| [
"Techincally speaking, if a header defines an apple as being red, then another wants to make sure everybody knows the apple is red, this should not be an issue. This is the reason behind it. And also to not compromise linking between multiple libraries if they have the same macro definition and the same value.\n\nsome h/hxx/hpp header\n#define apples red\n\n\nIt's the usual attitude when you see some people wanting to make sure everyone knows they know (we all have these friends or co-workers, don't we? :) ) that apples are red so they state the obvious.\nPreprocessor definitions are \"compiled\" (so-to-speak, rather said, interpreted and replaced or evaluated accordingly) at, well, compile-time. So having the same token defined multiple times is no real overhead on the app, it might just add a bit of compilation time.\nThe problem is when some wise-guy wants to let you know apples can also be green.\n\nsome other h/hxx/hpp header\n#define apples green\n\n\nThen, when you need to use some apples, you end up with:\n\nsome c/cxx/cpp file\n#include \"some_header.h/hxx/hpp\"\n#include \"some_other_header.h/hxx/hpp\"\n\n\nAnd you end up with \"apples \" redefined.\nPutting aside the daunting task of seeing where the conflict comes from (usually when combining multiple third-party libs/framerworks that might use similar names in macros or have the same acronyms prefixing a macro), this should be enough for you to know there is a conflict.\nKeep in mind, this is a warning, so it will not stop the compilation. To treat warnings as errors, use -Werror.\nI wouldn't worry about duplicate definitions, to be honest. They won't harm the code. If you really wanna go overkill-mode, you can always do some testing:\n#if defined(apples) ...\n\n... or ...\n#ifdef apples ...\n\n"
] | [
0
] | [] | [] | [
"gcc"
] | stackoverflow_0074615739_gcc.txt |
Q:
Node.js Logging to File deleting old log on restart
I have a Node.js Application that I execute with systemctl start app.service on my server.
I configured my package.json correctly so that on app start the log is written to app.log with
node app.js > app.log 2>&1. The problem is now that with every restart the old log is deleted and a new one is generated. I want to keep my old log data for debugging purposes. How can I edit the log statement so that the old log is kept, or the new log just appended? Is this possible?
I already searched in Stackoverflow and Google for a solution but did not find one. I was expecting to keep my old logs.
A:
When using IO redirection, if you want to append to a file rather than overwrite the file, you need to use ">>" instead of ">". For example:
node app.js >> app.log
| Node.js Logging to File deleting old log on restart | I have a Node.js Application that I execute with systemctl start app.service on my server.
I configured my package.json correctly so that on app start the log is written to app.log with
node app.js > app.log 2>&1. The problem is now that with every restart the old log is deleted and a new one is generated. I want to keep my old log data for debugging purposes. How can I edit the log statement so that the old log is kept, or the new log just appended? Is this possible?
I already searched in Stackoverflow and Google for a solution but did not find one. I was expecting to keep my old logs.
| [
"When using IO redirection, if you want to append to a file rather than overwrite the file, you need to use \">>\" instead of \">\". For example:\nnode app.js >> app.log\n\n"
] | [
0
] | [] | [] | [
"javascript",
"logging",
"node.js"
] | stackoverflow_0074677465_javascript_logging_node.js.txt |
Q:
print contents of remaining tags after a tag beautifulsoup
I just printed all the contents of li using .find_all('li') and I want to continue printing 'p' tags after the li tags end: not 'p' tags at the beginning of the html or in between, just the 'p' tags or remaining tags at the end. Please help. Basically I need everything after the final list-end tag.
from bs4 import BeautifulSoup
html_doc = """\
<html>
<p>
don't need this
</p>
<li>
text i need
</li>
<li>
<p>
don't need this
</p>
<p>
don't need this
</p>
<li>
text i need
<ol>
<li>
text i need but appended to parent li tag
</li>
<li>
text i need but appended to parent li tag
</li>
</ol>
</li>
<li>
text i need
</li>
<p>
also need this
</p>
<p>
and this
</p>
<p>
and this too
</p>"""
soup = BeautifulSoup(html_doc, "html.parser")
for li in soup.select("li"):
if li.find_parent("li"):
continue
print(" ".join(li.text.split()))
print("--sep--")
this prints
text i need
--sep--
text i need text i need but appended to parent li tag text i need but appended to parent li tag
--sep--
text i need
--sep--
thanks to @Andrej Kesely
i need this
text i need
--sep--
text i need text i need but appended to parent li tag text i need but appended to parent li tag
--sep--
text i need
--sep--
also need this
--sep--
and this
--sep--
and this too
--sep--
A:
You can try this:
for li in soup.select("li:not(li li)"):
print(" ".join([
d.get_text().strip() for d in li.descendants
if 'NavigableString' in str(type(d)) and
d.parent.name == 'li' and d.get_text().strip()
]))
print("--sep--")
# for the p tags after ANY of the [outermost] li tags
for p in soup.select("li:not(li li) ~ p"): print(p.text.strip(), "\n--sep--")
(Using :not(li li) lets you not need the if li.find_parent("li"): continue part.)
This should get you
the text from [outermost] li tags, but only made up of strings that are directly inside that li tag or an li tag inside it
and then
text from p tags that are sibling to a preceding outermost li tag. (If you want only p tags after the last li, use for p in soup.select("li:not(li li) ~ p:not(:has(~ li))")...)
| print contents of remaining tags after a tag beautifulsoup | I just printed all the contents of li using .find_all('li') and I want to continue printing 'p' tags after the li tags end: not 'p' tags at the beginning of the html or in between, just the 'p' tags or remaining tags at the end. Please help. Basically I need everything after the final list-end tag.
from bs4 import BeautifulSoup
html_doc = """\
<html>
<p>
don't need this
</p>
<li>
text i need
</li>
<li>
<p>
don't need this
</p>
<p>
don't need this
</p>
<li>
text i need
<ol>
<li>
text i need but appended to parent li tag
</li>
<li>
text i need but appended to parent li tag
</li>
</ol>
</li>
<li>
text i need
</li>
<p>
also need this
</p>
<p>
and this
</p>
<p>
and this too
</p>"""
soup = BeautifulSoup(html_doc, "html.parser")
for li in soup.select("li"):
if li.find_parent("li"):
continue
print(" ".join(li.text.split()))
print("--sep--")
this prints
text i need
--sep--
text i need text i need but appended to parent li tag text i need but appended to parent li tag
--sep--
text i need
--sep--
thanks to @Andrej Kesely
i need this
text i need
--sep--
text i need text i need but appended to parent li tag text i need but appended to parent li tag
--sep--
text i need
--sep--
also need this
--sep--
and this
--sep--
and this too
--sep--
| [
"You can try this:\nfor li in soup.select(\"li:not(li li)\"): \n print(\" \".join([\n d.get_text().strip() for d in li.descendants \n if 'NavigableString' in str(type(d)) and \n d.parent.name == 'li' and d.get_text().strip()\n ])) \n print(\"--sep--\")\n\n# for the p tags after ANY of the [outermost] li tags\nfor p in soup.select(\"li:not(li li) ~ p\"): print(p.text.strip(), \"\\n--sep--\") \n\n(Using :not(li li) lets you not need the if li.find_parent(\"li\"): continue part.)\nThis should get you\n\nthe text from [outermost] li tags, but only made up of strings that are directly inside that li tag or an li tag inside it\n\nand then\n\ntext from p tags that are sibling to a preceding outermost li tag. (If you want only p tags after the last li, use for p in soup.select(\"li:not(li li) ~ p:not(:has(~ li))\")...)\n\n"
] | [
0
] | [] | [] | [
"beautifulsoup",
"html",
"python",
"python_3.x"
] | stackoverflow_0074677516_beautifulsoup_html_python_python_3.x.txt |
Q:
Unable to configure MySQL during reinstallation
The MySQL service I have on my Windows 10 computer suddenly stopped and refused to start again. I tried many different options to fix it and they all didn't work, so I decided to uninstall MySQL and reinstall it.
I deleted these folders before uninstall:
C:\Program Files\MySQL\
C:\ProgramData\MySQL\
C:\Users\[username]\AppData\Roaming\
C:\Program Files (x86)\MySQL\
During re-installing, I noticed that Connector .NET failed to install.
I am not sure if that was why the problem occurred, but the installer has repeatedly failed the "Backing up MySQL Database" step of configuration. Every time I run it, it displays this error message:
Starting MySQL Server in order to run the mysql_upgrade tool.
Warning: There may be some errors thrown by MySQL Server, the mysql_upgrade tool is going to be run next to attempt to fix database incompatibilities.
Starting process for MySQL Server 8.0.21...
Starting process with command: C:\Program Files\MySQL\MySQL Server 8.0\bin\mysqld.exe --port=3306 --datadir="C:\ProgramData\MySQL\MySQL Server 8.0\data" --console...
Process for mysqld, with ID 12424, has been started successfully and is running.
Successfully started process for MySQL Server 8.0.21.
2020-08-19T16:52:16.133223Z 0 [System] [MY-010116] [Server] C:\Program Files\MySQL\MySQL Server 8.0\bin\mysqld.exe (mysqld 8.0.21) starting as process 7612
2020-08-19T16:52:16.251679Z 1 [ERROR] [MY-011011] [Server] Failed to find valid data directory.
2020-08-19T16:52:16.254844Z 0 [ERROR] [MY-010020] [Server] Data Dictionary initialization failed.
2020-08-19T16:52:16.255289Z 0 [ERROR] [MY-010119] [Server] Aborting
2020-08-19T16:52:16.261307Z 0 [System] [MY-010910] [Server] C:\Program Files\MySQL\MySQL Server 8.0\bin\mysqld.exe: Shutdown complete (mysqld 8.0.21) MySQL Community Server - GPL.........................
Running mysqldump tool to backup the database...
Backup files will be dumped to "C:\ProgramData\MySQL\MySQL Server 8.0\Backup\mysql_dump-2020-08-19T12.55.23.sql".
Starting process with command: C:\Program Files\MySQL\MySQL Server 8.0\bin\mysqldump.exe --user=root --default-auth=caching_sha2_password --host=localhost --port=3306 --default-character-set=utf8 --routines --events --single-transaction=TRUE --all-databases --result-file="C:\ProgramData\MySQL\MySQL Server 8.0\Backup\mysql_dump-2020-08-19T12.55.23.sql"...
mysqldump: Got error: 2003: Can't connect to MySQL server on 'localhost' (10061) when trying to connect
Process for mysqldump, with ID 18972, was run successfully and exited with code 2.
Ended configuration step: Backing up MySQL database
Found existing data directory, no need to initialize the database.
I can't skip the configuration as MySQL still hasn't appeared as a service.
I tried manually creating a data folder in C:\ProgramData\MySQL\MySQL Server 8.0\ but that still hasn't solved the problem. Can anyone give me an idea of why I am receiving this error?
A:
I don't remember exactly how I solved this, but I do remember looking this question up online and I found out that there still might be a registry value that thinks Connector .NET exists (even though it doesn't), which is why the installation keeps failing.
I installed CCleaner, which cleaned up these rogue registry values, and then I was able to get MySQL to work. Hope this helps anyone else who is in the same position as me.
| Unable to configure MySQL during reinstallation | The MySQL service I have on my Windows 10 computer suddenly stopped and refused to start again. I tried many different options to fix it and they all didn't work, so I decided to uninstall MySQL and reinstall it.
I deleted these folders before uninstall:
C:\Program Files\MySQL\
C:\ProgramData\MySQL\
C:\Users\[username]\AppData\Roaming\
C:\Program Files (x86)\MySQL\
During re-installing, I noticed that Connector .NET failed to install.
I am not sure if that was why the problem occurred, but the installer has repeatedly failed the "Backing up MySQL Database" step of configuration. Every time I run it, it displays this error message:
Starting MySQL Server in order to run the mysql_upgrade tool.
Warning: There may be some errors thrown by MySQL Server, the mysql_upgrade tool is going to be run next to attempt to fix database incompatibilities.
Starting process for MySQL Server 8.0.21...
Starting process with command: C:\Program Files\MySQL\MySQL Server 8.0\bin\mysqld.exe --port=3306 --datadir="C:\ProgramData\MySQL\MySQL Server 8.0\data" --console...
Process for mysqld, with ID 12424, has been started successfully and is running.
Successfully started process for MySQL Server 8.0.21.
2020-08-19T16:52:16.133223Z 0 [System] [MY-010116] [Server] C:\Program Files\MySQL\MySQL Server 8.0\bin\mysqld.exe (mysqld 8.0.21) starting as process 7612
2020-08-19T16:52:16.251679Z 1 [ERROR] [MY-011011] [Server] Failed to find valid data directory.
2020-08-19T16:52:16.254844Z 0 [ERROR] [MY-010020] [Server] Data Dictionary initialization failed.
2020-08-19T16:52:16.255289Z 0 [ERROR] [MY-010119] [Server] Aborting
2020-08-19T16:52:16.261307Z 0 [System] [MY-010910] [Server] C:\Program Files\MySQL\MySQL Server 8.0\bin\mysqld.exe: Shutdown complete (mysqld 8.0.21) MySQL Community Server - GPL.........................
Running mysqldump tool to backup the database...
Backup files will be dumped to "C:\ProgramData\MySQL\MySQL Server 8.0\Backup\mysql_dump-2020-08-19T12.55.23.sql".
Starting process with command: C:\Program Files\MySQL\MySQL Server 8.0\bin\mysqldump.exe --user=root --default-auth=caching_sha2_password --host=localhost --port=3306 --default-character-set=utf8 --routines --events --single-transaction=TRUE --all-databases --result-file="C:\ProgramData\MySQL\MySQL Server 8.0\Backup\mysql_dump-2020-08-19T12.55.23.sql"...
mysqldump: Got error: 2003: Can't connect to MySQL server on 'localhost' (10061) when trying to connect
Process for mysqldump, with ID 18972, was run successfully and exited with code 2.
Ended configuration step: Backing up MySQL database
Found existing data directory, no need to initialize the database.
I can't skip the configuration as MySQL still hasn't appeared as a service.
I tried manually creating a data folder in C:\ProgramData\MySQL\MySQL Server 8.0\ but that still hasn't solved the problem. Can anyone give me an idea of why I am receiving this error?
| [
"I don't remember exactly how I solved this, but I do remember looking this question up online and I found out that there still might be a registry value that thinks Connector .NET exists (even though it doesn't), which is why the program keeps failing to upload.\nI installed CCleaner, which cleaned up these rogue registry values, and then I was able to get MySQL to work. Hope this helps anyone else who is in the same position as me.\n"
] | [
0
] | [] | [] | [
"devops",
"mysql"
] | stackoverflow_0063507244_devops_mysql.txt |
Q:
How to implement HLS video service with Vue3.js single page application
I'm creating video streaming platform with Vue.js. However, I encountered the question that I couldn't solve. That is when we use SPA like Vue.js, javascript are running on browser, so we have to receive segment files from server side API. If I use MPA, all I have to do is to specify the location of .m3u8 file.
I am using express for Node.js.
My question is how to create API that sends .m3u8 file and segment .ts files to client side Vue.js. Now, all .m3u8 and .ts segment files are all in server side, so I would like to know how to access or receive data from serverside API using SPA and node.js.
A:
To serve .m3u8 and .ts files from a Node.js server using Express, you can use the express.static middleware function to serve the files from a directory on the server. This middleware function takes the path to the directory containing the files as its only argument.
Here is an example of how you can use the express.static middleware to serve .m3u8 and .ts files from a directory called public:
const express = require('express')
const app = express()
// Serve the files in the "public" directory
app.use(express.static('public'))
// Start the server
const port = 3000
app.listen(port, () => {
console.log(`Server listening on port ${port}`)
})
Once you have set up the server to serve the files, you can access the .m3u8 file and the .ts segment files in your Vue.js app by making HTTP requests to the server using the fetch API or a library like Axios. For example, you can use the following code to make a request for the .m3u8 file:
// Make a request for the .m3u8 file
fetch('/path/to/file.m3u8')
.then(response => response.text())
.then(data => {
// Use the data here
})
.catch(error => {
// Handle the error here
})
You can then use the data returned from the request to load the video using a player library like HLS.js. For more information about using HLS.js with Vue.js, you can check out the official HLS.js documentation.
| How to implement HLS video service with Vue3.js single page application | I'm creating video streaming platform with Vue.js. However, I encountered the question that I couldn't solve. That is when we use SPA like Vue.js, javascript are running on browser, so we have to receive segment files from server side API. If I use MPA, all I have to do is to specify the location of .m3u8 file.
I am using express for Node.js.
My question is how to create API that sends .m3u8 file and segment .ts files to client side Vue.js. Now, all .m3u8 and .ts segment files are all in server side, so I would like to know how to access or receive data from serverside API using SPA and node.js.
| [
"To serve .m3u8 and .ts files from a Node.js server using Express, you can use the express.static middleware function to serve the files from a directory on the server. This middleware function takes the path to the directory containing the files as its only argument.\nHere is an example of how you can use the express.static middleware to serve .m3u8 and .ts files from a directory called public:\nconst express = require('express')\nconst app = express()\n\n// Serve the files in the \"public\" directory\napp.use(express.static('public'))\n\n// Start the server\nconst port = 3000\napp.listen(port, () => {\n console.log(`Server listening on port ${port}`)\n})\n\nOnce you have set up the server to serve the files, you can access the .m3u8 file and the .ts segment files in your Vue.js app by making HTTP requests to the server using the fetch API or a library like Axios. For example, you can use the following code to make a request for the .m3u8 file:\n// Make a request for the .m3u8 file\nfetch('/path/to/file.m3u8')\n .then(response => response.text())\n .then(data => {\n // Use the data here\n })\n .catch(error => {\n // Handle the error here\n })\n\nYou can then use the data returned from the request to load the video using a player library like HLS.js. For more information about using HLS.js with Vue.js, you can check out the official documentation here.\n"
] | [
0
] | [] | [] | [
"hls.js",
"http_live_streaming",
"node.js",
"vue.js"
] | stackoverflow_0074678028_hls.js_http_live_streaming_node.js_vue.js.txt |
Q:
Why does the image not render in Android Studio when an integer variable containing the drawable is passed to the painterResource function?
I have a requirement to display different images based on certain user interactions. So, I'm storing the drawable resource ID in an integer variable. However, when I pass this variable into the Image's painterResource function the image is not rendered.
Code looks like this:
val img = R.drawable.img1
val img2 = R.drawable.img2
// imageToDisplay is assigned based on certain conditions.
var imageToDisplay = img
Image(painter = painterResource(imageToDisplay), contentDescription = null)
A:
One way to solve this issue is to use the resources property of the Image component to access the drawable resources. You can then use the getDrawable function to retrieve the drawable based on the resource ID stored in the imageToDisplay variable.
Here is an example of how your code can be modified to accomplish this:
val img = R.drawable.img1
val img2 = R.drawable.img2
// imageToDisplay is assigned based on certain conditions.
var imageToDisplay = img
Image(painter = painterResource(imageToDisplay), contentDescription = null)
Alternatively, you can load the drawable as an ImageBitmap with ImageBitmap.imageResource (from androidx.compose.ui.res) and pass it to the Image component's bitmap parameter. The code would look like this:
val img = R.drawable.img1
val img2 = R.drawable.img2
// imageToDisplay is assigned based on certain conditions.
var imageToDisplay = img
Image(bitmap = ImageBitmap.imageResource(id = imageToDisplay), contentDescription = null)
A:
The code you provided is working "as it is" using available drawables on my end; unless you include more details we can only guess, but when you said
I have a requirement to display different images based on certain user interactions. …
and
… imageToDisplay is assigned based on certain conditions.
and
… when I pass this variable into the Image's painterResource function the image is not rendered.
My best guess is that the composable this code is in is not re-composing or not updating for some reason when you perform some conditional action.
Again, we can only guess, so you can try this or just use this as a reference.
@Composable
fun DynamicImageComposable() {
val img = R.drawable.img
val img2 = R.drawable.img
// don't use ordinary variable, convert it to a mutable State instead
var imageToDisplay by remember {
mutableStateOf(img) // just use any drawable you want as the initial value
}
// when you change this to img2, this composable is expected to re-compose
imageToDisplay = img
Image(painter = painterResource(imageToDisplay), contentDescription = null)
}
The logic is a bit contrived, but the point it is trying to make is to use mutable state so that a composable re-composes.
| Why does the image not render in Android Studio when an integer variable containing the drawable is passed to the painterResource function? | I have a requirement to display different images based on certain user interactions. So, I'm storing the drawable resource ID in an integer variable. However, when I pass this variable into the Image's painterResource function the image is not rendered.
Code looks like this:
val img = R.drawable.img1
val img2 = R.drawable.img2
// imageToDisplay is assigned based on certain conditions.
var imageToDisplay = img
Image(painter = painterResource(imageToDisplay), contentDescription = null)
| [
"One way to solve this issue is to use the resources property of the Image component to access the drawable resources. You can then use the getDrawable function to retrieve the drawable based on the resource ID stored in the imageToDisplay variable.\nHere is an example of how your code can be modified to accomplish this:\nval img = R.drawable.img1\nval img2 = R.drawable.img2\n\n// imageToDisplay is assigned based on certain conditions.\nvar imageToDisplay = img\n\nImage(painter = painterResource(imageToDisplay), contentDescription = null)\n\nAlternatively, you can also use the imageResource function instead of painterResource to set the drawable resource for the Image component. The code would look like this:\nval img = R.drawable.img1\nval img2 = R.drawable.img2\n\n// imageToDisplay is assigned based on certain conditions.\nvar imageToDisplay = img\n\nImage(imageResource = imageToDisplay, contentDescription = null)\n\n",
"The code you provided is working \"as it is\" using available drawables in my end, unless you include more details then we can only guess, but when you said\n\nI have a requirement to display different images based on certain user interactions. …\n\nand\n\n… imageToDisplay is assigned based on certain conditions.\n\nand\n\n… when I pass this variable into the Image's painterResource function the image is not rendered.\n\nMy best guess is the composable these codes are in is not re-composing or not updating for some reason when you perform some conditional action.\nAgain, we can only guess so you can try this or just use this as a reference.\n@Composable\nfun DynamicImageComposable() {\n\n val img = R.drawable.img\n val img2 = R.drawable.img\n\n // don't use ordinary variable, convert it to a mutable State instead\n var imageToDisplay by remember {\n mutableStateOf(img) // just use any drawable you want as the initial value\n }\n\n // when you change this to img2, this composable is expected to re-compose\n imageToDisplay = img\n\n Image(painter = painterResource(imageToDisplay), contentDescription = null)\n}\n\nThe logic is a bit useless, but what its trying to point is using mutable state for a composable to re-compose.\n"
] | [
1,
1
] | [] | [] | [
"android",
"android_jetpack_compose",
"kotlin"
] | stackoverflow_0074675941_android_android_jetpack_compose_kotlin.txt |
Q:
Are all C libraries in C++ too
Hi, I am running this code on Visual Studio 2022, but it says #include <unistd.h> cannot be opened. Basically it is C code which I am running in a C++ environment.
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
int main()
{
int id;
id = fork();
if (id < 0) {
printf(" Error \n");
return (1);
}
else if (id == 0)
printf("Child\n");
else
printf("Parent \n");
return 0;
}
So I am confused; maybe not all C libraries are included in C++.
Also, when I run this program with gcc it says fork is not defined.
I have tried to run this code with three compilers (Dev-C++, Visual Studio 2022 and gcc) but errors were thrown by all of them.
A:
unistd.h is a Unix header file; if you are running this on Windows you cannot use it.
| Are all C libraries in C++ too | Hi, I am running this code on Visual Studio 2022, but it says #include <unistd.h> cannot be opened. Basically it is C code which I am running in a C++ environment.
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
int main()
{
int id;
id = fork();
if (id < 0) {
printf(" Error \n");
return (1);
}
else if (id == 0)
printf("Child\n");
else
printf("Parent \n");
return 0;
}
So I am confused; maybe not all C libraries are included in C++.
Also, when I run this program with gcc it says fork is not defined.
I have tried to run this code with three compilers (Dev-C++, Visual Studio 2022 and gcc) but errors were thrown by all of them.
| [
"unistd.h is a unix file, if you are running this on Windows you can not use that header file.\n"
] | [
0
] | [] | [] | [
"c++",
"fork"
] | stackoverflow_0074677991_c++_fork.txt |
Q:
What is the proper procedure for git push?
I have a scenario where I have a git branch (b), based off of develop.
I then made changes to branch (b) and pushed to remote (b)
Some other developer made a PR and had their code merged into develop.
I then pull the changes from remote/develop and rebased my local branch (b) onto develop
I then make more changes to my local branch (b)
When I commit and push my changes, I get a rejected error:
[! [rejected] feature/b-> feature/b (non-fast-forward)
error: failed to push some refs to 'gitlab'
hint: Updates were rejected because the tip of your current branch is behind]
What I normally tend to do is to do a --force push
But I'm wondering if this is the right approach?
A:
Your push was rejected because your local copy of your branch has a different history than the remote copy, because you rebased. Considering that you rebased intentionally, and that you want the remote version to get those changes, using force is entirely appropriate if you’re confident that nobody else is also working from the remote copy of your branch.
The best way to ensure that you’re not changing a branch that someone else is using is to make your own fork of the repo and push your changes to a branch in your fork; when you’re ready, you then make a pull request back to the develop branch in the “shared” repo. If each developer on your team has their own fork of each project’s repository, and if you all understand that personal forks of a repo are meant for tracking your own work in progress, there’s no real danger of creating a problem for someone else.
An alternative would be to push your rebased branch to some new remote branch, leaving the un-rebased one alone. There’s little point in that if you know you’re the only person using the branch, though.
| What is the proper procedure for git push? | I have a scenario where I have a git branch (b), based off of develop.
I then made changes to branch (b) and pushed to remote (b)
Some other developer made a PR and had their code merged into develop.
I then pull the changes from remote/develop and rebased my local branch (b) onto develop
I then make more changes to my local branch (b)
When I commit and push my changes, I get a rejected error:
[! [rejected] feature/b-> feature/b (non-fast-forward)
error: failed to push some refs to 'gitlab'
hint: Updates were rejected because the tip of your current branch is behind]
What I normally tend to do is to do a --force push
But I'm wondering if this is the right approach?
| [
"Your push was rejected because your local copy of your branch has a different history than the remote copy, because you rebased. Considering that you rebased intentionally, and that you want the remote version to get those changes, using force is entirely appropriate if you’re confident that nobody else is also working from the remote copy of your branch.\nThe best way to ensure that you’re not changing a branch that someone else is to make your own fork of the repo and push your changes to a branch in your fork; when you’re ready, you then make a pull request back to the develop branch in the “shared” repo. If each developer on your team has their own fork of each project’s repository, and if you all understand that personal forks of a repo are meant for tracking your own work in progress, there’s no real danger of creating a problem for someone else.\nAn alternative would be to push your rebased branch to some new remote branch, leaving the un-rebased one alone. There’s little point in that if you know you’re the only person using the branch, though.\n"
] | [
1
] | [
"When you receive a \"non-fast-forward\" error while pushing to a Git branch, it means that the remote branch has new commits that are not present in your local branch. This can happen if someone else has pushed new commits to the remote branch while you were working on your local branch.\nThe \"right\" way to handle this situation depends on what you want to do with your local changes. If you want to discard your local changes and use the latest version of the branch from the remote, you can use the git reset command to reset your local branch to the latest version on the remote. This will discard any local changes that you have made and leave your branch in the same state as the remote branch.\nAlternatively, if you want to keep your local changes and merge them with the latest version of the branch on the remote, you can use the git pull command to pull the latest changes from the remote and merge them with your local branch. This will create a new merge commit that combines your local changes with the latest changes from the remote branch.\nIn either case, it is not recommended to use the --force flag when pushing to a remote branch. This flag can overwrite other people's changes and cause conflicts, which can be difficult to resolve. Instead, it is best to use the git reset or git pull commands to handle non-fast-forward errors in a more controlled manner.\n",
"It is generally not recommended to use the --force option when pushing changes to a remote Git branch. Using --force can overwrite changes on the remote branch, potentially leading to data loss or conflicts with other developers who are working on the same branch.\nIn your scenario, it sounds like you made changes to your local branch (b), and then someone else made changes to the same branch on the remote repository. When you tried to push your changes, you received a \"non-fast-forward\" error because your local branch is behind the remote branch.\nTo resolve this issue, you can try the following steps:\n\nPull the latest changes from the remote branch (b) using the git\npull command. This will merge the remote changes into your local\nbranch.\n\nResolve any conflicts that may have occurred during the merge. This\nmay involve manually editing the files that have conflicts, and\nusing the git add and git commit commands to stage and commit the\nresolved changes.\n\nPush your updated local branch (b) to the remote repository using\nthe git push command. This will update the remote branch with your\nchanges, and you should no longer receive a \"non-fast-forward\"\nerror.\n\n\nBy following these steps, you can avoid using the --force option and safely merge your changes with the latest changes on\n",
"Once you have made b \"public\" by pushing it to the remote (where anyone else might pull it), you can't effectively rebase your local branch any more.\nInstead, you should have merged the updated develop into b, which does not rewrite history. Having done this, you can reliably push additional changes from your local b to the remote b, and others will simply see that you have resolved any differences between b and develop by merging, rather than seeing an unexpected change in where b branched off develop in the first place.\n",
"To display all the branch in that project use:\ngit branch\n\nswitch to the specific branch\ngit checkout \"branch-name\"\n\nAdd all change to this branch\ngit add .\ngit commit -m \"enter you commit message here\"\n\nOnce switch to the master branch.\nYou can use this to merge the other branch (\"branch-name\")\ngit merge \"branch-name\"\n\nI hope this help\n"
] | [
-1,
-1,
-1,
-1
] | [
"git",
"gitlab"
] | stackoverflow_0074668915_git_gitlab.txt |
Q:
Why the code in react was working, but few weeks later it has error? TypeError: Cannot read properties of undefined (reading 'slice')
Last week my code was working as below:
function UserCard({ user }) {
const { name, birthday, _id, url, area } = user
//.........
//.........
//.........
return (
<div>
<img src={url.replace('upload/', 'upload/w_300,h_300,c_limit/')} className="UserCard-img" alt="user-img" />
<h3>{name.slice(0, 1).toUpperCase() + name.slice(1).toLowerCase()}</h3>
</div>
);
}
But today I found the website had error, it said:
TypeError: Cannot read properties of undefined (reading 'slice')
TypeError: Cannot read properties of undefined (reading 'replace')
And then I removed 'slice' and 'replace', and it's working now.
These kinds of things have happened twice already; why is the code unstable? Should I not write functions inside {}?
A:
The error is telling you that name has an undefined value. So whatever is using this component isn't (or at least isn't always) providing a value for the name prop.
You can use optional chaining to only try to de-reference the value if one exists:
name?.slice(0, 1).toUpperCase()
Or perhaps not display the element at all if there is no value to display:
{ name ?
<h3>{name.slice(0, 1).toUpperCase() + name.slice(1).toLowerCase()}</h3> :
null
}
There are a variety of ways to structure the logic, but overall the point is to check if the variable has a value before trying to use it.
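To make the failure mode concrete, here is a small standalone sketch of the same capitalization logic with a guard, runnable outside React (the helper name formatName is made up for illustration):

```javascript
// Capitalize a name the same way the component does, but guard against
// null/undefined so .slice() is never called on a missing value.
function formatName(name) {
  if (!name) return ""; // bail out before touching string methods
  return name.slice(0, 1).toUpperCase() + name.slice(1).toLowerCase();
}

console.log(formatName("aLICE"));   // "Alice"
console.log(formatName(undefined)); // "" instead of a TypeError
```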
| Why the code in react was working, but few weeks later it has error? TypeError: Cannot read properties of undefined (reading 'slice') | Last week my code was working as below:
function UserCard({ user }) {
const { name, birthday, _id, url, area } = user
//.........
//.........
//.........
return (
<div>
<img src={url.replace('upload/', 'upload/w_300,h_300,c_limit/')} className="UserCard-img" alt="user-img" />
<h3>{name.slice(0, 1).toUpperCase() + name.slice(1).toLowerCase()}</h3>
</div>
);
}
But today I found the website had error, it said:
TypeError: Cannot read properties of undefined (reading 'slice')
TypeError: Cannot read properties of undefined (reading 'replace')
And then I removed 'slice' and 'replace', and it's working now.
These kinds of things have happened twice already; why is the code unstable? Should I not write functions inside {}?
| [
"The error is telling you that name has an undefined value. So whatever is using this component isn't (or at least isn't always) providing a value for the name prop.\nYou can use optional chaining to only try to de-reference the value if one exists:\nname?.slice(0, 1).toUpperCase()\n\nOr perhaps not display the element at all if there is no value to display:\n{ name ?\n <h3>{name.slice(0, 1).toUpperCase() + name.slice(1).toLowerCase()}</h3> :\n null\n}\n\nThere are a variety of ways to structure the logic, but overall the point is to check if the variable has a value before trying to use it.\n"
] | [
2
] | [] | [] | [
"debugging",
"javascript",
"reactjs"
] | stackoverflow_0074678030_debugging_javascript_reactjs.txt |
Q:
HTML <base> tag that works for more than just one document
I'm running an HTML/CSS/JS site (I'm not using PHP as the host I'm using doesn't support PHP). I'm trying to maintain two copies of the website: a local copy stored on my computer so that I can test any new changes I might want to make before putting the website up, and the official copy that's used on the website.
The problem is that whenever I finish making a change on the local copy, I need to go to each .html file and edit the links so that they work for the official copy. For example:
<!DOCTYPE html>
<html>
<head>
<style type="text/css" rel="stylesheet" href="C:/Users/<username>/Documents/Programs/refs/style.css">
</head>
<body>
<p>Hello, World!</p>
</body>
</html>
needs to be changed to
<!DOCTYPE html>
<html>
<head>
<style type="text/css" rel="stylesheet" href="https://www.example.com/refs/style.css">
</head>
<body>
<p>Hello, World!</p>
</body>
</html>
This is very time consuming, as I would estimate I have around 15 different pages and I have to change the links for each one of them.
I've heard of the <base> tag, which changes the base URL for each link on the document. So I could use
<base href="C:/Users/<username>/Documents/Programs/">
for the local copy and
<base href="https://www.example.com/">
for the official one.
However, the <base> tag only works for one document, so in order to update my official copy, I would have to change the <base> tag on each document.
Is there a way to make the <base> tag work for more than one HTML file (like $_SERVER['DOCUMENT_ROOT'] does for PHP), or another way so that I can change all the <base> tags by only editing one file (e.g. a JavaScript file)? Any help would be very appreciated!
A:
Have you ever heard of Relative links?
Based on where is the file you are running you can retrieve resources in different directories starting from where the html file is stored.
So, if your html file is in
C:/Users/<username>/Documents/Programs/
or
https://www.example.com/
you can just put
<style type="text/css" rel="stylesheet" href="refs/style.css">
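As an aside on the markup itself: stylesheets are conventionally referenced with a `<link>` element rather than `<style>`, and relative paths resolve from the referencing page's own location; for a page stored one level deeper (a hypothetical pages/about.html), that would be:

```html
<!-- "../" climbs from pages/ back up to the directory holding refs/ -->
<link rel="stylesheet" href="../refs/style.css">
```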
HTML <base> tag that works for more than just one document | I'm running an HTML/CSS/JS site (I'm not using PHP as the host I'm using doesn't support PHP). I'm trying to maintain two copies of the website: a local copy stored on my computer so that I can test any new changes I might want to make before putting the website up, and the official copy that's used on the website.
The problem is that whenever I finish making a change on the local copy, I need to go to each .html file and edit the links so that they work for the official copy. For example:
<!DOCTYPE html>
<html>
<head>
<style type="text/css" rel="stylesheet" href="C:/Users/<username>/Documents/Programs/refs/style.css">
</head>
<body>
<p>Hello, World!</p>
</body>
</html>
needs to be changed to
<!DOCTYPE html>
<html>
<head>
<style type="text/css" rel="stylesheet" href="https://www.example.com/refs/style.css">
</head>
<body>
<p>Hello, World!</p>
</body>
</html>
This is very time consuming, as I would estimate I have around 15 different pages and I have to change the links for each one of them.
I've heard of the <base> tag, which changes the base URL for each link on the document. So I could use
<base href="C:/Users/<username>/Documents/Programs/">
for the local copy and
<base href="https://www.example.com/">
for the official one.
However, the <base> tag only works for one document, so in order to update my official copy, I would have to change the <base> tag on each document.
Is there a way to make the <base> tag work for more than one HTML file (like $_SERVER['DOCUMENT_ROOT'] does for PHP), or another way so that I can change all the <base> tags by only editing one file (e.g. a JavaScript file)? Any help would be very appreciated!
| [
"Have you ever heard of Relative links?\nBased on where is the file you are running you can retrieve resources in different directories starting from where the html file is stored.\nSo, if your html file is in\n\nC:/Users//Documents/Programs/\n\nor\n\nhttps://www.example.com/\n\nyou can just put\n<style type=\"text/css\" rel=\"stylesheet\" href=\"refs/style.css\">\n\n"
] | [
0
] | [] | [] | [
"html",
"path",
"url"
] | stackoverflow_0074678037_html_path_url.txt |
Q:
JavaFX Test Error: Process 'Gradle Test Executor (number)' finished with non-zero exit value 1
I'm trying to do some unit tests in a JavaFX application built with Gradle.
I created a directory called test in ../src/.
So in src I have the main package and the test package (src/main and src/test).
When I try to run the tests, it shows this error message:
Execution failed for task ':test'.
Process 'Gradle Test Executor 1' finished with non-zero exit value 1
Try:
Run with --stacktrace option to get the stack trace.
Run with --info or --debug option to get more log output.
Run with --scan to get full insights.
I searched Stack Overflow: for the same error, some say it's in the wrong path, but from what I can see it isn't. Others say it's the pom.xml file without the correct dependency, but I have about 20 pom.xml files in the Gradle package. Others say to downgrade the version of JUnit 5.
What should I do?
A:
It's difficult to say for sure without more information, but there are a few things you can try to fix this issue:
Make sure that your tests are located in the correct path within the src directory. Typically, tests should be located in src/test/java (for Java tests) or src/test/kotlin (for Kotlin tests).
Check your build.gradle file to make sure that you have the necessary dependencies for running tests. For example, you should have a testImplementation dependency for JUnit 5 and any other libraries that your tests rely on.
If you're using JUnit 5, try downgrading to an earlier version to see if that fixes the issue. If you're already using an earlier version of JUnit, try upgrading to the latest version to see if that fixes the problem.
If you're still having issues, try running the tests with the --stacktrace or --debug options to get more detailed information about the error. This can help you identify the specific cause of the problem.
Ultimately, the solution will depend on the specific details of your project, so it's hard to say for sure what the problem is without more information. However, the steps above should help you troubleshoot the issue and hopefully find a solution.
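As a concrete illustration of the dependency point above, here is a minimal sketch of the relevant build.gradle section (Groovy DSL; the JUnit version shown is just an example, not taken from the question):

```groovy
dependencies {
    // JUnit 5 (Jupiter) API and engine for writing and running tests
    testImplementation 'org.junit.jupiter:junit-jupiter:5.9.1'
}

test {
    // Required so Gradle runs JUnit 5 tests via the JUnit Platform
    useJUnitPlatform()
}
```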
JavaFX Test Error: Process 'Gradle Test Executor (number)' finished with non-zero exit value 1 | I'm trying to do some unit tests in a JavaFX application built with Gradle.
I created a directory called test in ../src/.
So in src I have the main package and the test package (src/main and src/test).
When I try to run the tests, it shows this error message:
Execution failed for task ':test'.
Process 'Gradle Test Executor 1' finished with non-zero exit value 1
Try:
Run with --stacktrace option to get the stack trace.
Run with --info or --debug option to get more log output.
Run with --scan to get full insights.
I searched Stack Overflow: for the same error, some say it's in the wrong path, but from what I can see it isn't. Others say it's the pom.xml file without the correct dependency, but I have about 20 pom.xml files in the Gradle package. Others say to downgrade the version of JUnit 5.
What should I do?
| [
"It's difficult to say for sure without more information, but there are a few things you can try to fix this issue:\n\nMake sure that your tests are located in the correct path within the src directory. Typically, tests should be located in src/test/java (for Java tests) or src/test/kotlin (for Kotlin tests).\nCheck your build.gradle file to make sure that you have the necessary dependencies for running tests. For example, you should have a testImplementation dependency for JUnit 5 and any other libraries that your tests rely on.\nIf you're using JUnit 5, try downgrading to an earlier version to see if that fixes the issue. If you're already using an earlier version of JUnit, try upgrading to the latest version to see if that fixes the problem.\nIf you're still having issues, try running the tests with the --stacktrace or --debug options to get more detailed information about the error. This can help you identify the specific cause of the problem.\n\nUltimately, the solution will depend on the specific details of your project, so it's hard to say for sure what the problem is without more information. However, the steps above should help you troubleshoot the issue and hopefully find a solution.\n"
] | [
0
] | [] | [] | [
"gradle",
"java",
"javafx",
"junit5",
"testing"
] | stackoverflow_0074675809_gradle_java_javafx_junit5_testing.txt |
Q:
Why it is showing your kernel is dead while training my artificial neural network?
While building, modelling and training my Artificial Neural Network, it is showing Kernel is Dead.
ann = models.Sequential([
layers.Flatten(input_shape = (32,32,3)),
layers.Dense(3000, activation = 'relu'),
layers.Dense(1000, activation = 'relu'),
layers.Dense(10, activation = 'sigmoid')
])
# Parameters used while training this neural network
ann.compile(optimizer = 'SGD',
loss = 'sparse_categorical_crossentropy',
metrics = ['accuracy'])
ann.fit(x_train, y_train, epochs = 5)
A:
It may be because your program is dead. Try restarting your program.
| Why it is showing your kernel is dead while training my artificial neural network? | While building, modelling and training my Artificial Neural Network, it is showing Kernel is Dead.
ann = models.Sequential([
layers.Flatten(input_shape = (32,32,3)),
layers.Dense(3000, activation = 'relu'),
layers.Dense(1000, activation = 'relu'),
layers.Dense(10, activation = 'sigmoid')
])
# Parameters used while training this neural network
ann.compile(optimizer = 'SGD',
loss = 'sparse_categorical_crossentropy',
metrics = ['accuracy'])
ann.fit(x_train, y_train, epochs = 5)
| [
"It maybe because your program is dead. Try to restart your program\n"
] | [
0
] | [] | [] | [
"artificial_intelligence",
"computer_vision",
"conv_neural_network",
"deep_learning",
"machine_learning"
] | stackoverflow_0074673289_artificial_intelligence_computer_vision_conv_neural_network_deep_learning_machine_learning.txt |
Q:
How to return upon encoutering first "true" in a List[IO[Boolean]] in Scala Cats Effect
Say I have a set of rules that have a validation function that returns IO[Boolean] at runtime.
case class Rule1() {
def validate(): IO[Boolean] = IO.pure(false)
}
case class Rule2() {
def validate(): IO[Boolean] = IO.pure(false)
}
case class Rule3() {
def validate(): IO[Boolean] = IO.pure(true)
}
val rules = List(Rule1(), Rule2(), Rule3())
Now I have to iterate through these rules and see if any of these rules holds; if not, throw an exception!
for {
i <- rules.map(_.validate()).sequence
_ <- if (i.contains(true)) IO.unit else IO.raiseError(new RuntimeException("Failed"))
} yield ()
The problem with the code snippet above is that it is trying to evaluate all the rules! What I really want is to exit at the encounter of the first true validation.
Not sure how to achieve this using cats effects in Scala.
A:
If you take a look at list of available extension methods in your IDE, you can find findM:
for {
opt <- rules.findM(_.validate())
_ <- opt match {
case Some(_) => IO.unit
    case None => IO.raiseError(new RuntimeException("Failed"))
}
} yield ()
Doing it manually could be done with foldLeft and flatMap:
rules.foldLeft(IO.pure(false)) { (valueSoFar, nextValue) =>
valueSoFar.flatMap {
case true => IO.pure(true) // can skip evaluating nextValue
case false => nextValue.validate() // need to find the first true IO yet
}
}.flatMap {
case true => IO.unit
  case false => IO.raiseError(new RuntimeException("Failed"))
}
The former should have the additional advantage that it doesn't have to iterate over whole collection when it finds the first match, while the latter will still go through all items, even if will start discarding them at some point. findM solves that by using tailRecM internally to terminate the iteration on first met condition.
A:
I claim that existsM is the most direct way to achieve what you want. It behaves pretty much the same as exists, but for monadic predicates:
for {
t <- rules.existsM(_.validate())
_ <- IO.raiseUnless(t)(new RuntimeException("Failed"))
} yield ()
It also stops the search as soon as it finds the first true.
The raiseUnless is just some syntactic sugar that's equivalent to the if-else from your question.
A:
You can try recursive
def firstTrue(rules: List[{def validate(): IO[Boolean]}]): IO[Unit] = rules match {
case r :: rs => for {
b <- r.validate()
res <- if (b) IO.unit else firstTrue(rs)
} yield res
case _ => IO.raiseError(new RuntimeException("Failed"))
}
A:
Another approach is not using booleans at all, but the monad capabilities of IO
def validateRules(rules: List[Rule]): IO[Unit] =
rules.traverse_ { rule =>
rule.validate().flatMap { flag =>
IO.raiseUnless(flag)(new RuntimeException("Failed"))
}
}
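The short-circuit behaviour that findM/existsM give you for IO mirrors what plain exists does for strict predicates. A dependency-free sketch (no cats-effect; the thunks stand in for the rules, and the object name is made up) showing that evaluation really stops at the first true:

```scala
object ShortCircuitDemo extends App {
  var evaluated = 0

  // Three "rules" as thunks so we can observe how many actually run.
  val rules: List[() => Boolean] = List(
    () => { evaluated += 1; false },
    () => { evaluated += 1; true },  // first passing rule
    () => { evaluated += 1; false }  // never evaluated
  )

  // Iterator keeps the mapping lazy; exists stops at the first true.
  val anyValid = rules.iterator.map(_.apply()).exists(identity)

  assert(anyValid)
  assert(evaluated == 2) // the third rule was never run
  println(s"evaluated=$evaluated anyValid=$anyValid")
}
```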
| How to return upon encoutering first "true" in a List[IO[Boolean]] in Scala Cats Effect | Say I have a set of rules that have a validation function that returns IO[Boolean] at runtime.
case class Rule1() {
def validate(): IO[Boolean] = IO.pure(false)
}
case class Rule2() {
def validate(): IO[Boolean] = IO.pure(false)
}
case class Rule3() {
def validate(): IO[Boolean] = IO.pure(true)
}
val rules = List(Rule1(), Rule2(), Rule3())
Now I have to iterate through these rules and see if any of these rules holds; if not, throw an exception!
for {
i <- rules.map(_.validate()).sequence
_ <- if (i.contains(true)) IO.unit else IO.raiseError(new RuntimeException("Failed"))
} yield ()
The problem with the code snippet above is that it is trying to evaluate all the rules! What I really want is to exit at the encounter of the first true validation.
Not sure how to achieve this using cats effects in Scala.
| [
"If you take a look at list of available extension methods in your IDE, you can find findM:\nfor {\n opt <- rules.findM(_.validate())\n _ <- opt match {\n case Some(_) => IO.unit\n case None => IO.raiseError(new RuntimeException(\"Failed\")\n }\n} yield ()\n\nDoing it manually could be done with foldLeft and flatMap:\nrules.foldLeft(IO.pure(false)) { (valueSoFar, nextValue) =>\n valueSoFar.flatMap {\n case true => IO.pure(true) // can skip evaluating nextValue \n case false => nextValue.validate() // need to find the first true IO yet\n }\n}.flatMap {\n case true => IO.unit\n case false => IO.raiseError(new RuntimeException(\"Failed\")\n}\n\nThe former should have the additional advantage that it doesn't have to iterate over whole collection when it finds the first match, while the latter will still go through all items, even if will start discarding them at some point. findM solves that by using tailRecM internally to terminate the iteration on first met condition.\n",
"I claim that existsM is the most direct way to achieve what you want. It behaves pretty much the same as exists, but for monadic predicates:\nfor {\n t <- rules.existsM(_.validate())\n _ <- IO.raiseUnless(t)(new RuntimeException(\"Failed\"))\n} yield ()\n\nIt also stops the search as soon as it finds the first true.\nThe raiseUnless is just some syntactic sugar that's equivalent to the if-else from your question.\n",
"You can try recursive\ndef firstTrue(rules: List[{def validate(): IO[Boolean]}]): IO[Unit] = rules match {\n case r :: rs => for {\n b <- r.validate()\n res <- if (b) IO.unit else firstTrue(rs)\n } yield res\n case _ => IO.raiseError(new RuntimeException(\"Failed\"))\n}\n\n",
"Another approach is not using booleans at all, but the monad capabilities of IO\ndef validateRules(rules: List[Rule]): IO[Unit] =\n rules.traverse_ { rule =>\n rule.validate().flatMap { flag =>\n IO.raiseUnless(flag)(new RuntimeException(\"Failed\"))\n }\n }\n\n"
] | [
5,
5,
4,
4
] | [] | [] | [
"cats_effect",
"exists",
"findfirst",
"scala",
"scala_cats"
] | stackoverflow_0074677712_cats_effect_exists_findfirst_scala_scala_cats.txt |
Q:
how to show toast if auth failed jetpack compose firebase if else @Composable invocations can only happen from the context of a @Composable function
There appear to be an infinite number of explanations for this error on stackoverflow, none of which address my issue.
I want to show a toast if the authentication failed
I am using firebase auth but the error is with the Location context
how can I pass through this limitation?
source code for the button
Button(
onClick = {
auth.signInWithEmailAndPassword(email, password)
.addOnCompleteListener { task ->
if (task.isSuccessful) {
navController.navigate(Screen.PreferenceScreen.route)
} else {
// If sign in fails, display a message to the user.
Log.w(TAG, "createUserWithEmail:failure", task.exception)
Toast.makeText(
LocalContext.current,
"Authentication failed.",
Toast.LENGTH_SHORT
).show()
}
}
},
modifier = Modifier
.fillMaxWidth()
.padding(8.dp),
enabled = isPasswordValid && confirmPassword == password,
) {
Text(text = "Register")
}
}
A:
Just declare the context outside of the Button, and use it in the Toast like this.
@Composable
fun MyButtonWithToast() {
val context = LocalContext.current
Button(
onClick = {
Toast.makeText(
context,
"Authentication failed.",
Toast.LENGTH_SHORT
).show()
}
) {
Text(text = "Register")
}
}
or if you have a composable structure, just simply declare it there and pass it in the composable this button is in
@Composable
fun SomeParentComposable() {
val context = LocalContext.current
MyButtonWithToast(context = context)
}
| how to show toast if auth failed jetpack compose firebase if else @Composable invocations can only happen from the context of a @Composable function | There appear to be an infinite number of explanations for this error on stackoverflow, none of which address my issue.
I want to show a toast if the authentication failed
I am using firebase auth, but the error is with the LocalContext
how can I pass through this limitation?
source code for the button
Button(
onClick = {
auth.signInWithEmailAndPassword(email, password)
.addOnCompleteListener { task ->
if (task.isSuccessful) {
navController.navigate(Screen.PreferenceScreen.route)
} else {
// If sign in fails, display a message to the user.
Log.w(TAG, "createUserWithEmail:failure", task.exception)
Toast.makeText(
LocalContext.current,
"Authentication failed.",
Toast.LENGTH_SHORT
).show()
}
}
},
modifier = Modifier
.fillMaxWidth()
.padding(8.dp),
enabled = isPasswordValid && confirmPassword == password,
) {
Text(text = "Register")
}
}
| [
"Just declare the context outside of the Button, and use it in the Toast like this.\n@Composable\nfun MyButtonWithToast() {\n\n val context = LocalContext.current\n \n\n Button(\n onClick = {\n Toast.makeText(\n context,\n \"Authentication failed.\",\n Toast.LENGTH_SHORT\n ).show()\n }\n ) {\n Text(text = \"Register\")\n }\n}\n\nor if you have a composable structure, just simply declare it there and pass it in the composable this button is in\n@Composable\nfun SomeParentComposable() {\n\n val context = LocalContext.current\n \n MyButtonWithToast(context = context)\n}\n\n"
] | [
1
] | [] | [] | [
"android",
"android_jetpack_compose",
"android_toast",
"kotlin",
"toast"
] | stackoverflow_0074677970_android_android_jetpack_compose_android_toast_kotlin_toast.txt |
Q:
how do i make it only run the desired option?
this script was working fine for a long time; now it doesn't manage to go to the right option and just runs all of them
I have tried every option that I can think of, but I just can't see the issue; I have even tried it on other Windows updates to check if it was the update that broke it.
It's a .bat file; the full code is here
A small code snippet:
@echo off
cls
echo.
echo install options:
echo.
echo Office Tools [0]
echo.
echo Windows Utilities [1]
echo.
echo Browser [2]
echo.
echo Social [3]
echo.
echo Media Player [4]
echo.
:finish
set /p options="How would you like to continue?"
if '%options%'=='0' goto 0
if '%options%'=='1' goto 1
if '%options%'=='2' goto 2
if '%options%'=='3' goto 3
if '%options%'=='0001' goto 0001
if '%options%'=='0002' goto 0002
if '%options%'=='0003' goto 0003
if '%options%'=='0004' goto 0004
goto finish
:0
echo.
echo VS code [0001]
echo GitHubDesktop [0002]
echo git [0003]
echo 7zip [0004]
echo Notion [0005]
echo.
;;
:1
echo.
echo Powertoys [1001]
echo Updates WindowsTerminal [1002]
echo.
:2
echo.
echo Firefox [2001]
echo Chrome [2002]
echo.
:3
echo.
echo Discord [3001]
echo Spotify [3002]
echo.
goto finish
:0001
echo.
echo Installing vscode
echo.
winget install vscode
:0002
echo.
echo Installing GitHubDesktop
echo.
winget install GitHub.GitHubDesktop
:0003
echo.
echo Installing git
echo.
winget install Git.Git
:0004
echo.
echo Installing 7zip
echo.
winget install 7zip.7zip
goto finish
A:
Adding a goto finish to each option will solve this issue (solved by @Stephan):
:0001
echo.
echo Installing vscode
echo.
winget install vscode
goto finish
giving the "goto finsh" at the end of each option and seting the ":finsh" before the "set /p options="How would you like to continue?"" will sove these issue
how do i make it only run the desired option? | this script was working fine for a long time; now it doesn't manage to go to the right option and just runs all of them
I have tried every option that I can think of, but I just can't see the issue; I have even tried it on other Windows updates to check if it was the update that broke it.
It's a .bat file; the full code is here
A small code snippet:
@echo off
cls
echo.
echo install options:
echo.
echo Office Tools [0]
echo.
echo Windows Utilities [1]
echo.
echo Browser [2]
echo.
echo Social [3]
echo.
echo Media Player [4]
echo.
:finish
set /p options="How would you like to continue?"
if '%options%'=='0' goto 0
if '%options%'=='1' goto 1
if '%options%'=='2' goto 2
if '%options%'=='3' goto 3
if '%options%'=='0001' goto 0001
if '%options%'=='0002' goto 0002
if '%options%'=='0003' goto 0003
if '%options%'=='0004' goto 0004
goto finish
:0
echo.
echo VS code [0001]
echo GitHubDesktop [0002]
echo git [0003]
echo 7zip [0004]
echo Notion [0005]
echo.
;;
:1
echo.
echo Powertoys [1001]
echo Updates WindowsTerminal [1002]
echo.
:2
echo.
echo Firefox [2001]
echo Chrome [2002]
echo.
:3
echo.
echo Discord [3001]
echo Spotify [3002]
echo.
goto finish
:0001
echo.
echo Installing vscode
echo.
winget install vscode
:0002
echo.
echo Installing GitHubDesktop
echo.
winget install GitHub.GitHubDesktop
:0003
echo.
echo Installing git
echo.
winget install Git.Git
:0004
echo.
echo Installing 7zip
echo.
winget install 7zip.7zip
goto finish
| [
"adding a goto finish to each option will solve these issue, solved by @Stephan\n:0001\necho.\necho Installing vscode\necho.\nwinget install vscode\ngoto finish\n\ngiving the \"goto finsh\" at the end of each option and seting the \":finsh\" before the \"set /p options=\"How would you like to continue?\"\" will sove these issue\n"
] | [
0
] | [] | [] | [
"batch_file",
"cmd",
"syntax_error",
"terminal",
"windows"
] | stackoverflow_0074677918_batch_file_cmd_syntax_error_terminal_windows.txt |
Q:
How to uninstall fvm (Flutter Version Manager)?
I want to uninstall FVM and install native Flutter
I already looked in the FVM documentation but I didn't find anything, and I need to uninstall FVM and install native Flutter
A:
Run the command fvm list; this will output the directory used for the Flutter cache. Delete that directory. If you installed using pub, run dart pub global deactivate fvm; if you used a standalone installation, please follow its instructions.
| How to uninstall fvm (Flutter Version Manager)? | I want to uninstall FVM and install native Flutter
I already looked in the FVM documentation but I didn't find anything, and I need to uninstall FVM and install native Flutter
| [
"Run command fvm list this will output the directory used for Flutter cache. Delete that directory. If you installed using pub run dart pub global deactivate fvm, if you used a standalone installation please follow its instructions.\n"
] | [
0
] | [] | [] | [
"dart",
"flutter"
] | stackoverflow_0074677965_dart_flutter.txt |
Q:
Type 'MyType' is not assignable to type 'WritableDraft'
I use @reduxjs/toolkit and my state contains: allSlides: ISlide[];
When I try to change anything in allSlides like ex.
setAllSlides(state, action: PayloadAction<ISlide[]>) {
state.allSlides = action.payload;
},
storeNewSlide(state, action: PayloadAction<ISlide>) {
state.allSlides.push(action.payload);
},
I get a TS error
Type 'ISlide' is not assignable to type 'WritableDraft<ISlide>'
I don't have problem with changing array of primitive values or just one property in objects, but I don't know how to change correctly whole array or specific object in array.
A:
You have to add the type in the action
ie:
export const storeNewSlide = createAsyncThunk(
"slides/storeNewSlide",
async (slide: ISlide) => {
return {
slide,
};
}
);
and then in your allSlides:
storeNewSlide(state, action: PayloadAction<ISlide>) {
state.allSlides.push(action.payload.slide);
},
Same for the array of slides
A:
Had similar issue,
I think redux doesn't like direct mutation of state object.
so spread operator solved for me
setAllSlides(state, action: PayloadAction<ISlide[]>) {
state = { ...state, allSlices:action.payload };
},
storeNewSlide(state, action: PayloadAction<ISlide>) {
state = { ...state, allSlices: [...state.allSlices, action.payload] };
},
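A dependency-free way to see the usual fix for this error: immer (which Redux Toolkit uses under the hood) exports a castDraft helper that casts a plain value to its draft type, and a plain type assertion does the same job. The ISlide shape and the WritableDraft definition below are simplified stand-ins for illustration, not the real immer types:

```typescript
// Simplified stand-in for immer's WritableDraft: strips readonly one level
// deep (the real immer type is recursive).
type WritableDraft<T> = { -readonly [K in keyof T]: T[K] };

interface ISlide {
  readonly id: number;
  title: string;
}

const allSlides: WritableDraft<ISlide>[] = [];
const incoming: ISlide = { id: 1, title: "intro" };

// Equivalent of `state.allSlides.push(castDraft(action.payload))` in a
// real reducer: the cast satisfies the compiler without changing runtime
// behavior.
allSlides.push(incoming as WritableDraft<ISlide>);

console.log(allSlides.length); // 1
```

In a real slice you would `import { castDraft } from "immer"` and write `state.allSlides.push(castDraft(action.payload))` instead of the raw assertion.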
| Type 'MyType' is not assignable to type 'WritableDraft' | I use @reduxjs/toolkit and my state contains: allSlides: ISlide[];
When I try to change anything in allSlides like ex.
setAllSlides(state, action: PayloadAction<ISlide[]>) {
state.allSlides = action.payload;
},
storeNewSlide(state, action: PayloadAction<ISlide>) {
state.allSlides.push(action.payload);
},
I get a TS error
Type 'ISlide' is not assignable to type 'WritableDraft<ISlide>'
I don't have problem with changing array of primitive values or just one property in objects, but I don't know how to change correctly whole array or specific object in array.
| [
"You have to add the type in the action\nie:\nexport const storeNewSlide = createAsyncThunk(\n \"slides/storeNewSlide\",\n async (slide: ISlide) => {\n return {\n slide,\n };\n }\n);\n\nand then in your allSlides:\nstoreNewSlide(state, action: PayloadAction<ISlide>) {\n state.allSlides.push(action.payload.slide);\n},\n\nSame for the array of slides\n",
"Had similar issue,\nI think redux doesn't like direct mutation of state object.\nso spread operator solved for me\nsetAllSlides(state, action: PayloadAction<ISlide[]>) {\n state = { ...state, allSlices:action.payload };\n},\nstoreNewSlide(state, action: PayloadAction<ISlide>) {\n state = { ...state, allSlices: [...state.allSlices, action.payload } ];\n},\n\n"
] | [
1,
0
] | [] | [] | [
"immer.js",
"redux_toolkit",
"typescript"
] | stackoverflow_0071999774_immer.js_redux_toolkit_typescript.txt |
Q:
react javascript image cropper Cannot read properties of null
Im trying to get this code for image browsing then cropping the browsed image working:
This is the supposed code to be working:
import "./styles.css";
import React, { useState } from "react";
import ReactCrop from 'react-image-crop';
import 'react-image-crop/dist/ReactCrop.css'
export default function App() {
const [src, selectFile] = useState(null);
const onImageChange = (event) => {
selectFile(URL.createObjectURL(event.target.files[0]));
};
const [image, setImage] = useState(null);
const [crop, setCrop] = useState({ aspect: 16 / 9 });
const [result, setResult] = useState(null);
function getCroppedImg(){
const canvas = document.createElement('canvas');
const scaleX = image.naturalWidth / image.width;
const scaleY = image.naturalHeight / image.height;
canvas.width = crop.width;
canvas.height = crop.height;
const ctx = canvas.getContext('2d');
ctx.drawImage
(
image,
crop.x * scaleX,
crop.y * scaleY,
crop.width * scaleX,
crop.height * scaleY,
0,
0,
crop.width,
crop.height,
);
const base64Image = canvas.toDataURL('image/jpeg');
setResult(base64Image)
}
return (
<div className="container">
<div className='row'>
<div className='col-6'>
<input type="file" accept ='image/*' onChange={onImageChange}/>
</div>
{src && <div className='col-6'>
<ReactCrop src={src} onImageLoaded={setImage} crop={crop} onChange={setCrop} />
<button className='btn btn-danger' onClick={getCroppedImg} > Crop Image </button>
</div>}
{result && <div className='col-6'> <img src={result} alt='Cropped Image' className='img-fluid' />
</div>}
</div>
</div>
);
}
You can use this sandbox link to immediately test and debug the code and see the error,
code testing in sandbox
This full code is not mainly mine; I have been following this tutorial on YouTube, trying to get it working so I can learn from it and use it in my main project. But I cannot get it working because of this error, which does not occur in the tutorial even though I am not missing any line of code, so I cannot understand why it is happening. I would appreciate help understanding why it happens.
this is the yt link: yt tutorial code
Also, when I try to browse the image in the current code it doesn't work, so I tried to fix it by adding this line
<img src={src} />
under it, and it actually started to work for showing the image, but the cropping functionality is still not working.
A:
The error you're seeing may be related to how the code is using React's useState hook.
In React, a hook is a special function that lets you "hook into" React features from your functional components. The useState hook is a way to add state to a functional component. It takes an initial value for the state and returns an array with two items: the current state value and a function to update the state.
In the code you posted, useState is being used to create two state variables: src and image. Here's how the code sets up the initial values for these state variables:
const [src, selectFile] = useState(null);
const [image, setImage] = useState(null);
The first useState call sets up src with an initial value of null, and the second useState call sets up image with an initial value of null.
The error you're seeing may be related to how these state variables are being used in the code. For example, the code is trying to set the src state variable when the user selects a file:
const onImageChange = (event) => {
selectFile(URL.createObjectURL(event.target.files[0]));
};
Here, the onImageChange function is being passed to the onChange attribute of an <input type="file"> element. When the user selects a file, the onImageChange function is called, which updates the src state variable using the selectFile function that was returned by the useState call.
It's possible that the error you're seeing is related to how the src state variable is being used in the code. For example, the code is using the src state variable to set the src attribute of an <img> element:
{src && <img src={src} />}
This code uses the "short-circuit" operator (&&) to only render the <img> element if src is truthy (i.e. not null or undefined). If src is null or undefined, the <img> element will not be rendered.
If the error you're seeing is related to the src state variable, it may be because the src state variable is not being properly initialized, or because the src state variable is being set to null or undefined at some point in the code.
I would recommend carefully reviewing the code and making sure that the src state variable is being initialized with a non-null value, and that it is not being set to null or undefined at any point in the code. You may also want to check the React documentation for more information on using the useState hook.
A:
The error you are seeing is because image is null when you try to access its properties in the getCroppedImg function. This happens because the image state is not set until the onImageLoaded prop of the ReactCrop component is called, which only happens when the image is loaded. However, the getCroppedImg function is called before the image is loaded, so image is still null at that point.
One way to fix this would be to check if image is null before trying to access its properties in the getCroppedImg function. You can do this by adding a conditional statement like this:
function getCroppedImg(){
if (image) {
// rest of the code
}
}
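The failure mode this guard prevents can be shown without React at all: image starts out null (like the React state), and reading properties from null throws, so the guard simply bails out until a real image object exists. The LoadedImage shape below is an assumed stand-in for the DOM image element:

```typescript
interface LoadedImage { naturalWidth: number; width: number; }

// Same guard as above: return null instead of throwing while the image
// has not loaded yet.
function getScale(img: LoadedImage | null): number | null {
  if (!img) return null;
  return img.naturalWidth / img.width;
}

console.log(getScale(null));                               // null
console.log(getScale({ naturalWidth: 1600, width: 800 })); // 2
```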
Alternatively, you can delay calling the getCroppedImg function until after the image is loaded. To do this, you can pass a callback function to the onImageLoaded prop of the ReactCrop component, and call the getCroppedImg function from inside that callback. This way, the getCroppedImg function will be called after the image state has been set, so it will not be null at that point.
Here's an example of how you can do this:
function App() {
const [src, selectFile] = useState(null);
const onImageChange = (event) => {
selectFile(URL.createObjectURL(event.target.files[0]));
};
const [image, setImage] = useState(null);
const [crop, setCrop] = useState({ aspect: 16 / 9 });
const [result, setResult] = useState(null);
function getCroppedImg(){
const canvas = document.createElement('canvas');
const scaleX = image.naturalWidth / image.width;
const scaleY = image.naturalHeight / image.height;
canvas.width = crop.width;
canvas.height = crop.height;
const ctx = canvas.getContext('2d');
ctx.drawImage
(
image,
crop.x * scaleX,
crop.y * scaleY,
crop.width * scaleX,
crop.height * scaleY,
0,
0,
crop.width,
crop.height,
);
const base64Image = canvas.toDataURL('image/jpeg');
setResult(base64Image)
}
return (
<div className="container">
<div className='row'>
<div className='col-6'>
<input type="file" accept ='image/*' onChange={onImageChange}/>
</div>
{src && <div className='col-6'>
<ReactCrop
  src={src}
  crop={crop}
  onChange={setCrop}
  onImageLoaded={(img) => {
    setImage(img);
    getCroppedImg();
  }}
/>
<button className='btn btn-danger' onClick={getCroppedImg} > Crop Image </button>
</div>}
A:
I have managed to solve the error by adding in the React Crop part this part:
<div>
{srcImg && (
<div>
<ReactCrop
style={{ maxWidth: "50%", maxHeight: "50%" }}
crop={crop}
onChange={setCrop}
>
<img
src={srcImg}
onLoad={onLoad}
/></ReactCrop>
It seems that in react-image-crop the onImageLoaded callback is no longer functioning; that's why I couldn't get it to work before and the image was null. Therefore I used onLoad on the <img> to make the crop function work again.
| react javascript image cropper Cannot read properties of null | Im trying to get this code for image browsing then cropping the browsed image working:
This is the supposed code to be working:
import "./styles.css";
import React, { useState } from "react";
import ReactCrop from 'react-image-crop';
import 'react-image-crop/dist/ReactCrop.css'
export default function App() {
const [src, selectFile] = useState(null);
const onImageChange = (event) => {
selectFile(URL.createObjectURL(event.target.files[0]));
};
const [image, setImage] = useState(null);
const [crop, setCrop] = useState({ aspect: 16 / 9 });
const [result, setResult] = useState(null);
function getCroppedImg(){
const canvas = document.createElement('canvas');
const scaleX = image.naturalWidth / image.width;
const scaleY = image.naturalHeight / image.height;
canvas.width = crop.width;
canvas.height = crop.height;
const ctx = canvas.getContext('2d');
ctx.drawImage
(
image,
crop.x * scaleX,
crop.y * scaleY,
crop.width * scaleX,
crop.height * scaleY,
0,
0,
crop.width,
crop.height,
);
const base64Image = canvas.toDataURL('image/jpeg');
setResult(base64Image)
}
return (
<div className="container">
<div className='row'>
<div className='col-6'>
<input type="file" accept ='image/*' onChange={onImageChange}/>
</div>
{src && <div className='col-6'>
<ReactCrop src={src} onImageLoaded={setImage} crop={crop} onChange={setCrop} />
<button className='btn btn-danger' onClick={getCroppedImg} > Crop Image </button>
</div>}
{result && <div className='col-6'> <img src={result} alt='Cropped Image' className='img-fluid' />
</div>}
</div>
</div>
);
}
You can use this sandbox link to immediately test and debug the code and see the error,
code testing in sandbox
This full code is not mainly mine; I have been following this tutorial on YouTube, trying to get it working so I can learn from it and use it in my main project. But I cannot get it working because of this error, which does not occur in the tutorial even though I am not missing any line of code, so I cannot understand why it is happening. I would appreciate help understanding why it happens.
this is the yt link: yt tutorial code
Also, when I try to browse the image in the current code it doesn't work, so I tried to fix it by adding this line
<img src={src} />
under it, and it actually started to work for showing the image, but the cropping functionality is still not working.
| [
"The error you're seeing may be related to how the code is using React's useState hook.\nIn React, a hook is a special function that lets you \"hook into\" React features from your functional components. The useState hook is a way to add state to a functional component. It takes an initial value for the state and returns an array with two items: the current state value and a function to update the state.\nIn the code you posted, useState is being used to create two state variables: src and image. Here's how the code sets up the initial values for these state variables:\nconst [src, selectFile] = useState(null);\nconst [image, setImage] = useState(null);\n\nThe first useState call sets up src with an initial value of null, and the second useState call sets up image with an initial value of null.\nThe error you're seeing may be related to how these state variables are being used in the code. For example, the code is trying to set the src state variable when the user selects a file:\nconst onImageChange = (event) => {\n selectFile(URL.createObjectURL(event.target.files[0]));\n};\n\nHere, the onImageChange function is being passed to the onChange attribute of an <input type=\"file\"> element. When the user selects a file, the onImageChange function is called, which updates the src state variable using the selectFile function that was returned by the useState call.\nIt's possible that the error you're seeing is related to how the src state variable is being used in the code. For example, the code is using the src state variable to set the src attribute of an <img> element:\n{src && <img src={src} />}\n\nThis code uses the \"short-circuit\" operator (&&) to only render the <img> element if src is truthy (i.e. not null or undefined). 
If src is null or undefined, the <img> element will not be rendered.\nIf the error you're seeing is related to the src state variable, it may be because the src state variable is not being properly initialized, or because the src state variable is being set to null or undefined at some point in the code.\nI would recommend carefully reviewing the code and making sure that the src state variable is being initialized with a non-null value, and that it is not being set to null or undefined at any point in the code. You may also want to check the React documentation for more information on using the useState hook.\n",
"The error you are seeing is because image is null when you try to access its properties in the getCroppedImg function. This happens because the image state is not set until the onImageLoaded prop of the ReactCrop component is called, which only happens when the image is loaded. However, the getCroppedImg function is called before the image is loaded, so image is still null at that point.\nOne way to fix this would be to check if image is null before trying to access its properties in the getCroppedImg function. You can do this by adding a conditional statement like this:\nfunction getCroppedImg(){\n if (image) {\n // rest of the code\n }\n}\n\nAlternatively, you can delay calling the getCroppedImg function until after the image is loaded. To do this, you can pass a callback function to the onImageLoaded prop of the ReactCrop component, and call the getCroppedImg function from inside that callback. This way, the getCroppedImg function will be called after the image state has been set, so it will not be null at that point.\nHere's an example of how you can do this:\nfunction App() {\n const [src, selectFile] = useState(null);\n\n const onImageChange = (event) => {\n selectFile(URL.createObjectURL(event.target.files[0]));\n };\n\n const [image, setImage] = useState(null);\n const [crop, setCrop] = useState({ aspect: 16 / 9 });\n const [result, setResult] = useState(null);\n\n function getCroppedImg(){\n const canvas = document.createElement('canvas');\n const scaleX = image.naturalWidth / image.width;\n const scaleY = image.naturalHeight / image.height;\n canvas.width = crop.width;\n canvas.height = crop.height;\n const ctx = canvas.getContext('2d');\n ctx.drawImage\n (\n image,\n crop.x * scaleX,\n crop.y * scaleY,\n crop.width * scaleX,\n crop.height * scaleY,\n 0,\n 0,\n crop.width,\n crop.height,\n );\n \n const base64Image = canvas.toDataURL('image/jpeg');\n setResult(base64Image)\n }\n\n return (\n <div className=\"container\">\n <div className='row'>\n <div 
className='col-6'>\n <input type=\"file\" accept ='image/*' onChange={onImageChange}/>\n</div>\n{src && <div className='col-6'>\n<ReactCrop\n src={src}\n onImageLoaded={setImage}\n crop={crop}\n onChange={setCrop}\n onImageLoaded={() => getCroppedImg()}\n/>\n<button className='btn btn-danger' onClick={getCroppedImg} > Crop Image </button>\n </div>}\n\n",
"I have managed to solve the error by adding in the React Crop part this part:\n <div>\n {srcImg && (\n <div>\n <ReactCrop\n style={{ maxWidth: \"50%\", maxHeight: \"50%\" }}\n crop={crop}\n onChange={setCrop}\n >\n <img\n src={srcImg}\n onLoad={onLoad}\n /></ReactCrop>\n\nseems like on react crop image the onimageloaded function is no longer functioning thats why i couldn't get it to work before and the image was null, therefore i used the onLoad for the img to make the crop function work again.\n"
] | [
0,
0,
0
] | [] | [] | [
"crop",
"javascript",
"react_image_crop",
"reactjs"
] | stackoverflow_0074670886_crop_javascript_react_image_crop_reactjs.txt |
Q:
gcc -std=c++20 compiler flag -Wrestrict not understood. Is the warning justified?
the following piece of code compiles fine with gcc 11.3.0.
#include <string>
std::string homeName( const std::string& home, std::string surl )
{
if ( surl.find( home ) == 0 )
{
surl.replace( 0, home.size(), "~" ); // shorten url by replacing $HOME with "~"
}
return surl;
}
That is, if a string surl starts with the value of home, replace the head with a tilde, "~".
However, when I switched to gcc 12.2.1, there is a warning (-Wrestrict, included in -Wall) which I do not understand. The warning is for the line with the replace command.
The full message (almost unreadable, as usual) is given below, but first the outcome of some trials: I trimmed the full compiler command to the essential parts.
gcc11 -Wrestrict -O2 -std=c++20 -c file.cpp # Ok, with/without optimisations.
gcc12 -Wrestrict -O2 -std=c++20 -c file.cpp # warning, but not with std=c++17
gcc12 -Wrestrict -O1 -std=c++20 -c file.cpp # Ok
From the manual:
-Wrestrict: Warn when an object referenced by a restrict-qualified parameter (or, in C++, a __restrict-qualified parameter) is aliased by another argument, or when copies between such objects overlap.
However, I do not really understand how this applies to the above piece of code. Can somebody explain what is going on?
Here is some sample output:
[655] build> g++ --version
g++ (SUSE Linux) 12.2.1 20220830 [revision e927d1cf141f221c5a32574bde0913307e140984]
Copyright (C) 2022 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
[655] build> g++ -Wrestrict -O2 -std=c++20 -c file.cpp
In file included from /usr/include/c++/12/string:40,
from file.cpp:1:
In static member function ‘static constexpr std::char_traits<char>::char_type* std::char_traits<char>::copy(char_type*, const char_type*, std::size_t)’,
inlined from ‘static constexpr void std::__cxx11::basic_string<_CharT, _Traits, _Alloc>::_S_copy(_CharT*, const _CharT*, size_type) [with _CharT = char; _Traits = std::char_traits<char>; _Alloc = std::allocator<char>]’ at /usr/include/c++/12/bits/basic_string.h:423:21,
inlined from ‘static constexpr void std::__cxx11::basic_string<_CharT, _Traits, _Alloc>::_S_copy(_CharT*, const _CharT*, size_type) [with _CharT = char; _Traits = std::char_traits<char>; _Alloc = std::allocator<char>]’ at /usr/include/c++/12/bits/basic_string.h:418:7,
inlined from ‘constexpr std::__cxx11::basic_string<_CharT, _Traits, _Allocator>& std::__cxx11::basic_string<_CharT, _Traits, _Alloc>::_M_replace(size_type, size_type, const _CharT*, size_type) [with _CharT = char; _Traits = std::char_traits<char>; _Alloc = std::allocator<char>]’ at /usr/include/c++/12/bits/basic_string.tcc:532:22,
inlined from ‘constexpr std::__cxx11::basic_string<_CharT, _Traits, _Alloc>& std::__cxx11::basic_string<_CharT, _Traits, _Alloc>::replace(size_type, size_type, const _CharT*, size_type) [with _CharT = char; _Traits = std::char_traits<char>; _Alloc = std::allocator<char>]’ at /usr/include/c++/12/bits/basic_string.h:2171:19,
inlined from ‘constexpr std::__cxx11::basic_string<_CharT, _Traits, _Alloc>& std::__cxx11::basic_string<_CharT, _Traits, _Alloc>::replace(size_type, size_type, const _CharT*) [with _CharT = char; _Traits = std::char_traits<char>; _Alloc = std::allocator<char>]’ at /usr/include/c++/12/bits/basic_string.h:2196:22,
inlined from ‘std::string homeName(const std::string&, std::string)’ at file.cpp:7:19:
/usr/include/c++/12/bits/char_traits.h:431:56: warning: ‘void* __builtin_memcpy(void*, const void*, long unsigned int)’ accessing 9223372036854775810 or more bytes at offsets [2, 9223372036854775807] and 1 may overlap up to 9223372036854775813 bytes at offset -3 [-Wrestrict]
431 | return static_cast<char_type*>(__builtin_memcpy(__s1, __s2, __n));
| ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~
[656] build>
A:
What's going on is an obvious gcc bug. There's nothing wrong with the shown code (except for unrelated bugs mentioned below). The same diagnostic started popping up from my own code, after switching to gcc 12.
I suspect that this specific case is another instance of bug 100366 which first appeared in gcc 11, but seems to have become more widespread in gcc 12 as a result of additional internal compiler optimizations tripping up on it.
The warning seems to be harmless. But it's annoying. So far, with my own code, I've been successful in coming up with tweaks that make it go away. In this case I don't get this warning if I rewrite this to:
if ( surl.find( home ) == 0 )
{
surl="~" + surl.substr(home.size());
}
That is, if a string surl starts with the value of home, replace the
head with a tilde,
If that's the case then the shown code slightly misses the mark. Because if my home directory is /home/sam, this replaces /home/samv/foo with ~v/foo, which is clearly wrong (your original version also suffers the same bug).
So, in addition to fixing the compiler warning, you should also think hard about what the correct logic should be here. It's not as straightforward as it might appear at first glance. Filenames are not just random strings. They have structure, and a little bit of organization to them.
| gcc -std=c++20 compiler flag -Wrestrict not understood. Is the warning justified? | the following piece of code compiles fine with gcc 11.3.0.
#include <string>
std::string homeName( const std::string& home, std::string surl )
{
if ( surl.find( home ) == 0 )
{
surl.replace( 0, home.size(), "~" ); // shorten url by replacing $HOME with "~"
}
return surl;
}
That is, if a string surl starts with the value of home, replace the head with a tilde, "~".
However, when I switched to gcc 12.2.1, there is a warning (-Wrestrict, included in -Wall) which I do not understand. The warning is for the line with the replace command.
The full message (almost unreadable, as usual) is given below, but first the outcome of some trials: I trimmed the full compiler command to the essential parts.
gcc11 -Wrestrict -O2 -std=c++20 -c file.cpp # Ok, with/without optimisations.
gcc12 -Wrestrict -O2 -std=c++20 -c file.cpp # warning, but not with std=c++17
gcc12 -Wrestrict -O1 -std=c++20 -c file.cpp # Ok
From the manual:
-Wrestrict: Warn when an object referenced by a restrict-qualified parameter (or, in C++, a __restrict-qualified parameter) is aliased by another argument, or when copies between such objects overlap.
However, I do not really understand how this applies to the above piece of code. Can somebody explain what is going on?
Here is some sample output:
[655] build> g++ --version
g++ (SUSE Linux) 12.2.1 20220830 [revision e927d1cf141f221c5a32574bde0913307e140984]
Copyright (C) 2022 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
[655] build> g++ -Wrestrict -O2 -std=c++20 -c file.cpp
In file included from /usr/include/c++/12/string:40,
from file.cpp:1:
In static member function ‘static constexpr std::char_traits<char>::char_type* std::char_traits<char>::copy(char_type*, const char_type*, std::size_t)’,
inlined from ‘static constexpr void std::__cxx11::basic_string<_CharT, _Traits, _Alloc>::_S_copy(_CharT*, const _CharT*, size_type) [with _CharT = char; _Traits = std::char_traits<char>; _Alloc = std::allocator<char>]’ at /usr/include/c++/12/bits/basic_string.h:423:21,
inlined from ‘static constexpr void std::__cxx11::basic_string<_CharT, _Traits, _Alloc>::_S_copy(_CharT*, const _CharT*, size_type) [with _CharT = char; _Traits = std::char_traits<char>; _Alloc = std::allocator<char>]’ at /usr/include/c++/12/bits/basic_string.h:418:7,
inlined from ‘constexpr std::__cxx11::basic_string<_CharT, _Traits, _Allocator>& std::__cxx11::basic_string<_CharT, _Traits, _Alloc>::_M_replace(size_type, size_type, const _CharT*, size_type) [with _CharT = char; _Traits = std::char_traits<char>; _Alloc = std::allocator<char>]’ at /usr/include/c++/12/bits/basic_string.tcc:532:22,
inlined from ‘constexpr std::__cxx11::basic_string<_CharT, _Traits, _Alloc>& std::__cxx11::basic_string<_CharT, _Traits, _Alloc>::replace(size_type, size_type, const _CharT*, size_type) [with _CharT = char; _Traits = std::char_traits<char>; _Alloc = std::allocator<char>]’ at /usr/include/c++/12/bits/basic_string.h:2171:19,
inlined from ‘constexpr std::__cxx11::basic_string<_CharT, _Traits, _Alloc>& std::__cxx11::basic_string<_CharT, _Traits, _Alloc>::replace(size_type, size_type, const _CharT*) [with _CharT = char; _Traits = std::char_traits<char>; _Alloc = std::allocator<char>]’ at /usr/include/c++/12/bits/basic_string.h:2196:22,
inlined from ‘std::string homeName(const std::string&, std::string)’ at file.cpp:7:19:
/usr/include/c++/12/bits/char_traits.h:431:56: warning: ‘void* __builtin_memcpy(void*, const void*, long unsigned int)’ accessing 9223372036854775810 or more bytes at offsets [2, 9223372036854775807] and 1 may overlap up to 9223372036854775813 bytes at offset -3 [-Wrestrict]
431 | return static_cast<char_type*>(__builtin_memcpy(__s1, __s2, __n));
| ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~
[656] build>
| [
"What's going on is an obvious gcc bug. There's nothing wrong with the shown code (except for unrelated bugs mentioned below). The same diagnostic started popping up from my own code, after switching to gcc 12.\nI suspect that this specific case is another instance of bug 100366 which first appeared in gcc 11, but seems to have become more widespread in gcc 12 as a result of additional internal compiler optimizations tripping up on it.\nThe warning seems to be harmless. But it's annoying. So far, with my own code, I've been successful in coming up with tweaks that make it go away. In this case I don't get this warning if I rewrite this to:\n if ( surl.find( home ) == 0 )\n {\n surl=\"~\" + surl.substr(home.size());\n }\n\n\nThat is, if a string surl starts with the value of home, replace the\nhead with a tilde,\n\nIf that's the case then the shown code slightly misses the mark. Because if my home directory is /home/sam, this replaces /home/samv/foo with ~v/foo, which is clearly wrong (your original version also suffers the same bug).\nSo, in addition to fixing the compiler warning you should also think hard what the correct logic should be here, too. It's not as straightforward as it might appear, on the first glance. Filenames are not just random strings. They have structure, and a little bit of organization to them.\n"
] | [
0
] | [] | [] | [
"c++",
"c++20",
"optimization"
] | stackoverflow_0074677951_c++_c++20_optimization.txt |
Q:
Warp effect for SKSpriteNodes passing through it?
I'm trying to apply a warp/distortion effect to all SKSpriteNodes that pass through a fixed rectangle sized area on the screen. Shown in the image below, the same SKSpriteNode will start at the top of the screen and work its way down allowing the rectangle distortion filter to warp the node as it passes through.
I've tried using an SKEffectNode, shown in the code below. However, I couldn't set a fixed height and width on the SKEffectNode, which later gave me inconsistent warp effects due to the SKEffectNode constantly changing its height to accommodate all the SKSpriteNode children.
I'm wondering if there is another way to achieve this effect or if I'm missing something with the SKEffectNode. Ideally I'd like the warp to affect any SKNode that passes under it without the need to add and remove children.
Any information would be much appreciated.
Warp effect I'm trying to achieve above and current SKEffectNode code below.
func warpToUpEffectNode(effectNode:SKEffectNode, view:SKView){
effectNode.zPosition = priorityPos.upEffectNodeZ
let destinationPositions: [vector_float2] = [
vector_float2(-0.1, 1), vector_float2(0.5, 1.3), vector_float2(1.1, 1),
vector_float2(0.1, 0.5), vector_float2(0.5, 0.5), vector_float2(0.9, 0.5),
vector_float2(-0.1, 0), vector_float2(0.5, -0.3), vector_float2(1.1, 0)
]
let warpGeometryGrid = SKWarpGeometryGrid(columns: 10,rows: 1)
effectNode.warpGeometry = warpGeometryGrid.replacingByDestinationPositions(positions: destinationPositions)
}
A:
you can accomplish this using render-to-texture.
first, put all of your scene elements into a single large node, i'm calling it container
then set up your viewport area, the part you want to warp. it's a bit tricky because you also have to crop it (otherwise you'll see the fringe of the warped shape)
/*
create a crop node with
- mask
- a visible frame
- a warpable spritenode
*/
viewport_warp = SKSpriteNode(color: .white, size: CGSize(width: 150, height: 150))
let viewport_frame = SKShapeNode(rectOf: viewport_warp.size, cornerRadius: 15)
viewport_frame.strokeColor = .black
viewport_frame.zPosition = 3
viewport_warp.addChild(viewport_frame)
let viewport_mask = SKShapeNode(rectOf: viewport_warp.size, cornerRadius: 15)
viewport_mask.fillColor = .black
let cropNode = SKCropNode()
cropNode.zPosition = 2
cropNode.maskNode = viewport_mask
cropNode.addChild(viewport_warp)
addChild(cropNode)
then set up your warp geometry
//warp the geometry of the spritenode
let PINCH_OFFSET:Float = 0.1
let dst = [
// bottom row: left, center, right
vector_float2(0.0, 0.0),
vector_float2(0.5, 0.0 - PINCH_OFFSET),
vector_float2(1.0, 0.0),
// middle row: left, center, right
vector_float2(0.0 - PINCH_OFFSET, 0.5),
vector_float2(0.5, 0.5),
vector_float2(1.0 + PINCH_OFFSET, 0.5),
// top row: left, center, right
vector_float2(0.0, 1.0),
vector_float2(0.5, 1.0 + PINCH_OFFSET),
vector_float2(1.0, 1.0)
]
let warpGeometryGrid = SKWarpGeometryGrid(columns: 2,rows: 2)
viewport_warp.warpGeometry = warpGeometryGrid.replacingByDestinationPositions(positions: dst)
and finally, do render-to-texture on the container and update the texture of your spritenode
override func update(_ currentTime: TimeInterval) {
let cropped_viewport = viewport_warp.frame.insetBy(dx: 10, dy: 10) //optional: adds magnification effect
let texture:SKTexture? = self.view?.texture(from:container, crop:cropped_viewport)
viewport_warp.texture = texture
}
| Warp effect for SKSpriteNodes passing through it? | I'm trying to apply a warp/distortion effect to all SKSpriteNodes that pass through a fixed rectangle sized area on the screen. Shown in the image below, the same SKSpriteNode will start at the top of the screen and work its way down allowing the rectangle distortion filter to warp the node as it passes through.
I've tried using an SKEffectNode, shown in the code below. However, I couldn't set a fixed height and width on the SKEffectNode, which later gave me inconsistent warp effects due to the SKEffectNode constantly changing its height to accommodate all the SKSpriteNode children.
I'm wondering if there is another way to achieve this effect or if I'm missing something with the SKEffectNode. Ideally I'd like the warp to affect any SKNode that passes under it without the need to add and remove children.
Any information would be much appreciated.
Warp effect I'm trying to achieve above and current SKEffectNode code below.
func warpToUpEffectNode(effectNode:SKEffectNode, view:SKView){
effectNode.zPosition = priorityPos.upEffectNodeZ
let destinationPositions: [vector_float2] = [
vector_float2(-0.1, 1), vector_float2(0.5, 1.3), vector_float2(1.1, 1),
vector_float2(0.1, 0.5), vector_float2(0.5, 0.5), vector_float2(0.9, 0.5),
vector_float2(-0.1, 0), vector_float2(0.5, -0.3), vector_float2(1.1, 0)
]
let warpGeometryGrid = SKWarpGeometryGrid(columns: 10,rows: 1)
effectNode.warpGeometry = warpGeometryGrid.replacingByDestinationPositions(positions: destinationPositions)
}
| [
"you can accomplish this using render-to-texture.\n\nfirst, put all of your scene elements into a single large node, i'm calling it container\nthen set up your viewport area, the part you want to warp. it's a bit tricky because you also have to crop it (otherwise you'll see the fringe of the warped shape)\n/*\n create a crop node with\n - mask\n - a visible frame\n - a warpable spritenode\n */\nviewport_warp = SKSpriteNode(color: .white, size: CGSize(width: 150, height: 150))\nlet viewport_frame = SKShapeNode(rectOf: viewport_warp.size, cornerRadius: 15)\nviewport_frame.strokeColor = .black\nviewport_frame.zPosition = 3\nviewport_warp.addChild(viewport_frame)\n\nlet viewport_mask = SKShapeNode(rectOf: viewport_warp.size, cornerRadius: 15)\nviewport_mask.fillColor = .black\n\nlet cropNode = SKCropNode()\ncropNode.zPosition = 2\ncropNode.maskNode = viewport_mask\ncropNode.addChild(viewport_warp)\naddChild(cropNode)\n\nthen set up your warp geometry\n//warp the geometry of the spritenode\nlet PINCH_OFFSET:Float = 0.1\nlet dst = [\n // bottom row: left, center, right\n vector_float2(0.0, 0.0),\n vector_float2(0.5, 0.0 - PINCH_OFFSET),\n vector_float2(1.0, 0.0),\n\n // middle row: left, center, right\n vector_float2(0.0 - PINCH_OFFSET, 0.5),\n vector_float2(0.5, 0.5),\n vector_float2(1.0 + PINCH_OFFSET, 0.5),\n\n // top row: left, center, right\n vector_float2(0.0, 1.0),\n vector_float2(0.5, 1.0 + PINCH_OFFSET),\n vector_float2(1.0, 1.0)\n]\nlet warpGeometryGrid = SKWarpGeometryGrid(columns: 2,rows: 2)\nviewport_warp.warpGeometry = warpGeometryGrid.replacingByDestinationPositions(positions: dst)\n\nand finally, do render-to-texture on the container and update the texture of your spritenode\noverride func update(_ currentTime: TimeInterval) {\n let cropped_viewport = viewport_warp.frame.insetBy(dx: 10, dy: 10) //optional: adds magnification effect\n let texture:SKTexture? = self.view?.texture(from:container, crop:cropped_viewport)\n viewport_warp.texture = texture\n}\n\n"
] | [
0
] | [] | [] | [
"ios",
"skeffectnode",
"sprite_kit",
"swift"
] | stackoverflow_0074652652_ios_skeffectnode_sprite_kit_swift.txt |
Q:
Firefox and Content-Disposition header
I have a problem with an attachment's name. When I call the site in Google Chrome it returns the file with the right name and extension. I tested it with Internet Explorer and it works fine too. The issue lies only with Firefox: I call the site and it returns the first word of the file title and no extension.
For example, if I wanted a file called "My report.docx", it returns a file called "My". I Googled around and it turns out this is a common issue, because browsers read the header differently. They said the fix is to quote the file name:
Content-Disposition: attachment; filename=My Report.docx
is now: (note the quotes)
Content-Disposition: attachment; filename="My Report.docx"
However, that did not work for me.
On Chrome it returned "My Report.docx" (actually with the quotes). Firefox returned an odd file that had the proper extension, proper name (without quotes), and proper file size, yet it could not be opened. It also adds a space before and after the file name.
A:
I know this is a very old question, but I was recently having the same problem. The solution is to either
Encode your filename per RFC2184 or,
If you don't have special characters in your filename, quote it in the content disposition string.
Since you've already tried 2, you could try using 1 and seeing how that works out.
Usually I use the ContentDisposition class to generate my header for me:
Dim contentDispositionHeader = New Net.Mime.ContentDisposition() With {.FileName = filename}
Response.AddHeader("Content-Disposition", contentDispositionHeader.ToString())
Hope this helps.
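Building on option 1 above, here is a minimal sketch in Python (an illustration only, not the poster's VB.NET setup; the helper name `content_disposition` is made up) of a header value carrying both a quoted ASCII fallback and an RFC 2184/5987-style encoded `filename*` parameter, which current browsers generally honor:

```python
from urllib.parse import quote

def content_disposition(filename):
    # Quoted ASCII fallback for old clients, plus an RFC 5987 "filename*"
    # form that percent-encodes spaces and non-ASCII characters as UTF-8.
    fallback = filename.encode("ascii", "replace").decode("ascii").replace('"', "")
    encoded = quote(filename, safe="")
    return f'attachment; filename="{fallback}"; filename*=UTF-8\'\'{encoded}'

print(content_disposition("My Report.docx"))
# attachment; filename="My Report.docx"; filename*=UTF-8''My%20Report.docx
```

A server would then send this string as the value of the Content-Disposition response header.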
A:
This should work as expected; here's another SO question with the same problem:
Downloading a file with a different name to the stored name
and also the Mozilla page (I guess you were referencing this one):
http://kb.mozillazine.org/Filenames_with_spaces_are_truncated_upon_download
I don't know the specifics of your server side code, but here are some things to confirm / try:
If you have PHP available at the server, can you try the code from the first link above? If not, you can probably find something on the Net in your language of choice. That way, you can confirm whether the issue is in your code or somewhere else (server setup, browser, etc.)
Is this happening on other client machines (i.e. where you try the download from) or only on that one? You might want to try others to confirm.
Is this working fine in IE / Safari or some other browser? You can even try doing it with wget or curl from the command line or something like that.
Are you also providing the Content-Type header correctly?
Can you try downloading some other file or a file of a different type, e.g. a .png or a .xls? In fact, probably the easiest would be to try a plain text file (text/plain) and then take it from there.
Hope this helps.
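Following the suggestion above to start with a plain text file, a throwaway test server can be sketched with nothing but the Python standard library (a hypothetical local-debugging aid, not part of the original answer); point each browser at it and compare how the attachment name is handled:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class DownloadHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve a tiny text/plain body with an explicit, quoted filename
        # so only the Content-Disposition handling is being tested.
        body = b"hello from the test file\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Disposition",
                         'attachment; filename="My Report.txt"')
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To try it: HTTPServer(("127.0.0.1", 8000), DownloadHandler).serve_forever()
```

If the browser saves this as "My Report.txt", the problem is likely elsewhere in the original server's headers rather than in the browser.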
| Firefox and Content-Disposition header | I have a problem with an attachment's name. When I call the site on google chrome it returns the file with the right name and extension. I tested it with internet explorer and it works fine too. The issue lies with only Firefox. I call the site and it returns the first word on the file title and no extension.
For example if I wanted a file called "My report.docx" it turns a file called "My". I Googled around and it turns out this is a common issue with people because browsers read the headers differently. They said the fix is to quote the file name:
Content-Disposition: attachment; filename=My Report.docx
is now: (note the quotes)
Content-Disposition: attachment; filename="My Report.docx"
However, that did not work for me.
On chrome it returned "My Report.docx" (actually with the quotes). Firefox returned a odd file that had the proper extension and proper name and no quotes yet it could not be executed. It was the proper file size, proper extension, and proper name yet it could not be executed. Also it returns a space before and after the file name.
| [
"I know this is a very old question, but I was recently having the same problem. The solution is to either\n\nEncode your filename per RFC2184 or,\nIf you don't have special characters in your filename, quote it in the content disposition string.\n\nSince you've already tried 2, you could try using 1 and seeing how that works out.\nUsually I use the ContentDisposition class to generate my header for me:\nDim contentDispositionHeader = New Net.Mime.ContentDisposition() With {.FileName = filename}\nResponse.AddHeader(\"Content-Disposition\", contentDispositionHeader.ToString())\n\nHope this helps.\n",
"This should work as expected, here's another SOq with the same problem:\n\nDownloading a file with a different name to the stored name\n\nand also the Mozilla page (I guess you were referencing this one):\n\nhttp://kb.mozillazine.org/Filenames_with_spaces_are_truncated_upon_download\n\nI don't know the specifics of your server side code, but here are some things to confirm / try:\n\nIf you have PHP available at the server, can you try the code from the first link above? If not, you can probably find something on the Net in your language of choice. That way, you can confirm whether the issue is in your code or somewhere else (server setup, browser, etc.)\nIs this happening on other client machines (i.e. where you try the download from) or only on that one? You might want to try others to confirm. \nIs this working fine in IE / Safari or some other browser? You can even try doing it with wget or curl from the command line or something like that.\nAre you also providing the Content-Type header correctly? \nCan you try downloading some other file or a file of a different type, e.g. a .png or a .xls? In fact, probably the easiest would be to try a plain text file (text/plain) and then take it from there.\n\nHope this helps.\n"
] | [
5,
4
] | [
"Using Firefox 91.8.0esr (64bit), I have the problem of inline not working.\nContent-Disposition: inline;\nThe way that I solved this was to simply go to:\nEdit > Settings > General > Applications | Search file types or apps\nSearch for PDF of course\nChange Action to OPEN IN FIREFOX\n"
] | [
-1
] | [
"content_disposition",
"firefox",
"header"
] | stackoverflow_0009154415_content_disposition_firefox_header.txt |
Q:
draw on desktop using visual c++
I am writing an OpenCV application to draw using a laser beam, as a Visual Studio VC++ console application. I want to draw lines on the desktop.
I know that the drawing functions are available in GDI32.dll, but I am confused about how to integrate GDI32.dll with my VC++ code. Can you suggest a good solution?
A:
The code below draws a blue rectangle on the desktop.
#include <iostream>
#include <Windows.h>
int main() {
/* hide console window */
ShowWindow(FindWindowA("ConsoleWindowClass", NULL), false);
/* Calling GetDC with argument 0 retrieves the desktop's DC */
HDC hDC_Desktop = GetDC(0);
/* Draw a simple blue rectangle on the desktop */
RECT rect = { 20, 20, 200, 200 };
HBRUSH blueBrush=CreateSolidBrush(RGB(0,0,255));
FillRect(hDC_Desktop, &rect, blueBrush);
Sleep(10);
return 0;
}
Just for fun. A Mandelbrot fractal drawn directly on the desktop.
#define MAGNITUDE_CUTOFF 100
#define NUMCOLOURS 256
#define WIDTH 640
#define HEIGHT 200
#define UP 72
#define DOWN 80
#define LEFT 75
#define RIGHT 77
#define SPACE 32
#define ENTER 13
#define ESCAPE 27
#define TAB 9
#define INSERT 82
#include <stdio.h>
#include <time.h>
#include <windows.h>
#include <iostream>
using namespace std;
int col(int x, int y);
void fract(void);
char op;
int ch,max_iterations;
double xmin = -2.10, xmax = 0.85, ymin = -1.5 , ymax = 1.5;
double width_fact, height_fact;
int main(){
COLORREF color = RGB(255,0,0); // COLORREF to hold the color info
SetConsoleTitle("Pixel In Console!"); // Set text of the console so you can find the window
HWND hwnd = FindWindow(NULL, "Pixel In Console?"); // Get the HWND
HDC hdc = GetDC(hwnd); // Get the DC from that HWND
width_fact = (xmax-xmin)/WIDTH;
height_fact = (ymax-ymin)/HEIGHT;
for( int x = 0 ; x < 640 ; x++ ){
for (int y = 0;y < 480; y++ ){
int blue = (col(x,y) & 0x0f) << 4;
int green = (col(x,y) & 0xf0) << 0;
int red = (col(x,y) & 0xf00) >> 4;
SetPixel(hdc, x,y, RGB(red,green,blue));
}
}
system("pause");
ReleaseDC(hwnd, hdc); // Release the DC
DeleteDC(hdc); // Delete the DC
return(0);
}
void fract(){
int x,y,icount=0;
width_fact = (xmax-xmin)/WIDTH;
height_fact = (ymax-ymin)/HEIGHT;
for (y=0;y<HEIGHT;y++){
for (x=0;x<WIDTH;x++){
// setcolor(col(x,y));
// gotoxy(x+3,y+3);printf("Û");
}
}
//setcolor(15);
}
int col( int x, int y){
int n,icount=0;
float p,q,r,i,prev_r,prev_i;
p= (( (float)x ) * width_fact) + (float)xmin;
q= (( (float)y ) * height_fact) +(float)ymin;
prev_i = 0;
prev_r = 0;
for (n=0; n <= NUMCOLOURS; n++){
r = (prev_r * prev_r) - (prev_i * prev_i) +p;
i = 2 * (prev_r * prev_i) +q;
if (( r*r + i*i) < MAGNITUDE_CUTOFF ){
prev_r = r;
prev_i = i;
}
else {
return n;
}
}
return n;
}
A:
I'm trying to run it on code::blocks and the line
HBRUSH blueBrush=CreateSolidBrush(RGB(0,0,255));
is getting the error: undefined reference to 'CreateSolidBrush@4'
| draw on desktop using visual c++ | I am writing an opencv application to draw using laser beam using visual studio VC++ console application. I want to draw lines on desktop.
I know that the drawing functions are available in GDI32.dll , but confused on how to integrate GDI32.dll with my vc code. can you suggest some good solution?
| [
"The code below draws a blue rectangle on the desktop.\n#include <iostream>\n#include <Windows.h>\n\nint main() { \n\n /* hide console window */\n ShowWindow(FindWindowA(\"ConsoleWindowClass\", NULL), false);\n\n /* Calling GetDC with argument 0 retrieves the desktop's DC */\n HDC hDC_Desktop = GetDC(0);\n\n /* Draw a simple blue rectangle on the desktop */\n RECT rect = { 20, 20, 200, 200 };\n HBRUSH blueBrush=CreateSolidBrush(RGB(0,0,255));\n FillRect(hDC_Desktop, &rect, blueBrush);\n\n Sleep(10);\n return 0;\n}\n\nJust for fun. A Mandelbrot fractal drawn directly on the desktop.\n#define MAGNITUDE_CUTOFF 100\n#define NUMCOLOURS 256\n#define WIDTH 640\n#define HEIGHT 200\n#define UP 72\n#define DOWN 80\n#define LEFT 75\n#define RIGHT 77\n#define SPACE 32\n#define ENTER 13\n#define ESCAPE 27\n#define TAB 9\n#define INSERT 82\n\n#include <stdio.h>\n#include <time.h>\n#include <windows.h>\n#include <iostream>\nusing namespace std;\n\nint col(int x, int y);\nvoid fract(void);\n\nchar op;\nint ch,max_iterations;\ndouble xmin = -2.10, xmax = 0.85, ymin = -1.5 , ymax = 1.5;\ndouble width_fact, height_fact;\n\n\nint main(){\n\n COLORREF color = RGB(255,0,0); // COLORREF to hold the color info\n\n\n SetConsoleTitle(\"Pixel In Console!\"); // Set text of the console so you can find the window\n HWND hwnd = FindWindow(NULL, \"Pixel In Console?\"); // Get the HWND\n HDC hdc = GetDC(hwnd); // Get the DC from that HWND\n\n width_fact = (xmax-xmin)/WIDTH;\n height_fact = (ymax-ymin)/HEIGHT;\n\n\n for( int x = 0 ; x < 640 ; x++ ){\n for (int y = 0;y < 480; y++ ){\n\n int blue = (col(x,y) & 0x0f) << 4;\n int green = (col(x,y) & 0xf0) << 0;\n int red = (col(x,y) & 0xf00) >> 4;\n SetPixel(hdc, x,y, RGB(red,green,blue));\n\n }\n }\n\n\n system(\"pause\");\n\n ReleaseDC(hwnd, hdc); // Release the DC\n DeleteDC(hdc); // Delete the DC\n return(0);\n}\n\nvoid fract(){\n int x,y,icount=0;\n width_fact = (xmax-xmin)/WIDTH;\n height_fact = (ymax-ymin)/HEIGHT;\n\n for (y=0;y<HEIGHT;y++){\n 
for (x=0;x<WIDTH;x++){\n // setcolor(col(x,y));\n // gotoxy(x+3,y+3);printf(\"Û\");\n\n }\n\n }\n //setcolor(15);\n}\n\n\nint col( int x, int y){\n int n,icount=0;\n float p,q,r,i,prev_r,prev_i;\n\n p= (( (float)x ) * width_fact) + (float)xmin;\n q= (( (float)y ) * height_fact) +(float)ymin;\n\n prev_i = 0;\n prev_r = 0;\n\n for (n=0; n <= NUMCOLOURS; n++){\n r = (prev_r * prev_r) - (prev_i * prev_i) +p;\n i = 2 * (prev_r * prev_i) +q;\n\n if (( r*r + i*i) < MAGNITUDE_CUTOFF ){\n prev_r = r;\n prev_i = i;\n }\n else {\n return n;\n }\n }\n return n;\n}\n\n",
"I'm trying to run it on code::blocks and the line\nHBRUSH blueBrush=CreateSolidBrush(RGB(0,0,255));\n\nis getting the error: undefined reference to 'CreateSolidBrush@4'\n"
] | [
16,
0
] | [] | [] | [
"c++",
"desktop",
"gdi",
"winapi",
"windows"
] | stackoverflow_0008542660_c++_desktop_gdi_winapi_windows.txt |
Q:
nesting multiple queries in SQL
I want to know, in the year in which the most goals were scored (in total), how many goals were scored by and against team 1 when team 1 is either a or b.
My table looks like this:
| year | team1 | team2 | score_team1 | score_team2 |
|------|-------|-------|-------------|-------------|
| 1 | a | x | 10 | 5 |
| 1 | b | y | 4 | 3 |
| 2 | a | z | 2 | 7 |
| 2 | a | x | 9 | 6 |
| 2 | b | z | 0 | 7 |
This is the output that I need:
| year | team | max_score_team1 | max_score_team2 |
|------|------|-----------------|-----------------|
| 2 | a | 11 | 13 |
| 2 | b | 0 | 7 |
I know that the most goals were scored in year 2, from this query:
select year, sum(score_team1 + score_team2) as total
from data
group by year
order by sum(score_team1 + score_team2) desc
limit(1)
Now I want to know how many goals were scored by and against team1 when team1 is either a or b. I know how to write the queries separately, but how can I nest them into one query so I get the results in one table like the one above?
A:
To obtain the results you desire in a single table, you can use a SQL query with a GROUP BY clause and a HAVING clause to filter for the year in which the most goals were scored. If you are using MySQL, the query might look something like this:
SELECT year, team1,
SUM(score_team1) AS max_score_team1,
SUM(score_team2) AS max_score_team2
FROM data
GROUP BY year, team1
HAVING year = (SELECT year
FROM data
GROUP BY year
ORDER BY SUM(score_team1 + score_team2) DESC
LIMIT 1)
AND team1 IN ('a', 'b')
The GROUP BY clause groups the results by year and team, and the HAVING clause is used to filter only the results for the year in which the most goals were scored and for team 'a' or 'b'. The nested query in the HAVING clause is used to obtain the year in which the most goals were scored.
The result of the query would be something like this:
| year | team1 | max_score_team1 | max_score_team2 |
|------|-------|-----------------|-----------------|
| 2 | a | 11 | 13 |
| 2 | b | 0 | 7 |
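As a sanity check, the sample data and the query above can be replayed with Python's built-in sqlite3 module (a sketch, not part of the original answer; the team filter is moved into WHERE and an ORDER BY is added only to make the row order deterministic — SQLite accepts the same GROUP BY/HAVING shape used here):

```python
import sqlite3

rows = [
    (1, "a", "x", 10, 5),
    (1, "b", "y", 4, 3),
    (2, "a", "z", 2, 7),
    (2, "a", "x", 9, 6),
    (2, "b", "z", 0, 7),
]
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE data (year, team1, team2, score_team1, score_team2)")
con.executemany("INSERT INTO data VALUES (?, ?, ?, ?, ?)", rows)

result = con.execute("""
    SELECT year, team1,
           SUM(score_team1) AS max_score_team1,
           SUM(score_team2) AS max_score_team2
    FROM data
    WHERE team1 IN ('a', 'b')
    GROUP BY year, team1
    HAVING year = (SELECT year
                   FROM data
                   GROUP BY year
                   ORDER BY SUM(score_team1 + score_team2) DESC
                   LIMIT 1)
    ORDER BY team1
""").fetchall()
print(result)  # [(2, 'a', 11, 13), (2, 'b', 0, 7)]
```

This matches the desired output table from the question.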
A:
Use conditional aggregation to count team #1's goals.
select
year,
sum(score_team1 + score_team2) as total,
sum(case when team1 = 1 then score_team1 else 0 end) +
sum(case when team2 = 1 then score_team2 else 0 end) as scored_by_team_1,
sum(case when team1 = 1 then score_team2 else 0 end) +
sum(case when team2 = 1 then score_team1 else 0 end) as scored_against_team_1
from data
group by year
order by total desc
limit 1;
The problem with that: If there is more than one year with the top goals count, you'd pick one arbitrarily.
So, instead:
select year, total, scored_by_team_1, scored_against_team_1
from
(
select
year,
sum(score_team1 + score_team2) as total,
max(sum(score_team1 + score_team2)) over () as max_total,
sum(case when team1 = 1 then score_team1 else 0 end) +
sum(case when team2 = 1 then score_team2 else 0 end) as scored_by_team_1,
sum(case when team1 = 1 then score_team2 else 0 end) +
sum(case when team2 = 1 then score_team1 else 0 end) as scored_against_team_1
from data
group by year
) with_max_total
where total = max_total;
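The conditional-aggregation idea can be checked the same way with sqlite3. Note that this sketch substitutes team 'a' for the answer's `team1 = 1` filter, since the question's sample data identifies teams by letters rather than by the number 1:

```python
import sqlite3

rows = [(1, "a", "x", 10, 5), (1, "b", "y", 4, 3),
        (2, "a", "z", 2, 7), (2, "a", "x", 9, 6), (2, "b", "z", 0, 7)]
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE data (year, team1, team2, score_team1, score_team2)")
con.executemany("INSERT INTO data VALUES (?, ?, ?, ?, ?)", rows)

# Conditional aggregation: count goals for/against 'a' no matter which
# side of the fixture 'a' appears on, then keep the top-scoring year.
row = con.execute("""
    SELECT year,
           SUM(score_team1 + score_team2) AS total,
           SUM(CASE WHEN team1 = 'a' THEN score_team1 ELSE 0 END) +
           SUM(CASE WHEN team2 = 'a' THEN score_team2 ELSE 0 END) AS scored_by_a,
           SUM(CASE WHEN team1 = 'a' THEN score_team2 ELSE 0 END) +
           SUM(CASE WHEN team2 = 'a' THEN score_team1 ELSE 0 END) AS scored_against_a
    FROM data
    GROUP BY year
    ORDER BY total DESC
    LIMIT 1
""").fetchone()
print(row)  # (2, 31, 11, 13): year 2, 31 goals total, 11 by 'a', 13 against 'a'
```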
| nesting multiple queries in SQL | I want to know in the year in which more goals were scored (in total), how many goals were scored by and against team 1 when team 1 is either a or b.
My table looks like this:
| year | team1 | team2 | score_team1 | score_team2 |
|------|-------|-------|-------------|-------------|
| 1 | a | x | 10 | 5 |
| 1 | b | y | 4 | 3 |
| 2 | a | z | 2 | 7 |
| 2 | a | x | 9 | 6 |
| 2 | b | z | 0 | 7 |
This is the output that I need:
| year | team | max_score_team1 | max_score_team2 |
|------|------|-----------------|-----------------|
| 2 | a | 11 | 13 |
| 2 | b | 0 | 7 |
I know that more goals were scored in year 2 by doing this query:
select year, sum(score_team1 + score_team2) as total
from data
group by year
order by sum(score_team1 + score_team2) desc
limit(1)
Now I want to know how many goals were scored by and against team1 when team1 is either a or b. I know how to write the queries separately but how can I nest them in one query so I can get the results in one table like the one above?
| [
"To obtain the results you desire in a single table, you can use a SQL query with a GROUP BY clause and a HAVING clause to filter for the year in which the most goals were scored. If you are using MySQL, the query might look something like this:\nSELECT year, team1, \n SUM(score_team1) AS max_score_team1, \n SUM(score_team2) AS max_score_team2\nFROM data\nGROUP BY year, team1\nHAVING year = (SELECT year\n FROM data\n GROUP BY year\n ORDER BY SUM(score_team1 + score_team2) DESC\n LIMIT 1)\nAND team1 IN ('a', 'b')\n\nThe GROUP BY clause groups the results by year and team, and the HAVING clause is used to filter only the results for the year in which the most goals were scored and for team 'a' or 'b'. The nested query in the HAVING clause is used to obtain the year in which the most goals were scored.\nThe result of the query would be something like this:\n\n\n\n\nyear\nteam1\nmax_score_team1\nmax_score_team2\n\n\n\n\n2\na\n11\n13\n\n\n2\nb\n0\n7\n\n\n\n",
"Use conditional aggregation to count team #1's goals.\nselect \n year,\n sum(score_team1 + score_team2) as total,\n sum(case when team1 = 1 then score_team1 else 0 end) +\n sum(case when team2 = 1 then score_team2 else 0 end) as scored_by_team_1,\n sum(case when team1 = 1 then score_team2 else 0 end) +\n sum(case when team2 = 1 then score_team1 else 0 end) as scored_against_team_1\nfrom data\ngroup by year\norder by total desc\nlimit 1;\n\nThe problem with that: If there is more than one year with the top goals count, you'd pick one arbitrarily.\nSo, instead:\nselect year, total, scored_by_team_1, scored_against_team_1\nfrom\n(\n select \n year,\n sum(score_team1 + score_team2) as total,\n max(sum(score_team1 + score_team2)) over () as max_total,\n sum(case when team1 = 1 then score_team1 else 0 end) +\n sum(case when team2 = 1 then score_team2 else 0 end) as scored_by_team_1,\n sum(case when team1 = 1 then score_team2 else 0 end) +\n sum(case when team2 = 1 then score_team1 else 0 end) as scored_against_team_1\n from data\n group by year\n) with_max_total\nwhere total = max_total;\n\n"
] | [
0,
0
] | [] | [] | [
"nested_queries",
"sql",
"subquery"
] | stackoverflow_0074677873_nested_queries_sql_subquery.txt |
Q:
running curl in a for loop blocks access to website
I have the following code which basically runs a bunch of curl request in a for loop
This is a background system job.. however when this code is running.. i can't access the website, website is just loading until this task/function is finished
public function endcount() {
$order_list = $this->order_model->get_rows([
'where' => [
"status IN ('Processing', 'In Progress') AND service_id = 1"
],
'order_by' => 'quantity ASC',
'limit' => '200'
]);
foreach ($order_list as $key => $value) {
$start_count = $value['start_count'];
//service id = 1 is for followers
if ($value['service_id'] == '1' || $value['service_id'] == '3' || $value['service_id'] == '5') {
$end_count = 0;
try {
$username = $value['target'];
$proxy = '139.99.54.49:10163';
$proxyauth = 'username:password';
$url = 'https://www.instagram.com/'. $username .'/?__a=1&__d=dis';
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_PROXY, $proxy); // PROXY details with port
curl_setopt($ch, CURLOPT_PROXYUSERPWD, $proxyauth); // Use if proxy have username and password
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Accept: application/json') );
$data = curl_exec($ch);
if (curl_errno($ch)) {
$error_msg = curl_error($ch);
var_dump($error_msg);
}
$json_data = json_decode($data, true);
if ($json_data && array_key_exists('graphql', $json_data)) {
$end_count = $json_data['graphql']['user']['edge_followed_by']['count'];
} else {
var_dump($username . ' data ga bisa');
$end_count = 0;
}
curl_close($ch);
} catch(Exception $e) {
var_dump($e->getMessage());
$end_count = 0;
}
if ($end_count == -666) {
$end_count = $start_count + 5;
}
$total_followers = $start_count + $value['quantity'];
$remains = $total_followers - $end_count;
if ($remains <= 10) {
$status = 'Success';
} else {
$status = 'Partial';
}
var_dump('end count of ' . $value['target'] . ' ' . $end_count . ' WITH REMAINS: ' . $remains);
if ($status == 'Success') {
$update_order = [
'remains' => $remains,
'status' => $status,
'updated_at' => date('Y-m-d H:i:s'),
];
$update_order = $this->order_model->update($update_order, ['id' => $value['id']]);
if ($update_order == true) {
print('ID '.$value['id'].' => ['.$status.'] - [FINAL COUNT : '.$end_count.'] - [REMAINS : '.$remains.'] - [TARGET COUNT : '.$total_followers.']<br>');
} else {
print('Error..');
}
}
//var_dump($value['target']);
} else if ($value['service_id'] == '2') {
//service id = 2 is for likers
}
}
}
Is there any way I can optimize this code so it does not block access to the site while running?
A:
Try adding a sleep(5); call to your code so the server isn't overloaded with requests.
Your code should look something like this:
foreach ($order_list as $key => $value) {
sleep(5);
$start_count = $value['start_count'];
| running curl in a for loop blocks access to website | I have the following code which basically runs a bunch of curl request in a for loop
This is a background system job.. however when this code is running.. i can't access the website, website is just loading until this task/function is finished
public function endcount() {
$order_list = $this->order_model->get_rows([
'where' => [
"status IN ('Processing', 'In Progress') AND service_id = 1"
],
'order_by' => 'quantity ASC',
'limit' => '200'
]);
foreach ($order_list as $key => $value) {
$start_count = $value['start_count'];
//service id = 1 is for followers
if ($value['service_id'] == '1' || $value['service_id'] == '3' || $value['service_id'] == '5') {
$end_count = 0;
try {
$username = $value['target'];
$proxy = '139.99.54.49:10163';
$proxyauth = 'username:password';
$url = 'https://www.instagram.com/'. $username .'/?__a=1&__d=dis';
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_PROXY, $proxy); // PROXY details with port
curl_setopt($ch, CURLOPT_PROXYUSERPWD, $proxyauth); // Use if proxy have username and password
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Accept: application/json') );
$data = curl_exec($ch);
if (curl_errno($ch)) {
$error_msg = curl_error($ch);
var_dump($error_msg);
}
$json_data = json_decode($data, true);
if ($json_data && array_key_exists('graphql', $json_data)) {
$end_count = $json_data['graphql']['user']['edge_followed_by']['count'];
} else {
var_dump($username . ' data ga bisa');
$end_count = 0;
}
curl_close($ch);
} catch(Exception $e) {
var_dump($e->getMessage());
$end_count = 0;
}
if ($end_count == -666) {
$end_count = $start_count + 5;
}
$total_followers = $start_count + $value['quantity'];
$remains = $total_followers - $end_count;
if ($remains <= 10) {
$status = 'Success';
} else {
$status = 'Partial';
}
var_dump('end count of ' . $value['target'] . ' ' . $end_count . ' WITH REMAINS: ' . $remains);
if ($status == 'Success') {
$update_order = [
'remains' => $remains,
'status' => $status,
'updated_at' => date('Y-m-d H:i:s'),
];
$update_order = $this->order_model->update($update_order, ['id' => $value['id']]);
if ($update_order == true) {
print('ID '.$value['id'].' => ['.$status.'] - [FINAL COUNT : '.$end_count.'] - [REMAINS : '.$remains.'] - [TARGET COUNT : '.$total_followers.']<br>');
} else {
print('Error..');
}
}
//var_dump($value['target']);
} else if ($value['service_id'] == '2') {
//service id = 2 is for likers
}
}
}
is there any way i can optimize this code so it is not blocking access to the site while running ?
| [
"try adding sleep(5); funtion to your code so server isn't overloaded with requests..\nyour code should look something like this :\n foreach ($order_list as $key => $value) {\n\n sleep(5);\n\n $start_count = $value['start_count'];\n\n"
] | [
0
] | [] | [] | [
"php",
"php_5.6"
] | stackoverflow_0074677869_php_php_5.6.txt |
Q:
look for multiple type of XCUIElement in swift
I would like to locate two XCUIElements, and I have no problem finding them with the following code.
func cells(_ text:String) {
app.cells.element(withLabelContaining: text).firstMatch
}
func text(_ text:String) {
app.staticText.element(withLabelContaining: text).firstMatch
}
However, I would like to find a way to use just one query to look for multiple types of XCUIElement. How can I do it? I checked the docs but could not find any solution.
different type of XCUIElement
A:
Assuming:
let app = XCUIApplication()
the app.staticText is effectively a shortcut for
app.windows.descendants(matching: XCUIElement.ElementType.staticText)
Hence by analogy we can do this:
let typesToBeFound = [XCUIElement.ElementType.cell.rawValue,
XCUIElement.ElementType.staticText.rawValue]
let predicateFormat = "elementType == %lu OR elementType == %lu"
let predicate = NSPredicate(format: predicateFormat, argumentArray: typesToBeFound)
let result = app.windows.descendants(matching: XCUIElement.ElementType.any).matching(predicate)
| look for multiple type of XCUIElement in swift | I would like to locate two XCUIElements and no problem to find them in following code.
func cells(_ text:String) {
app.cells.element(withLabelContaining: text).firstMatch
}
func text(_ text:String) {
app.staticText.element(withLabelContaining: text).firstMatch
}
However, I would like to find a way to just use one query to look for multiple type of XCUIElement, how can I do it? I check the doc but I cannot find any solution for it
different type of XCUIElement
| [
"Assuming:\nlet app = XCUIApplication()\n\nthe app.staticText is effectively a shortcut for\napp.windows.descendants(matching: XCUIElement.ElementType.staticText)\n\nHence by analogy we can do this:\nlet typesToBeFound = [XCUIElement.ElementType.cell.rawValue, \n XCUIElement.ElementType.staticText.rawValue]\nlet predicateFormat = \"elementType == %lu OR elementType == %lu\"\nlet predicate = NSPredicate(format: predicateFormat, argumentArray: typesToBeFound)\nlet result = app.windows.descendants(matching: XCUIElement.ElementType.any).matching(predicate)\n\n"
] | [
0
] | [] | [] | [
"swift",
"xcuitest"
] | stackoverflow_0074667454_swift_xcuitest.txt |
Q:
using a library (ocrodjvu) that used to be installable through pip but now isn't?
I'd like to add a feature that was implemented in another project distributed under GPL3, which is based on the ocrodjvu library, which used to be installable through pip and is now not.
The library was transferred to python3.
I tried to install it through apt (from Linux Mint), which reported having no candidate for installation, and pip had returned 'no available version satisfying the requirements'. Installing the library from source (github link) isn't possible due to the absence of setup.py or .toml installation scripts, and the attempts to make it from Makefile has resulted in errors.
Is there a way to make use of it?
A:
I have found a fork supporting python3 and installation by setup.py: https://github.com/FriedrichFroebel/ocrodjvu
| using a library (ocrodjvu) that used to be installable through pip but now isn't? | I'd like to add a feature that was implemented in another project distributed under GPL3, which is based on the ocrodjvu library, which used to be installable through pip and is now not.
The library was transferred to python3.
I tried to install it through apt (from Linux Mint), which reported having no candidate for installation, and pip had returned 'no available version satisfying the requirements'. Installing the library from source (github link) isn't possible due to the absence of setup.py or .toml installation scripts, and the attempts to make it from Makefile has resulted in errors.
Is there a way to make use of it?
| [
"I have found a fork supporting python3 and installation by setup.py: https://github.com/FriedrichFroebel/ocrodjvu\n"
] | [
0
] | [] | [] | [
"github",
"installation",
"lib",
"pip",
"python_3.x"
] | stackoverflow_0074675755_github_installation_lib_pip_python_3.x.txt |
Q:
React js- npm install not working in a github project
I have a project from GitHub that uses Recoil. I'm running npm install but it's not working:
Then I tried npm install --legacy-peer-deps, and got
61 vulnerabilities (4 low, 5 moderate, 39 high, 13 critical)
If I write npm start I got
Debugger attached.
> [email protected] start
> PORT=3006 react-scripts start
'PORT' is not recognized as an internal or external command,
operable program or batch file.
Waiting for the debugger to disconnect...
A:
It still says your debugger is attached when you run "npm install".
Try to stop debugging (Debug > Stop Debugging or Shift+F5).
A:
Well, I found out that inside my package.json I had:
"scripts": {
"start": "Port=3006 react-scripts start",
"build": "react-scripts build",
"test": "react-scripts test",
"eject": "react-scripts eject"
},
So I deleted Port=3006; with that I got:
"scripts": {
"start": "react-scripts start",
"build": "react-scripts build",
"test": "react-scripts test",
"eject": "react-scripts eject"
},
and it worked!
| React js- npm install not working in a github project | I have a project from github and it has the recoil function. I'm trying npm install to run npm but it's not working:
Then I tried npm install --legacy-peer-deps, and got
61 vulnerabilities (4 low, 5 moderate, 39 high, 13 critical)
If I write npm start I got
Debugger attached.
> [email protected] start
> PORT=3006 react-scripts start
'PORT' is not recognized as an internal or external command,
operable program or batch file.
Waiting for the debugger to disconnect...
| [
"It still says your debugger attached when you run \"npm install\".\nTry to stop debugging (Debug > Stop Debugging or Shift+F5).\n",
"well i found out that inside my package.json i had :\n \"scripts\": {\n \"start\": \"Port=3006 react-scripts start\",\n \"build\": \"react-scripts build\",\n \"test\": \"react-scripts test\",\n \"eject\": \"react-scripts eject\"\n },\n\nso i deleted Port=3006, with that i got:\n \"scripts\": {\n \"start\": \"react-scripts start\",\n \"build\": \"react-scripts build\",\n \"test\": \"react-scripts test\",\n \"eject\": \"react-scripts eject\"\n },\n\nand it worked!\n"
] | [
0,
0
] | [] | [] | [
"npm",
"npm_install",
"npm_start",
"reactjs",
"recoiljs"
] | stackoverflow_0074639303_npm_npm_install_npm_start_reactjs_recoiljs.txt |
Q:
Pine-Script - variable for resolution of chart to which indicator is applied
I would like my indicator to exhibit different behavior based on the resolution of the chart to which it will be applied. Is there some function or variable which will give me the chart's resolution? I am familiar with the input.timeframe for allowing a user to select an indicator's timeframe different to the chart, but that is not relevant here.
//I want to do something like:
if Chart.Resolution == 1
//do some stuff
else if Chart.Resolution == 1D
//do some other stuff
I have tried using possible variable names like resolution, Resolution, chart.resolution, Chart.Resolution. I have checked the Pine Script v5 reference manual and found no variables or functions which will accomplish this for me.
A:
First option:
timeframe.multiplier returns the multiplier of the resolution.
timeframe.isdaily returns true if current resolution is a daily resolution, false otherwise.
timeframe.isminutes returns true if current resolution is a minutes or hourly resolution, false otherwise.
You can use:
if timeframe.isminutes and timeframe.multiplier == 1
//do stuff if 1 min
else if timeframe.isdaily and timeframe.multiplier == 1
// do stuff if 1 day
You can also use:
timeframe.ismonthly - returns true if on a monthly chart.
timeframe.isweekly - returns true if on a weekly chart.
timeframe.isseconds - returns true if on a seconds chart.
and also:
timeframe.isdwm - returns true if on a daily or weekly or monthly chart.
timeframe.isintraday -returns true if on a seconds or minutes or hourly chart.
Second option:
timeframe.period returns the current chart resolution as a string so you can use:
if timeframe.period == "1"
//do some stuff
else if timeframe.period == "1D"
//do some other stuff
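Putting either option into the shape of the question's pseudocode, a minimal Pine v5 sketch might be:

```pine
//@version=5
indicator("Resolution-dependent behavior")
// timeframe.period is "1" on a 1-minute chart and "1D" on a daily chart
value = timeframe.period == "1" ? 1 : timeframe.period == "1D" ? 2 : 0
plot(value)
```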
| Pine-Script - variable for resolution of chart to which indicator is applied | I would like my indicator to exhibit different behavior based on the resolution of the chart to which it will be applied. Is there some function or variable which will give me the chart's resolution? I am familiar with the input.timeframe for allowing a user to select an indicator's timeframe different to the chart, but that is not relevant here.
//I want to do something like:
if Chart.Resolution == 1
//do some stuff
else if Chart.Resolution == 1D
//do some other stuff
I have tried using possible variable names like resolution, Resolution, chart.resolution, Chart.Resolution. I have checked the Pine Script v5 reference manual and found no variables or functions which will accomplish this for me.
| [
"First option:\n\ntimeframe.multiplier returns the multiplier of the resolution.\ntimeframe.isdaily returns true if current resolution is a daily resolution, false otherwise.\ntimeframe.isminutes returns true if current resolution is a minutes or hourly resolution, false otherwise.\n\nYou can use:\nif timeframe.isminutes and timeframe.multiplier == 1\n //do stuff if 1 min \n\nelse if timeframe.isdaily and timeframe.multiplier == 1\n // do stuff if 1 day\n\nYou can also use:\n\ntimeframe.ismonthly - returns true if on a monthly chart.\ntimeframe.isweekly - returns true if on a weekly chart.\ntimeframe.isseconds - returns true if on a seconds chart.\n\nand also:\n\ntimeframe.isdwm - returns true if on a daily or weekly or monthly chart.\ntimeframe.isintraday -returns true if on a seconds or minutes or hourly chart.\n\nSecond option:\ntimeframe.period returns the current chart resolution as a string so you can use:\nif timeframe.period == \"1\"\n //do some stuff\nelse if timeframe.period == \"1D\"\n //do some other stuff\n\n"
] | [
0
] | [] | [] | [
"pine_script",
"tradingview_api"
] | stackoverflow_0074677764_pine_script_tradingview_api.txt |
Q:
Can you pass type aliases to an interface in TypeScript?
I'd like to write a TypeScript interface that is generic over a type function instead of just a type. In other words, I want to write something like
interface Foo<Functor> {
bar: Functor<string>;
baz: Functor<number>;
}
where Functor is some generic type alias I can pass in from the outside. Then I'll be able to make different kinds of Foo like this:
type Identity<T> = T;
type Maybe<T> = T | undefined;
type List<T> = T[];
// All of the following would typecheck
const fooIdentity: Foo<Identity> = { bar: "abc", baz: 42 };
const fooMaybe: Foo<Maybe> = { bar: undefined, baz: undefined };
const fooList: Foo<List> = { bar: ["abc", "def"], baz: [42, 43] };
I've tried to find a way to make the compiler accept this but no luck, so I'm wondering if there's a trick I'm missing or if TypeScript just can't express this.
A:
I am late to the party but there is now a comprehensive solution to this problem on the user end, so I thought I might share it.
Details of how it works can be found here, but in a nutshell we exploit the fact that interfaces can receive arguments through intersection:
type Type = { type: unknown, 0: unknown }
interface $Maybe extends Type { type: this[0] | undefined }
type apply<$T extends Type, V> = ($T & [V])['type']
type Maybe3 = apply<$Maybe, 3> // 3 | undefined
playground
Notice how we were able to detach the type constructor from its value.
This pattern is very flexible and enabled the creation of the library free-types which adds a bunch of features, like support for type constraints, partial application, composition, variadicity, optionality, inference, etc.
So what can we do with your use case?
Technically there are ready-made types for List and Identity, so we only need to define $Maybe. You can also set Identity as a default parameter to Foo, which is pretty convenient.
import { Type, apply, free } from 'free-types';
interface Foo<$T extends Type<1> = free.Id> {
bar: apply<$T, [string]>;
baz: apply<$T, [number]>;
}
interface $Maybe extends Type<1> { type: this[0] | undefined }
const fooIdentity: Foo = { bar: "abc", baz: 42 };
const fooMaybe: Foo<$Maybe> = { bar: undefined, baz: undefined };
const fooList: Foo<free.Array> = { bar: ["abc", "def"], baz: [42, 43] };
You may want to check that your type constructors can actually accept string | number.
import { Type, apply, free, Contra } from 'free-types';
interface Foo<$T extends Type<1> & Contra<$T, Type<[string | number]>> = free.Id> {
// ------------------- hacked contravariance
bar: apply<$T, [string]>;
baz: apply<$T, [number]>;
}
const fooIdentity: Foo = { bar: "abc", baz: 42 };
const fooMaybe: Foo<$Maybe> = { bar: undefined, baz: undefined };
const fooList: Foo<free.Array> = { bar: ["abc", "def"], baz: [42, 43] };
// @ts-expect-error: WeakSet expects object, not string | number
type FooWeakSet = Foo<free.WeakSet>;
You may also want to check that your type is actually a Functor (which is only the case for List here).
import { Type, apply, free, Contra } from 'free-types';
interface Foo<$T extends $Functor> {
bar: apply<$T, [string]>;
baz: apply<$T, [number]>;
}
type $Functor = Type<1, {
map: (f: (...args: any[]) => unknown) => unknown
}>
// @ts-expect-error: Identity lacks a map method
const fooIdentity: Foo<free.Id> = { bar: "abc", baz: 42 };
// @ts-expect-error: our Maybe lacks a map method
const fooMaybe: Foo<$Maybe> = { bar: undefined, baz: undefined };
const fooList: Foo<free.Array> = { bar: ["abc", "def"], baz: [42, 43] };
Finally, If you don't like having 2 versions of the same type, you can conflate them into one.
import { Type, apply, free, $Alter } from 'free-types';
interface Foo<$T extends Type<1>> {
bar: apply<$T, [string]>;
baz: apply<$T, [number]>;
}
type Array<T = never> = $Alter<free.Array, [T]>
type Array1 = Array<1> // 1[]
const fooList: Foo<Array> = { bar: ["abc", "def"], baz: [42, 43] };
playground
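For completeness, the original Foo can also be expressed with just the bare intersection trick from the top of this answer, without the library dependency (a sketch; $Id, $Maybe and $List are illustrative names defined here, not part of free-types):

```typescript
type Type = { type: unknown; 0: unknown };
type apply<$T extends Type, V> = ($T & [V])["type"];

// Each "type constructor" computes its result type from this[0]
interface $Id extends Type { type: this[0] }
interface $Maybe extends Type { type: this[0] | undefined }
interface $List extends Type { type: this[0][] }

interface Foo<$T extends Type> {
  bar: apply<$T, string>;
  baz: apply<$T, number>;
}

const fooIdentity: Foo<$Id> = { bar: "abc", baz: 42 };
const fooMaybe: Foo<$Maybe> = { bar: undefined, baz: undefined };
const fooList: Foo<$List> = { bar: ["abc", "def"], baz: [42, 43] };
console.log(fooIdentity, fooMaybe, fooList);
```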
| Can you pass type aliases to an interface in TypeScript? | I'd like to write a TypeScript interface that is generic over a type function instead of just a type. In other words, I want to write something like
interface Foo<Functor> {
bar: Functor<string>;
baz: Functor<number>;
}
where Functor is some generic type alias I can pass in from the outside. Then I'll be able to make different kinds of Foo like this:
type Identity<T> = T;
type Maybe<T> = T | undefined;
type List<T> = T[];
// All of the following would typecheck
const fooIdentity: Foo<Identity> = { bar: "abc", baz: 42 };
const fooMaybe: Foo<Maybe> = { bar: undefined, baz: undefined };
const fooList: Foo<List> = { bar: ["abc", "def"], baz: [42, 43] };
I've tried to find a way to make the compiler accept this but no luck, so I'm wondering if there's a trick I'm missing or if TypeScript just can't express this.
| [
"I am late to the party but there is now a comprehensive solution to this problem on the user end, so I thought I might share it.\nDetails of how it works can be found here, but in a nutshell we exploit the fact that interfaces can receive arguments through intersection:\ntype Type = { type: unknown, 0: unknown }\n\ninterface $Maybe extends Type { type: this[0] | undefined }\n\ntype apply<$T extends Type, V> = ($T & [V])['type']\n\ntype Maybe3 = apply<$Maybe, 3> // 3 | undefined\n\nplayground\nNotice how we were able to detach the type constructor from its value.\nThis pattern is very flexible and enabled the creation of the library free-types which adds a bunch of features, like support for type constraints, partial application, composition, variadicity, optionality, inference, etc.\n\nSo what can we do with your use case?\nTechnically there are ready-made types for List and Identity, so we only need to define $Maybe. You can also set Identity as a default parameter to Foo, which is pretty convenient.\nimport { Type, apply, free } from 'free-types';\n\ninterface Foo<$T extends Type<1> = free.Id> {\n bar: apply<$T, [string]>;\n baz: apply<$T, [number]>;\n}\n\ninterface $Maybe extends Type<1> { type: this[0] | undefined }\n \nconst fooIdentity: Foo = { bar: \"abc\", baz: 42 };\nconst fooMaybe: Foo<$Maybe> = { bar: undefined, baz: undefined };\nconst fooList: Foo<free.Array> = { bar: [\"abc\", \"def\"], baz: [42, 43] };\n\nYou may want to check that your type constructors can actually accept string | number.\nimport { Type, apply, free, Contra } from 'free-types';\n\ninterface Foo<$T extends Type<1> & Contra<$T, Type<[string | number]>> = free.Id> {\n// ------------------- hacked contravariance\n bar: apply<$T, [string]>;\n baz: apply<$T, [number]>;\n}\n \nconst fooIdentity: Foo = { bar: \"abc\", baz: 42 };\nconst fooMaybe: Foo<$Maybe> = { bar: undefined, baz: undefined };\nconst fooList: Foo<free.Array> = { bar: [\"abc\", \"def\"], baz: [42, 43] };\n\n// 
@ts-expect-error: WeakSet expects object, not string | number\ntype FooWeakSet = Foo<free.WeakSet>;\n\nYou may also want to check that your type is actually a Functor (which is only the case for List here).\nimport { Type, apply, free, Contra } from 'free-types';\n\ninterface Foo<$T extends $Functor> {\n bar: apply<$T, [string]>;\n baz: apply<$T, [number]>;\n}\n\ntype $Functor = Type<1, {\n map: (f: (...args: any[]) => unknown) => unknown\n}>\n\n// @ts-expect-error: Identity lacks a map method\nconst fooIdentity: Foo<free.Id> = { bar: \"abc\", baz: 42 };\n\n// @ts-expect-error: our Maybe lacks a map method\nconst fooMaybe: Foo<$Maybe> = { bar: undefined, baz: undefined };\n\nconst fooList: Foo<free.Array> = { bar: [\"abc\", \"def\"], baz: [42, 43] };\n\nFinally, If you don't like having 2 versions of the same type, you can conflate them into one.\nimport { Type, apply, free, $Alter } from 'free-types';\n\ninterface Foo<$T extends Type<1>> {\n bar: apply<$T, [string]>;\n baz: apply<$T, [number]>;\n}\n\ntype Array<T = never> = $Alter<free.Array, [T]>\n\ntype Array1 = Array<1> // 1[]\n\nconst fooList: Foo<Array> = { bar: [\"abc\", \"def\"], baz: [42, 43] };\n\nplayground\n"
] | [
0
] | [] | [] | [
"higher_kinded_types",
"typescript",
"typescript_generics"
] | stackoverflow_0065730766_higher_kinded_types_typescript_typescript_generics.txt |
Q:
Python: Scheduling cron jobs with time limit?
I have been using apscheduler. A recurring problem regarding the package is that if for any reason, a running job hangs indefinitely (for example if you create an infinite while loop inside of it) it will stop the whole process forever as there is no time limit option for the added jobs.
Apscheduler has stated multiple times that they will not add a timelimit due to various reasons (short explanation here), however the problem still remains. You could create a job that will run for days, only to stop because a webrequest gets no response and apscheduler will wait for it indefinitely.
I've been trying to find a way to add this time limit to a job, for example using the wrapt-timeout-decorator package: I would create a function with a time limit which runs my job inside it, and add this function to apscheduler. Unfortunately, the two packages collide with a circular import.
from wrapt_timeout_decorator.wrapt_timeout_decorator import timeout
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.cron import CronTrigger
class MyJob: # implementation is unnecessary to show here
...
@timeout(dec_timeout=600, use_signals=False)
def run_job(job: MyJob) -> None:
job.run()
job = MyJob()
scheduler = BackgroundScheduler(daemon=True)
scheduler.add_job(func=run_job, kwargs={"job": job}, trigger=CronTrigger.from_crontab(sheet_job.cron))
scheduler.start()
File "C:\Users...\AppData\Local\Programs\Python\Python39\lib\site-packages\multiprocess\context.py", line 62, in Pipe
    from .connection import Pipe
ImportError: cannot import name 'Pipe' from partially initialized module 'multiprocess.connection' (most likely due to a circular import) (C:\Users...\AppData\Local\Programs\Python\Python39\lib\site-packages\multiprocess\connection.py)
I've also tried adding a self made timeout decorator, shown here, but I did not get the desired outcome.
My question is: Is there a way to add a time limit to an apscheduler job, or are there any other similar packages where creating a cron job with a time limit is possible, or do you know of any self-made solution? (the program will run on Windows).
A:
Based on the number of answers and my own research, this is not currently possible with apscheduler, so I have written my own quick implementation. The syntax is very similar to apscheduler: you just need to create a similar Scheduler object, add jobs to it with add_job, then use start. For my needs this has solved the issue. I'm adding the implementation here as it may help somebody in the future.
from typing import Callable, Optional, Any
from datetime import datetime, timedelta
from croniter import croniter
from enum import Enum
import traceback
import threading
import ctypes
import time
class JobStatus(Enum):
NOT_RUNNING = "Not running"
RUNNING = "Running"
class StoppableThread(threading.Thread):
def get_id(self):
if hasattr(self, '_thread_id'):
return self._thread_id
for id, thread in threading._active.items():
if thread is self:
return id
return None
def stop(self):
thread_id = self.get_id()
if thread_id is None:
print("Failed find thread id. Unable to stop thread.")
return
res = ctypes.pythonapi.PyThreadState_SetAsyncExc(thread_id, ctypes.py_object(SystemExit))
if res > 1:
ctypes.pythonapi.PyThreadState_SetAsyncExc(thread_id, 0)
print("Failed to stop thread.")
class JobRunner:
def __init__(self, function: Callable[..., None], cron_tab: str, function_kwargs: Optional[dict[str, Any]]=None, timeout_minutes: Optional[int]=None) -> None:
self.function = function
self.cron_tab = cron_tab
self.function_kwargs = function_kwargs if function_kwargs is not None else {}
self.timeout_minutes = timeout_minutes
self.next_run_time = datetime.now()
self.next_timeout_time = None if timeout_minutes is None else datetime.now() + timedelta(minutes=timeout_minutes)
self._job_thread: Optional[StoppableThread] = None
self._update_next_run_time()
def update(self) -> None:
if self.get_job_status() == JobStatus.RUNNING:
if self.timeout_minutes is not None:
                if datetime.now() >= self.next_timeout_time:
print(f"Job stopped due to timeout after not finishing in {self.timeout_minutes} minutes.")
self._job_thread.stop()
self._job_thread.join()
self._job_thread = None
return
if datetime.now() < self.next_run_time:
return
self._job_thread = StoppableThread(target=self.function, kwargs=self.function_kwargs)
self._job_thread.start()
self._update_next_run_time()
self._update_next_timeout()
def get_job_status(self) -> JobStatus:
if self._job_thread is None:
return JobStatus.NOT_RUNNING
if self._job_thread.is_alive():
return JobStatus.RUNNING
return JobStatus.NOT_RUNNING
def _update_next_run_time(self) -> None:
cron = croniter(self.cron_tab, datetime.now())
self.next_run_time = cron.get_next(datetime)
def _update_next_timeout(self) -> None:
if self.timeout_minutes is not None:
self.next_timeout_time = datetime.now() + timedelta(minutes=self.timeout_minutes)
class Scheduler:
def __init__(self) -> None:
self._jobs: list[JobRunner] = []
def add_job(self, function: Callable[..., None], cron_tab: str, function_kwargs: Optional[dict[str, Any]]=None, timeout_minutes: Optional[int]=None) -> None:
self._jobs.append(JobRunner(function, cron_tab, function_kwargs, timeout_minutes))
def start(self) -> None:
while True:
time.sleep(1)
try:
for job_runner in self._jobs:
job_runner.update()
except Exception:
                print(f"An error occurred while running one of the jobs: {traceback.format_exc()}")
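The core of the timeout mechanism above, asynchronously raising SystemExit in a worker thread via ctypes, can be exercised on its own. A self-contained sketch (CPython only; the injected exception is delivered at bytecode boundaries, so it cannot interrupt a single blocking C call that never returns):

```python
import ctypes
import threading
import time

stopped = threading.Event()

def hanging_job() -> None:
    try:
        while True:  # simulates a job stuck in an infinite loop
            time.sleep(0.05)
    except SystemExit:
        stopped.set()  # the injected exception lands here

t = threading.Thread(target=hanging_job)
t.start()
time.sleep(0.2)  # let the "job" run for a while

# The same call StoppableThread.stop() makes: raise SystemExit inside thread t
res = ctypes.pythonapi.PyThreadState_SetAsyncExc(
    ctypes.c_ulong(t.ident), ctypes.py_object(SystemExit)
)
t.join(timeout=5)
print("stop requested:", res == 1, "worker stopped:", stopped.is_set())
```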
| Python: Scheduling cron jobs with time limit? | I have been using apscheduler. A recurring problem regarding the package is that if for any reason, a running job hangs indefinitely (for example if you create an infinite while loop inside of it) it will stop the whole process forever as there is no time limit option for the added jobs.
Apscheduler has stated multiple times that they will not add a timelimit due to various reasons (short explanation here), however the problem still remains. You could create a job that will run for days, only to stop because a webrequest gets no response and apscheduler will wait for it indefinitely.
I've been trying to find a way to add this time limit to a job, for example using the wrapt-timeout-decorator package: I would create a function with a time limit which runs my job inside it, and add this function to apscheduler. Unfortunately, the two packages collide with a circular import.
from wrapt_timeout_decorator.wrapt_timeout_decorator import timeout
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.cron import CronTrigger
class MyJob: # implementation is unnecessary to show here
...
@timeout(dec_timeout=600, use_signals=False)
def run_job(job: MyJob) -> None:
job.run()
job = MyJob()
scheduler = BackgroundScheduler(daemon=True)
scheduler.add_job(func=run_job, kwargs={"job": job}, trigger=CronTrigger.from_crontab(sheet_job.cron))
scheduler.start()
File "C:\Users...\AppData\Local\Programs\Python\Python39\lib\site-packages\multiprocess\context.py", line 62, in Pipe
    from .connection import Pipe
ImportError: cannot import name 'Pipe' from partially initialized module 'multiprocess.connection' (most likely due to a circular import) (C:\Users...\AppData\Local\Programs\Python\Python39\lib\site-packages\multiprocess\connection.py)
I've also tried adding a self made timeout decorator, shown here, but I did not get the desired outcome.
My question is: Is there a way to add a time limit to an apscheduler job, or are there any other similar packages where creating a cron job with a time limit is possible, or do you know of any self-made solution? (the program will run on Windows).
| [
"Based on the number of answers and my own research this is not currently possible with apscheduler. I have written my own quick implementation. The syntax is very similar to apscheduler, you just need to create a similar Scheduler object and add jobs to it with add_job, then use start. For my needs this has solved the issue. I'm adding the implementation here as it may help somebody the future.\nfrom typing import Callable, Optional, Any\nfrom datetime import datetime, timedelta\nfrom croniter import croniter\nfrom enum import Enum\nimport traceback\nimport threading \nimport ctypes\nimport time\n\n\nclass JobStatus(Enum):\n NOT_RUNNING = \"Not running\"\n RUNNING = \"Running\"\n\n\nclass StoppableThread(threading.Thread):\n\n def get_id(self):\n if hasattr(self, '_thread_id'):\n return self._thread_id\n for id, thread in threading._active.items():\n if thread is self:\n return id\n return None\n\n def stop(self):\n thread_id = self.get_id()\n if thread_id is None:\n print(\"Failed find thread id. 
Unable to stop thread.\")\n return\n res = ctypes.pythonapi.PyThreadState_SetAsyncExc(thread_id, ctypes.py_object(SystemExit))\n if res > 1:\n ctypes.pythonapi.PyThreadState_SetAsyncExc(thread_id, 0)\n print(\"Failed to stop thread.\")\n\n\nclass JobRunner:\n\n def __init__(self, function: Callable[..., None], cron_tab: str, function_kwargs: Optional[dict[str, Any]]=None, timeout_minutes: Optional[int]=None) -> None:\n self.function = function\n self.cron_tab = cron_tab\n self.function_kwargs = function_kwargs if function_kwargs is not None else {}\n self.timeout_minutes = timeout_minutes\n self.next_run_time = datetime.now()\n self.next_timeout_time = None if timeout_minutes is None else datetime.now() + timedelta(minutes=timeout_minutes)\n self._job_thread: Optional[StoppableThread] = None\n self._update_next_run_time()\n\n def update(self) -> None:\n if self.get_job_status() == JobStatus.RUNNING:\n if self.timeout_minutes is not None:\n if datetime.now() < self.next_timeout_time:\n print(f\"Job stopped due to timeout after not finishing in {self.timeout_minutes} minutes.\")\n self._job_thread.stop()\n self._job_thread.join()\n self._job_thread = None\n return\n if datetime.now() < self.next_run_time:\n return\n self._job_thread = StoppableThread(target=self.function, kwargs=self.function_kwargs)\n self._job_thread.start()\n self._update_next_run_time()\n self._update_next_timeout()\n\n def get_job_status(self) -> JobStatus:\n if self._job_thread is None:\n return JobStatus.NOT_RUNNING\n if self._job_thread.is_alive():\n return JobStatus.RUNNING\n return JobStatus.NOT_RUNNING\n\n def _update_next_run_time(self) -> None:\n cron = croniter(self.cron_tab, datetime.now())\n self.next_run_time = cron.get_next(datetime)\n\n def _update_next_timeout(self) -> None:\n if self.timeout_minutes is not None:\n self.next_timeout_time = datetime.now() + timedelta(minutes=self.timeout_minutes)\n\n\nclass Scheduler:\n\n def __init__(self) -> None:\n self._jobs: list[JobRunner] = 
[]\n\n def add_job(self, function: Callable[..., None], cron_tab: str, function_kwargs: Optional[dict[str, Any]]=None, timeout_minutes: Optional[int]=None) -> None:\n self._jobs.append(JobRunner(function, cron_tab, function_kwargs, timeout_minutes))\n\n def start(self) -> None:\n while True:\n time.sleep(1)\n try:\n for job_runner in self._jobs:\n job_runner.update()\n except Exception:\n print(f\"An error occured while running one of the jobs: {traceback.format_exc()}\")\n\n"
] | [
0
] | [] | [] | [
"apscheduler",
"cron",
"jobs",
"multithreading",
"python"
] | stackoverflow_0074524160_apscheduler_cron_jobs_multithreading_python.txt |
Q:
How to make multiple cURL requests to the same URL?
I need to make 8 requests to an API using cURL; each request will only change the 'data' => " Text " parameter. I thought of simply replicating the code 8 times, but that doesn't seem like a good solution and could make my code very messy.
Currently my code is like this:
$url = 'https://www.paraphraser.io/paraphrasing-api';
$ch = curl_init($url);
$data = array(
'key' => '526a099f61fdfdffdf8f27fa815129f87f',
'data' => "SIGNIFICADO: Sonhar com arvores cheia de flores mostra que você precisa reavaliar suas decisões e objetivos. Você está tentando se encaixar nos ideais de outra pessoa. Talvez você esteja se esforçando demais para impressionar os outros. Você precisa trabalhar mais e por mais tempo para alcançar seus objetivos. Talvez você precise fazer um pouco mais de esforço em relação a algum relacionamento.",
'lang' => 'br',
'mode' => '3',
'style' => '0'
);
$headers = array(
'Content-Type: text/plain; charset=utf-8',
);
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, "POST");
curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$result = curl_exec($ch);
How could I make 8 requests and leave my code clean?
A:
Put your code inside a function and loop through the data values with a foreach.
foreach ($datas as $data) {
    $results[] = curlAPI($data); // keep each response instead of discarding it
}
function curlAPI($data) {
$url = 'https://www.paraphraser.io/paraphrasing-api';
$ch = curl_init($url);
$data = array(
'key' => '526a099f61fdfdffdf8f27fa815129f87f',
'data' => $data,
'lang' => 'br',
'mode' => '3',
'style' => '0'
);
$headers = array(
'Content-Type: text/plain; charset=utf-8',
);
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, "POST");
curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$result = curl_exec($ch);
curl_close($ch); // free the handle once the response is captured

return $result;
}
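Since the 8 requests are independent, they can also be issued in parallel with the curl_multi functions instead of a sequential loop. A sketch (YOUR_API_KEY and the texts are placeholders):

```php
<?php
function buildHandle(string $text) {
    $ch = curl_init('https://www.paraphraser.io/paraphrasing-api');
    curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'POST');
    curl_setopt($ch, CURLOPT_POSTFIELDS, [
        'key' => 'YOUR_API_KEY',
        'data' => $text,
        'lang' => 'br',
        'mode' => '3',
        'style' => '0',
    ]);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    return $ch;
}

$texts = ['Texto 1', 'Texto 2']; // ... up to 8 entries
$mh = curl_multi_init();
$handles = [];
foreach ($texts as $i => $text) {
    $handles[$i] = buildHandle($text);
    curl_multi_add_handle($mh, $handles[$i]);
}

// Drive all transfers until every request has completed
do {
    curl_multi_exec($mh, $running);
    curl_multi_select($mh); // wait for activity instead of busy-looping
} while ($running > 0);

$results = [];
foreach ($handles as $i => $ch) {
    $results[$i] = curl_multi_getcontent($ch);
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);
```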
| How to make multiple cURL requests to the same URL? | I need to make 8 requests to an API using cURL; each request will only change the 'data' => " Text " parameter. I thought of simply replicating the code 8 times, but that doesn't seem like a good solution and could make my code very messy.
Currently my code is like this:
$url = 'https://www.paraphraser.io/paraphrasing-api';
$ch = curl_init($url);
$data = array(
'key' => '526a099f61fdfdffdf8f27fa815129f87f',
'data' => "SIGNIFICADO: Sonhar com arvores cheia de flores mostra que você precisa reavaliar suas decisões e objetivos. Você está tentando se encaixar nos ideais de outra pessoa. Talvez você esteja se esforçando demais para impressionar os outros. Você precisa trabalhar mais e por mais tempo para alcançar seus objetivos. Talvez você precise fazer um pouco mais de esforço em relação a algum relacionamento.",
'lang' => 'br',
'mode' => '3',
'style' => '0'
);
$headers = array(
'Content-Type: text/plain; charset=utf-8',
);
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, "POST");
curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$result = curl_exec($ch);
How could I make 8 requests and leave my code clean?
| [
"Put your code inside a function loop through data value inside a foreach.\nforeach ($datas as $data) {\ncurlAPI($data);\n}\n\n\nfunction curlAPI($data) {\n$url = 'https://www.paraphraser.io/paraphrasing-api';\n$ch = curl_init($url);\n\n\n\n$data = array(\n 'key' => '526a099f61fdfdffdf8f27fa815129f87f',\n 'data' => $data,\n 'lang' => 'br',\n 'mode' => '3',\n 'style' => '0'\n );\n\n$headers = array(\n 'Content-Type: text/plain; charset=utf-8',\n);\n\ncurl_setopt($ch, CURLOPT_CUSTOMREQUEST, \"POST\");\ncurl_setopt($ch, CURLOPT_POSTFIELDS, $data);\ncurl_setopt($ch, CURLOPT_RETURNTRANSFER, true);\n\n\n\n$result = curl_exec($ch);\n\nreturn $result;\n}\n\n"
] | [
0
] | [] | [] | [
"curl",
"php",
"php_curl"
] | stackoverflow_0074672643_curl_php_php_curl.txt |
Q:
How to detect EAR with ArCore android
So I am trying to put an earring on a person with ArCore, but the ArCore face mask does not cover the ears; the best I can do is index 172, but it's still far away from the ear.
this is my code
private fun getRegionPose(region: FaceRegion): Vector3? {
val buffer = augmentedFace?.meshVertices
if (buffer != null) {
return when (region) {
FaceRegion.EAR ->
Vector3(
buffer.get(177 * 3),
buffer.get(177 * 3 + 1),
buffer.get(177 * 3 + 2)
)
}
}
return null
}
override fun onUpdate(frameTime: FrameTime?) {
super.onUpdate(frameTime)
augmentedFace?.let { face ->
getRegionPose(FaceRegion.EAR)?.let {
mustacheNode?.localPosition = Vector3(it.x, it.y - 0.035f, it.z + 0.015f)
mustacheNode?.localScale = Vector3(0.07f, 0.07f, 0.07f)
}
}
}
Can someone help me, please? Is there a way for me to go outside the bounds of the face landmarks?
A:
Well, after a lot of research I found this example, and it was what led me to my answer. So I leave the tutorial here; hopefully it can help someone.
https://www.kodeco.com/523-augmented-reality-in-android-with-google-s-face-api#toc-anchor-003
| How to detect EAR with ArCore android | So I am trying to put an earring on a person with ArCore, but the ArCore face mask does not cover the ears; the best I can do is index 172, but it's still far away from the ear.
this is my code
private fun getRegionPose(region: FaceRegion): Vector3? {
val buffer = augmentedFace?.meshVertices
if (buffer != null) {
return when (region) {
FaceRegion.EAR ->
Vector3(
buffer.get(177 * 3),
buffer.get(177 * 3 + 1),
buffer.get(177 * 3 + 2)
)
}
}
return null
}
override fun onUpdate(frameTime: FrameTime?) {
super.onUpdate(frameTime)
augmentedFace?.let { face ->
getRegionPose(FaceRegion.EAR)?.let {
mustacheNode?.localPosition = Vector3(it.x, it.y - 0.035f, it.z + 0.015f)
mustacheNode?.localScale = Vector3(0.07f, 0.07f, 0.07f)
}
}
}
Can someone help me, please? Is there a way for me to go outside the bounds of the face landmarks?
| [
"Well after a lot of research i found this example and it was what lead me to my answer. So i leave the tutorial here hopefouly it can help some one.\nhttps://www.kodeco.com/523-augmented-reality-in-android-with-google-s-face-api#toc-anchor-003\n"
] | [
0
] | [] | [] | [
"android",
"arcore"
] | stackoverflow_0074670347_android_arcore.txt |
Q:
Unable to connect to the database. Retrying
I'm trying to connect to the database; the set-up seems correct, but for some reason it says that it is not available.
app.module.ts
import { Module } from "@nestjs/common"
import { MongooseModule } from "@nestjs/mongoose";
import { ConfigModule } from "../config";
import { CreatorModule } from "./creator.module";
@Module({
imports: [
MongooseModule.forRoot('mongodb://localhost:27017/snaptoon', {
useCreateIndex: true,
useUnifiedTopology: true,
useNewUrlParser: true,
}),
CreatorModule,
],
controllers: [],
providers: []
})
export class AppModule {}
The error is: ERROR [MongooseModule] Unable to connect to the database. Retrying (9)...
I'm using '@nestjs/mongoose': '9.0.2'
A:
Use mongodb://127.0.0.1:27017/snaptoon instead of mongodb://localhost:27017/snaptoon as connection string. It worked for me.
A:
I solved it by manually updating the mongoose version to 6.2.2.
WARN @nestjs/[email protected] requires a peer of mongoose@^6.0.2 but none is installed. You must install peer dependencies yourself.
I realized this from the above warning on npm install.
just use:
npm install [email protected] --save
A:
For those who cannot solve this problem with the answers above, according to the newer specs of nestjs/mongoose:
It might be solved by deleting the line useNewUrlParser: true.
For me, it works both ways.
A:
It worked for me when I added "directConnection=true" to the connection string.
"If set to true, the driver will only connect to the host provided in the URI and will not discover other hosts in the cluster. Direct connections are not valid if multiple hosts are specified."
Before:
MongooseModule.forRoot(
`mongodb://${host}:${port}/${dbName}`,
),
After:
MongooseModule.forRoot(
`mongodb://${host}:${port}/${dbName}?directConnection=true`,
),
My application was working fine in "@nestjs/mongoose": "^8.0.0" and I faced this error when I changed to "@nestjs/mongoose": "^9.0.0".
Docs: https://www.mongodb.com/docs/drivers/go/current/fundamentals/connection/#connection-options
A:
I was trying to connect to a remote database.
I solved this problem by adding these 2 query parameters:
?authSource=admin&directConnection=true
Full URI
mongodb://username:password@host:port/dbname?authSource=admin&directConnection=true
And if you get this error
MongoParseError: Password contains unescaped characters
wrap your password with encodeURIComponent function.
mongodb://username:${encodeURIComponent(password)}@host:port/dbname?authSource=admin&directConnection=true
A:
I changed useCreateIndex: true to useCreateIndex: undefined in MongooseModule.forRoot() config parameter object.
A:
I had an issue with the MongoDB Docker image version; version 3.6 solved the problem.
A:
Are you using Node 17 or above? Node prefers IPv6 over IPv4 since Node 17, so localhost will translate to ::1 and not 127.0.0.1 - related issue. Your local MongoDB server is probably running on 127.0.0.1 and thus your application is unable to connect. Try connecting to mongodb://127.0.0.1:27017/snaptoon as this answer suggested.
| Unable to connect to the database. Retrying | I'm trying to connect to the database, seems like the set-up is correct, but for some reason, it says that it is not available.
app.module.ts
import { Module } from "@nestjs/common"
import { MongooseModule } from "@nestjs/mongoose";
import { ConfigModule } from "../config";
import { CreatorModule } from "./creator.module";
@Module({
imports: [
MongooseModule.forRoot('mongodb://localhost:27017/snaptoon', {
useCreateIndex: true,
useUnifiedTopology: true,
useNewUrlParser: true,
}),
CreatorModule,
],
controllers: [],
providers: []
})
export class AppModule {}
The error is: ERROR [MongooseModule] Unable to connect to the database. Retrying (9)...
I'm using '@nestjs/mongoose': '9.0.2'
| [
"Use mongodb://127.0.0.1:27017/snaptoon instead of mongodb://localhost:27017/snaptoon as connection string. It worked for me.\n",
"I solved by updating manually mongoose version to 6.2.2\n WARN @nestjs/[email protected] requires a peer of mongoose@^6.0.2 but none is installed. You must install peer dependencies yourself.\n\nI realize due to this error on npm install\njust use:\n npm install [email protected] --save\n\n",
"For those people who cannot solve this problem by the answer above, according to the new specs of nestjs/mongoose.\nIt might be solved by deleting the line: useNewUrlParser: true.\nFor me, it works in both ways.\n",
"Worked for me when I've added \"directConnection=true\" to the connection string.\n\"If set to true, the driver will only connect to the host provided in the URI and will not discover other hosts in the cluster. Direct connections are not valid if multiple hosts are specified.\"\nBefore:\nMongooseModule.forRoot(\n `mongodb://${host}:${port}/${dbName}`,\n),\n\nAfter:\nMongooseModule.forRoot(\n `mongodb://${host}:${port}/${dbName}?directConnection=true`,\n),\n\nMy application was working fine in \"@nestjs/mongoose\": \"^8.0.0\" and I faced this error when I changed to \"@nestjs/mongoose\": \"^9.0.0\".\nDocs: https://www.mongodb.com/docs/drivers/go/current/fundamentals/connection/#connection-options\n",
"I'm trying to connect with remote database.\nI've solved this problem with this adding this 2 query parameter\n?authSource=admin&directConnection=true\nFull URI\nmongodb://username:password@host:port/dbname?authSource=admin&directConnection=true\nAnd if you get this error\nMongoParseError: Password contains unescaped characters\nwrap your password with encodeURIComponent function.\nmongodb://username:${encodeURIComponent(password)}@host:port/dbname?authSource=admin&directConnection=true\n",
"I changed useCreateIndex: true to useCreateIndex: undefined in MongooseModule.forRoot() config parameter object.\n",
"had an issue with MongoDB docker image version, solved problem with 3.6 version\n",
"Are you using Node 17 or above? Node now prefers IPv6 over IPv4 from Node 17 so localhost will translate to ::1: and not 127.0.0.1 - related issue. Your local MongoDB server is probably running on 127.0.0.1 and thus your application is unable to connect. Try connecting to mongodb://127.0.0.1:27017/snaptoon as this answer suggested.\n"
] | [
6,
3,
2,
2,
1,
0,
0,
0
] | [] | [] | [
"mongodb",
"mongoose",
"nestjs",
"node.js",
"typescript"
] | stackoverflow_0070730514_mongodb_mongoose_nestjs_node.js_typescript.txt |
Q:
How to export an object from a file that contains a list of objects and import this same object in another file to be able to use it?
How to export an object from a file that contains a list of objects and import this same object in another file to be able to use it?
// Example of Source File
const obj1 = {
element1: { property1: value1, property2: value2, property3: value3},
element2: { property1: value1, property2: value2, property3: value3},
element3: { property1: value1, property2: value2, property3: value3},
element4: { property1: value1, property2: value2, property3: value3},
element5: { property1: value1, property2: value2, property3: value3},
}
const obj2 = {
element1: { property1: value1, property2: value2, property3: value3},
element2: { property1: value1, property2: value2, property3: value3},
element3: { property1: value1, property2: value2, property3: value3},
element4: { property1: value1, property2: value2, property3: value3},
element5: { property1: value1, property2: value2, property3: value3},
}
export ????
// Example of Main File Importing from Source File
'use strict';
import ???
console.log(importedObject);
A:
Using ES6 Static Named import / export (ES6 Modules) syntax:
./file-a.js
const obj1 = {
element1: { property1: value1, property2: value2, property3: value3},
element2: { property1: value1, property2: value2, property3: value3},
element3: { property1: value1, property2: value2, property3: value3},
element4: { property1: value1, property2: value2, property3: value3},
element5: { property1: value1, property2: value2, property3: value3}
}
const obj2 = {
element1: { property1: value1, property2: value2, property3: value3},
element2: { property1: value1, property2: value2, property3: value3},
element3: { property1: value1, property2: value2, property3: value3},
element4: { property1: value1, property2: value2, property3: value3},
element5: { property1: value1, property2: value2, property3: value3}
}
export {obj1, obj2};
./file-b.js
import {obj1, obj2} from './file-a.js';
console.log(obj1);
console.log(obj2);
A:
If you are using CommonJS syntax:
For exporting:
// objects.js
const obj1 = {...}
const obj2 = {...}
module.exports = { obj1, obj2 };
For importing the same:
// import file
const {obj1, obj2} = require('./objects.js');
Mostly this is used in NodeJs backend structure.
A:
You could also use:
exports.obj1 = {
element1: { property1: value1, property2: value2, property3: value3},
element2: { property1: value1, property2: value2, property3: value3},
element3: { property1: value1, property2: value2, property3: value3},
element4: { property1: value1, property2: value2, property3: value3},
element5: { property1: value1, property2: value2, property3: value3}
}
exports.obj2 = {
element1: { property1: value1, property2: value2, property3: value3},
element2: { property1: value1, property2: value2, property3: value3},
element3: { property1: value1, property2: value2, property3: value3},
element4: { property1: value1, property2: value2, property3: value3},
element5: { property1: value1, property2: value2, property3: value3}
}
Your import syntax should work
| How to export an object from a file that contains a list of objects and import this same object in another file to be able to use it? | How to export an object from a file that contains a list of objects and import this same object in another file to be able to use it?
// Example of Source File
const obj1 = {
element1: { property1: value1, property2: value2, property3: value3},
element2: { property1: value1, property2: value2, property3: value3},
element3: { property1: value1, property2: value2, property3: value3},
element4: { property1: value1, property2: value2, property3: value3},
element5: { property1: value1, property2: value2, property3: value3},
}
const obj2 = {
element1: { property1: value1, property2: value2, property3: value3},
element2: { property1: value1, property2: value2, property3: value3},
element3: { property1: value1, property2: value2, property3: value3},
element4: { property1: value1, property2: value2, property3: value3},
element5: { property1: value1, property2: value2, property3: value3},
}
export ????
// Example of Main File Importing from Source File
'use strict';
import ???
console.log(importedObject);
| [
"Using ES6 Static Named import / export (ES6 Modules) syntax:\n./file-a.js\nconst obj1 = {\n element1: { property1: value1, property2: value2, property3: value3},\n element2: { property1: value1, property2: value2, property3: value3},\n element3: { property1: value1, property2: value2, property3: value3},\n element4: { property1: value1, property2: value2, property3: value3},\n element5: { property1: value1, property2: value2, property3: value3}\n}\n\nconst obj2 = {\n element1: { property1: value1, property2: value2, property3: value3},\n element2: { property1: value1, property2: value2, property3: value3},\n element3: { property1: value1, property2: value2, property3: value3},\n element4: { property1: value1, property2: value2, property3: value3},\n element5: { property1: value1, property2: value2, property3: value3}\n}\n\nexport {obj1, obj2};\n\n./file-b.js\nimport {obj1, obj2} from './file-a.js';\n\nconsole.log(obj1);\nconsole.log(obj2);\n\n",
"If you are using CommonJS syntax:\nFor exporting:\n\n// objects.js\n\nconst obj1 = {...}\n\nconst obj2 = {...}\n\nmodule.exports = { obj1, obj2 };\n\n\nFor importing the same:\n// import file \n\nconst {obj1, obj2} = require(./objects.js);\n\n\nMostly this is used in NodeJs backend structure.\n",
"You could also use:\nexports.obj1 = {\n element1: { property1: value1, property2: value2, property3: value3},\n element2: { property1: value1, property2: value2, property3: value3},\n element3: { property1: value1, property2: value2, property3: value3},\n element4: { property1: value1, property2: value2, property3: value3},\n element5: { property1: value1, property2: value2, property3: value3}\n}\n\nexports.obj2 = {\n element1: { property1: value1, property2: value2, property3: value3},\n element2: { property1: value1, property2: value2, property3: value3},\n element3: { property1: value1, property2: value2, property3: value3},\n element4: { property1: value1, property2: value2, property3: value3},\n element5: { property1: value1, property2: value2, property3: value3}\n}\n\nYour input syntax should work\n"
] | [
1,
0,
0
] | [] | [] | [
"ecmascript_6",
"javascript",
"node.js"
] | stackoverflow_0065674710_ecmascript_6_javascript_node.js.txt |
Q:
Can't publish script to PowerShell gallery, getting error
Publish-Script -Path "path-to-script.ps1" -NuGetApiKey 123456789
after doing that, I get this error in PowerShell 7.3:
Write-Error: Failed to generate the compressed file for script 'C:\Program Files\dotnet\dotnet.exe failed to pack: error '.
and I get this error in PowerShell 5.1:
Publish-PSArtifactUtility : Failed to generate the compressed file for script 'C:\Program Files\dotnet\dotnet.exe
failed to pack: error
'.
At C:\Program Files\WindowsPowerShell\Modules\PowerShellGet\2.2.5\PSModule.psm1:11338 char:17
+ ... Publish-PSArtifactUtility @PublishPSArtifactUtility_Param ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (:) [Write-Error], WriteErrorException
+ FullyQualifiedErrorId : FailedToCreateCompressedScript,Publish-PSArtifactUtility
my script has no dependency.
this problem has been going on for the past 2 weeks.
I even gave my script and API key to a friend in another country, and they received the same error too.
how can I fix this? I've published previous versions of this script before at least 6 times.
I've tried resetting my API key and running PowerShell as admin, didn't fix it.
Update:
I installed .NET 7 runtimes x64 and used this command from this answer on PowerShell 5.1:
# find the file having wrong .NET version
$path = Get-ChildItem (Get-Module PowerShellGet -ListAvailable).ModuleBase -Recurse -File |
Select-String -Pattern netcoreapp2.0 | ForEach-Object Path
# unload the module
Remove-Module PowerShellGet -Verbose -Force -EA 0
# update the file
$path | ForEach-Object {
(Get-Content -LiteralPath $_ -Raw).Replace('netcoreapp2.0', 'net7') |
Set-Content $_
}
Import-Module PowerShellGet -Force -Verbose
# now try to publish
but still getting error:
Publish-PSArtifactUtility : Failed to generate the compressed file for script 'C:\Program Files\dotnet\dotnet.exe
failed to pack: error
'.
At C:\Program Files\WindowsPowerShell\Modules\PowerShellGet\2.2.5\PSModule.psm1:11338 char:17
+ ... Publish-PSArtifactUtility @PublishPSArtifactUtility_Param ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (:) [Write-Error], WriteErrorException
+ FullyQualifiedErrorId : FailedToCreateCompressedScript,Publish-PSArtifactUtility
A:
I fixed the problem by installing .NET Core 2.0 SDK
https://dotnet.microsoft.com/en-us/download/dotnet/thank-you/sdk-2.1.202-windows-x64-installer
| Can't publish script to PowerShell gallery, getting error | Publish-Script -Path "path-to-script.ps1" -NuGetApiKey 123456789
after doing that, I get this error in PowerShell 7.3:
Write-Error: Failed to generate the compressed file for script 'C:\Program Files\dotnet\dotnet.exe failed to pack: error '.
and I get this error in PowerShell 5.1:
Publish-PSArtifactUtility : Failed to generate the compressed file for script 'C:\Program Files\dotnet\dotnet.exe
failed to pack: error
'.
At C:\Program Files\WindowsPowerShell\Modules\PowerShellGet\2.2.5\PSModule.psm1:11338 char:17
+ ... Publish-PSArtifactUtility @PublishPSArtifactUtility_Param ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (:) [Write-Error], WriteErrorException
+ FullyQualifiedErrorId : FailedToCreateCompressedScript,Publish-PSArtifactUtility
my script has no dependency.
this problem has been going on for the past 2 weeks.
I even gave my script with API key to a friend from another country and they receive the same error too.
how can I fix this? I've published previous versions of this script before at least 6 times.
I've tried resetting my API key and running PowerShell as admin, didn't fix it.
Update:
I installed .NET 7 runtimes x64 and used this command from this answer on PowerShell 5.1:
# find the file having wrong .NET version
$path = Get-ChildItem (Get-Module PowerShellGet -ListAvailable).ModuleBase -Recurse -File |
Select-String -Pattern netcoreapp2.0 | ForEach-Object Path
# unload the module
Remove-Module PowerShellGet -Verbose -Force -EA 0
# update the file
$path | ForEach-Object {
(Get-Content -LiteralPath $_ -Raw).Replace('netcoreapp2.0', 'net7') |
Set-Content $_
}
Import-Module PowerShellGet -Force -Verbose
# now try to publish
but still getting error:
Publish-PSArtifactUtility : Failed to generate the compressed file for script 'C:\Program Files\dotnet\dotnet.exe
failed to pack: error
'.
At C:\Program Files\WindowsPowerShell\Modules\PowerShellGet\2.2.5\PSModule.psm1:11338 char:17
+ ... Publish-PSArtifactUtility @PublishPSArtifactUtility_Param ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (:) [Write-Error], WriteErrorException
+ FullyQualifiedErrorId : FailedToCreateCompressedScript,Publish-PSArtifactUtility
enter image description here
| [
"I fixed the problem by installing .NET Core 2.0 SDK\nhttps://dotnet.microsoft.com/en-us/download/dotnet/thank-you/sdk-2.1.202-windows-x64-installer\n"
] | [
0
] | [] | [] | [
"powershell",
"powershell_7.3",
"publish"
] | stackoverflow_0074677502_powershell_powershell_7.3_publish.txt |
Q:
Convert vector to string c++
I'm trying to convert a vector<wchar_t> to string (and then print it).
std::string(vector_.begin(), vector_.end());
This code works fine, except äöü ÄÖÜ ß.
They will be converted to:
���
I also tried converting to wstring and printing with wcout, but I got the same issue.
Thanks in advance!
A:
My Solution:
First I convert my vector<wchar_t> to a std::u16string like this:
std::u16string(buffer_.begin(), buffer_.end());
Then I use this function, which I found somewhere on here:
std::string IO::Interface::UTF16_To_UTF8(std::u16string const& str) {
std::wstring_convert<std::codecvt_utf8_utf16<char16_t, 0x10ffff,
std::codecvt_mode::little_endian>, char16_t> cnv;
std::string utf8 = cnv.to_bytes(str);
if(cnv.converted() < str.size())
throw std::runtime_error("incomplete conversion");
return utf8;
}
I did not write the conversion function, but it works just as expected.
With this I can successfully convert a vector<wchar_t> to string
| Convert vector to string c++ | I'm trying to convert a vector<wchar_t> to string (and then print it).
std::string(vector_.begin(), vector_.end());
This code works fine, except äöü ÄÖÜ ß.
They will be converted to:
���
I also tried converting to wstring and printing with wcout, but I got the same issue.
Thanks in advance!
| [
"My Solution:\nFirst I convert my vector<wchar_t> to an utf16string like this:\nstd::u16string(buffer_.begin(), buffer_.end());\n\nThen I use this function, I found somewhere on here:\nstd::string IO::Interface::UTF16_To_UTF8(std::u16string const& str) {\n std::wstring_convert<std::codecvt_utf8_utf16<char16_t, 0x10ffff,\n std::codecvt_mode::little_endian>, char16_t> cnv;\n std::string utf8 = cnv.to_bytes(str);\n if(cnv.converted() < str.size())\n throw std::runtime_error(\"incomplete conversion\");\n return utf8;\n}\n\nI did not write the conversion function, but it works just as expected.\nWith this I can successfully convert a vector<wchar_t> to string\n"
] | [
0
] | [] | [] | [
"c++",
"string",
"type_conversion",
"wchar_t"
] | stackoverflow_0074638562_c++_string_type_conversion_wchar_t.txt |
Q:
Error in code while using Firebase in Android Studio
Error : Attempt to invoke virtual method 'android.text.Editable com.google.android.material.textfield.TextInputEditText.getText()' on a null object reference
I'm very new to Android Dev and Firebase. Trying to register an account in firebase database but keep getting this error. Please help me with a fix.
The code where I am getting the error:
final String email = loginEmail_PatRegPage.getText(). toString().trim();
final String password = loginPassword_PatRegPage.getText().toString().trim();
final String fullName = registrationFullName_PatReg.getText().toString().trim();
final String idNumber = registrationIDNumber_PatReg.getText().toString().trim();
final String mobileNumber = registrationMobileNumber_PatReg.getText().toString().trim();
Android Studio says:
Method invocation 'toString' may produce 'NullPointerException'
Tried using the recommendation given by Android Studio itself, through the "show context actions" option. The code then became like this:
final String email = Objects.requireNonNull(loginEmail_PatRegPage.getText()).toString().trim();
final String password = Objects.requireNonNull(loginPassword_PatRegPage.getText()).toString().trim();
final String fullName = Objects.requireNonNull(registrationFullName_PatReg.getText()).toString().trim();
final String idNumber = Objects.requireNonNull(registrationIDNumber_PatReg.getText()).toString().trim();
final String mobileNumber = Objects.requireNonNull(registrationMobileNumber_PatReg.getText()).toString().trim();
But I am still receiving the same error.
A:
The warning "Method invocation 'toString' may produce 'NullPointerException'" is telling you that the toString method may be called on a null value, which would cause a NullPointerException to be thrown.
To fix this, check whether the value returned by getText() is null before calling the toString method. Note that Java has no null-safe (?.) or null-coalescing (??) operator, so use an explicit null check, for example a ternary expression that falls back to an empty string:
final String email = loginEmail_PatRegPage.getText() != null ? loginEmail_PatRegPage.getText().toString().trim() : "";
final String password = loginPassword_PatRegPage.getText() != null ? loginPassword_PatRegPage.getText().toString().trim() : "";
final String fullName = registrationFullName_PatReg.getText() != null ? registrationFullName_PatReg.getText().toString().trim() : "";
final String idNumber = registrationIDNumber_PatReg.getText() != null ? registrationIDNumber_PatReg.getText().toString().trim() : "";
final String mobileNumber = registrationMobileNumber_PatReg.getText() != null ? registrationMobileNumber_PatReg.getText().toString().trim() : "";
This prevents the NullPointerException from being thrown if getText() returns null. Note, however, that the runtime error quoted in the question ("...getText() on a null object reference") means the TextInputEditText itself is null, so also verify that each findViewById call uses the correct view ID.
Alternatively, you could use the Objects.requireNonNull method to check if the value returned by getText() is null and throw an exception if it is, like this:
final String password = Objects.requireNonNull(loginPassword_PatRegPage.getText()).toString().trim();
final String fullName = Objects.requireNonNull(registrationFullName_PatReg.getText()).toString().trim();
final String idNumber = Objects.requireNonNull(registrationIDNumber_PatReg.getText()).toString().trim();
final String mobileNumber = Objects.requireNonNull(registrationMobileNumber_PatReg.getText()).toString().trim();
This approach will throw a NullPointerException if getText() returns null, which may be more appropriate if you want to make sure that getText() never returns null in your code.
| Error in code while using Firebase in Android Studio |
Error : Attempt to invoke virtual method 'android.text.Editable com.google.android.material.textfield.TextInputEditText.getText()' on a null object reference
I'm very new to Android Dev and Firebase. Trying to register an account in firebase database but keep getting this error. Please help me with a fix.
The code where I am getting the error:
final String email = loginEmail_PatRegPage.getText(). toString().trim();
final String password = loginPassword_PatRegPage.getText().toString().trim();
final String fullName = registrationFullName_PatReg.getText().toString().trim();
final String idNumber = registrationIDNumber_PatReg.getText().toString().trim();
final String mobileNumber = registrationMobileNumber_PatReg.getText().toString().trim();
Android Studio says:
Method invocation 'toString' may produce 'NullPointerException'
Tried using the recommendation given by Android Studio itself, through the "show context actions" option. The code then became like this:
final String email = Objects.requireNonNull(loginEmail_PatRegPage.getText()).toString().trim();
final String password = Objects.requireNonNull(loginPassword_PatRegPage.getText()).toString().trim();
final String fullName = Objects.requireNonNull(registrationFullName_PatReg.getText()).toString().trim();
final String idNumber = Objects.requireNonNull(registrationIDNumber_PatReg.getText()).toString().trim();
final String mobileNumber = Objects.requireNonNull(registrationMobileNumber_PatReg.getText()).toString().trim();
But I am still receiving the same error.
| [
"The error message \"Method invocation 'toString' may produce 'NullPointerException'\" is telling you that the toString method may be called on a null value, which will cause a NullPointerException to be thrown.\nTo fix this, you can check if the value returned by getText() is null before calling the toString method. You can use the null coalescing operator (??) to return an empty string instead of null if getText() returns null, like this:\nfinal String email = loginEmail_PatRegPage.getText()?.toString().trim() ?? \"\";\nfinal String password = loginPassword_PatRegPage.getText()?.toString().trim() ?? \"\";\nfinal String fullName = registrationFullName_PatReg.getText()?.toString().trim() ?? \"\";\nfinal String idNumber = registrationIDNumber_PatReg.getText()?.toString().trim() ?? \"\";\nfinal String mobileNumber = registrationMobileNumber_PatReg.getText()?.toString().trim() ?? \"\";\n\nThis will prevent the NullPointerException from being thrown if getText() returns null.\nAlternatively, you could use the Objects.requireNonNull method to check if the value returned by getText() is null and throw an exception if it is, like this:\nfinal String password = Objects.requireNonNull(loginPassword_PatRegPage.getText()).toString().trim();\nfinal String fullName = Objects.requireNonNull(registrationFullName_PatReg.getText()).toString().trim();\nfinal String idNumber = Objects.requireNonNull(registrationIDNumber_PatReg.getText()).toString().trim();\nfinal String mobileNumber = Objects.requireNonNull(registrationMobileNumber_PatReg.getText()).toString().trim();\n\n\nThis approach will throw a NullPointerException if getText() returns null, which may be more appropriate if you want to make sure that getText() never returns null in your code.\n"
] | [
0
] | [] | [] | [
"android"
] | stackoverflow_0074675061_android.txt |
Q:
Problem storing characters as an string inside a while loop
I'm having problems with this simple example.
The program asks how many letters are needed to form a complete word, then asks for each letter individually. That part works, but I can't get the code to accumulate the character from each iteration until the word is complete and then print it to confirm the word.
E.g. Let's say house, which has 5 letters.
int numbersOfCharacters=5;
int counter=0;
char character;
string phrase;
while (counter < numbersOfCharacters)
{
cout << "Introduce character's number" << counter << ": ";
cin >> character;
counter = counter + 1;
phrase=character+character; //I'm not sure if I need an array here.
}
cout << "Concatenated characters: " << phrase << endl;
The output is:
Introduce the character number 1: h
Introduce the character number 2: o
Introduce the character number 3: u
Introduce the character number 4: s
Introduce the character number 5: e
Concatenated characters: ?
And the expected output should be:
Concatenated characters: house
A:
The expression phrase=character+character; doesn't do what you think it does. You are taking the user's input, adding its numeric value to itself, and then assigning (not appending) that numeric result as a char to the string.
So, for example, on the 1st iteration, the letter h has an ASCII value of 104, which you double to 208, which is outside the ASCII range. On the next iteration, the letter o is ASCII 111 which you double to 222, which is also outside of ASCII. And so on. That is why the final string is not house like you are expecting.
Perhaps you meant to use phrase=phrase+character; instead? That does work, because std::string provides an operator+ overload that takes a char, but it builds a temporary string on every iteration.
A simpler option is to append in place with string::operator+=:
phrase += character;
Or the string::push_back() method:
phrase.push_back(character);
| Problem storing characters as an string inside a while loop | I'm having problems with this simple example.
The program inquires as to how many letters are required to form a complete word. Then it will ask for each letter individually, which is fine, but I can't make the code save the value from the current character and the next one from the next iteration until the number of letters finishes the word and print it to confirm the word.
E.g. Let's say house, which has 5 letters.
int numbersOfCharacters=5;
int counter=0;
char character;
string phrase;
while (counter < numbersOfCharacters)
{
cout << "Introduce character's number" << counter << ": ";
cin >> character;
counter = counter + 1;
phrase=character+character; //I'm not sure if I need an array here.
}
cout << "Concatenated characters: " << phrase << endl;
The output is:
Introduce the character number 1: h
Introduce the character number 2: o
Introduce the character number 3: u
Introduce the character number 4: s
Introduce the character number 5: e
Concatenated characters: ?
And the expected output should be:
Concatenated characters: house
| [
"The expression phrase=character+character; doesn't do what you think it does. You are taking the user's input, adding its numeric value to itself, and then assigning (not appending) that numeric result as a char to the string.\nSo, for example, on the 1st iteration, the letter h has an ASCII value of 104, which you double to 208, which is outside the ASCII range. On the next iteration, the letter o is ASCII 111 which you double to 222, which is also outside of ASCII. And so on. That is why the final string is not house like you are expecting.\nPerhaps you meant to use phrase=phrase+character; instead? But, that won't work either, because you can't concatenate a char value directly to a string object using operator+.\nWhat you can do is use string::operator+= instead:\nphrase += character;\nOr the string::push_back() method:\nphrase.push_back(character);\n"
] | [
0
] | [
"In your code, you must declare character array and store each word in char array and also increment in index using counter veriable.\nbelow i attached a solution,\nint numbersOfCharacters=5;\nint counter=0;\nchar character;\nchar phrase[numbersOfCharacters];\n\nwhile (counter < numbersOfCharacters)\n{\n cout << \"Introduce character's number\" << counter << \": \";\n cin >> character; \n phrase[counter]=character;\n counter++;\n\n}\ncout << \"Concatenated characters: \" << phrase << endl;\n\nOutput:\nenter image description here\n"
] | [
-3
] | [
"c++"
] | stackoverflow_0074677273_c++.txt |
Q:
How to define a type so it can be static initialized?
I have been learning about trivial and standard layout types. I think I understand the basics behind it, but there is still something I am missing.
Please have a look at the two following examples:
Example 1:
int main()
{
struct SomeType
{
SomeType() = default;
int x;
};
std::cout << "is_trivially_copyable: " << std::is_trivially_copyable<SomeType>::value << "\n";
std::cout << "is_trivial: " << std::is_trivial<SomeType>::value << "\n";
std::cout << "is_standard_layout: " << std::is_standard_layout<SomeType>::value << "\n";
static constexpr SomeType someType{};
return 0;
}
SomeType is trivial and I am able to statically initialize it with "static constexpr SomeType someType{};".
Example 2:
int main()
{
struct SomeType
{
SomeType() {};
int x;
};
std::cout << "is_trivially_copyable: " << std::is_trivially_copyable<SomeType>::value << "\n";
std::cout << "is_trivial: " << std::is_trivial<SomeType>::value << "\n";
std::cout << "is_standard_layout: " << std::is_standard_layout<SomeType>::value << "\n";
static constexpr SomeType someType{};
return 0;
}
SomeType is not trivial, but it is standard layout. The line "static constexpr SomeType someType{};" fails with the error "Error C2127 'someType': illegal initialization of 'constexpr' entity with a non-constant expression ConsoleApplication2" on the MSVC compiler. If I make the constructor constexpr and initialize x in it, this works, so my question is the following.
If I understood it right, trivial types can be statically initialized, but what about non-trivial types?
Let me rephrase it for easier understanding: how do I define a type so it can be statically initialized?
| How to define a type so it can be static initialized? | I have been learning about trivial and standard layout types. I think I understand the basics behind it, but there is still something I am missing.
Please have a look at the two following examples:
Example 1:
int main()
{
struct SomeType
{
SomeType() = default;
int x;
};
std::cout << "is_trivially_copyable: " << std::is_trivially_copyable<SomeType>::value << "\n";
std::cout << "is_trivial: " << std::is_trivial<SomeType>::value << "\n";
std::cout << "is_standard_layout: " << std::is_standard_layout<SomeType>::value << "\n";
static constexpr SomeType someType{};
return 0;
}
SomeType is trivial and I am able to statically initialize it: "static constexpr SomeType someType{};".
Example 2:
#include <iostream>
#include <type_traits>
int main()
{
struct SomeType
{
SomeType() {};
int x;
};
std::cout << "is_trivially_copyable: " << std::is_trivially_copyable<SomeType>::value << "\n";
std::cout << "is_trivial: " << std::is_trivial<SomeType>::value << "\n";
std::cout << "is_standard_layout: " << std::is_standard_layout<SomeType>::value << "\n";
static constexpr SomeType someType{};
return 0;
}
SomeType is not trivial, but is standard layout; the line "static constexpr SomeType someType{};" fails with the error "Error C2127 'someType': illegal initialization of 'constexpr' entity with a non-constant expression" on the MSVC compiler. If I make the constructor constexpr and initialize x in the constructor, this will work, so my question is the following.
If I understood it right, trivial types can be statically initialized, but what about non-trivial types?
Let me maybe rephrase it for easier understanding, how to define a type so it can be statically initialized?
| [] | [] | [
"The answer to your question is indeed not so clear. For example the programming language Ada has a whole mechanism called \"elaboration\" that specifies how constant data is defined and how constant data gets a value before code is executed. It must be precisely defined, what data is calculated at compile time and what data is compiled at runtime.\nComing to C++ it is also important to have this always in mind. Consider for example you have a constant constexpr float x_1 = a + b. That constant is calculated by the compiler using the addition function on the processor of the machine where you do the compilation.\nNow consider a variable float x_2 = a + b. That is calculated during runtime. If you execute the code on the same machine, where you compile, the result may be the same. But if you execute it on another machine or even on another processor, the result may be even different (different rounding errors). When you compare x_1 and x_2 they may differ in an unexpected way. This example shall only illustrate how important it is to clearly know if things are calculated on the target machine (where the code runs) or on the host machine (where the code is compiled). Floating point operations are normally not identical (see How to keep float/double arithmetic deterministic?).\nThere can be also other situation where it is also of importance to exactly know what is calculated by the compiler and what is calculated at runtime (e.g. for the certification of safety critical software).\nThat been said, the answer to your question: It is clearly defined, what can be a const expression and what not. Allowed in const expressions is a subset of C++ that can be executed by the compiler alone in a kind of pre-compilation phase. The precise subset of C++ is defined as you can see here: https://en.cppreference.com/w/cpp/language/constant_expression\nIt should be clear that here no exhaustive explication for all these requirements of const expressions can be given. 
Luckily most things are intuitive. In detail there are 37 language constructs that are not allowed in const expressions.\nIn your case, the code does not compile because you have this construct:\n\n\na function call expression that calls a function (or a constructor) that is not declared constexpr\n\n\nThat means, const expressions can only contain other const expression calls but not arbitrary function calls. That makes sense because your constructor could hypothetically have side effects and change some global variables. In that case it would not only produce something constant. The C++ language rules prevent you from doing such things.\nThe default constructor (which you have in your second example) is not a const expression. For more information about the default constructor that is created for you see here: https://en.cppreference.com/w/cpp/language/default_constructor\n\nIf this satisfies the requirements of a constexpr constructor (until C++23) / constexpr function (since C++23), the generated constructor is constexpr.\n\nIf it is an option for you to make your constructor a const expression it will look this way:\n struct SomeType\n    {\n        constexpr SomeType() : x(0) {};\n        int x;\n    };\n\n\nFor me this compiles when I use the C++ standard C++20.\n"
] | [
-1
] | [
"c++",
"standard_layout",
"static_initialization",
"trivially_copyable"
] | stackoverflow_0074677235_c++_standard_layout_static_initialization_trivially_copyable.txt |
Q:
Cannot access class 'com.google.common.util.concurrent.ListenableFuture'. Check your module classpath for missing or conflicting dependencies
I am trying to integrate CameraX in my flutter app but I get an error saying Cannot access class 'com.google.common.util.concurrent.ListenableFuture'. Check your module classpath for missing or conflicting dependencies
The error comes from the line below:
val cameraProviderFuture = ProcessCameraProvider.getInstance(context)
Below is my native view
class CealScanQrView(val context: Context, id: Int, creationParams: Map<String?, Any?>?) :
PlatformView {
private var mCameraProvider: ProcessCameraProvider? = null
private var preview: PreviewView
private var linearLayout: LinearLayout = LinearLayout(context)
private lateinit var cameraExecutor: ExecutorService
private lateinit var options: BarcodeScannerOptions
private lateinit var scanner: BarcodeScanner
private var analysisUseCase: ImageAnalysis = ImageAnalysis.Builder()
.build()
companion object {
private val REQUIRED_PERMISSIONS = mutableListOf(Manifest.permission.CAMERA).toTypedArray()
}
init {
val linearLayoutParams = ViewGroup.LayoutParams(
ViewGroup.LayoutParams.WRAP_CONTENT,
ViewGroup.LayoutParams.WRAP_CONTENT
)
linearLayout.layoutParams = linearLayoutParams
linearLayout.orientation = LinearLayout.VERTICAL
preview = PreviewView(context)
preview.layoutParams = ViewGroup.LayoutParams(
ViewGroup.LayoutParams.MATCH_PARENT,
ViewGroup.LayoutParams.MATCH_PARENT
)
linearLayout.addView(preview)
setUpCamera()
}
private fun setUpCamera(){
if (allPermissionsGranted()) {
startCamera()
}
cameraExecutor = Executors.newSingleThreadExecutor()
options = BarcodeScannerOptions.Builder()
.setBarcodeFormats(
Barcode.FORMAT_QR_CODE)
.build()
scanner = BarcodeScanning.getClient(options)
analysisUseCase.setAnalyzer(
// newSingleThreadExecutor() will let us perform analysis on a single worker thread
Executors.newSingleThreadExecutor()
) { imageProxy ->
processImageProxy(scanner, imageProxy)
}
}
override fun getView(): View {
return linearLayout
}
override fun dispose() {
cameraExecutor.shutdown()
}
@SuppressLint("UnsafeOptInUsageError")
private fun processImageProxy(
barcodeScanner: BarcodeScanner,
imageProxy: ImageProxy
) {
imageProxy.image?.let { image ->
val inputImage =
InputImage.fromMediaImage(
image,
imageProxy.imageInfo.rotationDegrees
)
barcodeScanner.process(inputImage)
.addOnSuccessListener { barcodeList ->
val barcode = barcodeList.getOrNull(0)
// `rawValue` is the decoded value of the barcode
barcode?.rawValue?.let { value ->
mCameraProvider?.unbindAll()
}
}
.addOnFailureListener {
// This failure will happen if the barcode scanning model
// fails to download from Google Play Services
}
.addOnCompleteListener {
// When the image is from CameraX analysis use case, must
// call image.close() on received images when finished
// using them. Otherwise, new images may not be received
// or the camera may stall.
imageProxy.image?.close()
imageProxy.close()
}
}
}
private fun allPermissionsGranted() = REQUIRED_PERMISSIONS.all {
ContextCompat.checkSelfPermission(context, it) == PackageManager.PERMISSION_GRANTED
}
private fun startCamera() {
val cameraProviderFuture = ProcessCameraProvider.getInstance(context)
cameraProviderFuture.addListener({
// Used to bind the lifecycle of cameras to the lifecycle owner
val cameraProvider: ProcessCameraProvider = cameraProviderFuture.get()
mCameraProvider = cameraProvider
// Preview
val surfacePreview = Preview.Builder()
.build()
.also {
it.setSurfaceProvider(preview.surfaceProvider)
}
// Select back camera as a default
val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA
try {
// Unbind use cases before rebinding
cameraProvider.unbindAll()
// Bind use cases to camera
cameraProvider.bindToLifecycle(
(context as FlutterActivity),
cameraSelector,
surfacePreview,
analysisUseCase,
)
} catch (exc: Exception) {
// Do nothing on exception
}
}, ContextCompat.getMainExecutor(context))
}
}
class CealScanQrViewFactory : PlatformViewFactory(StandardMessageCodec.INSTANCE) {
override fun create(context: Context, viewId: Int, args: Any?): PlatformView {
val creationParams = args as Map<String?, Any?>?
return CealScanQrView(context, viewId, creationParams)
}
}
A:
If you have the Google Guava dependency synced on your project and are still getting the error message, there may be a conflict between the version of Google Guava you are using and the version of the AndroidX CameraX library. The ListenableFuture class was moved from Google Guava to the AndroidX CameraX library in version 1.0.0-alpha13, so if you are using an older version of CameraX that still depends on Google Guava, there may be a conflict between the two libraries.
To fix this issue, you can try updating the version of the AndroidX CameraX library that you are using to the latest version (1.0.0-beta04 as of this writing). You can do this by adding the following dependency to your build.gradle file:
implementation "androidx.camera:camera-camera2:1.0.0-beta04"
This will update the version of the CameraX library that your project is using to the latest version, which should resolve the conflict and allow you to use the ListenableFuture class without any errors.
Once you have added the updated dependency, you will need to sync your project with Gradle in order for the changes to take effect. You can do this by clicking the "Try Again" button in the error message, or by running the ./gradlew build command from the command line. This will build your project with the updated dependencies and should fix the issue.
| Cannot access class 'com.google.common.util.concurrent.ListenableFuture'. Check your module classpath for missing or conflicting dependencies | I am trying to integrate CameraX in my flutter app but I get an error saying Cannot access class 'com.google.common.util.concurrent.ListenableFuture'. Check your module classpath for missing or conflicting dependencies
The error comes from the line below:
val cameraProviderFuture = ProcessCameraProvider.getInstance(context)
Below is my native view
class CealScanQrView(val context: Context, id: Int, creationParams: Map<String?, Any?>?) :
PlatformView {
private var mCameraProvider: ProcessCameraProvider? = null
private var preview: PreviewView
private var linearLayout: LinearLayout = LinearLayout(context)
private lateinit var cameraExecutor: ExecutorService
private lateinit var options: BarcodeScannerOptions
private lateinit var scanner: BarcodeScanner
private var analysisUseCase: ImageAnalysis = ImageAnalysis.Builder()
.build()
companion object {
private val REQUIRED_PERMISSIONS = mutableListOf(Manifest.permission.CAMERA).toTypedArray()
}
init {
val linearLayoutParams = ViewGroup.LayoutParams(
ViewGroup.LayoutParams.WRAP_CONTENT,
ViewGroup.LayoutParams.WRAP_CONTENT
)
linearLayout.layoutParams = linearLayoutParams
linearLayout.orientation = LinearLayout.VERTICAL
preview = PreviewView(context)
preview.layoutParams = ViewGroup.LayoutParams(
ViewGroup.LayoutParams.MATCH_PARENT,
ViewGroup.LayoutParams.MATCH_PARENT
)
linearLayout.addView(preview)
setUpCamera()
}
private fun setUpCamera(){
if (allPermissionsGranted()) {
startCamera()
}
cameraExecutor = Executors.newSingleThreadExecutor()
options = BarcodeScannerOptions.Builder()
.setBarcodeFormats(
Barcode.FORMAT_QR_CODE)
.build()
scanner = BarcodeScanning.getClient(options)
analysisUseCase.setAnalyzer(
// newSingleThreadExecutor() will let us perform analysis on a single worker thread
Executors.newSingleThreadExecutor()
) { imageProxy ->
processImageProxy(scanner, imageProxy)
}
}
override fun getView(): View {
return linearLayout
}
override fun dispose() {
cameraExecutor.shutdown()
}
@SuppressLint("UnsafeOptInUsageError")
private fun processImageProxy(
barcodeScanner: BarcodeScanner,
imageProxy: ImageProxy
) {
imageProxy.image?.let { image ->
val inputImage =
InputImage.fromMediaImage(
image,
imageProxy.imageInfo.rotationDegrees
)
barcodeScanner.process(inputImage)
.addOnSuccessListener { barcodeList ->
val barcode = barcodeList.getOrNull(0)
// `rawValue` is the decoded value of the barcode
barcode?.rawValue?.let { value ->
mCameraProvider?.unbindAll()
}
}
.addOnFailureListener {
// This failure will happen if the barcode scanning model
// fails to download from Google Play Services
}
.addOnCompleteListener {
// When the image is from CameraX analysis use case, must
// call image.close() on received images when finished
// using them. Otherwise, new images may not be received
// or the camera may stall.
imageProxy.image?.close()
imageProxy.close()
}
}
}
private fun allPermissionsGranted() = REQUIRED_PERMISSIONS.all {
ContextCompat.checkSelfPermission(context, it) == PackageManager.PERMISSION_GRANTED
}
private fun startCamera() {
val cameraProviderFuture = ProcessCameraProvider.getInstance(context)
cameraProviderFuture.addListener({
// Used to bind the lifecycle of cameras to the lifecycle owner
val cameraProvider: ProcessCameraProvider = cameraProviderFuture.get()
mCameraProvider = cameraProvider
// Preview
val surfacePreview = Preview.Builder()
.build()
.also {
it.setSurfaceProvider(preview.surfaceProvider)
}
// Select back camera as a default
val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA
try {
// Unbind use cases before rebinding
cameraProvider.unbindAll()
// Bind use cases to camera
cameraProvider.bindToLifecycle(
(context as FlutterActivity),
cameraSelector,
surfacePreview,
analysisUseCase,
)
} catch (exc: Exception) {
// Do nothing on exception
}
}, ContextCompat.getMainExecutor(context))
}
}
class CealScanQrViewFactory : PlatformViewFactory(StandardMessageCodec.INSTANCE) {
override fun create(context: Context, viewId: Int, args: Any?): PlatformView {
val creationParams = args as Map<String?, Any?>?
return CealScanQrView(context, viewId, creationParams)
}
}
| [
"If you have the Google Guava dependency synced on your project and are still getting the error message, there may be a conflict between the version of Google Guava you are using and the version of the AndroidX CameraX library. The ListenableFuture class was moved from Google Guava to the AndroidX CameraX library in version 1.0.0-alpha13, so if you are using an older version of CameraX that still depends on Google Guava, there may be a conflict between the two libraries.\nTo fix this issue, you can try updating the version of the AndroidX CameraX library that you are using to the latest version (1.0.0-beta04 as of this writing). You can do this by adding the following dependency to your build.gradle file:\nimplementation \"androidx.camera:camera-camera2:1.0.0-beta04\"\n\nThis will update the version of the CameraX library that your project is using to the latest version, which should resolve the conflict and allow you to use the ListenableFuture class without any errors.\nOnce you have added the updated dependency, you will need to sync your project with Gradle in order for the changes to take effect. You can do this by clicking the \"Try Again\" button in the error message, or by running the ./gradlew build command from the command line. This will build your project with the updated dependencies and should fix the issue.\n"
] | [
0
] | [] | [] | [
"android",
"android_camerax",
"dart",
"flutter",
"kotlin"
] | stackoverflow_0074650296_android_android_camerax_dart_flutter_kotlin.txt |
Q:
Tkinter canvas growing out of screen because of labels on canvas
I have a tkinter canvas where I put labels on. When too many labels are added to the canvas it grows out of the screen. How do I set a max size on the canvas?
middleCanvas = Canvas(window, bg="red", width=300, height=400)
middleCanvas.grid(column=1, row=3, sticky="N")
scroll_y.grid(column=2, row=3, sticky="NS")
middleCanvas.configure(yscrollcommand=scroll_y.set)
middleCanvas.configure(scrollregion=middleCanvas.bbox("all"))
messageLabel = Label(middleCanvas, text=line)
messageLabel.grid(column=1, row=messageRow)
Tried using a scrollbar, but the bar also goes out of screen and fills the slider.
A:
This also happened to me with buttons.
You can fix it by defining a WIDTH variable and setting it to the size you want. Then set the Label width to the WIDTH variable.
For example:
WIDTH = 5
messagelabel = Label(middleCanvas, text="A very, very, very very, very long string. ", width=WIDTH)
messagelabel.grid(column=1, row=messageRow)
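A label width cap limits each row, but the canvas itself can still grow taller with every new row. One common way to cap the canvas at a fixed size and scroll the overflow is to grid the labels into a frame embedded in the canvas via create_window, keeping the scrollregion in sync as rows are added. This is a minimal sketch, not taken from the answer above; the helper names (scroll_height, build_scrollable, demo) and the 300x400 size are assumptions for illustration:

```python
def scroll_height(content_height, max_height=400):
    # The canvas viewport never exceeds max_height; taller content scrolls.
    return min(content_height, max_height)

def build_scrollable(root, width=300, max_height=400):
    import tkinter as tk  # imported here so scroll_height stays testable headless

    canvas = tk.Canvas(root, bg="red", width=width, height=max_height)
    scroll_y = tk.Scrollbar(root, orient="vertical", command=canvas.yview)
    inner = tk.Frame(canvas)  # labels go in this frame, not straight on the canvas
    canvas.create_window((0, 0), window=inner, anchor="nw")
    canvas.configure(yscrollcommand=scroll_y.set)
    # Re-sync the scrollregion every time the frame grows with new labels.
    inner.bind("<Configure>",
               lambda e: canvas.configure(scrollregion=canvas.bbox("all")))
    canvas.grid(column=1, row=3, sticky="N")
    scroll_y.grid(column=2, row=3, sticky="NS")
    return canvas, inner

def demo():  # call manually; opens a window, so it does not run on import
    import tkinter as tk
    window = tk.Tk()
    canvas, inner = build_scrollable(window)
    for row in range(50):  # far more rows than fit in 400 px
        tk.Label(inner, text=f"message {row}").grid(column=1, row=row)
    window.mainloop()
```

Because the labels live in the embedded frame, the canvas stays at its fixed 300x400 size no matter how many rows are added; the scrollbar moves the frame instead of letting the canvas grow off screen.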
| Tkinter canvas growing out of screen because of labels on canvas | I have a tkinter canvas where I put labels on. When too many labels are added to the canvas it grows out of the screen. How do I set a max size on the canvas?
middleCanvas = Canvas(window, bg="red", width=300, height=400)
middleCanvas.grid(column=1, row=3, sticky="N")
scroll_y.grid(column=2, row=3, sticky="NS")
middleCanvas.configure(yscrollcommand=scroll_y.set)
middleCanvas.configure(scrollregion=middleCanvas.bbox("all"))
messageLabel = Label(middleCanvas, text=line)
messageLabel.grid(column=1, row=messageRow)
Tried using a scrollbar, but the bar also goes out of screen and fills the slider.
| [
"This also happened to me with buttons.\nYou can fix it by defining a WIDTH variable and set it to the size you want. Then set the Label width to the WIDTH variable.\nFor example:\nWIDTH = 5\nmessagelabel = Label(middleCanvas, text=\"A very, very, very very, very long string. \", width=WIDTH)\nmessagelabel.grid(column=1, row=messageRow) \n\n"
] | [
1
] | [] | [] | [
"python",
"tkinter",
"tkinter_canvas"
] | stackoverflow_0074678075_python_tkinter_tkinter_canvas.txt |
Q:
Call a non exported function from a native C++ dll in C#
I am trying to call a non exported function from a native C++ DLL into a C# program.
I have the function signature, which is of type typedef void (_cdecl* TfFunc)(int, unsigned char** data)
The dll is in "A.dll", at offset 0x00003e89d.
In C++ I'd do this :
int handle = LoadLibrary("A.dll");
TfFunc func = (TfFunc)((handle) + 0x00003e89d);
func(1, null);
However despite searching extensively, I can't find a way to do such a thing in C#
A:
I could solve it like this:
[UnmanagedFunctionPointer(CallingConvention.Cdecl)]
private delegate void Func(int a1, out IntPtr a2);
IntPtr handle = LoadLibrary("A.dll");
Func f = Marshal.GetDelegateForFunctionPointer<Func>(new IntPtr(handle + 0x00003e89d));
IntPtr a2 = IntPtr.Zero;
f(1, out a2);
Works as expected
| Call a non exported function from a native C++ dll in C# | I am trying to call a non exported function from a native C++ DLL into a C# program.
I have the function signature, which is of type typedef void (_cdecl* TfFunc)(int, unsigned char** data)
The dll is in "A.dll", at offset 0x00003e89d.
In C++ I'd do this :
int handle = LoadLibrary("A.dll");
TfFunc func = (TfFunc)((handle) + 0x00003e89d);
func(1, null);
However despite searching extensively, I can't find a way to do such a thing in C#
| [
"I could solve it with that :\n[UnmanagedFunctionPointer(CallingConvention.Cdecl)]\nprivate delegate void Func(int a1, out IntPtr a2);\n\nIntPtr handle = LoadLibrary(\"A.dll\");\nFunc f = Marshal.GetDelegateForFunctionPointer<Func>(new IntPtr(handle + 0x00003e89d));\n\nIntPtr a2 = IntPtr.Zero;\nf(1, a2);\n\nWorks as expected\n"
] | [
0
] | [] | [] | [
"c#",
"dll",
"dllexport",
"reverse_engineering",
"winapi"
] | stackoverflow_0074677856_c#_dll_dllexport_reverse_engineering_winapi.txt |
Q:
How do I Format a String Date in SSRS?
Using Report Builder,
In a Matrix for a report I'm creating:
I have dates in my data that are "FullMonthName-Year"
I am trying to get them to a report as "AbbreviatedMonthName-Year"
Example:
'January-2020' turns into 'Jan-2020'
'February-2021' turns into 'Feb-2021'
thanks in advance
Within the Text-Box Properties, I have been trying to mess with the Format function but can't seem to get it right.
ex:
=Format(Fields!labelDt.Value, "MM-YYYY") just gives me "MM-YYYY"
A:
You can use StrConv; your expression would look like:
=StrConv(MonthName(Fields!MOIS.Value), 3) & "-" & Fields!ANNEE.Value
| How do I Format a String Date in SSRS? | Using Report Builder,
In a Matrix for a report I'm creating:
I have dates in my data that are "FullMonthName-Year"
I am trying to get them to a report as "AbbreviatedMonthName-Year"
Example:
'January-2020' turns into 'Jan-2020'
'February-2021' turns into 'Feb-2021'
thanks in advance
Within the Text-Box Properties, I have been trying to mess with the Format function but can't seem to get it right.
ex:
=Format(Fields!labelDt.Value, "MM-YYYY") just gives me "MM-YYYY"
| [
"You can use StrConv, your expression would be like:\n=StrConv(MonthName(Fields!MOIS.Value), 3) & \"-\" & Fields!ANNEE.Value\n"
] | [
0
] | [] | [] | [
"reportbuilder3.0",
"reporting_services",
"sql_server"
] | stackoverflow_0074675304_reportbuilder3.0_reporting_services_sql_server.txt |
Q:
Compare password to a given set of passwords
I have got an assignment from uni which mentions that I have to compare a given set of passwords to the password given by the user. The set of passwords are predetermined in the question as follows
const char *passwd[NUM_PASSWDS] = {
"123foo", "bar456", "bla_blubb"
};
and has to be compared with input from user...
So I have written my code as follows;
#include <stdio.h>
#include <string.h>
#define NUM_PASSWDS 3
const char *passwd[NUM_PASSWDS] = {
"123foo", "bar456", "bla_blubb"
};
int pwdquery(char pass[]) {
for (int i = 0; i < 2; i++) {
if (passwd[i] == pass[i]) {
return printf("correct");
}
}
}
int main() {
char a[100];
for (int i = 0; i < 3; i++) {
printf("Please enter password");
scanf("%s", a);
}
pwdquery(a);
}
When I tried running my code, it shows an error...
Thank you for your time
A:
To compare strings you need the strcmp function.
printf returns the number of characters printed. You should return a fixed value on success or failure of the password match. In my example I'm returning 1 for a password match and 0 for no match.
In the main function, scanf with a %s argument can read an entire string. You don't need a for-loop to read the input string.
#include<stdio.h>
#include<string.h>
#define NUM_PASSWDS 3
const char* passwd[NUM_PASSWDS] = {
"123foo", "bar456", "bla_blubb"
};
int pwdquery(char pass[]) {
for (int i = 0; i < NUM_PASSWDS; i++)
{
if (strcmp(pass, passwd[i]) == 0)
{
return 1; // One means correct password
}
}
return 0; // Zero means incorrect password
}
int main() {
char a[100];
printf("Please enter password");
scanf("%s", a);
if(pwdquery(a) == 1)
{
printf("Password entered is correct");
}
else
{
printf("Password entered is incorrect");
}
}
With custom strcmp
#include<stdio.h>
#include<string.h>
#define NUM_PASSWDS 3
const char* passwd[NUM_PASSWDS] = {
"123foo", "bar456", "bla_blubb"
};
//Returns 0 if string don't match else returns 1
int user_strcmp(const char *str1, const char *str2)
{
size_t str1_size = strlen(str1);
size_t str2_size = strlen(str2);
if(str1_size != str2_size)
{
return 0; //String are not of same size. The strings won't match.
}
for(size_t i = 0; i < str1_size; i++)
{
if(str1[i] != str2[i])
{
return 0;
}
}
return 1;
}
int pwdquery(char pass[]) {
for (int i = 0; i < NUM_PASSWDS; i++)
{
if (user_strcmp(pass, passwd[i]) == 1)
{
return 1; // One means correct password
}
}
return 0; // Zero means incorrect password
}
int main() {
char a[100];
printf("Please enter password");
scanf("%99s", a);
if(pwdquery(a) == 1)
{
printf("Password entered is correct");
}
else
{
printf("Password entered is incorrect");
}
}
A:
There are multiple problems in your code:
the password comparison function pwdquery compares a character and a pointer: this is incorrect. You should use strcmp to compare the strings and return a non zero value if one of these comparisons returns 0.
in the loop, you ask for the password 3 times, but you only check the password after this loop, hence only test the last one entered.
scanf("%s", a) may cause a buffer overflow if the user enters more than 99 characters without white space. Use scanf("%99s", a) to tell scanf the maximum number of characters to store into a before the null terminator.
Here is a modified version:
#include <stdio.h>
#include <string.h>
#define NUM_PASSWDS 3
const char *passwd[NUM_PASSWDS] = {
"123foo", "bar456", "bla_blubb",
};
int pwd_check(const char *pass) {
for (size_t i = 0; i < NUM_PASSWDS; i++) {
if (passwd[i] && strcmp(passwd[i], pass) == 0)
return 1;
}
return 0;
}
int main() {
char a[100];
for (int i = 0; i < 3; i++) {
printf("Please enter password: ");
if (scanf("%99s", a) != 1) {
printf("missing input, aborting\n");
return 1;
}
if (pwd_check(a)) {
printf("correct!\n");
return 0;
}
printf("incorrect password\n");
}
printf("too many errors, aborting\n");
return 1;
}
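For contrast, the core of the check — compare string values, not pointers — is a couple of lines in a higher-level language. A Python sketch of the same lookup (the password tuple mirrors the assignment's list; `==` on Python strings compares contents, which is what `strcmp() == 0` does for C strings):

```python
PASSWORDS = ("123foo", "bar456", "bla_blubb")

def pwd_check(candidate):
    # Compare values, not identities, exactly like strcmp() == 0 in C.
    return any(candidate == stored for stored in PASSWORDS)
```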
| Compare password to a given set of passwords | I have got an assignment from uni which mentions that I have to compare a given set of passwords to the password given by the user. The set of passwords are predetermined in the question as follows
const char *passwd[NUM_PASSWDS] = {
"123foo", "bar456", "bla_blubb"
};
and has to be compared with input from user...
So I have written my code as follows;
#include <stdio.h>
#include <string.h>
#define NUM_PASSWDS 3
const char *passwd[NUM_PASSWDS] = {
"123foo", "bar456", "bla_blubb"
};
int pwdquery(char pass[]) {
for (int i = 0; i < 2; i++) {
if (passwd[i] == pass[i]) {
return printf("correct");
}
}
}
int main() {
char a[100];
for (int i = 0; i < 3; i++) {
printf("Please enter password");
scanf("%s", a);
}
pwdquery(a);
}
When I tried running my code, it shows an error...
Thank you for your time
| [
"To compare string you need to strcmp function.\nprintf returns number of character printed. You should return fixed value on success or failure of password match. In my example I'm returning 1 in case of password match or 0 in case of no-password match.\nIn main function scanf function with %s argument can read entire string. You don't need for-loop to read input string.\n#include<stdio.h>\n#include<string.h>\n#define NUM_PASSWDS 3\n\nconst char* passwd[NUM_PASSWDS] = {\n\"123foo\", \"bar456\", \"bla_blubb\"\n};\nint pwdquery(char pass[]) {\n for (int i = 0; i < NUM_PASSWDS; i++) \n {\n if (strcmp(pass, passwd[i]) == 0) \n {\n return 1; // One means currect password\n }\n }\n return 0; // Zero means incorrect password password\n}\n\nint main() {\n char a[100];\n printf(\"Please enter password\");\n scanf(\"%s\", a);\n\n if(pwdquery(a) == 1)\n {\n printf(\"Password entered is correct\");\n }\n else\n {\n printf(\"Password entered is incorrect\");\n }\n}\n\n\nWith custom strcmp\n#include<stdio.h>\n#include<string.h>\n#define NUM_PASSWDS 3\n\nconst char* passwd[NUM_PASSWDS] = {\n\"123foo\", \"bar456\", \"bla_blubb\"\n};\n\n//Returns 0 if string don't match else returns 1\nint user_strcmp(const char *str1, const char *str2)\n{\n size_t str1_size = strlen(str1);\n size_t str2_size = strlen(str2);\n if(str1_size != str2_size)\n {\n return 0; //String are not of same size. The strings won't match.\n }\n for(size_t i = 0; i < str1_size; i++)\n {\n if(str1[i] != str2[i])\n {\n return 0;\n }\n }\n return 1;\n}\nint pwdquery(char pass[]) {\n for (int i = 0; i < NUM_PASSWDS; i++) \n {\n if (user_strcmp(pass, passwd[i]) == 1) \n {\n return 1; // One means currect password\n }\n }\n return 0; // Zero means incorrect password password\n}\n\nint main() {\n char a[100];\n printf(\"Please enter password\");\n scanf(\"%99s\", a);\n\n if(pwdquery(a) == 1)\n {\n printf(\"Password entered is correct\");\n }\n else\n {\n printf(\"Password entered is incorrect\");\n }\n}\n\n",
"There are multiple problems in your code:\n\nthe password comparison function pwdquery compares a character and a pointer: this is incorrect. You should use strcmp to compare the strings and return a non zero value if one of these comparisons returns 0.\n\nin the loop, you ask for the password 3 times, but you only check the password after this loop, hence only test the last one entered.\n\nscanf(\"%s\", a) may cause a buffer overflow if the user enters more than 99 characters without white space. Use scanf(\"%99s\", a) to tell scanf the maximum number of characters to store into a before the null terminator.\n\n\nHere is a modified version:\n#include <stdio.h>\n#include <string.h>\n\n#define NUM_PASSWDS 3\n\nconst char *passwd[NUM_PASSWDS] = {\n \"123foo\", \"bar456\", \"bla_blubb\",\n};\n\nint pwd_check(const char *pass) {\n for (size_t i = 0; i < NUM_PASSWDS; i++) {\n if (passwd[i] && strcmp(passwd[i], pass) == 0)\n return 1;\n }\n return 0;\n}\n\nint main() {\n char a[100];\n\n for (int i = 0; i < 3; i++) {\n printf(\"Please enter password: \");\n if (scanf(\"%99s\", a) != 1) {\n printf(\"missing input, aborting\\n\");\n return 1;\n }\n if (pwd_check(a)) {\n printf(\"correct!\\n\");\n return 0;\n }\n printf(\"incorrect password\\n\");\n }\n printf(\"too many errors, aborting\\n\");\n return 1;\n}\n\n"
] | [
0,
0
] | [] | [] | [
"c",
"pointers",
"string",
"string_comparison"
] | stackoverflow_0074677667_c_pointers_string_string_comparison.txt |
Q:
Multiple URLs in multiple browsers in selenium (local) python
I have a test script that I want to be run for multiple URLs on multiple browsers (Chrome and Firefox) locally on my machine. Every browser has to open all the URLs for the test script. I have run the test script for multiple URLs for multiple browsers. I have the following code, which does the task. Is there a better way to write this code? Thank you
import time
from selenium import webdriver
driver_array = [webdriver.Firefox(), webdriver.Chrome()]
sites = [
"http://www.github.com",
"https://tribune.com.pk"
]
for index, browser in enumerate(driver_array):
print(index, browser)
for index, site in enumerate(sites):
print(index,site)
browser.get(site)
time.sleep(5)
# localitems()
# sessionitems()
# def localitems() :
local_storage = browser.execute_script( \
"var ls = window.localStorage, items = {}; " \
"for (var i = 0, k; i < ls.length; ++i) " \
"items[k = ls.key(i)] = ls.getItem(k);"\
"return items; ")
print(local_storage)
# def sessionitems() :
session_storage = browser.execute_script( \
"var ls = window.sessionStorage, items = {}; " \
"for (var i = 0, k; i < ls.length; ++i) " \
"items[k = ls.key(i)] = ls.getItem(k);"\
"return items; ")
print(session_storage)
A:
Here is one possible way to improve the code.
import time
from selenium import webdriver
driver_array = [webdriver.Firefox(), webdriver.Chrome()]
sites = [ "http://www.github.com", "https://tribune.com.pk"]
def get_storage_items(driver, storage_type):
items = driver.execute_script(
f"var ls = window.{storage_type}, items = {{}}; "
f"for (var i = 0, k; i < ls.length; ++i) "
f"items[k = ls.key(i)] = ls.getItem(k);"
"return items; "
)
return items
for index, browser in enumerate(driver_array):
print(index, browser)
for index, site in enumerate(sites):
print(index, site)
browser.get(site)
time.sleep(5)
local_storage = get_storage_items(browser, "localStorage")
print(local_storage)
session_storage = get_storage_items(browser, "sessionStorage")
print(session_storage)
In this version of the code, the localitems() and sessionitems() functions have been removed and their logic has been combined into a single get_storage_items() function that takes the driver and the storage_type (either localStorage or sessionStorage) as arguments and returns the items in the specified storage. This function is called twice for each site, once for each type of storage, and the items are printed. This avoids duplication of code and makes the code easier to read and understand.
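Another tidy option is to flatten the two nested loops with itertools.product, which yields every (browser, site) pair in order — each browser still opens all the URLs. A sketch, with the per-site work factored into a visit callable (an assumption for illustration) so the pairing logic can be exercised without real webdriver instances:

```python
from itertools import product

def run_matrix(browsers, sites, visit):
    # Each browser opens all the sites: Firefox gets every URL, then Chrome.
    results = []
    for browser, site in product(browsers, sites):
        results.append(visit(browser, site))
    return results

# With real drivers, visit would call browser.get(site), sleep if needed,
# and then execute_script(...) to dump localStorage/sessionStorage as above.
```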
| Multiple URLs in multiple browsers in selenium (local) python | I have a test script that I want to be run for multiple URLs on multiple browsers (Chrome and Firefox) locally on my machine. Every browser has to open all the URLs for the test script. I have run the test script for multiple URLs for multiple browsers. I have the following code, which does the task. Is there a better way to write this code? Thank you
import time
from selenium import webdriver
driver_array = [webdriver.Firefox(), webdriver.Chrome()]
sites = [
"http://www.github.com",
"https://tribune.com.pk"
]
for index, browser in enumerate(driver_array):
print(index, browser)
for index, site in enumerate(sites):
print(index,site)
browser.get(site)
time.sleep(5)
# localitems()
# sessionitems()
# def localitems() :
local_storage = browser.execute_script( \
"var ls = window.localStorage, items = {}; " \
"for (var i = 0, k; i < ls.length; ++i) " \
"items[k = ls.key(i)] = ls.getItem(k);"\
"return items; ")
print(local_storage)
# def sessionitems() :
session_storage = browser.execute_script( \
"var ls = window.sessionStorage, items = {}; " \
"for (var i = 0, k; i < ls.length; ++i) " \
"items[k = ls.key(i)] = ls.getItem(k);"\
"return items; ")
print(session_storage)
| [
"Here is one possible way to improve the code.\nimport time\nfrom selenium import webdriver\n\n\ndriver_array = [webdriver.Firefox(), webdriver.Chrome()]\nsites = [ \"http://www.github.com\", \"https://tribune.com.pk\"]\n\n\ndef get_storage_items(driver, storage_type):\n items = driver.execute_script(\n f\"var ls = window.{storage_type}, items = {{}}; \"\n f\"for (var i = 0, k; i < ls.length; ++i) \"\n f\"items[k = ls.key(i)] = ls.getItem(k);\"\n \"return items; \"\n )\n return items\n\n\nfor index, browser in enumerate(driver_array):\n print(index, browser)\n for index, site in enumerate(sites):\n print(index, site)\n browser.get(site)\n time.sleep(5)\n local_storage = get_storage_items(browser, \"localStorage\")\n print(local_storage)\n session_storage = get_storage_items(browser, \"sessionStorage\")\n print(session_storage)\n\nIn this version of the code, the localitems() and sessionitems() functions have been removed and their logic has been combined into a single get_storage_items() function that takes the driver and the storage_type (either localStorage or sessionStorage) as arguments and returns the items in the specified storage. This function is called twice for each site, once for each type of storage, and the items are printed. This avoids duplication of code and makes the code easier to read and understand.\n"
] | [
0
] | [] | [] | [
"browser_automation",
"cross_browser",
"python",
"selenium",
"selenium_webdriver"
] | stackoverflow_0074678060_browser_automation_cross_browser_python_selenium_selenium_webdriver.txt |
Q:
Animation on remove and on add widget
I am trying to add animation to a list of widgets inside of a stack. When ever I remove a Widget or add a Widget to the List I Want to have a scale up/down transition as if the widget pops up from no where and shrinks to nothing. Any idea on how I can achieve this?
A:
You can use AnimatedList widget as solution.
Example video here
Flutter documentation example
import 'package:flutter/material.dart';
void main() {
runApp(const AnimatedListSample());
}
class AnimatedListSample extends StatefulWidget {
const AnimatedListSample({super.key});
@override
State<AnimatedListSample> createState() => _AnimatedListSampleState();
}
class _AnimatedListSampleState extends State<AnimatedListSample> {
final GlobalKey<AnimatedListState> _listKey = GlobalKey<AnimatedListState>();
late ListModel<int> _list;
int? _selectedItem;
late int
_nextItem; // The next item inserted when the user presses the '+' button.
@override
void initState() {
super.initState();
_list = ListModel<int>(
listKey: _listKey,
initialItems: <int>[0, 1, 2],
removedItemBuilder: _buildRemovedItem,
);
_nextItem = 3;
}
// Used to build list items that haven't been removed.
Widget _buildItem(
BuildContext context, int index, Animation<double> animation) {
return CardItem(
animation: animation,
item: _list[index],
selected: _selectedItem == _list[index],
onTap: () {
setState(() {
_selectedItem = _selectedItem == _list[index] ? null : _list[index];
});
},
);
}
// Used to build an item after it has been removed from the list. This
// method is needed because a removed item remains visible until its
// animation has completed (even though it's gone as far this ListModel is
// concerned). The widget will be used by the
// [AnimatedListState.removeItem] method's
// [AnimatedListRemovedItemBuilder] parameter.
Widget _buildRemovedItem(
int item, BuildContext context, Animation<double> animation) {
return CardItem(
animation: animation,
item: item,
// No gesture detector here: we don't want removed items to be interactive.
);
}
// Insert the "next item" into the list model.
void _insert() {
final int index =
_selectedItem == null ? _list.length : _list.indexOf(_selectedItem!);
_list.insert(index, _nextItem++);
}
// Remove the selected item from the list model.
void _remove() {
if (_selectedItem != null) {
_list.removeAt(_list.indexOf(_selectedItem!));
setState(() {
_selectedItem = null;
});
}
}
@override
Widget build(BuildContext context) {
return MaterialApp(
home: Scaffold(
appBar: AppBar(
title: const Text('AnimatedList'),
actions: <Widget>[
IconButton(
icon: const Icon(Icons.add_circle),
onPressed: _insert,
tooltip: 'insert a new item',
),
IconButton(
icon: const Icon(Icons.remove_circle),
onPressed: _remove,
tooltip: 'remove the selected item',
),
],
),
body: Padding(
padding: const EdgeInsets.all(16.0),
child: AnimatedList(
key: _listKey,
initialItemCount: _list.length,
itemBuilder: _buildItem,
),
),
),
);
}
}
typedef RemovedItemBuilder<T> = Widget Function(
T item, BuildContext context, Animation<double> animation);
/// Keeps a Dart [List] in sync with an [AnimatedList].
///
/// The [insert] and [removeAt] methods apply to both the internal list and
/// the animated list that belongs to [listKey].
///
/// This class only exposes as much of the Dart List API as is needed by the
/// sample app. More list methods are easily added, however methods that
/// mutate the list must make the same changes to the animated list in terms
/// of [AnimatedListState.insertItem] and [AnimatedList.removeItem].
class ListModel<E> {
ListModel({
required this.listKey,
required this.removedItemBuilder,
Iterable<E>? initialItems,
}) : _items = List<E>.from(initialItems ?? <E>[]);
final GlobalKey<AnimatedListState> listKey;
final RemovedItemBuilder<E> removedItemBuilder;
final List<E> _items;
AnimatedListState? get _animatedList => listKey.currentState;
void insert(int index, E item) {
_items.insert(index, item);
_animatedList!.insertItem(index);
}
E removeAt(int index) {
final E removedItem = _items.removeAt(index);
if (removedItem != null) {
_animatedList!.removeItem(
index,
(BuildContext context, Animation<double> animation) {
return removedItemBuilder(removedItem, context, animation);
},
);
}
return removedItem;
}
int get length => _items.length;
E operator [](int index) => _items[index];
int indexOf(E item) => _items.indexOf(item);
}
/// Displays its integer item as 'item N' on a Card whose color is based on
/// the item's value.
///
/// The text is displayed in bright green if [selected] is
/// true. This widget's height is based on the [animation] parameter, it
/// varies from 0 to 128 as the animation varies from 0.0 to 1.0.
class CardItem extends StatelessWidget {
const CardItem({
super.key,
this.onTap,
this.selected = false,
required this.animation,
required this.item,
}) : assert(item >= 0);
final Animation<double> animation;
final VoidCallback? onTap;
final int item;
final bool selected;
@override
Widget build(BuildContext context) {
TextStyle textStyle = Theme.of(context).textTheme.headline4!;
if (selected) {
textStyle = textStyle.copyWith(color: Colors.lightGreenAccent[400]);
}
return Padding(
padding: const EdgeInsets.all(2.0),
child: SizeTransition(
sizeFactor: animation,
child: GestureDetector(
behavior: HitTestBehavior.opaque,
onTap: onTap,
child: SizedBox(
height: 80.0,
child: Card(
color: Colors.primaries[item % Colors.primaries.length],
child: Center(
child: Text('Item $item', style: textStyle),
),
),
),
),
),
);
}
}
| Animation on remove and on add widget | I am trying to add animation to a list of widgets inside of a stack. When ever I remove a Widget or add a Widget to the List I Want to have a scale up/down transition as if the widget pops up from no where and shrinks to nothing. Any idea on how I can achieve this?
| [
"You can use AnimatedList widget as solution.\nExample video here\nFlutter documentation example\nimport 'package:flutter/material.dart';\n\nvoid main() {\n runApp(const AnimatedListSample());\n}\n\nclass AnimatedListSample extends StatefulWidget {\n const AnimatedListSample({super.key});\n\n @override\n State<AnimatedListSample> createState() => _AnimatedListSampleState();\n}\n\nclass _AnimatedListSampleState extends State<AnimatedListSample> {\n final GlobalKey<AnimatedListState> _listKey = GlobalKey<AnimatedListState>();\n late ListModel<int> _list;\n int? _selectedItem;\n late int\n _nextItem; // The next item inserted when the user presses the '+' button.\n\n @override\n void initState() {\n super.initState();\n _list = ListModel<int>(\n listKey: _listKey,\n initialItems: <int>[0, 1, 2],\n removedItemBuilder: _buildRemovedItem,\n );\n _nextItem = 3;\n }\n\n // Used to build list items that haven't been removed.\n Widget _buildItem(\n BuildContext context, int index, Animation<double> animation) {\n return CardItem(\n animation: animation,\n item: _list[index],\n selected: _selectedItem == _list[index],\n onTap: () {\n setState(() {\n _selectedItem = _selectedItem == _list[index] ? null : _list[index];\n });\n },\n );\n }\n\n // Used to build an item after it has been removed from the list. This\n // method is needed because a removed item remains visible until its\n // animation has completed (even though it's gone as far this ListModel is\n // concerned). The widget will be used by the\n // [AnimatedListState.removeItem] method's\n // [AnimatedListRemovedItemBuilder] parameter.\n Widget _buildRemovedItem(\n int item, BuildContext context, Animation<double> animation) {\n return CardItem(\n animation: animation,\n item: item,\n // No gesture detector here: we don't want removed items to be interactive.\n );\n }\n\n // Insert the \"next item\" into the list model.\n void _insert() {\n final int index =\n _selectedItem == null ? 
_list.length : _list.indexOf(_selectedItem!);\n _list.insert(index, _nextItem++);\n }\n\n // Remove the selected item from the list model.\n void _remove() {\n if (_selectedItem != null) {\n _list.removeAt(_list.indexOf(_selectedItem!));\n setState(() {\n _selectedItem = null;\n });\n }\n }\n\n @override\n Widget build(BuildContext context) {\n return MaterialApp(\n home: Scaffold(\n appBar: AppBar(\n title: const Text('AnimatedList'),\n actions: <Widget>[\n IconButton(\n icon: const Icon(Icons.add_circle),\n onPressed: _insert,\n tooltip: 'insert a new item',\n ),\n IconButton(\n icon: const Icon(Icons.remove_circle),\n onPressed: _remove,\n tooltip: 'remove the selected item',\n ),\n ],\n ),\n body: Padding(\n padding: const EdgeInsets.all(16.0),\n child: AnimatedList(\n key: _listKey,\n initialItemCount: _list.length,\n itemBuilder: _buildItem,\n ),\n ),\n ),\n );\n }\n}\n\ntypedef RemovedItemBuilder<T> = Widget Function(\n T item, BuildContext context, Animation<double> animation);\n\n/// Keeps a Dart [List] in sync with an [AnimatedList].\n///\n/// The [insert] and [removeAt] methods apply to both the internal list and\n/// the animated list that belongs to [listKey].\n///\n/// This class only exposes as much of the Dart List API as is needed by the\n/// sample app. More list methods are easily added, however methods that\n/// mutate the list must make the same changes to the animated list in terms\n/// of [AnimatedListState.insertItem] and [AnimatedList.removeItem].\nclass ListModel<E> {\n ListModel({\n required this.listKey,\n required this.removedItemBuilder,\n Iterable<E>? initialItems,\n }) : _items = List<E>.from(initialItems ?? <E>[]);\n\n final GlobalKey<AnimatedListState> listKey;\n final RemovedItemBuilder<E> removedItemBuilder;\n final List<E> _items;\n\n AnimatedListState? 
get _animatedList => listKey.currentState;\n\n void insert(int index, E item) {\n _items.insert(index, item);\n _animatedList!.insertItem(index);\n }\n\n E removeAt(int index) {\n final E removedItem = _items.removeAt(index);\n if (removedItem != null) {\n _animatedList!.removeItem(\n index,\n (BuildContext context, Animation<double> animation) {\n return removedItemBuilder(removedItem, context, animation);\n },\n );\n }\n return removedItem;\n }\n\n int get length => _items.length;\n\n E operator [](int index) => _items[index];\n\n int indexOf(E item) => _items.indexOf(item);\n}\n\n/// Displays its integer item as 'item N' on a Card whose color is based on\n/// the item's value.\n///\n/// The text is displayed in bright green if [selected] is\n/// true. This widget's height is based on the [animation] parameter, it\n/// varies from 0 to 128 as the animation varies from 0.0 to 1.0.\nclass CardItem extends StatelessWidget {\n const CardItem({\n super.key,\n this.onTap,\n this.selected = false,\n required this.animation,\n required this.item,\n }) : assert(item >= 0);\n\n final Animation<double> animation;\n final VoidCallback? onTap;\n final int item;\n final bool selected;\n\n @override\n Widget build(BuildContext context) {\n TextStyle textStyle = Theme.of(context).textTheme.headline4!;\n if (selected) {\n textStyle = textStyle.copyWith(color: Colors.lightGreenAccent[400]);\n }\n return Padding(\n padding: const EdgeInsets.all(2.0),\n child: SizeTransition(\n sizeFactor: animation,\n child: GestureDetector(\n behavior: HitTestBehavior.opaque,\n onTap: onTap,\n child: SizedBox(\n height: 80.0,\n child: Card(\n color: Colors.primaries[item % Colors.primaries.length],\n child: Center(\n child: Text('Item $item', style: textStyle),\n ),\n ),\n ),\n ),\n ),\n );\n }\n}\n\n\n"
] | [
0
] | [] | [] | [
"dart",
"flutter"
] | stackoverflow_0074678000_dart_flutter.txt |
Q:
Is it possible to create multiple threads in a Flask server?
I am using Flask and flask-restx to try to create a protocol to get a specific string from another service. I am trying to figure out a way to run the function in the server in different threads. Here's my code sample:
from flask_restx import Api,fields,Resource
from flask import Flask
app = Flask(__name__)
api = Api(app)
parent = api.model('Parent', {
'name': fields.String(get_answer(a,b)),
'class': fields.String(discriminator=True)
})
@api.route('/language')
class Language(Resource):
# @api.marshal_with(data_stream_request)
@api.marshal_with(parent)
@api.response(403, "Unauthorized")
def get(self):
return {"happy": "good"}
What I expect:
On the client side, first the server should run, i.e., we should be able to make curl -i localhost:8080 work. Then, when a specific condition is true, the client side should receive a GET response with the parent JSON data I have on the server. However, if that condition is not true, the GET request should not return the correct result.
What I did:
One of the methods I tried is wrapping the decorator and the Class Language(Resource) part in a different function, running that function in a different thread, and putting that thread under a condition check. Not sure if that's the right way to do it. I have seen people say Celery might be a good choice, but I'm not sure if it can work with flask-restx.
A:
I have the answer for you. To run a process in the background with Flask, schedule it using APScheduler, a very simple package that helps you schedule tasks to run functions at an interval; in your case, one time at utcnow().
Here is the link to Flask-APScheduler.
job = scheduler.add_job(myfunc, 'interval', minutes=2)
In your case use 'date' instead of 'interval' and specify run_date
job = scheduler.add_job(myfunc, 'date', run_date=datetime.utcnow())
Here is the documentation:
User Guide
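As a dependency-free illustration of the same one-shot idea, the sketch below uses plain threading.Timer from the standard library (not APScheduler itself) to fire a function exactly once after a delay, which is roughly what add_job(myfunc, 'date', run_date=...) does:

```python
# Illustrative stand-in for a one-shot 'date' job: threading.Timer runs a
# function once, on its own thread, after the given delay. APScheduler's
# 'date' trigger does the same thing with richer scheduling features.
import threading

results = []

def myfunc():
    results.append("ran in background thread")

timer = threading.Timer(0.1, myfunc)  # ~ add_job(myfunc, 'date', run_date=now+0.1s)
timer.start()   # returns immediately; a Flask request handler would not block
timer.join()    # only for this demo, so we can observe the result
print(results)  # ['ran in background thread']
```

In a real Flask app you would not join() the timer; the handler returns right away while the job runs in the background.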
| Is it possible to create multiple threads in a Flask server? | I am using Flask and flask-restx to try to create a protocol to get a specific string from another service. I am trying to figure out a way to run the function in the server in different threads. Here's my code sample:
from flask_restx import Api,fields,Resource
from flask import Flask
app = Flask(__name__)
api = Api(app)
parent = api.model('Parent', {
'name': fields.String(get_answer(a,b)),
'class': fields.String(discriminator=True)
})
@api.route('/language')
class Language(Resource):
# @api.marshal_with(data_stream_request)
@api.marshal_with(parent)
@api.response(403, "Unauthorized")
def get(self):
return {"happy": "good"}
What I expect:
On the client side, first the server should run, i.e., we should be able to make curl -i localhost:8080 work. Then, when a specific condition is true, the client side should receive a GET response with the parent JSON data I have on the server. However, if that condition is not true, the GET request should not return the correct result.
What I did:
One of the methods I tried is wrapping the decorator and the Class Language(Resource) part in a different function, running that function in a different thread, and putting that thread under a condition check. Not sure if that's the right way to do it. I have seen people say Celery might be a good choice, but I'm not sure if it can work with flask-restx.
| [
"I have the answer for you. to run a process in the background with flask, schedule it to run using another process using APScheduler. A very simple package that helps you schedule tasks to run functions at an interval, in your case one time at utcnow().\nhere is the link to Flask-APScheduler.\njob = scheduler.add_job(myfunc, 'interval', minutes=2)\n\nIn your case use 'date' instead of 'interval' and specify run_date\njob = scheduler.add_job(myfunc, 'date', run_date=datetime.utcnow())\n\nhere is the documentation:\nUser Guide\n"
] | [
0
] | [] | [] | [
"flask",
"flask_restx",
"multithreading",
"python",
"threadpoolexecutor"
] | stackoverflow_0074672488_flask_flask_restx_multithreading_python_threadpoolexecutor.txt |
Q:
Is it possible to ensure a window has reached the destination position and size using AppleScript or the Accessibility API?
Goal: Ensure the app's window is moved to the destination position and size, and also make sure the entire window is shown on the screen.
I have the following script; it sometimes works and sometimes doesn't.
For the sake of simplicity I hardcoded values. Two things the script is doing:
It sets position & size based on certain parameters, then it retrieves the final size value. If the calculated size is less than the app's window minimum size, then it takes the minimum size (winWidth, winHeight) of the window and uses this for positioning and sizing to ensure the complete window is shown on the screen.
screen size - 1920 X 1080,
menu bar height - 24,
dock height - 61
visible empty space screen height - 1835 and the script is supposed to show the window in the bottom half
tell application "System Events"
tell application process "Craft"
delay 0.1
tell window 1 to set position to {1835 / 2, 25}
tell window 1 to set size to {(1835) / 2, 994}
end tell
delay 0.1
set {winWidth, winHeight} to size of the first window of application process "Craft"
tell application process "Craft"
delay 0.1
tell window 1 to set position to {1835 - winWidth, 25}
tell window 1 to set size to {(1835) / 2, 994}
end tell
delay 0.1
return winWidth
end tell
Firstly, I am not quite sure if this is how to calculate it, but looking at the math it should work. Secondly, I have no idea why it occasionally stops working.
Workaround: I am running the above script more than once to fix the issue, but it still has some hiccups.
A:
There are a few issues with your script that may be causing it to sometimes fail.
First, the script sets the size of the window to {(1835) / 2, 994} in the first tell block, but then it immediately sets the size to {(1835) / 2, 994} again in the second tell block without checking whether the window is already at that size. This means that if the window is already at the correct size, the script will try to set the size to the same value again, which may cause the script to fail.
To fix this, you can add a check to ensure that the window is not already at the correct size before setting the size again. For example, you could use an if statement to check the current size of the window, and only set the size if the window is not already at the correct size. Here is an example of how you could do this:
tell application process "Craft"
delay 0.1
tell window 1 to set position to {1835 / 2, 25}
set {winWidth, winHeight} to size of window 1
if winWidth is not (1835) / 2 or winHeight is not 994 then
tell window 1 to set size to {(1835) / 2, 994}
end if
end tell
Second, the script sets the position of the window to {1835 - winWidth, 25} in the second tell block, but this does not guarantee that the window will be shown in the bottom half of the screen. This is because the winWidth variable that is used to calculate the position is the width of the window after it has been resized, not the width of the window before it has been resized. This means that if the window is not at the correct size when the position is calculated, the window may not be shown in the correct location on the screen.
To fix this, you can calculate the position of the window before setting the size, so that the correct position is used even if the size of the window changes. Here is an example of how you could do this:
tell application process "Craft"
delay 0.1
tell window 1 to set position to {1835 / 2, 25}
set {winWidth, winHeight} to size of window 1
if winWidth is not (1835) / 2 or winHeight is not 994 then
        tell window 1 to set position to {1835 - (1835) / 2, 25}
    end if
end tell
I hope this helps you
| Is it possible to ensure a window has reached the destination position and size using AppleScript or the Accessibility API? | Goal: Ensure the app's window is moved to the destination position and size, and also make sure the entire window is shown on the screen.
I have the following script; it sometimes works and sometimes doesn't.
For the sake of simplicity I hardcoded values. Two things the script is doing:
It sets position & size based on certain parameters, then it retrieves the final size value. If the calculated size is less than the app's window minimum size, then it takes the minimum size (winWidth, winHeight) of the window and uses this for positioning and sizing to ensure the complete window is shown on the screen.
screen size - 1920 X 1080,
menu bar height - 24,
dock height - 61
visible empty space screen height - 1835 and the script is supposed to show the window in the bottom half
tell application "System Events"
tell application process "Craft"
delay 0.1
tell window 1 to set position to {1835 / 2, 25}
tell window 1 to set size to {(1835) / 2, 994}
end tell
delay 0.1
set {winWidth, winHeight} to size of the first window of application process "Craft"
tell application process "Craft"
delay 0.1
tell window 1 to set position to {1835 - winWidth, 25}
tell window 1 to set size to {(1835) / 2, 994}
end tell
delay 0.1
return winWidth
end tell
Firstly, I am not quite sure if this is how to calculate it, but looking at the math it should work. Secondly, I have no idea why it occasionally stops working.
Workaround: I am running the above script more than once to fix the issue, but it still has some hiccups.
| [
"There are a few issues with your script that may be causing it to sometimes fail.\nFirst, the script sets the size of the window to {(1835) / 2, 994} in the first tell block, but then it immediately sets the size to {(1835) / 2, 994} again in the second tell block without checking whether the window is already at that size. This means that if the window is already at the correct size, the script will try to set the size to the same value again, which may cause the script to fail.\nTo fix this, you can add a check to ensure that the window is not already at the correct size before setting the size again. For example, you could use an if statement to check the current size of the window, and only set the size if the window is not already at the correct size. Here is an example of how you could do this:\ntell application process \"Craft\"\n delay 0.1\n tell window 1 to set position to {1835 / 2, 25}\n\n set {winWidth, winHeight} to size of window 1\n if winWidth is not (1835) / 2 or winHeight is not 994 then\n tell window 1 to set size to {(1835) / 2, 994}\n end if\nend tell\n\nSecond, the script sets the position of the window to {1835 - winWidth, 25} in the second tell block, but this does not guarantee that the window will be shown in the bottom half of the screen. This is because the winWidth variable that is used to calculate the position is the width of the window after it has been resized, not the width of the window before it has been resized. This means that if the window is not at the correct size when the position is calculated, the window may not be shown in the correct location on the screen.\nTo fix this, you can calculate the position of the window before setting the size, so that the correct position is used even if the size of the window changes. 
Here is an example of how you could do this:\ntell application process \"Craft\"\n delay 0.1\n tell window 1 to set position to {1835 / 2, 25}\n\n set {winWidth, winHeight} to size of window 1\n if winWidth is not (1835) / 2 or winHeight is not 994 then\n set position to {1835 - (1835) / 2, 25\n\nI hope this helps you\n"
] | [
0
] | [] | [] | [
"applescript",
"cocoa",
"macos"
] | stackoverflow_0074571231_applescript_cocoa_macos.txt |
Q:
How to get pictures from mounted volume in java spring boot
I have a Java application in a Docker container. Saving pictures works fine, but getting them doesn't work; I get the error: javax.imageio.IIOException: Can't read input file!.
This is my Docker image:
FROM openjdk:17-jdk-alpine
ARG JAR_FILE=*.jar
EXPOSE 9000
COPY build/libs/wall-0.0.1-SNAPSHOT.jar .
ENTRYPOINT ["java","-jar","wall-0.0.1-SNAPSHOT.jar"]
The mounting part is being done in the kubernetes cluster configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
name: wall-app
labels:
app: wall-app
spec:
replicas: 1
selector:
matchLabels:
app: wall-app
template:
metadata:
labels:
app: wall-app
spec:
containers:
- name: wall-app
image: u/wall
imagePullPolicy: Always
ports:
- containerPort: 9010
resources:
limits:
memory: 512Mi
volumeMounts:
- name: images-volume
mountPath: /images
volumes:
- name: images-volume
hostPath:
path: /home/ubuntu/images-volume
My java application function to get image:
Note: I'm sure that the picture name is correct; the problem is with the input stream.
@PostMapping(value = "/post/image")
public @ResponseBody String getImageAsBase64(@RequestBody String json){
try {
JSONObject jsonObject = new JSONObject(json);
String path = "/images/post/" + jsonObject.get("postId") + "/" + jsonObject.get("pictureName");
InputStream in = getClass()
.getResourceAsStream(path);
String encodedString = Base64.getEncoder().encodeToString(IOUtils.toByteArray(in));
System.out.print(path);
//images/post/5/test.png
return encodedString;
}catch(Exception e) {
System.out.println("picture not found"+e);
}
return null;
}
I tried ../ to get one directory above but it did not work
A:
That getResourceAsStream method is only to be used for reading resources (files) from the classpath, which is a rather logical thing. As far as I can see, you want to read the images directly from the file system, outside of your classpath as can be observed from your ENTRYPOINT. For that, you can use FileInputStream. So, you have to replace the in variable with the following code:
InputStream in = new FileInputStream(path);
But don't forget about closing the input stream. You can use the convenient try-with-resources statement for that purpose. After that, your relevant piece of code will look like this:
try (InputStream in = new FileInputStream(path)) {
String encodedString = Base64.getEncoder().encodeToString(IOUtils.toByteArray(in));
System.out.print(path);
//images/post/5/test.png
return encodedString;
}
| How to get pictures from mounted volume in java spring boot | I have a Java application in a Docker container. Saving pictures works fine, but getting them doesn't work; I get the error: javax.imageio.IIOException: Can't read input file!.
This is my Docker image:
FROM openjdk:17-jdk-alpine
ARG JAR_FILE=*.jar
EXPOSE 9000
COPY build/libs/wall-0.0.1-SNAPSHOT.jar .
ENTRYPOINT ["java","-jar","wall-0.0.1-SNAPSHOT.jar"]
The mounting part is being done in the kubernetes cluster configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
name: wall-app
labels:
app: wall-app
spec:
replicas: 1
selector:
matchLabels:
app: wall-app
template:
metadata:
labels:
app: wall-app
spec:
containers:
- name: wall-app
image: u/wall
imagePullPolicy: Always
ports:
- containerPort: 9010
resources:
limits:
memory: 512Mi
volumeMounts:
- name: images-volume
mountPath: /images
volumes:
- name: images-volume
hostPath:
path: /home/ubuntu/images-volume
My java application function to get image:
Note: I'm sure that the picture name is correct; the problem is with the input stream.
@PostMapping(value = "/post/image")
public @ResponseBody String getImageAsBase64(@RequestBody String json){
try {
JSONObject jsonObject = new JSONObject(json);
String path = "/images/post/" + jsonObject.get("postId") + "/" + jsonObject.get("pictureName");
InputStream in = getClass()
.getResourceAsStream(path);
String encodedString = Base64.getEncoder().encodeToString(IOUtils.toByteArray(in));
System.out.print(path);
//images/post/5/test.png
return encodedString;
}catch(Exception e) {
System.out.println("picture not found"+e);
}
return null;
}
I tried ../ to get one directory above but it did not work
| [
"That getResourceAsStream method is only to be used for reading resources (files) from the classpath, which is a rather logical thing. As far as I can see, you want to read the images directly from the file system, outside of your classpath as can be observed from your ENTRYPOINT. For that, you can use FileInputStream. So, you have to replace the in variable with the following code:\nInputStream in = new FileInputStream(path);\n\nBut don't forget about closing the input stream. You can use the convenient try-with-resources statement for that purpose. After that, your relevant piece of code will look like this:\ntry (InputStream in = new FileInputStream(path)) {\n\n String encodedString = Base64.getEncoder().encodeToString(IOUtils.toByteArray(in));\n\n System.out.print(path);\n //images/post/5/test.png\n\n return encodedString;\n}\n\n"
] | [
0
] | [] | [] | [
"jar",
"java",
"spring",
"spring_boot"
] | stackoverflow_0074675251_jar_java_spring_spring_boot.txt |
Q:
Firestore update map keys
Is it possible to update a Firestore map's keys? I'm trying to do so through the Firestore dashboard and the key field is disabled, as can be seen in the screenshot below. Does that mean that instead of updating the key, one needs to first delete the old entry and then create a new one?
A:
There is no way to change the name of a field in Firestore. The API doesn't have such an operation, and (for that reason) it also isn't in the Firebase console.
You'll have to add a new field with the same value, and delete the old field.
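A sketch of what the copy-then-delete looks like in practice. With a real client library (for example the Python google-cloud-firestore package) the delete is expressed with the firestore.DELETE_FIELD sentinel inside a single update() call; the helper below just builds that payload as a plain dict, with a placeholder object standing in for the sentinel so the sketch runs without the library. The field names are made up for illustration.

```python
# Build the update payload that "renames" a field: write the value under the
# new key and mark the old key for deletion. DELETE_FIELD here is a local
# placeholder standing in for google.cloud.firestore.DELETE_FIELD.
DELETE_FIELD = object()

def rename_field_payload(old_key, new_key, current_value):
    # Passed to doc_ref.update(...) in one call, so both changes land together.
    return {new_key: current_value, old_key: DELETE_FIELD}

payload = rename_field_payload("adress", "address", "221B Baker St")
print(sorted(payload))  # ['address', 'adress']
```

With the real client, the same payload (using the library's sentinel) would be given to DocumentReference.update() after reading the current value from the document.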
| Firestore update map keys | Is it possible to update a Firestore map's keys? I'm trying to do so through the Firestore dashboard and the key field is disabled, as can be seen in the screenshot below. Does that mean that instead of updating the key, one needs to first delete the old entry and then create a new one?
| [
"There is no way to change the name of a field in Firestore. The API doesn't have such an operation, and (for that reason) it also isn't in the Firebase console.\nYou'll have to add a new field with the same value, and delete the old field.\n"
] | [
0
] | [] | [] | [
"google_cloud_firestore"
] | stackoverflow_0074677823_google_cloud_firestore.txt |
Q:
how to get country from ip in Laravel
I have created a custom Analytics Module where I can track visitor activity. Here I used the Laravel package "stevebauman/location" for getting the visitor's location.
The main issue is that when I statically provide the IP to the variable, it gives me the correct location. On the other hand, when I get the IP dynamically, it just provides the IP.
$visitor = request()->ip();
How Can I get country Name, Code, Postal Address from the IP in Laravel
$visitor = request()->ip();
$traffic = PageVisits::where('ip_address', $visitor)->where('property_id', $id)
->whereMonth('created_at', Carbon::now()->month)->first();
if(auth()->check() && auth()->user()->usertype == 'Admin'){
}else{
if (!$traffic) {
$traffic = new PageVisits();
$traffic->ip_address = $visitor;
$traffic->property_id = $id;
$position = Location::get('https://'.$visitor);
$traffic->country = $position->countryName;
$traffic->agency_id = $property->agency_id ?? '';
$traffic->save();
}
A:
You can create a service and then use that service in your controller.
A:
You can create a Services folder and put the class in it. I use IpStack:
<?php
namespace App\Services;
use Illuminate\Support\Facades\Http;
class IpStack
{
protected $key;
protected $baseUrl = 'http://api.ipstack.com/';
public function __construct($key)
{
$this->key = $key;
}
public function get($ip)
{
// http://api.ipstack.com/(ip)?access_key=(key)
$response = Http::baseUrl($this->baseUrl)
->get($ip, [
'access_key' => $this->key,
]);
return $response->json();
}
}
for call in controller
{
$geoip = new IpStack(config('services.ipstack.key'));
$response = $geoip->get(request()->ip());
}
| how to get country from ip in Laravel | I have created a custom Analytics module where I can track visitor activity. Here I used the Laravel package "stevebauman/location" for getting the visitor's location.
The main issue is that when I statically provide the IP to the variable, it gives me the correct location. On the other hand, when I get the IP dynamically, it just provides the IP
$visitor = request()->ip();
How Can I get country Name, Code, Postal Address from the IP in Laravel
$visitor = request()->ip();
$traffic = PageVisits::where('ip_address', $visitor)->where('property_id', $id)
->whereMonth('created_at', Carbon::now()->month)->first();
if(auth()->check() && auth()->user()->usertype == 'Admin'){
}else{
if (!$traffic) {
$traffic = new PageVisits();
$traffic->ip_address = $visitor;
$traffic->property_id = $id;
$position = Location::get('https://'.$visitor);
$traffic->country = $position->countryName;
$traffic->agency_id = $property->agency_id ?? '';
$traffic->save();
}
| [
"You can create a service and then can use that service in your controller.\n",
"You Can Create Folder With service and but it >>>> I use apiStack\n<?php\n\nnamespace App\\Services;\n\nuse Illuminate\\Support\\Facades\\Http;\n\nclass IpStack\n{\n\n protected $key;\n\n protected $baseUrl = 'http://api.ipstack.com/';\n\n public function __construct($key)\n {\n $this->key = $key;\n }\n\n public function get($ip)\n {\n // http://api.ipstack.com/(ip)?access_key=(key)\n $response = Http::baseUrl($this->baseUrl)\n ->get($ip, [\n 'access_key' => $this->key,\n ]);\n\n return $response->json();\n }\n}\n\nfor call in controller\n{\n $geoip = new IpStack(config('services.ipstack.key'));\n $response = $geoip->get(request()->ip());\n}\n\n"
] | [
1,
0
] | [] | [] | [
"ip",
"laravel",
"laravel_7",
"mysql",
"php"
] | stackoverflow_0072541796_ip_laravel_laravel_7_mysql_php.txt |
Q:
laravel dompdf get base64 format
I am using laravel-dompdf (Barryvdh\DomPDF) in a project and need to get the file in base64 format (for a Vue component)
In laravel controller:
$data = array(
'values' => $documentValues
);
$pdf = PDF::loadView('documentTemplate', $data);
I can download the file using:
$pdf->download('test.pdf');
but I need a response from server with the source file in base64 format like this:
source: 'data:application/pdf;base64,<BASE64_ENCODED_PDF>'
I have tried something like this:
base64_encode($pdf->stream())
but is not working.
Any ideas?
UPDATE
I fixed:
return 'data:application/pdf;base64,'.base64_encode($pdf->stream());
| laravel dompdf get base64 format | I am using laravel-dompdf (Barryvdh\DomPDF) in a project and need to get the file in base64 format (for a Vue component)
In laravel controller:
$data = array(
'values' => $documentValues
);
$pdf = PDF::loadView('documentTemplate', $data);
I can download the file using:
$pdf->download('test.pdf');
but I need a response from server with the source file in base64 format like this:
source: 'data:application/pdf;base64,<BASE64_ENCODED_PDF>'
I have tried something like this:
base64_encode($pdf->stream())
but is not working.
Any ideas?
UPDATE
I fixed:
return 'data:application/pdf;base64,'.base64_encode($pdf->stream());
| [] | [] | [
"Server side:\necho base64_encode($dompdf->output());\nclient side:\nhtmlstr['html'] = \"abcd\";\nawait axios.post(endpoint, transformToFormData(htmlstr), config, { responseType: 'blob' })\n.then(response => {\n if(response.status===200 && response.data != null ){\n console.log(\"response-data\", response);\n const data = response.data;\n downloadPDF(data,fileName);\n \n \n }\n })\n\nfunction downloadPDF(pdf,fileName) {\nconst linkSource = data:application/pdf;base64,${pdf};\nconst downloadLink = document.createElement(\"a\");\ndownloadLink.href = linkSource;\ndownloadLink.download = fileName;\ndownloadLink.click();\n}\n"
] | [
-1
] | [
"dompdf",
"laravel",
"php",
"vue.js"
] | stackoverflow_0070738823_dompdf_laravel_php_vue.js.txt |
Q:
ksqlDB deleting records from KTable
• We have a topic “customer_events“ in Kafka. Example of value.
{
"CUSTOMERID": "198fa518-1031-4fe8-8abd-ca29bd120259"
}
• We created a persistent stream over the topic in ksqlDB cluster in Confluent.
CREATE STREAM TEST_STREAM
(SESSIONID STRING KEY, CUSTOMERID STRING) WITH
(KAFKA_TOPIC='customer_events', KEY_FORMAT='KAFKA', PARTITIONS=1, VALUE_FORMAT='JSON');
• We created derived table over the stream in ksqlDB cluster in Confluent. The table aggregates customers according to SessionId.
CREATE TABLE QUERYABLE_TESTTABLE AS SELECT
SRC.SESSIONID SESSIONID,
COLLECT_LIST(SRC.CUSTOMERID) CUSTOMERS
FROM TEST_STREAM SRC
GROUP BY SRC.SESSIONID
EMIT CHANGES;
• We then query the table (pull query):
SELECT * from QUERYABLE_TESTTABLE ;
• The whole flow works fine (INSERT and UPDATE). The results are as expected.
SessionId
customers
"3e45e7ac-781b-4213-b288-b3f95836487c"
[ "198fa518-1031-4fe8-8abd-ca29bd120259", "bb1494de-bc1a-429b-a2b0-68684ed01d17"]
"88db0272-db35-48e9-b7ec-b326a9cde106"
[ "bc4ab46c-5e79-4ca6-af67-74688105a5c0"]
...
...
But how to remove the items from the QUERYABLE_TESTTABLE table?
We tried to insert a tombstone into the customer_events topic. We also tried to insert a tombstone into the underlying topic of the QUERYABLE_TESTTABLE table, which I know is not the best idea. We searched the internet; there is no clear description of how to do it.
A:
You are using a STREAM, which doesn't read tombstone (null-value) events in the first place. The solution is more of a re-design; I can't think of any other solution to this problem.
If you have control over what you are publishing, instead of publishing a tombstone event to the customer_events topic:
Add a new column. The column can be called __deleted.
Populate it with false by default; when you want to delete the key, just make it true.
Add a simple WHERE clause in your derived table: WHERE __deleted != 'true'.
To also have a tombstone in the final table (otherwise you will see an empty array once all records for a given SESSIONID are deleted),
just add a HAVING clause at the end that checks the array size is greater than 0, which creates the tombstone event:
ARRAY_LENGTH(COLLECT_LIST(SRC.CUSTOMERID)) > 0
Note: You can easily do this even if your source connector is Debezium. It provides an out-of-the-box class to do it.
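Sketched as ksqlDB statements (illustrative only, not a tested deployment — the column name DELETED stands in for the Debezium-style __deleted, which would need quoting as an identifier; stream, table, and topic names follow the question):
```sql
-- Producer now writes a soft-delete flag alongside each event
CREATE STREAM TEST_STREAM
  (SESSIONID STRING KEY, CUSTOMERID STRING, DELETED STRING) WITH
  (KAFKA_TOPIC='customer_events', KEY_FORMAT='KAFKA', PARTITIONS=1, VALUE_FORMAT='JSON');

-- Derived table ignores soft-deleted rows; the HAVING clause lets ksqlDB
-- emit a tombstone for a SESSIONID once its collected list would be empty
CREATE TABLE QUERYABLE_TESTTABLE AS SELECT
  SRC.SESSIONID SESSIONID,
  COLLECT_LIST(SRC.CUSTOMERID) CUSTOMERS
FROM TEST_STREAM SRC
WHERE SRC.DELETED != 'true'
GROUP BY SRC.SESSIONID
HAVING ARRAY_LENGTH(COLLECT_LIST(SRC.CUSTOMERID)) > 0
EMIT CHANGES;
```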
| ksqlDB deleting records from KTable | • We have a topic “customer_events“ in Kafka. Example of value.
{
"CUSTOMERID": "198fa518-1031-4fe8-8abd-ca29bd120259"
}
• We created a persistent stream over the topic in ksqlDB cluster in Confluent.
CREATE STREAM TEST_STREAM
(SESSIONID STRING KEY, CUSTOMERID STRING) WITH
(KAFKA_TOPIC='customer_events', KEY_FORMAT='KAFKA', PARTITIONS=1, VALUE_FORMAT='JSON');
• We created derived table over the stream in ksqlDB cluster in Confluent. The table aggregates customers according to SessionId.
CREATE TABLE QUERYABLE_TESTTABLE AS SELECT
SRC.SESSIONID SESSIONID,
COLLECT_LIST(SRC.CUSTOMERID) CUSTOMERS
FROM TEST_STREAM SRC
GROUP BY SRC.SESSIONID
EMIT CHANGES;
• We then query the table (pull query):
SELECT * from QUERYABLE_TESTTABLE ;
• The whole flow works fine (INSERT and UPDATE). The results are as expected.
SessionId
customers
"3e45e7ac-781b-4213-b288-b3f95836487c"
[ "198fa518-1031-4fe8-8abd-ca29bd120259", "bb1494de-bc1a-429b-a2b0-68684ed01d17"]
"88db0272-db35-48e9-b7ec-b326a9cde106"
[ "bc4ab46c-5e79-4ca6-af67-74688105a5c0"]
...
...
But how to remove the items from the QUERYABLE_TESTTABLE table?
We tried to insert a tombstone into the customer_events topic. We also tried to insert a tombstone into the underlying topic of the QUERYABLE_TESTTABLE table, which I know is not the best idea. We searched the internet; there is no clear description of how to do it.
| [
"You are using STREAM which doesn't read tombstone (value as null) event in the 1st place. Solution is more of a re-design. I can't think of any other solution to this problem.\nIf you have control over what you are publishing, instead of publishing tombstone event to customer_events table.\n\nAdd a new column. Column can be called __deleted.\nBy default populate false if you want to delete the key just make it true.\nAdd a simple where clause in your derived table WHERE __deleted != 'true'.\nTo also have tombstone at final table as well. Or else you will see empty array if all records for a given SESSIONID is deleted.\nJust add a having clause at the end to check array size greater than 0 to create a tombstone even.\nARRAY_LENGTH(COLLECT_LIST(SRC.CUSTOMERID)) > 0\n\nNote: You can easily do this even if your source connector is debezium. It provides out of the box class to do it\n"
] | [
0
] | [] | [] | [
"ksqldb",
"ktable"
] | stackoverflow_0074463133_ksqldb_ktable.txt |
Q:
HorizontalAlignment=Stretch, MaxWidth, and Left aligned at the same time?
This seems like it should be easy but I'm stumped. In WPF, I'd like a TextBox that stretches to the width of its parent, but only to a maximum width. The problem is that I want it to be left-justified within its parent. To get it to stretch you have to use HorizontalAlignment="Stretch", but then the result is centered. I've experimented with HorizontalContentAlignment, but it doesn't seem to do anything.
How do I get this blue text box to grow with the size of the window, have a maximum width of 200 pixels, and be left justified?
<Page
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
<StackPanel>
<TextBox Background="Azure" Text="Hello" HorizontalAlignment="Stretch" MaxWidth="200" />
</StackPanel>
</Page>
What's the trick?
A:
You can set HorizontalAlignment to Left, set your MaxWidth and then bind Width to the ActualWidth of the parent element:
<Page
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
<StackPanel Name="Container">
<TextBox Background="Azure"
Width="{Binding ElementName=Container,Path=ActualWidth}"
Text="Hello" HorizontalAlignment="Left" MaxWidth="200" />
</StackPanel>
</Page>
A:
<Grid>
<Grid.ColumnDefinitions>
<ColumnDefinition Width="*" MaxWidth="200"/>
</Grid.ColumnDefinitions>
<TextBox Background="Azure" Text="Hello" />
</Grid>
A:
Both answers given worked for the problem I stated -- Thanks!
In my real application though, I was trying to constrain a panel inside of a ScrollViewer and Kent's method didn't handle that very well for some reason I didn't bother to track down. Basically the controls could expand beyond the MaxWidth setting and defeated my intent.
Nir's technique worked well and didn't have the problem with the ScrollViewer, though there is one minor thing to watch out for. You want to be sure the right and left margins on the TextBox are set to 0 or they'll get in the way. I also changed the binding to use ViewportWidth instead of ActualWidth to avoid issues when the vertical scrollbar appeared.
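A sketch of the ScrollViewer variant described above (element names and sizes are illustrative; binding to ViewportWidth instead of ActualWidth avoids feedback from the vertical scrollbar, and Margin is kept at 0 as noted):
```xml
<ScrollViewer Name="Scroller" VerticalScrollBarVisibility="Auto">
    <StackPanel>
        <TextBox Background="Azure" Text="Hello" Margin="0"
                 MaxWidth="200" HorizontalAlignment="Left"
                 Width="{Binding ElementName=Scroller, Path=ViewportWidth}" />
    </StackPanel>
</ScrollViewer>
```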
A:
You can use this for the Width of your DataTemplate:
Width="{Binding ActualWidth,RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type ScrollContentPresenter}}}"
Make sure your DataTemplate root has Margin="0" (you can use some panel as the root and set the Margin to the children of that root)
A:
Maybe I can still help somebody out who bumps into this question, because this is a very old issue.
I needed this as well and wrote a behavior to take care of this. So here is the behavior:
public class StretchMaxWidthBehavior : Behavior<FrameworkElement>
{
protected override void OnAttached()
{
base.OnAttached();
((FrameworkElement)this.AssociatedObject.Parent).SizeChanged += this.OnSizeChanged;
}
protected override void OnDetaching()
{
base.OnDetaching();
((FrameworkElement)this.AssociatedObject.Parent).SizeChanged -= this.OnSizeChanged;
}
private void OnSizeChanged(object sender, SizeChangedEventArgs e)
{
this.SetAlignments();
}
private void SetAlignments()
{
var slot = LayoutInformation.GetLayoutSlot(this.AssociatedObject);
var newWidth = slot.Width;
var newHeight = slot.Height;
if (!double.IsInfinity(this.AssociatedObject.MaxWidth))
{
if (this.AssociatedObject.MaxWidth < newWidth)
{
this.AssociatedObject.HorizontalAlignment = HorizontalAlignment.Left;
this.AssociatedObject.Width = this.AssociatedObject.MaxWidth;
}
else
{
this.AssociatedObject.HorizontalAlignment = HorizontalAlignment.Stretch;
this.AssociatedObject.Width = double.NaN;
}
}
if (!double.IsInfinity(this.AssociatedObject.MaxHeight))
{
if (this.AssociatedObject.MaxHeight < newHeight)
{
this.AssociatedObject.VerticalAlignment = VerticalAlignment.Top;
this.AssociatedObject.Height = this.AssociatedObject.MaxHeight;
}
else
{
this.AssociatedObject.VerticalAlignment = VerticalAlignment.Stretch;
this.AssociatedObject.Height = double.NaN;
}
}
}
}
Then you can use it like so:
<Grid>
<Grid.ColumnDefinitions>
<ColumnDefinition Width="Auto" />
<ColumnDefinition />
</Grid.ColumnDefinitions>
<TextBlock Grid.Column="0" Text="Label" />
<TextBox Grid.Column="1" MaxWidth="600">
<i:Interaction.Behaviors>
<cbh:StretchMaxWidthBehavior/>
</i:Interaction.Behaviors>
</TextBox>
</Grid>
Note: don't forget to use the System.Windows.Interactivity namespace to use the behavior.
A:
Functionally similar to the accepted answer, but doesn't require the parent element to be specified:
<TextBox
Width="{Binding ActualWidth, RelativeSource={RelativeSource Mode=FindAncestor, AncestorType={x:Type FrameworkElement}}}"
MaxWidth="500"
HorizontalAlignment="Left" />
A:
I would use SharedSizeGroup
<Grid>
    <Grid.ColumnDefinitions>
        <ColumnDefinition SharedSizeGroup="col1"></ColumnDefinition>
        <ColumnDefinition SharedSizeGroup="col2"></ColumnDefinition>
    </Grid.ColumnDefinitions>
<TextBox Background="Azure" Text="Hello" Grid.Column="1" MaxWidth="200" />
</Grid>
A:
In my case I had to put the textbox into a stack panel in order to stretch the textbox on the left side.
Thanks to the previous post.
Just as an example, I set a background color to see what happens when the window size changes.
<StackPanel Name="JustContainer" VerticalAlignment="Center" HorizontalAlignment="Stretch" Background="BlueViolet" >
<TextBox
Name="Input" Text="Hello World"
MaxWidth="300"
HorizontalAlignment="Right"
Width="{Binding ActualWidth, RelativeSource={RelativeSource Mode=FindAncestor, AncestorType={x:Type FrameworkElement}}}">
</TextBox>
</StackPanel>
A:
These answers didn't work for me, because I needed a TextBox to stretch (i.e. consume all available space) until it reaches its MaxWidth and, if more space is available, align to the right.
I created this simple control that works along the lines of Y C's answer, but doesn't require System.Windows.Interactivity:
public class StretchAlignmentPanel : ContentControl
{
public StretchAlignmentPanel()
{
this.SizeChanged += StretchAlignmentPanel_SizeChanged;
}
public static readonly DependencyProperty HorizontalFallbackAlignmentProperty = DependencyProperty.Register(
nameof(HorizontalFallbackAlignment), typeof(HorizontalAlignment), typeof(StretchAlignmentPanel), new PropertyMetadata(HorizontalAlignment.Stretch));
public HorizontalAlignment HorizontalFallbackAlignment
{
get { return (HorizontalAlignment)GetValue(HorizontalFallbackAlignmentProperty); }
set { SetValue(HorizontalFallbackAlignmentProperty, value); }
}
public static readonly DependencyProperty VerticalFallbackAlignmentProperty = DependencyProperty.Register(
nameof(VerticalFallbackAlignment), typeof(VerticalAlignment), typeof(StretchAlignmentPanel), new PropertyMetadata(VerticalAlignment.Stretch));
public VerticalAlignment VerticalFallbackAlignment
{
get { return (VerticalAlignment)GetValue(VerticalFallbackAlignmentProperty); }
set { SetValue(VerticalFallbackAlignmentProperty, value); }
}
private void StretchAlignmentPanel_SizeChanged(object sender, System.Windows.SizeChangedEventArgs e)
{
var fe = this.Content as FrameworkElement;
if (fe == null) return;
if(e.WidthChanged) applyHorizontalAlignment(fe);
if(e.HeightChanged) applyVerticalAlignment(fe);
}
private void applyHorizontalAlignment(FrameworkElement fe)
{
if (HorizontalFallbackAlignment == HorizontalAlignment.Stretch) return;
if (this.ActualWidth > fe.MaxWidth)
{
fe.HorizontalAlignment = HorizontalFallbackAlignment;
fe.Width = fe.MaxWidth;
}
else
{
fe.HorizontalAlignment = HorizontalAlignment.Stretch;
fe.Width = double.NaN;
}
}
private void applyVerticalAlignment(FrameworkElement fe)
{
if (VerticalFallbackAlignment == VerticalAlignment.Stretch) return;
if (this.ActualHeight > fe.MaxHeight)
{
fe.VerticalAlignment = VerticalFallbackAlignment;
fe.Height= fe.MaxHeight;
}
else
{
fe.VerticalAlignment = VerticalAlignment.Stretch;
fe.Height= double.NaN;
}
}
}
It can be used like this:
<controls:StretchAlignmentPanel HorizontalFallbackAlignment="Right">
<TextBox MaxWidth="200" MinWidth="100" Text="Example"/>
</controls:StretchAlignmentPanel>
| HorizontalAlignment=Stretch, MaxWidth, and Left aligned at the same time? | This seems like it should be easy but I'm stumped. In WPF, I'd like a TextBox that stretches to the width of its parent, but only to a maximum width. The problem is that I want it to be left-justified within its parent. To get it to stretch you have to use HorizontalAlignment="Stretch", but then the result is centered. I've experimented with HorizontalContentAlignment, but it doesn't seem to do anything.
How do I get this blue text box to grow with the size of the window, have a maximum width of 200 pixels, and be left justified?
<Page
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
<StackPanel>
<TextBox Background="Azure" Text="Hello" HorizontalAlignment="Stretch" MaxWidth="200" />
</StackPanel>
</Page>
What's the trick?
| [
"You can set HorizontalAlignment to Left, set your MaxWidth and then bind Width to the ActualWidth of the parent element:\n<Page\n xmlns=\"http://schemas.microsoft.com/winfx/2006/xaml/presentation\"\n xmlns:x=\"http://schemas.microsoft.com/winfx/2006/xaml\">\n <StackPanel Name=\"Container\"> \n <TextBox Background=\"Azure\" \n Width=\"{Binding ElementName=Container,Path=ActualWidth}\"\n Text=\"Hello\" HorizontalAlignment=\"Left\" MaxWidth=\"200\" />\n </StackPanel>\n</Page>\n\n",
"<Grid>\n <Grid.ColumnDefinitions>\n <ColumnDefinition Width=\"*\" MaxWidth=\"200\"/>\n </Grid.ColumnDefinitions>\n\n <TextBox Background=\"Azure\" Text=\"Hello\" />\n</Grid>\n\n",
"Both answers given worked for the problem I stated -- Thanks!\nIn my real application though, I was trying to constrain a panel inside of a ScrollViewer and Kent's method didn't handle that very well for some reason I didn't bother to track down. Basically the controls could expand beyond the MaxWidth setting and defeated my intent.\nNir's technique worked well and didn't have the problem with the ScrollViewer, though there is one minor thing to watch out for. You want to be sure the right and left margins on the TextBox are set to 0 or they'll get in the way. I also changed the binding to use ViewportWidth instead of ActualWidth to avoid issues when the vertical scrollbar appeared.\n",
"You can use this for the Width of your DataTemplate:\nWidth=\"{Binding ActualWidth,RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type ScrollContentPresenter}}}\"\n\nMake sure your DataTemplate root has Margin=\"0\" (you can use some panel as the root and set the Margin to the children of that root)\n",
"Maybe I can still help somebody out who bumps into this question, because this is a very old issue.\nI needed this as well and wrote a behavior to take care of this. So here is the behavior:\npublic class StretchMaxWidthBehavior : Behavior<FrameworkElement>\n{ \n protected override void OnAttached()\n {\n base.OnAttached();\n ((FrameworkElement)this.AssociatedObject.Parent).SizeChanged += this.OnSizeChanged;\n }\n\n protected override void OnDetaching()\n {\n base.OnDetaching();\n ((FrameworkElement)this.AssociatedObject.Parent).SizeChanged -= this.OnSizeChanged;\n }\n\n private void OnSizeChanged(object sender, SizeChangedEventArgs e)\n {\n this.SetAlignments();\n }\n\n private void SetAlignments()\n {\n var slot = LayoutInformation.GetLayoutSlot(this.AssociatedObject);\n var newWidth = slot.Width;\n var newHeight = slot.Height;\n\n if (!double.IsInfinity(this.AssociatedObject.MaxWidth))\n {\n if (this.AssociatedObject.MaxWidth < newWidth)\n {\n this.AssociatedObject.HorizontalAlignment = HorizontalAlignment.Left;\n this.AssociatedObject.Width = this.AssociatedObject.MaxWidth;\n }\n else\n {\n this.AssociatedObject.HorizontalAlignment = HorizontalAlignment.Stretch;\n this.AssociatedObject.Width = double.NaN;\n }\n }\n\n if (!double.IsInfinity(this.AssociatedObject.MaxHeight))\n {\n if (this.AssociatedObject.MaxHeight < newHeight)\n {\n this.AssociatedObject.VerticalAlignment = VerticalAlignment.Top;\n this.AssociatedObject.Height = this.AssociatedObject.MaxHeight;\n }\n else\n {\n this.AssociatedObject.VerticalAlignment = VerticalAlignment.Stretch;\n this.AssociatedObject.Height = double.NaN;\n }\n }\n }\n}\n\nThen you can use it like so:\n<Grid>\n <Grid.ColumnDefinitions>\n <ColumnDefinition Width=\"Auto\" />\n <ColumnDefinition />\n </Grid.ColumnDefinitions>\n\n <TextBlock Grid.Column=\"0\" Text=\"Label\" />\n <TextBox Grid.Column=\"1\" MaxWidth=\"600\">\n <i:Interaction.Behaviors> \n <cbh:StretchMaxWidthBehavior/>\n </i:Interaction.Behaviors>\n 
</TextBox>\n</Grid>\n\nNote: don't forget to use the System.Windows.Interactivity namespace to use the behavior.\n",
"Functionally similar to the accepted answer, but doesn't require the parent element to be specified:\n<TextBox\n Width=\"{Binding ActualWidth, RelativeSource={RelativeSource Mode=FindAncestor, AncestorType={x:Type FrameworkElement}}}\"\n MaxWidth=\"500\"\n HorizontalAlignment=\"Left\" />\n\n",
"I would use SharedSizeGroup\n<Grid>\n <Grid.ColumnDefinition>\n <ColumnDefinition SharedSizeGroup=\"col1\"></ColumnDefinition> \n <ColumnDefinition SharedSizeGroup=\"col2\"></ColumnDefinition>\n </Grid.ColumnDefinition>\n <TextBox Background=\"Azure\" Text=\"Hello\" Grid.Column=\"1\" MaxWidth=\"200\" />\n</Grid>\n\n",
"In my case I had to put textbox into a stack panel in order to stretch textbox on left side. \nThanks to previous post.\nJust for an example I did set a background color to see what’s happens when window size is changing.\n<StackPanel Name=\"JustContainer\" VerticalAlignment=\"Center\" HorizontalAlignment=\"Stretch\" Background=\"BlueViolet\" >\n <TextBox \n Name=\"Input\" Text=\"Hello World\" \n MaxWidth=\"300\"\n HorizontalAlignment=\"Right\"\n Width=\"{Binding ActualWidth, RelativeSource={RelativeSource Mode=FindAncestor, AncestorType={x:Type FrameworkElement}}}\">\n </TextBox>\n</StackPanel>\n\n",
"These answers didn't work for me, because I needed a TextBox to stretch (i.e. consume all available space) until it reaches it's MaxWidth and, if more space is available, align to the right.\nI created this simple control that works along the lines of Y C's answer, but doesn't require System.Windows.Interactivity:\npublic class StretchAlignmentPanel : ContentControl\n{\n public StretchAlignmentPanel()\n {\n this.SizeChanged += StretchAlignmentPanel_SizeChanged;\n }\n\n public static readonly DependencyProperty HorizontalFallbackAlignmentProperty = DependencyProperty.Register(\n nameof(HorizontalFallbackAlignment), typeof(HorizontalAlignment), typeof(StretchAlignmentPanel), new PropertyMetadata(HorizontalAlignment.Stretch));\n\n public HorizontalAlignment HorizontalFallbackAlignment\n {\n get { return (HorizontalAlignment)GetValue(HorizontalFallbackAlignmentProperty); }\n set { SetValue(HorizontalFallbackAlignmentProperty, value); }\n }\n\n public static readonly DependencyProperty VerticalFallbackAlignmentProperty = DependencyProperty.Register(\n nameof(VerticalFallbackAlignment), typeof(VerticalAlignment), typeof(StretchAlignmentPanel), new PropertyMetadata(VerticalAlignment.Stretch));\n\n public VerticalAlignment VerticalFallbackAlignment\n {\n get { return (VerticalAlignment)GetValue(VerticalFallbackAlignmentProperty); }\n set { SetValue(VerticalFallbackAlignmentProperty, value); }\n }\n\n private void StretchAlignmentPanel_SizeChanged(object sender, System.Windows.SizeChangedEventArgs e)\n {\n var fe = this.Content as FrameworkElement;\n if (fe == null) return;\n \n if(e.WidthChanged) applyHorizontalAlignment(fe);\n if(e.HeightChanged) applyVerticalAlignment(fe);\n }\n\n private void applyHorizontalAlignment(FrameworkElement fe)\n {\n if (HorizontalFallbackAlignment == HorizontalAlignment.Stretch) return;\n\n if (this.ActualWidth > fe.MaxWidth)\n {\n fe.HorizontalAlignment = HorizontalFallbackAlignment;\n fe.Width = fe.MaxWidth;\n }\n else\n {\n 
fe.HorizontalAlignment = HorizontalAlignment.Stretch;\n fe.Width = double.NaN;\n }\n }\n\n private void applyVerticalAlignment(FrameworkElement fe)\n {\n if (VerticalFallbackAlignment == VerticalAlignment.Stretch) return;\n\n if (this.ActualHeight > fe.MaxHeight)\n {\n fe.VerticalAlignment = VerticalFallbackAlignment;\n fe.Height= fe.MaxHeight;\n }\n else\n {\n fe.VerticalAlignment = VerticalAlignment.Stretch;\n fe.Height= double.NaN;\n }\n }\n}\n\nIt can be used like this:\n<controls:StretchAlignmentPanel HorizontalFallbackAlignment=\"Right\">\n <TextBox MaxWidth=\"200\" MinWidth=\"100\" Text=\"Example\"/>\n</controls:StretchAlignmentPanel>\n\n"
] | [
95,
55,
8,
7,
3,
3,
0,
0,
0
] | [] | [] | [
"alignment",
"stretch",
"wpf",
"xaml"
] | stackoverflow_0000280331_alignment_stretch_wpf_xaml.txt |
Q:
convert array with objects to one associative array without foreach
I have an array like(result of json_decode):
array(2) {
[0]=>
object(stdClass)#1 (3) {
["key"]=>
string(6) "sample"
["startYear"]=>
string(4) "2000"
["endYear"]=>
string(4) "2015"
}
[1]=>
object(stdClass)#2 (3) {
["key"]=>
string(13) "second_sample"
["startYear"]=>
string(4) "1986"
["endYear"]=>
string(4) "1991"
}
}
I want to convert it to array like:
array(2) {
["sample"]=>
array(2) {
["startYear"]=>
string(4) "2000"
["endYear"]=>
string(4) "2015"
}
["second_sample"]=>
array(2) {
["startYear"]=>
string(4) "1986"
["endYear"]=>
string(4) "1991"
}
}
Is there a cleaner way to do this (currently I'm using foreach, but I'm not sure it is the best solution)?
Added a code example:
<?php
$str='[{"key":"sample","startYear":"2000","endYear":"2015"},{"key":"second_sample","startYear":"1986","endYear":"1991"}]';
$arr=json_decode($str);
var_dump($arr);
$newArr=array();
foreach ($arr as $value){
$value=(array)$value;
$newArr[array_shift($value)]=$value;
}
var_dump($newArr);
A:
You can use array_reduce
$myArray = array_reduce($initialArray, function ($result, $item) {
$item = (array) $item;
$key = $item['key'];
unset($item['key']);
$result[$key] = $item;
return $result;
}, array());
A:
You can create the desired output without making any iterated function calls by using a technique called "array destructuring" (which is a functionless version of list()). Demo
Language Construct Style:
$result = [];
foreach ($array as $object) {
[
'key' => $key,
'startYear' => $result[$key]['startYear'],
'endYear' => $result[$key]['endYear']
] = (array)$object;
}
var_export($result);
Functional Style:
var_export(
array_reduce(
$array,
function($result, $object) {
[
'key' => $key,
'startYear' => $result[$key]['startYear'],
'endYear' => $result[$key]['endYear']
] = (array)$object;
return $result;
},
[]
)
);
Both will output:
array (
'sample' =>
array (
'startYear' => '2000',
'endYear' => '2015',
),
'second_sample' =>
array (
'startYear' => '1985',
'endYear' => '1991',
),
)
A:
You can simply use array_map, like this:
$result = array_map('get_object_vars', $your_array);
Edited:
After checking the code example you've added here, there's no need to use extra functions or a loop to convert your array of objects into an associative array; instead, you simply need to pass true as the second parameter to json_decode, like this:
$arr = json_decode($json, true);
Demo
A:
An alternative to array_reduce and other provided solutions could be:
$list = array_combine(
array_column($list, 'key'),
array_map(fn ($item) => (array) $item, array_values($list))
);
Or:
$list = array_combine(
array_column($list, 'key'),
array_map('get_object_vars', $list)
);
| convert array with objects to one associative array without foreach | I have an array like(result of json_decode):
array(2) {
[0]=>
object(stdClass)#1 (3) {
["key"]=>
string(6) "sample"
["startYear"]=>
string(4) "2000"
["endYear"]=>
string(4) "2015"
}
[1]=>
object(stdClass)#2 (3) {
["key"]=>
string(13) "second_sample"
["startYear"]=>
string(4) "1986"
["endYear"]=>
string(4) "1991"
}
}
I want to convert it to array like:
array(2) {
["sample"]=>
array(2) {
["startYear"]=>
string(4) "2000"
["endYear"]=>
string(4) "2015"
}
["second_sample"]=>
array(2) {
["startYear"]=>
string(4) "1986"
["endYear"]=>
string(4) "1991"
}
}
Is there a cleaner way to do this (currently I'm using foreach, but I'm not sure it is the best solution)?
Added a code example:
<?php
$str='[{"key":"sample","startYear":"2000","endYear":"2015"},{"key":"second_sample","startYear":"1986","endYear":"1991"}]';
$arr=json_decode($str);
var_dump($arr);
$newArr=array();
foreach ($arr as $value){
$value=(array)$value;
$newArr[array_shift($value)]=$value;
}
var_dump($newArr);
| [
"You can use array_reduce\n$myArray = array_reduce($initialArray, function ($result, $item) {\n $item = (array) $item;\n\n $key = $item['key'];\n unset($item['key']);\n\n $result[$key] = $item;\n\n return $result;\n}, array());\n\n",
"You can create the desired output without making any iterated function calls by using a technique called \"array destructuring\" (which is a functionless version of list()). Demo\nLanguage Construct Style:\n$result = [];\nforeach ($array as $object) {\n [\n 'key' => $key,\n 'startYear' => $result[$key]['startYear'],\n 'endYear' => $result[$key]['endYear']\n ] = (array)$object;\n}\nvar_export($result);\n\nFunctional Style:\nvar_export(\n array_reduce(\n $array,\n function($result, $object) {\n [\n 'key' => $key,\n 'startYear' => $result[$key]['startYear'],\n 'endYear' => $result[$key]['endYear']\n ] = (array)$object;\n return $result;\n },\n []\n )\n);\n\nBoth will output:\narray (\n 'sample' => \n array (\n 'startYear' => '2000',\n 'endYear' => '2015',\n ),\n 'second_sample' => \n array (\n 'startYear' => '1985',\n 'endYear' => '1991',\n ),\n)\n\n",
"Simply you can use array_map like as\n$result = array_map('get_object_vars',$your_array);\n\nEdited:\nAs after checking your code that you've added an example over here there's no need to use an extra functions or loop to convert your array of objects into associative array instead you simply need to pass second parameter true within json_decode function like as\n$arr = json_decode($json,true);\n\nDemo\n",
"An alternative to array_reduce and other provided solutions could be:\n$list = array_combine(\n array_column($list, 'key'),\n array_map(fn ($item) => (array) $item, array_values($list))\n);\n\nOr:\n$list = array_combine(\n array_column($list, 'key'),\n array_map('get_object_vars', $list)\n);\n\n\n"
] | [
5,
1,
0,
0
] | [] | [] | [
"php"
] | stackoverflow_0033781680_php.txt |
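For readers outside PHP, the re-keying idea shared by the answers above can be mirrored in Python (an illustrative sketch, not part of the original thread; the sample JSON is taken from the question):

```python
import json

raw = ('[{"key":"sample","startYear":"2000","endYear":"2015"},'
       '{"key":"second_sample","startYear":"1986","endYear":"1991"}]')
items = json.loads(raw)  # like json_decode($str, true): a list of dicts

# Re-key the list by each item's "key" field, dropping that field from the
# value, mirroring the array_reduce / array_combine answers
result = {item["key"]: {k: v for k, v in item.items() if k != "key"}
          for item in items}
```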
Q:
How can I show errors at the top of the form instead of having them in bubbles?
I want to show errors at the top of my form. For now, when I try to validate the form with empty fields for example, I get the error in a bubble above the corresponding field. In the Type file, I tried adding 'error_bubbling' => true, but it didn't solve the problem.
How can I show errors at the top of the form instead of having them in the individual bubbles?
Thank you.
BigCityType.php
<?php
namespace App\Form\Both;
use App\Entity\BigCity;
use App\Entity\Country;
use Symfony\Component\Form\AbstractType;
use Symfony\Component\Form\FormBuilderInterface;
use Symfony\Bridge\Doctrine\Form\Type\EntityType;
use Symfony\Component\OptionsResolver\OptionsResolver;
use Symfony\Component\Form\Extension\Core\Type\TextType;
use Symfony\Component\Form\Extension\Core\Type\SubmitType;
class BigCityType extends AbstractType
{
public function buildForm(FormBuilderInterface $builder, array $options): void
{
$builder
->add('country', EntityType::class, [
'class' => Country::class,
'choice_label' => 'name',
'placeholder' => 'Je sélectionne un pays',
'error_bubbling' => true
])
->add('name', TextType::class, [
                'placeholder' => 'Je tape le nom d\'une grande ville',
'error_bubbling' => true
])
->add('save', SubmitType::class, [
'attr' => ['class' => 'save'],
])
;
}
public function configureOptions(OptionsResolver $resolver): void
{
$resolver->setDefaults([
'data_class' => BigCity::class,
'translation_domain' => 'forms'
]);
}
}
create_bigcity.php
{% extends 'base.html.twig' %}
{% block main %}
{{ form_start(bigcityform) }}
{% if not bigcityform.vars.valid %}
<div class="alert alert-danger text-center" role="alert">
{% if not bigcityform.name.vars.valid %}
{{ form_errors(bigcityform.name) }}
{% elseif not bigcityform.country.vars.valid %}
{{ form_errors(bigcityform.country) }}
{% endif %}
</div>
{% endif %}
{{ form_widget(bigcityform.country) }}
{{ form_widget(bigcityform.name) }}
{{ form_widget(bigcityform.save, {'label': "Enregistrer une grande ville"} ) }}
{{ form_end(bigcityform) }}
{% endblock %}
A:
To show errors at the top of the form instead of in individual bubbles, you can use the error_mapping option in your form fields to map the errors to a different form field.
For example, you could add a new field to your form called global_errors and use the error_mapping option to map all errors from the other fields in your form to the global_errors field. Then, you can display the errors in the global_errors field at the top of the form instead of in the individual field bubbles.
Here is an example of how you could do this in your BigCityType class:
public function buildForm(FormBuilderInterface $builder, array $options): void
{
$builder
->add('country', EntityType::class, [
'class' => Country::class,
'choice_label' => 'name',
'placeholder' => 'Je sélectionne un pays',
'error_bubbling' => true,
'error_mapping' => ['.' => 'global_errors'],
])
->add('name', TextType::class, [
            'placeholder' => 'Je tape le nom d\'une grande ville',
'error_bubbling' => true,
'error_mapping' => ['.' => 'global_errors'],
])
->add('global_errors', TextType::class, [
'mapped' => false,
'required' => false,
])
->add('save', SubmitType::class, [
'attr' => ['class' => 'save'],
])
;
}
Then, in your template, you can display the errors in the global_errors field at the top of the form instead of in the individual field bubbles:
{% extends 'base.html.twig' %}
{% block main %}
{{ form_start(bigcityform) }}
{% if bigcityform.global_errors.vars.errors|length > 0 %}
<div class="alert alert-danger text-center" role="alert">
{{ form_errors(bigcityform.global_errors) }}
</div>
{% endif %}
{{ form_widget(bigcityform.country) }}
{{ form_widget(bigcityform.name) }}
{{ form_widget(bigcityform.save, {'label': "Enregistrer une grande ville"} ) }}
{{ form_end(bigcityform) }}
{% endblock %}
Note that in the template, you need to check the number of errors in the global_errors field instead of checking whether the form is valid, because the global_errors field is not mapped to any data and therefore will never be considered valid.
By using the error_mapping option and the global_errors field, you can show all errors at the top of the form instead of in individual field bubbles.
| How can I show errors at the top of the form instead of having them in bubbles? | I want to show errors at the top of my form. For now, when I try to validate the form with empty fields for example, I get the error in a bubble above the corresponding field. In the Type file, I tried adding 'error_bubbling' => true, but it didn't solve the problem.
How can I show errors at the top of the form instead of having them in the individual bubbles?
Thank you.
BigCityType.php
<?php
namespace App\Form\Both;
use App\Entity\BigCity;
use App\Entity\Country;
use Symfony\Component\Form\AbstractType;
use Symfony\Component\Form\FormBuilderInterface;
use Symfony\Bridge\Doctrine\Form\Type\EntityType;
use Symfony\Component\OptionsResolver\OptionsResolver;
use Symfony\Component\Form\Extension\Core\Type\TextType;
use Symfony\Component\Form\Extension\Core\Type\SubmitType;
class BigCityType extends AbstractType
{
public function buildForm(FormBuilderInterface $builder, array $options): void
{
$builder
->add('country', EntityType::class, [
'class' => Country::class,
'choice_label' => 'name',
'placeholder' => 'Je sélectionne un pays',
'error_bubbling' => true
])
->add('name', TextType::class, [
                'placeholder' => 'Je tape le nom d\'une grande ville',
'error_bubbling' => true
])
->add('save', SubmitType::class, [
'attr' => ['class' => 'save'],
])
;
}
public function configureOptions(OptionsResolver $resolver): void
{
$resolver->setDefaults([
'data_class' => BigCity::class,
'translation_domain' => 'forms'
]);
}
}
create_bigcity.php
{% extends 'base.html.twig' %}
{% block main %}
{{ form_start(bigcityform) }}
{% if not bigcityform.vars.valid %}
<div class="alert alert-danger text-center" role="alert">
{% if not bigcityform.name.vars.valid %}
{{ form_errors(bigcityform.name) }}
{% elseif not bigcityform.country.vars.valid %}
{{ form_errors(bigcityform.country) }}
{% endif %}
</div>
{% endif %}
{{ form_widget(bigcityform.country) }}
{{ form_widget(bigcityform.name) }}
{{ form_widget(bigcityform.save, {'label': "Enregistrer une grande ville"} ) }}
{{ form_end(bigcityform) }}
{% endblock %}
| [
"To show errors at the top of the form instead of in individual bubbles, you can use the error_mapping option in your form fields to map the errors to a different form field.\nFor example, you could add a new field to your form called global_errors and use the error_mapping option to map all errors from the other fields in your form to the global_errors field. Then, you can display the errors in the global_errors field at the top of the form instead of in the individual field bubbles.\nHere is an example of how you could do this in your BigCityType class:\npublic function buildForm(FormBuilderInterface $builder, array $options): void\n{\n $builder\n ->add('country', EntityType::class, [\n 'class' => Country::class,\n 'choice_label' => 'name',\n 'placeholder' => 'Je sélectionne un pays',\n 'error_bubbling' => true,\n 'error_mapping' => ['.' => 'global_errors'],\n ])\n ->add('name', TextType::class, [\n 'placeholder' => 'Je tape le nom d'une grande ville',\n 'error_bubbling' => true,\n 'error_mapping' => ['.' 
=> 'global_errors'],\n ])\n ->add('global_errors', TextType::class, [\n 'mapped' => false,\n 'required' => false,\n ])\n ->add('save', SubmitType::class, [\n 'attr' => ['class' => 'save'],\n ])\n ;\n}\n\nThen, in your template, you can display the errors in the global_errors field at the top of the form instead of in the individual field bubbles:\n{% extends 'base.html.twig' %}\n\n{% block main %}\n\n{{ form_start(bigcityform) }}\n\n{% if bigcityform.global_errors.vars.errors|length > 0 %}\n <div class=\"alert alert-danger text-center\" role=\"alert\">\n {{ form_errors(bigcityform.global_errors) }}\n </div>\n{% endif %}\n\n{{ form_widget(bigcityform.country) }}\n{{ form_widget(bigcityform.name) }}\n{{ form_widget(bigcityform.save, {'label': \"Enregistrer une grande ville\"} ) }}\n\n{{ form_end(bigcityform) }}\n\n{% endblock %}\n\nNote that in the template, you need to check the number of errors in the global_errors field instead of checking whether the form is valid, because the global_errors field is not mapped to any data and therefore will never be considered valid.\nBy using the error_mapping option and the global_errors field, you can show all errors at the top of the form instead of in individual field bubbles.\n"
] | [
0
] | [] | [] | [
"forms",
"symfony"
] | stackoverflow_0074585571_forms_symfony.txt |
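The "funnel every field error into one top-level list" pattern in the answer above is Symfony-specific, but the underlying idea can be sketched framework-agnostically in Python (all names here are illustrative, not Symfony API):

```python
def validate(data, validators):
    """Run per-field checks and collect every message into one global list,
    so the caller can render all errors at the top of a form."""
    global_errors = []
    for field, check in validators.items():
        message = check(data.get(field))
        if message:
            global_errors.append(f"{field}: {message}")
    return global_errors

# Illustrative checks for the two fields in the question's form
validators = {
    "country": lambda v: None if v else "please select a country",
    "name": lambda v: None if v else "please enter a city name",
}
errors = validate({"country": "", "name": "Paris"}, validators)  # one field missing
```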
Q:
ML.NET OLSTrainer on Macos with an arm M1 processor error with libomp
I am trying to use ML.NET's OlsTrainer on a Mac with an M1 processor, but I get the error below and was looking for some assistance.
Unhandled exception. System.NotSupportedException: The MKL library (libMklImports) or one of its dependencies is missing.
at Microsoft.ML.Trainers.OlsTrainer.TrainCore(IChannel ch, Factory cursorFactory, Int32 featureCount)
Microsoft documentation points me to this link/version to install, but makes no difference.
https://learn.microsoft.com/en-us/dotnet/machine-learning/how-to-guides/install-extra-dependencies
wget https://raw.githubusercontent.com/Homebrew/homebrew-core/fb8323f2b170bd4ae97e1bac9bf3e2983af3fdb0/Formula/libomp.rb && brew install ./libomp.rb && brew link libomp --force
A:
That's because the current Timeseries support is dependent on Intel SDKs, and does not support Apple Silicon directly.
https://github.com/dotnet/machinelearning/blob/main/docs/project-docs/platform-limitations.md
A:
Try reinstalling the llvm-omp package via conda on your Mac.
The URL below has more details:
https://scikit-learn.org/0.22/developers/advanced_installation.html
| ML.NET OLSTrainer on Macos with an arm M1 processor error with libomp | I am trying to use ML.NET's OlsTrainer on a Mac with an M1 processor, but I get the error below and was looking for some assistance.
Unhandled exception. System.NotSupportedException: The MKL library (libMklImports) or one of its dependencies is missing.
at Microsoft.ML.Trainers.OlsTrainer.TrainCore(IChannel ch, Factory cursorFactory, Int32 featureCount)
Microsoft documentation points me to this link/version to install, but makes no difference.
https://learn.microsoft.com/en-us/dotnet/machine-learning/how-to-guides/install-extra-dependencies
wget https://raw.githubusercontent.com/Homebrew/homebrew-core/fb8323f2b170bd4ae97e1bac9bf3e2983af3fdb0/Formula/libomp.rb && brew install ./libomp.rb && brew link libomp --force
| [
"That's because the current Timeseries support is dependent on Intel SDKs, and does not support Apple Silicon directly.\nhttps://github.com/dotnet/machinelearning/blob/main/docs/project-docs/platform-limitations.md\n",
"try reinstall the llvm-omp by conda on mac\nbelow URL is more details\nhttps://scikit-learn.org/0.22/developers/advanced_installation.html\n"
] | [
0,
0
] | [] | [] | [
".net_core",
"apple_m1",
"macos",
"ml.net"
] | stackoverflow_0072508823_.net_core_apple_m1_macos_ml.net.txt |
Q:
How to launch a project correctly?
There is a project, https://github.com/WentianZhang-ML/FRT-PAD , that I want to run locally. At the very end it says that you can run it as
python train_main.py \
--train_data [om/ci] \
--test_data [ci/om] \
--downstream [FE/FR/FA] \
--graph_type [direct/dense]
I try to run this file in Jupyter, but I get SystemExit: 2
A:
Some simple options for running a python script in jupyter:
Option 1: Open a terminal in Jupyter, and run your Python scripts in the terminal like you would in your local terminal.
Option 2: Make a notebook, and use %run <name of script.py> as an entry in a cell. This is more fully featured than using !python <name of script.py> in a cell.
When the SystemExit: 2 error is raised, the Python interpreter will exit with a non-zero exit code (in this case, 2), indicating that the script was not executed successfully.
To troubleshoot this error, you can try the following steps:
Check the script for syntax errors. Make sure that the script is written in valid Python and that it follows the correct syntax for the version of Python that you are using.
Make sure that any external modules or libraries that the script depends on are installed and are in the Python interpreter's search path.
If you are using Jupyter, make sure that you are using the !python command to run the script within the notebook, rather than the python command.
| How to launch a project correctly? | There is a project, https://github.com/WentianZhang-ML/FRT-PAD , that I want to run locally. At the very end it says that you can run it as
python train_main.py \
--train_data [om/ci] \
--test_data [ci/om] \
--downstream [FE/FR/FA] \
--graph_type [direct/dense]
I try to run this file in Jupyter, but I get SystemExit: 2
| [
"Some simple options for running a python script in jupyter:\nOption 1: Open a terminal in Jupyter, and run your Python scripts in the terminal like you would in your local terminal.\nOption 2: Make a notebook, and use %run <name of script.py> as an entry in a cell. This is more fully featured than using !python <name of script.py> in a cell.\nWhen the SystemExit: 2 error is raised, the Python interpreter will exit with a non-zero exit code (in this case, 2), indicating that the script was not executed successfully.\nTo troubleshoot this error, you can try the following steps:\n\nCheck the script for syntax errors. Make sure that the script is written in valid Python and that it follows the correct syntax for the version of Python that you are using.\n\nMake sure that any external modules or libraries that the script depends on are installed and are in the Python interpreter's search path.\n\nIf you are using Jupyter, make sure that you are using the !python command to run the script within the notebook, rather than the python command.\n\n\n"
] | [
0
] | [] | [] | [
"machine_learning",
"python"
] | stackoverflow_0074678074_machine_learning_python.txt |
Q:
Add a parameter to your sql query
Here is the problem I am currently facing. My goal is to add exercises from a database into predefined programs. I have written an SQL query that avoids adding a duplicate exercise to the program. The problem is that, inside the SQL query, my program cannot access the id of the program that is passed as a parameter to my function.
My controller containing my function to retrieve the exercises that are or are not in the program
public function GetExercicesFromBDD($id) {
$leProgramChoisie = new ExerciceModel();
$leProgramChoisie = $leProgramChoisie->GetProgramById($id);
$leProgram = DB::table('ProgramToExercice')->where('IdProgram', '=', $id)->get();
$mesExercices =DB::table('Exercice')
->leftjoin('ProgramToExercice', function ($join) {
$join->on('ProgramToExercice.IdExercice', '=', 'Exercice.Id')
->Where('ProgramToExercice.IdProgram' ,'=', $id );
})
->whereNull('ProgramToExercice.IdProgram')
->get();
dd($mesExercices);
return view('addExerciceIntoProgram', ['mesExercices'=>$mesExercices, 'IdProgram'=>$id, "leProgramChoisie" => $leProgramChoisie]);
}
My model to get the program id
public function GetProgramById($id) {
$leProgram = DB::table('ProgramToExercice')->where('IdProgram', '=', $id)->get();
return $leProgram;
}
my view containing the button to add exercises with its route
@foreach ($programs as $program)
<form action={{url("Program/" . $program->Id . "/editExercice")}} method="post">
@csrf
<button type="submit" class="btn btn-info">Ajouter des exercices dans un programme</button>
</form>
A:
The anonymous function is not aware of $id outside of its scope, so you need to pass that in with use:
$mesExercices =DB::table('Exercice') // pass the $id
->leftjoin('ProgramToExercice', function ($join) use ($id) {
$join->on('ProgramToExercice.IdExercice', '=', 'Exercice.Id')
->Where('ProgramToExercice.IdProgram' ,'=', $id );
})
->whereNull('ProgramToExercice.IdProgram')
->get();
| Add a parameter to your sql query | Here is the problem I am currently facing. My goal is to add exercises from a database into predefined programs. I have written an SQL query that avoids adding a duplicate exercise to the program. The problem is that, inside the SQL query, my program cannot access the id of the program that is passed as a parameter to my function.
My controller containing my function to retrieve the exercises that are or are not in the program
public function GetExercicesFromBDD($id) {
$leProgramChoisie = new ExerciceModel();
$leProgramChoisie = $leProgramChoisie->GetProgramById($id);
$leProgram = DB::table('ProgramToExercice')->where('IdProgram', '=', $id)->get();
$mesExercices =DB::table('Exercice')
->leftjoin('ProgramToExercice', function ($join) {
$join->on('ProgramToExercice.IdExercice', '=', 'Exercice.Id')
->Where('ProgramToExercice.IdProgram' ,'=', $id );
})
->whereNull('ProgramToExercice.IdProgram')
->get();
dd($mesExercices);
return view('addExerciceIntoProgram', ['mesExercices'=>$mesExercices, 'IdProgram'=>$id, "leProgramChoisie" => $leProgramChoisie]);
}
My model to get the program id
public function GetProgramById($id) {
$leProgram = DB::table('ProgramToExercice')->where('IdProgram', '=', $id)->get();
return $leProgram;
}
my view containing the button to add exercises with its route
@foreach ($programs as $program)
<form action={{url("Program/" . $program->Id . "/editExercice")}} method="post">
@csrf
<button type="submit" class="btn btn-info">Ajouter des exercices dans un programme</button>
</form>
| [
"The anonymous function is not aware of $id outside of its scope, so need to pass that in with use:\n $mesExercices =DB::table('Exercice') // pass the $id \n ->leftjoin('ProgramToExercice', function ($join) use ($id) {\n $join->on('ProgramToExercice.IdExercice', '=', 'Exercice.Id')\n ->Where('ProgramToExercice.IdProgram' ,'=', $id );\n\n })\n ->whereNull('ProgramToExercice.IdProgram')\n ->get();\n\n"
] | [
0
] | [] | [] | [
"laravel",
"php"
] | stackoverflow_0074678045_laravel_php.txt |
Q:
System.InvalidOperationException: Startup assembly Microsoft.AspNetCore.SpaProxy failed to execute
---> System.IO.FileNotFoundException: Could not load file or assembly 'Microsoft.AspNetCore.SpaProxy, Culture=neutral, PublicKeyToken=null'. The system cannot find the file specified.
File name: 'Microsoft.AspNetCore.SpaProxy, Culture=neutral, PublicKeyToken=null'
at System.Reflection.RuntimeAssembly.InternalLoad(ObjectHandleOnStack assemblyName, ObjectHandleOnStack requestingAssembly, StackCrawlMarkHandle stackMark, Boolean throwOnFileNotFound, ObjectHandleOnStack assemblyLoadContext, ObjectHandleOnStack retAssembly)
at System.Reflection.RuntimeAssembly.InternalLoad(AssemblyName assemblyName, RuntimeAssembly requestingAssembly, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, AssemblyLoadContext assemblyLoadContext)
at System.Reflection.Assembly.Load(AssemblyName assemblyRef)
at Microsoft.AspNetCore.Hosting.GenericWebHostBuilder.ExecuteHostingStartups()
I am somehow unable to start my app anymore; I always get this error, and it results in a 404 on localhost. I don't know how I can fix this and get my app running again. I hope somebody can help me, thanks.
A:
I have this same problem in my AspNet React App.
The problem occurred due to me having updated the version of Microsoft.AspNetCore.SpaProxy to 7.0.0 while my project is using .NET 6
So, I downgrade the version of Microsoft.AspNetCore.SpaProxy to 6.0.11 and it works.
| System.InvalidOperationException: Startup assembly Microsoft.AspNetCore.SpaProxy failed to execute | ---> System.IO.FileNotFoundException: Could not load file or assembly 'Microsoft.AspNetCore.SpaProxy, Culture=neutral, PublicKeyToken=null'. The system cannot find the file specified.
File name: 'Microsoft.AspNetCore.SpaProxy, Culture=neutral, PublicKeyToken=null'
at System.Reflection.RuntimeAssembly.InternalLoad(ObjectHandleOnStack assemblyName, ObjectHandleOnStack requestingAssembly, StackCrawlMarkHandle stackMark, Boolean throwOnFileNotFound, ObjectHandleOnStack assemblyLoadContext, ObjectHandleOnStack retAssembly)
at System.Reflection.RuntimeAssembly.InternalLoad(AssemblyName assemblyName, RuntimeAssembly requestingAssembly, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, AssemblyLoadContext assemblyLoadContext)
at System.Reflection.Assembly.Load(AssemblyName assemblyRef)
at Microsoft.AspNetCore.Hosting.GenericWebHostBuilder.ExecuteHostingStartups()
I am somehow unable to start my app anymore; I always get this error, and it results in a 404 on localhost. I don't know how I can fix this and get my app running again. I hope somebody can help me, thanks.
| [
"I have this same problem in my AspNet React App.\nThe problem occurred due to me having updated the version of Microsoft.AspNetCore.SpaProxy to 7.0.0 while my project is using .NET 6\nSo, I downgrade the version of Microsoft.AspNetCore.SpaProxy to 6.0.11 and it works.\n"
] | [
0
] | [] | [] | [
"asp.net_core",
"asp.net_mvc",
"c#",
"visual_studio_2022"
] | stackoverflow_0074486727_asp.net_core_asp.net_mvc_c#_visual_studio_2022.txt |
Q:
MacOS path is a space delimited version of PATH and they are linked in zsh
In most shells, $VAR and $var and $Var are three different variables because the shell (all that >> I << am aware of) is case sensitive.
In zsh on MacOS 13.01:
[Start a fresh copy of zsh]
% s=tst
% S="something else"
% echo "\$s=$s"
$s=tst
% echo "\$S=$S"
$S=something else
% [[ "$s" == "$S" ]] || echo "Not equal"
Not equal
However, in zsh on MacOS, examine $PATH and $path:
% echo $PATH
/Users/dawg/perl5/bin:/usr/local/opt/ruby/bin:/usr/local/opt/[email protected]/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/Apple/usr/bin
% echo $path
/Users/dawg/perl5/bin /usr/local/opt/ruby/bin /usr/local/opt/[email protected]/bin /usr/local/bin /usr/bin /bin /usr/sbin /sbin /Library/Apple/usr/bin
It appears that $path is a space delimited version of $PATH.
Now change PATH in the typical way:
% export PATH=/some/new/folder:$PATH
% echo $PATH
/some/new/folder:/Users/dawg/perl5/bin:/usr/local/opt/ruby/bin:/usr/local/opt/[email protected]/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/Apple/usr/bin
That is what I expect, but it also changes $path:
% echo $path
/some/new/folder /Users/dawg/perl5/bin /usr/local/opt/ruby/bin /usr/local/opt/[email protected]/bin /usr/local/bin /usr/bin /bin /usr/sbin /sbin /Library/Apple/usr/bin
WORSE, changing $path also changes $PATH!
This is not the same in a brew install of Bash which has no secret $path waiting to bite.
It bit me (with an hour of head scratching) with a loop like this on zsh on MacOS:
for path in **/*.txt; do
# do some things with sys utilities with $path...
# sys utilities like awk in the PATH were not found
done
Questions:
Is the $PATH / $path link documented somewhere?
What is the purpose?
How do I turn this off??
Why would Apple do this???
A:
path is an array that is "tied" to PATH. This is a general feature provided by zsh via the typeset -T command.
$ typeset -T FOO foo '+'
$ FOO=a+b+c
$ print -l $FOO
a+b+c
$ print -l $foo
a
b
c
Here, FOO is the scalar and foo is the array. (Pairs of names that are identical except for case is a convention, not a requirement).
+ is the separator between elements in the scalar that determines the values of the corresponding array. Modifying one variable affects the other.
The purpose is to make it easier to modify PATH, by adding or removing directories to the array and letting the shell handle updating the scalar with necessary separators.
path and PATH behave as if defined with
typeset -T PATH path :
or
typeset -T PATH path
(: is the default separator, as the feature is intended to provide array equivalent of variables like PATH, MANPATH, etc.)
There is no way to "untie" such scalar/array pairs; I would just accept that path is effectively reserved by zsh.
Both the scalar and the array may be manipulated as normal. If one is unset, the other will automatically be
unset too. There is no way of untying the variables
without unsetting them, nor of converting the type of one
of them with another typeset command; +T does not work,
assigning an array to scalar is an error, and assigning a
scalar to array sets it to be a single-element array.
You can see what other pairs are defined using typeset -T and no other arguments. It will list all variables in lexicographic order (which means all uppercase names, likely scalars, followed by all lowercase names, likely arrays).
| MacOS path is a space delimited version of PATH and they are linked in zsh | In most shells, $VAR and $var and $Var are three different variables because the shell (all that >> I << am aware of) is case sensitive.
In zsh on MacOS 13.01:
[Start a fresh copy of zsh]
% s=tst
% S="something else"
% echo "\$s=$s"
$s=tst
% echo "\$S=$S"
$S=something else
% [[ "$s" == "$S" ]] || echo "Not equal"
Not equal
However, in zsh on MacOS, examine $PATH and $path:
% echo $PATH
/Users/dawg/perl5/bin:/usr/local/opt/ruby/bin:/usr/local/opt/[email protected]/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/Apple/usr/bin
% echo $path
/Users/dawg/perl5/bin /usr/local/opt/ruby/bin /usr/local/opt/[email protected]/bin /usr/local/bin /usr/bin /bin /usr/sbin /sbin /Library/Apple/usr/bin
It appears that $path is a space delimited version of $PATH.
Now change PATH in the typical way:
% export PATH=/some/new/folder:$PATH
% echo $PATH
/some/new/folder:/Users/dawg/perl5/bin:/usr/local/opt/ruby/bin:/usr/local/opt/[email protected]/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/Apple/usr/bin
That is what I expect, but it also changes $path:
% echo $path
/some/new/folder /Users/dawg/perl5/bin /usr/local/opt/ruby/bin /usr/local/opt/[email protected]/bin /usr/local/bin /usr/bin /bin /usr/sbin /sbin /Library/Apple/usr/bin
WORSE, changing $path also changes $PATH!
This is not the same in a brew install of Bash which has no secret $path waiting to bite.
It bit me (with an hour of head scratching) with a loop like this on zsh on MacOS:
for path in **/*.txt; do
# do some things with sys utilities with $path...
# sys utilities like awk in the PATH were not found
done
Questions:
Is the $PATH / $path link documented somewhere?
What is the purpose?
How do I turn this off??
Why would Apple do this???
| [
"path is an array that is \"tied\" to PATH. This is a general feature provided by zsh via the typeset -T command.\n$ typeset -T FOO foo '+'\n$ FOO=a+b+c\n$ print -l $FOO\na+b+c\n$ print -l $foo\na\nb\nc\n\nHere, FOO is the scalar and foo is the array. (Pairs of names that are identical except for case is a convention, not a requirement).\n+ is the separator between elements in the scalar that determines the values of the corresponding array. Modifying one variable affects the other.\nThe purpose is to make it easier to modify PATH, by adding or removing directories to the array and letting the shell handle updating the scalar with necessary separators.\npath and PATH behave as if defined with\ntypeset -T PATH path :\n\nor\ntypeset -T PATH path\n\n(: is the default separator, as the feature is intended to provide array equivalent of variables like PATH, MANPATH, etc.)\nThere is no way to \"untie\" such scalar/array pairs; I would just accept that path is effectively reserved by zsh.\n\nBoth the scalar and the array may be manipulated as normal. If one is unset, the other will automatically be\nunset too. There is no way of untying the variables\nwithout unsetting them, nor of converting the type of one\nof them with another typeset command; +T does not work,\nassigning an array to scalar is an error, and assigning a\nscalar to array sets it to be a single-element array.\n\nYou can see what other pairs are defined using typeset -T and no other arguments. It will list all variables in lexicographic order (which means all uppercase names, likely scalars, followed by all lowercase names, likely arrays).\n"
] | [
2
] | [] | [] | [
"bash",
"macos",
"path",
"zsh"
] | stackoverflow_0074677914_bash_macos_path_zsh.txt |
Q:
Can't find "live editing of literals" when working with Jetpack Compose
I am trying to find the 'live editing of literals' menu, which should be here -
The option is enabled via the Android Studio settings, I have the latest version installed, and I have no clue why I can't see it as shown in the official documentation -
https://developer.android.com/jetpack/compose/tooling#live-edit-literals
Any clues?
A:
Live edit of literals
Android Studio can update in real time some constant literals used in composables within previews, emulator, and physical device. Here are some supported types:
Int
String
Color
Dp
Boolean
Check the settings to confirm you have Live edit of literals enabled. If you do, you just need to press the finger-press icon right next to the preview run button icon on your preview screen. It only changes the above listed types.
A:
It means that you have added a new component and it is not yet detected to change live (maybe you copied and pasted it).
| Can't find "live editing of literals" when working with Jetpack Compose | I am trying to find the 'live editing of literals' menu, which should be here -
The option is enabled via the Android Studio settings, I have the latest version installed, and I have no clue why I can't see it as shown in the official documentation -
https://developer.android.com/jetpack/compose/tooling#live-edit-literals
Any clues?
| [
"Live edit of literals\nAndroid Studio can update in real time some constant literals used in composables within previews, emulator, and physical device. Here are some supported types:\nInt\nString\nColor\nDp\nBoolean\ncheck the setting do u have Live edit of literals enabled. if you do, you just need to press finger press icon right next to preview run button Icon in your preview screen. It only change above listed types.\n",
"It means that you have added a new component and it is not yet detected to change live (maybe you copied and pasted it).\n"
] | [
0,
0
] | [] | [] | [
"android",
"android_jetpack_compose",
"android_studio",
"rendering"
] | stackoverflow_0071287325_android_android_jetpack_compose_android_studio_rendering.txt |
Q:
Dart : parse date timezone gives UnimplementedError
I need to parse a date in the following format in my Flutter application (come from JSON) :
2019-05-17T15:03:22.472+0000
According to the documentation, I have to use Z to get the time zone (last 5 characters in RFC 822 format), so I use the following :
new DateFormat("y-M-d'T'H:m:s.SZ").parseStrict(json['startDate']);
But it fails with error :
FormatException: Characters remaining after date parsing in
2019-05-17T15:03:22.472+0000
Here is another test :
/// THIS WORKS
try {
print(new DateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS").format(DateTime.now()));
} catch (e) {
print(e.toString());
}
/// THIS RETURNS `UnimplementedError` (as soon as I use a 'Z' or 'z') somewhere
try {
print(new DateFormat("yyyy-MM-dd'T'HH:mm:ss.SSSZ").format(DateTime.now()));
} catch (e) {
print(e.toString());
}
Is Z implemented ?
A:
Sadly z and v patterns are not implemented.
Those won't be implemented until Dart DateTime's have time zone information
More info on this issue https://github.com/dart-lang/intl/issues/19
A:
From the DateFormat class docs:
DateFormat is for formatting and parsing dates in a locale-sensitive manner.
You're not asking to parse a locale-sensitive date string but rather an ISO 8601 date string, so thus you should not use the DateFormat class.
Instead, use the DateTime.parse method, which supports the format you described, per its docs:
Examples of accepted strings:
...
"2002-02-27T14:00:00-0500": Same as "2002-02-27T19:00:00Z"
A:
You can use this format
print(new DateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'").format(DateTime.now()));
It worked for me.
| Dart : parse date timezone gives UnimplementedError | I need to parse a date in the following format in my Flutter application (it comes from JSON):
2019-05-17T15:03:22.472+0000
According to the documentation, I have to use Z to get the time zone (last 5 characters in RFC 822 format), so I use the following :
new DateFormat("y-M-d'T'H:m:s.SZ").parseStrict(json['startDate']);
But it fails with error :
FormatException: Characters remaining after date parsing in
2019-05-17T15:03:22.472+0000
Here is another test :
/// THIS WORKS
try {
print(new DateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS").format(DateTime.now()));
} catch (e) {
print(e.toString());
}
/// THIS RETURNS `UnimplementedError` (as soon as I use a 'Z' or 'z') somewhere
try {
print(new DateFormat("yyyy-MM-dd'T'HH:mm:ss.SSSZ").format(DateTime.now()));
} catch (e) {
print(e.toString());
}
Is Z implemented ?
| [
"Sadly z and v patterns are not implemented. \nThose won't be implemented until Dart DateTime's have time zone information\nMore info on this issue https://github.com/dart-lang/intl/issues/19\n",
"From the DateFormat class docs:\n\nDateFormat is for formatting and parsing dates in a locale-sensitive manner.\n\nYou're not asking to parse a locale-sensitive date string but rather an ISO 8601 date string, so thus you should not use the DateFormat class.\nInstead, use the DateTime.parse method, which supports the format you described, per its docs:\n\nExamples of accepted strings:\n\n...\n\"2002-02-27T14:00:00-0500\": Same as \"2002-02-27T19:00:00Z\"\n\n\n",
"You can use this format\nprint(new DateFormat(\"yyyy-MM-dd'T'HH:mm:ss'Z'\").format(DateTime.now()));\n\nIt worked for me.\n"
] | [
11,
1,
0
] | [] | [] | [
"dart",
"date",
"date_formatting",
"flutter",
"timezone"
] | stackoverflow_0056189407_dart_date_date_formatting_flutter_timezone.txt |
Q:
Firestore call doesn't execute completely after user terminates the app
I need a firestore function to execute when the user terminates the app. Specifically, I need to delete a document in firestore. I used the applicationWillTerminate in the AppDelegate for that.
This is the code:
func applicationWillTerminate(_ application: UIApplication) {
print("App terminated")
guard let email = UserDefaults.standard.value(forKey: "email") as? String else {
return
}
let safeEmail = DatabaseManager.safeEmail(emailAddress: email)
Firestore.firestore().collection("LocationsList").document(safeEmail).delete() { err in
if let err = err {
print("Error removing document: \(err)")
}
else {
print("Document removed")
}
}
}
Terminal successfully prints "App terminated" meaning that the function is called when the user terminates the app. However, the Firestore call doesn't completely execute, which I would believe is due to the short time limit the applicationWillTerminate has. Is there any way I can get the Firestore document to delete/the call to execute completely?
A:
I have solved the problem by implementing the code in the sceneDidDisconnect in the SceneDelegate instead of the applicationWillTerminate. This works well for what I was trying to achieve and the Firestore document is deleted when the user kills/terminates the app. This is the code snippet (in the SceneDelegate):
func sceneDidDisconnect(_ scene: UIScene) {
//print("inactive")
guard let email = UserDefaults.standard.value(forKey: "email") as? String else {
return
}
let safeEmail = DatabaseManager.safeEmail(emailAddress: email)
Firestore.firestore().collection("LocationsList").document(safeEmail).delete() { err in
if let err = err {
print("Error removing document: \(err)")
}
else {
print("Document removed")
}
}
// Called as the scene is being released by the system.
// This occurs shortly after the scene enters the background, or when its session is discarded.
// Release any resources associated with this scene that can be re-created the next time the scene connects.
// The scene may re-connect later, as its session was not necessarily discarded (see `application:didDiscardSceneSessions` instead).
}
| Firestore call doesn't execute completely after user terminates the app | I need a firestore function to execute when the user terminates the app. Specifically, I need to delete a document in firestore. I used the applicationWillTerminate in the AppDelegate for that.
This is the code:
func applicationWillTerminate(_ application: UIApplication) {
print("App terminated")
guard let email = UserDefaults.standard.value(forKey: "email") as? String else {
return
}
let safeEmail = DatabaseManager.safeEmail(emailAddress: email)
Firestore.firestore().collection("LocationsList").document(safeEmail).delete() { err in
if let err = err {
print("Error removing document: \(err)")
}
else {
print("Document removed")
}
}
}
Terminal successfully prints "App terminated" meaning that the function is called when the user terminates the app. However, the Firestore call doesn't completely execute, which I would believe is due to the short time limit the applicationWillTerminate has. Is there any way I can get the Firestore document to delete/the call to execute completely?
| [
"I have solved the problem by implementing the code in the sceneDidDisconncet in the SceneDelegate instead of the applicationWillTerminate. This works well for what I was trying to achieve and the Firestore document is deleted when the user kills/terminates the app. This is the code snippet (in the SceneDelegate):\nfunc sceneDidDisconnect(_ scene: UIScene) {\n //print(\"inactive\")\n guard let email = UserDefaults.standard.value(forKey: \"email\") as? String else {\n return\n }\n let safeEmail = DatabaseManager.safeEmail(emailAddress: email)\n Firestore.firestore().collection(\"LocationsList\").document(safeEmail).delete() { err in\n if let err = err {\n print(\"Error removing document: \\(err)\")\n }\n else {\n print(\"Document removed\")\n }\n }\n // Called as the scene is being released by the system.\n // This occurs shortly after the scene enters the background, or when its session is discarded.\n // Release any resources associated with this scene that can be re-created the next time the scene connects.\n // The scene may re-connect later, as its session was not necessarily discarded (see `application:didDiscardSceneSessions` instead).\n}\n\n"
] | [
0
] | [] | [] | [
"firebase",
"google_cloud_firestore",
"ios",
"swift"
] | stackoverflow_0074668616_firebase_google_cloud_firestore_ios_swift.txt |
Q:
Matrix Multiplication Using double pointers passing into functions in C
I am wondering why I cannot get the value in the function; it always causes a segmentation fault...
`
void multiply(int M, int N, int K, int **matrixA, int **matrixB, int **matrixC){
for (int i = 0; i < M; i++){
for (int j = 0; j < K; j++){
int sum = 0;
for (int k = 0; k < N; k++){
sum += (*(*(matrixA + j) + k)) * (*(*(matrixB + k) + j));
}
*(*(matrixC + i) + j) = sum;
}
}
}
int main(){
int M, N, K;
scanf("%d%d%d", &M, &N, &K);
int matrixA[M][N];
int matrixB[N][K];
int matrixC[M][K];
for(int i=0; i<M; i++){
for(int j=0; j<N; j++){
scanf("%d", matrixA[i]+j);
}
}
for(int i=0; i<N; i++){
for(int j=0; j<K; j++){
scanf("%d", matrixB[i]+j);
}
}
multiply(M, N, K, (int **)matrixA, (int **)matrixB, (int **)matrixC);
for(int i=0; i<M; i++){
for(int j=0; j<K; j++){
printf("%d ", matrixC[i][j]);
}
printf("\n");
}
return 0;
}
`
I want to print out the result "matrixC", but in the function, it would cause segmentation fault. I have tried several times, and it seems like it would miss the addresses of the pointer under the double pointers.
A:
Change the prototype of the function multiply to this:
void multiply(int M, int N, int K, int matrixA[M][N], int matrixB[N][K], int matrixC[M][K]);
make your life easier like this (body of function multiply):
for (int i = 0; i < M; i++) { //for each row of matrixA
for (int j = 0; j < K; j++) { //for each column of matrixB
matrixC[i][j] = 0; //set field to zero
for (int k = 0; k < N; k++) { //for each col of A and each row of B
//take the dot product of row i (matrixA) and col j (matrixB)
matrixC[i][j] += matrixA[i][k] * matrixB[k][j];
}
}
}
You have an error in this line
sum += (*(*(matrixA + j) + k)) * (*(*(matrixB + k) + j));
which has been corrected to
matrixA[i][k] //index 'i' not 'j'
The var sum is not needed, therefore opted out.
| Matrix Multiplication Using double pointers passing into functions in C | I am wondering why I cannot get the value in the function; it always causes a segmentation fault...
`
void multiply(int M, int N, int K, int **matrixA, int **matrixB, int **matrixC){
for (int i = 0; i < M; i++){
for (int j = 0; j < K; j++){
int sum = 0;
for (int k = 0; k < N; k++){
sum += (*(*(matrixA + j) + k)) * (*(*(matrixB + k) + j));
}
*(*(matrixC + i) + j) = sum;
}
}
}
int main(){
int M, N, K;
scanf("%d%d%d", &M, &N, &K);
int matrixA[M][N];
int matrixB[N][K];
int matrixC[M][K];
for(int i=0; i<M; i++){
for(int j=0; j<N; j++){
scanf("%d", matrixA[i]+j);
}
}
for(int i=0; i<N; i++){
for(int j=0; j<K; j++){
scanf("%d", matrixB[i]+j);
}
}
multiply(M, N, K, (int **)matrixA, (int **)matrixB, (int **)matrixC);
for(int i=0; i<M; i++){
for(int j=0; j<K; j++){
printf("%d ", matrixC[i][j]);
}
printf("\n");
}
return 0;
}
`
I want to print out the result "matrixC", but in the function, it would cause segmentation fault. I have tried several times, and it seems like it would miss the addresses of the pointer under the double pointers.
| [
"Change the prototype of the function multiply to this:\nvoid multiply(int M, int N, int K, int matrixA[M][N], int matrixB[N][K], int matrixC[M][K]);\n\nmake your life easier like this (body of function multiply):\nfor (int i = 0; i < M; i++) { //for each row of matrixA\n for (int j = 0; j < K; j++) { //for each column of matrixB\n matrixC[i][j] = 0; //set field to zero\n for (int k = 0; k < N; k++) { //for each col of A and each row of B\n //take the dot product of row i (matrixA) and col j (matrixB)\n matrixC[i][j] += matrixA[i][k] * matrixB[k][j];\n }\n }\n}\n\nYou have an error in this line\nsum += (*(*(matrixA + j) + k)) * (*(*(matrixB + k) + j));\n\nwhich has been corrected to\nmatrixA[i][k] //index 'i' not 'j'\n\nThe var sum is not needed, therefore opted out.\n"
] | [
0
] | [] | [] | [
"c",
"matrix",
"matrix_multiplication"
] | stackoverflow_0074677652_c_matrix_matrix_multiplication.txt |
Q:
Can I add two different splash screen animations on one splash screen in Flutter?
I want my logo to scale up but the name to slide in from the right on the same splash screen; both are images. How can I do that?
class SplashScreen extends StatelessWidget {
const SplashScreen({super.key});
@override
Widget build(BuildContext context) {
return AnimatedSplashScreen(
splash: Column(
mainAxisAlignment: MainAxisAlignment.center,
mainAxisSize: MainAxisSize.max,
children: [
Image.asset(
'assets/Icon.jpeg',
height: 250,
width: 250,
),
]),
backgroundColor: Colors.white,
nextScreen: const MyHomePage(title: 'Flutter Demo Home Page'),
splashTransition: SplashTransition.scaleTransition,
splashIconSize: 450,
duration: 100,
);
}
}
I can not find how to achieve this.
A:
To achieve the effect you described, you can use a combination of animation and layout widgets in Flutter.
To animate the scaling of the logo, you can use an AnimatedContainer widget and animate its height and width properties. You can also use a CurvedAnimation to specify a non-linear animation curve, which can make the animation feel more natural.
To slide in the name from the right side of the screen, you can use a Positioned widget, which allows you to position a child widget relative to the top, bottom, left, and right edges of its parent container. By setting the right property of the Positioned widget to a negative value, you can make the child widget appear to slide in from the right side of the screen.
Here is an example of how you might implement this in your code:
class SplashScreen extends StatelessWidget {
const SplashScreen({super.key});
@override
Widget build(BuildContext context) {
return AnimatedSplashScreen(
splash: Column(
mainAxisAlignment: MainAxisAlignment.center,
mainAxisSize: MainAxisSize.max,
children: [
// Use an AnimatedContainer to animate the scaling of the logo
AnimatedContainer(
duration: const Duration(milliseconds: 1000),
curve: Curves.easeInOut,
height: 250,
width: 250,
child: Image.asset('assets/Icon.jpeg'),
),
// Use a Positioned widget to slide in the name from the right side of the screen
Positioned(
right: -300,
child: Image.asset('assets/Name.jpeg'),
),
]),
backgroundColor: Colors.white,
nextScreen: const MyHomePage(title: 'Flutter Demo Home Page'),
splashTransition: SplashTransition.scaleTransition,
splashIconSize: 450,
duration: 100,
);
}
}
You can adjust the duration and curve properties of the AnimatedContainer and Positioned widgets to control the timing and easing of the animations. You can also adjust the initial right position of the Positioned widget to control how far the name will slide in from the right side of the screen.
| Can I add two different splash screen animations on one splash screen in Flutter? | I want my logo to scale up but the name to slide in from the right on the same splash screen; both are images. How can I do that?
class SplashScreen extends StatelessWidget {
const SplashScreen({super.key});
@override
Widget build(BuildContext context) {
return AnimatedSplashScreen(
splash: Column(
mainAxisAlignment: MainAxisAlignment.center,
mainAxisSize: MainAxisSize.max,
children: [
Image.asset(
'assets/Icon.jpeg',
height: 250,
width: 250,
),
]),
backgroundColor: Colors.white,
nextScreen: const MyHomePage(title: 'Flutter Demo Home Page'),
splashTransition: SplashTransition.scaleTransition,
splashIconSize: 450,
duration: 100,
);
}
}
I can not find how to achieve this.
| [
"To achieve the effect you described, you can use a combination of animation and layout widgets in Flutter.\nTo animate the scaling of the logo, you can use an AnimatedContainer widget and animate its height and width properties. You can also use a CurvedAnimation to specify a non-linear animation curve, which can make the animation feel more natural.\nTo slide in the name from the right side of the screen, you can use a Positioned widget, which allows you to position a child widget relative to the top, bottom, left, and right edges of its parent container. By setting the right property of the Positioned widget to a negative value, you can make the child widget appear to slide in from the right side of the screen.\nHere is an example of how you might implement this in your code:\nclass SplashScreen extends StatelessWidget {\n const SplashScreen({super.key});\n @override\n Widget build(BuildContext context) {\n return AnimatedSplashScreen(\n splash: Column(\n mainAxisAlignment: MainAxisAlignment.center,\n mainAxisSize: MainAxisSize.max,\n children: [\n // Use an AnimatedContainer to animate the scaling of the logo\n AnimatedContainer(\n duration: const Duration(milliseconds: 1000),\n curve: Curves.easeInOut,\n height: 250,\n width: 250,\n child: Image.asset('assets/Icon.jpeg'),\n ),\n // Use a Positioned widget to slide in the name from the right side of the screen\n Positioned(\n right: -300,\n child: Image.asset('assets/Name.jpeg'),\n ),\n ]),\n backgroundColor: Colors.white,\n nextScreen: const MyHomePage(title: 'Flutter Demo Home Page'),\n splashTransition: SplashTransition.scaleTransition,\n splashIconSize: 450,\n duration: 100,\n );\n }\n}\n\nYou can adjust the duration and curve properties of the AnimatedContainer and Positioned widgets to control the timing and easing of the animations. You can also adjust the initial right position of the Positioned widget to control how far the name will slide in from the right side of the screen.\n"
] | [
0
] | [] | [] | [
"flutter"
] | stackoverflow_0074678122_flutter.txt |
Q:
why use etcd?Can I use redis to implement configuration management/service discovery etc.?
I have been learning etcd for a few hours, but a question suddenly came to me. I found that redis is fully capable of covering the functions that etcd offers, like key/value CRUD and watch, and redis is very simple to use. Why do people choose etcd instead of redis?
why?
I googled a few posts, but no post told me the reason.
Thanks!
A:
Redis stores data in memory, which makes it very high performance but not very durable. If the redis server dies, it's easy to lose data. Etcd stores data in files on disc, and performs fsync across multiple nodes before resolving to guarantee consistency, which makes it very durable but not very performant.
That's a good trade-off for kubernetes, which is using etcd for cluster state and configuration, not user data. It would not be a good trade-off for something like user session data which you might be using redis for in your app because you need extremely fast response times and can tolerate a bit of data loss or inconsistency.
A:
A major difference which is affecting my choice of one vs the other is:
etcd keeps the data index in RAM and the data store on disk
redis keeps both data index and data store in RAM
Theoretically, this means etcd ought to be a good fit for large data / small memory scenarios, where redis would require large RAM.
In practice, etcd's current behaviour is that it allocates some memory per transaction when data is accessed. Under heavy load, the memory footprint of the etcd server balloons unboundedly (appears limited by the rate of read requests), and the Go runtime eventually OOM's, killing the server.
In contrast, the redis design requires a virtual address space sized in relation to the dataset, or to the partition of the dataset stored locally.
Memory footprint examples
Eg, with redis, a 8GB dataset partition with an index size of 0.5GB requires 8.5GB of virtual address space (ie, could be handled with 1GB of RAM and 7.5GB of swap), but not less, and the requirement has an upper bound.
The same 8GB dataset, with etcd, would require only 0.5GB of virtual address space, but not less (ie, could be handled with 500MB of RAM and no swap), in theory. In practice, under high load, etcd's memory use is unbounded.
Other considerations
There are other considerations like data consistency, or supported languages, that have to be evaluated separately.
In my case, the language the server is written in is a factor, as I have in-house C expertise, but no Go expertise. This means I can maintain/diagnose/customize redis (written in C) in-house if needed, but cannot do the same with etcd (written in Go); I'd have to use it as released by the maintainers.
My conclusion
Unfortunately, the memory behaviour of etcd, whereby it needs to allocate memory to access the indexed data, negates the memory advantages it might have theoretically, and the risk of crash by OOM due to high load makes it unsuitable in applications that might experience unexpected usage spikes. Github bug 14362, Github bug 14352, other OOM reports
Furthermore, the ability to customize the server in-house (ie, available C vs Go expertise) is a business consideration that weighs in redis's favour, in my case.
| why use etcd?Can I use redis to implement configuration management/service discovery etc.? | I have been learning etcd for a few hours, but a question suddenly came to me. I found that redis is fully capable of covering the functions that etcd offers, like key/value CRUD and watch, and redis is very simple to use. Why do people choose etcd instead of redis?
why?
I googled a few posts, but no post told me the reason.
Thanks!
| [
"Redis stores data in memory, which makes it very high performance but not very durable. If the redis server dies, it's easy to lose data. Etcd stores data in files on disc, and performs fsync across multiple nodes before resolving to guarantee consistency, which makes it very durable but not very performant.\nThat's a good trade-off for kubernetes, which is using etcd for cluster state and configuration, not user data. It would not be a good trade-off for something like user session data which you might be using redis for in your app because you need extremely fast response times and can tolerate a bit of data loss or inconsistency.\n",
"A major difference which is affecting my choice of one vs the other is:\n\netcd keeps the data index in RAM and the data store on disk\nredis keeps both data index and data store in RAM\n\nTheoretically, this means etcd ought to be a good fit for large data / small memory scenarios, where redis would require large RAM.\nIn practice, etcd's current behaviour is that it allocates some memory per transaction when data is accessed. Under heavy load, the memory footprint of the etcd server balloons unboundedly (appears limited by the rate of read requests), and the Go runtime eventually OOM's, killing the server.\nIn contrast, the redis design requires a virtual address space sized in relation to the dataset, or to the partition of the dataset stored locally.\nMemory footprint examples\nEg, with redis, a 8GB dataset partition with an index size of 0.5GB requires 8.5GB of virtual address space (ie, could be handled with 1GB of RAM and 7.5GB of swap), but not less, and the requirement has an upper bound.\nThe same 8GB dataset, with etcd, would require only 0.5GB of virtual address space, but not less (ie, could be handled with 500MB of RAM and no swap), in theory. In practice, under high load, etcd's memory use is unbounded.\nOther considerations\nThere are other considerations like data consistency, or supported languages, that have to be evaluated separately.\nIn my case, the language the server is written in is a factor, as I have in-house C expertise, but no Go expertise. This means I can maintain/diagnose/customize redis (written in C) in-house if needed, but cannot do the same with etcd (written in Go); I'd have to use it as released by the maintainers.\nMy conclusion\nUnfortunately, the memory behaviour of etcd, whereby it needs to allocate memory to access the indexed data, negates the memory advantages it might have theoretically, and the risk of crash by OOM due to high load makes it unsuitable in applications that might experience unexpected usage spikes. Github bug 14362, Github bug 14352, other OOM reports\nFurthermore, the ability to customize the server in-house (ie, available C vs Go expertise) is a business consideration that weighs in redis's favour, in my case.\n"
] | [
4,
0
] | [] | [] | [
"etcd",
"redis"
] | stackoverflow_0051624598_etcd_redis.txt |
Q:
How to make table columns with the same width in GitHub markdown?
I'm trying to make a table with two identical screenshots, but the right column is wider than the left one. Moreover, it depends on the text in the row below, but I don't clearly understand how it depends exactly.
Two simple examples:
 | 
:---:|:---:
Usage on GNOME 3 | Drag-and-drop on GNOME 3
 | 
:---:|:---:
Usage on GNOME 3 | Drag and drop on GNOME 3
The same text length, the same words... What's wrong with hyphens and how can I fix it?
A:
If the only difference is truly the hyphen, try a different type of hyphen (e.g. an 'actual' hyphen, instead of the minus sign: http://unicode-table.com/en/2010/).
I should say I cannot reproduce this exactly.
The images in my example (left vs. right) are about a pixel or so different, not as much as yours:
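As a side note, the two hyphen characters mentioned above really are distinct code points, which a proportional font may render at different widths. A quick Python check (illustrative only, not part of the original answer):

```python
# Show that the keyboard "-" (U+002D HYPHEN-MINUS) and the typographic
# hyphen (U+2010 HYPHEN) are genuinely different characters.
import unicodedata

keyboard_hyphen = "-"          # what typing "-" in the table produces
typographic_hyphen = "\u2010"  # the 'actual' hyphen from unicode-table.com

for ch in (keyboard_hyphen, typographic_hyphen):
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")

print(keyboard_hyphen == typographic_hyphen)  # False
```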
A:
I had this same issue and was only able to solve it with html. You can add html code directly into the markdown file:
<table width="100%">
<thead>
<tr>
<th width="50%">First header</th>
<th width="50%">Second header long</th>
</tr>
</thead>
<tbody>
<tr>
<td width="50%"><img src="https://docs.github.com/assets/cb-194149/images/help/images/view.png"/></td>
<td width="50%"><img src="https://docs.github.com/assets/cb-194149/images/help/images/view.png"/></td>
</tr>
</tbody>
</table>
| How to make table columns with the same width in GitHub markdown? | I'm trying to make a table with two identical screenshots, but the right column is wider than the left one. Moreover, it depends on the text in the row below, but I don't clearly understand how it depends exactly.
Two simple examples:
 | 
:---:|:---:
Usage on GNOME 3 | Drag-and-drop on GNOME 3
 | 
:---:|:---:
Usage on GNOME 3 | Drag and drop on GNOME 3
The same text length, the same words... What's wrong with hyphens and how can I fix it?
| [
"If the only difference is truly the hyphen, try a different type of hyphen (e.g. an 'actual' hyphen, instead of the minus sign: http://unicode-table.com/en/2010/). \nI should say I cannot reproduce this exactly.\nThe images in my example (left vs. right) are about a pixel or so different, not as much as yours:\n\n",
"I had this same issue and was only able to solve it with html. You can add html code directly into the markdown file:\n<table width=\"100%\">\n <thead>\n <tr>\n <th width=\"50%\">First header</th>\n <th width=\"50%\">Second header long</th>\n </tr>\n </thead>\n <tbody>\n <tr>\n <td width=\"50%\"><img src=\"https://docs.github.com/assets/cb-194149/images/help/images/view.png\"/></td>\n <td width=\"50%\"><img src=\"https://docs.github.com/assets/cb-194149/images/help/images/view.png\"/></td>\n </tr>\n </tbody>\n</table>\n\n"
] | [
1,
0
] | [] | [] | [
"github",
"github_flavored_markdown"
] | stackoverflow_0038787198_github_github_flavored_markdown.txt |
Q:
Sorting table content after loading them
In my app the user can follow different groups. My code first checks which groups the user follows and puts them in an array. After that the code opens each group id and loads every post in it with newPost = importPosts. And after that the post gets inserted into the table. At the same time it also loads the picture of the user asynchronously. Every post has a timestamp.
But the posts don't get sorted by the timestamp. How can I do that? Right now the order is totally random.
I tried .queryOrdered(byChild: "userTime") but this does not work. It sorts them before the posts from different groups are merged. So after merging them they are again not sorted. The merging is the issue.
I also tried table.sort { $0.newPost.userTime < $1.newPost.userTime } but then it says Contextual closure type '(DataSnapshot, String?) -> Void' expects 2 arguments, but 1 was used in closure body.
func loadFollowedPoi() {
myFeed.myArray1 = []
let userID = Auth.auth().currentUser!.uid
let database = Database.database().reference()
database.child("user/\(userID)/abonniertePoi/").observeSingleEvent(of: .value, with: { snapshot in
for child in snapshot.children.allObjects as! [DataSnapshot] {
myFeed.myArray1.append(child.key)
}
self.postsLaden()
})
}
func postsLaden() {
dic = [:]
for groupId in myFeed.myArray1[0..<myFeed.myArray1.count] {
print(groupId)
let placeIdFromSearch = ViewController.placeidUebertragen
ref = Database.database().reference().child("placeID/\(groupId)")
ref.observe(.childAdded) { (snapshot) in
guard let dic = snapshot.value as? [String: Any] else { return }
let newPost = importPosts(dictionary: dic, key: snapshot.key)
guard let userUid = newPost.userID else { return }
self.fetchUser(uid: userUid, completed: {
self.table.insert(newPost, at: 0)
print(newPost.userTime)
self.tableView.reloadData()
})
}
}
}
func fetchUser(uid: String, completed: @escaping () -> Void) {
ref = Database.database().reference().child("user").child(uid)
ref.observe(.value) { (snapshot) in
guard let dic = snapshot.value as? [String: Any] else { return }
let newUser = UserModel(dictionary: dic)
self.users.insert(newUser, at: 0)
completed()
}
}
A:
Without a bit more context and view of the database structure, it's a bit difficult to provide code to solve your issues. That being said, I think there are a few things to consider.
My understanding is you are fetching posts from users included in each group individually. Is this correct? To fetch a sorted array from your database, you'd likely need to fetch all posts where post.groupId is in groupIds where groupIds is your array of desired groupId.
If you're going to stick with the fetch + append to existing array approach, the naive method is to sort the resulting array every time you append new posts. Assuming table is your array of posts:
self.fetchUser(uid: userUid, completed: {
self.table.insert(newPost, at: 0)
print(newPost.userTime)
self.table = self.table.sorted(by: { $0.userTime > $1.userTime })
self.tableView.reloadData()
})
| Sorting table content after loading them | In my app the user can follow different groups. My code first checks which groups the user follows and puts them in an array. After that the code opens each group id and loads every post in it with newPost = importPosts. And after that the post gets inserted into the table. At the same time it also loads the picture of the user asynchronously. Every post has a timestamp.
But the posts don't get sorted by the timestamp. How can I do that? Right now the order is totally random.
I tried .queryOrdered(byChild: "userTime") but this does not work. It sorts them before the posts from different groups are merged. So after merging them they are again not sorted. The merging is the issue.
I also tried table.sort { $0.newPost.userTime < $1.newPost.userTime } but then it says Contextual closure type '(DataSnapshot, String?) -> Void' expects 2 arguments, but 1 was used in closure body.
func loadFollowedPoi() {
myFeed.myArray1 = []
let userID = Auth.auth().currentUser!.uid
let database = Database.database().reference()
database.child("user/\(userID)/abonniertePoi/").observeSingleEvent(of: .value, with: { snapshot in
for child in snapshot.children.allObjects as! [DataSnapshot] {
myFeed.myArray1.append(child.key)
}
self.postsLaden()
})
}
func postsLaden() {
dic = [:]
for groupId in myFeed.myArray1[0..<myFeed.myArray1.count] {
print(groupId)
let placeIdFromSearch = ViewController.placeidUebertragen
ref = Database.database().reference().child("placeID/\(groupId)")
ref.observe(.childAdded) { (snapshot) in
guard let dic = snapshot.value as? [String: Any] else { return }
let newPost = importPosts(dictionary: dic, key: snapshot.key)
guard let userUid = newPost.userID else { return }
self.fetchUser(uid: userUid, completed: {
self.table.insert(newPost, at: 0)
print(newPost.userTime)
self.tableView.reloadData()
})
}
}
}
func fetchUser(uid: String, completed: @escaping () -> Void) {
ref = Database.database().reference().child("user").child(uid)
ref.observe(.value) { (snapshot) in
guard let dic = snapshot.value as? [String: Any] else { return }
let newUser = UserModel(dictionary: dic)
self.users.insert(newUser, at: 0)
completed()
}
}
| [
"Without a bit more context and view of the database structure, it's a bit difficult to provide code to solve your issues. That being said, I think there are a few things to consider.\nMy understanding is you are fetching posts from users included in each group individually. Is this correct? To fetch a sorted array from your database, you'd likely need to fetch all posts where post.groupId is in groupIds where groupIds is your array of desired groupId.\nIf you're going to stick with the fetch + append to existing array approach, the naive method is to sort the resulting array every time you append new posts. Assuming table is your array of posts:\nself.fetchUser(uid: userUid, completed: {\n self.table.insert(newPost, at: 0)\n print(newPost.userTime)\n self.table = self.table.sorted(by: { $0.userTime > $1.userTime }) \n self.tableView.reloadData()\n })\n\n"
] | [
0
] | [] | [] | [
"firebase_realtime_database",
"swift",
"uitableview"
] | stackoverflow_0074647487_firebase_realtime_database_swift_uitableview.txt |
Q:
getting errors Java script code convert in JSR223 sampler in jmeter
Hi, I am trying to convert JavaScript code into a JSR223 sampler in JMeter, but I am getting errors. Please help me convert the JavaScript code below into JMeter:
const allCapsAlpha = [..."ABCDEFGHIJKLMNOPQRSTUVWXYZ"];
const allLowerAlpha = [..."abcdefghijklmnopqrstuvwxyz"];
const allNumbers = [..."0123456789"];
const base = [...allCapsAlpha, ...allNumbers, ...allLowerAlpha];
const generator = (base, len) => {
return [...Array(len)]
.map(i => base[Math.random()*base.length|0])
.join('');
};
const code = generator(base, 43)
//const code = "ad1f71a2814cb0c7f1d2e5e652d68c8801101929a181a320bbb3751d"
var codeChallenge = encodeURI(CryptoJS.SHA256(code).toString(CryptoJS.enc.Base64));
codeChallenge = codeChallenge.replaceAll("+", "-")
codeChallenge = codeChallenge.replaceAll("/", "_")
codeChallenge = codeChallenge.substr(-1) == "=" ? codeChallenge.slice(0, -1) : codeChallenge
const characters ='ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz';
function generateString(length) {
let result = ' ';
const charactersLength = characters.length;
for ( let i = 0; i < length; i++ ) {
result += characters.charAt(Math.floor(Math.random() * charactersLength));
}
return result;
}
pm.collectionVariables.set("code", code);
pm.collectionVariables.set("codeChallenge", codeChallenge);
pm.collectionVariables.set("state",generateString(10));
A:
If you're looking for a Groovy (which is the recommended scripting option for JMeter especially for cryptographic operations) equivalent of getting the codeChallenge - it would be something like:
import java.nio.charset.StandardCharsets
import java.security.MessageDigest
def code = 'ad1f71a2814cb0c7f1d2e5e652d68c8801101929a181a320bbb3751d'
MessageDigest md = MessageDigest.getInstance('SHA-256');
md.update(code.getBytes(StandardCharsets.UTF_8));
byte[] bytes = md.digest()
def codeChallenge = bytes.encodeBase64().toString()
codeChallenge = codeChallenge.replaceAll("\\+", "-")
codeChallenge = codeChallenge.replaceAll("/", "_")
codeChallenge = codeChallenge.substring(codeChallenge.length() - 1) == "=" ? codeChallenge.substring(0, codeChallenge.length() - 1) : codeChallenge
More information on Groovy scripting in JMeter: Apache Groovy: What Is Groovy Used For?
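For comparison, the same transformation the Groovy snippet performs — SHA-256 the code, base64url-encode, drop the `=` padding — can be sketched with only the Python standard library (the helper name is mine, not part of JMeter):

```python
import base64
import hashlib

def code_challenge(code: str) -> str:
    """SHA-256 the verifier, base64url-encode, strip '=' padding --
    the same steps as the Groovy JSR223 snippet above."""
    digest = hashlib.sha256(code.encode("utf-8")).digest()
    # urlsafe_b64encode already maps '+' -> '-' and '/' -> '_'.
    return base64.urlsafe_b64encode(digest).decode("ascii").rstrip("=")

challenge = code_challenge(
    "ad1f71a2814cb0c7f1d2e5e652d68c8801101929a181a320bbb3751d")
```

A 32-byte digest always yields 43 base64url characters once the single `=` pad is removed, which matches the PKCE code-challenge format.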
| getting errors Java script code convert in JSR223 sampler in jmeter | Hi I am trying to convert java script code in JSR223 sampler in Jmeter, getting errors. Please help me to convert below java script code into JMeter
const allCapsAlpha = [..."ABCDEFGHIJKLMNOPQRSTUVWXYZ"];
const allLowerAlpha = [..."abcdefghijklmnopqrstuvwxyz"];
const allNumbers = [..."0123456789"];
const base = [...allCapsAlpha, ...allNumbers, ...allLowerAlpha];
const generator = (base, len) => {
return [...Array(len)]
.map(i => base[Math.random()*base.length|0])
.join('');
};
const code = generator(base, 43)
//const code = "ad1f71a2814cb0c7f1d2e5e652d68c8801101929a181a320bbb3751d"
var codeChallenge = encodeURI(CryptoJS.SHA256(code).toString(CryptoJS.enc.Base64));
codeChallenge = codeChallenge.replaceAll("+", "-")
codeChallenge = codeChallenge.replaceAll("/", "_")
codeChallenge = codeChallenge.substr(-1) == "=" ? codeChallenge.slice(0, -1) : codeChallenge
const characters ='ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz';
function generateString(length) {
let result = ' ';
const charactersLength = characters.length;
for ( let i = 0; i < length; i++ ) {
result += characters.charAt(Math.floor(Math.random() * charactersLength));
}
return result;
}
pm.collectionVariables.set("code", code);
pm.collectionVariables.set("codeChallenge", codeChallenge);
pm.collectionVariables.set("state",generateString(10));
| [
"If you're looking for a Groovy (which is the recommended scripting option for JMeter especially for cryptographic operations) equivalent of getting the codeChallenge - it would be something like:\nimport java.nio.charset.StandardCharsets\nimport java.security.MessageDigest\n\ndef code = 'ad1f71a2814cb0c7f1d2e5e652d68c8801101929a181a320bbb3751d'\n\nMessageDigest md = MessageDigest.getInstance('SHA-256');\nmd.update(code.getBytes(StandardCharsets.UTF_8));\n\nbyte[] bytes = md.digest()\n\ndef codeChallenge = bytes.encodeBase64().toString()\n\ncodeChallenge = codeChallenge.replaceAll(\"\\\\+\", \"-\")\ncodeChallenge = codeChallenge.replaceAll(\"/\", \"_\")\ncodeChallenge = codeChallenge.substring(codeChallenge.length() - 1) == \"=\" ? codeChallenge.substring(0, codeChallenge.length() - 1) : codeChallenge\n\nMore information on Groovy scripting in JMeter: Apache Groovy: What Is Groovy Used For?\n"
] | [
0
] | [] | [] | [
"jmeter"
] | stackoverflow_0074675627_jmeter.txt |
Q:
Run Docker image, cannot connect database
I am using Windows 11 x64, Java / JDK 19, Spring Boot 3, PostgreSQL 15.1 at my local PC. My Dockerfile
FROM amazoncorretto:19-alpine3.16-jdk
WORKDIR /app
ARG JAR_FILE=target/spring_jwt-1.0.0-SNAPSHOT.jar
COPY ${JAR_FILE} ./app.jar
EXPOSE 8081
ENTRYPOINT ["java","-jar","app.jar"]
My console log:
org.postgresql.util.PSQLException: Connection to 127.0.0.1:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:319) ~[postgresql-42.5.1.jar!/:42.5.1]
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49) ~[postgresql-42.5.1.jar!/:42.5.1]
at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:247) ~[postgresql-42.5.1.jar!/:42.5.1]
at org.postgresql.Driver.makeConnection(Driver.java:434) ~[postgresql-42.5.1.jar!/:42.5.1]
at org.postgresql.Driver.connect(Driver.java:291) ~[postgresql-42.5.1.jar!/:42.5.1]
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138) ~[HikariCP-5.0.1.jar!/:na]
at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:359) ~[HikariCP-5.0.1.jar!/:na]
at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:201) ~[HikariCP-5.0.1.jar!/:na]
at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:470) ~[HikariCP-5.0.1.jar!/:na] at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:561) ~[HikariCP-5.0.1.jar!/:na]
full: https://gist.github.com/donhuvy/0821da63081f4fd447111b7d1c6f2310
Docker run
docker run -p 8081:8081 latest:latest
How to run image with database connect?
A:
Localhost (127.0.0.1) within a docker container refers to the container itself.
If you want to access anything on your actual machine, you need the network stack of the container to be able to access it, see container-networking.
For your example, the easiest way would be to use the host option:
If you use the host network mode for a container, that container’s
network stack is not isolated from the Docker host (the container
shares the host’s networking namespace), and the container does not
get its own IP-address allocated. For instance, if you run a container
which binds to port 80 and you use host networking, the container’s
application is available on port 80 on the host’s IP address.
Also see: What does --net=host option in Docker command really do?
A:
Set
#spring.datasource.url=jdbc:postgresql://127.0.0.1:5432/foo_db
spring.datasource.url=jdbc:postgresql://host.docker.internal:5432/foo_db
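The second answer's change is just a host swap inside the JDBC URL; here is a hedged Python sketch of deriving the in-container URL from the local one (pure string handling, hypothetical helper name — in practice you would usually just externalize the URL via an environment variable):

```python
from urllib.parse import urlsplit

def docker_datasource_url(local_url: str,
                          docker_host: str = "host.docker.internal") -> str:
    """Rewrite the host part of a JDBC URL so code inside the container
    reaches the database on the Docker host instead of the container itself."""
    rest = local_url.split("jdbc:", 1)[1]   # e.g. "postgresql://127.0.0.1:5432/foo_db"
    parts = urlsplit(rest)
    new_netloc = docker_host if parts.port is None else f"{docker_host}:{parts.port}"
    return "jdbc:" + rest.replace(parts.netloc, new_netloc, 1)

url = docker_datasource_url("jdbc:postgresql://127.0.0.1:5432/foo_db")
```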
| Run Docker image, cannot connect database | I am using Windows 11 x64, Java / JDK 19, Spring Boot 3, PostgreSQL 15.1 at my local PC. My Dockerfile
FROM amazoncorretto:19-alpine3.16-jdk
WORKDIR /app
ARG JAR_FILE=target/spring_jwt-1.0.0-SNAPSHOT.jar
COPY ${JAR_FILE} ./app.jar
EXPOSE 8081
ENTRYPOINT ["java","-jar","app.jar"]
My console log:
org.postgresql.util.PSQLException: Connection to 127.0.0.1:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:319) ~[postgresql-42.5.1.jar!/:42.5.1]
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49) ~[postgresql-42.5.1.jar!/:42.5.1]
at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:247) ~[postgresql-42.5.1.jar!/:42.5.1]
at org.postgresql.Driver.makeConnection(Driver.java:434) ~[postgresql-42.5.1.jar!/:42.5.1]
at org.postgresql.Driver.connect(Driver.java:291) ~[postgresql-42.5.1.jar!/:42.5.1]
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138) ~[HikariCP-5.0.1.jar!/:na]
at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:359) ~[HikariCP-5.0.1.jar!/:na]
at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:201) ~[HikariCP-5.0.1.jar!/:na]
at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:470) ~[HikariCP-5.0.1.jar!/:na] at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:561) ~[HikariCP-5.0.1.jar!/:na]
full: https://gist.github.com/donhuvy/0821da63081f4fd447111b7d1c6f2310
Docker run
docker run -p 8081:8081 latest:latest
How to run image with database connect?
| [
"Localhost (127.0.0.1) within a docker container refers to the container itself.\nIf you want to access anything on your actual machine, you need the network stack of the container to be able to access it, see container-networking.\nFor your example, the easiest way would be to use the host option:\n\nIf you use the host network mode for a container, that container’s\nnetwork stack is not isolated from the Docker host (the container\nshares the host’s networking namespace), and the container does not\nget its own IP-address allocated. For instance, if you run a container\nwhich binds to port 80 and you use host networking, the container’s\napplication is available on port 80 on the host’s IP address.\n\nAlso see: What does --net=host option in Docker command really do?\n",
"Set\n#spring.datasource.url=jdbc:postgresql://127.0.0.1:5432/foo_db\nspring.datasource.url=jdbc:postgresql://host.docker.internal:5432/foo_db\n\n"
] | [
0,
0
] | [] | [] | [
"docker",
"java",
"postgresql",
"spring_boot"
] | stackoverflow_0074677911_docker_java_postgresql_spring_boot.txt |
Q:
Cannot run python program in Vs code
you just know it by seeing the picture
I don't know what to do...
I tried many things from google tried putting the same file path to launcher.json but nothing worked even tried reinstalling the whole visual studio code
As asked adding launch.json file code:
launch.json code
A:
Have you tried using the generic configuration for running a currently open file?
"configurations": [
{
"name": "Python: Current File",
"type": "python",
"request": "launch",
"program": "${file}",
"console": "integratedTerminal"
},
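When a launch configuration silently fails, one quick sanity check is that the file parses and each entry carries the keys the debugger needs. A hedged Python sketch of that check (the key list is illustrative for a "launch" request, not the full schema; note that VS Code's launch.json is JSONC and may contain comments, which plain `json.loads` rejects):

```python
import json

# Minimal key set for a Python "launch" entry; illustrative, not the full schema.
REQUIRED_KEYS = {"name", "type", "request", "program"}

def check_launch_config(text: str) -> list:
    """Parse a launch.json body and report configurations missing required keys."""
    data = json.loads(text)
    problems = []
    for cfg in data.get("configurations", []):
        missing = REQUIRED_KEYS - cfg.keys()
        if missing:
            problems.append((cfg.get("name", "<unnamed>"), sorted(missing)))
    return problems

sample = '''{"version": "0.2.0", "configurations": [
  {"name": "Python: Current File", "type": "python",
   "request": "launch", "program": "${file}", "console": "integratedTerminal"}]}'''
issues = check_launch_config(sample)
```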
| Cannot run python program in Vs code | you just know it by seeing the picture
I don't know what to do...
I tried many things from google tried putting the same file path to launcher.json but nothing worked even tried reinstalling the whole visual studio code
As asked adding launch.json file code:
launch.json code
| [
"Have you tried using the generic configuration for running a currently open file?\n \"configurations\": [\n {\n \"name\": \"Python: Current File\",\n \"type\": \"python\",\n \"request\": \"launch\",\n \"program\": \"${file}\",\n \"console\": \"integratedTerminal\"\n },\n\n"
] | [
0
] | [] | [] | [
"python",
"visual_studio_code"
] | stackoverflow_0074677714_python_visual_studio_code.txt |
Q:
lambda function to start aws workspaces - overcome client.start_workspaces limitation
i have a lambda function to start all the workspaces machines in my env
Lambda Function :
import boto3
client = boto3.client('workspaces')
def lambda_handler(event,context):
workspaces = client.describe_workspaces()['Workspaces']
for workspace in workspaces:
if workspace['WorkspaceProperties']['RunningMode'] == 'AUTO_STOP':
if workspace['State'] == 'STOPPED':
workspaces_id = (workspace['WorkspaceId'])
client.start_workspaces(
StartWorkspaceRequests=[
{
'WorkspaceId': workspaces_id
},
]
)
The client.start_workspaces call has a limit of 25 workspaces per request. Any idea how to overcome this? I'm trying to build a robust solution for more than 25 workspaces.
https://docs.aws.amazon.com/workspaces/latest/api/API_StartWorkspaces.html#API_StartWorkspaces_RequestSyntax
Thanks in advance to the helpers
A:
You can use the paginate method to automatically paginate through the list of workspaces and call start_workspaces for each page of results. This would look something like this:
import boto3
client = boto3.client('workspaces')
def lambda_handler(event, context):
workspaces_paginator = client.get_paginator('describe_workspaces')
# Loop through all pages of workspaces
for page in workspaces_paginator.paginate():
workspaces = page['Workspaces']
# Filter for workspaces that are in AUTO_STOP mode and are currently stopped
stopped_workspaces = [workspace for workspace in workspaces if workspace['WorkspaceProperties']['RunningMode'] == 'AUTO_STOP' and workspace['State'] == 'STOPPED']
# Call start_workspaces for the current page of workspaces
if stopped_workspaces:
client.start_workspaces(
StartWorkspaceRequests=[
{
'WorkspaceId': workspace['WorkspaceId']
} for workspace in stopped_workspaces
]
)
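A robust pattern that is independent of whatever page size the paginator returns is to chunk the workspace ID list into groups of at most 25 before each start_workspaces call. A hedged, boto3-free sketch of that batching (helper name and sample IDs are mine):

```python
def chunks(items, size=25):
    """Yield successive lists of at most `size` items --
    matching the 25-request cap of StartWorkspaces."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

workspace_ids = [f"ws-{i:04d}" for i in range(60)]
batches = list(chunks(workspace_ids))
```

Each batch would then be passed as `StartWorkspaceRequests=[{'WorkspaceId': wid} for wid in batch]`, one API call per batch.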
| lambda function to start aws workspaces - overcome client.start_workspaces limitation | i have a lambda function to start all the workspaces machines in my env
Lambda Function :
import boto3
client = boto3.client('workspaces')
def lambda_handler(event,context):
workspaces = client.describe_workspaces()['Workspaces']
for workspace in workspaces:
if workspace['WorkspaceProperties']['RunningMode'] == 'AUTO_STOP':
if workspace['State'] == 'STOPPED':
workspaces_id = (workspace['WorkspaceId'])
client.start_workspaces(
StartWorkspaceRequests=[
{
'WorkspaceId': workspaces_id
},
]
)
The client.start_workspaces call has a limit of 25 workspaces per request. Any idea how to overcome this? I'm trying to build a robust solution for more than 25 workspaces.
https://docs.aws.amazon.com/workspaces/latest/api/API_StartWorkspaces.html#API_StartWorkspaces_RequestSyntax
Thanks in advance to the helpers
| [
"You can use the paginate method to automatically paginate through the list of workspaces and call start_workspaces for each page of results. This would look something like this:\nimport boto3\nclient = boto3.client('workspaces')\n\ndef lambda_handler(event, context):\n workspaces_paginator = client.get_paginator('describe_workspaces')\n\n # Loop through all pages of workspaces\n for page in workspaces_paginator.paginate():\n workspaces = page['Workspaces']\n\n # Filter for workspaces that are in AUTO_STOP mode and are currently stopped\n stopped_workspaces = [workspace for workspace in workspaces if workspace['WorkspaceProperties']['RunningMode'] == 'AUTO_STOP' and workspace['State'] == 'STOPPED']\n\n # Call start_workspaces for the current page of workspaces\n if stopped_workspaces:\n client.start_workspaces(\n StartWorkspaceRequests=[\n {\n 'WorkspaceId': workspace['WorkspaceId']\n } for workspace in stopped_workspaces\n ]\n )\n\n"
] | [
0
] | [] | [] | [
"amazon_web_services",
"amazon_workspaces",
"aws_lambda"
] | stackoverflow_0074678009_amazon_web_services_amazon_workspaces_aws_lambda.txt |
Q:
read IP address and check if valid/between range
My current script:
#!/usr/local/bin/bash
echo -n "Enter VPS IP address:"
read userinput
lookupip="vps $userinput"
if [[ $userinput -lt 80.* || $userinput -gt 255.* ]] #checks input is in the range
then
echo "Input outside acceptable range."
else
#The grep removes all from VPS tool output except primary IP address
$lookupip | grep -E -o '(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)' | sed '1 ! d' | xargs ping -oc 1000 -Q
fi
The lowest IP address range is in the 80.X.X.X range, I've tried using:
8*
80*
80...*
But it always errors with:
line 10: [[: 80.X.X.X: syntax error in expression (error token is ".X.X.X")
What would be the best way to define a range of IP address less than (lt) and gt (greater than)?
A:
if you just want to check whether the IP address is valid or not, you can use the ipcalc command in bash to check this.
ipcalc -c $userinput
example
ipcalc -c 10.20.30.401
ipcalc: bad IPv4 address: 10.20.30.401
A:
Probably not the best solution, but as a quick fix for your script should do:
#!/usr/local/bin/bash
echo -n "Enter VPS IP address:"
read userinput
lookupip="vps $userinput"
first_octet=`echo "$userinput" | cut -d'.' -f1`
if [[ $first_octet -lt 80 || $first_octet -gt 255 ]]
then
echo "Input outside acceptable range."
else
#The grep removes all from VPS tool output except primary IP address
$lookupip | grep -E -o '(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)' | sed '1 ! d' | xargs ping -oc 1000 -Q
fi
EDITED: a better solution would be to take all three IP addresses (the one under inspection, the lowest, and the highest) as parameters, convert them to 32-bit numbers (that's what the inet_aton() function does), and check the ranges:
#!/usr/local/bin/bash
inet_aton ()
{
local IFS=. ipaddr ip32 i
ipaddr=($1)
for i in 3 2 1 0
do
(( ip32 += ipaddr[3-i] * (256 ** i) ))
done
    echo "$ip32"   # print the value (not 'return') so command substitution can capture it; 'return' truncates to 0-255
}
echo -n "Enter VPS IP address, min IP address, max IP address:"
read userinput
ip1=`echo "$userinput" | cut -d' ' -f1`
ip2=`echo "$userinput" | cut -d' ' -f2`
ip3=`echo "$userinput" | cut -d' ' -f3`
lookupip="vps $ip1"
ip=`inet_aton $ip1`
min=`inet_aton $ip2`
max=`inet_aton $ip3`
if [[ $ip -lt $min || $ip -gt $max ]]
then
echo "Input outside acceptable range."
else
#The grep removes all from VPS tool output except primary IP address
$lookupip | grep -E -o '(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)' | sed '1 ! d' | xargs ping -oc 1000 -Q
fi
The only difference would be that you have to enter 3 IP addresses, not one as before. Of course, the lowest and highest IP addresses could be hard-coded or taken from elsewhere, but I leave that, along with parameter validation and error checking, up to you.
A:
For Ubuntu:
sudo apt install grepcidr ipcalc
And then:
RANGE="192.168.1.210-192.168.1.250"
IP="192.168.1.250"
grepcidr "$(ipcalc -r "$RANGE" | tail -n+2)" <(echo "$IP") >/dev/null && echo "in" || echo "out"
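The inet_aton trick in the second answer — fold each octet into a 32-bit integer, then compare numerically — is easy to verify outside the shell. A hedged Python sketch of the same math (helper names mine):

```python
def ip_to_int(ip: str) -> int:
    """Same math as the shell inet_aton: octets weighted by 256**3 .. 256**0."""
    a, b, c, d = (int(part) for part in ip.split("."))
    for part in (a, b, c, d):
        assert 0 <= part <= 255, "bad IPv4 octet"
    return (a << 24) | (b << 16) | (c << 8) | d

def in_range(ip: str, low: str, high: str) -> bool:
    """True when ip falls inclusively between low and high."""
    return ip_to_int(low) <= ip_to_int(ip) <= ip_to_int(high)
```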
| read IP address and check if valid/between range | My current script:
#!/usr/local/bin/bash
echo -n "Enter VPS IP address:"
read userinput
lookupip="vps $userinput"
if [[ $userinput -lt 80.* || $userinput -gt 255.* ]] #checks input is in the range
then
echo "Input outside acceptable range."
else
#The grep removes all from VPS tool output except primary IP address
$lookupip | grep -E -o '(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)' | sed '1 ! d' | xargs ping -oc 1000 -Q
fi
The lowest IP address range is in the 80.X.X.X range, I've tried using:
8*
80*
80...*
But it always errors with:
line 10: [[: 80.X.X.X: syntax error in expression (error token is ".X.X.X")
What would be the best way to define a range of IP address less than (lt) and gt (greater than)?
| [
"if you just want to check if the IP address is valid or not, you can use ipcalc command in bash to check this.\nipcalc -c $userinput\nexample\nipcalc -c 10.20.30.401\nipcalc: bad IPv4 address: 10.20.30.401\n\n",
"Probably not the best solution, but as a quick fix for your script should do:\n#!/usr/local/bin/bash\necho -n \"Enter VPS IP address:\"\nread userinput\nlookupip=\"vps $userinput\"\nfirst_octet=`echo \"$userinput\" | cut -d'.' -f1`\n\nif [[ $first_octet -lt 80 || $first_octet -gt 255 ]]\n then\n echo \"Input outside acceptable range.\"\n else\n\n#The grep removes all from VPS tool output except primary IP address\n\n$lookupip | grep -E -o '(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)' | sed '1 ! d' | xargs ping -oc 1000 -Q\n\nfi\n\nEDITED: a better solution would be to take all three IP addresses (the one under inspection, lowest and highest) as parameters, convert them to 32bit number (that's what inet_aton() function does) and check ranges:\n#!/usr/local/bin/bash\n\ninet_aton ()\n{\n local IFS=. ipaddr ip32 i\n ipaddr=($1)\n for i in 3 2 1 0\n do\n (( ip32 += ipaddr[3-i] * (256 ** i) ))\n done\n\n return $ip32\n}\n\necho -n \"Enter VPS IP address, min IP address, max IP address:\"\nread userinput\n\nip1=`echo \"$userinput\" | cut -d' ' -f1`\nip2=`echo \"$userinput\" | cut -d' ' -f2`\nip3=`echo \"$userinput\" | cut -d' ' -f3`\n\nlookupip=\"vps $ip1\"\n\nip=`inet_aton $ip1`\nmin=`inet_aton $ip2`\nmax=`inet_aton $ip3`\n\nif [[ $ip -lt $min || $ip -gt $max ]]\n then\n echo \"Input outside acceptable range.\"\n else\n\n#The grep removes all from VPS tool output except primary IP address\n\n$lookupip | grep -E -o '(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)' | sed '1 ! d' | xargs ping -oc 1000 -Q\n\nfi\n\nThe only difference would be that you have to enter 3 IP addresses, not one as before. 
Of course, the lowest and highest IP addresses could be hard-coded or taken from elsewhere, but I leave that, along with parameter validation and error checking, up to you.\n",
"For Ubuntu:\nsudo apt install grepcidr ipcalc\n\nAnd then:\nRANGE=\"192.168.1.210-192.168.1.250\"\nIP=\"192.168.1.250\"\n\ngrepcidr \"$(ipcalc -r \"$RANGE\" | tail -n+2)\" <(echo \"$IP\") >/dev/null && echo \"in\" || echo \"out\"\n\n"
] | [
2,
1,
0
] | [] | [] | [
"bash",
"if_statement",
"ip_address",
"loops"
] | stackoverflow_0014751293_bash_if_statement_ip_address_loops.txt |
Q:
NULL pointer protection with ARM Cortex-M MPU
The MPU in ARM Cortex-M (M0+/M3/M4/M7/etc.) is often advertised as allowing to set up protection against dereferencing the NULL pointer. But how to do this in practice? (Some online discussions, like in the Zephyr Project, indicate that the issue is not quite trivial.)
I'm looking for the simplest possible MPU code running in "Privileged mode" on bare-metal ARM Cortex-M. Please note that "protection against dereferencing the NULL pointer" means to me protection both against reads and writes. Also, it is not just about the address 0x0, but small offsets from it as well. For example, accessing a struct member via a NULL pointer should also cause MPU exception:
struct foo {
. . .
uint8_t x;
};
. . .
uint8_t x = ((struct foo volatile *)NULL)->x; // should fail!
A:
After some experimentation, I've come up with the MPU setting that seems to work for most ARM Cortex-M MCUs. Here is the code (using the CMSIS):
/* Configure the MPU to prevent NULL-pointer dereferencing ... */
MPU->RBAR = 0x0U /* base address (NULL) */
| MPU_RBAR_VALID_Msk /* valid region */
| (MPU_RBAR_REGION_Msk & 7U); /* region #7 */
MPU->RASR = (7U << MPU_RASR_SIZE_Pos) /* 2^(7+1) region, see NOTE0 */
| (0x0U << MPU_RASR_AP_Pos) /* no-access region */
| MPU_RASR_ENABLE_Msk; /* region enable */
MPU->CTRL = MPU_CTRL_PRIVDEFENA_Msk /* enable background region */
| MPU_CTRL_ENABLE_Msk; /* enable the MPU */
__ISB();
__DSB();
This code sets up a no-access MPU region #7 around the address 0x0 (any other MPU region will do as well). This works even for the MCUs, where the Vector Table also resides at address 0x0. Apparently, the MPU does not check access to the region by instructions other than LDR/STR, such as reading the vector address during Cortex-M exception entry.
However, in case the Vector Table resides at 0, the no-access region must not cover any data that the CPU would legitimately read with the LDR instruction. This means that the size of the no-access region should be about the size of the Vector Table. In the code above, the size is set to 2^(7+1)==256 bytes, which should be fine even for relatively small vector tables.
The code above works also for MCUs that automatically relocate the Vector Table, such as STM32. For these MCUs, the size of the no-access region can be increased all the way to the relocated Vector Table, like 0x0800'0000 in the case of STM32. (You could set the size to 2^(26+1)==0x0800'0000).
Protection against NULL-pointer dereferencing is an important tool for improving the system's robustness and even for preventing malicious attacks. I hope that this answer will help fellow embedded developers.
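The only arithmetic in the region setup above is the SIZE encoding — region bytes = 2^(SIZE+1) — which is worth sanity-checking before writing RASR. A hedged Python sketch of inverting that encoding (the 32-byte minimum reflects the usual ARMv7-M MPU limit; confirm against your MCU's reference manual):

```python
def rasr_size_field(region_bytes: int) -> int:
    """Invert region_bytes = 2**(SIZE + 1); valid Cortex-M regions are
    powers of two, typically from 32 bytes (SIZE=4) up to 4 GB (SIZE=31)."""
    assert region_bytes & (region_bytes - 1) == 0, "region must be a power of two"
    size = region_bytes.bit_length() - 2   # since 2**(SIZE+1) == region_bytes
    assert 4 <= size <= 31, "SIZE out of MPU range"
    return size
```

This reproduces the two values used in the answer: 7 for the 256-byte region and 26 for the 0x0800'0000-byte region reaching the relocated STM32 Vector Table.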
| NULL pointer protection with ARM Cortex-M MPU | The MPU in ARM Cortex-M (M0+/M3/M4/M7/etc.) is often advertised as allowing to set up protection against dereferencing the NULL pointer. But how to do this in practice? (Some online discussions, like in the Zephyr Project, indicate that the issue is not quite trivial.)
I'm looking for the simplest possible MPU code running in "Privileged mode" on bare-metal ARM Cortex-M. Please note that "protection against dereferencing the NULL pointer" means to me protection both against reads and writes. Also, it is not just about the address 0x0, but small offsets from it as well. For example, accessing a struct member via a NULL pointer should also cause MPU exception:
struct foo {
. . .
uint8_t x;
};
. . .
uint8_t x = ((struct foo volatile *)NULL)->x; // should fail!
| [
"After some experimentation, I've come up with the MPU setting that seems to work for most ARM Cortex-M MCUs. Here is the code (using the CMSIS):\n/* Configure the MPU to prevent NULL-pointer dereferencing ... */\nMPU->RBAR = 0x0U /* base address (NULL) */\n | MPU_RBAR_VALID_Msk /* valid region */\n | (MPU_RBAR_REGION_Msk & 7U); /* region #7 */\nMPU->RASR = (7U << MPU_RASR_SIZE_Pos) /* 2^(7+1) region, see NOTE0 */\n | (0x0U << MPU_RASR_AP_Pos) /* no-access region */\n | MPU_RASR_ENABLE_Msk; /* region enable */\n\nMPU->CTRL = MPU_CTRL_PRIVDEFENA_Msk /* enable background region */\n | MPU_CTRL_ENABLE_Msk; /* enable the MPU */\n__ISB();\n__DSB();\n\nThis code sets up a no-access MPU region #7 around the address 0x0 (any other MPU region will do as well). This works even for the MCUs, where the Vector Table also resides at address 0x0. Apparently, the MPU does not check access to the region by instructions other than LDR/STR, such as reading the vector address during Cortex-M exception entry.\nHowever, in case the Vector Table resides at 0, the size of the no-access region must not contain any data that the CPU would legitimately read with the LDR instruction. This means that the size of the no-access region should be about the size of the Vector Table. In the code above, the size is set to 2^(7+1)==256 bytes, which should be fine even for relatively small vector tables.\nThe code above works also for MCUs that automatically relocate the Vector Table, such as STM32. For these MCUs, the size of the no-access region can be increased all the way to the relocated Vector Table, like 0x0800'0000 in the case of STM32. (You could set the size to 2^(26+1)==0x0800'0000).\nProtection against NULL-pointer dereferencing is an important tool for improving the system's robustness and even for preventing malicious attacks. I hope that this answer will help fellow embedded developers.\n"
] | [
1
] | [] | [] | [
"arm",
"c",
"embedded",
"mpu"
] | stackoverflow_0074549991_arm_c_embedded_mpu.txt |
Q:
How do i access a variable inside a yup schemma
I'm using a yup schemma for validating a form, but in this form, some of the information only needs to be required if the variable numberPages is equals to 3, the problem is that this variable is declared outside the schemma declaration, therefore, i can't access that variable to test it's value. Any ideas on how to do this?
This is what i tried:
const numberPages = cadastroPage === "cliente" ? 2 : 3;
const esquema = yup.object().shape({
Machines: yup.string().when("numberPages === 3", {
is: true,
then: yup.string().required("Required"),
}),
.
.
.
A:
To access a variable inside a Yup schema, you can pass the variable's key as an argument to the when() method and then use the is option to test the value of the variable.
Here's an example of how you can do this:
const numberPages = 3; // This variable is declared outside the schema
const schema = yup.object().shape({
Machines: yup.string().when("numberPages", {
is: (value) => value === 3, // Test the value of the "numberPages" variable
then: yup.string().required("Required"),
}),
});
In your case, you can pass the numberPages variable as an argument to the when() method and then use the is option to check if the value of the numberPages variable is equal to 3.
Here's how you can modify your code to do this:
const numberPages = cadastroPage === "cliente" ? 2 : 3;
const esquema = yup.object().shape({
Machines: yup.string().when("numberPages", {
    is: (value) => value === 3, // Test the value of the "numberPages" variable
then: yup.string().required("Required"),
}),
});
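Independent of the yup API details, the rule being encoded is simply "Machines is required only when numberPages is 3". A hedged sketch of that predicate in Python, with the external variable passed in explicitly (names are mine):

```python
def validate_form(data: dict, number_pages: int) -> list:
    """Return a list of error strings; 'Machines' is required only
    when the externally supplied number_pages equals 3."""
    errors = []
    if number_pages == 3 and not data.get("Machines"):
        errors.append("Machines: Required")
    return errors

three_page_errors = validate_form({}, 3)   # variable outside the "schema"
two_page_errors = validate_form({}, 2)
```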
| How do i access a variable inside a yup schemma | I'm using a yup schemma for validating a form, but in this form, some of the information only needs to be required if the variable numberPages is equals to 3, the problem is that this variable is declared outside the schemma declaration, therefore, i can't access that variable to test it's value. Any ideas on how to do this?
This is what i tried:
const numberPages = cadastroPage === "cliente" ? 2 : 3;
const esquema = yup.object().shape({
Machines: yup.string().when("numberPages === 3", {
is: true,
then: yup.string().required("Required"),
}),
.
.
.
| [
"To access a variable inside a Yup schema, you can pass the variable as an argument to the when() method and then use the test() method to test the value of the variable.\nHere's an example of how you can do this:\nconst numberPages = 3; // This variable is declared outside the schema\n\nconst schema = yup.object().shape({\n Machines: yup.string().when(\"numberPages\", {\n is: (value) => value === 3, // Test the value of the \"numberPages\" variable\n then: yup.string().required(\"Required\"),\n }),\n});\n\nIn your case, you can pass the numberPages variable as an argument to the when() method and then use the test() method to check if the value of the numberPages variable is equal to 3.\nHere's how you can modify your code to do this:\nconst numberPages = cadastroPage === \"cliente\" ? 2 : 3;\n\nconst esquema = yup.object().shape({\n Machines: yup.string().when(\"numberPages\", {\n test: (value) => value === 3, // Test the value of the \"numberPages\" variable\n then: yup.string().required(\"Required\"),\n }),\n});\n\n"
] | [
0
] | [] | [] | [
"forms",
"javascript",
"reactjs",
"yup"
] | stackoverflow_0074678156_forms_javascript_reactjs_yup.txt |
Q:
Linking boost libraries
I have downloaded the boost library (version 1.46.1), but I don't know how to link it through xcode.I found an old question says to put the -lfftw3 flag, so I've put it.
I also added the path: /home/Documents/C++/boost_1_46_1 (it's the directory where I have put the library), but I am getting an error from the linker:
ld: warning: directory not found for option '-L/home/ramy/Documents/C++/boost_1_46_1'
ld: library not found for -lfftw3
clang: error: linker command failed with exit code 1 (use -v to see invocation)
Command /Developer/usr/bin/clang++ failed with exit code 1
So the question are two:
1)How to manage xcode to link boost?
2)Where to put the library in file system? In linux there was /usr/lib, here there isn't this path, should I put it in /Developer/usr/lib?
A:
Or for those who are looking for a quick answer (and are on linux), the magic is simply to add the following flags:
-l boost_system
| Linking boost libraries | I have downloaded the boost library (version 1.46.1), but I don't know how to link it through xcode.I found an old question says to put the -lfftw3 flag, so I've put it.
I also added the path: /home/Documents/C++/boost_1_46_1 (it's the directory where I have put the library), but I am getting an error from the linker:
ld: warning: directory not found for option '-L/home/ramy/Documents/C++/boost_1_46_1'
ld: library not found for -lfftw3
clang: error: linker command failed with exit code 1 (use -v to see invocation)
Command /Developer/usr/bin/clang++ failed with exit code 1
So the question are two:
1)How to manage xcode to link boost?
2)Where to put the library in file system? In linux there was /usr/lib, here there isn't this path, should I put it in /Developer/usr/lib?
| [
"Or for those who are looking for a quick answer (and are on linux), the magic is simply to add the following flags:\n-l boost_system\n\n"
] | [
9
] | [
"For anyone else trying to figure out how to link boost libraries, you're much better served by reading the (well written) Boost Getting Started guide.\n"
] | [
-1
] | [
"boost",
"c++",
"linker",
"xcode"
] | stackoverflow_0010262742_boost_c++_linker_xcode.txt |
Q:
OnCollision Destroy objects
Hello, I know I have the code right. I want to destroy the material objects when my player walks over them, but I can't figure out why they aren't being destroyed. I have given my material objects only a Box Collider with X=1, Y=1, Z=1. I also gave the material a tag. Instead of my player destroying those materials, he passes through them. I have a Rigidbody on the player.
void OnCollisionEnter ( Collision collision )
{
if ( collision.gameObject.tag == "material" )
{
Destroy ( collision.gameObject );
}
}
A:
You need to debug collisions
try this function and screenshot both results and material gameobject :
void OnCollisionEnter(Collision collision)
{
Debug.Log("all collisions :" + collision.gameObject.name);
if (collision.gameObject.CompareTag("material"))
{
Debug.Log("Material collision :" + collision.gameObject.name);
Destroy(collision.gameObject);
}
}
A:
Did you check that the Collider component is contained in the object?
And did you make sure that the IsTrigger is not checked on the Collider?
If that doesn't work, check if it's running properly with Debug.Log.
I'm not sure if Debug is taken properly. Maybe not, but check the basics first.
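A further sketch, in case the collision callback itself never fires: OnCollisionEnter only runs when neither collider is marked Is Trigger and a non-kinematic Rigidbody is involved. If the player is meant to pass over pickups, the trigger variant is the usual pattern (this assumes you enable Is Trigger on the material's collider):

```csharp
// Attach to the player (which has a Rigidbody).
// The material objects need a Collider with "Is Trigger" enabled.
void OnTriggerEnter(Collider other)
{
    if (other.CompareTag("material"))
    {
        Destroy(other.gameObject);
    }
}
```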
 | OnCollision Destroy objects | Hello, I know I have the code right. I want to destroy the material objects when my player walks over them, but I can't figure out why they aren't being destroyed. I have given my material objects only a Box Collider with X=1, Y=1, Z=1. I also gave the material a tag. Instead of my player destroying those materials, he passes through them. I have a Rigidbody on the player.
void OnCollisionEnter ( Collision collision )
{
if ( collision.gameObject.tag == "material" )
{
Destroy ( collision.gameObject );
}
}
| [
"You need to debug collisions\ntry this function and screenshot both results and material gameobject :\nvoid OnCollisionEnter(Collision collision)\n {\n Debug.Log(\"all collisions :\" + collision.gameObject.name);\n \n if (collision.gameObject.CompareTag(\"material\"))\n {\n Debug.Log(\"Material collision :\" + collision.gameObject.name);\n Destroy(collision.gameObject);\n }\n }\n }\n\n",
"Did you check that the Collider component is contained in the object?\nAnd did you make sure that the IsTrigger is not checked on the Collider?\nIf that doesn't work, check if it's running properly with Debug.Log.\nI'm not sure if Debug is taken properly. Maybe not, but check the basics first.\n"
] | [
0,
0
] | [] | [] | [
"c#",
"unity3d"
] | stackoverflow_0074670670_c#_unity3d.txt |
Q:
Plotly timeline with objects
In the example below, I would like to group the elements of the y-axis by continent, and to display the name of the continent at the top of each group. I can't figure out where in the layout we can set this. The example comes from this plotly page
import pandas as pd
import plotly.graph_objects as go
from plotly import data

df = data.gapminder()
df = df.loc[ (df.year.isin([1987, 2007]))]

countries = (
    df.loc[ (df.year.isin([2007]))]
    .sort_values(by=["pop"], ascending=True)["country"]
    .unique()
)[5:-10]

data = {"x": [], "y": [], "colors": [], "years": []}

for country in countries:
    data["x"].extend(
        [
            df.loc[(df.year == 1987) & (df.country == country)]["pop"].values[0],
            df.loc[(df.year == 2007) & (df.country == country)]["pop"].values[0],
            None,
        ]
    )
    data["y"].extend([country, country, None]),
    data["colors"].extend(["cyan", "darkblue", "white"]),
    data["years"].extend(["1987", "2007", None])

fig = go.Figure(
    data=[
        go.Scatter(
            x=data["x"],
            y=data["y"],
            mode="lines",
            marker=dict(
                color="grey",
            )),
        go.Scatter(
            x=data["x"],
            y=data["y"],
            text=data["years"],
            mode="markers",
            marker=dict(
                color=data["colors"],
                symbol=["square","circle","circle"]*10,
                size=16
            ),
            hovertemplate="""Country: %{y} <br> Population: %{x} <br> Year: %{text} <br><extra></extra>"""
        )
    ]
)
A:
Showing grouping by continent with the code you showed would require reworking the data structure, so instead of building a dictionary I loop over the data frame directly and group the y-axis by continent by specifying a multi-index for the y-axis.
I have limited myself to the top 5 countries per continent, because a large number of categorical variables on the y-axis makes the chart hard to read; you can adjust this to your needs. Furthermore, I have set the x-axis type to log format, because large discrepancies in the numbers weaken the visualization. This is also something I added on my own, and you can edit it yourself.
import pandas as pd
import plotly.graph_objects as go
from plotly import data

df = data.gapminder()
df = df.loc[(df.year.isin([1987, 2007]))]

# top5 by continent
countries = (df.loc[df.year.isin([2007])]
    .groupby(['continent',], as_index=False, sort=[True])[['country','pop']].head()['country']
)

df = df[df['country'].isin(countries.tolist())]

fig = go.Figure()

for c in df['continent'].unique():
    dff = df.query('continent == @c')
    #print(dff)
    for cc in dff['country'].unique():
        dfc = dff.query('country == @cc')
        fig.add_trace(go.Scatter(x=dfc['pop'].tolist(),
                                 y=[dfc['continent'], dfc['country']],
                                 mode='lines+markers',
                                 marker=dict(
                                     color='grey',
                                 ))
                      )
        fig.add_trace(go.Scatter(x=dfc['pop'].tolist(),
                                 y=[dfc['continent'], dfc['country']],
                                 text=dfc["year"],
                                 mode="markers",
                                 marker=dict(
                                     color=["cyan", "darkblue", "white"],
                                     size=16,
                                 ))
                      )

fig.update_layout(autosize=False, height=800, width=800, showlegend=False)
fig.update_xaxes(type='log')

fig.show()
 | Plotly timeline with objects | In the example below, I would like to group the elements of the y-axis by continent, and to display the name of the continent at the top of each group. I can't figure out where in the layout we can set this. The example comes from this plotly page
import pandas as pd
import plotly.graph_objects as go
from plotly import data

df = data.gapminder()
df = df.loc[ (df.year.isin([1987, 2007]))]

countries = (
    df.loc[ (df.year.isin([2007]))]
    .sort_values(by=["pop"], ascending=True)["country"]
    .unique()
)[5:-10]

data = {"x": [], "y": [], "colors": [], "years": []}

for country in countries:
    data["x"].extend(
        [
            df.loc[(df.year == 1987) & (df.country == country)]["pop"].values[0],
            df.loc[(df.year == 2007) & (df.country == country)]["pop"].values[0],
            None,
        ]
    )
    data["y"].extend([country, country, None]),
    data["colors"].extend(["cyan", "darkblue", "white"]),
    data["years"].extend(["1987", "2007", None])

fig = go.Figure(
    data=[
        go.Scatter(
            x=data["x"],
            y=data["y"],
            mode="lines",
            marker=dict(
                color="grey",
            )),
        go.Scatter(
            x=data["x"],
            y=data["y"],
            text=data["years"],
            mode="markers",
            marker=dict(
                color=data["colors"],
                symbol=["square","circle","circle"]*10,
                size=16
            ),
            hovertemplate="""Country: %{y} <br> Population: %{x} <br> Year: %{text} <br><extra></extra>"""
        )
    ]
)
| [
"To show grouping by continent instead of the code you showed would require looping through the data structure from dictionary format to data frame. y-axis by continent by specifying a multi-index for the y-axis.\nI have limited myself to the top 5 countries by continent because the large number of categorical variables on the y-axis creates a situation that is difficult to see for visualization. You can rewrite/not set here according to your needs. Furthermore, in terms of visualization, I have set the x-axis type to log format because the large discrepancies in the numbers make the visualization weaker. This is also something I added on my own and you can edit it yourself.\nimport pandas as pd\nimport plotly.graph_objects as go\nfrom plotly import data\n\ndf = data.gapminder()\ndf = df.loc[(df.year.isin([1987, 2007]))]\n\n# top5 by continent\ncountries = (df.loc[df.year.isin([2007])]\n .groupby(['continent',], as_index=False, sort=[True])[['country','pop']].head()['country']\n)\n\ndf = df[df['country'].isin(countries.tolist())]\n\nfig = go.Figure()\n\nfor c in df['continent'].unique():\n dff = df.query('continent == @c')\n #print(dff)\n for cc in dff['country'].unique():\n dfc = dff.query('country == @cc')\n fig.add_trace(go.Scatter(x=dfc['pop'].tolist(),\n y=[dfc['continent'],dfc['country']],\n mode='lines+markers',\n marker=dict(\n color='grey',\n ))\n )\n fig.add_trace(go.Scatter(x=dfc['pop'].tolist(),\n y=[dfc['continent'],dfc['country']],\n text=dfc[\"year\"],\n mode=\"markers\",\n marker=dict(\n color=[\"cyan\", \"darkblue\", \"white\"],\n size=16,\n ))\n )\n \nfig.update_layout(autosize=False, height=800, width=800, showlegend=False)\nfig.update_xaxes(type='log')\n\nfig.show()\n\n\n"
] | [
0
] | [] | [] | [
"plotly",
"python"
] | stackoverflow_0074677111_plotly_python.txt |
Q:
Echarts, way to have VisualMap on Emphasis mode only
I am using visualMap so I can have a line which is colored with a different color below y=0 and above y=0. This works great, but ideally I'd only have the different colors appear on hover, i.e. in emphasis mode. Is there any way to achieve this?
thanks
A:
It is possible to achieve the effect you are looking for using the emphasis option in visualmap. The emphasis option allows you to specify different styles for the visual map when it is in normal mode and when it is in emphasis mode, which is activated when the user hovers over the visual map. To use the emphasis option, you would need to set the emphasis property of the visual map to an object with two properties: normal and emphasis. The normal property would contain the styles for the visual map in normal mode, and the emphasis property would contain the styles for the visual map in emphasis mode.
For example, if you wanted to have the line colored with a different color below y=0 and above y=0 in normal mode, but only have the different colors appear on hover in emphasis mode, you could use the following code:
visualMap: {
type: 'continuous',
dimension: 1,
splitNumber: 2,
pieces: [{
gte: 0,
color: '#00ff00'
}, {
lt: 0,
color: '#ff0000'
}],
emphasis: {
normal: {
color: ['#00ff00', '#ff0000']
},
emphasis: {
color: ['#ff0000', '#00ff00']
}
}
}
In this code, the visual map is set to use the continuous type and the dimension is set to 1, which specifies that the visual map should be applied to the first dimension (the y-axis) of the data. The splitNumber property is set to 2, which means that the data will be split into two pieces, with values below 0 and values above 0. The pieces property is used to specify the colors for the different pieces of data. In this case, the data below 0 will be colored red and the data above 0 will be colored green.
The emphasis property is then used to specify the styles for the visual map in normal mode and in emphasis mode. In normal mode, the visual map is set to use a gradient color scale with the colors red and green, but in emphasis mode, the colors are reversed, so that the data below 0 is colored green and the data above 0 is colored red. This will cause the different colors to only appear when the user hovers over the visual map.
 | Echarts, way to have VisualMap on Emphasis mode only | I am using visualMap so I can have a line which is colored with a different color below y=0 and above y=0. This works great, but ideally I'd only have the different colors appear on hover, i.e. in emphasis mode. Is there any way to achieve this?
thanks
| [
"It is possible to achieve the effect you are looking for using the emphasis option in visualmap. The emphasis option allows you to specify different styles for the visual map when it is in normal mode and when it is in emphasis mode, which is activated when the user hovers over the visual map. To use the emphasis option, you would need to set the emphasis property of the visual map to an object with two properties: normal and emphasis. The normal property would contain the styles for the visual map in normal mode, and the emphasis property would contain the styles for the visual map in emphasis mode.\nFor example, if you wanted to have the line colored with a different color below y=0 and above y=0 in normal mode, but only have the different colors appear on hover in emphasis mode, you could use the following code:\nvisualMap: {\n    type: 'continuous',\n    dimension: 1,\n    splitNumber: 2,\n    pieces: [{\n        gte: 0,\n        color: '#00ff00'\n    }, {\n        lt: 0,\n        color: '#ff0000'\n    }],\n    emphasis: {\n        normal: {\n            color: ['#00ff00', '#ff0000']\n        },\n        emphasis: {\n            color: ['#ff0000', '#00ff00']\n        }\n    }\n}\n\nIn this code, the visual map is set to use the continuous type and the dimension is set to 1, which specifies that the visual map should be applied to the first dimension (the y-axis) of the data. The splitNumber property is set to 2, which means that the data will be split into two pieces, with values below 0 and values above 0. The pieces property is used to specify the colors for the different pieces of data. In this case, the data below 0 will be colored red and the data above 0 will be colored green.\nThe emphasis property is then used to specify the styles for the visual map in normal mode and in emphasis mode. In normal mode, the visual map is set to use a gradient color scale with the colors red and green, but in emphasis mode, the colors are reversed, so that the data below 0 is colored green and the data above 0 is colored red. This will cause the different colors to only appear when the user hovers over the visual map.\n"
] | [
0
] | [] | [] | [
"echarts"
] | stackoverflow_0074678235_echarts.txt |
Q:
How to fix Module not found: Can't resolve '@heroicons/react/solid' in react app?
I am following this brilliant post to learn react. However, some essential bits are missing.
When I open the app in the browser I get the error
./src/components/Navbar.js
Module not found: Can't resolve '@heroicons/react/solid'
Apparently, I am missing a module. I tried to install it but nothing helped so far.
I tried:
npm install heroicons-react
npm install @react-icons/all-files --save
npm install @iconify/icons-heroicons-solid
npm install @heroicons/vue
The folder structure looks like:
project
|
|-package.json
|-node_modules
|-homepage
|-node_modules
|-package_json
|-src
|-public
|-README.md
I tried to execute the commands in both the project directory and the homepage directory. Not sure which one I should use.
The code in question in Navbar.js looks like:
import { ArrowRightIcon } from "@heroicons/react/solid";
A:
This will resolve your problem.
npm i @heroicons/react
A:
This question is already resolved and I just wanted to add a few more things for newcomers. heroicons have clear documentation on GitHub.
React:
First, install @heroicons/react from npm:
npm install @heroicons/react
Now each icon can be imported individually as a React component:
import { BeakerIcon } from '@heroicons/react/solid'
function MyComponent() {
return (
<div>
<BeakerIcon className="h-5 w-5 text-blue-500"/>
<p>...</p>
</div>
)
}
Vue
Note that this library currently only supports Vue 3.
First, install @heroicons/vue from npm:
npm install @heroicons/vue
Now each icon can be imported individually as a Vue component:
<template>
<div>
<BeakerIcon class="h-5 w-5 text-blue-500"/>
<p>...</p>
</div>
</template>
<script>
import { BeakerIcon } from '@heroicons/vue/solid'
export default {
components: { BeakerIcon }
}
</script>
A:
Downgrade to 1.0.6 solved it for me
yarn add @heroicons/[email protected]
A:
For anyone recently having trouble, you need to:
import {} from '@heroicons/react/24/outline'
24 or 20 are the original sizes of icon as specified on heroicons site
A:
It could be because the installed version is heroicons v2. Try installing heroicons v1.
npm install heroicons-react
A:
After installing with:
npm install @heroicons/react
use
npm audit fix --force
A:
test this command npm install heroicons-react
or add
"@hookform/resolvers": "^0.1.0"
to your package.json
A:
Maintainers have released an update recently and it's messing up the imports used in the previous version. I wish they could make releases a bit easier to adapt to on the consumer side.
Anyway, you now need to include the sizes in the import statements too.
Previous Version import:
import {} from '@heroicons/react/outline'
import {} from '@heroicons/react/solid'
Latest version import:
import {} from '@heroicons/react/24/outline'
import {} from '@heroicons/react/20/solid'
A:
npm i @heroicons/react@v1
depending on the version you need
| How to fix Module not found: Can't resolve '@heroicons/react/solid' in react app? | I am following this brilliant post to learn react. However, some essential bits are missing.
When I open the app in the browser I get the error
./src/components/Navbar.js
Module not found: Can't resolve '@heroicons/react/solid'
Apparently, I am missing a module. I tried to install it but nothing helped so far.
I tried:
npm install heroicons-react
npm install @react-icons/all-files --save
npm install @iconify/icons-heroicons-solid
npm install @heroicons/vue
The folder structure looks like:
project
|
|-package.json
|-node_modules
|-homepage
|-node_modules
|-package_json
|-src
|-public
|-README.md
I tried to execute the commands in both the project directory and the homepage directory. Not sure which one I should use.
The code in question in Navbar.js looks like:
import { ArrowRightIcon } from "@heroicons/react/solid";
| [
"This will resolve you problem.\nnpm i @heroicons/react\n\n",
"This question is already resolved and I just wanted to add a few more things for newcomers. heroicons have clear documentation on GitHub.\nReact:\nFirst, install @heroicons/react from npm:\nnpm install @heroicons/react\n\nNow each icon can be imported individually as a React component:\nimport { BeakerIcon } from '@heroicons/react/solid'\n\nfunction MyComponent() {\n return (\n <div>\n <BeakerIcon className=\"h-5 w-5 text-blue-500\"/>\n <p>...</p>\n </div>\n )\n}\n\nVue\nNote that this library currently only supports Vue 3.\nFirst, install @heroicons/vue from npm:\nnpm install @heroicons/vue\n\nNow each icon can be imported individually as a Vue component:\n<template>\n <div>\n <BeakerIcon class=\"h-5 w-5 text-blue-500\"/>\n <p>...</p>\n </div>\n</template>\n\n<script>\nimport { BeakerIcon } from '@heroicons/vue/solid'\n\nexport default {\n components: { BeakerIcon }\n}\n</script>\n\n",
"Downgrade to 1.0.6 solved it for me\nyarn add @heroicons/[email protected]\n\n",
"For anyone recently having trouble, you need to:\nimport {} from '@heroicons/react/24/outline'\n\n24 or 20 are the original sizes of icon as specified on heroicons site\n",
"It could be because the installed version is v2 heroicons. Try installing heroiconsv1.\nnpm install heroicons-react\n\n",
"After installing with:\nnpm install @heroicons/react\nuse\nnpm audit fix --force\n",
"test this command npm install heroicons-react\nor add\n\"@hookform/resolvers\": \"^0.1.0\" \n\nto your package.json\n",
"Maintainers has released an update recently and it's messing up the imports used in previous version. I wish they could make the release a bit more easier to adapt on consumer side.\nAnyway, you now need to define the sizes too in the import statements.\nPrevious Version import:\n\nimport {} from '@heroicons/react/outline'\n\n\nimport {} from '@heroicons/react/solid'\n\nLatest version import:\n\nimport {} from '@heroicons/react/24/outline'\n\n\nimport {} from '@heroicons/react/20/solid'\n\n",
"npm i @heroicons/react@v1\ndepending version\n"
] | [
12,
7,
5,
3,
2,
1,
1,
1,
0
] | [] | [] | [
"create_react_app",
"npm",
"reactjs"
] | stackoverflow_0068809554_create_react_app_npm_reactjs.txt |
Q:
Transition zoom at Text in Swiftui when a previous fade in transition ends for the same Text
I have two texts in a SwiftUI View and two @State wrappers. When the view appears, the first text is visible and the second is not. After a few seconds the first text fades out and the second text should fade in at the same time. So far so good. Now here is my issue: after the second text fades in, a few seconds later that same second text, Text("HELLO FROM THE OTHER SIDE"), has to zoom out. This is the issue I have. How should I change the code so I can also trigger the zoom-out transition called TextZoomOutTransition? Here is the code:
import SwiftUI
struct Transitions: View {
@State changeText: Bool
@State zoomText: Bool
private var TextFadeOut: AnyTransition {
.opacity
.animation(
.easeOut(duration: 0.3)
)
}
private var TextFadeIn: AnyTransition {
.opacity
.animation(
.easeIn(duration: 0.3)
)
}
private var TextZoomOutTransition: AnyTransition {
return .asymmetric(
insertion: .opacity,
removal: .scale(
scale: 1000, anchor: UnitPoint(x: 0.50, y: 0.45))
.animation(
.easeInOut(duration: 2.0)
.delay(0.1)
)
)
}
public var body: some View {
ZStack(alignment: .center) {
Color.clear
VStack(spacing: 24) {
if !changeText {
Text("HELLO THERE")
.transition(TextFadeOut)
} else if !zoomText {
Text("HELLO FROM THE OTHER SIDE")
.transition(TextFadeIn)
}
}
}
.onAppear {
zoomText = false
DispatchQueue.main.asyncAfter(deadline: .now() + 1.0) {
changeText = true
}
DispatchQueue.main.asyncAfter(deadline: .now() + 5.0) {
zoomText = true
}
}
}
}
A:
The zoom out transition already does the fade in during insertion, you just need to add the duration. The same transition fades in at insertion and zooms out at removal.
Here's the code - I corrected some mistakes, if you don't mind (variables start with lower case, @State are variables...):
@State private var changeText = false
@State private var zoomText = false
private var textFadeOut: AnyTransition {
.opacity
.animation(
.easeOut(duration: 0.3)
)
}
private var textZoomOutTransition: AnyTransition {
return .asymmetric(
insertion: .opacity
// Here: add duration for fade in
.animation(
.easeIn(duration: 0.3)
),
removal: .scale(
scale: 1000, anchor: UnitPoint(x: 0.50, y: 0.45))
.animation(
.easeInOut(duration: 2.0)
.delay(0.1)
)
)
}
public var body: some View {
ZStack(alignment: .center) {
Color.clear
VStack(spacing: 24) {
if !changeText {
Text("HELLO THERE")
.transition(textFadeOut)
} else if !zoomText {
Text("HELLO FROM THE OTHER SIDE")
// Use this transition for insertion and removal
.transition(textZoomOutTransition)
}
}
}
.onAppear {
zoomText = false
DispatchQueue.main.asyncAfter(deadline: .now() + 1.0) {
changeText = true
}
DispatchQueue.main.asyncAfter(deadline: .now() + 5.0) {
zoomText = true
}
}
}
 | Transition zoom at Text in Swiftui when a previous fade in transition ends for the same Text | I have two texts in a SwiftUI View and two @State wrappers. When the view appears, the first text is visible and the second is not. After a few seconds the first text fades out and the second text should fade in at the same time. So far so good. Now here is my issue: after the second text fades in, a few seconds later that same second text, Text("HELLO FROM THE OTHER SIDE"), has to zoom out. This is the issue I have. How should I change the code so I can also trigger the zoom-out transition called TextZoomOutTransition? Here is the code:
import SwiftUI
struct Transitions: View {
@State changeText: Bool
@State zoomText: Bool
private var TextFadeOut: AnyTransition {
.opacity
.animation(
.easeOut(duration: 0.3)
)
}
private var TextFadeIn: AnyTransition {
.opacity
.animation(
.easeIn(duration: 0.3)
)
}
private var TextZoomOutTransition: AnyTransition {
return .asymmetric(
insertion: .opacity,
removal: .scale(
scale: 1000, anchor: UnitPoint(x: 0.50, y: 0.45))
.animation(
.easeInOut(duration: 2.0)
.delay(0.1)
)
)
}
public var body: some View {
ZStack(alignment: .center) {
Color.clear
VStack(spacing: 24) {
if !changeText {
Text("HELLO THERE")
.transition(TextFadeOut)
} else if !zoomText {
Text("HELLO FROM THE OTHER SIDE")
.transition(TextFadeIn)
}
}
}
.onAppear {
zoomText = false
DispatchQueue.main.asyncAfter(deadline: .now() + 1.0) {
changeText = true
}
DispatchQueue.main.asyncAfter(deadline: .now() + 5.0) {
zoomText = true
}
}
}
}
| [
"The zoom out transition already does the fade in during insertion, you just need to add the duration. The same transition fades in at insertion and zooms out at removal.\nHere's the code - I corrected some mistakes, if you don't mind (variables start with lower case, @State are variables...):\n@State private var changeText = false\n@State private var zoomText = false\n\nprivate var textFadeOut: AnyTransition {\n .opacity\n .animation(\n .easeOut(duration: 0.3)\n )\n}\n\nprivate var textZoomOutTransition: AnyTransition {\n return .asymmetric(\n insertion: .opacity\n \n // Here: add duration for fade in\n .animation(\n .easeIn(duration: 0.3)\n ),\n \n removal: .scale(\n scale: 1000, anchor: UnitPoint(x: 0.50, y: 0.45))\n .animation(\n .easeInOut(duration: 2.0)\n .delay(0.1)\n )\n )\n}\n\npublic var body: some View {\n \n ZStack(alignment: .center) {\n Color.clear\n \n \n VStack(spacing: 24) {\n \n if !changeText {\n Text(\"HELLO THERE\")\n .transition(textFadeOut)\n } else if !zoomText {\n Text(\"HELLO FROM THE OTHER SIDE\")\n \n // Use this transition for insertion and removal\n .transition(textZoomOutTransition)\n }\n }\n }\n .onAppear {\n zoomText = false\n DispatchQueue.main.asyncAfter(deadline: .now() + 1.0) {\n changeText = true\n }\n DispatchQueue.main.asyncAfter(deadline: .now() + 5.0) {\n zoomText = true\n }\n }\n}\n\n"
] | [
0
] | [] | [] | [
"swiftui",
"swiftui_transition"
] | stackoverflow_0074677376_swiftui_swiftui_transition.txt |
Q:
Time nonce generation in go routines
I am calling a REST API which expects a nonce header. The nonce must be a unique timestamp, and every consecutive call should have a timestamp greater than the previous one. My goal is to launch 10 goroutines and from each one make a call to the web API. Since we do not have control over goroutine execution order, we might end up making a web API call with a nonce smaller than a previous one. I do not have control over the API implementation.
I have stripped down my code to something very simple which illustrate the problem:
package main
import (
"fmt"
"time"
)
func main() {
count := 10
results := make(chan string, count)
for i := 0; i < 10; i++ {
go someWork(results)
// Enabling the following line would give the
// expected outcome but does look like a hack to me.
// time.Sleep(time.Millisecond)
}
for i := 0; i < count; i++ {
fmt.Println(<-results)
}
}
func someWork(done chan string) {
// prepare http request, do http request, send to done chan the result
done <- time.Now().Format("15:04:05.00000")
}
From the output you can see how we have timestamps which are not chronologically ordered:
13:18:26.98549
13:18:26.98560
13:18:26.98561
13:18:26.98553
13:18:26.98556
13:18:26.98556
13:18:26.98557
13:18:26.98558
13:18:26.98559
13:18:26.98555
What would be the idiomatic way to achieve the expected outcome without adding the sleep line?
Thanks!
A:
I think you can use a WaitGroup, for example:
package main
import (
"fmt"
"sync"
"time"
)
var wg sync.WaitGroup = sync.WaitGroup{}
var ct int = 0
func hello() {
fmt.Printf("Hello Go %v\n", time.Now().Format("15:04:05.00000"))
// when you are done, call done:
time.Sleep(time.Duration(10 * int(time.Second)))
wg.Done()
}
func main() {
for i := 0; i < 10; i++ {
wg.Add(1)
go hello()
wg.Wait()
}
}
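Another option (a sketch, not the only idiomatic answer): serialize just the nonce generation behind a mutex, so every goroutine is guaranteed a strictly increasing value even when two goroutines read the clock in the same instant. Note this orders nonce *creation* only — if the server also checks arrival order, the HTTP send itself would still need serializing, as in the mutex answer below.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// nonceSource hands out strictly increasing nonces; the mutex
// serializes only nonce generation, not the whole request.
type nonceSource struct {
	mu   sync.Mutex
	last int64
}

func (n *nonceSource) next() int64 {
	n.mu.Lock()
	defer n.mu.Unlock()
	now := time.Now().UnixNano()
	if now <= n.last {
		now = n.last + 1 // guarantee a strict increase even on clock ties
	}
	n.last = now
	return now
}

func main() {
	src := &nonceSource{}
	var wg sync.WaitGroup
	results := make(chan int64, 10)
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			results <- src.next()
		}()
	}
	wg.Wait()
	close(results)
	for r := range results {
		fmt.Println(r)
	}
}
```

The channel may still deliver results out of order, but every issued nonce is unique and monotonically increasing at the moment it is created.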
 | Time nonce generation in go routines | I am calling a REST API which expects a nonce header. The nonce must be a unique timestamp, and every consecutive call should have a timestamp greater than the previous one. My goal is to launch 10 goroutines and from each one make a call to the web API. Since we do not have control over goroutine execution order, we might end up making a web API call with a nonce smaller than a previous one. I do not have control over the API implementation.
I have stripped down my code to something very simple which illustrate the problem:
package main
import (
"fmt"
"time"
)
func main() {
count := 10
results := make(chan string, count)
for i := 0; i < 10; i++ {
go someWork(results)
// Enabling the following line would give the
// expected outcome but does look like a hack to me.
// time.Sleep(time.Millisecond)
}
for i := 0; i < count; i++ {
fmt.Println(<-results)
}
}
func someWork(done chan string) {
// prepare http request, do http request, send to done chan the result
done <- time.Now().Format("15:04:05.00000")
}
From the output you can see how we have timestamps which are not chronologically ordered:
13:18:26.98549
13:18:26.98560
13:18:26.98561
13:18:26.98553
13:18:26.98556
13:18:26.98556
13:18:26.98557
13:18:26.98558
13:18:26.98559
13:18:26.98555
What would be the idiomatic way to achieve the expected outcome without adding the sleep line?
Thanks!
| [
"I think: you can use a WaitGroup, for example:\npackage main\n\nimport (\n \"fmt\"\n \"sync\"\n \"time\"\n)\n\nvar wg sync.WaitGroup = sync.WaitGroup{}\nvar ct int = 0\n\nfunc hello() {\n fmt.Printf(\"Hello Go %v\\n\", time.Now().Format(\"15:04:05.00000\"))\n // when you are done, call done:\n time.Sleep(time.Duration(10 * int(time.Second)))\n wg.Done()\n}\n\nfunc main() {\n for i := 0; i < 10; i++ {\n wg.Add(1)\n go hello()\n wg.Wait()\n }\n\n}\n\n"
] | [
0
] | [
"As I understand you only need to synchronize (serialize) the goroutines till request send part, that is where the timestamp and nonce need to be sequential. response processing can be parallely\nYou can use a mutex for this case like in below code\npackage main\n\nimport (\n \"fmt\"\n \"sync\"\n \"time\"\n)\n\nfunc main() {\n count := 10\n results := make(chan string, count)\n var mutex sync.Mutex\n for i := 0; i < count; i++ {\n go someWork(&mutex, results)\n }\n\n for i := 0; i < count; i++ {\n fmt.Println(<-results)\n }\n}\n\nfunc someWork(mut *sync.Mutex, done chan string) {\n // Lock the mutex, go routine getting lock here, \n // is guarranteed to create the timestamp and \n // perform the request before any other\n mut.Lock()\n // Get the timestamp\n myTimeStamp := time.Now().Format(\"15:04:05.00000\")\n //prepare http request, do http request\n //free the mutex\n mut.Unlock()\n\n // Process response\n // send to done chan the result\n done <- myTimeStamp\n}\n\nOutput is chronolgically ordered\n21:24:03.36582\n21:24:03.36593\n21:24:03.36595\n21:24:03.36596\n21:24:03.36597\n21:24:03.36597\n21:24:03.36598\n21:24:03.36598\n21:24:03.36599\n21:24:03.36600\n\nBut still some duplicate timestamps, may be need more finegrained timestamp, but that is up to the use case.\nHope this helps.\n"
] | [
-1
] | [
"go",
"goroutine",
"http_headers",
"nonce"
] | stackoverflow_0074675191_go_goroutine_http_headers_nonce.txt |
Q:
convert html to json using rdd.map
I have an HTML file which I want to parse in PySpark.
Example:
<MainStruct Rank="1">
<Struct Name="A">
<Struct Name="AA">
<Struct Name="AAA">
<Field Name="F1">Data</Field>
</Struct>
<Struct Name="ListPart">
<List Name="ListName">
<Struct Name="S1">
<Field Name="F1">AAA</Field>
<Field Name="F2">BBB</Field>
<Field Name="F3">CCC</Field>
</Struct>
<Struct Name="S1">
<Field Name="F1">XXX</Field>
<Field Name="F2">GGG</Field>
<Field Name="F3">BBB</Field>
</Struct>
</List>
</Struct>
</Struct>
</Struct>
</MainStruct>
rdd_html = spark.sparkContext.wholeTextFiles(path_to_XML, minPartitions=1000, use_unicode=True)
df_html = spark.createDataFrame(rdd_html,['filename', 'content'])
rdd_map = df_html.rdd.map(lambda x: xmltodict(x['content'],'mainstruct'))
df_map = spark.createDataFrame(rdd_map)
df_map.display()
but in my Notebook output I have problem with list elements. They are parsed inсorrectly.
>object
>AA:
>ListPart:
ListName: "[{S1={F1=AAA, F2=BBB, F3=CCC}}, {S1={F1=XXX, F2=GGG, F3=BBB}}]"
>AAA:
F1: "Data"
The list element is rendered as a single string.
My function to parse it:
def xmltodict(content,first_tag=''):
#Content from xml File
content = re.sub('\n', '', content)
content = re.sub('\r', '', content)
content = re.sub('>\s+<', '><', content)
data = unicodedata.normalize('NFKD', content)
soup = BeautifulSoup(data, 'lxml')
body = soup.find('body')
if(first_tag.strip()!=''):
struct = body.find(first_tag)
else:
struct=body
return parser(struct)
def parser(struct):
struct_all = struct.findAll(True, recursive=False)
struct_dict = {}
for strc in struct_all:
tag = strc.name
tag_name_prop = strc.attrs['name']
if tag == 'struct':
d = parser(strc)
el = {tag_name_prop: d}
struct_dict.update(el)
elif tag == 'field':
v = strc.text
struct_dict[tag_name_prop] = v
elif tag == 'list':
l_elem = []
for child in strc.contents:
soap_child = BeautifulSoup(str(child), 'lxml').find('body')
l_elem.append(parser(soap_child))
el = {tag_name_prop: l_elem}
struct_dict.update(el)
with open('result.txt', 'w') as file:
file.write(json.dumps(struct_dict))
return struct_dict
the result in txt file is that I want to receive:
"A": { "AA": {
"AAA": {"F1": "Data"},
"ListPart": {
"ListName": [
{
"S1": {"F1": "AAA",
"F2": "BBB",
"F3": "CCC"
}
},
{
"S1": { "F1": "XXX",
"F2": "GGG",
"F3": "BBB"
}}]
}}}
but in my notebook output I have a problem with list elements. They are parsed incorrectly.
>object
>AA:
>ListPart:
ListName: "[{S1={F1=AAA, F2=BBB, F3=CCC}}, {S1={F1=XXX, F2=GGG, F3=BBB}}]"
>AAA:
F1: "Data"
Why is the list represented as one string? Why are there "=" symbols instead of ":"?
A:
I simplified this issue to this:
def parseList(row):
d = {}
d['el1']='AAA'
l = [{'x1':'XA'},{'x1':'XB'}]
d['el2']=l
return Row(res=d)
rdd_html = spark.sparkContext.wholeTextFiles(path_to_file_test, minPartitions=1000, use_unicode=True)
df_html = spark.createDataFrame(rdd_html,['filename', 'content'])
rdd_map = df_html.rdd.map(parseList)
df_map = spark.createDataFrame(rdd_map)
df_map.display()
in the result I also have
>object
el2: "[{x1=XA}, {x1=XB}]"
el1: "AAA"
not that
>object
>el2
x1:"XA"
x1:"XB"
el1: "AAA"
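A likely cause (an assumption, not confirmed here from the Spark docs): without an explicit schema, createDataFrame infers a map of strings for the mixed-type dict, so the nested list is coerced through its Java toString, which prints pairs with "=" instead of ":". One workaround sketch in plain Python (the names are illustrative) is to serialize the parsed dict to a JSON string before it reaches Spark, so the DataFrame holds a single well-formed string column:

```python
import json

# Nested structure as produced by the parser in the question.
parsed = {"el1": "AAA", "el2": [{"x1": "XA"}, {"x1": "XB"}]}

# Serializing before the Row is built keeps the ':' separators and the
# list structure; Spark then stores one well-formed JSON string instead
# of coercing the nested values through toString.
as_json = json.dumps(parsed)
```

On the Spark side the column could then be decoded with from_json plus an explicit StructType, or an explicit schema could be passed to createDataFrame in the first place.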
| convert html to json using rdd.map | I have an HTML file which I want to parse in PySpark.
Example:
<MainStruct Rank="1">
<Struct Name="A">
<Struct Name="AA">
<Struct Name="AAA">
<Field Name="F1">Data</Field>
</Struct>
<Struct Name="ListPart">
<List Name="ListName">
<Struct Name="S1">
<Field Name="F1">AAA</Field>
<Field Name="F2">BBB</Field>
<Field Name="F3">CCC</Field>
</Struct>
<Struct Name="S1">
<Field Name="F1">XXX</Field>
<Field Name="F2">GGG</Field>
<Field Name="F3">BBB</Field>
</Struct>
</List>
</Struct>
</Struct>
</Struct>
</MainStruct>
rdd_html = spark.sparkContext.wholeTextFiles(path_to_XML, minPartitions=1000, use_unicode=True)
df_html = spark.createDataFrame(rdd_html,['filename', 'content'])
rdd_map = df_html.rdd.map(lambda x: xmltodict(x['content'],'mainstruct'))
df_map = spark.createDataFrame(rdd_map)
df_map.display()
but in my Notebook output I have a problem with list elements. They are parsed incorrectly.
>object
>AA:
>ListPart:
ListName: "[{S1={F1=AAA, F2=BBB, F3=CCC}}, {S1={F1=XXX, F2=GGG, F3=BBB}}]"
>AAA:
F1: "Data"
The list element is rendered as one string.
My function to parse it:
def xmltodict(content,first_tag=''):
#Content from xml File
content = re.sub('\n', '', content)
content = re.sub('\r', '', content)
content = re.sub('>\s+<', '><', content)
data = unicodedata.normalize('NFKD', content)
soup = BeautifulSoup(data, 'lxml')
body = soup.find('body')
if(first_tag.strip()!=''):
struct = body.find(first_tag)
else:
struct=body
return parser(struct)
def parser(struct):
struct_all = struct.findAll(True, recursive=False)
struct_dict = {}
for strc in struct_all:
tag = strc.name
tag_name_prop = strc.attrs['name']
if tag == 'struct':
d = parser(strc)
el = {tag_name_prop: d}
struct_dict.update(el)
elif tag == 'field':
v = strc.text
struct_dict[tag_name_prop] = v
elif tag == 'list':
l_elem = []
for child in strc.contents:
soap_child = BeautifulSoup(str(child), 'lxml').find('body')
l_elem.append(parser(soap_child))
el = {tag_name_prop: l_elem}
struct_dict.update(el)
with open('result.txt', 'w') as file:
file.write(json.dumps(struct_dict))
return struct_dict
the result in txt file is that I want to receive:
"A": { "AA": {
"AAA": {"F1": "Data"},
"ListPart": {
"ListName": [
{
"S1": {"F1": "AAA",
"F2": "BBB",
"F3": "CCC"
}
},
{
"S1": { "F1": "XXX",
"F2": "GGG",
"F3": "BBB"
}}]
}}}
but in my notebook output I have a problem with list elements. They are parsed incorrectly.
>object
>AA:
>ListPart:
ListName: "[{S1={F1=AAA, F2=BBB, F3=CCC}}, {S1={F1=XXX, F2=GGG, F3=BBB}}]"
>AAA:
F1: "Data"
Why is the list represented as one string? Why are there "=" symbols instead of ":"?
| [
"i simplified this issue to that:\n def parseList(row):\n d = {}\n d['el1']='AAA'\n l = [{'x1':'XA'},{'x1':'XB'}]\n d['el2']=l\n return Row(res=d)\n\nrdd_html = spark.sparkContext.wholeTextFiles(path_to_file_test, minPartitions=1000, use_unicode=True)\ndf_html = spark.createDataFrame(rdd_html,['filename', 'content'])\nrdd_map = df_html.rdd.map(parseList2)\ndf_map = spark.createDataFrame(rdd_map)\ndf_map.display()\n\nin result i also have\n>object\n el2: \"[{x1=XA}, {x1=XB}]\"\n el1: \"AAA\"\n\nnot that\n>object\n >el2 \n x1:\"XA\"\n x1:\"XB\"\n el1: \"AAA\"\n\n"
] | [
0
] | [] | [] | [
"html_parsing",
"pyspark",
"rdd",
"xml_parsing"
] | stackoverflow_0074675076_html_parsing_pyspark_rdd_xml_parsing.txt |
Q:
What does print()'s `flush` do?
There is a boolean optional argument to the print() function flush which defaults to False.
The documentation says it is to forcibly flush the stream.
I don't understand the concept of flushing. What is flushing here? What is flushing of stream?
A:
Normally output to a file or the console is buffered, with text output at least until you print a newline. The flush makes sure that any output that is buffered goes to the destination.
I do use it e.g. when I make a user prompt like Do you want to continue (Y/n):, before getting the input.
This can be simulated (on Ubuntu 12.4 using Python 2.7):
from __future__ import print_function
import sys
from time import sleep
fp = sys.stdout
print('Do you want to continue (Y/n): ', end='')
# fp.flush()
sleep(5)
If you run this, you will see that the prompt string does not show up until the sleep ends and the program exits. If you uncomment the line with flush, you will see the prompt and then have to wait 5 seconds for the program to finish
A:
There are a couple of things to understand here. One is the difference between buffered I/O and unbuffered I/O. The concept is fairly simple - for buffered I/O, there is an internal buffer which is kept. Only when that buffer is full (or some other event happens, such as it reaches a newline) is the output "flushed". With unbuffered I/O, whenever a call is made to output something, it will do this, 1 character at a time.
Most I/O functions fall into the buffered category, mainly for performance reasons: it's a lot faster to write chunks at a time (all I/O functions eventually get down to syscalls of some description, which are expensive.)
flush lets you manually choose when you want this internal buffer to be written - a call to flush will write any characters in the buffer. Generally, this isn't needed, because the stream will handle this itself. However, there may be situations when you want to make sure something is output before you continue - this is where you'd use a call to flush().
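The block buffering described above can be observed directly with a file; a minimal sketch (the temporary path is arbitrary):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

f = open(path, "w")
f.write("hello")  # sits in the internal buffer, far below the default block size

with open(path) as reader:
    before = reader.read()  # nothing has reached the file yet

f.flush()  # force the buffered characters out to the file

with open(path) as reader:
    after = reader.read()

f.close()
```

Here before is the empty string and after is "hello": the bytes only become visible to other readers once the buffer is flushed (or the file is closed).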
A:
Two perfect answers we have here,
Anthon made it very clear to understand: basically, the print line's output does not appear until the buffer is flushed.
Technically the line does run; its output just stays buffered until
the stream is flushed.
This might cause a bug for some people who use the sleep function after running a print function, expecting to see it print before the sleep starts.
So why am I adding another answer?
The Future Has Arrived And I Would Like To Take The Time And Update You With It:
from __future__ import print_function
First of all, I believe this was an inside joke meant to show an error: Future is not defined ^_^
I'm looking at PyCharm's documentation right now and it looks like they added a flush method built inside the print function itself, Take a look at this:
def print(self, *args, sep=' ', end='\n', file=None): # known special case of print
"""
print(value, ..., sep=' ', end='\n', file=sys.stdout, flush=False)
Prints the values to a stream, or to sys.stdout by default.
Optional keyword arguments:
file: a file-like object (stream); defaults to the current sys.stdout.
sep: string inserted between values, default a space.
end: string appended after the last value, default a newline.
flush: whether to forcibly flush the stream.
"""
pass
So we might be able to use: (Not sure if the usage order of the parameters should be the same)
from __present__ import print_function
from time import sleep
print('Hello World', flush=True)
sleep(5)
Or This:
print('Hello World', file=sys.stdout , flush=True)
As Anthon said:
If you run this, you will see that the prompt string does not show up
until the sleep ends and the program exits. If you uncomment the line
with flush, you will see the prompt and then have to wait 5 seconds
for the program to finish
So let's just convert that to our current situation:
If you run this, you will see the prompt and then have to wait 5 seconds
for the program to finish. If you change the line
with flush to flush=False, you will see that the prompt string does not show up
until the sleep ends and the program exits.
A:
A practical example to understand print() with and without the flush parameter:
import time
for i in range(5):
print(i, end=" ", flush=True) # Print numbers as soon as they are generated
# print(i, end=" ", flush=False) # Print everything together at the end
time.sleep(0.5)
print("end")
You can comment/uncomment the print lines to check how this affects the way the output is produced.
This is similar to the accepted answer, just a bit simpler and for Python 3.
| What does print()'s `flush` do? | There is a boolean optional argument to the print() function flush which defaults to False.
The documentation says it is to forcibly flush the stream.
I don't understand the concept of flushing. What is flushing here? What is flushing of stream?
| [
"Normally output to a file or the console is buffered, with text output at least until you print a newline. The flush makes sure that any output that is buffered goes to the destination.\nI do use it e.g. when I make a user prompt like Do you want to continue (Y/n):, before getting the input.\nThis can be simulated (on Ubuntu 12.4 using Python 2.7):\nfrom __future__ import print_function\n\nimport sys\nfrom time import sleep\n\nfp = sys.stdout\nprint('Do you want to continue (Y/n): ', end='')\n# fp.flush()\nsleep(5)\n\nIf you run this, you will see that the prompt string does not show up until the sleep ends and the program exits. If you uncomment the line with flush, you will see the prompt and then have to wait 5 seconds for the program to finish\n",
"There are a couple of things to understand here. One is the difference between buffered I/O and unbuffered I/O. The concept is fairly simple - for buffered I/O, there is an internal buffer which is kept. Only when that buffer is full (or some other event happens, such as it reaches a newline) is the output \"flushed\". With unbuffered I/O, whenever a call is made to output something, it will do this, 1 character at a time.\nMost I/O functions fall into the buffered category, mainly for performance reasons: it's a lot faster to write chunks at a time (all I/O functions eventually get down to syscalls of some description, which are expensive.) \nflush lets you manually choose when you want this internal buffer to be written - a call to flush will write any characters in the buffer. Generally, this isn't needed, because the stream will handle this itself. However, there may be situations when you want to make sure something is output before you continue - this is where you'd use a call to flush().\n",
"Two perfect answers we have here,\nAnthon made it very clear to understand, Basically, the print line technically does not run (print) until the next line has finished.\n\nTechnically the line does run it just stays unbuffered until the\nnext line has finished running.\n\nThis might cause a bug for some people who uses the sleep function after running a print function expecting to see it prints before the sleep function started.\nSo why am I adding another answer?\nThe Future Has Arrived And I Would Like To Take The Time And Update You With It:\nfrom __future__ import print_function\n\nFirst of all, I believe this was an inside joke meant to show an error: Future is not defined ^_^\n\nI'm looking at PyCharm's documentation right now and it looks like they added a flush method built inside the print function itself, Take a look at this:\ndef print(self, *args, sep=' ', end='\\n', file=None): # known special case of print\n\"\"\"\nprint(value, ..., sep=' ', end='\\n', file=sys.stdout, flush=False)\n\nPrints the values to a stream, or to sys.stdout by default.\nOptional keyword arguments:\nfile: a file-like object (stream); defaults to the current sys.stdout.\nsep: string inserted between values, default a space.\nend: string appended after the last value, default a newline.\nflush: whether to forcibly flush the stream.\n\"\"\"\npass\n\n\n\nSo we might be able to use: (Not sure if the usage order of the parameters should be the same)\nfrom __present__ import print_function\n\nfrom time import sleep\n\nprint('Hello World', flush=True)\n\nsleep(5)\n\nOr This:\nprint('Hello World', file=sys.stdout , flush=True)\n\nAs Anthon said:\n\nIf you run this, you will see that the prompt string does not show up\nuntil the sleep ends and the program exits. 
If you uncomment the line\nwith flush, you will see the prompt and then have to wait 5 seconds\nfor the program to finish\n\nSo let's just convert that to our current situation:\nIf you run this, you will see the prompt and then have to wait 5 seconds\nfor the program to finish, If you change the line\nwith flush to flush=False, you will see that the prompt string does not show up\nuntil the sleep ends and the program exits.\n",
"A practical example to understand print() with and without the flush parameter:\nimport time\n\nfor i in range(5):\n print(i, end=\" \", flush=True) # Print numbers as soon as they are generated\n # print(i, end=\" \", flush=False) # Print everything together at the end\n time.sleep(0.5)\n\nprint(\"end\")\n\nYou can comment/uncomment the print lines to check how this affects the way the output is produced.\nThis is similar to the accepted answer, just a bit simpler and for Python 3.\n"
] | [
43,
40,
6,
0
] | [] | [] | [
"python",
"python_3.x"
] | stackoverflow_0015608229_python_python_3.x.txt |
Q:
Changing multiple while statements using defined functions?
Python code where the user inputs a surname and forename that must be a certain length and won't accept numeric values. I'm creating code for a website to get a user to input answers to various questions like name, address, phone number, etc. My code currently works for each question, but every question is a while statement, and I wanted to define functions for each instead (minimizing the repetition of while statements). Please see below: 1. the working while statement; 2. the def code I'm failing at creating, because it doesn't take the length or numeric values into account. The format of my question below for parts 1. and 2. doesn't include the starting "while" & "def" statements for some reason.
1.
first_name = "First Name:\t"
while first_name:
first_name = input("First Name:\t")
if len(first_name) < 15 and first_name.isalpha():
break
else:
print("Invalid entry. Please try again")
continue
second_name = "Second Name:\t"
while second_name:
second_name = input("Second Name:\t")
if len(second_name) < 15 and second_name.isalpha():
break
else:
print("Invalid entry. Please try again")
continue
def name(first):
while True:
if len(first) < 15 and first.isalpha():
break
else:
print('invalid')
continue
first = input("First Name:\t")
A:
You can modify the function like this:
def name_check(name):
while True:
if len(name) < 15 and name.isalpha():
break
else:
print('invalid')
name = input("First Name:\t")
continue
return name
result = name_check(input("First Name:\t"))
print(result)
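Going one step further toward the question's stated goal of removing the repeated while loops entirely, one generic validator can serve every field. A sketch (the ask parameter is an assumption added here so the loop can be exercised without a terminal; it defaults to the built-in input):

```python
def ask_valid(prompt, max_len=15, ask=input):
    """Re-prompt until the answer is alphabetic and shorter than max_len."""
    while True:
        value = ask(prompt)
        if len(value) < max_len and value.isalpha():
            return value
        print("Invalid entry. Please try again")

# Usage for every field, with no duplicated loops:
# first_name = ask_valid("First Name:\t")
# second_name = ask_valid("Second Name:\t")
```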
| Changing multiple while statements using defined functions? | Python code where the user inputs a surname and forename that must be a certain length and won't accept numeric values. I'm creating code for a website to get a user to input answers to various questions like name, address, phone number, etc. My code currently works for each question, but every question is a while statement, and I wanted to define functions for each instead (minimizing the repetition of while statements). Please see below: 1. the working while statement; 2. the def code I'm failing at creating, because it doesn't take the length or numeric values into account. The format of my question below for parts 1. and 2. doesn't include the starting "while" & "def" statements for some reason.
1.
first_name = "First Name:\t"
while first_name:
first_name = input("First Name:\t")
if len(first_name) < 15 and first_name.isalpha():
break
else:
print("Invalid entry. Please try again")
continue
second_name = "Second Name:\t"
while second_name:
second_name = input("Second Name:\t")
if len(second_name) < 15 and second_name.isalpha():
break
else:
print("Invalid entry. Please try again")
continue
def name(first):
while True:
if len(first) < 15 and first.isalpha():
break
else:
print('invalid')
continue
first = input("First Name:\t")
| [
"You can modify the function like this:\ndef name_check(name):\n while True:\n if len(name) < 15 and name.isalpha():\n break\n else:\n print('invalid')\n name = input(\"First Name:\\t\")\n continue\n return name\n\nresult = name_check(input(\"First Name:\\t\"))\nprint(result)\n\n"
] | [
0
] | [] | [] | [
"python",
"python_3.x",
"user_defined_functions",
"while_loop"
] | stackoverflow_0074678022_python_python_3.x_user_defined_functions_while_loop.txt |
Q:
Javascript overwrites the array in several places within an object
Sorry for the strange title, but I've come across an issue that is plain weird. To give some background, I'm working on a booking system that takes a time range as an input from the admin, generates available times based on it, and then reduces the available times based on already-made bookings (i.e. the admin specifies availability from 10:00 to 12:00, a booking has been made for 11:30, so the available times will be times = [10:00, 10:30, 11:00, 12:00]).
I have an object that contains per month for each day the available times.
availableTimesPerDay: {
1: ["10:00","10:30","11:00","11:30","12:00"],
2: ["10:00","10:30","11:00","11:30","12:00"],
3: ["10:00","10:30","11:00","11:30","12:00"],
....
}
Where the number represents the date for the given month.
Bookings are represented as an array of objects, format is:
bookedTimes = [
{
date: "2022-12-01T11:30:00.000+02:00"
}
];
I planned to have a function which would iterate through each booking and remove the availability for that time on a given date (based on example above, 11:30 would need to be removed from availableTimesPerDay[1] leaving the value for it as ["10:00","10:30","11:00","12:00"]
The function itself is defined as such:
function reduceAvailableTimesBasedOnDateTime(availableTimesPerDay,bookedTimes){
console.log(JSON.stringify(availableTimesPerDay));
bookedTimes.forEach((bookedDateObject) => {
let bookedDate = new Date(bookedDateObject.date);
// 1
let currentAvailableTimesOnDate = availableTimesPerDay[bookedDate.getDate()];
// ["10:00","10:30","11:00","11:30","12:00"]
let bookedTime = bookedDate.toLocaleTimeString('et');
// "13:30:00"
let time = bookedTime.substring(0,bookedTime.length - 3);
// "13:30"
let index = currentAvailableTimesOnDate.indexOf(time);
// 3
if (index > -1) {
currentAvailableTimesOnDate.splice(index, 1);
// ["10:00","10:30","11:00","12:00"]
}
})
console.log(JSON.stringify(availableTimesPerDay));
return availableTimesPerDay;
}
The way I understand this function is that I've extracted a specific array of available times into a new variable and removed a specific time from that array. I have made no modifications to the original data, and I would expect at this stage availableTimesPerDay to remain unmodified. However, when I run my code, availableTimesPerDay is modified even though I perform no operations on the availableTimesPerDay object itself.
What's even stranger is that the modification is not just strictly done on the 1st element, but on all specific dates that have the same day of the week. Here's the output from the console for the console.log(availableTimesPerDay) defined in the function (note that the 11:30 value is removed on the 1st of December, 8th of December, 15th of December, etc.).
booking-helper.js:94 {"1":["10:00","10:30","11:00","11:30","12:00"],"2":[],"3":[],"4":[],"5":[],"6":[],"7":[],"8":["10:00","10:30","11:00","11:30","12:00"],"9":[],"10":[],"11":[],"12":[],"13":[],"14":[],"15":["10:00","10:30","11:00","11:30","12:00"],"16":[],"17":[],"18":[],"19":[],"20":[],"21":[],"22":["10:00","10:30","11:00","11:30","12:00"],"23":[],"24":[],"25":[],"26":[],"27":[],"28":[],"29":["10:00","10:30","11:00","11:30","12:00"],"30":[],"31":[]}
booking-helper.js:105 {"1":["10:00","10:30","11:00","12:00"],"2":[],"3":[],"4":[],"5":[],"6":[],"7":[],"8":["10:00","10:30","11:00","12:00"],"9":[],"10":[],"11":[],"12":[],"13":[],"14":[],"15":["10:00","10:30","11:00","12:00"],"16":[],"17":[],"18":[],"19":[],"20":[],"21":[],"22":["10:00","10:30","11:00","12:00"],"23":[],"24":[],"25":[],"26":[],"27":[],"28":[],"29":["10:00","10:30","11:00","12:00"],"30":[],"31":[]}
What's even more interesting is that if I copy the same function to CodePen with the same data or call it directly from the browser's console, it works as expected - it removes the specific time from a specific date.
A:
The way I understand this function is that I've extracted a specific array of available times into a new variable and removed a specific time from that array. I have done no modifications on an original data and I would expect at this stage the availableTimesPerDay to remain unmodified.
But that's not what is happening. A mere assignment of an array to a new variable does not create a new array. The new variable will reference the same array. So whatever mutation you bring to that array will be visible whether you look at that array via currentAvailableTimesOnDate or via availableTimesPerDay[bookedDate.getDate()]: they are just different ways to see the same array object.
If you don't want that splice to affect availableTimesPerDay[bookedDate.getDate()], then you must take a copy:
let currentAvailableTimesOnDate = [...availableTimesPerDay[bookedDate.getDate()]];
What's even stranger is that the modification is not just strictly done on the 1st element, but on all specific dates that have the same day of the week.
This would suggest that you have initialised availableTimesPerDay with a similar misunderstanding, so that all entries in that array reference the same array. This could for instance happen when you had initialised it as follows:
let availableTimesPerDay = Array(7).fill( ["10:00","10:30","11:00","11:30","12:00"]);
This creates one array ["10:00","10:30","11:00","11:30","12:00"] and populates the outer array with duplicate references to that array.
You should solve that too, and do something like this:
let availableTimesPerDay = Array.from({length: 7}, () =>
["10:00","10:30","11:00","11:30","12:00"]
);
Now that array literal is evaluated 7 times, each time producing a new array.
A:
It seems like you might be under the mistaken assumption that this code:
let currentAvailableTimesOnDate = availableTimesPerDay[bookedDate.getDate()];
makes a copy of the array and you are then operating on the copy, not the original array. But that's not the case. You're essentially just aliasing the same array and then operating on it. To demonstrate:
const availableTimesPerDay = {
1: ["10:00","10:30","11:00","11:30","12:00"],
2: ["10:00","10:30","11:00","11:30","12:00"],
3: ["10:00","10:30","11:00","11:30","12:00"],
};
const currentAvailableTimesOnDate = availableTimesPerDay[1];
currentAvailableTimesOnDate.splice(0, 100);
console.log(availableTimesPerDay[1]);
If you run this code in the browser console, it will log an empty array, even though you "do no operations with availableTimesPerDay object itself."
To copy the array, you have at least a few options:
const currentAvailableTimesOnDate = availableTimesPerDay[1].slice();
// OR
const currentAvailableTimesOnDate = [...availableTimesPerDay[1]];
// OR
const currentAvailableTimesOnDate = Array.from(availableTimesPerDay[1]);
Using any of the above code, you would then be operating on a copy of the array, not the original one.
Regarding the day-of-week thing, that sounds to me like you are using getDay() instead of getDate() somewhere, though I do not see that in your code, and in fact you say you do not see that in the browser console. I don't have a clear answer for that but could it be that at one point you had getDay() and you are accidentally running an older version of the code that is different from what you are showing here and testing in the console?
| Javascript overwrites the array in several places within an object | Sorry for the strange title, but I've come across an issue that is plain weird. To give some background, I'm working on a booking system that takes a time range as an input from the admin, generates available times based on it, and then reduces the available times based on already-made bookings (i.e. the admin specifies availability from 10:00 to 12:00, a booking has been made for 11:30, so the available times will be times = [10:00, 10:30, 11:00, 12:00]).
I have an object that contains per month for each day the available times.
availableTimesPerDay: {
1: ["10:00","10:30","11:00","11:30","12:00"],
2: ["10:00","10:30","11:00","11:30","12:00"],
3: ["10:00","10:30","11:00","11:30","12:00"],
....
}
Where the number represents the date for the given month.
Bookings are represented as an array of objects, format is:
bookedTimes = [
{
date: "2022-12-01T11:30:00.000+02:00"
}
];
I planned to have a function which would iterate through each booking and remove the availability for that time on a given date (based on example above, 11:30 would need to be removed from availableTimesPerDay[1] leaving the value for it as ["10:00","10:30","11:00","12:00"]
The function itself is defined as such:
function reduceAvailableTimesBasedOnDateTime(availableTimesPerDay,bookedTimes){
console.log(JSON.stringify(availableTimesPerDay));
bookedTimes.forEach((bookedDateObject) => {
let bookedDate = new Date(bookedDateObject.date);
// 1
let currentAvailableTimesOnDate = availableTimesPerDay[bookedDate.getDate()];
// ["10:00","10:30","11:00","11:30","12:00"]
let bookedTime = bookedDate.toLocaleTimeString('et');
// "13:30:00"
let time = bookedTime.substring(0,bookedTime.length - 3);
// "13:30"
let index = currentAvailableTimesOnDate.indexOf(time);
// 3
if (index > -1) {
currentAvailableTimesOnDate.splice(index, 1);
// ["10:00","10:30","11:00","12:00"]
}
})
console.log(JSON.stringify(availableTimesPerDay));
return availableTimesPerDay;
}
The way I understand this function is that I've extracted a specific array of available times into a new variable and removed a specific time from that array. I have made no modifications to the original data, and I would expect at this stage availableTimesPerDay to remain unmodified. However, when I run my code, availableTimesPerDay is modified even though I perform no operations on the availableTimesPerDay object itself.
What's even stranger is that the modification is not just strictly done on the 1st element, but on all specific dates that have the same day of the week. Here's the output from the console for the console.log(availableTimesPerDay) defined in the function (note that the 11:30 value is removed on the 1st of December, 8th of December, 15th of December, etc.).
booking-helper.js:94 {"1":["10:00","10:30","11:00","11:30","12:00"],"2":[],"3":[],"4":[],"5":[],"6":[],"7":[],"8":["10:00","10:30","11:00","11:30","12:00"],"9":[],"10":[],"11":[],"12":[],"13":[],"14":[],"15":["10:00","10:30","11:00","11:30","12:00"],"16":[],"17":[],"18":[],"19":[],"20":[],"21":[],"22":["10:00","10:30","11:00","11:30","12:00"],"23":[],"24":[],"25":[],"26":[],"27":[],"28":[],"29":["10:00","10:30","11:00","11:30","12:00"],"30":[],"31":[]}
booking-helper.js:105 {"1":["10:00","10:30","11:00","12:00"],"2":[],"3":[],"4":[],"5":[],"6":[],"7":[],"8":["10:00","10:30","11:00","12:00"],"9":[],"10":[],"11":[],"12":[],"13":[],"14":[],"15":["10:00","10:30","11:00","12:00"],"16":[],"17":[],"18":[],"19":[],"20":[],"21":[],"22":["10:00","10:30","11:00","12:00"],"23":[],"24":[],"25":[],"26":[],"27":[],"28":[],"29":["10:00","10:30","11:00","12:00"],"30":[],"31":[]}
What's even more interesting is that if I copy the same function to CodePen with the same data or call it directly from the browser's console, it works as expected - it removes the specific time from a specific date.
| [
"\nThe way I understand this function is that I've extracted a specific array of available times into a new variable and removed a specific time from that array. I have done no modifications on an original data and I would expect at this stage the availableTimesPerDay to remain unmodified.\n\nBut that's not what is happening. A mere assignment of an array to a new variable does not create a new array. The new variable will reference the same array. So whatever mutation you bring to that array will be visible whether you look at that array via currentAvailableTimesOnDate or via availableTimesPerDay[bookedDate.getDate()]: they are just different ways to see the same array object.\nIf you don't want that splice to affect availableTimesPerDay[bookedDate.getDate()], then you must take a copy:\nlet currentAvailableTimesOnDate = [...availableTimesPerDay[bookedDate.getDate()]];\n\n\nWhat's even stranger is that the modification is not just strictly done on the 1st element, but on all specific dates that have the same day of the week.\n\nThis would suggest that you have initialise availableTimesPerDay with a similar misunderstanding, so that all entries in that array reference the same array. This could for instance happen when you had initialised it as follows:\nlet availableTimesPerDay = Array(7).fill( [\"10:00\",\"10:30\",\"11:00\",\"11:30\",\"12:00\"]);\n\nThis creates one array [\"10:00\",\"10:30\",\"11:00\",\"11:30\",\"12:00\"] and populates the outer array with duplicate references to that array.\nYou should solve that too, and do something like this:\nlet availableTimesPerDay = Array.from({length: 7}, () => \n [\"10:00\",\"10:30\",\"11:00\",\"11:30\",\"12:00\"]\n);\n\nNow that array literal is evaluated 7 times, each time producing a new array.\n",
"It seems like you might be under the mistaken assumption that this code:\nlet currentAvailableTimesOnDate = availableTimesPerDay[bookedDate.getDate()];\n\nmakes a copy of the array and you are then operating on the copy, not the original array. But that's not the case. You're essentially just aliasing the same array and then operating on it. To demonstrate:\nconst availableTimesPerDay = {\n 1: [\"10:00\",\"10:30\",\"11:00\",\"11:30\",\"12:00\"],\n 2: [\"10:00\",\"10:30\",\"11:00\",\"11:30\",\"12:00\"],\n 3: [\"10:00\",\"10:30\",\"11:00\",\"11:30\",\"12:00\"],\n};\n\nconst currentAvailableTimesOnDate = availableTimesPerDay[1];\ncurrentAvailableTimesOnDate.splice(0, 100);\n\nconsole.log(availableTimesPerDay[1]);\n\nIf you run this code in the browser console, it will log an empty array, even though you \"do no operations with availableTimesPerDay object itself.\"\nTo copy the array, you have at least a few options:\nconst currentAvailableTimesOnDate = availableTimesPerDay[1].slice();\n// OR\nconst currentAvailableTimesOnDate = [...availableTimesPerDay[1]];\n// OR\nconst currentAvailableTimesOnDate = Array.from(availableTimesPerDay[1]);\n\nUsing any of the above code, you would then be operating on a copy of the array, not the original one.\nRegarding the day-of-week thing, that sounds to me like you are using getDay() instead of getDate() somewhere, though I do not see that in your code, and in fact you say you do not see that in the browser console. I don't have a clear answer for that but could it be that at one point you had getDay() and you are accidentally running an older version of the code that is different from what you are showing here and testing in the console?\n"
] | [
3,
0
] | [
"You are correct that you cannot call getTime() or toISOString() on a string. These methods can only be called on a Date object.\nWhen you create a new Date object from a string using the new Date() constructor, JavaScript automatically parses the string and creates a new Date object with the date and time represented by the string. However, as I mentioned earlier, this creates a reference to the original Date object, which can cause issues like the one you are experiencing.\nOne solution to this problem is to use the Date.parse() method to parse the string and create a new Date object with the date and time represented by the string. This method returns the number of milliseconds since the Unix epoch (January 1, 1970) for the date and time represented by the string, which you can then use to create a new Date object using the new Date() constructor. Here is an example:\nfunction reduceAvailableTimesBasedOnDateTime(availableTimesPerDay,bookedTimes){\n console.log(JSON.stringify(availableTimesPerDay));\n bookedTimes.forEach((bookedDateObject) => {\n // Parse the string using the `Date.parse()` method\n let bookedDateMilliseconds = Date.parse(bookedDateObject.date);\n // Create a new Date object using the parsed date and time\n let bookedDate = new Date(bookedDateMilliseconds);\n let currentAvailableTimesOnDate = availableTimesPerDay[bookedDate.getDate()];\n let bookedTime = bookedDate.toLocaleTimeString('et');\n let time = bookedTime.substring(0,bookedTime.length - 3);\n let index = currentAvailableTimesOnDate.indexOf(time);\n if (index > -1) { \n currentAvailableTimesOnDate.splice(index, 1);\n }\n })\n console.log(JSON.stringify(availableTimesPerDay));\n return availableTimesPerDay;\n}\n\nAnother solution is to use the Date.parse() method to parse the string and create a new Date object with the date and time represented by the string, and then use the getTime() method to create a new Date object with the same date and time as the original object. 
Here is an example:\nfunction reduceAvailableTimesBasedOnDateTime(availableTimesPerDay,bookedTimes){\n console.log(JSON.stringify(availableTimesPerDay));\n bookedTimes.forEach((bookedDateObject) => {\n // Parse the string using the `Date.parse()` method\n let bookedDateMilliseconds = Date.parse(bookedDateObject.date);\n // Create a new Date object using the parsed date and time\n let bookedDate = new Date(bookedDateMilliseconds);\n // Create a new Date object using the original object's `getTime()` method\n let newBookedDate = new Date(bookedDate.getTime());\n let currentAvailableTimesOnDate = availableTimesPerDay[newBookedDate.getDate()];\n let bookedTime = newBookedDate.toLocaleTimeString('et');\n let time = bookedTime.substring(0,bookedTime.length - 3);\n let index = currentAvailableTimesOnDate.indexOf(time);\n if (index > -1) { \n currentAvailableTimesOnDate.splice(index, 1);\n }\n })\n console.log(JSON.stringify(availableTimesPerDay));\n return availableTimesPerDay;\n}\n\n"
] | [
-1
] | [
"javascript"
] | stackoverflow_0074678167_javascript.txt |
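The aliasing pitfall discussed in the answers above can be reproduced in a short standalone script (plain Node.js; the variable names follow the answers, but the data is a minimal made-up sample):

```javascript
// A plain assignment copies the reference, not the array:
const availableTimesPerDay = { 1: ["10:00", "10:30", "11:00"] };
const alias = availableTimesPerDay[1];       // same array object
alias.splice(0, 1);                          // mutates the shared array
console.log(availableTimesPerDay[1].length); // logs 2: the "original" shrank too

// Taking a real copy with spread decouples the two:
const copy = [...availableTimesPerDay[1]];
copy.splice(0, 1);
console.log(availableTimesPerDay[1].length); // still 2, only `copy` changed

// Array(7).fill(x) shares one inner array across every slot:
const shared = Array(3).fill(["10:00"]);
shared[0].push("10:30");
console.log(shared[2].length); // logs 2: every slot sees the push

// Array.from with a factory creates a fresh inner array per slot:
const independent = Array.from({ length: 3 }, () => ["10:00"]);
independent[0].push("10:30");
console.log(independent[2].length); // logs 1: slots are independent
```

Running this under Node shows both effects the answers describe: the `splice` through the alias is visible through the original reference, and `fill` duplicates a single array reference rather than creating seven arrays.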
Q:
Execution failed for task ':app:compileFlutterBuildDebug'. > Process 'command '/Users/cit/flutter/bin/flutter'' finished with non-zero exit value 1
Can anyone please help me with this error?
FAILURE: Build failed with an exception.
Where:
Script '/Users/cit/flutter/packages/flutter_tools/gradle/flutter.gradle' line: 1159
What went wrong:
Execution failed for task ':app:compileFlutterBuildDebug'.
Process 'command '/Users/cit/flutter/bin/flutter'' finished with non-zero exit value 1
Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
Get more help at https://help.gradle.org
BUILD FAILED in 12s
Exception: Gradle task assembleDebug failed with exit code 1
A:
run flutter clean then flutter pub get
A:
I tried to run an app purchased on CodeCanyon: same problem, same line. I tried everything I found on Google, but none of it works.
| Execution failed for task ':app:compileFlutterBuildDebug'. > Process 'command '/Users/cit/flutter/bin/flutter'' finished with non-zero exit value 1 | Can anyone please help me with this error?
FAILURE: Build failed with an exception.
Where:
Script '/Users/cit/flutter/packages/flutter_tools/gradle/flutter.gradle' line: 1159
What went wrong:
Execution failed for task ':app:compileFlutterBuildDebug'.
Process 'command '/Users/cit/flutter/bin/flutter'' finished with non-zero exit value 1
Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
Get more help at https://help.gradle.org
BUILD FAILED in 12s
Exception: Gradle task assembleDebug failed with exit code 1
| [
"run flutter clean then flutter pub get\n",
"i tried to run that purchased on codecanion.same problem same line.i tried all that searched on google it all doesn't work\n"
] | [
1,
0
] | [] | [] | [
"dart",
"dart_pub",
"flutter",
"flutter_dependencies",
"flutter_test"
] | stackoverflow_0074327351_dart_dart_pub_flutter_flutter_dependencies_flutter_test.txt |
Q:
Bootstrap styling working but not the same looking anymore
<button type="button" class="btn btn-info">Info</button>
This is from getBootstrap.com
<a id="manage" class="btn btn-info" style="margin:2px;" asp-area="Identity" asp-page="/Account/Manage/Index" title="Manage">@UserManager.GetUserName(User)!</a>
This is from my web app.
Hi, I am not sure what is happening here. I am using Bootstrap styling and used the same btn-info, but the output is different, as shown above.
The strange thing is that this was working fine, with the same color, when I last used it a week ago; then I came back to the project today to continue working on it, and suddenly it is different.
Maybe I did something without knowing, I thought, so I went back one revision, then two, but the color of the button is still the same, slightly brighter.
I launched the app on both Chrome and Edge; no difference.
Has anyone had such experience before?
====================
(Update1 09 Jul 2021)
<link rel="stylesheet" type="text/css" href="~/css/site.css" />
<link rel="stylesheet" type="text/css" href="~/lib/bootstrap/dist/css/bootstrap.min.css" />
<link rel="stylesheet" type="text/css" href="~/lib/datatables/DataTables-1.10.25/css/dataTables.bootstrap5.min.css" />
<script src="~/js/site.js" asp-append-version="true"></script>
<script type="text/javascript" src="~/lib/jquery/dist/jquery.min.js"></script>
<script type="text/javascript" src="~/lib/bootstrap/dist/js/bootstrap.bundle.min.js"></script>
<script type="text/javascript" src="~/lib/datatables/DataTables-1.10.25/js/jquery.dataTables.min.js"></script>
<script type="text/javascript" src="~/lib/datatables/DataTables-1.10.25/js/dataTables.bootstrap5.min.js"></script>
I am using downloaded version of Bootstrap and above is the script code.
<button class="btn btn-outline-primary btn-sm" style="margin:3px;" data-toggle="collapse" data-target="#row_op" onclick="c => this.Collapsed = !this.Collapsed">
Detail Expand/Collapse
</button>
<div id="row_op" class="collapse">
<div class="row">
I was doing some troubleshooting yesterday, and I think that strange color is only an indication that something is not working properly. I am also using data-target and collapse, but they are not working at the moment.
What I am really struggling to understand is that I deleted the current version of the project file completely and downloaded a commit I made a month ago (that is, five revisions ago), then ran it; the strange color was still there, and collapse did not work either.
This seemed to me like an issue with the browser, but trying this on both Chrome and Edge was the first thing I did.
====================
(Update2 09 Jul 2021)
I remembered that I had a published copy on my laptop for server testing. So I launched it on the laptop, and everything was normal: the color of the button, spacing, and collapse were all functioning.
Then I copied the entire folder to my desktop and launched it again; Bootstrap was still not working properly. I checked the version of Chrome, 91.0.4472.124, the same on both computers.
At least this confirms that the code is fine, but the issue is even more troublesome, because now I have found that my app can behave differently on different computers for a totally unknown reason.
But what else could affect behavior of the code on a browser, if it is not the code or the browser?
A:
On Visual Studio 2019 Community, I switched the test launch option from IIS Express to my app, and everything is working fine again. So the combination of what I use for testing affects the result.
My conclusion is that the user environment needs to be controlled, but I don't think there is anything that can be done differently during development.
A:
The second image of the button (blue background with black text) is the official Bootstrap info button design. The issue here might be a third-party extension distorting your view of the Bootstrap main page. Be sure to test it in a no-extension incognito mode.
| Bootstrap styling working but not the same looking anymore |
<button type="button" class="btn btn-info">Info</button>
This is from getBootstrap.com
<a id="manage" class="btn btn-info" style="margin:2px;" asp-area="Identity" asp-page="/Account/Manage/Index" title="Manage">@UserManager.GetUserName(User)!</a>
This is from my web app.
Hi, I am not sure what is happening here. I am using Bootstrap styling and used the same btn-info, but the output is different, as shown above.
The strange thing is that this was working fine, with the same color, when I last used it a week ago; then I came back to the project today to continue working on it, and suddenly it is different.
Maybe I did something without knowing, I thought, so I went back one revision, then two, but the color of the button is still the same, slightly brighter.
I launched the app on both Chrome and Edge; no difference.
Has anyone had such experience before?
====================
(Update1 09 Jul 2021)
<link rel="stylesheet" type="text/css" href="~/css/site.css" />
<link rel="stylesheet" type="text/css" href="~/lib/bootstrap/dist/css/bootstrap.min.css" />
<link rel="stylesheet" type="text/css" href="~/lib/datatables/DataTables-1.10.25/css/dataTables.bootstrap5.min.css" />
<script src="~/js/site.js" asp-append-version="true"></script>
<script type="text/javascript" src="~/lib/jquery/dist/jquery.min.js"></script>
<script type="text/javascript" src="~/lib/bootstrap/dist/js/bootstrap.bundle.min.js"></script>
<script type="text/javascript" src="~/lib/datatables/DataTables-1.10.25/js/jquery.dataTables.min.js"></script>
<script type="text/javascript" src="~/lib/datatables/DataTables-1.10.25/js/dataTables.bootstrap5.min.js"></script>
I am using downloaded version of Bootstrap and above is the script code.
<button class="btn btn-outline-primary btn-sm" style="margin:3px;" data-toggle="collapse" data-target="#row_op" onclick="c => this.Collapsed = !this.Collapsed">
Detail Expand/Collapse
</button>
<div id="row_op" class="collapse">
<div class="row">
I was doing some troubleshooting yesterday, and I think that strange color is only an indication that something is not working properly. I am also using data-target and collapse, but they are not working at the moment.
What I am really struggling to understand is that I deleted the current version of the project file completely and downloaded a commit I made a month ago (that is, five revisions ago), then ran it; the strange color was still there, and collapse did not work either.
This seemed to me like an issue with the browser, but trying this on both Chrome and Edge was the first thing I did.
====================
(Update2 09 Jul 2021)
I remembered that I had a published copy on my laptop for server testing. So I launched it on the laptop, and everything was normal: the color of the button, spacing, and collapse were all functioning.
Then I copied the entire folder to my desktop and launched it again; Bootstrap was still not working properly. I checked the version of Chrome, 91.0.4472.124, the same on both computers.
At least this confirms that the code is fine, but the issue is even more troublesome, because now I have found that my app can behave differently on different computers for a totally unknown reason.
But what else could affect behavior of the code on a browser, if it is not the code or the browser?
| [
"On Visual Studio 2019 community, I switched test launch option from IIS express to my app, everything is working fine again. So the combination of what I use for testing affects the result.\nMy conclusion is that the user environment needs to be controlled but I don't think there is anything that can be done differently during development.\n",
"Second image of button (blue background with black text) is official bootstrap info button design. The issue here might be a third party extension distorting your view of bootstrap main page. Be sure to test it in - no extension incognito mode.\n"
] | [
0,
0
] | [] | [] | [
".net_5",
"css",
"html",
"twitter_bootstrap"
] | stackoverflow_0068309404_.net_5_css_html_twitter_bootstrap.txt |
Q:
React-bootstrap-table2-paginator issue (changing page renders the previous and next page together)
After going to the next table-page, the table renders the next page's rows along with the previous page's rows, like this:
My entire component looks like this:
import React from "react";
import { MembersData } from "../temp-data/MembersData";
import BootstrapTable from "react-bootstrap-table-next";
import paginationFactory from "react-bootstrap-table2-paginator";
function MemebersTable() {
const columns = [
{
dataField: "id",
text: "ID",
sort: true,
},
{
dataField: "name",
text: "Name",
sort: true,
},
{
dataField: "bloodType",
text: "Blood Type",
sort: true,
},
{
dataField: "email",
text: "Email",
sort: true,
},
{
dataField: "phone",
text: "Phone",
},
];
return (
<div className="members-table">
<div className="card m-2 shadow">
<div className="card-header text-uppercase fw-bold text-primary">
Members
</div>
<BootstrapTable
keyField="id"
data={MembersData}
columns={columns}
striped
hover
condensed
bordered={false}
pagination={paginationFactory()}
/>
</div>
</div>
);
}
export default MemebersTable;
the next-table page should not contain the previous table-page details.
A:
Actually, I kept the same ID for multiple rows of data; that is why it happened. Keeping the ID unique for every row fixes it.
| React-bootstrap-table2-paginator issue (changing page renders the previous and next page together) |
After going to the next table-page, the table renders the next page's rows along with the previous page's rows, like this:
My entire component looks like this:
import React from "react";
import { MembersData } from "../temp-data/MembersData";
import BootstrapTable from "react-bootstrap-table-next";
import paginationFactory from "react-bootstrap-table2-paginator";
function MemebersTable() {
const columns = [
{
dataField: "id",
text: "ID",
sort: true,
},
{
dataField: "name",
text: "Name",
sort: true,
},
{
dataField: "bloodType",
text: "Blood Type",
sort: true,
},
{
dataField: "email",
text: "Email",
sort: true,
},
{
dataField: "phone",
text: "Phone",
},
];
return (
<div className="members-table">
<div className="card m-2 shadow">
<div className="card-header text-uppercase fw-bold text-primary">
Members
</div>
<BootstrapTable
keyField="id"
data={MembersData}
columns={columns}
striped
hover
condensed
bordered={false}
pagination={paginationFactory()}
/>
</div>
</div>
);
}
export default MemebersTable;
the next-table page should not contain the previous table-page details.
| [
"Actually i kept the ID same for multiple data... thats why it happened.... just keeping the ID unique for all the data.. it will work fine!!!!!!!!!!!!!!!!\n"
] | [
0
] | [] | [] | [
"bootstrap_5",
"react_bootstrap_table",
"react_table",
"reactjs"
] | stackoverflow_0074658725_bootstrap_5_react_bootstrap_table_react_table_reactjs.txt |
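The fix in the answer above (keeping the keyField values unique) can be sketched as a small pre-check on the data before it is handed to the table. The `findDuplicateKeys` helper and the sample rows below are hypothetical illustrations, not part of react-bootstrap-table:

```javascript
// Hypothetical sample data with a duplicate id, mimicking the bug above.
const membersData = [
  { id: 1, name: "Alice" },
  { id: 2, name: "Bob" },
  { id: 2, name: "Carol" }, // duplicate id would confuse keyField-based rendering
];

// Scan rows once, collecting any keyField value seen more than once.
function findDuplicateKeys(rows, keyField) {
  const seen = new Set();
  const dupes = new Set();
  for (const row of rows) {
    const key = row[keyField];
    if (seen.has(key)) dupes.add(key);
    seen.add(key);
  }
  return [...dupes];
}

console.log(findDuplicateKeys(membersData, "id")); // logs [ 2 ]
```

A check like this, run on the data before rendering, surfaces the duplicate-key problem immediately instead of showing up later as pagination glitches.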
Q:
Unable to install Informatica server
Unable to install the Informatica server on Windows 10. After the InstallAnywhere screen, nothing comes up.
ZeroGu6: Windows DLL failed to load
at ZeroGa4.b(DashoA10*..)
at ZeroGa4.b(DashoA10*..)
at com.zerog.ia.installer.LifeCycleManager.b(DashoA10*..)
at com.zerog.ia.installer.LifeCycleManager.a(DashoA10*..)
at com.zerog.ia.installer.Main.main(DashoA10*..)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.zerog.lax.LAX.launch(DashoA10*..)
at com.zerog.lax.LAX.main(DashoA10*..)
A:
While downloading the Informatica package you will have two setups provided. Separate setup would be provided for Informatica 10, and you have to use that setup. Other setup will not be compatible with windows 10.
You can find this inside, V860964-01_1of8 directory
A:
You need to specify the supporting files' location explicitly while extracting using WinRAR.
V860964-01_1of8 -- it has .zip file
V860964-01_2of8 -- it has .z01 file
V860964-01_3of8 -- it has .z02 file
V860964-01_4of8 -- it has .z03 file
V860964-01_5of8 -- it has .z04 file
V860964-01_6of8 -- it has .z05 file
V860964-01_7of8 -- it has .z06 file
V860964-01_8of8 -- it has .z07 file
A:
When you see this error:
! Cannot open D:\Informatica\Informatica\V76290-01_4of4\dac_win_11g_infa_win_64bit_961.zip.z01! Seek error in the file
Unzip the files all in one place (the same folder for all of the files, not in different folders), then try to unzip again. It works for me.
| Unable to install Informatica server | Unable to install the Informatica server on Windows 10. After the InstallAnywhere screen, nothing comes up.
ZeroGu6: Windows DLL failed to load
at ZeroGa4.b(DashoA10*..)
at ZeroGa4.b(DashoA10*..)
at com.zerog.ia.installer.LifeCycleManager.b(DashoA10*..)
at com.zerog.ia.installer.LifeCycleManager.a(DashoA10*..)
at com.zerog.ia.installer.Main.main(DashoA10*..)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.zerog.lax.LAX.launch(DashoA10*..)
at com.zerog.lax.LAX.main(DashoA10*..)
| [
"While downloading the Informatica package you will have two setups provided. Separate setup would be provided for Informatica 10, and you have to use that setup. Other setup will not be compatible with windows 10.\n \nYou can find this inside, V860964-01_1of8 directory\n",
"You need to specify the supporting files location explicitely while extracting using Winrar.\nV860964-01_1of8 -- it has .zip file\nV860964-01_2of8 -- it has .z01 file\nV860964-01_3of8 -- it has .z02 file\nV860964-01_4of8 -- it has .z03 file\nV860964-01_5of8 -- it has .z04 file\nV860964-01_6of8 -- it has .z05 file\nV860964-01_7of8 -- it has .z06 file\nV860964-01_8of8 -- it has .z07 file\n\n",
"When you see this error:\n! Cannot open D:\\Informatica\\Informatica\\V76290-01_4of4\\dac_win_11g_infa_win_64bit_961.zip.z01! Seek error in the file\n\nUnzip files all in one place (not in different folders, the same folder for all the files)\nthen try to unzip again.\nit works for me.\n\n"
] | [
0,
0,
0
] | [] | [] | [
"informatica_powercenter"
] | stackoverflow_0062257051_informatica_powercenter.txt |
Q:
FormData is not defined in React Jest
I am writing unit testing code for a React project. I am trying to test one function:
//function aa
export const login = (values) => async (dispatch) => {
  let bodyFormData = new FormData();
  bodyFormData.append('username', values.login);
  bodyFormData.append('password', values.password);
  return await axios({
    method: 'post',
    url: url,
    data: bodyFormData
  });
};
//aa test
it("Login Action", async () => {
afterEach(() => {
store.clearActions();
});
const values = {
login: "aaaaa",
password: "bbbbb"
};
const expectedResult = { type: "LOGIN_PASS" };
const result = await store.dispatch(login(values));
expect(result).toEqual(expectedResult);
});
In the browser, this works OK, but when testing I get the error below:
ReferenceError: FormData is not defined
I tried to use this module but no luck...
https://www.npmjs.com/package/form-data
I do not want to just test axios, I need to test full function.
A:
You will need to mock FormData within your unit test, as the FormData web API is not available in the node.js/jsdom environment.
function FormDataMock() {
this.append = jest.fn();
}
global.FormData = FormDataMock
If you wish to mock other methods within the FormData global:
const entries = jest.fn()
global.FormData = () => ({ entries })
A:
I was also facing this issue and it turned out that testEnvironment (inside jest.config.js) was set to 'node'. Changing it to 'jsdom' resolved it.
A:
You need to mock FormData for the same reason; simply add the lines below at the top of the test file.
// @ts-ignore
global.FormData = require('react-native/Libraries/Network/FormData');
| FormData is not defined in React Jest | I am writing unit testing code for a React project. I am trying to test one function:
//function aa
export const login = (values) => async (dispatch) => {
  let bodyFormData = new FormData();
  bodyFormData.append('username', values.login);
  bodyFormData.append('password', values.password);
  return await axios({
    method: 'post',
    url: url,
    data: bodyFormData
  });
};
//aa test
it("Login Action", async () => {
afterEach(() => {
store.clearActions();
});
const values = {
login: "aaaaa",
password: "bbbbb"
};
const expectedResult = { type: "LOGIN_PASS" };
const result = await store.dispatch(login(values));
expect(result).toEqual(expectedResult);
});
In the browser, this works OK, but when testing I get the error below:
ReferenceError: FormData is not defined
I tried to use this module but no luck...
https://www.npmjs.com/package/form-data
I do not want to just test axios, I need to test full function.
| [
"You will need to mock FormData within your unit test, as the FormData web API is not available in the node.js/jsdom environment. \nfunction FormDataMock() {\n this.append = jest.fn();\n}\n\nglobal.FormData = FormDataMock\n\nIf you wish to mock other methods within the FormData global:\nconst entries = jest.fn()\nglobal.FormData = () => ({ entries })\n\n",
"I was also facing this issue and it turned out that testEnvironment (inside jest.config.js) was set to 'node'. Changing it to 'jsdom' resolved it.\n",
"You need to mock the FormData for same, simply add below lines in top of test file.\n// @ts-ignore\nglobal.FormData = require('react-native/Libraries/Network/FormData');\n\n"
] | [
7,
3,
0
] | [] | [] | [
"jestjs",
"reactjs",
"unit_testing"
] | stackoverflow_0059726506_jestjs_reactjs_unit_testing.txt |
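The mocking idea from the first answer can also be sketched without Jest: a plain stand-in class installed on the global object so that code under test can call `new FormData()` under Node. This is a framework-free sketch of the same technique, not the Jest-specific mock from the answer:

```javascript
// Minimal FormData stand-in: records append() calls instead of building
// a real multipart body, which is enough for unit-test assertions.
class FormDataMock {
  constructor() {
    this.entries = [];
  }
  append(key, value) {
    this.entries.push([key, value]);
  }
}

// Install it globally so `new FormData()` works outside the browser.
global.FormData = FormDataMock;

// Code under test can now build form data without a browser environment:
const body = new FormData();
body.append("username", "aaaaa");
body.append("password", "bbbbb");
console.log(body.entries.length); // logs 2
```

In a Jest suite, the same shape works with `jest.fn()` for `append` (as in the first answer) so that calls can be inspected with Jest's mock matchers; the plain class above is simply the dependency-free version of that idea.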