Dataset fields: content (string, 86 to 88.9k chars), title (string, 0 to 150 chars), question (string, 1 to 35.8k chars), answers (sequence), answers_scores (sequence), non_answers (sequence), non_answers_scores (sequence), tags (sequence), name (string, 30 to 130 chars).
Q: Tailwind classes not working with html file I am getting a different font output because of tailwindcss, but the classes aren't showing any output. The files: package.json { "dependencies": { "autoprefixer": "^10.4.1", "postcss": "^8.4.5", "tailwindcss": "^3.0.8" }, "scripts": { "build-css": "tailwindcss build style.css -o css/style.css" } } style.css @tailwind base; @tailwind components; @tailwind utilities; tailwind.config.js module.exports = { content: [], theme: { extend: {}, }, plugins: [], } index.html <html lang="en"> <head> <meta charset="UTF-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Document</title> <link rel="stylesheet" href="css/style.css"> </head> <body> <div> <div> <h1 class="font-sans">Welcome to my tailwind course</h1> <h2 class="font-serif">Created by Varun Prasannan</h2> <p class="font-mono">Lorem ipsum, dolor sit amet consectetur adipisicing elit. Commodi, dignissimos numquam temporibus quisquam optio suscipit neque iure nam ab architecto quaerat similique vero, a aperiam impedit dolores. Autem, iure expedita.</p> </div> </div> </body> </html> As you can see, different font classes are applied, such as class="font-sans", class="font-serif", and class="font-mono", but the output is the same default tailwindcss font. I have a feeling that the paths to some files (that I don't know of) have to be updated in the tailwind.config.js file, but I don't know the exact path format. A: I am not quite sure about this problem, but I recommend following the installation instructions from Tailwind, which I will link here. Besides that, I spotted that the content field in your tailwind config file is empty; it should specify the paths to all of your template files. Here is an example: what this does is point to all HTML and JS files in your src folder. Hope this helps. module.exports = { content: ["./src/*.{html,js}"], theme: { extend: {}, }, plugins: [], } A: Add the HTML file path to the "content" array in the tailwind.config.js file. Here my HTML file is in the public folder and styles.css is in the src folder. An important step is to run this command immediately: npx tailwindcss -i ./src/styles.css -o ./public/styles.css Only then are the Tailwind classes reflected on live-server. module.exports = { content: ["./public/*.{html,js}"], theme: { extend: {}, }, plugins: [], } A: According to the Tailwind docs: tailwind.config.js module.exports = { content: [ './pages/**/*.{html,js}', './components/**/*.{html,js}' ], // ... } Use * to match anything except slashes and hidden files Use ** to match zero or more directories Use comma-separated values between {} to match against a list of options https://tailwindcss.com/docs/content-configuration
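A minimal sketch combining the fixes above for the exact setup in the question, assuming index.html sits in the project root next to tailwind.config.js (the glob is an assumption to adapt to your layout):

// tailwind.config.js - the glob assumes index.html lives in the project root
module.exports = {
  content: ["./*.html"],
  theme: { extend: {} },
  plugins: [],
};

Note that the "build-css" script in the question uses the old tailwindcss build subcommand; with tailwindcss ^3.x the documented CLI form is npx tailwindcss -i ./style.css -o ./css/style.css --watch, so the CSS is regenerated whenever the content files change.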
Tailwind classes not working with html file
I am getting a different font output because of tailwindcss, but the classes aren't showing any output. The files: package.json { "dependencies": { "autoprefixer": "^10.4.1", "postcss": "^8.4.5", "tailwindcss": "^3.0.8" }, "scripts": { "build-css": "tailwindcss build style.css -o css/style.css" } } style.css @tailwind base; @tailwind components; @tailwind utilities; tailwind.config.js module.exports = { content: [], theme: { extend: {}, }, plugins: [], } index.html <html lang="en"> <head> <meta charset="UTF-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Document</title> <link rel="stylesheet" href="css/style.css"> </head> <body> <div> <div> <h1 class="font-sans">Welcome to my tailwind course</h1> <h2 class="font-serif">Created by Varun Prasannan</h2> <p class="font-mono">Lorem ipsum, dolor sit amet consectetur adipisicing elit. Commodi, dignissimos numquam temporibus quisquam optio suscipit neque iure nam ab architecto quaerat similique vero, a aperiam impedit dolores. Autem, iure expedita.</p> </div> </div> </body> </html> As you can see, different font classes are applied, such as class="font-sans", class="font-serif", and class="font-mono", but the output is the same default tailwindcss font. I have a feeling that the paths to some files (that I don't know of) have to be updated in the tailwind.config.js file, but I don't know the exact path format.
[ "I am not quite sure with this problem, but I recommend you to follow the installation instructions from Tailwind, which I will put it here\nBesides I spotted the content field in your tailwind config file is empty, in which you should have specified the path to all of your template files. Here is an example, what this does is pointing to all html and JS file in your src folder. Hope this might help\nmodule.exports = {\n content: [\"./src/*.{html,js}\"],\n theme: {\n extend: {},\n },\n plugins: [],\n}\n\n", "Add html file path to tailwind.config.js file \"content\" tab. Here my HTML file is in public folder and styles.css in src folder. And important step is run this command immediately.\nnpx tailwindcss -i ./src/styles.css -o ./public/styles.css\nthen only tailwind classes are reflecting on live-server\nmodule.exports = {\n content: [\"./public/*.{html,js}\"],\n theme: {\n extend: {},\n },\n plugins: [],\n}\n\n", "According to Tailwind docs:\ntailwind.config.js\nmodule.exports = {\n content: [\n './pages/**/*.{html,js}',\n './components/**/*.{html,js}'\n ],\n // ...\n}\n\n\nUse * to match anything except slashes and hidden files\nUse ** to match zero or more directories\nUse comma-separate values between {} to match against a list of options\n\nhttps://tailwindcss.com/docs/content-configuration\n" ]
[ 1, 0, 0 ]
[]
[]
[ "css", "html", "tailwind_css" ]
stackoverflow_0070528494_css_html_tailwind_css.txt
Q: throw new TypeError('OAuth2Strategy requires a clientID option'); } This is the error I am getting; what should I do? I have created a socket.io chat application. This is a chat application using Node.js and socket.io, with package.json as - Package.json "dependencies": { "body-parser": "~1.15.1", "connect-flash": "^0.1.1", "connect-mongo": "^1.3.2", "cookie-parser": "^1.4.3", "debug": "~2.3.2", "escape-html": "^1.0.3", "express": "~4.14.0", "express-session": "^1.14.2", "hbs": "~4.0.0", "mongoose": "^4.6.8", "morgan": "~1.7.0", "passport": "^0.3.2", "passport-facebook": "^2.1.1", "passport.socketio": "^3.7.0", "serve-favicon": "~2.3.0", "shortid": "^2.2.6", "socket.io": "^1.5.1", "twemoji": "^2.2.0" }, "devDependencies": { "babel-preset-es2015": "^6.18.0", "gulp": "^3.9.1", "gulp-autoprefixer": "^3.1.1", "gulp-babel": "^6.1.2", "gulp-clean-css": "^2.0.13", "gulp-htmlmin": "^3.0.0", "gulp-imagemin": "^3.1.1", "gulp-sass": "*", "gulp-uglify": "^2.0.0" } } C:\Users\AAKASH\Desktop\Follower-Github\Chat-app-all-F-2\Babble-master\Babble-master\node_modules\passport-oauth2\lib\strategy.js:82 if (!options.clientID) { throw new TypeError('OAuth2Strategy requires a clientID option'); } ^ TypeError: OAuth2Strategy requires a clientID option at Strategy.OAuth2Strategy (C:\Users\AAKASH\Desktop\Follower-Github\Chat-app-all-F-2\Babble-master\Babble-master\node_modules\passport-oauth2\lib\strategy.js:82:34) at new Strategy (C:\Users\AAKASH\Desktop\Follower-Github\Chat-app-all-F-2\Babble-master\Babble-master\node_modules\passport-facebook\lib\strategy.js:54:18) at Object.<anonymous> (C:\Users\AAKASH\Desktop\Follower-Github\Chat-app-all-F-2\Babble-master\Babble-master\config\passport.js:11:14) at Module._compile (module.js:660:30) at Object.Module._extensions..js (module.js:671:10) at Module.load (module.js:573:32) at tryModuleLoad (module.js:513:12) at Function.Module._load (module.js:505:3) at Module.require (module.js:604:17) at require (internal/module.js:11:18) at Object.<anonymous> (C:\Users\AAKASH\Desktop\Follower-Github\Chat-app-all-F-2\Babble-master\Babble-master\routes\index.js:9:14) at Module._compile (module.js:660:30) at Object.Module._extensions..js (module.js:671:10) at Module.load (module.js:573:32) at tryModuleLoad (module.js:513:12) at Function.Module._load (module.js:505:3) at Module.require (module.js:604:17) at require (internal/module.js:11:18) at Object.<anonymous> (C:\Users\AAKASH\Desktop\Follower-Github\Chat-app-all-F-2\Babble-master\Babble-master\app.js:23:16) at Module._compile (module.js:660:30) at Object.Module._extensions..js (module.js:671:10) at Module.load (module.js:573:32) npm ERR! code ELIFECYCLE npm ERR! errno 1 npm ERR! [email protected] start: `node ./bin/www` npm ERR! Exit status 1 npm ERR! npm ERR! Failed at the [email protected] start script. npm ERR! This is probably not a problem with npm. There is likely additional logging output above. npm ERR! A complete log of this run can be found in: npm ERR! C:\Users\AAKASH\AppData\Roaming\npm-cache\_logs\2018-06-08T03_18_29_207Z-debug.log A: Recently, I faced the same issue and later realized that I had forgotten to require the .env configuration. Error TypeError: OAuth2Strategy requires a clientID option Fix npm i dotenv and then require it at the top of your file require(dotenv).config() The issue is caused because your server has no idea that it has to use the environment variable. A: I had the same error yesterday. The error was TypeError: OAuth2Strategy requires a clientID option I renamed the client id as clientID instead of clientId passport.use(new FacebookStrategy({ clientID: FACEBOOK_CLIENT_ID, // previously was clientId clientSecret: FACEBOOK_CLIENT_SECRET, callbackURL: FACEBOOK_CALLBACK_URL, profileFields: ['id', 'email', 'first_name', 'last_name'], }, UserControllers.facebookCallback, )); A: I had a kind of similar scenario with Passport for Google OAuth2. It was precisely the same error, which eventually turned out to be something trivial. Somewhere in my code, where the clientID keys are inserted (in my case it was in keys.js), instead of exports I had a typo - export. module.exports = require('./something'); Fixing that solved the issue. Hope somebody finds it useful. A: The same thing had happened to me, and then I added require('dotenv').config() in the header and everything worked. I read my clientID from the .env file. A: I had faced the same problem. I then saved the clientId and clientSecret in a .env file GOOGLE_CLIENT_ID = xxxxxxxxxxxxxxxxxx GOOGLE_CLIENT_SECRET = xxxxxxxxxxxxxxxxxx passport.js passport.use( new GoogleStrategy( { clientID: process.env.GOOGLE_CLIENT_ID, clientSecret: process.env.GOOGLE_CLIENT_SECRET, callbackURL: "/auth/google/callback", }, (accessToken, refreshToken, profile, done) => { console.log(profile); done(null, profile); } ) ); user.js router.get('/auth/google', passport.authenticate('google', { scope: ['profile'] })); router.get('/auth/google/callback', passport.authenticate('google', { failureRedirect: '/login' }), function(req, res) { // Successful authentication, redirect home. res.redirect('/article'); }); A: Please be sure that your app can properly read your "clientID" variable. Try printing its value to the console. In my case the result was: console.log(`${clientID}`); undefined I had forgotten to include the module require('dotenv').config() so I was not reading the actual clientID value. Hope this helps you. A: Make sure you set up the environment variable properly in your main file. Add this before including the passport.js file in your server or app.js file require('dotenv').config() A: Apparently this error says it cannot find the clientID value, which was expected to come from the .env definitions. Working with Node.js, using : instead of the expected = in .env causes this issue as well. So ... clientID=12345..... then was OK for me. A: Your dependencies do not help. You need to post the code where you create your Passport Facebook Strategy object. (I would XXXX out your personal codes/keys/ClientId.) I had this error as well literally just now, which is why I am here on this question. I solved mine, and hearing how my issue came up may help you. It is very bad practice to put security codes/keys/whatever in a source code repository exposed to the wild.
It is a good practice to keep a file with your codes, include it with the build as a reference, and use the keys out of it, but not check it in to the repo; instead, manually copy it from development computer to computer. In a much larger environment, you may just have it located on a common network drive instead, or more likely on a source code repository not hosted in the wild. In my case, I was hopping from my main dev computer, a laptop, to my much faster desktop. I had not been on the desktop in months, and just did a plain checkout and then compile/run. I got this error. I had forgotten to copy over the key file to run dev. (Interestingly, I could absolutely run a build and push to the live server, since it runs on a prod key file that sits on the server.) If this is not your specific situation, then you need to either post your code (feel free to PM me if you want) or look at how you are creating your FacebookStrategy; your problem lies there. You are probably just forgetting to include your Facebook ClientID in the call. Cheers and good luck! A: Changing consumer to client solved the issue. passport.use( new GoogleStrategy( { clientID: GOOGLE_CLIENT_ID, clientSecret: GOOGLE_CLIENT_SECRET, callbackURL: "http://www.example.com/auth/google/callback" }, function(token, tokenSecret, profile, done) { authUser(profile, done) } ) ); A: This error might also occur if you are placing multiple strategies in the same 'passport.js' file; try making a different file for each strategy. Try this when everything else is fine and the code still doesn't work. A: Steps to follow: Login to your Heroku account. Under the personal tab, click the current application you are working on. Go to settings. Select "Reveal Config vars". Add all the key-value pairs as you entered for your local environment. Save the values. Redeploy your application. After putting in hours, this finally helped me. A: Remember to add new Config vars for GOOGLE_CLIENT_ID and GOOGLE_CLIENT_SECRET in Heroku or any other provider if you have not. I had this error too and realized I had forgotten to add new Config vars to Heroku. A: You haven't posted your code along with your error, but as per the error message it seems like your code can't properly access the clientID. If you are storing your clientID in a .env file, use the dotenv module and configure it in your file with require("dotenv").config(). I hope this solves your problem. Another reason can be that you are not passing it as a string. The clientID key in the options expects a string as its value. A: I had the same issue due to the fact that I set the key of the key-value pair of the environment variables in Heroku as env.process.VARIABLE_KEY and not just as VARIABLE_KEY. A: I just had the same error. I fixed it by simply moving my ".env" file to the project path. A: This happened with me also; I had a typo and spelled CLIENT as CLENT in the env file, so make sure you don't have any typing mistakes. A: require('dotenv').config() It's due to the .env file. I recently came across this issue and was looking for the solution everywhere since I had required and installed dotenv, but I still faced this issue: TypeError: OAuth2Strategy requires a clientID option. Later I laughed at myself for a silly mistake I made inside my .env file, which was that I had accidentally used ':' instead of '='.
Check around for typos. A: Recently, I faced the same issue and later realized that in this line of code require(dotenv).config() dotenv should be inside quotes, like this: require('dotenv').config() . A: The same thing had happened to me, and then I added: require('dotenv').config() Then, add the client ID and secret. The contents of .env should look as follows. GOOGLE_CLIENT_ID=__INSERT_CLIENT_ID_HERE__ GOOGLE_CLIENT_SECRET=__INSERT_CLIENT_SECRET_HERE__
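A minimal sketch pulling the recurring fixes from this thread together: load dotenv before any file that reads process.env, spell the option clientID, and fail fast when the variables are missing. The variable names and callback URL are assumptions:

// app.js / config/passport.js
require('dotenv').config(); // must run before process.env is read
const passport = require('passport');
const FacebookStrategy = require('passport-facebook').Strategy;

// Fail fast with a clearer message than the OAuth2Strategy TypeError
if (!process.env.FACEBOOK_CLIENT_ID || !process.env.FACEBOOK_CLIENT_SECRET) {
  throw new Error('Missing FACEBOOK_CLIENT_ID / FACEBOOK_CLIENT_SECRET in .env');
}

passport.use(new FacebookStrategy({
  clientID: process.env.FACEBOOK_CLIENT_ID, // note: clientID, not clientId
  clientSecret: process.env.FACEBOOK_CLIENT_SECRET,
  callbackURL: '/auth/facebook/callback', // placeholder
}, (accessToken, refreshToken, profile, done) => done(null, profile)));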
throw new TypeError('OAuth2Strategy requires a clientID option'); }
This is the error I am getting; what should I do? I have created a socket.io chat application. This is a chat application using Node.js and socket.io, with package.json as - Package.json "dependencies": { "body-parser": "~1.15.1", "connect-flash": "^0.1.1", "connect-mongo": "^1.3.2", "cookie-parser": "^1.4.3", "debug": "~2.3.2", "escape-html": "^1.0.3", "express": "~4.14.0", "express-session": "^1.14.2", "hbs": "~4.0.0", "mongoose": "^4.6.8", "morgan": "~1.7.0", "passport": "^0.3.2", "passport-facebook": "^2.1.1", "passport.socketio": "^3.7.0", "serve-favicon": "~2.3.0", "shortid": "^2.2.6", "socket.io": "^1.5.1", "twemoji": "^2.2.0" }, "devDependencies": { "babel-preset-es2015": "^6.18.0", "gulp": "^3.9.1", "gulp-autoprefixer": "^3.1.1", "gulp-babel": "^6.1.2", "gulp-clean-css": "^2.0.13", "gulp-htmlmin": "^3.0.0", "gulp-imagemin": "^3.1.1", "gulp-sass": "*", "gulp-uglify": "^2.0.0" } } C:\Users\AAKASH\Desktop\Follower-Github\Chat-app-all-F-2\Babble-master\Babble-master\node_modules\passport-oauth2\lib\strategy.js:82 if (!options.clientID) { throw new TypeError('OAuth2Strategy requires a clientID option'); } ^ TypeError: OAuth2Strategy requires a clientID option at Strategy.OAuth2Strategy (C:\Users\AAKASH\Desktop\Follower-Github\Chat-app-all-F-2\Babble-master\Babble-master\node_modules\passport-oauth2\lib\strategy.js:82:34) at new Strategy (C:\Users\AAKASH\Desktop\Follower-Github\Chat-app-all-F-2\Babble-master\Babble-master\node_modules\passport-facebook\lib\strategy.js:54:18) at Object.<anonymous> (C:\Users\AAKASH\Desktop\Follower-Github\Chat-app-all-F-2\Babble-master\Babble-master\config\passport.js:11:14) at Module._compile (module.js:660:30) at Object.Module._extensions..js (module.js:671:10) at Module.load (module.js:573:32) at tryModuleLoad (module.js:513:12) at Function.Module._load (module.js:505:3) at Module.require (module.js:604:17) at require (internal/module.js:11:18) at Object.<anonymous> (C:\Users\AAKASH\Desktop\Follower-Github\Chat-app-all-F-2\Babble-master\Babble-master\routes\index.js:9:14) at Module._compile (module.js:660:30) at Object.Module._extensions..js (module.js:671:10) at Module.load (module.js:573:32) at tryModuleLoad (module.js:513:12) at Function.Module._load (module.js:505:3) at Module.require (module.js:604:17) at require (internal/module.js:11:18) at Object.<anonymous> (C:\Users\AAKASH\Desktop\Follower-Github\Chat-app-all-F-2\Babble-master\Babble-master\app.js:23:16) at Module._compile (module.js:660:30) at Object.Module._extensions..js (module.js:671:10) at Module.load (module.js:573:32) npm ERR! code ELIFECYCLE npm ERR! errno 1 npm ERR! [email protected] start: `node ./bin/www` npm ERR! Exit status 1 npm ERR! npm ERR! Failed at the [email protected] start script. npm ERR! This is probably not a problem with npm. There is likely additional logging output above. npm ERR! A complete log of this run can be found in: npm ERR! C:\Users\AAKASH\AppData\Roaming\npm-cache\_logs\2018-06-08T03_18_29_207Z-debug.log
[ "Recently, I faced the same issue and later I realized that I forgot to require .env configuration.\nError\nTypeError: OAuth2Strategy requires a clientID option\n\nFix\nnpm i dotenv\n\nand then require it at the top of your file\nrequire(dotenv).config()\n\nThe issue is caused because your server has no idea that it has to use the environment variable.\n", "I had the same error yesterday. error was \nTypeError: OAuth2Strategy requires a clientID option\n\nI renamed client id as clientID instead of clientId\npassport.use(new FacebookStrategy({\n clientID: FACEBOOK_CLIENT_ID, // previously was clientId\n clientSecret: FACEBOOK_CLIENT_SECRET,\n callbackURL: FACEBOOK_CALLBACK_URL,\n profileFields: ['id', 'email', 'first_name', 'last_name'],\n},\n UserControllers.facebookCallback,\n));\n\n", "I have had a kind of similar scenario with Passport for GoogleOAuth2 with Google. Precisely the same error which eventually turned out to be something trivial. Somewhere in my code, where the portion cliendID keys are inserted, (in my case it was under keys.js) instead of exportS I had a typo - export.\nmodule.exports = require('./something');\n\nFixing that solved the issue. Hope somebody may find it useful. \n", "Same thing had happened to me and then I added\nrequire('dotenv').config()\n\nin the header and everything worked.\nI read my clientID from .env file.\n", "I had faced the same problem. Then I save clientId and clientSecret in .env file\nGOOGLE_CLIENT_ID = xxxxxxxxxxxxxxxxxx\nGOOGLE_CLIENT_SECRET = xxxxxxxxxxxxxxxxxx\n\npassport.js\npassport.use(\n new GoogleStrategy(\n {\n clientID: process.env.GOOGLE_CLIENT_ID,\n clientSecret: process.env.GOOGLE_CLIENT_SECRET,\n callbackURL: \"/auth/google/callback\",\n },\n (accessToken, refreshToken, profile, done) => {\n console.log(profile);\n done(null, profile);\n }\n )\n);\n\nuser.js\nrouter.get('/auth/google',\n passport.authenticate('google', { scope: ['profile'] }));\n\nrouter.get('/auth/google/callback', \n passport.authenticate('google', { failureRedirect: '/login' }),\n function(req, res) {\n // Successful authentication, redirect home.\n res.redirect('/article');\n });\n\n", "Please be sure that your app can properly read your \"clientID\" variable. Try to read your variable \"clientID\" printing the value to the console. In my case the result was:\nconsole.log(`${clientID}`);\nundefined\n\nI forgot to include the module\nrequire('dotenv').config()\n\nSo I was not reading the actual clientID value.\nHope this could help you.\n", "Make sure you set up the environment variable properly in your main file.\nAdd this before including the passport.js file in your server or app.js file\nrequire('dotenv').config()\n\n", "Apparently this error says, it cannot find the clientID value, which was expected to come from .env definitions.\nWorking with nodejs and using for equal often : instead of = as expected in .env causes this issue as well.\nSo ... clientID=12345..... then was Ok for me.\n", "Your dependencies do not help. You need to post your code where you create your Passport Facebook Strategy object. (I would XXXX out your personal codes/keys/ClientId) \nI had this error as well literally just now, which is why I am here on this question. I solved mine, and it may help you to hear how my issue came up. \nIt is very bad practice to put security codes/keys/whatever in a source code repository exposed to the wild. 
It is a good practice to keep a file with your codes, and include it with the build as a reference, and use the keys out of it, but not check it in to the repo, but instead manually copy it from development computer to computer. In a much larger environment, you may just have it located on a common network drive instead. Or much more likely have it on a source code repository not hosted in the wild. \nIn my case, I was hopping from my main dev computer, a laptop, to my much faster desktop. I have not been on the desktop in months, and just did a plain check out then compile/run. I got this error. I had forgotten to copy over the key file to run dev. (Interestingly, I could absolutely run a build and push to live server, since it runs on a prod key file that sits on the server.) \nIf this is not your specific situation, then you need to either post your code (feel free to PM me if you want) or look at your code on how you are enacting your FacebookStrategy and your problem lies there. You are probably just forgetting to include your Facebook ClientID in the call. \nCheers and good luck!\n", "Changing consumer to client, solved the issue.\n passport.use(\n new GoogleStrategy(\n {\n clientID: GOOGLE_CLIENT_ID,\n clientSecret: GOOGLE_CLIENT_SECRET,\n callbackURL: \"http://www.example.com/auth/google/callback\"\n },\n function(token, tokenSecret, profile, done) {\n authUser(profile, done)\n }\n )\n );\n\n", "This error might also occur if you are placing multiple strategies under the same file 'passport.js', try making a different file for different strategies, try this when everything else is fine and the code still doesn't work.\n", "Steps to follow:\n\nLogin to your Heroku account. Under the personal tab, click the current application you are working on.\n\nGo to settings.\n\nSelect \"Reveal Config vars\".\n\nAdd all the key-value pairs as you entered for your local environment.\n\nSave the values.\n\nRedeploy your application.\n\n\nAfter putting hours finally it helped me.\n", "Remember to add new Config vars for GOOGLE_CLIENT_ID and GOOGLE_CLIENT_SECRET in Heroku or any other providers if you have not. I had this error too and realized I forgot to add new Config vars to Heroku.\n", "You haven't posted your code along with your error, but as per the error message I am seeing it seems like your code can't properly access the clientID. If you are using storing your clientID in a .env file use the dotenv module and configure it in your file by module require(\"dotenv\").config(). I hope this will solve your problem.\nAnother reason can be, maybe you are not passing it as a string. 
The clientID key in the options expects a string as its value.\n", "I had the same issue due to the fact that I set the key of the key-value pair of the environment variables in Heroku as env.process.VARIABLE_KEY and not just as VARIABLE_KEY.\n", "I just had the same error.\nI fixed it by simply moving my \".env\" file to the project path.\n", "This happened with me also; I had a typo and spelled CLIENT as CLENT in the env file, so make sure you don't have any typing mistakes.\n", "require('dotenv').config()\n\nIt's due to the .env file. I recently came across this issue and was looking for the solution everywhere since I had required and installed dotenv, but I still faced this issue:\n\nTypeError: OAuth2Strategy requires a clientID option.\n\nLater I laughed at myself for a silly mistake I made inside my .env file, which was that I had accidentally used ':' instead of '='.\nCheck around for typos.\n", "Recently, I faced the same issue and later realized that in this line of code\n\nrequire(dotenv).config()\n\ndotenv should be inside quotes, like this:\n\nrequire('dotenv').config()\n\n.\n", "The same thing had happened to me, and then I added:\nrequire('dotenv').config()\nThen, add the client ID and secret. The contents of .env should look as follows.\nGOOGLE_CLIENT_ID=__INSERT_CLIENT_ID_HERE__ GOOGLE_CLIENT_SECRET=__INSERT_CLIENT_SECRET_HERE__\n" ]
[ 13, 4, 2, 2, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "node.js", "socket.io" ]
stackoverflow_0050752930_node.js_socket.io.txt
Q: How to override new Gutenberg product loop block markup Recently updated WP and plugins. The "Products by Category" block I had in the Gutenberg editor is now rendered very differently. After quite some digging, it appears that the WGPB_Block_Grid_Base render and render-product functions are now defining the appearance of the content. This is ignoring much of my custom action and markup work that was in place for this block. This is a completely different animal; looking at it, I'm not even sure where to begin. There's a filter applied directly to inline markup: return apply_filters( 'woocommerce_blocks_product_grid_item_html', "<li class=\"wc-block-grid__product\"> <a href=\"{$data->permalink}\" class=\"wc-block-grid__product-link\"> {$data->image} {$data->title} </a> {$data->price} {$data->badge} {$data->rating} {$data->button} </li>", $data, $product ); Is there an easy way to override the templates they're using? Or am I basically stuck writing my own entire Gutenberg block plugin to get this back to where it was? A: We wanted to change the position of the add-to-cart button and product name and add a wrapper for the thumbnail, so we modified the output with some custom markup in a function like so... function my_product_block( $html, $data, $product ) { $html = '<li class="wc-block-grid__product"> <div class="image-wrap"> <a href="' . $data->permalink . '" class="wc-block-grid__product-link">' . $data->image . '</a> ' . $data->button . ' </div> <h3><a href="' . $data->permalink . '">' . $data->title . '</a></h3> ' . $data->badge . ' ' . $data->price . ' ' . $data->rating . ' </li>'; return $html; } add_filter( 'woocommerce_blocks_product_grid_item_html', 'my_product_block', 10, 3); A: I went through this recently and was able to modify the render with the following function: add_filter( 'woocommerce_blocks_product_grid_item_html', 'change_render_product', 10, 3); function change_render_product( $html, $data, $product ) { $search = '<div class="wc-block-grid__product-title">'; $add = '<p>text for example</p>'.$search; $output = str_replace($search, $add, $html); return $output; } In my case, I added an element above the title. A: function my_product_block( $html, $data, $product ) { global $product; $newness_days = 30; $created = strtotime( $product->get_date_created() ); // define the creation timestamp; it was missing in the original snippet $html = '<li class="wc-block-grid__product">'; if ( ( time() - ( 60 * 60 * 24 * $newness_days ) ) < $created ) { $html .= '<div class="image-wrap"><span class="itsnew">' . esc_html__( 'New!', 'woocommerce' ) . '</span> </div> <h3><a href="' . $data->permalink . '">' . $data->title . '</a></h3> ' . $data->badge . ' ' . $data->price . ' ' . $data->rating . ' '; } else { $html .= '<div class="image-wrap"></div> <h3><a href="' . $data->permalink . '">' . $data->title . '</a></h3> ' . $data->badge . ' ' . $data->price . ' ' . $data->rating . ' '; } $html .= '</li>'; return $html; } add_filter( 'woocommerce_blocks_product_grid_item_html', 'my_product_block', 10, 3); A: Custom Field Suite add_filter( 'woocommerce_blocks_product_grid_item_html', 'my_custom_render_product_block', 10, 3); function my_custom_render_product_block( $html, $data, $post ) { $productID = url_to_postid( $data->permalink ); $product = wc_get_product( $productID ); $p_img = CFS()->get( 'p_img' , $productID ); if( $p_img ){ $p_img = '<img src="' . esc_url($p_img) . '" alt="" decoding="async">'; } else { $p_img = $data->image; } return ' <li class="wc-block-grid__product"> <a href="'.$product->get_permalink().'"> <span class="my-thumbnail">' . $p_img . '</span> <span>' . $product->get_title() . '</span> </a> </li> '; }
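Where only a small change is needed, a non-destructive variant of the same filter keeps the block's generated markup intact instead of rebuilding the whole <li>; the wrapper class below is illustrative:

add_filter( 'woocommerce_blocks_product_grid_item_html', function ( $html, $data, $product ) {
    // Wrap just the price and leave the rest of the grid item untouched,
    // so future changes to the block's markup keep working.
    return str_replace(
        $data->price,
        '<div class="my-price-wrap">' . $data->price . '</div>',
        $html
    );
}, 10, 3 );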
How to override new Gutenberg product loop block markup
Recently updated WP and plugins. The "Products by Category" block I had in the Gutenberg editor are now rendered very differently. After quite some digging, it appears that the WGPB_Block_Grid_Base render and render-product functions are now defining the appearance of the content. This is ignoring much of my custom action and markup work present before now for this block. This is a completely different animal, looking at it I'm not even sure where to begin. There's a filter applying directly to markup inline inside filter apply: return apply_filters( 'woocommerce_blocks_product_grid_item_html', "<li class=\"wc-block-grid__product\"> <a href=\"{$data->permalink}\" class=\"wc-block-grid__product-link\"> {$data->image} {$data->title} </a> {$data->price} {$data->badge} {$data->rating} {$data->button} </li>", $data, $product ); Is there an easy way to override the templates they're using? Or am I basically stuck writing my own entire Gutenberg block plugin to get this back to where it was?
[ "We wanted to change the position of the add-to-cart button and product name and add a wrapper for the thumbnail so we modified the output with some custom markup in a function like so...\nfunction my_product_block( $html, $data, $product ) {\n $html = '<li class=\"wc-block-grid__product\">\n <div class=\"image-wrap\">\n <a href=\"' . $data->permalink . '\" class=\"wc-block-grid__product-link\">' . $data->image . '</a>\n ' . $data->button . '\n </div>\n <h3><a href=\"' . $data->permalink . '\">' . $data->title . '</a></h3>\n ' . $data->badge . '\n ' . $data->price . '\n ' . $data->rating . '\n </li>';\n return $html;\n}\nadd_filter( 'woocommerce_blocks_product_grid_item_html', 'my_product_block', 10, 3);\n\n", "I went through this recently and I was able to modify the render with the following function:\nadd_filter( 'woocommerce_blocks_product_grid_item_html', 'change_render_product', 10, 3);\n\nfunction change_render_product( $html, $data, $product ) {\n $search = '<div class=\"wc-block-grid__product-title\">';\n $add = '<p>text for example</p>'.$search;\n $output = str_replace($search, $add, $html);\n\n return $output;\n}\n\nIn my case, I added an element above the title.\n", "function my_product_block( $html, $data, $product ) {\n global $product;\n $newness_days = 30;\n $html = '<li class=\"wc-block-grid__product\">';\n if ( ( time() - ( 60 * 60 * 24 * $newness_days ) ) < $created) {\n $html .= '<div class=\"image-wrap\"><span class=\"itsnew\">' . esc_html__( 'New!', 'woocommerce' ) . '</span>\n </div>\n <h3><a href=\"' . $data->permalink . '\">' . $data->title . '</a></h3>\n ' . $data->badge . '\n ' . $data->price . '\n ' . $data->rating . '\n ';\n }\n else\n {\n $html .= '<div class=\"image-wrap\">\n <h3><a href=\"' . $data->permalink . '\">' . $data->title . '</a></h3>\n ' . $data->badge . '\n ' . $data->price . '\n ' . $data->rating . '\n ';\n }\n $html .= '</li>';\n return $html;\n}\nadd_filter( 'woocommerce_blocks_product_grid_item_html', 'my_product_block', 10, 3);\n\n", "Custom Field Suite\nadd_filter( 'woocommerce_blocks_product_grid_item_html', 'my_custom_render_product_block', 10, 3);\nfunction my_custom_render_product_block( $html, $data, $post ) {\n $productID = url_to_postid( $data->permalink );\n $product = wc_get_product( $productID );\n\n $p_img = CFS()->get( 'p_img' , $productID );\n if( $p_img ){ \n $p_img = '<img src=\"' . esc_url($p_img) . '\" alt=\"\" decoding=\"async\">';\n } else { $p_img = $data->image; }\n\n return '\n <li class=\"wc-block-grid__product\">\n <a href=\"'.$product->get_permalink().'\">\n <span class=\"my-thumbnail\">' . $p_img . '</span>\n <span>' . $product->get_title() . '</span>\n </a>\n </li>\n';\n}\n\n" ]
[ 8, 0, 0, 0 ]
[]
[]
[ "gutenberg_blocks", "woocommerce", "wordpress", "wordpress_gutenberg" ]
stackoverflow_0056925957_gutenberg_blocks_woocommerce_wordpress_wordpress_gutenberg.txt
Q: How to convert this formula into an arrayformula? Google Sheets Been beating my head against this and can't get it. Here is the formula: =IF(E3=E2,F2,F2+1) Pretty simple. All it does is look at the cell above it... if the values are the same, it doesn't increase the number iteration; if they are different, it does. Somehow I can't figure out how to format this in order to make it an ArrayFormula. The only reason I want it to be an ArrayFormula is so that rows can be added or removed and the formula would remain intact, and thus the spreadsheet would be easier to use. A: To turn this formula into an array formula, enclose it in ARRAYFORMULA. Here's what the resulting array formula would look like: =ARRAYFORMULA(IF(E3:E=E2:E,F2:F,F2:F+1)) The ARRAYFORMULA function allows you to apply a formula to a range of cells at once, so when you add or remove rows, the formula will automatically be applied to the new or remaining cells in the range. In Google Sheets you can also type the plain formula and press Ctrl + Shift + Enter (on Windows) or Cmd + Shift + Enter (on Mac), which wraps it in ARRAYFORMULA automatically; unlike Excel, no {} brackets are added. Note, however, that because this version both reads from and writes to column F, it will create a circular dependency if placed in column F itself (see the next answer). A: If you need to place it in column F from F3, you may try another approach or you'll get a circular dependency: =BYROW(E3:E,LAMBDA(each,IF(each="","",F2+sum(MAP(E3:each, LAMBDA(c,IF(c="","",IF(c=OFFSET(c,-1,),0,1)))))))) A: try: =INDEX(BYROW(E2:INDEX(E:E, MAX(ROW(E:E)*(E:E<>""))), LAMBDA(e, IF(OFFSET(e, 1, )=e, OFFSET(e,,1), OFFSET(e,,1)+1))))
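A non-circular alternative sketch, assuming each value in column E appears in one contiguous run (as it does for the original formula to make sense): number the groups by matching each value against the list of unique values instead of referencing column F at all:

=ARRAYFORMULA(IF(E2:E="",,MATCH(E2:E,UNIQUE(E2:E),0)))

This reproduces the running group number of =IF(E3=E2,F2,F2+1) without the self-reference to column F, so it can live in F2 and survives inserted or deleted rows.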
How to convert this formula into an arrayformula? Google Sheets
Been beating my head against this and can't get it. Here is the formula: =IF(E3=E2,F2,F2+1) Pretty simple. All it does is look at the cell above it... if the values are the same, it doesn't increase the number iteration; if they are different, it does. Somehow I can't figure out how to format this in order to make it an ArrayFormula. The only reason I want it to be an ArrayFormula is so that rows can be added or removed and the formula would remain intact, and thus the spreadsheet would be easier to use.
[ "To turn this formula into an array formula, you need to enclose it in ARRAYFORMULA and press Ctrl + Shift + Enter (on Windows) or Cmd + Shift + Enter (on Mac) to enter the formula. Here's what the resulting array formula would look like:\n=ARRAYFORMULA(IF(E3:E=E2:E,F2:F,F2:F+1))\n\nThe ARRAYFORMULA function allows you to apply a formula to a range of cells at once, so when you add or remove rows, the formula will automatically be applied to the new or remaining cells in the range.\nNote that when you enter an array formula, you need to press Ctrl + Shift + Enter (on Windows) or Cmd + Shift + Enter (on Mac) to enter the formula. This will cause the formula to be surrounded by {} brackets, which indicates that it is an array formula.\n", "If you need to place it in column F from F3, you may try another approach or you'll get a circular dependency:\n=BYROW(E3:E,LAMBDA(each,IF(each=\"\",\"\",F2+sum(MAP(E3:each, LAMBDA(c,IF(c=\"\",\"\",IF(c=OFFSET(c,-1,),0,1))))))))\n\n", "try:\n=INDEX(BYROW(E2:INDEX(E:E, MAX(ROW(E:E)*(E:E<>\"\"))), \n LAMBDA(e, IF(OFFSET(e, 1, )=e, OFFSET(e,,1), OFFSET(e,,1)+1))\n\n" ]
[ 1, 1, 0 ]
[]
[]
[ "array_formulas", "google_sheets" ]
stackoverflow_0074672132_array_formulas_google_sheets.txt
Q: My teacher says my code isn't working. It's working when I run it and I'm not getting any errors My teacher is apparently getting no result from my code when they hit submit. I've had another classmate and a friend run my code and it's working on both their ends. So why is it not working for my teacher? The project is supposed to take in a username and write it into a CSV file. The code should prevent the same user from being submitted into the file. <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Document</title> </head> <body> <form action="register.php" method="POST" onSubmit="window.location.reload()"> Enter <label for="username">username</label>:<br /> <input type="text" id="username" name="username" required /><br /> <input type="submit" name="submit" value="Submit" /> </form> <?php //write user input into file //identify file to store usernames $file = 'usernames.csv'; ini_set('auto_detect_line_ending', 1); $fp = fopen($file, 'a+'); //check if form is submitted and writeable, then add user input into storage file. if ($_SERVER['REQUEST_METHOD'] == 'POST') { $err = FALSE; while ($line = fgetcsv($fp, null, "\t" != FALSE)) { //check if username already exists if ($line[0] == $_POST['username']) { $err = TRUE; echo 'username already exists'; } } if (!empty($_POST['username'])) { if (is_writable($file) && !$err) { file_put_contents($file, $_POST['username'] . PHP_EOL, FILE_APPEND); echo 'Registration successful.'; } } else { echo '<p style="color:red;"> Registration unsuccessful due to system error. </p>'; } } ?> </body> </html> A: There is a bug in the following line of code: while ($line = fgetcsv($fp, null, "\t" != FALSE)) { This line passes the expression "\t" != FALSE as the third argument, so fgetcsv() receives the boolean result of that comparison instead of the tab character as its delimiter. The != operator is used to check whether two values are not equal, but here it is comparing the "\t" string against the FALSE constant inside the argument list. The intended behavior is to use the fgetcsv() function to read lines from the file, using "\t" as the delimiter, and to keep looping until fgetcsv() returns FALSE at the end of the file. Because the delimiter argument is wrong, the CSV parsing can behave differently from one environment to another, which would explain why the code appears to work on some machines and not on others. To fix this bug, move the comparison outside the function call and use the !== operator, which checks both value and type, and wrap the assignment in parentheses so it is evaluated before the comparison: while (($line = fgetcsv($fp, null, "\t")) !== FALSE) {
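A corrected sketch of the duplicate check. Besides the operator fix, note that fopen($file, 'a+') places the file pointer at the end of the file, so the read loop sees no rows until the pointer is rewound; that alone can make the duplicate check silently pass on one machine and not another:

$fp = fopen($file, 'a+');
rewind($fp); // 'a+' starts at EOF; rewind before reading

$err = FALSE;
while (($line = fgetcsv($fp)) !== FALSE) { // parentheses: assign first, then compare
    if (isset($line[0]) && $line[0] === $_POST['username']) {
        $err = TRUE;
        echo 'username already exists';
        break;
    }
}
fclose($fp);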
My teacher says my code isn't working. It's working when I run it and I'm not getting any errors
My teacher is apparently getting no result from my code when they hit submit. I've had another classmate and a friend run my code and it's working on both their ends. So why is it not working for my teacher? The project is supposed to take in a username and write it into a CSV. file. The code should prevent the same user from being submitted into the file. <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Document</title> </head> <body> <form action="register.php" method="POST" onSubmit="window.location.reload()"> Enter <label for="username">username</label>:<br /> <input type="text" id="username" name="username" required /><br /> <input type="submit" name="submit" value="Submit" /> </form> <?php //write user input into file //identify file to store usernames $file = 'usernames.csv'; ini_set('auto_detect_line_ending', 1); $fp = fopen($file, 'a+'); //check if form is submitted and writeable, then add user input into storage file. if ($_SERVER['REQUEST_METHOD'] == 'POST') { $err = FALSE; while ($line = fgetcsv($fp, null, "\t" != FALSE)) { //check if username already exists if ($line[0] == $_POST['username']) { $err = TRUE; echo 'username already exists'; } } if (!empty($_POST['username'])) { if (is_writable($file) && !$err) { file_put_contents($file, $_POST['username'] . PHP_EOL, FILE_APPEND); echo 'Registration successful.'; } } else { echo '<p style="color:red;"> Registration unsuccessful due to system error. </p>'; } } ?> </body> </html>
[ "There is a bug in the following line of code:\nwhile ($line = fgetcsv($fp, null, \"\\t\" != FALSE)) {\n\nThis line is checking if the \"\\t\" string is not equal to FALSE using the != operator, but this is not the intended behavior. The != operator is used to check if two values are not equal, but in this case it is being used to negate the result of the comparison between the \"\\t\" string and the FALSE constant.\nThe intended behavior is likely to use the fgetcsv() function to read lines from the file, using the \"\\t\" string as the delimiter. However, the != operator is causing the condition of the while loop to always be TRUE, so the while` loop will never end and will keep reading lines from the file indefinitely.\nTo fix this bug, you can either remove the != operator from the condition, or you can use the !== operator, which is the correct way to check if two values are not equal. For example, you could use the following line of code to fix the bug:\nwhile ($line = fgetcsv($fp, null, \"\\t\") !== FALSE) {\n\n" ]
[ 0 ]
[]
[]
[ "php" ]
stackoverflow_0074672380_php.txt
Q: Selenium opens multiple chromedrivers and won't close them I started using headless Chrome for Jenkins integration and changed the code in my base file, but now when I run a test I see that multiple chromedrivers are started and the driver doesn't close when the last test is finished. I didn't have this problem before switching to headless mode. Here is my TestBase class TestBase.class And here is the problem. After all these new chromedrivers the test runs successfully, but a lot of chromedriver processes accumulate in the background. problem I tried to use the driver.close and driver.quit functions in the test's @After method, but it didn't work like it did before. After switching to headless mode, I can't close them because, as you can see, there are multiple chromedrivers in the background. A: If you want to obtain control over the service itself, you need to use a DriverService object. For example: ChromeOptions options = new ChromeOptions() // Your options here ChromeDriverService service = ChromeDriverService .createServiceWithConfig(options); service.start(); WebDriver driver = new ChromeDriver(service); // Do your test here driver.quit(); // close session service.stop(); // stop service
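Whichever way the driver is created, leaked chromedriver processes usually come from a test path that skips driver.quit(). A defensive JUnit teardown sketch for the base class (the field and annotation names are assumed from the description):

@After
public void tearDown() {
    if (driver != null) {
        try {
            // quit() ends the whole session and its chromedriver process;
            // close() only closes the current browser window.
            driver.quit();
        } finally {
            driver = null; // stop the next test from reusing a dead session
        }
    }
}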
Selenium opens multiple chromedrivers and wont close them
I started using headless Chrome for Jenkins integration and changed the code in my base file, but now when I run a test I see that multiple chromedrivers are started and the driver doesn't close when the last test is finished. I didn't have this problem before switching to headless mode. Here is my TestBase class TestBase.class And here is the problem. After all these new chromedrivers the test runs successfully, but a lot of chromedriver processes accumulate in the background. problem I tried to use the driver.close and driver.quit functions in the test's @After method, but it didn't work like it did before. After switching to headless mode, I can't close them because, as you can see, there are multiple chromedrivers in the background.
[ "If you want to obtain control over the service itself you need to use DriverService object. The example:\nChromeOptions options = new ChromeOptions() // Your options here\nChromeDriverService service = ChromeDriverService\n .createServiceWithConfig(options);\nservice.start();\n\nWebDriver driver = new ChromeDriver(service);\n\n// Do your test here\n\ndriver.quit(); // close session\nservice.stop(); // stop service\n\n" ]
[ 0 ]
[ "For example test fraemworks with parallel execution and 'Allure' screenshots(when step fail):\nhttps://github.com/simileyskiy/cucumber7-selenium3.selenide5-junit5-Allure-parallelExecution\nhttps://github.com/simileyskiy/cucumber7-selenium4.selenide6-junit5-Allure-parallelExecution\n" ]
[ -6 ]
[ "cucumber", "google_chrome_headless", "java", "selenium", "selenium_chromedriver" ]
stackoverflow_0074516128_cucumber_google_chrome_headless_java_selenium_selenium_chromedriver.txt
Q: How to make popouts with CTkInput encode I'm working on a password manager and had a structure like this: def popUp(text): answer = simpledialog.askstring("input string", text) return answer And it works perfectly, but I want to make the popouts look better with CustomTkinter. When I made def popUp(text): answer = customtkinter.CTkInputDialog("input string", text) return answer I got an error: AttributeError: 'CTkInputDialog' object has no attribute 'encode' I expect the popouts to work correctly. A: You should check the wiki page before opening a question here. https://github.com/TomSchimansky/CustomTkinter/wiki/CTkInputDialog The syntax should be like this: def getInput(): answer = customtkinter.CTkInputDialog(text = "input string") print(answer.get_input()) root = customtkinter.CTk() button = customtkinter.CTkButton(root, command=getInput) button.pack(pady=30, padx=20) root.mainloop()
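A drop-in replacement sketch for the original popUp() helper, assuming a recent customtkinter where CTkInputDialog takes keyword arguments and exposes get_input():

import customtkinter

def pop_up(text):
    dialog = customtkinter.CTkInputDialog(title="input string", text=text)
    # Unlike simpledialog.askstring, the constructor returns the dialog
    # object, not the typed string; using that object where a string is
    # expected is what raises the 'encode' AttributeError above.
    return dialog.get_input()  # blocks until the dialog closes, returns str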
How to make popouts with CTkInput encode
I'm working on a password manager and had a structure like this: def popUp(text): answer = simpledialog.askstring("input string", text) return answer And it works perfectly, but I want to make the popouts look better with CustomTkinter. When I made def popUp(text): answer = customtkinter.CTkInputDialog("input string", text) return answer I got an error: AttributeError: 'CTkInputDialog' object has no attribute 'encode' I expect the popouts to work correctly.
[ "You should check the wiki page before opening a question here.\nhttps://github.com/TomSchimansky/CustomTkinter/wiki/CTkInputDialog\nThe syntax should be like this:\ndef getInput():\n answer = customtkinter.CTkInputDialog(text = \"input string\")\n print(answer.get_input())\n\nroot = customtkinter.CTk()\n\nbutton = customtkinter.CTkButton(root, command=getInput)\nbutton.pack(pady=30, padx=20)\n\nroot.mainloop()\n\n" ]
[ 0 ]
[]
[]
[ "customtkinter", "python", "tkinter" ]
stackoverflow_0074641655_customtkinter_python_tkinter.txt
Q: Clipping a rotating SVG to text in HTML/CSS I have an SVG image of a starburst pattern that is rotating using a transform: rotate animation, and a text element next to it. Whenever the brighter part of the starburst overlaps the text, I want that part of the text to change color. Basically, the starburst should clip to the text while rotating. The basic setup is: <div class="header"> <div class="starburst-image"> <img src="starburst-dark.svg" class="starburst"></img> <img src="starburst-bright.svg" class="starburst starburst-bright"></img> </div> <div class="title-wrapper"> <h1 class="title-text">Title</h1> </div> </div> Here, the two starburst images have position: absolute, perfectly coincide with each other, and are both rotating using a CSS animation. The entirety of starburst-dark should show up in the background without clipping, while the starburst-bright should clip to the title-text while it rotates. I can't seem to get the SVG to both rotate and clip to the text, after playing with properties like clip-path and background-clip. I also tried putting the text in an svg <text> element instead of <h1> and wrapping it in a <clipPath>, but it seemed like the text clip was then positioned relative to the coordinates of the starburst SVG rather than the coordinates of the title text itself. There must be some way to do this in HTML/CSS, right?
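One possible approach, sketched under the assumption that the bright starburst can be wrapped in a container that exactly overlays the heading: reference an SVG clipPath containing a text element from CSS. With clipPathUnits="userSpaceOnUse", the clip coordinates resolve against the box of the element being clipped rather than the starburst SVG, which sidesteps the coordinate problem described above. The font metrics and coordinates are placeholders to match to the rendered h1:

<svg width="0" height="0" aria-hidden="true">
  <clipPath id="title-clip" clipPathUnits="userSpaceOnUse">
    <text x="0" y="60" font-family="sans-serif" font-size="64">Title</text>
  </clipPath>
</svg>

<style>
  /* Wraps only the rotating bright starburst; align it with .title-wrapper */
  .bright-overlay {
    position: absolute;
    inset: 0;
    clip-path: url(#title-clip);
  }
</style>

The dark starburst keeps rotating unclipped underneath; only the bright copy inside .bright-overlay is cut down to the text shape.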
Clipping a rotating SVG to text in HTML/CSS
I have an SVG image of a starburst pattern that is rotating using a transform: rotate animation, and a text element next to it. Whenever the brighter part of the starburst overlaps the text, I want that part of the text to change color. Basically, the starburst should clip to the text while rotating. The basic setup is: <div class="header"> <div class="starburst-image"> <img src="starburst-dark.svg" class="starburst"></img> <img src="starburst-bright.svg" class="starburst starburst-bright"></img> </div> <div class="title-wrapper"> <h1 class="title-text">Title</h1> </div> </div> Here, the two starburst images have position: absolute, perfectly coincide with each other, and are both rotating using a CSS animation. The entirety of starburst-dark should show up in the background without clipping, while the starburst-bright should clip to the title-text while it rotates. I can't seem to get the SVG to both rotate and clip to the text, after playing with properties like clip-path and background-clip. I also tried putting the text in an svg <text> element instead of <h1> and wrapping it in a <clipPath>, but it seemed like the text clip was then positioned relative to the coordinates of the starburst SVG rather than the coordinates of the title text itself. There must be some way to do this in HTML/CSS, right?
[]
[]
[ "Yes, you can use the clip-path property in CSS to clip an element to the shape of another element, including text. You can also use the mask property, which is similar to clip-path, but it allows you to use a more complex set of shapes and images as the clipping path.\nHere is an example of how you could use the clip-path property to achieve the effect you want:\n<div class=\"header\">\n <div class=\"starburst-image\">\n <img src=\"starburst-dark.svg\" class=\"starburst\"></img>\n <img src=\"starburst-bright.svg\" class=\"starburst starburst-bright\"></img>\n </div>\n <div class=\"title-wrapper\">\n <h1 class=\"title-text\">Title</h1>\n </div>\n</div>\n\n<style>\n .starburst-bright {\n clip-path: inset(0 0 0 0 round 9px);\n -webkit-clip-path: inset(0 0 0 0 round 9px);\n }\n\n .title-text {\n clip-path: inset(0 0 0 0 round 9px);\n -webkit-clip-path: inset(0 0 0 0 round 9px);\n }\n</style>\n\nIn this example, the clip-path property is used on both the starburst-bright and title-text elements. The inset() function is used to create a rounded rectangle shape with 9px rounded corners, which is then used to clip both elements. This will cause the starburst-bright image to be clipped to the shape of the title-text element, creating the effect you want.\nYou can also use the mask property instead of clip-path, which allows for more complex clipping paths. For example:\n<div class=\"header\">\n <div class=\"starburst-image\">\n <img src=\"starburst-dark.svg\" class=\"starburst\"></img>\n <img src=\"starburst-bright.svg\" class=\"starburst starburst-bright\"></img>\n </div>\n <div class=\"title-wrapper\">\n <h1 class=\"title-text\">Title</h1>\n </div>\n</div>\n\n<style>\n .starburst-bright {\n mask: url(title-mask.svg#mask);\n -webkit-mask: url(title-mask.svg#mask);\n }\n\n .title-text {\n mask: url(title-mask.svg#mask);\n -webkit-mask: url(title-mask.svg#mask);\n }\n</style>\n\nIn this example, the mask property is used on both the starburst-bright and title-text elements, and a separate SVG image is used as the mask. This allows you to create a more complex mask shape using an SVG path or other shapes, rather than using the simple rectangle shape created by the inset() function.\n" ]
[ -1 ]
[ "clip_path", "css", "css_animations", "html", "svg" ]
stackoverflow_0074672392_clip_path_css_css_animations_html_svg.txt
Q: Amazon AWS: Passing append = TRUE to a s3write_using FUN = write_csv I am trying to pass an argument to the write_csv function in R, but I can't seem to pass it correctly. It does not append the data to the current csv file in the S3 bucket. Currently what I have is: data %>% s3write_using(., FUN = write_csv, bucket = "myBUCKET", object = "myLocationToSaveCSV.csv", append = TRUE) # Append = TRUE is what I would like to pass to write_csv I am able to write the data to the S3 bucket, but when I want to append data to the current data it just re-writes the old data and no new data gets appended. How can I pass extra parameters ... to the write_csv(...) function inside s3write_using? A: R uses positional arguments in functions (and also non-positional named args; it's weird...), so try to put the additional args directly after FUN: data %>% s3write_using(., FUN = write_csv, append = TRUE, bucket = "myBUCKET", object = "myLocationToSaveCSV.csv" ) If this doesn't work, you can try to use a lambda: data %>% s3write_using(., FUN = \(x) write_csv(x,append=TRUE), bucket = "myBUCKET", object = "myLocationToSaveCSV.csv")
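If the goal is to append rows to an object that already exists in S3, note that s3write_using() writes FUN's output to a fresh temporary file and uploads that, and S3 itself has no server-side append; append = TRUE therefore only appends to the empty temp file, and the upload replaces the old object. A read-bind-write workaround sketch (bucket and object names are placeholders):

library(aws.s3)
library(readr)
library(dplyr)

existing <- s3read_using(FUN = read_csv,
                         bucket = "myBUCKET",
                         object = "myLocationToSaveCSV.csv")

# Append locally, then overwrite the object with the combined rows
bind_rows(existing, data) %>%
  s3write_using(FUN = write_csv,
                bucket = "myBUCKET",
                object = "myLocationToSaveCSV.csv")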
Amazon AWS: Passing append = TRUE to a s3write_using FUN = write_csv
I am trying to pass an argument to the write_csv function in R, but I can't seem to pass it correctly. It does not append the data to the current csv file in the S3 bucket. Currently what I have is: data %>% s3write_using(., FUN = write_csv, bucket = "myBUCKET", object = "myLocationToSaveCSV.csv", append = TRUE) # Append = TRUE is what I would like to pass to write_csv I am able to write the data to the S3 bucket, but when I want to append data to the current data it just re-writes the old data and no new data gets appended. How can I pass extra parameters ... to the write_csv(...) function inside s3write_using?
[ "R uses positional arguments in functions (also nonpositional named args- its weird..)\nso try to put the additional args directly after FUN :\ndata %>% \n s3write_using(.,\n FUN = write_csv,\n append = TRUE,\n bucket = \"myBUCKET\",\n object = \"myLocationToSaveCSV.csv\"\n ) \n\nif this doesn't work you can try to use a lambda:\ndata %>% \n s3write_using(.,\n FUN = \\(x) write_csv(x,append=TRUE),\n bucket = \"myBUCKET\",\n object = \"myLocationToSaveCSV.csv\") \n\n" ]
[ 1 ]
[]
[]
[ "amazon_web_services", "r" ]
stackoverflow_0074671481_amazon_web_services_r.txt
Q: Google Sheet: If word is in a cell then return this word in another cell I've already done several Google searches and watched some YouTube videos, but it seems I just can't find the formula I'm looking for. For example: I have several words in different cells: A1: car A2: big A3: flower A4: nope I have a text in cell B1 like "the flowers are pretty today" In cell C1 the formula should show only the word from A1:A4 that was found in B1 (my data is set up so that it's not possible for two or more words from A1:A4 to appear in B1; only one will). Not case-sensitive, with wildcards. How would that look? Some REGEXMATCH or SEARCH function? I would be so happy if anyone could help me with this. Thanks! The formulas I found only returned 0/1 or TRUE/FALSE, but I need the actual word returned. A: Try this: =CONCATENATE(MAP(A1:A4,LAMBDA(w,IF(REGEXMATCH(B1,"(?i)"&w),w,""))))
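An alternative sketch without regular expressions, which avoids surprises when a keyword contains regex metacharacters. SEARCH is case-insensitive and matches substrings, so "flower" matches "flowers":

=IFERROR(TEXTJOIN("", TRUE, FILTER(A1:A4, ISNUMBER(SEARCH(A1:A4, B1)))), "")

The IFERROR returns an empty string when none of the words appear in B1.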
Google Sheet: If word is in a cell then return this word in another cell
I have already done several Google searches and watched some YouTube videos, but it seems I just can't find the formula I'm looking for. For example: I have several words in different cells: A1: car, A2: big, A3: flower, A4: nope. I have a text in cell B1 like "the flowers are pretty today". In cell C1 the formula should only show the word from A1:A4 which was found in B1 (my data is such that it's not possible for two or more words from A1:A4 to appear in B1, only one), not case-sensitive, with wildcards. What would that look like? Some REGEXMATCH or SEARCH function? I would be so happy if anyone could help me with that. Thanks! The formulas I found only returned 0/1 or TRUE/FALSE, but I need the actual word returned.
[ "Try this:\n=CONCATENATE(MAP(A1:A4,LAMBDA(w,IF(REGEXMATCH(B1,\"(?i)\"&w),w,\"\"))))\n\n" ]
[ 0 ]
[]
[]
[ "google_sheets", "google_sheets_formula", "if_statement", "return_value" ]
stackoverflow_0074671208_google_sheets_google_sheets_formula_if_statement_return_value.txt
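If no word matches at all, or more than one ever could, a FILTER-based variant degrades a bit more gracefully (a sketch using the same cell layout as the question; the "no match" text is a placeholder):

=IFERROR(TEXTJOIN(", ", TRUE, FILTER(A1:A4, REGEXMATCH(B1, "(?i)"&A1:A4))), "no match")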
Q: Generating PDF for downloading and opening it in new window I want to create a PDF with new File(["<body>text<body>"],"application/pdf") But I want it without any libs With Vanilla Js And get base 64 string with new FileReader Also tried URL.createObjectURL it returns a URL but if I open it, after some time it closes automatically Also, how can I add data to a new file for pdf Like => new File([data] , ...) const file = new File(['<body>Text</body>'] , "app.pdf") //Or. const file = new File(['<body>Text</body>'] , "app.pdf" , {type: 'application/pdf'}) const D = new FileReader() D.onload = ()=> {console.log(this.result || D.result)} D.readAsDataURL(file) Or with blob const file = new File(['<body>Text</body>'] , "app.pdf" ) //Or. const file = new File(['<body>Text</body>'] , "app.pdf" , {type: "application/pdf"}) URL.createObjectURL(file) A: To write mime type text/pdf i.e. a pdf without binary lib encodings you simply need to write a string like this (save as text.pdf to see how it works) %PDF-1.1 %âãÏÓ 1 0 obj<</Type/Catalog/Pages 2 0 R>>endobj 2 0 obj<</Type/Pages/Kids [3 0 R]/Count 1/MediaBox [0 0 594 792]>>endobj 3 0 obj<</Type/Page/Parent 2 0 R/Resources<</Font<</F1<</Type/Font/Subtype/Type1/BaseFont/Helvetica>>>>>>/Contents 4 0 R>>endobj 4 0 obj<</Length 78 >> stream BT /F1 18 Tf 036 740 Td (Body) Tj ET BT /F1 18 Tf 036 720 Td (Text) Tj ET endstream endobj xref 0 5 0000000000 65535 f 0000000021 00000 n 0000000065 00000 n 0000000139 00000 n 0000000269 00000 n trailer<</Root 1 0 R /Size 5>>startxref 401 %%EOF or use in an iFrame <iframe title="testing" type="application/pdf" width="98%" height="98%" src="data:application/pdf;base64,JVBERi0xLjENCiXDosOjw4/Dkw0KMSAwIG9iajw8L1R5cGUvQ2F0YWxvZy9QYWdlcyAyIDAgUj4+ZW5kb2JqDQoyIDAgb2JqPDwvVHlwZS9QYWdlcy9LaWRzIFszIDAgUl0vQ291bnQgMS9NZWRpYUJveCBbMCAwIDU5NCA3OTJdPj5lbmRvYmoNCjMgMCBvYmo8PC9UeXBlL1BhZ2UvUGFyZW50IDIgMCBSL1Jlc291cmNlczw8L0ZvbnQ8PC9GMTw8L1R5cGUvRm9udC9TdWJ0eXBlL1R5cGUxL0Jhc2VGb250L0hlbHZldGljYT4+Pj4+Pi9Db250ZW50cyA0IDAgUj4+ZW5kb2JqDQo0IDAgb2JqPDwvTGVuZ3RoIDc4DQo+Pg0Kc3RyZWFtDQoNCkJUIC9GMSAxOCBUZiAwMzYgNzQwIFRkIChCb2R5KSBUaiBFVA0KQlQgL0YxIDE4IFRmIDAzNiA3MjAgVGQgKFRleHQpIFRqIEVUDQoNCmVuZHN0cmVhbSANCmVuZG9iaiB4cmVmDQowIDUgDQowMDAwMDAwMDAwIDY1NTM1IGYNCjAwMDAwMDAwMjEgMDAwMDAgbg0KMDAwMDAwMDA2NSAwMDAwMCBuDQowMDAwMDAwMTM5IDAwMDAwIG4NCjAwMDAwMDAyNjkgMDAwMDAgbg0KdHJhaWxlcjw8L1Jvb3QgMSAwIFIgL1NpemUgNT4+c3RhcnR4cmVmDQo0MDEgJSVFT0YNCg==">Your browser does not support iFrame</Iframe>
Generating PDF for downloading and opening it in new window
I want to create a PDF with new File(["<body>text<body>"],"application/pdf"), but I want it without any libs, with vanilla JS, and I want to get a base64 string with new FileReader. I also tried URL.createObjectURL; it returns a URL, but if I open it, after some time it closes automatically. Also, how can I add data to a new file for the PDF, like => new File([data] , ...)?
const file = new File(['<body>Text</body>'] , "app.pdf")
//Or.
const file = new File(['<body>Text</body>'] , "app.pdf" , {type: 'application/pdf'})
const D = new FileReader()
D.onload = ()=> {console.log(this.result || D.result)}
D.readAsDataURL(file)
Or with blob
const file = new File(['<body>Text</body>'] , "app.pdf" )
//Or.
const file = new File(['<body>Text</body>'] , "app.pdf" , {type: "application/pdf"})
URL.createObjectURL(file)
[ "To write mime type text/pdf i.e. a pdf without binary lib encodings you simply need to write a string like this (save as text.pdf to see how it works)\n%PDF-1.1\n%âãÏÓ\n1 0 obj<</Type/Catalog/Pages 2 0 R>>endobj\n2 0 obj<</Type/Pages/Kids [3 0 R]/Count 1/MediaBox [0 0 594 792]>>endobj\n3 0 obj<</Type/Page/Parent 2 0 R/Resources<</Font<</F1<</Type/Font/Subtype/Type1/BaseFont/Helvetica>>>>>>/Contents 4 0 R>>endobj\n4 0 obj<</Length 78\n>>\nstream\n\nBT /F1 18 Tf 036 740 Td (Body) Tj ET\nBT /F1 18 Tf 036 720 Td (Text) Tj ET\n\nendstream \nendobj xref\n0 5 \n0000000000 65535 f\n0000000021 00000 n\n0000000065 00000 n\n0000000139 00000 n\n0000000269 00000 n\ntrailer<</Root 1 0 R /Size 5>>startxref\n401 %%EOF\n\n\nor use in an iFrame\n\n\n<iframe title=\"testing\" type=\"application/pdf\" width=\"98%\" height=\"98%\" src=\"data:application/pdf;base64,JVBERi0xLjENCiXDosOjw4/Dkw0KMSAwIG9iajw8L1R5cGUvQ2F0YWxvZy9QYWdlcyAyIDAgUj4+ZW5kb2JqDQoyIDAgb2JqPDwvVHlwZS9QYWdlcy9LaWRzIFszIDAgUl0vQ291bnQgMS9NZWRpYUJveCBbMCAwIDU5NCA3OTJdPj5lbmRvYmoNCjMgMCBvYmo8PC9UeXBlL1BhZ2UvUGFyZW50IDIgMCBSL1Jlc291cmNlczw8L0ZvbnQ8PC9GMTw8L1R5cGUvRm9udC9TdWJ0eXBlL1R5cGUxL0Jhc2VGb250L0hlbHZldGljYT4+Pj4+Pi9Db250ZW50cyA0IDAgUj4+ZW5kb2JqDQo0IDAgb2JqPDwvTGVuZ3RoIDc4DQo+Pg0Kc3RyZWFtDQoNCkJUIC9GMSAxOCBUZiAwMzYgNzQwIFRkIChCb2R5KSBUaiBFVA0KQlQgL0YxIDE4IFRmIDAzNiA3MjAgVGQgKFRleHQpIFRqIEVUDQoNCmVuZHN0cmVhbSANCmVuZG9iaiB4cmVmDQowIDUgDQowMDAwMDAwMDAwIDY1NTM1IGYNCjAwMDAwMDAwMjEgMDAwMDAgbg0KMDAwMDAwMDA2NSAwMDAwMCBuDQowMDAwMDAwMTM5IDAwMDAwIG4NCjAwMDAwMDAyNjkgMDAwMDAgbg0KdHJhaWxlcjw8L1Jvb3QgMSAwIFIgL1NpemUgNT4+c3RhcnR4cmVmDQo0MDEgJSVFT0YNCg==\">Your browser does not support iFrame</Iframe>\n\n\n\n\n" ]
[ 0 ]
[]
[]
[ "blob", "createobjecturl", "javascript", "pdf" ]
stackoverflow_0074639664_blob_createobjecturl_javascript_pdf.txt
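To tie the answer back to the question's vanilla-JS code, a sketch (standard Web APIs only; pdfString stands for the raw PDF source shown above, since HTML like "<body>Text</body>" is not valid PDF syntax and viewers will reject it):

// Build a Blob with real PDF bytes and the right MIME type
const pdfString = "%PDF-1.1\n..."; // the full raw PDF text from the answer
const blob = new Blob([pdfString], { type: "application/pdf" });

// Open it in a new tab; only call URL.revokeObjectURL() once the tab no
// longer needs the URL, otherwise the viewer can lose the document
const url = URL.createObjectURL(blob);
window.open(url, "_blank");

// Or read it back as a base64 data URL
const reader = new FileReader();
reader.onload = () => console.log(reader.result); // "data:application/pdf;base64,..."
reader.readAsDataURL(blob);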
Q: Getting 403 Forbidden Error While Accessing Google Chrome Web Store API We have a chrome extension in Google web store under my Google user id and I want to give API access to my colleagues (in the same organization). I am following this guide but it is not allowing me to access API. Here is exactly what I did Created a Google Cloud console project using the email id that is used to access the chrome store Enabled Google Chrome Web Store API Generated Oauth credentials as described in the link Added my colleagues email address as test users under Oauth Consent section Generated the "code" as described in the link using Colleague's Google ID Successfully got the token by sending the curl request as described in the instructions above Sent a curl API GET request using the token as shown below curl \ -H "Authorization: $TOKEN" \ -H "x-goog-api-version: 2" \ -H "Content-Length: 0" \ -H "Expect:" \ -X GET \ -v \ https://www.googleapis.com/chromewebstore/v1.1/items/ITEM_ID?projection=DRAFT The response I get is this { "error": { "code": 403, "message": "Forbidden", "errors": [ { "message": "Forbidden", "domain": "global", "reason": "forbidden" } ] } } Any idea on what I am missing here? https://www.googleapis.com/chromewebstore/v1.1/items/ITEM_ID?projection=DRAFT A: The Authorization header is missing the token type: Bearer -H "Authorization: Bearer $TOKEN"
Getting 403 Forbidden Error While Accessing Google Chrome Web Store API
We have a Chrome extension in the Google Web Store under my Google user ID and I want to give API access to my colleagues (in the same organization). I am following this guide but it is not allowing me to access the API. Here is exactly what I did:
Created a Google Cloud console project using the email ID that is used to access the Chrome store
Enabled Google Chrome Web Store API
Generated OAuth credentials as described in the link
Added my colleagues' email addresses as test users under the OAuth Consent section
Generated the "code" as described in the link using a colleague's Google ID
Successfully got the token by sending the curl request as described in the instructions above
Sent a curl API GET request using the token as shown below
curl \
-H "Authorization: $TOKEN" \
-H "x-goog-api-version: 2" \
-H "Content-Length: 0" \
-H "Expect:" \
-X GET \
-v \
https://www.googleapis.com/chromewebstore/v1.1/items/ITEM_ID?projection=DRAFT
The response I get is this
{ "error": { "code": 403, "message": "Forbidden", "errors": [ { "message": "Forbidden", "domain": "global", "reason": "forbidden" } ] } }
Any idea on what I am missing here?
https://www.googleapis.com/chromewebstore/v1.1/items/ITEM_ID?projection=DRAFT
[ "The Authorization header is missing the token type: Bearer\n-H \"Authorization: Bearer $TOKEN\"\n\n" ]
[ 2 ]
[]
[]
[ "api", "google_chrome_extension", "google_cloud_platform" ]
stackoverflow_0074672237_api_google_chrome_extension_google_cloud_platform.txt
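For completeness, the request from the question with that fix applied would look like this (ITEM_ID and $TOKEN are the same placeholders used in the question):

curl \
-H "Authorization: Bearer $TOKEN" \
-H "x-goog-api-version: 2" \
-H "Content-Length: 0" \
-H "Expect:" \
-X GET \
-v \
https://www.googleapis.com/chromewebstore/v1.1/items/ITEM_ID?projection=DRAFT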
Q: An algorithm to find at most (n/2)-1 liars in n people where liars always lie I am working on a homework problem which is similar to this post: Quickest algorithm to find at most (n/2)-1 liars in n people. The problem is given below: This problem is couched in terms of liars and truth tellers, but it has real applications in identifying which components of a complex system are good (functioning correctly) and which are faulty. Assume we have a community of n people and we know an integer number t < n/2, which has the property that at most t of the n people are liars. This does not say that there actually are t liars, but only that there are at most t liars. The difference in my case is that the truth-tellers are always truthful and correct and a liar always speaks a lie. We will identify the liars in the community by successively picking pairs of people, (X, Y) say, and asking X: Is Y a liar? The response is either "yes" or "no". What is the optimum algorithm (minimum number of steps) to find all the liars? A: The answer to every question gives you a relationship: either X = Y or X = ~Y. Both kinds of relationships are still true if you invert X and Y, so no matter how many questions you ask, you can't figure the answer out without using the constraint. There are 2^n possible assignments of truther/liar to the n people, so you need n answers. One of those is the constraint, and the other n-1 can be questions. The simplest way is to ask everyone else about Bob. Everyone will then be in the "same as Bob" group, or the "different than Bob" group. Then you choose whether Bob is a liar or not according to which choice implies fewer liars than truth tellers. Note that if n is even, then you might get lucky -- you can skip the last question if one of the answers would make the two groups the same size. A: Matt has the answer when t ~ n/2 (visualizing the questions as a graph, every arbitrarily directed spanning tree will provide enough info), but in general we don't need n−1 questions for smaller t. We can divide the n people into groups of 2t or larger and query a spanning tree in each group. At most one group may split 50/50, which we can resolve by asking one more question. Ignoring lower order terms, we need about ((2t−1)/2t) n queries. A: One possible algorithm for finding the liars in a community is to first select a random person and ask them whether the next person in the community is a liar. If they say "yes," then the next person is a liar. If they say "no," then the first person is a liar. Next, select the first non-liar and ask them whether the next non-liar is a liar. If they say "yes," then the next non-liar is a liar. If they say "no," then the first non-liar is a liar. Continue this process until all the liars have been identified. The total number of steps required to identify all the liars will be equal to the number of liars, plus one additional step to determine whether the first person selected is a liar.
const int MAX_N = 100;
int n;
int t;
bool people[MAX_N]; // True if the person is a liar, false otherwise

void findLiars(void)
{
    // Select a random person and ask them about the next person
    int p = rand() % n;
    if (people[p] != people[(p + 1) % n])
    {
        // The next person is a liar
        people[(p + 1) % n] = true;
    }
    else
    {
        // The first person is a liar
        people[p] = true;
    }

    // Select the first non-liar and ask them about the next non-liar
    for (int i = 0; i < n; i++)
    {
        if (!people[i])
        {
            p = i;
            break;
        }
    }

    while (true)
    {
        // Find the next non-liar
        int q = (p + 1) % n;
        while (people[q])
        {
            q = (q + 1) % n;
        }

        if (people[p] != people[q])
        {
            // The next non-liar is a liar
            people[q] = true;
        }
        else
        {
            // The first non-liar is a liar
            people[p] = true;
            break;
        }

        p = q;
    }
}

In this example, the findLiars function uses the described algorithm to identify all the liars in the community. It first selects a random person and asks them whether the next person in the community is a liar. If they say "yes," then the next person is a liar. If they say "no," then the first person is a liar. Next, the function selects the first non-liar and asks them about the next non-liar. If they say "yes," then the next non-liar is a liar. If they say "no," then the first non-liar is a liar. This process is repeated until all the liars have been identified. The people array is used to keep track of which people have been identified as liars.
An algorithm to find at most (n/2)-1 liars in n people where liars always lie
I am working on a homework problem which is similar to this post: Quickest algorithm to find at most (n/2)-1 liars in n people. The problem is given below: This problem is couched in terms of liars and truth tellers, but it has real applications in identifying which components of a complex system are good (functioning correctly) and which are faulty. Assume we have a community of n people and we know an integer number t < n/2, which has the property that at most t of the n people are liars. This does not say that there actually are t liars, but only that there are at most t liars. The difference in my case is that the truth-tellers are always truthful and correct and a liar always speaks a lie. We will identify the liars in the community by successively picking pairs of people, (X, Y) say, and asking X: Is Y a liar? The response is either "yes" or "no". What is the optimum algorithm (minimum number of steps) to find all the liars?
[ "The answer to every question gives you a relationship: either X = Y or X = ~Y. Both kinds of relationships are still true if you invert X and Y, so no matter how many questions you ask, you can't figure the answer out without using the constraint.\nThere are 2n possible assignments of truther/liar to the n people, so you need n answers. One of those is the constraint, and the other n-1 can be questions.\nThe simplest way is to ask everyone else about Bob. Everyone will then be in the \"same as Bob\" group, or the \"different than Bob\" group. Then you choose whether Bob is a liar or not according to which choice implies fewer liars than truth tellers.\nNote that if n is even, then you might get lucky -- you can skip the last question if one of the answers would make the two groups the same size.\n", "Matt has the answer when t ~ n/2 (visualizing the questions as a graph, every arbitrarily directed spanning tree will provide enough info), but in general we don't need n−1 questions for smaller t. We can divide the n people into groups of 2t or larger and query a spanning tree in each group. At most one group may split 50/50, which we can resolve by asking one more question. Ignoring lower order terms, we need about ((2t−1)/2t) n queries.\n", "One possible algorithm for finding the liars in a community is to first select a random person and ask them whether the next person in the community is a liar. If they say \"yes,\" then the next person is a liar. If they say \"no,\" then the first person is a liar.\nNext, select the first non-liar and ask them whether the next non-liar is a liar. If they say \"yes,\" then the next non-liar is a liar. If they say \"no,\" then the first non-liar is a liar.\nContinue this process until all the liars have been identified. The total number of steps required to identify all the liars will be equal to the number of liars, plus one additional step to determine whether the first person selected is a liar.\nconst int MAX_N = 100;\nint n;\nint t;\nbool people[MAX_N]; // True if the person is a liar, false otherwise\n\nvoid findLiars(void)\n{\n // Select a random person and ask them about the next person\n int p = rand() % n;\n if (people[p] != people[(p + 1) % n])\n {\n // The next person is a liar\n people[(p + 1) % n] = true;\n }\n else\n {\n // The first person is a liar\n people[p] = true;\n }\n\n // Select the first non-liar and ask them about the next non-liar\n for (int i = 0; i < n; i++)\n {\n if (!people[i])\n {\n p = i;\n break;\n }\n }\n\n while (true)\n {\n // Find the next non-liar\n int q = (p + 1) % n;\n while (people[q])\n {\n q = (q + 1) % n;\n }\n\n if (people[p] != people[q])\n {\n // The next non-liar is a liar\n people[q] = true;\n }\n else\n {\n // The first non-liar is a liar\n people[p] = true;\n break;\n }\n\n p = q;\n }\n}\n\nIn this example, the findLiars function uses the described algorithm to identify all the liars in the community. It first selects a random person and asks them whether the next person in the community is a liar. If they say \"yes,\" then the next person is a liar. If they say \"no,\" then the first person is a liar.\nNext, the function selects the first non-liar and asks them about the next non-liar. If they say \"yes,\" then the next non-liar is a liar. If they say \"no,\" then the first non-liar is a liar. This process is repeated until all the liars have been identified. The people array is used to keep track of which people have been identified as liars.\n" ]
[ 2, 2, 1 ]
[]
[]
[ "algorithm" ]
stackoverflow_0074668974_algorithm.txt
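The first answer's "ask everyone about Bob" procedure can be made concrete with a short sketch (the oracle below simulates answers from a hidden assignment; in the real problem ask() would be an actual interrogation, and all names are illustrative):

#include <vector>

std::vector<bool> secret; // hidden truth: secret[i] == true means person i lies

// "X, is Y a liar?" -- a liar inverts the true answer, so the reply is
// "yes" exactly when X and Y have different types
bool ask(int x, int y) { return secret[x] != secret[y]; }

std::vector<bool> findLiars(int n) {
    // n-1 questions: group everyone by their answer about person 0 ("Bob")
    std::vector<bool> sameAsBob(n, false);
    sameAsBob[0] = true;
    int sameCount = 1;
    for (int i = 1; i < n; ++i) {
        sameAsBob[i] = !ask(i, 0); // "no" means i is the same type as Bob
        if (sameAsBob[i]) ++sameCount;
    }
    // Liars are a strict minority (t < n/2), so the larger group tells the truth
    bool bobIsLiar = sameCount < n - sameCount;
    std::vector<bool> liar(n);
    for (int i = 0; i < n; ++i)
        liar[i] = sameAsBob[i] ? bobIsLiar : !bobIsLiar;
    return liar;
}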
Q: Get full article in Google Sheet using Openai I'm trying to get full article in Google Sheet using Openai API. In column A I just mention the topic and want to get full article in column B. Here is what I'm trying /** * Use GPT-3 to generate an article * * @param {string} topic - the topic for the article * @return {string} the generated article * @customfunction */ function getArticle(topic) { // specify the API endpoint and API key const api_endpoint = 'https://api.openai.com/v1/completions'; const api_key = 'YOUR_API_KEY'; // specify the API parameters const api_params = { prompt: topic, max_tokens: 1024, temperature: 0.7, model: 'text-davinci-003', }; // make the API request using UrlFetchApp const response = UrlFetchApp.fetch(api_endpoint, { method: 'post', headers: { Authorization: 'Bearer ' + api_key, 'Content-Type': 'application/json', }, payload: JSON.stringify(api_params), }); // retrieve the article from the API response const json = JSON.parse(response.getContentText()); if (json.data && json.data.length > 0) { const article = json.data[0].text; return article; } else { return 'No article found for the given topic.'; } } How can I get the article? A: Modification points: When I saw the official document of OpenAI API, in your endpoint of https://api.openai.com/v1/completions, it seems that the following value is returned. Ref { "id": "cmpl-uqkvlQyYK7bGYrRHQ0eXlWi7", "object": "text_completion", "created": 1589478378, "model": "text-davinci-003", "choices": [ { "text": "\n\nThis is indeed a test", "index": 0, "logprobs": null, "finish_reason": "length" } ], "usage": { "prompt_tokens": 5, "completion_tokens": 7, "total_tokens": 12 } } In the case of json.data, it seems that the endpoint of https://api.openai.com/v1/models might be required to be used. Ref And, there is no property of json.data[0].text. I thought that this might be the reason for your current issue. If you want to retrieve the values of text from the endpoint of https://api.openai.com/v1/completions, how about the following modification? From: if (json.data && json.data.length > 0) { const article = json.data[0].text; return article; } else { return 'No article found for the given topic.'; } To: if (json.choices && json.choices.length > 0) { const article = json.choices[0].text; return article; } else { return 'No article found for the given topic.'; } Note: If the value of response.getContentText() is not your expected values, this modification might not be able to be used. Please be careful about this. Reference: Completions of OpenAI API A: You are saying that console.log(response.getContentText()) outputs this: { "title": "OpenAI API Example Article", "author": "John Doe", "content": "This is an example of an article retrieved using the OpenAI API. Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco est laborum.", "source": "www.example.com" } To get the title and content, use this: const json = JSON.parse(response.getContentText()); const { title, content } = json; console.log(title); console.log(content); If console.log(response.getContentText()) outputs an array representation like [{ "title": ... }, { "title": ... }], use this to get the first array element: const { title, content } = json[0]; Note that your naming is a bit misleading. The json variable does not point to a JSON serialization but to an object obtained from JSON with JSON.parse(). 
It would be better to call it articleObject or something like that.
Get full article in Google Sheet using Openai
I'm trying to get full article in Google Sheet using Openai API. In column A I just mention the topic and want to get full article in column B. Here is what I'm trying /** * Use GPT-3 to generate an article * * @param {string} topic - the topic for the article * @return {string} the generated article * @customfunction */ function getArticle(topic) { // specify the API endpoint and API key const api_endpoint = 'https://api.openai.com/v1/completions'; const api_key = 'YOUR_API_KEY'; // specify the API parameters const api_params = { prompt: topic, max_tokens: 1024, temperature: 0.7, model: 'text-davinci-003', }; // make the API request using UrlFetchApp const response = UrlFetchApp.fetch(api_endpoint, { method: 'post', headers: { Authorization: 'Bearer ' + api_key, 'Content-Type': 'application/json', }, payload: JSON.stringify(api_params), }); // retrieve the article from the API response const json = JSON.parse(response.getContentText()); if (json.data && json.data.length > 0) { const article = json.data[0].text; return article; } else { return 'No article found for the given topic.'; } } How can I get the article?
[ "Modification points:\n\nWhen I saw the official document of OpenAI API, in your endpoint of https://api.openai.com/v1/completions, it seems that the following value is returned. Ref\n {\n \"id\": \"cmpl-uqkvlQyYK7bGYrRHQ0eXlWi7\",\n \"object\": \"text_completion\",\n \"created\": 1589478378,\n \"model\": \"text-davinci-003\",\n \"choices\": [\n {\n \"text\": \"\\n\\nThis is indeed a test\",\n \"index\": 0,\n \"logprobs\": null,\n \"finish_reason\": \"length\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 5,\n \"completion_tokens\": 7,\n \"total_tokens\": 12\n }\n }\n\n\nIn the case of json.data, it seems that the endpoint of https://api.openai.com/v1/models might be required to be used. Ref And, there is no property of json.data[0].text.\n\n\nI thought that this might be the reason for your current issue. If you want to retrieve the values of text from the endpoint of https://api.openai.com/v1/completions, how about the following modification?\nFrom:\nif (json.data && json.data.length > 0) {\n const article = json.data[0].text;\n return article;\n} else {\n return 'No article found for the given topic.';\n}\n\nTo:\nif (json.choices && json.choices.length > 0) {\n const article = json.choices[0].text;\n return article;\n} else {\n return 'No article found for the given topic.';\n}\n\nNote:\n\nIf the value of response.getContentText() is not your expected values, this modification might not be able to be used. Please be careful about this.\n\nReference:\n\nCompletions of OpenAI API\n\n", "You are saying that console.log(response.getContentText()) outputs this:\n{ \"title\": \"OpenAI API Example Article\", \"author\": \"John Doe\", \"content\": \"This is an example of an article retrieved using the OpenAI API. Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco est laborum.\", \"source\": \"www.example.com\" }\nTo get the title and content, use this:\n const json = JSON.parse(response.getContentText());\n const { title, content } = json;\n console.log(title);\n console.log(content);\n\nIf console.log(response.getContentText()) outputs an array representation like [{ \"title\": ... }, { \"title\": ... }], use this to get the first array element:\n const { title, content } = json[0];\n\nNote that your naming is a bit misleading. The json variable does not point to a JSON serialization but to an object obtained from JSON with JSON.parse(). It would be better to call it articleObject or something like that.\n" ]
[ 1, 0 ]
[]
[]
[ "google_apps_script", "google_sheets", "openai", "urlfetch" ]
stackoverflow_0074667621_google_apps_script_google_sheets_openai_urlfetch.txt
Q: Next.js server-side data fetching within `Suspense` boundaries not supported yet I'm reading the Next.js docs on how to use streaming server-rendering with React 18 and there's a section on data fetching that says the following: Data fetching within Suspense boundaries is currently only supported on the client side. Server-side data fetching is not supported yet. I understand that data can be fetched server-side only from pages, and the data would then be passed down to components through props. Does this mean we can't use getServerSideProps in pages that have components wrapped in Suspense, or just that we can't pass down fetched data to these components? A: You can theoretically have a Suspense component in a page component, but it will be somewhat worthless. You won't get that page in the browser until the SSR has rendered it. And when it does, you'll get the full HTML, therefore having a Suspense does not add any value to the page component. Because on the front-end you wait for data to come to you in order to modify the DOM, Suspense might bring some value, but for the SSR, I don't see any reason to implement it. A: It is available with Next 13's app directory. Let's say we have two components, an Orders and a Users component:
// fetch users
const getUsers = async () => {
  const res = await fetch(UserURL)
  return res.json()
}
const Users = async () => {
  const users = await getUsers()
  return (
    <> jsx here</>
  )
}
Similarly the "Orders" component:
// fetch orders
const getOrders = async () => {
  const res = await fetch(OrderURL)
  return res.json()
}
const Orders = async () => {
  const orders = await getOrders()
  return (
    <> jsx here</>
  )
}
Let's say we have an Admin page and we render both components:
import {Suspense} from "react"
const Admin = () => {
  return (
    <>
      <Suspense fallback={<div>Loading Users</div>}>
        <Users/>
      </Suspense>
      <Suspense fallback={<div>Loading Orders</div>}>
        <Orders/>
      </Suspense>
    </>
  )
}
The documentation shows data fetching with the new fetch API. I think the fetch API somehow has to let React Suspense know that data fetching is complete so that React Suspense can render the main JSX. I am not sure if other libraries do that. Another thing: in the Next.js app directory, each route also has a loading.js file which will be automatically rendered while the main page is mounting. If the Users component takes 1 second to fetch, and the Orders component takes 2 seconds to fetch, the loading.js content will be shown for 2 seconds until our Admin page is fully loaded. But with Suspense, after 1 second the Users component will be displayed, while the Orders component shows its own loading fallback until its fetching is completed.
Next.js server-side data fetching within `Suspense` boundaries not supported yet
I'm reading the Next.js docs on how to use streaming server-rendering with React 18 and there's a section on data fetching that says the following: Data fetching within Suspense boundaries is currently only supported on the client side. Server-side data fetching is not supported yet. I understand that data can be fetched server-side only from pages, and the data would then be passed down to components through props. Does this mean we can't use getServerSideProps in pages that have components wrapped in Suspense, or just that we can't pass down fetched data to these components?
[ "You can theoretically have a Suspense component in a page component, but it will be somewhat worthless. You won't get that page in the browser until the SSR hasn't rendered it. And when it does you'll get the full HTML, therefore having a Suspense does not add any value to the page component.\nBecause on the front-end you wait for data to come to you in order to modify the DOM, Suspense might bring some value, but for the SSR, I don't see any reason to implement it.\n", "It is available with next13, app directory. Let's say we have two components, Orders and Users component\n// fetch users\nconst getUsers=async ()=>{\n const res=await fetch(UserURL)\n return res.json}\n\nconst User=()=>{\n const users=getUsers()\n return (\n <> jsx here</>\n )}\n\nSimilarly \"Orders\" Component\n // fetch orders\nconst getOrders=async ()=>{\n const res=await fetch(OrderURL)\n return res.json }\n\nconst Order=()=>{\n const orders=getOrders()\n return (\n <> jsx here</>\n ) }\n\nLet's say we have Admin Page and we render both components\nimport {Suspense} from \"react\"\n\nconst Admin=()=>{\n return(\n <>\n <Suspense fallback=(<div>Loading Users</div>)>\n <Users/>\n </Suspense>\n\n <Suspense fallback=(<div>Loading Orders</div>)>\n <Orders/>\n </Suspense>\n </>\n )}\n\nDocumentation shows data fetching with the new fetch API. I think somehow fetch api has to let React Suspense know that data fetching is complete so that React Suspense can render main jsx. I am not sure if other libraries do that\nAnother thing, in next.js app directory for each route, there is also loading.js file which will be automatically rendered while the main page is mounting. If Users component takes 1 second to fetch, and Orders component takes 2 seconds to fetch, loading.js content will be shown for 2 seconds till our Admin page is fully loaded. But with Suspense, after 1-second User component will be displayed, Orders component will be showing its own loading fallback till its fetching completed.\n" ]
[ 0, 0 ]
[]
[]
[ "next.js" ]
stackoverflow_0073178136_next.js.txt
Q: Problem with writing UnitTests when multiple classes are in same file I tried solving some coding challenges and ran into a problem, I copy the challenges and the tests to my pc and try to solve them, but this time I can't get the test to work properly and i assume the problem is that there are basically two top level classes in one file but only one of them is public (so the other one is ?default?) and since the test is in another package it cant find the second class. so the situation is basically File 1 package src.main.java; public class A{ public String doSomething(B objB){ return objB.getName(); } } class B{ private String name; public B(String name){ this.name=name; } public String getName(){ return name; } } File 2 package src.test.java; import org.junit.Test; import static org.junit.Assert.assertEquals; import src.main.java.A; public class ATest{ @Test public void test01() { assertEquals("Name1", A.doSomething(new B("Name1")); } } I tried making it an inner class, that worked, i also assume it works if I put the second class in its own file, and i assume it works if i put challenge and test file in the same package (class B is default visible): File 1 package src.main.java; public class A{ public String doSomething(B objB){ return objB.getName(); } //make class B a public static inner class of A public static class B{ private String name; public B(String name){ this.name=name; } public String getName(){ return name; } } } File 2 package src.test.java; import org.junit.Test; import static org.junit.Assert.assertEquals; import src.main.java.A; // I could also put class B in a seperate file import src.main.java.B; public class ATest{ @Test public void test01() { assertEquals("Name1", A.doSomething(new A.B("Name1"));//call the inner class } } Those work, but the guy who wrote the challenge didn't have to do that, did he just have the challenge file and the test file in the same package? Is there a way to make it work without making it an inner class, putting the second class in its own file or putting test and challenge in the same package? A: Bootstrap I quickly bootstrapped a Spring Boot application using https://start.spring.io/ (to get the test set-ups) in my IDE and replicated your example using both JUnit4 and JUnit5 and both tests are successful. Junit 4/5 test package com.example; public class A { public String doSomething(B objB){ return objB.getName(); } } class B{ private String name; public B(String name){ this.name=name; } public String getName(){ return name; } } package com.example; //Replace imports accordingly for Junit4 import static org.junit.jupiter.api.Assertions.assertEquals; import org.junit.jupiter.api.Test; public class ATest { @Test public void test01() { assertEquals("Name1", new A().doSomething(new B("Name1"))); } } build.gradle plugins { id 'java' id 'org.springframework.boot' version '3.0.0' id 'io.spring.dependency-management' version '1.1.0' } group = 'com.example' version = '0.0.1-SNAPSHOT' sourceCompatibility = '17' repositories { mavenCentral() } dependencies { implementation 'org.springframework.boot:spring-boot-starter-web' testImplementation 'org.springframework.boot:spring-boot-starter-test' testImplementation("org.junit.vintage:junit-vintage-engine") //This is for JUnit4 } tasks.named('test') { useJUnitPlatform() }
Problem with writing UnitTests when multiple classes are in same file
I tried solving some coding challenges and ran into a problem. I copy the challenges and the tests to my PC and try to solve them, but this time I can't get the test to work properly, and I assume the problem is that there are basically two top-level classes in one file but only one of them is public (so the other one is package-private), and since the test is in another package it can't find the second class. So the situation is basically:
File 1
package src.main.java;

public class A{
    public String doSomething(B objB){
        return objB.getName();
    }
}

class B{
    private String name;

    public B(String name){
        this.name=name;
    }

    public String getName(){
        return name;
    }
}
File 2
package src.test.java;

import org.junit.Test;
import static org.junit.Assert.assertEquals;
import src.main.java.A;

public class ATest{
    @Test
    public void test01() {
        assertEquals("Name1", A.doSomething(new B("Name1")));
    }
}
I tried making it an inner class; that worked. I also assume it works if I put the second class in its own file, and I assume it works if I put the challenge and test file in the same package (class B is default-visible):
File 1
package src.main.java;

public class A{
    public String doSomething(B objB){
        return objB.getName();
    }

    //make class B a public static inner class of A
    public static class B{
        private String name;

        public B(String name){
            this.name=name;
        }

        public String getName(){
            return name;
        }
    }
}
File 2
package src.test.java;

import org.junit.Test;
import static org.junit.Assert.assertEquals;
import src.main.java.A;
// I could also put class B in a separate file
import src.main.java.B;

public class ATest{
    @Test
    public void test01() {
        assertEquals("Name1", A.doSomething(new A.B("Name1")));//call the inner class
    }
}
Those work, but the guy who wrote the challenge didn't have to do that. Did he just have the challenge file and the test file in the same package? Is there a way to make it work without making it an inner class, putting the second class in its own file, or putting the test and challenge in the same package?
[ "Bootstrap\nI quickly bootstrapped a Spring Boot application using https://start.spring.io/ (to get the test set-ups) in my IDE and replicated your example using both JUnit4 and JUnit5 and both tests are successful.\nJunit 4/5 test\npackage com.example;\n\npublic class A {\n public String doSomething(B objB){\n return objB.getName();\n }\n}\n\nclass B{\n private String name;\n\n public B(String name){\n this.name=name;\n }\n\n public String getName(){\n return name;\n }\n}\n\npackage com.example;\n\n//Replace imports accordingly for Junit4\nimport static org.junit.jupiter.api.Assertions.assertEquals;\n\nimport org.junit.jupiter.api.Test;\n\npublic class ATest {\n @Test\n public void test01() {\n assertEquals(\"Name1\", new A().doSomething(new B(\"Name1\")));\n }\n}\n\nbuild.gradle\nplugins {\n id 'java'\n id 'org.springframework.boot' version '3.0.0'\n id 'io.spring.dependency-management' version '1.1.0'\n}\n\ngroup = 'com.example'\nversion = '0.0.1-SNAPSHOT'\nsourceCompatibility = '17'\n\nrepositories {\n mavenCentral()\n}\n\ndependencies {\n implementation 'org.springframework.boot:spring-boot-starter-web'\n testImplementation 'org.springframework.boot:spring-boot-starter-test'\n testImplementation(\"org.junit.vintage:junit-vintage-engine\") //This is for JUnit4\n}\n\ntasks.named('test') {\n useJUnitPlatform()\n}\n\n" ]
[ 0 ]
[]
[]
[ "inner_classes", "java", "package", "unit_testing" ]
stackoverflow_0074671967_inner_classes_java_package_unit_testing.txt
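The question itself ("did he just have the challenge file and the test file in the same package?") is worth answering directly. A likely explanation, though it is an assumption about the challenge author's setup: in the standard Maven/Gradle layout, src/main/java and src/test/java are source roots, not package names, so both files declare the same package and the package-private class B is visible to the test without any workaround:

src/main/java/challenge/A.java      -> package challenge;  // holds public A and package-private B
src/test/java/challenge/ATest.java  -> package challenge;  // same package, so it can use B directly

Declarations like "package src.main.java;" in the question treat the folder prefix as part of the package name, which is exactly what forces the test into a different package.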
Q: Modal not updating to new item in array,firebase/react native state my current issue with my react native app is that when a user wants to open a lesson (from the lessons array with each object being a lesson with a title,description,img url etc)to make it bigger through a modal, its state does not update. What i Mean by this is that the books title,description,and other attributes won't change if you press on a new lesson. What would be the solution to this? export default function Learn() { const [modalVisible, setModalVisible] = useState(false); const [lessons,setLessons] = useState() useEffect(() => { async function data() { try { let todos = [] const querySnapshot = await getDocs(collection(db, "lessons")); querySnapshot.forEach((doc) => { todos.push(doc.data()) }); setLessons(todos) console.log(lessons) } catch(E) { alert(E) } } data() }, []) return ( <View style={learnStyle.maincont}> <View> <Text style={{fontSize:28,marginTop:20}}>Courses</Text> <ScrollView style={{paddingBottom:200}}> {lessons && lessons.map((doc,key) => <> <Modal animationType="slide" transparent={true} visible={modalVisible} onRequestClose={() => { Alert.alert("Modal has been closed."); setModalVisible(!modalVisible); }} > <View style={styles.centeredView}> <View style={styles.modalView}> <Image source={{ uri:doc.imgURL }} style={{width:"100%",height:300}}/> <Text style={{fontWeight:"700",fontSize:25}}>{doc.title}</Text> <Text style={{fontWeight:"700",fontSize:16}}>{doc.desc}</Text> <Pressable style={[styles.button, styles.buttonClose]} onPress={() => setModalVisible(!modalVisible)} > <Text style={styles.textStyle}>Hide Modal</Text> </Pressable> </View> </View> </Modal> <LessonCard setModalVisible={setModalVisible} title={doc.title} desc={doc.desc} img1={doc.imgURL} modalVisible={modalVisible}/> </> )} <View style={{height:600,width:"100%"}}></View> </ScrollView> </View> </View> ) } What it looks like: **image 1 is before you press the modal and the 2nd one is after **the main issue though is that if you press cancel and press on another lesson the modal that opens has the the same state(title,imgurl,anddesc) as the first lesson and does not change. A: The problem is that you create a lot of modal windows through the map function, I suggest making one window and passing the key as a parameter and using it to search for a specific array of data that is shown to the user (photo, title, etc.) A: The problem is that all 3 Modals are controlled by the one state variable. So when the code sets modalVisible to true, all 3 modals are being opened at once. You can fix this in a few ways, but a simple way would be to move the Modal and its state into the LessonCard component. This way each modal will have its own state that's only opened by its card. So the loop in Learn will just be: {lessons && lessons.map((doc,key) => ( <LessonCard lesson={doc} key={key} /> )} Adding to address question in comments LessonCard should not accept setModalVisible or modalVisible props. The const [modalVisible, setModalVisible] = useState(false); should be inside LessonCard, not Learn. That way each Card/Modal pair will have its own state. Additionally, although React wants you to pass the key into LessonCard in the map function, LessonCard should not actually use the key prop for anything. See https://reactjs.org/docs/lists-and-keys.html#extracting-components-with-keys So, the LessonCard declaration should just be something like export default function LessonCard({lesson}) {
Modal not updating to new item in array,firebase/react native state
my current issue with my react native app is that when a user wants to open a lesson (from the lessons array, with each object being a lesson with a title, description, img url etc.) to make it bigger through a modal, its state does not update. What I mean by this is that the lesson's title, description, and other attributes won't change if you press on a new lesson. What would be the solution to this? export default function Learn() { const [modalVisible, setModalVisible] = useState(false); const [lessons,setLessons] = useState() useEffect(() => { async function data() { try { let todos = [] const querySnapshot = await getDocs(collection(db, "lessons")); querySnapshot.forEach((doc) => { todos.push(doc.data()) }); setLessons(todos) console.log(lessons) } catch(E) { alert(E) } } data() }, []) return ( <View style={learnStyle.maincont}> <View> <Text style={{fontSize:28,marginTop:20}}>Courses</Text> <ScrollView style={{paddingBottom:200}}> {lessons && lessons.map((doc,key) => <> <Modal animationType="slide" transparent={true} visible={modalVisible} onRequestClose={() => { Alert.alert("Modal has been closed."); setModalVisible(!modalVisible); }} > <View style={styles.centeredView}> <View style={styles.modalView}> <Image source={{ uri:doc.imgURL }} style={{width:"100%",height:300}}/> <Text style={{fontWeight:"700",fontSize:25}}>{doc.title}</Text> <Text style={{fontWeight:"700",fontSize:16}}>{doc.desc}</Text> <Pressable style={[styles.button, styles.buttonClose]} onPress={() => setModalVisible(!modalVisible)} > <Text style={styles.textStyle}>Hide Modal</Text> </Pressable> </View> </View> </Modal> <LessonCard setModalVisible={setModalVisible} title={doc.title} desc={doc.desc} img1={doc.imgURL} modalVisible={modalVisible}/> </> )} <View style={{height:600,width:"100%"}}></View> </ScrollView> </View> </View> ) } What it looks like: image 1 is before you press the modal and the 2nd one is after. The main issue though is that if you press cancel and press on another lesson, the modal that opens has the same state (title, imgURL, and desc) as the first lesson and does not change.
[ "The problem is that you create a lot of modal windows through the map function, I suggest making one window and passing the key as a parameter and using it to search for a specific array of data that is shown to the user (photo, title, etc.)\n", "The problem is that all 3 Modals are controlled by the one state variable. So when the code sets modalVisible to true, all 3 modals are being opened at once.\nYou can fix this in a few ways, but a simple way would be to move the Modal and its state into the LessonCard component. This way each modal will have its own state that's only opened by its card. So the loop in Learn will just be:\n {lessons && lessons.map((doc,key) => (\n <LessonCard lesson={doc} key={key} />\n )}\n\nAdding to address question in comments\nLessonCard should not accept setModalVisible or modalVisible props. The\nconst [modalVisible, setModalVisible] = useState(false);\n\nshould be inside LessonCard, not Learn. That way each Card/Modal pair will have its own state.\nAdditionally, although React wants you to pass the key into LessonCard in the map function, LessonCard should not actually use the key prop for anything. See https://reactjs.org/docs/lists-and-keys.html#extracting-components-with-keys\nSo, the LessonCard declaration should just be something like\nexport default function LessonCard({lesson}) {\n\n" ]
[ 1, 0 ]
[]
[]
[ "frontend", "javascript", "react_native", "reactjs" ]
stackoverflow_0074672372_frontend_javascript_react_native_reactjs.txt
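To make the second answer concrete, a minimal sketch of LessonCard owning its own modal state (prop and style details are illustrative; the lesson fields match the question's Firestore documents):

import React, { useState } from "react";
import { Modal, View, Text, Image, Pressable } from "react-native";

export default function LessonCard({ lesson }) {
  // Each card owns its modal, so opening one never affects the others
  const [visible, setVisible] = useState(false);
  return (
    <View>
      <Pressable onPress={() => setVisible(true)}>
        <Text>{lesson.title}</Text>
      </Pressable>
      <Modal
        animationType="slide"
        transparent={true}
        visible={visible}
        onRequestClose={() => setVisible(false)}
      >
        <View>
          <Image source={{ uri: lesson.imgURL }} style={{ width: "100%", height: 300 }} />
          <Text>{lesson.title}</Text>
          <Text>{lesson.desc}</Text>
          <Pressable onPress={() => setVisible(false)}>
            <Text>Hide Modal</Text>
          </Pressable>
        </View>
      </Modal>
    </View>
  );
}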
Q: AWS CLI list and sync buckets using external account credentials I need to resolve the An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied error when I run aws s3 ls <my bucket> using a key & secret from an external account. I have tried adding a bucket policy as well as creating an IAM role and policy for the external account ARN. I have read every solution and documentation I could find. Thank you. Workflow: I was provided a key & secret for an external AWS account. I need to sync my s3 bucket (source) with an s3 bucket in the external account (destination). I added a bucket policy to the source bucket in my account: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::<destination account arn>" }, "Action": "s3:GetObject", "Resource": "arn:aws:s3:::<my bucket name>/*" }, { "Effect": "Allow", "Principal": { "AWS": "arn:aws:<destination account arn>" }, "Action": "s3:ListBucket", "Resource": "arn:aws:s3:::<my bucket name>" } ] I run aws configure and authenticate with the destination account credentials. I run aws sts get-caller-identity and verify my identity. I run aws s3 ls <destination bucket name> successfully. I run aws s3 ls <my source bucket name> I receive An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied error. I have also created a role and policy for the external account but I receive an error when I try to assume the role: Assume role: aws sts assume-role --role-arn "<role arn>" --role-session-name session-name and I receive the error: An error occurred (AccessDenied) when calling the AssumeRole operation: User... ** UPDATE ** Updated policy with suggested solutions. I am still receiving the (AccessDenied) when calling the ListObjectsV2 operation: Access Denied error when I run aws s3 ls s3://<bucket name> while configured with the external IAM user account arn:aws:iam::<account>:user/username. { "Version": "2012-10-17", "Id": "Policy1546414473940", "Statement": [ { "Sid": "Stmt1546414471931", "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::<external IAM user account>:root", "arn:aws:iam::<external IAM user account>:user/username" ] }, "Action": "s3:ListBucket", "Resource": [ "arn:aws:s3:::<source bucket name>", "arn:aws:s3:::<source bucket name>/*" ] } ] A: If the credentials you have been given are associated with an IAM User, then you should use the ARN of that IAM User in the Principal, for example: "Principal":{ "AWS":"arn:aws:iam::111111111111:user/my-username" }, Alternatively, you can ask it to 'trust the other account', so that if the other account has given sufficient permission to access the bucket, then access will be granted: "Principal":{ "AWS":"arn:aws:iam::111111111111:root" }, Note the addition of :root at the end.
AWS CLI list and sync buckets using external account credentials
I need to resolve the An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied error when I run aws s3 ls <my bucket>using a key & secret from an external account. I have tried adding a bucket policy as well as creating an IAM role and policy for the external account ARN. I have read every solution and documentation I could find. Thank you. Workflow: I was provided a key & secret for an external AWS account. I need to sync my s3 bucket (source) with an s3 bucket in the external account (destination). I added a bucket policy to the source bucket in my account: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::<destination account arn>" }, "Action": "s3:GetObject", "Resource": "arn:aws:s3:::<my bucket name>/*" }, { "Effect": "Allow", "Principal": { "AWS": "arn:aws:<destination account arn>" }, "Action": "s3:ListBucket", "Resource": "arn:aws:s3:::<my bucket name>" } ] I run aws configure and authenticate with the destination account credentials. I run aws sts get-caller-identity and verify my identity. I run aws s3 ls <destination bucket name> successfully. I run aws s3 ls <my source bucket name> I receive An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied error. I have also created a role and policy for the external account but I receive an error when I try to assume the role: Assume role: aws sts assume-role --role-arn "<role arn>" --role-session-name session-name and I receive the error: An error occurred (AccessDenied) when calling the AssumeRole operation: User... ** UPDATE ** Updated policy with suggested solutions. I am still recieving the (AccessDenied) when calling the ListObjectsV2 operation: Access Denied error when I run aws s3 ls s3://<bucket name> while configured with the external IAM user account arn:aws:iam::<account>:user/username. { "Version": "2012-10-17", "Id": "Policy1546414473940", "Statement": [ { "Sid": "Stmt1546414471931", "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::<external IAM user account>:root", "arn:aws:iam::<external IAM user account>:user/username" ] }, "Action": "s3:ListBucket", "Resource": [ "arn:aws:s3:::<source bucket name>", "arn:aws:s3:::<source bucket name>/*" ] } ]
[ "If the credentials you have been given are associated with an IAM User, then you should **use the ARN of that IAM User in the Principal, for example:\n\"Principal\":{ \n \"AWS\":\"arn:aws:iam::111111111111:user/my-username\"\n },\n\nAlternatively, you can ask it to 'trust the other account', so that if the other account has given sufficient permission to access the bucket, then access will be granted:\n\"Principal\":{ \n \"AWS\":\"arn:aws:iam::111111111111/root\"\n },\n\nNote the addition of /root at the end.\n" ]
[ 0 ]
[]
[]
[ "amazon_iam", "amazon_s3", "amazon_web_services" ]
stackoverflow_0074671712_amazon_iam_amazon_s3_amazon_web_services.txt
Q: How to Get the font color of an Excel Cell in c# okay, so I'm trying to get the cell's font color as the program needs to do different things biased on the font color in the cell so I made a test file and I tried to access it like so: Range thrange = ws.UsedRange.Columns["A:A", Type.Missing].Rows; foreach (Range r in thrange) { Style sy = r.Style; Font font = sy.Font; ColorFormat color = (ColorFormat)font.Color; Console.WriteLine(" "+r.Value+" " + color.RGB); } I get Can not convert type 'double' to 'Microsoft.Office.Interop.Excel.ColorFormat' I saw people saying you set the color with a drawing object so I tried changing the last two lines to: Color color =(System.Drawing.Color)font.Color; Console.WriteLine(" "+r.Value+" " + color.ToArgb()); but that didn't work either Can not convert type 'double' to 'System.Drawing.Color' so I thought I'd see what this double is, then set the font to a known rgb value and work out how to convert the number I get back into that. but that didn't work either. as while Console.WriteLine(" "+r.Value+" "+r.style.font.color); didn't throw an error it still didn't give me anything useful: cyan 0 pink 0 blue 0 red 0 orange 0 purple 0 I thought maybe r.style.font.colorindex but that just gave me a 1 for everything instead of a 0 I was hopping for something like blue 0000ff red ff0000 I can't use a 3rd party libraries due to the rules set out by the project owner. So how do I get the actual color value out? A: Use the Font property of the Range not that of its Style. Then use the ColorTranslator class to convert the double value from Office to a .net Color. foreach (Microsoft.Office.Interop.Excel.Range r in thrange) { // Get Color from Font from Range directly int oleColor = Convert.ToInt32(r.Font.Color); // Convert to C# Color type System.Drawing.Color c = System.Drawing.ColorTranslator.FromOle(oleColor); // Output Console.WriteLine(r.Value + " " + c.ToString()); }
How to Get the font color of an Excel Cell in c#
okay, so I'm trying to get the cell's font color, as the program needs to do different things based on the font color in the cell, so I made a test file and I tried to access it like so: Range thrange = ws.UsedRange.Columns["A:A", Type.Missing].Rows; foreach (Range r in thrange) { Style sy = r.Style; Font font = sy.Font; ColorFormat color = (ColorFormat)font.Color; Console.WriteLine(" "+r.Value+" " + color.RGB); } I get Can not convert type 'double' to 'Microsoft.Office.Interop.Excel.ColorFormat' I saw people saying you set the color with a drawing object, so I tried changing the last two lines to: Color color =(System.Drawing.Color)font.Color; Console.WriteLine(" "+r.Value+" " + color.ToArgb()); but that didn't work either Can not convert type 'double' to 'System.Drawing.Color' so I thought I'd see what this double is, then set the font to a known RGB value and work out how to convert the number I get back into that. But that didn't work either, as while Console.WriteLine(" "+r.Value+" "+r.style.font.color); didn't throw an error, it still didn't give me anything useful: cyan 0 pink 0 blue 0 red 0 orange 0 purple 0 I thought maybe r.style.font.colorindex, but that just gave me a 1 for everything instead of a 0. I was hoping for something like blue 0000ff red ff0000 I can't use 3rd-party libraries due to the rules set out by the project owner. So how do I get the actual color value out?
[ "Use the Font property of the Range not that of its Style. Then use the ColorTranslator class to convert the double value from Office to a .net Color.\nforeach (Microsoft.Office.Interop.Excel.Range r in thrange)\n{\n // Get Color from Font from Range directly\n int oleColor = Convert.ToInt32(r.Font.Color);\n \n // Convert to C# Color type\n System.Drawing.Color c = System.Drawing.ColorTranslator.FromOle(oleColor);\n\n // Output\n Console.WriteLine(r.Value + \" \" + c.ToString());\n}\n\n" ]
[ 0 ]
[]
[]
[ "c#", "excel", "office_interop" ]
stackoverflow_0074672342_c#_excel_office_interop.txt
Q: Can not unmount a device using "umount" in Docker I don't know why, but umount is not working in Docker. umount: loop3/: must be superuser to umount Let me share one more thing: it creates loop3 under /mnt/loop3 on the real machine, which is the most unexpected thing for me, because Docker promises a pure virtual environment. Why? Any solution? Scenario: I created a Docker ubuntu:13.04 container to create a cross-compilation environment. Docker Linux machine (Ubuntu): Linux 626089eadfeb 3.10.45-1-lts #1 SMP Fri Jun 27 06:44:23 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux Linux machine (Arch Linux): Linux localhost 3.10.45-1-lts #1 SMP Fri Jun 27 06:44:23 UTC 2014 x86_64 GNU/Linux Docker Info Client version: 1.0.1 Client API version: 1.12 Go version (client): go1.3 Git commit (client): 990021a Server version: 1.0.1 Server API version: 1.12 Go version (server): go1.3 Git commit (server): 990021a
Can not unmount a device using "umount" in Docker
I don't know why, but umount is not working in Docker. umount: loop3/: must be superuser to umount Let me share one more thing: it creates loop3 under /mnt/loop3 on the real machine, which is the most unexpected thing for me, because Docker promises a pure virtual environment. Why? Any solution? Scenario: I created a Docker ubuntu:13.04 container to set up a cross-compilation environment. Docker Linux machine (ubuntu): Linux 626089eadfeb 3.10.45-1-lts #1 SMP Fri Jun 27 06:44:23 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux Linux Machine (Arch Linux): Linux localhost 3.10.45-1-lts #1 SMP Fri Jun 27 06:44:23 UTC 2014 x86_64 GNU/Linux Docker Info Client version: 1.0.1 Client API version: 1.12 Go version (client): go1.3 Git commit (client): 990021a Server version: 1.0.1 Server API version: 1.12 Go version (server): go1.3 Git commit (server): 990021a
[ "I found the solution : \nIn default docker run it's not a real Operating system as we expect. It doesn't have permissions to access the devices. So we have to use --privileged While running a docker.\nBy default, Docker containers are \"unprivileged\" and cannot, for example, run a Docker daemon inside a Docker container. This is because by default a container is not allowed to access any devices, but a \"privileged\" container is given access to all devices. \nWhen the operator executes docker run --privileged, Docker will enable to access to all devices on the host as well as set some configuration in AppArmor to allow the container nearly all the same access to the host as processes running outside containers on the host.\n", "In my case it was the mount command of linux but it is also valid for unmount.\nThe previous solution is valid but my scenario could not execute the command 'docker run' since it was being used in real time.\nThe command I wanted to mount :\nmount -o remount,size=5G /dev/shm\n\nSolution\nVerification in the docker :\n[docker]$ df -h\nFilesystem Size Used Avail Use% Mounted on\nshm 64M 4.0K 64M 1% /dev/shm\n[docker]$ exit\n\nWe look for the ID of our container :\n$ docker ps\nCONTAINER ID IMAGE \n<container_id> nameimage\n\nLet's memorize that beginning of the ID\nAll the following commands must be executed with sudo (use in the shortest possible time) :\n# sudo -i\n\nWe go to the folder that contains the docker :\n# By default\ncd /var/lib/docker/containers/ \n\nAnd we open the folder that begins on .\ncd <container_id>\n\nWe execute our command with the necessary parameters :\nmount -o remount,size=5G shm\n\n(Note: I do not remember if the command must be executed to show the file system)\nFinally we enter the Docker and verify that the values ​​have been updated correctly:\n[docker]$ df -h\nFilesystem Size Used Avail Use% Mounted on\nshm 5.0G 0 5.0G 0% /dev/shm\n\nSources\n\nhttps://github.com/docker/cli/issues/1278\n\n", "In my case it was a problem during starting docker service.\nA cause was not having enough space left on the hard drive.\nAfter cleaning up the disk and rebooting - docker service started successfully.\n", "Please don't follow the accepted answer --privileged give too much permission consider using --cap-add=CAP_SYS_ADMIN that guard many many function but less than privileged:\n\nPerform a range of system administration operations\nincluding: quotactl(2), mount(2), umount(2),\n\nHere we want the mount(2)/umount(2).\nIf you want to be safe you can cap-add and in the entrypoint drop this capability with prtcl (it depend on the language you use for C/python it is quite easy, maybe prctl has binding for your language).\n" ]
[ 14, 0, 0, 0 ]
[]
[]
[ "build", "docker" ]
stackoverflow_0024614513_build_docker.txt
Q: How to pin Youtube comments with python automatically I need to find a way to pin comments on YouTube automatically. I have checked the YouTube API v3 documentation, but it does not have this feature. Any ideas? A: To initialize the automatic mechanism, you first need to open your web browser's Web Developer Tools Network tab, then pin an ad hoc comment; you should notice an XHR request to the perform_comment_action endpoint. Right-click this request and copy it as cURL. Notice the last field, actions, in the JSON-encoded --data-raw argument. Decode this base64-encoded field, modify the first plaintext argument Ug...Ag to the comment id you want to pin, re-encode the field in base64, and then execute the cURL request, and that's it! Note that there is no need to modify any other parameter when pinning a comment on a different video than the one the ad hoc comment was posted on.
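A rough Python sketch of the decode-edit-re-encode step described in the answer; the values below (the captured actions string and both comment ids) are placeholder assumptions for illustration, not verified API details:

import base64

# Placeholders: paste in the "actions" value captured from the copied cURL
# request and the ids of the comments involved.
captured_actions = "..."               # base64 "actions" value from --data-raw
original_comment_id = b"Ug...Ag"       # id found in the decoded payload
target_comment_id = b"UgTargetIdAg"    # hypothetical id of the comment to pin

decoded = base64.b64decode(captured_actions)
# Swap the comment id inside the decoded payload, then re-encode it.
patched = decoded.replace(original_comment_id, target_comment_id)
new_actions = base64.b64encode(patched).decode()
print(new_actions)  # paste this back into --data-raw before replaying the cURL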
How to pin Youtube comments with python automatically
I need to find a way to pin comments on YouTube automatically. I have checked the YouTube API v3 documentation, but it does not have this feature. Any ideas?
[ "To initialize the automatic mechanism, you first need to open your web-browser Web Developer Tools Network tab, then pin an ad hoc comment, you should notice a XHR request to perform_comment_action endpoint. Right-click this request and copy it as cURL. Notice the last field actions in the JSON encoded --data-raw argument. Decode this base64 encoded field and modify the first plaintext argument Ug...Ag to the comment id you want to pin and re-encode the field in base64 and then execute the cURL request and that's it!\nNote that there is no need to modify any other parameter for pinning a comment on another video than the ad hoc comment is posted on.\n" ]
[ 0 ]
[]
[]
[ "api", "comments", "python", "youtube" ]
stackoverflow_0073444163_api_comments_python_youtube.txt
Q: Python - Adding Custom Values Into A Table From Web Scraping I wrote a basic web scraper that returns values into nested lists like the one below: results = [['a', 'b', 'c'], ['a', 'b', 'c'], ['a', 'b', 'c']] But I want to add 2 custom values when they get pushed into the lists, to look like something below: results = [['customvalue1', 'customvalue2', 'a', 'b', 'c'], ['customvalue1', 'customvalue2', 'a', 'b', 'c'], ['customvalue1', 'customvalue2', 'a', 'b', 'c']] Specifically, I want 'customvalue1' to be the current date in dd/mm/yyyy format, and 'customvalue2' to be a string that I define. I tried to create the custom values within the for loop right before the append method, but haven't had luck so far. A: Here is one way to achieve it in Python. import datetime # Define customvalue2 customvalue2 = "mystring" results = [] # Get current date today = datetime.datetime.now() # Loop over the data you want to add to the results list for data in [['a', 'b', 'c'], ['a', 'b', 'c'], ['a', 'b', 'c']]: # Format current date as dd/mm/yyyy customvalue1 = today.strftime("%d/%m/%Y") # Create a new list with the custom values and the data entry = [customvalue1, customvalue2] + data # Append the entry to the results list results.append(entry)
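For reference, a compact variant of the same idea as a list comprehension (a sketch; the scraped rows below are stand-ins for whatever the scraper actually returns):

import datetime

scraped = [['a', 'b', 'c'], ['a', 'b', 'c'], ['a', 'b', 'c']]  # stand-in for scraper output
customvalue1 = datetime.date.today().strftime("%d/%m/%Y")  # current date as dd/mm/yyyy
customvalue2 = "mystring"  # any user-defined string

# Prepend both custom values to every scraped row in one pass.
results = [[customvalue1, customvalue2, *row] for row in scraped]
print(results)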
Python - Adding Custom Values Into A Table From Web Scraping
I wrote a basic web scraper that returns values into nested lists like the one below: results = [['a', 'b', 'c'], ['a', 'b', 'c'], ['a', 'b', 'c']] But I want to add 2 custom values when they get pushed into the lists, to look like something below: results = [['customvalue1', 'customvalue2', 'a', 'b', 'c'], ['customvalue1', 'customvalue2', 'a', 'b', 'c'], ['customvalue1', 'customvalue2', 'a', 'b', 'c']] Specifically, I want 'customvalue1' to be the current date in dd/mm/yyyy format, and 'customvalue2' to be a string that I define. I tried to create the custom values within the for loop right before the append method, but haven't had luck so far.
[ "One of the ways to achieve it in the case of python.\nimport datetime\n\n# Define customvalue2\ncustomvalue2 = \"mystring\"\n\nresults = []\n\n# Get current date\ntoday = datetime.datetime.now()\n\n# Loop over the data you want to add to the results list\nfor data in [['a', 'b', 'c'], ['a', 'b', 'c'], ['a', 'b', 'c']]:\n # Format current date as dd/mm/yyyy\n customvalue1 = today.strftime(\"%d/%m/%Y\")\n\n # Create a new list with the custom values and the data\n entry = [customvalue1, customvalue2] + data\n\n # Append the entry to the results list\n results.append(entry)\n\n" ]
[ 0 ]
[]
[]
[ "list", "nested", "python", "web_scraping" ]
stackoverflow_0074672411_list_nested_python_web_scraping.txt
Q: Apps Script (Google Sheet) not allowing me to Run Script I have a fairly simple dataset in a Google Sheet. I created an AutoSort script. I saved it, and when I click "Run," I get the following errors. One from a pop-up, and another from the Execution Log. Pop-up error: Authorization required This project requires your permission to access your data. *For this error, there is a button to "Review Permissions" and I log in using my Google account, and then just nothing happens. Execution Log error: Warning This project requires access to your Google Account to run. Please try again and allow it this time. The owner of this Google Sheet is my personal Gmail account, and I am making these edits and created the script using my business Gmail Admin account. I also tried to access this sheet and run the script USING my personal Gmail account, and received the same error: Google hasn't verified this app The app is requesting access to sensitive info in your Google Account. Until the developer ({mypersonalemail}@gmail.com) verifies this app with Google, you shouldn't use it. Any insight as to how I can authorize this would be appreciated. It sounds like something small I'm missing. Also, in my personal email I receive a message with the subject: Review edits to your Apps Script project within your document, and it gives me links to access the worksheet and the script, but I don't see any way to approve the edits, or anything like that. Expected behavior: What I am expecting is for the script to run when I click "Run." A: It's not a good idea to mix accounts from different domains, especially when using a free account and a Google Workspace account like you have done, because that is the cause of the situation that you are facing. My hypothesis is that the Google Cloud default project linked to the bound script is created with the account used to create the project. If you need your personal account to be the spreadsheet owner, it is best to create the script using the personal account and, when needed, create a Google Cloud Standard project (GCSP) using the personal account. You might try to fix the problem with your spreadsheet and the current Apps Script project by creating a GCSP, as was mentioned previously, using the account that is the owner of the spreadsheet and linking it to the Apps Script project. Note: If your script is using sensitive scopes you might have to set the OAuth consent screen publishing status to testing and add your Google Workspace account as a tester. Ref: Setting up your OAuth consent screen Once you have finished the setup of your Google Apps Script project you should be able to use your Google Workspace account to update and run the Apps Script code, but any new deployment and new version should be done using your personal account. If you have access to shared drives and are allowed to use them for your spreadsheet, consider moving it to a shared drive, as this will make it a lot easier to manage your script.
Apps Script (Google Sheet) not allowing me to Run Script
I have a fairly simple dataset in a Google Sheet. I created an AutoSort script. I saved it, and when I click "Run," I get the following errors. One from a pop-up, and another from the Execution Log. Pop-up error: Authorization required This project requires your permission to access your data. *For this error, there is a button to "Review Permissions" and I log in using my Google account, and then just nothing happens. Execution Log error: Warning This project requires access to your Google Account to run. Please try again and allow it this time. The owner of this Google Sheet is my personal Gmail account, and I am making these edits and created the script using my business Gmail Admin account. I also tried to access this sheet and run the script USING my personal Gmail account, and received the same error: Google hasn't verified this app The app is requesting access to sensitive info in your Google Account. Until the developer ({mypersonalemail}@gmail.com) verifies this app with Google, you shouldn't use it. Any insight as to how I can authorize this would be appreciated. It sounds like something small I'm missing. Also, in my personal email I receive a message with the subject: Review edits to your Apps Script project within your document, and it gives me links to access the worksheet and the script, but I don't see any way to approve the edits, or anything like that. Expected behavior: What I am expecting is for the script to run when I click "Run."
[ "It's not a good idea to mix accounts from different domains, specially when using a free account and a Google Workspace account like you have done because that is the cause of the situation that you are facing.\nMy hypothesis is that the Google Cloud default project linked to the bounded script is created with the account used to create the project.\nIf you need that you personal account be the spreadsheet owner the best is to create the script using the personal account, and when needed, create a Google Cloud Standard project (GCSP) using the the personal account. You might try to fix the problem with your spreadsheet and the current Apps Script project by creating a GCSP, as was mentioned previously, by using the account that is the owner of the spreadsheet and linking it to the Apps Script project.\nNote: If your script is using sensitive scopes you might have to set the OAuth Consent Screen publishing status to tes and add your Google Workspace account as tester.\nRef: Setting up your OAuth consent screen\nOnce you have finished the setup of your Google Apps Script project you should be able to use your Google Workspace account to update and run the Apps Script code but any new deployment and new version should be done using your personal account.\nIf you have access to Shared Drives and are allowed to use them for your spreasheet, consider to move it to a Shared Drive as this will make a lot easier to manage your script.\n" ]
[ 0 ]
[]
[]
[ "authorization", "google_apps_script", "google_sheets" ]
stackoverflow_0074672351_authorization_google_apps_script_google_sheets.txt
Q: Treat Mmap as volatile in Rust I am trying to monitor a binary file containing a single 32-bit integer. I mapped the file into memory via MmapMut and read it in a forever loop: fn read(mmap: &MmapMut) { let mut i = u32::from_ne_bytes(mmap[0..4].try_into().unwrap()); loop { let j = u32::from_ne_bytes(mmap[0..4].try_into().unwrap()); if j != i { assert!(false); // this assert should fail, but it never does i = j; } } } However, the compiler seems to optimise the loop away, assuming neither i nor j can ever change. Is there a way to prevent this optimisation? A: You can use MmapRaw with read_volatile(). fn read(mmap: &MmapRaw) { let mut i = unsafe { mmap.as_ptr().cast::<u32>().read_volatile() }; loop { let j = unsafe { mmap.as_ptr().cast::<u32>().read_volatile() }; if j != i { assert!(false); i = j; } } }
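As an aside, here is a rough Python sketch of the same monitoring loop for comparison; CPython does not perform this kind of optimisation, so plain repeated reads of the mapping suffice there. The file name is a placeholder assumption:

import mmap
import struct

# "counter.bin" is a placeholder path to a file holding one 32-bit integer.
with open("counter.bin", "r+b") as f:
    mm = mmap.mmap(f.fileno(), 4)
    (i,) = struct.unpack("=I", mm[0:4])  # native byte order, like from_ne_bytes
    while True:
        (j,) = struct.unpack("=I", mm[0:4])  # every read really touches the mapping
        if j != i:
            print("value changed:", i, "->", j)
            i = j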
Treat Mmap as volatile in Rust
I am trying to monitor a binary file containing a single 32-bit integer. I mapped the file into memory via MmapMut and read it in a forever loop: fn read(mmap: &MmapMut) { let mut i = u32::from_ne_bytes(mmap[0..4].try_into().unwrap()); loop { let j = u32::from_ne_bytes(mmap[0..4].try_into().unwrap()); if j != i { assert!(false); // this assert should fail, but it never does i = j; } } } However, the compiler seems to optimise the loop away, assuming neither i nor j can ever change. Is there a way to prevent this optimisation?
[ "You can use MmapRaw with read_volatile().\nfn read(mmap: &MmapRaw) {\n let mut i = unsafe { mmap.as_ptr().cast::<u32>().read_volatile() };\n loop {\n let j = unsafe { mmap.as_ptr().cast::<u32>().read_volatile() };\n if j != i {\n assert!(false);\n i = j;\n }\n }\n}\n\n" ]
[ 1 ]
[]
[]
[ "rust" ]
stackoverflow_0074586750_rust.txt
Q: row and col classes are not expanding the entire available width unless using form or p elements I keep getting strange behaviors when using Bootstrap in the application I'm working on. This is the first time I encounter such behavior. Here's the HTML <div class="row" style="background-color: blue; padding: 2rem; width: 100%;"> <div class="col-6" style="background-color: yellow;"> <label for="useProdAddress"> <input type="checkbox" id="useProdAddress"> Use Producer Address </label> </div> <div class="col-6" style="background-color: orange;"> <button class="btn btn-outline-secondary"> Add Address </button> </div> </div> As shown in the above screenshot, the entire content occupies only about a third of the page. Adding width: 100% didn't make any difference. However, these do work: using px or rem for the width makes the div.row wider, as does using p or form elements <div class="col-6" style="background-color: orange;"> <p>Lorem ipsum dolor sit amet consectetur adipisicing elit...</p> </div> I don't know whether the reason is that I'm using the nav angular-bootstrap component. <div class="d-flex"> <ul ngbNav #nav="ngbNav" [(activeId)]="active" class="nav-pills" orientation="vertical"> ... <li ngbNavItem="locations"> <a ngbNavLink>Locations</a> <ng-template ngbNavContent> <app-locations></app-locations> </ng-template> </li> ... </ul> <div [ngbNavOutlet]="nav" class="ms-4"></div> Thanks for helping A: Try putting the width style first. <div class="row" style="width: 100%; background-color: blue; padding: 2rem;">
row and col classes are not expanding the entire available width unless using form or p elements
I keep getting strange behaviors when using Bootstrap in the application I'm working on. This is the first time I encounter such behavior. Here's the HTML <div class="row" style="background-color: blue; padding: 2rem; width: 100%;"> <div class="col-6" style="background-color: yellow;"> <label for="useProdAddress"> <input type="checkbox" id="useProdAddress"> Use Producer Address </label> </div> <div class="col-6" style="background-color: orange;"> <button class="btn btn-outline-secondary"> Add Address </button> </div> </div> As shown in the above screenshot, the entire content occupies only about a third of the page. Adding width: 100% didn't make any difference. However, these do work: using px or rem for the width makes the div.row wider, as does using p or form elements <div class="col-6" style="background-color: orange;"> <p>Lorem ipsum dolor sit amet consectetur adipisicing elit...</p> </div> I don't know whether the reason is that I'm using the nav angular-bootstrap component. <div class="d-flex"> <ul ngbNav #nav="ngbNav" [(activeId)]="active" class="nav-pills" orientation="vertical"> ... <li ngbNavItem="locations"> <a ngbNavLink>Locations</a> <ng-template ngbNavContent> <app-locations></app-locations> </ng-template> </li> ... </ul> <div [ngbNavOutlet]="nav" class="ms-4"></div> Thanks for helping
[ "Try putting the width style first. <div class=\"row\" style=\"width: 100%; background-color: blue; padding: 2rem;\"> \n" ]
[ 0 ]
[]
[]
[ "angular_bootstrap", "bootstrap_5", "css", "html" ]
stackoverflow_0074672198_angular_bootstrap_bootstrap_5_css_html.txt
Q: "detail": "CSRF Failed: CCSRF token missing." when sending post data from angular 13 to django connected database i need to send the post data from angular to DRF through angular form but geeting the error i checked almost all the answers available on the internet but did not found and useful answer. "detail": "CSRF Failed: CSRF token missing." //post logic sources.service.ts import { Injectable } from '@angular/core'; import { sources } from './sources'; import { HttpClient } from '@angular/common/http'; import { Observable , of, throwError } from 'rxjs'; import { catchError, retry } from 'rxjs/operators'; import { HttpHeaders } from '@angular/common/http'; const httpOptions = { headers: new HttpHeaders({ 'Content-Type': 'application/json', // Authorization: 'my-auth-token', cookieName: 'csrftoken', headerName: 'X-CSRFToken', // X-CSRFToken: 'sjd8q2x8hgjkvs1GJcOOcgnVGEkdP8f02shB', // headerName: 'X-CSRFToken', // headerName: , }) }; @Injectable({ providedIn: 'root' }) export class SourcesService { API_URL = 'http://127.0.0.1:8000/sourceapi.api'; constructor(private http: HttpClient) { } /** GET sources from the server */ Sources() : Observable<sources[]> { return this.http.get<sources[]>(this.API_URL); } /** POST: add a new source to the server */ // addSource(data: object) : Observable<object>{ // return this.http.post<object>(this.API_URL,data, httpOptions); // } addSource(source : sources[]): Observable<sources[]>{ return this.http.post<sources[]> (this.API_URL, source, httpOptions); //console.log(user); } } //add-source.component.ts import { Component, OnInit } from '@angular/core'; import { sources } from '../sources'; import { SourcesService } from '../sources.service'; import { FormGroup, FormControl, ReactiveFormsModule} from '@angular/forms'; @Component({ selector: 'app-add-source', templateUrl: './add-source.component.html', styleUrls: ['./add-source.component.css'] }) export class AddSourceComponent implements OnInit { // a form for entering and validating data sourceForm = new FormGroup({ name : new FormControl(), url : new FormControl(), client : new FormControl(), }); constructor(private sourcesService: SourcesService) { } ngOnInit(): void { } sourceData_post: any; saveSource(){ if(this.validate_form()){ this.sourceData_post = this.sourceForm.value; this.sourcesService.addSource(this.sourceData_post).subscribe((source)=>{ alert('source added'); }); } else{ alert('please fill from correctly'); } } validate_form(){ const formData = this.sourceForm.value; if(formData.name == null){ return false; }else if(formData.url == null){ return false; }else{ return true; } } } // add-source.component.html <div class="bread-crumb"> <div> <span>Add Source</span> </div> </div> <div class="container flex"> <div class="form"> <form action="" [formGroup]="sourceForm" (ngSubmit)="saveSource()"> <table> <tr> <td>Source Name:</td> <td> <input class="input" type="text" formControlName="name"> </td> </tr> <tr> <td>Source URL:</td> <td> <input class="input" type="text" formControlName="url"> </td> </tr> <tr> <td>Source client:</td> <td> <input class="input" type="text" formControlName="client"> </td> </tr> <tr> <td colspan="2"> <div class="center"> <button type="submit">submit</button> </div> </td> </tr> </table> </form> </div> </div> i tried imports: [ BrowserModule, AppRoutingModule, HttpClientModule, Ng2SearchPipeModule, FormsModule, ReactiveFormsModule, HttpClientXsrfModule, HttpClientXsrfModule.withOptions({ cookieName: 'XSRF-TOKEN', headerName: 'X-XSRF-TOKEN', }) but did not help Note :- 
this is angular 13 A: (Partial answer) You get this error message because the CSRF protection is activated by default and you don't send the CSRF token. Someone wrote a good description of what CSRF is here On the first GET request, the server sends you the CSRF token in a cookie, and you have to send it back on every request, as a cookie AND as a request header. The server will check that the CSRF value in the cookie matches with the CSRF value that is in the header. It can be tedious to repeat that on every request so Angular has a builtin module for that : HttpClientXsrfModule that you configured here : HttpClientXsrfModule.withOptions({ cookieName: 'XSRF-TOKEN', headerName: 'X-XSRF-TOKEN', }) One problem is that you override this behavior by setting again the header by hand here : const httpOptions = { headers: new HttpHeaders({ 'Content-Type': 'application/json', cookieName: 'csrftoken', headerName: 'X-CSRFToken', }) }; [...] addSource(source : sources[]): Observable<sources[]>{ return this.http.post<sources[]> (this.API_URL, source, httpOptions); You don't need that. Just leave it like this : addSource(source : sources[]): Observable<sources[]>{ return this.http.post<sources[]> (this.API_URL, source); Another problem is that the name for the CSRF header/cookie is not standard. It can be CSRF, XSRF, or whatever you want. Of course, if you send it as CSRF and the server expects it as XSRF, it will not be detected. As I can see from the comments on the question, the server sends you that Set-Cookie: csrftoken=sjd8q2xsdfgfhjgfnVGEkdP8f02shB So we are sure that the cookie name is csrftoken. So it should be the same in the configuration of the HttpClientXsrfModule. Can you try like this HttpClientXsrfModule.withOptions({ cookieName: 'csrftoken', // << This one is certain headerName: 'X-XSRF-TOKEN', // << For this one, I don't know yet }) Can you try this with different values for the headerName ? Preferably csrftoken also ? header name and cookie name are often the same. Update : According to the Django documentation, the default CSRF header name is HTTP_X_CSRFTOKEN. So you can try this : HttpClientXsrfModule.withOptions({ cookieName: 'csrftoken', headerName: 'HTTP_X_CSRFTOKEN', }) A: The logic on the front-end side was correct, the reason for showing csrf token missing was from the Django rest framework. once i removed the @api_view from my views.py and returned the json response it worked.
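A minimal sketch of the accepted workaround (dropping DRF's @api_view and returning JSON directly from a plain Django view), with placeholder names; the csrf_exempt decorator shown here disables CSRF checking for the endpoint and should only be used when you understand the security trade-off:

import json

from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt

@csrf_exempt  # skip CSRF validation for this endpoint (placeholder approach)
def source_api(request):  # placeholder view name
    if request.method == "POST":
        data = json.loads(request.body)
        # ... validate and persist the posted source here ...
        return JsonResponse({"status": "created", "received": data})
    return JsonResponse({"status": "ok"})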
"detail": "CSRF Failed: CCSRF token missing." when sending post data from angular 13 to django connected database
I need to send the post data from Angular to DRF through an Angular form but am getting the error below. I checked almost all the answers available on the internet but did not find any useful answer. "detail": "CSRF Failed: CSRF token missing." //post logic sources.service.ts import { Injectable } from '@angular/core'; import { sources } from './sources'; import { HttpClient } from '@angular/common/http'; import { Observable , of, throwError } from 'rxjs'; import { catchError, retry } from 'rxjs/operators'; import { HttpHeaders } from '@angular/common/http'; const httpOptions = { headers: new HttpHeaders({ 'Content-Type': 'application/json', // Authorization: 'my-auth-token', cookieName: 'csrftoken', headerName: 'X-CSRFToken', // X-CSRFToken: 'sjd8q2x8hgjkvs1GJcOOcgnVGEkdP8f02shB', // headerName: 'X-CSRFToken', // headerName: , }) }; @Injectable({ providedIn: 'root' }) export class SourcesService { API_URL = 'http://127.0.0.1:8000/sourceapi.api'; constructor(private http: HttpClient) { } /** GET sources from the server */ Sources() : Observable<sources[]> { return this.http.get<sources[]>(this.API_URL); } /** POST: add a new source to the server */ // addSource(data: object) : Observable<object>{ // return this.http.post<object>(this.API_URL,data, httpOptions); // } addSource(source : sources[]): Observable<sources[]>{ return this.http.post<sources[]> (this.API_URL, source, httpOptions); //console.log(user); } } //add-source.component.ts import { Component, OnInit } from '@angular/core'; import { sources } from '../sources'; import { SourcesService } from '../sources.service'; import { FormGroup, FormControl, ReactiveFormsModule} from '@angular/forms'; @Component({ selector: 'app-add-source', templateUrl: './add-source.component.html', styleUrls: ['./add-source.component.css'] }) export class AddSourceComponent implements OnInit { // a form for entering and validating data sourceForm = new FormGroup({ name : new FormControl(), url : new FormControl(), client : new FormControl(), }); constructor(private sourcesService: SourcesService) { } ngOnInit(): void { } sourceData_post: any; saveSource(){ if(this.validate_form()){ this.sourceData_post = this.sourceForm.value; this.sourcesService.addSource(this.sourceData_post).subscribe((source)=>{ alert('source added'); }); } else{ alert('please fill the form correctly'); } } validate_form(){ const formData = this.sourceForm.value; if(formData.name == null){ return false; }else if(formData.url == null){ return false; }else{ return true; } } } // add-source.component.html <div class="bread-crumb"> <div> <span>Add Source</span> </div> </div> <div class="container flex"> <div class="form"> <form action="" [formGroup]="sourceForm" (ngSubmit)="saveSource()"> <table> <tr> <td>Source Name:</td> <td> <input class="input" type="text" formControlName="name"> </td> </tr> <tr> <td>Source URL:</td> <td> <input class="input" type="text" formControlName="url"> </td> </tr> <tr> <td>Source client:</td> <td> <input class="input" type="text" formControlName="client"> </td> </tr> <tr> <td colspan="2"> <div class="center"> <button type="submit">submit</button> </div> </td> </tr> </table> </form> </div> </div> I tried imports: [ BrowserModule, AppRoutingModule, HttpClientModule, Ng2SearchPipeModule, FormsModule, ReactiveFormsModule, HttpClientXsrfModule, HttpClientXsrfModule.withOptions({ cookieName: 'XSRF-TOKEN', headerName: 'X-XSRF-TOKEN', }) but it did not help. Note: this is Angular 13
[ "(Partial answer)\nYou get this error message because the CSRF protection is activated by default and you don't send the CSRF token. Someone wrote a good description of what CSRF is here\nOn the first GET request, the server sends you the CSRF token in a cookie, and you have to send it back on every request, as a cookie AND as a request header. The server will check that the CSRF value in the cookie matches with the CSRF value that is in the header.\nIt can be tedious to repeat that on every request so Angular has a builtin module for that : HttpClientXsrfModule that you configured here :\nHttpClientXsrfModule.withOptions({\n cookieName: 'XSRF-TOKEN',\n headerName: 'X-XSRF-TOKEN',\n})\n\nOne problem is that you override this behavior by setting again the header by hand here :\nconst httpOptions = {\n headers: new HttpHeaders({\n 'Content-Type': 'application/json',\n cookieName: 'csrftoken',\n headerName: 'X-CSRFToken',\n })\n};\n[...]\n\naddSource(source : sources[]): Observable<sources[]>{\n return this.http.post<sources[]> (this.API_URL, source, httpOptions);\n\nYou don't need that. Just leave it like this :\naddSource(source : sources[]): Observable<sources[]>{\n return this.http.post<sources[]> (this.API_URL, source);\n\nAnother problem is that the name for the CSRF header/cookie is not standard. It can be CSRF, XSRF, or whatever you want. Of course, if you send it as CSRF and the server expects it as XSRF, it will not be detected.\nAs I can see from the comments on the question, the server sends you that\n\nSet-Cookie: csrftoken=sjd8q2xsdfgfhjgfnVGEkdP8f02shB\n\nSo we are sure that the cookie name is csrftoken. So it should be the same in the configuration of the HttpClientXsrfModule. Can you try like this\nHttpClientXsrfModule.withOptions({\n cookieName: 'csrftoken', // << This one is certain\n headerName: 'X-XSRF-TOKEN', // << For this one, I don't know yet\n})\n\nCan you try this with different values for the headerName ? Preferably csrftoken also ? header name and cookie name are often the same.\nUpdate :\nAccording to the Django documentation, the default CSRF header name is HTTP_X_CSRFTOKEN. So you can try this :\nHttpClientXsrfModule.withOptions({\n cookieName: 'csrftoken',\n headerName: 'HTTP_X_CSRFTOKEN',\n})\n\n", "The logic on the front-end side was correct, the reason for showing csrf token missing was from the Django rest framework.\nonce i removed the @api_view from my views.py and returned the json response it worked.\n" ]
[ 0, 0 ]
[ "you need to exempt csrf in views.py\nfrom django.views.decorators.csrf import csrf_exempt\n\nand then\n@csrf_exempt\ndef index(request):\npass\n\n" ]
[ -1 ]
[ "angular", "angular_fullstack", "csrf", "django", "python" ]
stackoverflow_0074598711_angular_angular_fullstack_csrf_django_python.txt
Q: FileOpenPicker not working in C# on Windows 11 Desktop I am creating a Windows Forms app in VS on Windows 11 and get this error when I attempt to run my file picker function: System.NotImplementedException: 'The member IAsyncOperation<IReadOnlyList> FileOpenPicker.PickMultipleFilesAsync() is not implemented in Uno.' The Code: var picker = new FileOpenPicker(); picker.ViewMode = PickerViewMode.Thumbnail; picker.SuggestedStartLocation = PickerLocationId.PicturesLibrary; picker.FileTypeFilter.Add(".png"); var files = await picker.PickMultipleFilesAsync(); if(files!=null){//stuff n things} A: Windows 11 has its own method for opening files. FileOpenPicker will not work in the new app environment. How To Open Files
FileOpenPicker not working in C# on Windows 11 Desktop
I am creating a Windows Forms app in VS on Windows 11 and get this error when I attempt to run my file picker function: System.NotImplementedException: 'The member IAsyncOperation<IReadOnlyList> FileOpenPicker.PickMultipleFilesAsync() is not implemented in Uno.' The Code: var picker = new FileOpenPicker(); picker.ViewMode = PickerViewMode.Thumbnail; picker.SuggestedStartLocation = PickerLocationId.PicturesLibrary; picker.FileTypeFilter.Add(".png"); var files = await picker.PickMultipleFilesAsync(); if(files!=null){//stuff n things}
[ "Windows 11 has it's own method for opening files. FilePicker will not work in the new App environtment.\nHow To Open Files\n" ]
[ 0 ]
[]
[]
[ "fileopenpicker", "uno", "visual_studio", "windows_11" ]
stackoverflow_0074671505_fileopenpicker_uno_visual_studio_windows_11.txt
Q: multiprocessing vs multithreading vs asyncio I found that in Python 3.4 there are a few different libraries for multiprocessing/threading: multiprocessing vs threading vs asyncio. But I don't know which one to use or which is the "recommended one". Do they do the same thing, or are they different? If so, which one is used for what? I want to write a program that uses multiple cores in my computer. But I don't know which library I should learn. A: TL;DR Making the Right Choice: We have walked through the most popular forms of concurrency. But the question remains - when should you choose which one? It really depends on the use cases. From my experience (and reading), I tend to follow this pseudo code: if io_bound: if io_very_slow: print("Use Asyncio") else: print("Use Threads") else: print("Multi Processing") CPU Bound => Multi Processing I/O Bound, Fast I/O, Limited Number of Connections => Multi Threading I/O Bound, Slow I/O, Many connections => Asyncio Reference [NOTE]: If you have a long call method (e.g. a method containing a sleep time or lazy I/O), the best choice is the asyncio, Twisted or Tornado approach (coroutine methods), which works with a single thread for concurrency. asyncio works on Python 3.4 and later. Tornado and Twisted have been ready since Python 2.7. uvloop is an ultra fast asyncio event loop (uvloop makes asyncio 2-4x faster). [UPDATE (2019)]: Japranto (GitHub) is a very fast pipelining HTTP server based on uvloop. A: They are intended for (slightly) different purposes and/or requirements. CPython (a typical, mainline Python implementation) still has the global interpreter lock, so a multi-threaded application (a standard way to implement parallel processing nowadays) is suboptimal. That's why multiprocessing may be preferred over threading. But not every problem may be effectively split into [almost independent] pieces, so there may be a need for heavy interprocess communication. That's why multiprocessing may not be preferred over threading in general. asyncio (this technique is available not only in Python; other languages and/or frameworks also have it, e.g. Boost.ASIO) is a method to effectively handle a lot of I/O operations from many simultaneous sources without the need for parallel code execution. So it's just a solution (a good one indeed!) for a particular task, not for parallel processing in general. A: In multiprocessing you leverage multiple CPUs to distribute your calculations. Since each of the CPUs runs in parallel, you're effectively able to run multiple tasks simultaneously. You would want to use multiprocessing for CPU-bound tasks. An example would be trying to calculate a sum of all elements of a huge list. If your machine has 8 cores, you can "cut" the list into 8 smaller lists and calculate the sum of each of those lists separately on a separate core and then just add up those numbers. You'll get a ~8x speedup by doing that. In (multi)threading you don't need multiple CPUs. Imagine a program that sends lots of HTTP requests to the web. If you used a single-threaded program, it would stop the execution (block) at each request, wait for a response, and then continue once it received a response. The problem here is that your CPU isn't really doing work while waiting for some external server to do the job; it could have actually done some useful work in the meantime! The fix is to use threads - you can create many of them, each responsible for requesting some content from the web.
The nice thing about threads is that, even if they run on one CPU, the CPU from time to time "freezes" the execution of one thread and jumps to executing the other one (it's called context switching and it happens constantly at non-deterministic intervals). So if your task is I/O bound - use threading. asyncio is essentially threading where not the CPU but you, as a programmer (or actually your application), decide where and when the context switch happens. In Python you use an await keyword to suspend the execution of your coroutine (defined using the async keyword). A: This is the basic idea: Is it IO-BOUND ? -----------> USE asyncio IS IT CPU-HEAVY ? ---------> USE multiprocessing ELSE ? ----------------------> USE threading So basically stick to threading unless you have IO/CPU problems. A: Many of the answers suggest how to choose only 1 option, but why not be able to use all 3? In this answer I explain how you can use asyncio to manage combining all 3 forms of concurrency instead, as well as easily swap between them later if need be. The short answer Many developers that are first-timers to concurrency in Python will end up using multiprocessing.Process and threading.Thread. However, these are the low-level APIs which have been merged together by the high-level API provided by the concurrent.futures module. Furthermore, spawning processes and threads has overhead, such as requiring more memory, a problem which plagued one of the examples I show below. To an extent, concurrent.futures manages this for you, so that you cannot as easily do something like spawn a thousand processes and crash your computer; it only spawns a few processes and then just re-uses those processes each time one finishes. These high-level APIs are provided through concurrent.futures.Executor, which are then implemented by concurrent.futures.ProcessPoolExecutor and concurrent.futures.ThreadPoolExecutor. In most cases, you should use these over multiprocessing.Process and threading.Thread, because it's easier to change from one to the other in the future when you use concurrent.futures and you don't have to learn the detailed differences of each. Since these share a unified interface, you'll also find that code using multiprocessing or threading will often use concurrent.futures. asyncio is no exception to this, and provides a way to use it via the following code: import asyncio from concurrent.futures import Executor from functools import partial from typing import Any, Callable, Optional, TypeVar T = TypeVar("T") async def run_in_executor( executor: Optional[Executor], func: Callable[..., T], /, *args: Any, **kwargs: Any, ) -> T: """ Run `func(*args, **kwargs)` asynchronously, using an executor. If the executor is None, use the default ThreadPoolExecutor. """ return await asyncio.get_running_loop().run_in_executor( executor, partial(func, *args, **kwargs), ) # Example usage for running `print` in a thread. async def main(): await run_in_executor(None, print, "O" * 100_000) asyncio.run(main()) In fact it turns out that using threading with asyncio was so common that in Python 3.9 they added asyncio.to_thread(func, *args, **kwargs) to shorten it for the default ThreadPoolExecutor. The long answer Are there any disadvantages to this approach? Yes. With asyncio, the biggest disadvantage is that asynchronous functions aren't the same as synchronous functions. This can trip up new users of asyncio a lot and cause a lot of rework to be done if you didn't start programming with asyncio in mind from the beginning.
Another disadvantage is that users of your code will also be forced to use asyncio. All of this necessary rework will often leave first-time asyncio users with a really sour taste in their mouth. Are there any non-performance advantages to this? Yes. Similar to how using concurrent.futures is advantageous over threading.Thread and multiprocessing.Process for its unified interface, this approach can be considered a further abstraction from an Executor to an asynchronous function. You can start off using asyncio, and if later you find a part of it you need threading or multiprocessing for, you can use asyncio.to_thread or run_in_executor. Likewise, you may later discover that an asynchronous version of what you're trying to run with threading already exists, so you can easily step back from using threading and switch to asyncio instead. Are there any performance advantages to this? Yes... and no. Ultimately it depends on the task. In some cases, it may not help (though it likely does not hurt), while in other cases it may help a lot. The rest of this answer provides some explanations as to why using asyncio to run an Executor may be advantageous. - Combining multiple executors and other asynchronous code asyncio essentially provides significantly more control over concurrency, at the cost of needing to take more control of the concurrency yourself. If you want to simultaneously run some code using a ThreadPoolExecutor alongside some other code using a ProcessPoolExecutor, it is not so easy managing this using synchronous code, but it is very easy with asyncio. import asyncio from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor async def with_processing(): with ProcessPoolExecutor() as executor: tasks = [...] for task in asyncio.as_completed(tasks): result = await task ... async def with_threading(): with ThreadPoolExecutor() as executor: tasks = [...] for task in asyncio.as_completed(tasks): result = await task ... async def main(): await asyncio.gather(with_processing(), with_threading()) asyncio.run(main()) How does this work? Essentially asyncio asks the executors to run their functions. Then, while an executor is running, asyncio will go run other code. For example, the ProcessPoolExecutor starts a bunch of processes, and then while waiting for those processes to finish, the ThreadPoolExecutor starts a bunch of threads. asyncio will then check in on these executors and collect their results when they are done. Furthermore, if you have other code using asyncio, you can run it while waiting for the processes and threads to finish. - Narrowing in on what sections of code need executors It is not common that you will have many executors in your code, but a common problem that I have seen when people use threads/processes is that they shove the entirety of their code into a thread/process, expecting it to work. For example, I once saw the following code (approximately): from concurrent.futures import ThreadPoolExecutor import requests def get_data(url): return requests.get(url).json()["data"] urls = [...] with ThreadPoolExecutor() as executor: for data in executor.map(get_data, urls): print(data) The funny thing about this piece of code is that it was slower with concurrency than without. Why? Because the resulting json was large, and having many threads consume a huge amount of memory was disastrous. Luckily the solution was simple: from concurrent.futures import ThreadPoolExecutor import requests urls = [...]
with ThreadPoolExecutor() as executor: for response in executor.map(requests.get, urls): print(response.json()["data"]) Now only one json is loaded into memory at a time, and everything is fine. The lesson here? You shouldn't try to just slap all of your code into threads/processes; you should instead focus in on what part of the code actually needs concurrency. But what if get_data was not a function as simple as in this case? What if we had to apply the executor somewhere deep in the middle of the function? This is where asyncio comes in: import asyncio import requests async def get_data(url): # A lot of code. ... # The specific part that needs threading. response = await asyncio.to_thread(requests.get, url, some_other_params) # A lot of code. ... return data urls = [...] async def main(): tasks = [get_data(url) for url in urls] for task in asyncio.as_completed(tasks): data = await task print(data) asyncio.run(main()) Attempting the same with concurrent.futures is by no means pretty. You could use things such as callbacks, queues, etc., but it would be significantly harder to manage than basic asyncio code. A: Already a lot of good answers. I can't elaborate more on when to use each one. This is more of an interesting combination of the two: multiprocessing + asyncio: https://pypi.org/project/aiomultiprocess/. The use case for which it was designed was high I/O, while still utilizing as many of the available cores as possible. Facebook used this library to write some kind of Python-based file server. Asyncio allows for I/O-bound traffic, while multiprocessing allows multiple event loops and threads on multiple cores. Example code from the repo: import asyncio from aiohttp import request from aiomultiprocess import Pool async def get(url): async with request("GET", url) as response: return await response.text("utf-8") async def main(): urls = ["https://jreese.sh", ...] async with Pool() as pool: async for result in pool.map(get, urls): ... # process result if __name__ == '__main__': # Python 3.7 asyncio.run(main()) # Python 3.6 # loop = asyncio.get_event_loop() # loop.run_until_complete(main()) Just an addition here: this would not work very well in, say, a Jupyter notebook, as the notebook already has an asyncio loop running. Just a little note so you don't pull your hair out. A: Multiprocessing can run in parallel. Multithreading and asyncio cannot run in parallel. With an Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz and 32.0 GB RAM, I timed counting how many prime numbers there are between 2 and 100000 with 2 processes, 2 threads and 2 asyncio tasks, as shown below. *This is a CPU-bound calculation: Multiprocessing Multithreading asyncio 23.87 seconds 45.24 seconds 44.77 seconds Because multiprocessing can run in parallel, it is about twice as fast as multithreading and asyncio, as shown above. I used 3 sets of code below: Multiprocessing: # "process_test.py" from multiprocessing import Process import time start_time = time.time() def test(): num = 100000 primes = 0 for i in range(2, num + 1): for j in range(2, i): if i % j == 0: break else: primes += 1 print(primes) if __name__ == "__main__": # This is needed to run processes on Windows process_list = [] for _ in range(0, 2): # 2 processes process = Process(target=test) process_list.append(process) for process in process_list: process.start() for process in process_list: process.join() print(round((time.time() - start_time), 2), "seconds") # 23.87 seconds Result: ...
9592 9592 23.87 seconds Multithreading: # "thread_test.py" from threading import Thread import time start_time = time.time() def test(): num = 100000 primes = 0 for i in range(2, num + 1): for j in range(2, i): if i % j == 0: break else: primes += 1 print(primes) thread_list = [] for _ in range(0, 2): # 2 threads thread = Thread(target=test) thread_list.append(thread) for thread in thread_list: thread.start() for thread in thread_list: thread.join() print(round((time.time() - start_time), 2), "seconds") # 45.24 seconds Result: ... 9592 9592 45.24 seconds Asyncio: # "asyncio_test.py" import asyncio import time start_time = time.time() async def test(): num = 100000 primes = 0 for i in range(2, num + 1): for j in range(2, i): if i % j == 0: break else: primes += 1 print(primes) async def call_tests(): tasks = [] for _ in range(0, 2): # 2 asyncio tasks tasks.append(test()) await asyncio.gather(*tasks) asyncio.run(call_tests()) print(round((time.time() - start_time), 2), "seconds") # 44.77 seconds Result: ... 9592 9592 44.77 seconds A: Multiprocessing Each process has its own Python interpreter and can run on a separate core of a processor. Python multiprocessing is a package that supports spawning processes using an API similar to the threading module. The multiprocessing package offers true parallelism, effectively side-stepping the Global Interpreter Lock by using subprocesses instead of threads. Use multiprocessing when you have CPU-intensive tasks. Multithreading Python multithreading allows you to spawn multiple threads within the process. These threads can share the same memory and resources of the process. In CPython, due to the global interpreter lock, only a single thread can run at any given time, hence you cannot utilize multiple cores. Multithreading in Python does not offer true parallelism due to the GIL limitation. Asyncio Asyncio works on co-operative multitasking concepts. Asyncio tasks run on the same thread, so there is no parallelism, but it provides better control to the developer instead of the OS, which is the case in multithreading. There is a nice discussion on this link regarding the advantages of asyncio over threads. There is a nice blog by Lei Mao on Python concurrency here Multiprocessing VS Threading VS AsyncIO in Python Summary A: I'm not a professional Python user, but as a student in computer architecture I think I can share some of my considerations when choosing between multiprocessing and multithreading. Besides, some of the other answers (even among those with higher votes) are misusing technical terminology, so I think it's also necessary to make some clarification on those as well, and I'll do it first. The fundamental difference between multiprocessing and multithreading is whether they share the same memory space. Threads share access to the same virtual memory space, so it is efficient and easy for threads to exchange their computation results (zero copy, and totally user-space execution). Processes on the other hand have separate virtual memory spaces. They cannot directly read or write the other process' memory space, just like a person cannot read or alter the mind of another person without talking to him. (Allowing so would be a violation of memory protection and defeat the purpose of using virtual memory.) To exchange data between processes, they have to rely on the operating system's facility (e.g. message passing), and for more than one reason this is more costly to do than the "shared memory" scheme used by threads.
One reason is that invoking the OS' message passing mechanism requires making a system call, which will switch the code execution from user mode to kernel mode, which is time consuming; another reason is likely that the OS message passing scheme will have to copy the data bytes from the sender's memory space to the receiver's memory space, so there is a non-zero copy cost. It is incorrect to say a multithreaded program can only use one CPU. The reason why many people say so is due to an artifact of the CPython implementation: the global interpreter lock (GIL). Because of the GIL, threads in a CPython process are serialized. As a result, it appears that the multithreaded Python program only uses one CPU. But multithreaded computer programs in general are not restricted to one core, and for Python, implementations that do not use the GIL can indeed run many threads concurrently, that is, run on more than one CPU at the same time. (See https://wiki.python.org/moin/GlobalInterpreterLock). Given that CPython is the predominant implementation of Python, it's understandable why multithreaded Python programs are commonly equated to being bound to a single core. With Python with the GIL, the only way to unleash the power of multicores is to use multiprocessing (there are exceptions to this, as mentioned below). But your problem had better be easily partitionable into parallel sub-problems that have minimal intercommunication; otherwise a lot of inter-process communication will have to take place, and as explained above, the overhead of using the OS' message passing mechanism will be costly, sometimes so costly the benefits of parallel processing are totally offset. If the nature of your problem requires intense communication between concurrent routines, multithreading is the natural way to go. Unfortunately with CPython, true, effectively concurrent multithreading is not possible due to the GIL. In this case you should realize Python is not the optimal tool for your project and consider using another language. There is one alternative solution: implement the concurrent processing routines in an external library written in C (or another language), and import that module into Python. The CPython GIL will not bother to block the threads spawned by that external library. So, with the burdens of the GIL, is multithreading in CPython any good? It still offers benefits though, as other answers have mentioned, if you're doing IO or network communication. In these cases the relevant computation is not done by your CPU but done by other devices (in the case of IO, the disk controller and DMA (direct memory access) controller will transfer the data with minimal CPU participation; in the case of networking, the NIC (network interface card) and DMA will take care of much of the task without the CPU's participation), so once a thread delegates such a task to the NIC or disk controller, the OS can put that thread into a sleeping state and switch to other threads of the same program to do useful work. In my understanding, the asyncio module is essentially a specific case of multithreading for IO operations. So: CPU-intensive programs that can easily be partitioned to run on multiple processes with limited communication: use multithreading if the GIL does not exist (e.g. Jython), or use multiprocessing if the GIL is present (e.g. CPython). CPU-intensive programs that require intensive communication between concurrent routines: use multithreading if the GIL does not exist, or use another programming language. Lots of IO: asyncio
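To make the await-based context switching mentioned in these answers concrete, here is a small runnable sketch of my own (an illustration, not taken from any of the quoted answers): two coroutines share one thread, and control passes between them at each await:

import asyncio
import time

async def worker(name: str, delay: float) -> None:
    # Each await hands control back to the event loop; that handover is
    # the cooperative context switch described above.
    for step in range(3):
        await asyncio.sleep(delay)
        print(f"{time.strftime('%X')} {name} step {step}")

async def main() -> None:
    # Both workers run concurrently on a single thread.
    await asyncio.gather(worker("slow", 0.2), worker("fast", 0.1))

asyncio.run(main())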
multiprocessing vs multithreading vs asyncio
I found that in Python 3.4 there are a few different libraries for multiprocessing/threading: multiprocessing vs threading vs asyncio. But I don't know which one to use or which is the "recommended one". Do they do the same thing, or are they different? If so, which one is used for what? I want to write a program that uses multiple cores in my computer. But I don't know which library I should learn.
[ "TL;DR\nMaking the Right Choice:\n\nWe have walked through the most popular forms of concurrency. But the question remains - when should choose which one? It really depends on the use cases. From my experience (and reading), I tend to follow this pseudo code:\n\nif io_bound:\n if io_very_slow:\n print(\"Use Asyncio\")\n else:\n print(\"Use Threads\")\nelse:\n print(\"Multi Processing\")\n\n\n\nCPU Bound => Multi Processing\nI/O Bound, Fast I/O, Limited Number of Connections => Multi Threading\nI/O Bound, Slow I/O, Many connections => Asyncio\n\n\nReference\n\n[NOTE]:\n\nIf you have a long call method (e.g. a method containing a sleep time or lazy I/O), the best choice is asyncio, Twisted or Tornado approach (coroutine methods), that works with a single thread as concurrency.\nasyncio works on Python3.4 and later.\nTornado and Twisted are ready since Python2.7\nuvloop is ultra fast asyncio event loop (uvloop makes asyncio 2-4x faster).\n\n\n[UPDATE (2019)]:\n\nJapranto (GitHub) is a very fast pipelining HTTP server based on uvloop.\n\n", "They are intended for (slightly) different purposes and/or requirements. CPython (a typical, mainline Python implementation) still has the global interpreter lock so a multi-threaded application (a standard way to implement parallel processing nowadays) is suboptimal. That's why multiprocessing may be preferred over threading. But not every problem may be effectively split into [almost independent] pieces, so there may be a need in heavy interprocess communications. That's why multiprocessing may not be preferred over threading in general.\nasyncio (this technique is available not only in Python, other languages and/or frameworks also have it, e.g. Boost.ASIO) is a method to effectively handle a lot of I/O operations from many simultaneous sources w/o need of parallel code execution. So it's just a solution (a good one indeed!) for a particular task, not for parallel processing in general.\n", "In multiprocessing you leverage multiple CPUs to distribute your calculations. Since each of the CPUs runs in parallel, you're effectively able to run multiple tasks simultaneously. You would want to use multiprocessing for CPU-bound tasks. An example would be trying to calculate a sum of all elements of a huge list. If your machine has 8 cores, you can \"cut\" the list into 8 smaller lists and calculate the sum of each of those lists separately on separate core and then just add up those numbers. You'll get a ~8x speedup by doing that.\nIn (multi)threading you don't need multiple CPUs. Imagine a program that sends lots of HTTP requests to the web. If you used a single-threaded program, it would stop the execution (block) at each request, wait for a response, and then continue once received a response. The problem here is that your CPU isn't really doing work while waiting for some external server to do the job; it could have actually done some useful work in the meantime! The fix is to use threads - you can create many of them, each responsible for requesting some content from the web. The nice thing about threads is that, even if they run on one CPU, the CPU from time to time \"freezes\" the execution of one thread and jumps to executing the other one (it's called context switching and it happens constantly at non-deterministic intervals). So if your task is I/O bound - use threading.\nasyncio is essentially threading where not the CPU but you, as a programmer (or actually your application), decide where and when does the context switch happen. 
In Python you use an await keyword to suspend the execution of your coroutine (defined using the async keyword).\n", "This is the basic idea:\n\nIs it IO-BOUND ? -----------> USE asyncio\nIS IT CPU-HEAVY ? ---------> USE multiprocessing\nELSE ? ----------------------> USE threading\n\nSo basically stick to threading unless you have IO/CPU problems.\n", "Many of the answers suggest how to choose only 1 option, but why not be able to use all 3? In this answer I explain how you can use asyncio to manage combining all 3 forms of concurrency instead, as well as easily swap between them later if need be.\nThe short answer\n\nMany developers that are first-timers to concurrency in Python will end up using multiprocessing.Process and threading.Thread. However, these are the low-level APIs which have been merged together by the high-level API provided by the concurrent.futures module. Furthermore, spawning processes and threads has overhead, such as requiring more memory, a problem which plagues one of the examples I show below. To an extent, concurrent.futures manages this for you, so that you cannot as easily do something like spawn a thousand processes and crash your computer: it only spawns a few processes and then just re-uses those processes each time one finishes.\nThese high-level APIs are provided through concurrent.futures.Executor, which is then implemented by concurrent.futures.ProcessPoolExecutor and concurrent.futures.ThreadPoolExecutor. In most cases, you should use these over multiprocessing.Process and threading.Thread, because it's easier to change from one to the other in the future when you use concurrent.futures and you don't have to learn the detailed differences of each.\nSince these share a unified interface, you'll also find that code using multiprocessing or threading will often use concurrent.futures. asyncio is no exception to this, and provides a way to use it via the following code:\nimport asyncio\nfrom concurrent.futures import Executor\nfrom functools import partial\nfrom typing import Any, Callable, Optional, TypeVar\n\nT = TypeVar(\"T\")\n\nasync def run_in_executor(\n    executor: Optional[Executor],\n    func: Callable[..., T],\n    /,\n    *args: Any,\n    **kwargs: Any,\n) -> T:\n    \"\"\"\n    Run `func(*args, **kwargs)` asynchronously, using an executor.\n\n    If the executor is None, use the default ThreadPoolExecutor.\n    \"\"\"\n    return await asyncio.get_running_loop().run_in_executor(\n        executor,\n        partial(func, *args, **kwargs),\n    )\n\n# Example usage for running `print` in a thread.\nasync def main():\n    await run_in_executor(None, print, \"O\" * 100_000)\n\nasyncio.run(main())\n\nIn fact it turns out that using threading with asyncio was so common that in Python 3.9 they added asyncio.to_thread(func, *args, **kwargs) to shorten it for the default ThreadPoolExecutor.\nThe long answer\n\nAre there any disadvantages to this approach?\nYes. With asyncio, the biggest disadvantage is that asynchronous functions aren't the same as synchronous functions. This can trip up new users of asyncio a lot and cause a lot of rework to be done if you didn't start programming with asyncio in mind from the beginning.\nAnother disadvantage is that users of your code will also be forced to use asyncio. All of this necessary rework will often leave first-time asyncio users with a really sour taste in their mouth.\nAre there any non-performance advantages to this?\nYes. 
Similar to how using concurrent.futures is advantageous over threading.Thread and multiprocessing.Process for its unified interface, this approach can be considered a further abstraction from an Executor to an asynchronous function. You can start off using asyncio, and if later you find a part of it needs threading or multiprocessing, you can use asyncio.to_thread or run_in_executor. Likewise, you may later discover that an asynchronous version of what you're trying to run with threading already exists, so you can easily step back from using threading and switch to asyncio instead.\nAre there any performance advantages to this?\nYes... and no. Ultimately it depends on the task. In some cases, it may not help (though it likely does not hurt), while in other cases it may help a lot. The rest of this answer provides some explanations as to why using asyncio to run an Executor may be advantageous.\n- Combining multiple executors and other asynchronous code\nasyncio essentially provides significantly more control over concurrency, at the cost of needing to take more control of the concurrency yourself. If you want to simultaneously run some code using a ThreadPoolExecutor alongside some other code using a ProcessPoolExecutor, it is not so easy to manage this using synchronous code, but it is very easy with asyncio.\nimport asyncio\nfrom concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor\n\nasync def with_processing():\n    with ProcessPoolExecutor() as executor:\n        tasks = [...]\n        for task in asyncio.as_completed(tasks):\n            result = await task\n            ...\n\nasync def with_threading():\n    with ThreadPoolExecutor() as executor:\n        tasks = [...]\n        for task in asyncio.as_completed(tasks):\n            result = await task\n            ...\n\nasync def main():\n    await asyncio.gather(with_processing(), with_threading())\n\nasyncio.run(main())\n\nHow does this work? Essentially asyncio asks the executors to run their functions. Then, while an executor is running, asyncio will go run other code. For example, the ProcessPoolExecutor starts a bunch of processes, and then while waiting for those processes to finish, the ThreadPoolExecutor starts a bunch of threads. asyncio will then check in on these executors and collect their results when they are done. Furthermore, if you have other code using asyncio, you can run it while waiting for the processes and threads to finish.\n- Narrowing in on what sections of code need executors\nIt is not common to have many executors in your code, but a common problem that I have seen when people use threads/processes is that they shove the entirety of their code into a thread/process, expecting it to work. For example, I once saw the following code (approximately):\nfrom concurrent.futures import ThreadPoolExecutor\nimport requests\n\ndef get_data(url):\n    return requests.get(url).json()[\"data\"]\n\nurls = [...]\n\nwith ThreadPoolExecutor() as executor:\n    for data in executor.map(get_data, urls):\n        print(data)\n\nThe funny thing about this piece of code is that it was slower with concurrency than without. Why? Because the resulting json was large, and having many threads consume a huge amount of memory was disastrous. 
Luckily the solution was simple:\nfrom concurrent.futures import ThreadPoolExecutor\nimport requests\n\nurls = [...]\n\nwith ThreadPoolExecutor() as executor:\n    for response in executor.map(requests.get, urls):\n        print(response.json()[\"data\"])\n\nNow only one json is loaded into memory at a time, and everything is fine.\nThe lesson here?\n\nYou shouldn't try to just slap all of your code into threads/processes; you should instead focus in on what part of the code actually needs concurrency.\n\nBut what if get_data was not as simple a function as in this case? What if we had to apply the executor somewhere deep in the middle of the function? This is where asyncio comes in:\nimport asyncio\nimport requests\n\nasync def get_data(url):\n    # A lot of code.\n    ...\n    # The specific part that needs threading.\n    response = await asyncio.to_thread(requests.get, url, some_other_params)\n    # A lot of code.\n    ...\n    return data\n\nurls = [...]\n\nasync def main():\n    tasks = [get_data(url) for url in urls]\n    for task in asyncio.as_completed(tasks):\n        data = await task\n        print(data)\n\nasyncio.run(main())\n\nAttempting the same with concurrent.futures is by no means pretty. You could use things such as callbacks, queues, etc., but it would be significantly harder to manage than basic asyncio code.\n", "There are already a lot of good answers, so I can't elaborate more on when to use each one. This is more of an interesting combination of two of them. Multiprocessing + asyncio: https://pypi.org/project/aiomultiprocess/.\nThe use case for which it was designed was high I/O, but still utilizing as many of the available cores as possible. Facebook used this library to write some kind of Python-based file server. Asyncio allows for IO-bound traffic, while multiprocessing allows multiple event loops and threads on multiple cores.\nExample code from the repo:\nimport asyncio\nfrom aiohttp import request\nfrom aiomultiprocess import Pool\n\nasync def get(url):\n    async with request(\"GET\", url) as response:\n        return await response.text(\"utf-8\")\n\nasync def main():\n    urls = [\"https://jreese.sh\", ...]\n    async with Pool() as pool:\n        async for result in pool.map(get, urls):\n            ...  # process result\n\nif __name__ == '__main__':\n    # Python 3.7\n    asyncio.run(main())\n\n    # Python 3.6\n    # loop = asyncio.get_event_loop()\n    # loop.run_until_complete(main())\n\nJust an addition here: this would not work very well in, say, a Jupyter notebook, as the notebook already has an asyncio loop running. Just a little note so you don't pull your hair out.\n", "\nMultiprocessing can run in parallel.\n\nMultithreading and asyncio cannot run in parallel.\n\n\nWith an Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz and 32.0 GB RAM, I timed how many prime numbers are between 2 and 100000 with 2 processes, 2 threads and 2 asyncio tasks as shown below. 
*This is a CPU-bound calculation:\n\nMultiprocessing | Multithreading | asyncio\n23.87 seconds | 45.24 seconds | 44.77 seconds\n\nBecause multiprocessing can run in parallel, it is about twice as fast as multithreading and asyncio, as shown above.\nI used the 3 sets of code below:\nMultiprocessing:\n# \"process_test.py\"\n\nfrom multiprocessing import Process\nimport time\nstart_time = time.time()\n\ndef test():\n    num = 100000\n    primes = 0\n    for i in range(2, num + 1):\n        for j in range(2, i):\n            if i % j == 0:\n                break\n        else:\n            primes += 1\n    print(primes)\n\nif __name__ == \"__main__\":  # This is needed to run processes on Windows\n    process_list = []\n\n    for _ in range(0, 2):  # 2 processes\n        process = Process(target=test)\n        process_list.append(process)\n\n    for process in process_list:\n        process.start()\n\n    for process in process_list:\n        process.join()\n\n    print(round((time.time() - start_time), 2), \"seconds\")  # 23.87 seconds\n\nResult:\n...\n9592\n9592\n23.87 seconds\n\nMultithreading:\n# \"thread_test.py\"\n\nfrom threading import Thread\nimport time\nstart_time = time.time()\n\ndef test():\n    num = 100000\n    primes = 0\n    for i in range(2, num + 1):\n        for j in range(2, i):\n            if i % j == 0:\n                break\n        else:\n            primes += 1\n    print(primes)\n\nthread_list = []\n\nfor _ in range(0, 2):  # 2 threads\n    thread = Thread(target=test)\n    thread_list.append(thread)\n\nfor thread in thread_list:\n    thread.start()\n\nfor thread in thread_list:\n    thread.join()\n\nprint(round((time.time() - start_time), 2), \"seconds\")  # 45.24 seconds\n\nResult:\n...\n9592\n9592\n45.24 seconds\n\nAsyncio:\n# \"asyncio_test.py\"\n\nimport asyncio\nimport time\nstart_time = time.time()\n\nasync def test():\n    num = 100000\n    primes = 0\n    for i in range(2, num + 1):\n        for j in range(2, i):\n            if i % j == 0:\n                break\n        else:\n            primes += 1\n    print(primes)\n\nasync def call_tests():\n    tasks = []\n\n    for _ in range(0, 2):  # 2 asyncio tasks\n        tasks.append(test())\n\n    await asyncio.gather(*tasks)\n\nasyncio.run(call_tests())\n\nprint(round((time.time() - start_time), 2), \"seconds\")  # 44.77 seconds\n\nResult:\n...\n9592\n9592\n44.77 seconds\n\n", "Multiprocessing\nEach process has its own Python interpreter and can run on a separate core of a processor. Python multiprocessing is a package that supports spawning processes using an API similar to the threading module. The multiprocessing package offers true parallelism, effectively side-stepping the Global Interpreter Lock by using subprocesses instead of threads.\nUse multiprocessing when you have CPU-intensive tasks.\nMultithreading\nPython multithreading allows you to spawn multiple threads within the process. These threads can share the same memory and resources of the process. In CPython, due to the Global Interpreter Lock, at any given time only a single thread can run, hence you cannot utilize multiple cores. Multithreading in Python does not offer true parallelism due to the GIL limitation.\nAsyncio\nAsyncio works on co-operative multitasking concepts. 
Asyncio tasks run on the same thread, so there is no parallelism, but it gives the developer control over task switching, instead of the OS, which is the case in multithreading.\nThere is a nice discussion on this link regarding the advantages of asyncio over threads.\nThere is a nice blog by Lei Mao on Python concurrency here\nMultiprocessing VS Threading VS AsyncIO in Python Summary\n", "I’m not a professional Python user, but as a student in computer architecture I think I can share some of my considerations when choosing between multiprocessing and multithreading. Besides, some of the other answers (even among those with higher votes) are misusing technical terminology, so I think it’s also necessary to make some clarifications on those as well, and I’ll do that first.\nThe fundamental difference between multiprocessing and multithreading is whether they share the same memory space. Threads share access to the same virtual memory space, so it is efficient and easy for threads to exchange their computation results (zero copy, and totally user-space execution).\nProcesses on the other hand have separate virtual memory spaces. They cannot directly read or write the other process’ memory space, just like a person cannot read or alter the mind of another person without talking to him. (Allowing that would be a violation of memory protection and defeat the purpose of using virtual memory.) To exchange data between processes, they have to rely on the operating system’s facilities (e.g. message passing), and for more than one reason this is more costly to do than the “shared memory” scheme used by threads. One reason is that invoking the OS’ message passing mechanism requires making a system call, which will switch the code execution from user mode to kernel mode, which is time consuming; another reason is that the OS message passing scheme will likely have to copy the data bytes from the sender’s memory space to the receiver’s memory space, so there is a non-zero copy cost.\nIt is incorrect to say a multithreaded program can only use one CPU. The reason why many people say so is due to an artifact of the CPython implementation: the global interpreter lock (GIL). Because of the GIL, threads in a CPython process are serialized. As a result, it appears that the multithreaded Python program only uses one CPU.\nBut multithreaded computer programs in general are not restricted to one core, and for Python, implementations that do not use the GIL can indeed run many threads concurrently, that is, run on more than one CPU at the same time. (See https://wiki.python.org/moin/GlobalInterpreterLock).\nGiven that CPython is the predominant implementation of Python, it’s understandable why multithreaded Python programs are commonly equated to being bound to a single core.\nWith Python with the GIL, the only way to unleash the power of multicores is to use multiprocessing (there are exceptions to this as mentioned below). But your problem had better be easily partitionable into parallel sub-problems that have minimal intercommunication, otherwise a lot of inter-process communication will have to take place, and as explained above, the overhead of using the OS’ message passing mechanism will be costly, sometimes so costly that the benefits of parallel processing are totally offset. If the nature of your problem requires intense communication between concurrent routines, multithreading is the natural way to go. Unfortunately with CPython, true, effectively concurrent multithreading is not possible due to the GIL. 
In this case you should realize Python is not the optimal tool for your project and consider using another language.\nThere’s one alternative solution, which is to implement the concurrent processing routines in an external library written in C (or another language), and import that module into Python. The CPython GIL will not bother to block the threads spawned by that external library.\nSo, with the burdens of the GIL, is multithreading in CPython any good? It still offers benefits, as other answers have mentioned, if you’re doing IO or network communication. In these cases the relevant computation is not done by your CPU but by other devices (in the case of IO, the disk controller and DMA (direct memory access) controller will transfer the data with minimal CPU participation; in the case of networking, the NIC (network interface card) and DMA will take care of much of the task without the CPU’s participation), so once a thread delegates such a task to the NIC or disk controller, the OS can put that thread into a sleeping state and switch to other threads of the same program to do useful work.\nIn my understanding, the asyncio module is essentially a specific case of multithreading for IO operations.\nSo:\nCPU-intensive programs that can easily be partitioned to run on multiple processes with limited communication: use multithreading if the GIL does not exist (e.g. Jython), or use multiprocessing if the GIL is present (e.g. CPython).\nCPU-intensive programs that require intensive communication between concurrent routines: use multithreading if the GIL does not exist, or use another programming language.\nLots of IO: asyncio\n" ]
[ 247, 140, 74, 39, 25, 7, 1, 0, 0 ]
[]
[]
[ "multiprocessing", "multithreading", "python", "python_3.x", "python_asyncio" ]
stackoverflow_0027435284_multiprocessing_multithreading_python_python_3.x_python_asyncio.txt
Q: deleting user input column range using an array Hi, I'm trying to learn to use arrays. In the code below, the user inputs a range; the code then checks each cell in row 1 of that range for the value "abc", adds the matching column numbers to an array, and finally deletes the entire column for each stored column number. The code below throws a type mismatch error saying x is empty. ` Option Explicit Sub delete_column() Dim arr As Variant, x As Integer, myrange As String, rng As Range, cell, item As Variant Set arr = CreateObject("System.Collections.ArrayList") myrange = InputBox("Please enter the range:", "Range") If myrange = "" Then Exit Sub Set rng = Range(myrange) For Each cell In rng If cell.Value = "abc" Then arr.Add cell.Column Next cell For x = UBound(arr) To LBound(arr) Cells(1, arr(x)).EntireColumn.Delete Next x End Sub ` I can't figure out how to fix the error. After the type mismatch error shows, when I hover my mouse over x it says empty, and when I hover my mouse over arr(x) it shows the correct column number. I used 'For x = UBound(arr) To LBound(arr) Step -1' and still got the error. I used the loop below, but it only deleted some of the columns, not all of the columns with values of abc: For Each item In arr Cells(1, item).EntireColumn.Delete Next item A: I found the solution below using .Count. Iteration with UBound and LBound did not work because those functions only accept native VBA arrays, while CreateObject("System.Collections.ArrayList") returns a .NET collection object - passing it to UBound is what raises the type mismatch; an ArrayList exposes its size through .Count instead. ''' Option Explicit Sub delete_column() Dim arr As Variant, x As Integer, myrange As String, rng As Range, cell Set arr = CreateObject("System.Collections.ArrayList") myrange = InputBox("Please enter the range:", "Range") If myrange = "" Then Exit Sub Set rng = Range(myrange) For Each cell In rng If cell.Value = "abc" Then arr.Add cell.Column Next cell For x = arr.Count - 1 To 0 Step -1 Cells(1, arr(x)).EntireColumn.Delete Next End Sub '''
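A note on the two failure modes above, with a sketch. UBound/LBound raise a type mismatch here because they only work on native VBA arrays, not on the .NET ArrayList object; and the For Each version deletes the wrong columns because each deletion shifts every column to its right one position left, invalidating the stored column numbers not yet processed. Deleting from right to left (highest column number first) avoids that shift. A minimal sketch using a native dynamic array instead of an ArrayList (assuming, as in the original, that the entered range is a single row scanned left to right, so the collected column numbers come out ascending):

Option Explicit

Sub delete_column_native()
    Dim cols() As Long, n As Long, x As Long
    Dim myrange As String, cell As Range
    n = -1
    myrange = InputBox("Please enter the range:", "Range")
    If myrange = "" Then Exit Sub
    For Each cell In Range(myrange)
        If cell.Value = "abc" Then
            n = n + 1
            ReDim Preserve cols(0 To n) ' grow the native array one slot at a time
            cols(n) = cell.Column
        End If
    Next cell
    If n < 0 Then Exit Sub ' nothing matched
    For x = UBound(cols) To LBound(cols) Step -1 ' right to left so deletions don't shift pending columns
        Cells(1, cols(x)).EntireColumn.Delete
    Next x
End Sub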
deleting user input column range using an array
Hi, I'm trying to learn to use arrays. In the code below, the user inputs a range; the code then checks each cell in row 1 of that range for the value "abc", adds the matching column numbers to an array, and finally deletes the entire column for each stored column number. The code below throws a type mismatch error saying x is empty. ` Option Explicit Sub delete_column() Dim arr As Variant, x As Integer, myrange As String, rng As Range, cell, item As Variant Set arr = CreateObject("System.Collections.ArrayList") myrange = InputBox("Please enter the range:", "Range") If myrange = "" Then Exit Sub Set rng = Range(myrange) For Each cell In rng If cell.Value = "abc" Then arr.Add cell.Column Next cell For x = UBound(arr) To LBound(arr) Cells(1, arr(x)).EntireColumn.Delete Next x End Sub ` I can't figure out how to fix the error. After the type mismatch error shows, when I hover my mouse over x it says empty, and when I hover my mouse over arr(x) it shows the correct column number. I used 'For x = UBound(arr) To LBound(arr) Step -1' and still got the error. I used the loop below, but it only deleted some of the columns, not all of the columns with values of abc: For Each item In arr Cells(1, item).EntireColumn.Delete Next item
[ "I found the solution below using .Count. Iteration with UBound and LBound did not work because those functions only accept native VBA arrays, while CreateObject(\"System.Collections.ArrayList\") returns a .NET collection object, which exposes its size through .Count instead.\n'''\n Option Explicit\n\n Sub delete_column()\n\n Dim arr As Variant, x As Integer, myrange As String, rng As Range, cell\n\n Set arr = CreateObject(\"System.Collections.ArrayList\")\n myrange = InputBox(\"Please enter the range:\", \"Range\")\n If myrange = \"\" Then Exit Sub\n Set rng = Range(myrange)\n\n For Each cell In rng\n If cell.Value = \"abc\" Then arr.Add cell.Column\n Next cell\n\n For x = arr.Count - 1 To 0 Step -1\n Cells(1, arr(x)).EntireColumn.Delete\n Next\n\n End Sub\n\n'''\n" ]
[ 0 ]
[]
[]
[ "excel", "vba" ]
stackoverflow_0074672092_excel_vba.txt
Q: How to filter the unique values I have 900k rows and 10 unique values. The first 100k rows contain only one unique value; the remaining values appear after the first 100k rows. I want 100k rows with all the unique values from the 900k rows. I can't find a solution for this. A: A solution to this problem: Use the set() function to create a set of the unique values in your data. This will remove any duplicates. Use the random.choices() function to select a random sample of 1 lakh (100000) items, with replacement, from the set of unique values (random.sample() would fail here, since you cannot draw 100000 items without replacement from only 10 unique values). Use the random.shuffle() function to shuffle the list of 1 lakh items. Use a for loop to iterate over the first 1 lakh rows of your data. For each row, add one of the shuffled unique values from step 3 to the row. import random # Sample data with 9 lakh (900000) rows and 10 unique values data = [ ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j'], ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j'], ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j'], # ... ] # Create a set of the unique values in the data unique_values = set([val for row in data for val in row]) # Select a random sample of 100000 items, with replacement, from the unique values sample = random.choices(list(unique_values), k=100000) # Shuffle the list of unique values random.shuffle(sample) # Iterate over the first 100000 rows of the data for i in range(100000): # Add one of the shuffled unique values to the row data[i].append(sample[i]) # The first 100000 rows of data now have all the unique values This approach will randomly select 1 lakh values from the unique values of the original data and add them to the first 1 lakh rows. You can adjust the code to fit your specific needs. With pandas: import pandas as pd # Load the DataFrame df = pd.read_csv("data.csv") # Select one lakh rows with replacement sample = df.sample(n=100000, replace=True) # Remove any duplicate rows df_unique = df.drop_duplicates() # Select one lakh rows without replacement (requires df_unique to still have at least 100000 rows) sample = df_unique.sample(n=100000, replace=False)
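If the goal is a 100k-row subset that is guaranteed to contain every one of the 10 unique values, a more direct pandas approach is to take one row per unique value first and then fill the rest of the quota with a random sample. A minimal sketch, assuming the values live in a column named "col" (the column name is an assumption - substitute your own):

import pandas as pd

df = pd.read_csv("data.csv")

# One representative row for each unique value (10 rows here).
representatives = df.groupby("col", as_index=False).head(1)

# Fill the remaining quota from the rows not already chosen.
rest = df.drop(representatives.index).sample(n=100000 - len(representatives), random_state=0)

# Combine and shuffle; every unique value is now guaranteed to be present.
result = pd.concat([representatives, rest]).sample(frac=1, random_state=0).reset_index(drop=True)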
How to filter the unique values
I have 900k rows and 10 unique values. The first 100k rows contain only one unique value; the remaining values appear after the first 100k rows. I want 100k rows with all the unique values from the 900k rows. I can't find a solution for this.
[ "A solution to this problem:\n\nUse the set() function to create a set of the unique values in your\ndata. This will remove any duplicates.\n\nUse the random.choices() function to select a random sample of 1 lakh (100000) items, with replacement, from the set of unique values (random.sample() would fail here, since you cannot draw 100000 items without replacement from only 10 unique values).\n\nUse the random.shuffle() function to shuffle the list of 1 lakh items.\n\nUse a for loop to iterate over the first 1 lakh rows of your data. For each row, add one of the shuffled unique values from step 3 to the row.\nimport random\n\n # Sample data with 9 lakh (900000) rows and 10 unique values\n data = [\n ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j'],\n ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j'],\n ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j'],\n # ...\n ]\n\n # Create a set of the unique values in the data\n unique_values = set([val for row in data for val in row])\n\n # Select a random sample of 100000 items, with replacement, from the unique values\n sample = random.choices(list(unique_values), k=100000)\n\n # Shuffle the list of unique values\n random.shuffle(sample)\n\n # Iterate over the first 100000 rows of the data\n for i in range(100000):\n # Add one of the shuffled unique values to the row\n data[i].append(sample[i])\n\n # The first 100000 rows of data now have all the unique values\n\n\n\nThis approach will randomly select 1 lakh values from the unique values of the original data and add them to the first 1 lakh rows. You can adjust the code to fit your specific needs.\nWith pandas:\nimport pandas as pd\n\n# Load the DataFrame\ndf = pd.read_csv(\"data.csv\")\n\n# Select one lakh rows with replacement\nsample = df.sample(n=100000, replace=True) \n# Remove any duplicate rows\ndf_unique = df.drop_duplicates()\n \n# Select one lakh rows without replacement (requires df_unique to still have at least 100000 rows)\nsample = df_unique.sample(n=100000, replace=False)\n\n" ]
[ 0 ]
[]
[]
[ "filter", "pandas", "python", "unique", "unique_values" ]
stackoverflow_0074672464_filter_pandas_python_unique_unique_values.txt
Q: How to add start time variable to Lite YouTube Embed code created by Amit Agarwal I use Amit Agarwal's code from https://www.labnol.org/internet/light-youtube-embeds/27941/ on my website to lite-embed YouTube videos. Is there any way to add a start time to the script so that I could load each video with a different start time? <div class="youtube-player" data-id="VIDEO_ID" start-id="TIME"></div> His complete script is as follows: <script> function labnolIframe(div) { var iframe = document.createElement('iframe'); iframe.setAttribute('src', 'https://www.youtube.com/embed/' + div.dataset.id + '?autoplay=1&rel=0'); iframe.setAttribute('frameborder', '0'); iframe.setAttribute('allowfullscreen', '1'); iframe.setAttribute('allow', 'accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture'); div.parentNode.replaceChild(iframe, div); } function initYouTubeVideos() { var playerElements = document.getElementsByClassName('youtube-player'); for (var n = 0; n < playerElements.length; n++) { var videoId = playerElements[n].dataset.id; var div = document.createElement('div'); div.setAttribute('data-id', videoId); var thumbNode = document.createElement('img'); thumbNode.src='//i.ytimg.com/vi/ID/hqdefault.jpg'.replace('ID', videoId); div.appendChild(thumbNode); var playButton = document.createElement('div'); playButton.setAttribute('class', 'play'); div.appendChild(playButton); div.onclick = function () { labnolIframe(this); }; playerElements[n].appendChild(div); } } document.addEventListener('DOMContentLoaded', initYouTubeVideos); </script> I have added a timeId var into the mix, hoping that I could use the start-id value to make the embedded video start at a specific time. It still starts at 0 sec of the video. function labnolIframe(div) { var iframe = document.createElement('iframe'); iframe.setAttribute('src', 'https://www.youtube.com/embed/' + div.dataset.id + '?start='+ div.dataset.id + '&autoplay=1&rel=0'); iframe.setAttribute('frameborder', '0'); iframe.setAttribute('allowfullscreen', '1'); iframe.setAttribute('allow', 'accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture'); div.parentNode.replaceChild(iframe, div); } function initYouTubeVideos() { var playerElements = document.getElementsByClassName('youtube-player'); for (var n = 0; n < playerElements.length; n++) { var videoId = playerElements[n].dataset.id; var timeId = playerElements[n].dataset.id; var div = document.createElement('div'); div.setAttribute('data-id', videoId); div.setAttribute('start-id', timeId); var thumbNode = document.createElement('img'); thumbNode.src = '//i.ytimg.com/vi/ID/hqdefault.jpg'.replace('ID', videoId); div.appendChild(thumbNode); var playButton = document.createElement('div'); playButton.setAttribute('class', 'play'); div.appendChild(playButton); div.onclick = function () { labnolIframe(this); }; playerElements[n].appendChild(div); } } document.addEventListener('DOMContentLoaded', initYouTubeVideos); A: Just change: iframe.setAttribute('src', 'https://www.youtube.com/embed/' + div.dataset.id + '?autoplay=1&rel=0'); to: iframe.setAttribute('src', 'https://www.youtube.com/embed/' + div.dataset.id + '?start=YOUR_START_TIME_IN_SECONDS&autoplay=1&rel=0'); Note the added start=YOUR_START_TIME_IN_SECONDS& in the URL.
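For reference, a sketch of the per-video version the asker seems to be after. The attempt above assigns playerElements[n].dataset.id to timeId, so the video id (not the time) ends up in the start parameter; the offset has to be carried on its own attribute and read separately. Using a data-start attribute (my naming choice: an attribute named data-start is exposed as dataset.start, whereas start-id is not a data-* attribute and would need getAttribute), assuming markup such as <div class="youtube-player" data-id="VIDEO_ID" data-start="120"></div>:

function labnolIframe(div) {
  var start = div.dataset.start || 0; // seconds offset, defaults to 0
  var iframe = document.createElement('iframe');
  iframe.setAttribute('src', 'https://www.youtube.com/embed/' + div.dataset.id +
    '?start=' + start + '&autoplay=1&rel=0');
  iframe.setAttribute('frameborder', '0');
  iframe.setAttribute('allowfullscreen', '1');
  iframe.setAttribute('allow', 'accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture');
  div.parentNode.replaceChild(iframe, div);
}

// In initYouTubeVideos, copy the attribute onto the generated div as well:
// div.setAttribute('data-start', playerElements[n].dataset.start || 0);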
How to add start time variable to Lite YouTube Embed code created by Amit Agarwal
I use Amit Agarwal's code from https://www.labnol.org/internet/light-youtube-embeds/27941/ on my website to lite-embed YouTube videos. Is there any way to add a start time to the script so that I could load each video with a different start time? <div class="youtube-player" data-id="VIDEO_ID" start-id="TIME"></div> His complete script is as follows: <script> function labnolIframe(div) { var iframe = document.createElement('iframe'); iframe.setAttribute('src', 'https://www.youtube.com/embed/' + div.dataset.id + '?autoplay=1&rel=0'); iframe.setAttribute('frameborder', '0'); iframe.setAttribute('allowfullscreen', '1'); iframe.setAttribute('allow', 'accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture'); div.parentNode.replaceChild(iframe, div); } function initYouTubeVideos() { var playerElements = document.getElementsByClassName('youtube-player'); for (var n = 0; n < playerElements.length; n++) { var videoId = playerElements[n].dataset.id; var div = document.createElement('div'); div.setAttribute('data-id', videoId); var thumbNode = document.createElement('img'); thumbNode.src='//i.ytimg.com/vi/ID/hqdefault.jpg'.replace('ID', videoId); div.appendChild(thumbNode); var playButton = document.createElement('div'); playButton.setAttribute('class', 'play'); div.appendChild(playButton); div.onclick = function () { labnolIframe(this); }; playerElements[n].appendChild(div); } } document.addEventListener('DOMContentLoaded', initYouTubeVideos); </script> I have added a timeId var into the mix, hoping that I could use the start-id value to make the embedded video start at a specific time. It still starts at 0 sec of the video. function labnolIframe(div) { var iframe = document.createElement('iframe'); iframe.setAttribute('src', 'https://www.youtube.com/embed/' + div.dataset.id + '?start='+ div.dataset.id + '&autoplay=1&rel=0'); iframe.setAttribute('frameborder', '0'); iframe.setAttribute('allowfullscreen', '1'); iframe.setAttribute('allow', 'accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture'); div.parentNode.replaceChild(iframe, div); } function initYouTubeVideos() { var playerElements = document.getElementsByClassName('youtube-player'); for (var n = 0; n < playerElements.length; n++) { var videoId = playerElements[n].dataset.id; var timeId = playerElements[n].dataset.id; var div = document.createElement('div'); div.setAttribute('data-id', videoId); div.setAttribute('start-id', timeId); var thumbNode = document.createElement('img'); thumbNode.src = '//i.ytimg.com/vi/ID/hqdefault.jpg'.replace('ID', videoId); div.appendChild(thumbNode); var playButton = document.createElement('div'); playButton.setAttribute('class', 'play'); div.appendChild(playButton); div.onclick = function () { labnolIframe(this); }; playerElements[n].appendChild(div); } } document.addEventListener('DOMContentLoaded', initYouTubeVideos);
[ "Just change:\niframe.setAttribute('src', 'https://www.youtube.com/embed/' + div.dataset.id + '?autoplay=1&rel=0');\n\nto:\niframe.setAttribute('src', 'https://www.youtube.com/embed/' + div.dataset.id + '?start=YOUR_START_TIME_IN_SECONDS&autoplay=1&rel=0');\n\nNote the added start=YOUR_START_TIME_IN_SECONDS& in the URL.\n" ]
[ 0 ]
[]
[]
[ "embed", "youtube" ]
stackoverflow_0074622549_embed_youtube.txt
Q: Python Pandas equivalent in JavaScript With this CSV example: Source,col1,col2,col3 foo,1,2,3 bar,3,4,5 The standard method I use in Pandas is this: Parse CSV Select columns into a data frame (col1 and col3) Process the column (e.g. average the values of col1 and col3) Is there a JavaScript library that does that like Pandas? A: This wiki will summarize and compare many pandas-like Javascript libraries. In general, you should check out the d3 Javascript library. d3 is a very useful "swiss army knife" for handling data in Javascript, just like pandas is helpful for Python. You may see d3 used frequently like pandas, even if d3 is not exactly a DataFrame/Pandas replacement (i.e. d3 doesn't have the same API; d3 does not have Series / DataFrame classes with methods that match the pandas behavior) Ahmed's answer explains how d3 can be used to achieve some DataFrame functionality, and some of the libraries below were inspired by things like LearnJsData which uses d3 and lodash. As for DataFrame-style data transformation (splitting, joining, group by, etc.), here is a quick list of some of the Javascript libraries. Note some libraries are Node.js aka Server-side Javascript, some are browser-compatible aka client-side Javascript, and some are Typescript. So use the option that's right for you. danfo-js (browser-support AND NodeJS-support) From Vignesh's answer danfo (which is often imported and aliased as dfd); has a basic DataFrame-type data structure, with the ability to plot directly Built by the team at Tensorflow: "One of the main goals of Danfo.js is to bring data processing, machine learning and AI tools to JavaScript developers. ... Open-source libraries like Numpy and Pandas..." pandas is built on top of numpy; likewise danfo-js is built on tensorflow-js please note danfo may not (yet?) support multi-column indexes pandas-js UPDATE The pandas-js repo has not been updated in a while From STEEL and Feras' answers "pandas.js is an open source (experimental) library mimicking the Python pandas library. It relies on Immutable.js as the NumPy logical equivalent. The main data objects in pandas.js are, like in Python pandas, the Series and the DataFrame." dataframe-js "DataFrame-js provides an immutable data structure for javascript and datascience, the DataFrame, which allows to work on rows and columns with a sql and functional programming inspired api." data-forge Seen in Ashley Davis' answer "JavaScript data transformation and analysis toolkit inspired by Pandas and LINQ." Note the old data-forge JS repository is no longer maintained; now a new repository uses Typescript jsdataframe "Jsdataframe is a JavaScript data wrangling library inspired by data frame functionality in R and Python Pandas." dataframe "explore data by grouping and reducing." SQL Frames "DataFrames meet SQL, in the Browser" "SQL Frames is a low code data management framework that can be directly embedded in the browser to provide rich data visualization and UX. Complex DataFrames can be composed using familiar SQL constructs. With its powerful built-in analytics engine, data sources can come in any shape, form and frequency and they can be analyzed directly within the browser. It allows scaling to big data backends by transpiling the composed DataFrame logic to SQL."
Then after coming to this question, checking other answers here and doing more searching, I found options like: Apache Arrow in JS Thanks to user Back2Basics suggestion: "Apache Arrow is a columnar memory layout specification for encoding vectors and table-like containers of flat and nested data. Apache Arrow is the emerging standard for large in-memory columnar data (Spark, Pandas, Drill, Graphistry, ...)" polars Polars is a blazingly fast DataFrames library implemented in Rust using Apache Arrow Columnar Format as memory model. Observable At first glance, seems like a JS alternative to the IPython/Jupyter "notebooks" Observable's page promises: "Reactive programming", a "Community", on a "Web Platform" See 5 minute intro here portal.js (formerly recline; from Rufus' answer) MAY BE OUTDATED: Does not use a "DataFrame" API MAY BE OUTDATED: Instead emphasizes its "Multiview" (the UI) API, (similar to jQuery/DOM model) which doesn't require jQuery but does require a browser! More examples MAY BE OUTDATED: Also emphasizes its MVC-ish architecture; including back-end stuff (i.e. database connections) js-data Really more of an ORM! Most of its modules correspond to different data storage questions (js-data-mongodb, js-data-redis, js-data-cloud-datastore), sorting, filtering, etc. On the plus side, it does work on Node.js as a first priority; "Works in Node.js and in the Browser." miso (another suggestion from Rufus) Impressive backers like Guardian and bocoup. AlaSQL "AlaSQL" is an open source SQL database for Javascript with a strong focus on query speed and data source flexibility for both relational data and schemaless data. It works in your browser, Node.js, and Cordova." Some thought experiments: "Scaling a DataFrame in Javascript" - Gary Sieling Here are the criteria we used to consider the above choices General Criteria Language (NodeJS vs browser JS vs Typescript) Dependencies (i.e. if it uses an underlying library / AJAX/remote API's) Actively supported (active user-base, active source repository, etc) Size/speed of JS library Pandas' criteria in its R comparison Performance Functionality/flexibility Ease-of-use Similarity to Pandas / Dataframe API's Specifically hits on their main features Data-science emphasis Built-in visualization functions Demonstrated integration in combination with other tools like Jupyter (interactive notebooks), etc A: I've been working on a data wrangling library for JavaScript called data-forge. It's inspired by LINQ and Pandas. It can be installed like this: npm install --save data-forge Your example would work like this: var csvData = "Source,col1,col2,col3\n" + "foo,1,2,3\n" + "bar,3,4,5\n"; var dataForge = require('data-forge'); var dataFrame = dataForge.fromCSV(csvData) .parseInts([ "col1", "col2", "col3" ]) ; If your data was in a CSV file you could load it like this: var dataFrame = dataForge.readFileSync(fileName) .parseCSV() .parseInts([ "col1", "col2", "col3" ]) ; You can use the select method to transform rows. You can extract a column using getSeries then use the select method to transform values in that column. You get your data back out of the data-frame like this: var data = dataFrame.toArray(); To average a column: var avg = dataFrame.getSeries("col1").average(); There is much more you can do with this. You can find more documentation on npm. A: Caveat The following is applicable only to d3 v3, and not the latest d3 v4!
I am partial to d3.js, and while it won't be a total replacement for Pandas, if you spend some time learning its paradigm, it should be able to take care of all your data wrangling for you. (And if you wind up wanting to display results in the browser, it's ideally suited to that.) Example. My CSV file data.csv: name,age,color Mickey,65,black Donald,58,white Pluto,64,orange In the same directory, create an index.html containing the following: <!DOCTYPE html> <html> <head> <meta charset="utf-8"/> <title>My D3 demo</title> <script src="http://d3js.org/d3.v3.min.js" charset="utf-8"></script> </head> <body> <script charset="utf-8" src="demo.js"></script> </body> </html> and also a demo.js file containing the following: d3.csv('/data.csv', // How to format each row. Since the CSV file has a header, `row` will be // an object with keys derived from the header. function(row) { return {name : row.name, age : +row.age, color : row.color}; }, // Callback to run once all data's loaded and ready. function(data) { // Log the data to the JavaScript console console.log(data); // Compute some interesting results var averageAge = data.reduce(function(prev, curr) { return prev + curr.age; }, 0) / data.length; // Also, display it var ulSelection = d3.select('body').append('ul'); var valuesSelection = ulSelection.selectAll('li').data(data).enter().append('li').text( function(d) { return d.age; }); var totalSelection = ulSelection.append('li').text('Average: ' + averageAge); }); In the directory, run python -m SimpleHTTPServer 8181, and open http://localhost:8181 in your browser to see a simple listing of the ages and their average. This simple example shows a few relevant features of d3: Excellent support for ingesting online data (CSV, TSV, JSON, etc.) Data wrangling smarts baked in Data-driven DOM manipulation (maybe the hardest thing to wrap one's head around): your data gets transformed into DOM elements. A: Pandas.js at the moment is an experimental library, but it seems very promising. It uses Immutable.js and NumPy logic under the hood; both data objects, Series and DataFrame, are there. 10-Feb-2021 Update: as @jarthur mentioned, it seems there has been no update on this repo for the last 4 years A: @neversaint your wait is over. say welcome to Danfo.js, which is a pandas-like JavaScript library built on tensorflow.js and supports tensors out of the box. This means you can convert danfo data structures to Tensors. And you can do groupby, merging, joining, plotting and other data processing. A: Below is Python numpy and pandas ``` import numpy as np import pandas as pd data_frame = pd.DataFrame(np.random.randn(5, 4), ['A', 'B', 'C', 'D', 'E'], [1, 2, 3, 4]) data_frame[5] = np.random.randint(1, 50, 5) print(data_frame.loc[['C', 'D'], [2, 3]]) # axis 1 = columns | 0 = rows data_frame.drop(5, axis=1, inplace=True) print(data_frame) ``` The same can be achieved in JavaScript* [numjs works only with Node.js] But D3.js has much more advanced data file options. Both numjs and Pandas-js are still in the works. import np from 'numjs'; import { DataFrame } from 'pandas-js'; const df = new DataFrame(np.random.randn(5, 4), ['A', 'B', 'C', 'D', 'E'], [1, 2, 3, 4]) // df /* 1 2 3 4 A 0.023126 1.078130 -0.521409 -1.480726 B 0.920194 -0.201019 0.028180 0.558041 C -0.650564 -0.505693 -0.533010 0.441858 D -0.973549 0.095626 -1.302843 1.109872 E -0.989123 -1.382969 -1.682573 -0.637132 */ A: I think the closest things are libraries like: ReclineJS Miso Project Dataset Recline in particular has a Dataset object with a structure somewhat similar to Pandas data frames.
It then allows you to connect your data with "Views" such as a data grid, graphing, maps etc. Views are usually thin wrappers around existing best-of-breed visualization libraries such as D3, Flot, SlickGrid etc. Here's an example for Recline: // Load some data var dataset = recline.Model.Dataset({ records: [ { value: 1, date: '2012-08-07' }, { value: 5, b: '2013-09-07' } ] // Load CSV data instead // (And Recline has support for many more data source types) // url: 'my-local-csv-file.csv', // backend: 'csv' }); // get an element from your HTML for the viewer var $el = $('#data-viewer'); var allInOneDataViewer = new recline.View.MultiView({ model: dataset, el: $el }); // Your new Data Viewer will be live! A: It's pretty easy to parse CSV in javascript because each line's already essentially a javascript array. If you load your csv into an array of strings (one per line) it's pretty easy to load an array of arrays with the values: var pivot = function(data){ var result = []; for (var i = 0; i < data.length; i++){ for (var j=0; j < data[i].length; j++){ if (i === 0){ result[j] = []; } result[j][i] = data[i][j]; } } return result; }; var getData = function() { var csvString = $(".myText").val(); var csvLines = csvString.split(/\n?$/m); var dataTable = []; for (var i = 0; i < csvLines.length; i++){ var values; eval("values = [" + csvLines[i] + "]"); dataTable[i] = values; } return pivot(dataTable); }; Then getData() returns a multidimensional array of values by column. I've demonstrated this in a jsFiddle for you. Of course, you can't do it quite this easily if you don't trust the input - if there could be script in your data which eval might pick up, etc. A: Here is a dynamic approach assuming an existing header on line 1. The csv is loaded with d3.js. function csvToColumnArrays(csv) { var mainObj = {}, header = Object.keys(csv[0]); for (var i = 0; i < header.length; i++) { mainObj[header[i]] = []; }; csv.map(function(d) { for (key in mainObj) { mainObj[key].push(d[key]) } }); return mainObj; } d3.csv(path, function(csv) { var df = csvToColumnArrays(csv); }); Then you are able to access each column of the data similar to an R, Python or Matlab dataframe with df.column_header[row_number]. A: Arquero is a library for handling relational data, with syntax similar to the popular R package dplyr (which is a sort of SQL-like). https://observablehq.com/@uwdata/introducing-arquero
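To tie one of these options back to the original task (parse the CSV, pick col1 and col3, average them), here is a minimal sketch using danfo.js - treat it as illustrative rather than authoritative, and check the current danfo API docs, since the library has evolved:

const dfd = require("danfojs-node");

async function run() {
  const df = await dfd.readCSV("data.csv");          // Source,col1,col2,col3
  const sub = df.loc({ columns: ["col1", "col3"] }); // select the two columns
  console.log(sub["col1"].mean(), sub["col3"].mean()); // per-column averages
}

run();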
Python Pandas equivalent in JavaScript
With this CSV example: Source,col1,col2,col3 foo,1,2,3 bar,3,4,5 The standard method I use Pandas is this: Parse CSV Select columns into a data frame (col1 and col3) Process the column (e.g. avarage the values of col1 and col3) Is there a JavaScript library that does that like Pandas?
[ "This wiki will summarize and compare many pandas-like Javascript libraries.\nIn general, you should check out the d3 Javascript library. d3 is very useful \"swiss army knife\" for handling data in Javascript, just like pandas is helpful for Python. You may see d3 used frequently like pandas, even if d3 is not exactly a DataFrame/Pandas replacement (i.e. d3 doesn't have the same API; d3 does not have Series / DataFrame classes with methods that match the pandas behavior)\nAhmed's answer explains how d3 can be used to achieve some DataFrame functionality, and some of the libraries below were inspired by things like LearnJsData which uses d3 and lodash.\nAs for DataFrame-style data transformation (splitting, joining, group by etc) , here is a quick list of some of the Javascript libraries.\nNote some libraries are Node.js aka Server-side Javascript, some are browser-compatible aka client-side Javascript, and some are Typescript. So use the option that's right for you.\n\ndanfo-js (browser-support AND NodeJS-support)\n\nFrom Vignesh's answer\n\ndanfo (which is often imported and aliased as dfd); has a basic DataFrame-type data structure, with the ability to plot directly\n\nBuilt by the team at Tensorflow: \"One of the main goals of Danfo.js is to bring data processing, machine learning and AI tools to JavaScript developers. ... Open-source libraries like Numpy and Pandas...\"\n\npandas is built on top of numpy; likewise danfo-js is built on tensorflow-js\n\nplease note danfo may not (yet?) support multi-column indexes\n\n\n\npandas-js\n\nUPDATE The pandas-js repo has not been updated in awhile\nFrom STEEL and Feras' answers\n\"pandas.js is an open source (experimental) library mimicking the Python pandas library. It relies on Immutable.js as the NumPy logical equivalent. The main data objects in pandas.js are, like in Python pandas, the Series and the DataFrame.\"\n\n\ndataframe-js\n\n\"DataFrame-js provides an immutable data structure for javascript and datascience, the DataFrame, which allows to work on rows and columns with a sql and functional programming inspired api.\"\n\n\ndata-forge\n\nSeen in Ashley Davis' answer\n\"JavaScript data transformation and analysis toolkit inspired by Pandas and LINQ.\"\nNote the old data-forge JS repository is no longer maintained; now a new repository uses Typescript\n\n\njsdataframe\n\n\"Jsdataframe is a JavaScript data wrangling library inspired by data frame functionality in R and Python Pandas.\"\n\n\ndataframe\n\n\"explore data by grouping and reducing.\"\n\n\nSQL Frames\n\n\"DataFrames meet SQL, in the Browser\"\n\"SQL Frames is a low code data management framework that can be directly embedded in the browser to provide rich data visualization and UX. Complex DataFrames can be composed using familiar SQL constructs. With its powerful built-in analytics engine, data sources can come in any shape, form and frequency and they can be analyzed directly within the browser. It allows scaling to big data backends by transpiling the composed DataFrame logic to SQL.\"\n\n\n\nThen after coming to this question, checking other answers here and doing more searching, I found options like:\n\nApache Arrow in JS\n\nThanks to user Back2Basics suggestion:\n\"Apache Arrow is a columnar memory layout specification for encoding vectors and table-like containers of flat and nested data. 
Apache Arrow is the emerging standard for large in-memory columnar data (Spark, Pandas, Drill, Graphistry, ...)\"\n\n\npolars\n\nPolars is a blazingly fast DataFrames library implemented in Rust using Apache Arrow Columnar Format as memory model.\n\n\nObservable\n\nAt first glance, seems like a JS alternative to the IPython/Jupyter \"notebooks\"\nObservable's page promises: \"Reactive programming\", a \"Community\", on a \"Web Platform\"\nSee 5 minute intro here\n\n\nportal.js (formerly recline; from Rufus' answer)\n\nMAY BE OUTDATED: Does not use a \"DataFrame\" API\nMAY BE OUTDATED: Instead emphasizes its \"Multiview\" (the UI) API, (similar to jQuery/DOM model) which doesn't require jQuery but does require a browser! More examples\nMAY BE OUTDATED: Also emphasizes its MVC-ish architecture; including back-end stuff (i.e. database connections)\n\n\njs-data\n\nReally more of an ORM! Most of its modules correspond to different data storage questions (js-data-mongodb, js-data-redis, js-data-cloud-datastore), sorting, filtering, etc.\nOn plus-side does work on Node.js as a first-priority; \"Works in Node.js and in the Browser.\"\n\n\nmiso (another suggestion from Rufus)\n\nImpressive backers like Guardian and bocoup.\n\n\nAlaSQL\n\n\"AlaSQL\" is an open source SQL database for Javascript with a strong focus on query speed and data source flexibility for both relational data and schemaless data. It works in your browser, Node.js, and Cordova.\"\n\n\nSome thought experiments:\n\n\"Scaling a DataFrame in Javascript\" - Gary Sieling\n\n\n\nHere are the criteria we used to consider the above choices\n\nGeneral Criteria\n\nLanguage (NodeJS vs browser JS vs Typescript)\nDependencies (i.e. if it uses an underlying library / AJAX/remote API's)\nActively supported (active user-base, active source repository, etc)\nSize/speed of JS library\n\n\nPanda's criterias in its R comparison\n\nPerformance\nFunctionality/flexibility\nEase-of-use\n\n\nSimilarity to Pandas / Dataframe API's\n\nSpecifically hits on their main features\nData-science emphasis\nBuilt-in visualization functions\nDemonstrated integration in combination with other tools like Jupyter\n(interactive notebooks), etc\n\n\n\n", "I've been working on a data wrangling library for JavaScript called data-forge. It's inspired by LINQ and Pandas.\nIt can be installed like this: \nnpm install --save data-forge\n\nYour example would work like this:\nvar csvData = \"Source,col1,col2,col3\\n\" +\n \"foo,1,2,3\\n\" +\n \"bar,3,4,5\\n\";\n\nvar dataForge = require('data-forge');\nvar dataFrame = \n dataForge.fromCSV(csvData)\n .parseInts([ \"col1\", \"col2\", \"col3\" ])\n ;\n\nIf your data was in a CSV file you could load it like this:\nvar dataFrame = dataForge.readFileSync(fileName)\n .parseCSV()\n .parseInts([ \"col1\", \"col2\", \"col3\" ])\n ;\n\nYou can use the select method to transform rows.\nYou can extract a column using getSeries then use the select method to transform values in that column.\nYou get your data back out of the data-frame like this:\nvar data = dataFrame.toArray();\n\nTo average a column:\n var avg = dataFrame.getSeries(\"col1\").average();\n\nThere is much more you can do with this.\nYou can find more documentation on npm.\n", "Ceaveat The following is applicable only to d3 v3, and not the latest d4v4!\nI am partial to d3.js, and while it won't be a total replacement for Pandas, if you spend some time learning its paradigm, it should be able to take care of all your data wrangling for you. 
(And if you wind up wanting to display results in the browser, it's ideally suited to that.)\nExample. My CSV file data.csv:\nname,age,color\nMickey,65,black\nDonald,58,white\nPluto,64,orange\n\nIn the same directory, create an index.html containing the following:\n<!DOCTYPE html>\n<html>\n <head>\n <meta charset=\"utf-8\"/>\n <title>My D3 demo</title>\n\n <script src=\"http://d3js.org/d3.v3.min.js\" charset=\"utf-8\"></script>\n </head>\n <body>\n\n <script charset=\"utf-8\" src=\"demo.js\"></script>\n </body>\n</html>\n\nand also a demo.js file containing the following:\nd3.csv('/data.csv',\n\n // How to format each row. Since the CSV file has a header, `row` will be\n // an object with keys derived from the header.\n function(row) {\n return {name : row.name, age : +row.age, color : row.color};\n },\n\n // Callback to run once all data's loaded and ready.\n function(data) {\n // Log the data to the JavaScript console\n console.log(data);\n\n // Compute some interesting results\n var averageAge = data.reduce(function(prev, curr) {\n return prev + curr.age;\n }, 0) / data.length;\n\n // Also, display it\n var ulSelection = d3.select('body').append('ul');\n var valuesSelection =\n ulSelection.selectAll('li').data(data).enter().append('li').text(\n function(d) { return d.age; });\n var totalSelection =\n ulSelection.append('li').text('Average: ' + averageAge);\n });\n\nIn the directory, run python -m SimpleHTTPServer 8181, and open http://localhost:8181 in your browser to see a simple listing of the ages and their average.\nThis simple example shows a few relevant features of d3:\n\nExcellent support for ingesting online data (CSV, TSV, JSON, etc.)\nData wrangling smarts baked in\nData-driven DOM manipulation (maybe the hardest thing to wrap one's head around): your data gets transformed into DOM elements.\n\n", "Pandas.js\nat the moment is an experimental library, but seems very promising it uses under the hood immutable.js and NumpPy logic, both data objects series and DataFrame are there..\n10-Feb-2021 Update as @jarthur mentioned it seems no update on this repo for last 4 years\n", "@neversaint your wait is over. say welcome to Danfo.js which is pandas like Javascript library built on tensorflow.js and supports tensors out of the box. This means you can convert danfo data structure to Tensors. And you can do groupby, merging, joining, plotting and other data processing.\n", "Below is Python numpy and pandas\n```\nimport numpy as np\nimport pandas as pd\n\ndata_frame = pd.DataFrame(np.random.randn(5, 4), ['A', 'B', 'C', 'D', 'E'], [1, 2, 3, 4])\n\ndata_frame[5] = np.random.randint(1, 50, 5)\n\nprint(data_frame.loc[['C', 'D'], [2, 3]])\n\n# axis 1 = Y | 0 = X\ndata_frame.drop(5, axis=1, inplace=True)\n\nprint(data_frame)\n\n```\nThe same can be achieved in JavaScript* [numjs works only with Node.js]\nBut D3.js has much advanced Data file set options. 
Both numjs and Pandas-js still in works..\n\n\nimport np from 'numjs';\r\nimport { DataFrame } from 'pandas-js';\r\n\r\nconst df = new DataFrame(np.random.randn(5, 4), ['A', 'B', 'C', 'D', 'E'], [1, 2, 3, 4])\r\n\r\n// df\r\n/*\r\n\r\n 1 2 3 4\r\nA 0.023126 1.078130 -0.521409 -1.480726\r\nB 0.920194 -0.201019 0.028180 0.558041\r\nC -0.650564 -0.505693 -0.533010 0.441858\r\nD -0.973549 0.095626 -1.302843 1.109872\r\nE -0.989123 -1.382969 -1.682573 -0.637132\r\n\r\n*/\n\n\n\n", "I think the closest thing are libraries like:\n\nReclineJS\nMiso Project Dataset\n\nRecline in particular has a Dataset object with a structure somewhat similar to Pandas data frames. It then allows you to connect your data with \"Views\" such as a data grid, graphing, maps etc. Views are usually thin wrappers around existing best of breed visualization libraries such as D3, Flot, SlickGrid etc.\nHere's an example for Recline:\n\n// Load some data\nvar dataset = recline.Model.Dataset({\n records: [\n { value: 1, date: '2012-08-07' },\n { value: 5, b: '2013-09-07' }\n ]\n // Load CSV data instead\n // (And Recline has support for many more data source types)\n // url: 'my-local-csv-file.csv',\n // backend: 'csv'\n});\n\n// get an element from your HTML for the viewer\nvar $el = $('#data-viewer');\n\nvar allInOneDataViewer = new recline.View.MultiView({\n model: dataset,\n el: $el\n});\n// Your new Data Viewer will be live!\n\n", "It's pretty easy to parse CSV in javascript because each line's already essentially a javascript array. If you load your csv into an array of strings (one per line) it's pretty easy to load an array of arrays with the values:\nvar pivot = function(data){\n var result = [];\n for (var i = 0; i < data.length; i++){\n for (var j=0; j < data[i].length; j++){\n if (i === 0){\n result[j] = [];\n }\n result[j][i] = data[i][j];\n }\n }\n return result;\n};\n\nvar getData = function() {\n var csvString = $(\".myText\").val();\n var csvLines = csvString.split(/\\n?$/m);\n\n var dataTable = [];\n\n for (var i = 0; i < csvLines.length; i++){\n var values;\n eval(\"values = [\" + csvLines[i] + \"]\");\n dataTable[i] = values;\n }\n\n return pivot(dataTable);\n};\n\nThen getData() returns a multidimensional array of values by column.\nI've demonstrated this in a jsFiddle for you.\nOf course, you can't do it quite this easily if you don't trust the input - if there could be script in your data which eval might pick up, etc.\n", "Here is an dynamic approach assuming an existing header on line 1. The csv is loaded with d3.js. \nfunction csvToColumnArrays(csv) {\n\n var mainObj = {},\n header = Object.keys(csv[0]);\n\n for (var i = 0; i < header.length; i++) {\n\n mainObj[header[i]] = [];\n };\n\n csv.map(function(d) {\n\n for (key in mainObj) {\n mainObj[key].push(d[key])\n }\n\n }); \n\n return mainObj;\n\n}\n\n\nd3.csv(path, function(csv) {\n\n var df = csvToColumnArrays(csv); \n\n});\n\nThen you are able to access each column of the data similar an R, python or Matlab dataframe with df.column_header[row_number]. \n", "Arquero is a library for handling relational data, with syntax similar to the popular R package dplyr (which is a sort of SQL-like).\nhttps://observablehq.com/@uwdata/introducing-arquero\n" ]
[ 193, 11, 8, 7, 7, 6, 3, 1, 1, 0 ]
[]
[]
[ "javascript", "pandas", "python" ]
stackoverflow_0030610675_javascript_pandas_python.txt
Q: how to swap 2 variables using recursion I've tried to swap 2 variables using recursion. So I passed them by reference and nothing changed in the main frame. But inside the function scope it works... Can anyone explain how this code works inside the stack, and whether there is any other solution to swap variables using recursion?
#include <iostream>

void swap(int &a, int &b) {
    if (a>b) {
        swap(b,a);
    }
}

int main() {
    int x = 10;
    int y = 8;
    swap(x, y);
    std::cout << x << ' ' << y;
    return 0;
}

A: Simply switching the parameters in the recursive call doesn't actually swap the values of the variables in the caller, or anywhere else. There's no (sensible) way to write this recursively because swapping isn't a recursive procedure. Recursion is used when you're traversing a data structure with multiple elements, like an array or a tree, or you're manipulating numbers repeatedly over time, as in a Fibonacci sequence.
But here, there's no repeated decision to be had. All it is is "swap if a > b, otherwise don't", which is a simple if, plus one of the normal swapping approaches you described:
#include <iostream>

void swap_if_greater(int &a, int &b) {
    if (a > b) {
        std::swap(a, b);
    }
}

int main() {
    int x = 10;
    int y = 8;
    swap_if_greater(x, y);
    std::cout << x << ' ' << y; // => 8 10
}

Note that I've renamed the function. swap implies an unconditional swap, but that's not what the function does. The function ensures the lower value will be in the first argument, so it's been renamed to swap_if_greater to reflect that. If you do want an unconditional swap, use std::swap.
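For reference, the "normal swapping approach" the answer alludes to is a plain three-step exchange; a minimal sketch, equivalent to what std::swap does for int:

// Classic exchange through a temporary; no recursion involved.
void swap_values(int &a, int &b) {
    int tmp = a;
    a = b;
    b = tmp; // same effect as std::swap(a, b)
}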
how to swap 2 variables using recursion
I've tried to swap 2 variables using recursion. So I passed them by reference and nothing changed in the main frame. But inside the function scope it works... Can anyone explain how this code works inside the stack, and whether there is any other solution to swap variables using recursion?
#include <iostream>

void swap(int &a, int &b) {
    if (a>b) {
        swap(b,a);
    }
}

int main() {
    int x = 10;
    int y = 8;
    swap(x, y);
    std::cout << x << ' ' << y;
    return 0;
}
[ "Simply switching the parameters in the recursive call doesn't actually swap the values of the variables in the caller, or anywhere else. There's no (sensible) way to write this recursively because swapping isn't a recursive procedure. Recursion is used when you're traversing a data structure with multiple elements, like an array or a tree, or you're manipulating numbers repeatedly over time, as in a Fibonacci sequence.\nBut here, there's no repeated decision to be had. All it is is \"swap if a > b, otherwise don't\", which is a simple if, plus one of the normal swapping approaches you described:\n#include <iostream>\n\nvoid swap_if_greater(int &a, int &b) {\n if (a > b) {\n std::swap(a, b);\n }\n}\n\nint main() {\n int x = 10;\n int y = 8;\n swap_if_greater(x, y);\n std::cout << x << ' ' << y; // => 8 10\n}\n\nNote that I've renamed the function. swap implies an unconditional swap, but that's not what the function does. The function ensures the lower value will be in the first argument, so it's been renamed to swap_if_greater to reflect that. If you do want an unconditional swap, use std::swap.\n" ]
[ 0 ]
[]
[]
[ "c++", "recursion", "reference", "swap" ]
stackoverflow_0074670279_c++_recursion_reference_swap.txt
Q: How to output a list of file names in React Native expo? I have been trying to figure out how to get a list of the image file names I have in my assets folder in my React Native project in Expo. I have tried a number of things like react-native-fs but it would say "Your javascript code tried to access a native module that doesn't exist: fs". I tried this solution but it's not compatible with React Native. Is there truly any way to simply output the file names from a directory for React Native? Any help is truly appreciated! A: Use the expo-file-system module:
$ expo install expo-file-system

Example for getting file information:
info = await FileSystem.getInfoAsync(path);
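To actually list the file names in a directory, the same module provides readDirectoryAsync; a minimal sketch (the directory used here is an assumption, and note that images bundled under assets/ are compiled into the app binary, so they are not browsable this way):

import * as FileSystem from 'expo-file-system';

// Logs every file name found in the app's document directory.
async function listFileNames(dirUri = FileSystem.documentDirectory) {
  const names = await FileSystem.readDirectoryAsync(dirUri); // => string[]
  names.forEach((name) => console.log(name));
  return names;
}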
How to output a list of file names in React Native expo?
I have been trying to figure out how to get a list of the image file names I have in my assets folder in my React Native project in Expo. I have tried a number of things like react-native-fs but it would say Your javascript code tried to access a native module that doesn't exist fs. I tried this solution but it's not compatible with React Native. Is there truly any way to simply output the file names from a directory for React Native? Any help is truly appreciated!
[ "use expo-module\n$ expo install expo-file-system\n\nExample for get file information\ninfo = await FileSystem.getInfoAsync(path);\n\n" ]
[ 0 ]
[]
[]
[ "expo", "filesystems", "react_native" ]
stackoverflow_0074671445_expo_filesystems_react_native.txt
Q: Fatal error: Uncaught mysqli_sql_exception: Access denied for user 'Adeleke'@'localhost' (using password: YES) I tried testing my PHP software with XAMPP (localhost) but after hitting the URL, I kept getting:
Fatal error: Uncaught mysqli_sql_exception: Access denied for user 'Adeleke'@'localhost' (using password: YES)
I thought it was from XAMPP phpMyAdmin so I installed Laragon and tried testing the software on it; below is what it shows:
Fatal error: Uncaught mysqli_sql_exception: Access denied for user 'Adeleke'@'localhost' (using password: YES) in C:\laragon\www\MyQueerDate\install\index.php:47 Stack trace: #0 C:\laragon\www\MyQueerDate\install\index.php(47): mysqli_connect('127.0.0.1', 'Adeleke', 'Badvibe019!', 'Laragon.MySQL') #1 {main} thrown in C:\laragon\www\MyQueerDate\install\index.php on line 47
Here is the code on MyQueerDate\install\index.php on line 46 - 49:
if (!empty($_POST['install'])) {
    $con = mysqli_connect($_POST['sql_host'], $_POST['sql_user'], $_POST['sql_pass'], $_POST['sql_name']);
    if (mysqli_connect_errno()) {
        $ServerErrors[] = "Failed to connect to MySQL: " . mysqli_connect_error();
I was trying to check the output of the software and view the dashboard and the admin panel, but it kept on showing:
Fatal error: Uncaught mysqli_sql_exception: Access denied for user 'Adeleke'@'localhost' (using password: YES) in C:\laragon\www\MyQueerDate\install\index.php:47 Stack trace: #0 C:\laragon\www\MyQueerDate\install\index.php(47): mysqli_connect('127.0.0.1', 'Adeleke', 'Badvibe019!', 'Laragon.MySQL') #1 {main} thrown in C:\laragon\www\MyQueerDate\install\index.php on line 47
I tried it on cPanel and it works, but I don't want to be testing software on cPanel; I can't afford to pay hosting fees for now and I want to use localhost. A: Check user and login permissions:
select user,host from mysql.user;

If you are in this list, you can connect normally.
Check whether the password is correct: you can try to use mysql -uusername -ppassword to log in on the MySQL server; that will give you a better picture of the cause of the error.
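If the user is missing from that list (or lacks privileges), a typical fix on a local XAMPP/Laragon install is to create the account and grant it access; a sketch, with the database name as a placeholder:

-- Run as root in the local MySQL shell:
CREATE USER 'Adeleke'@'localhost' IDENTIFIED BY 'your-password';
GRANT ALL PRIVILEGES ON your_database.* TO 'Adeleke'@'localhost';
FLUSH PRIVILEGES;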
Fatal error: Uncaught mysqli_sql_exception: Access denied for user 'Adeleke'@'localhost' (using password: YES)
I tried testing my PHP software with XAMPP (localhost) but after hitting the URL, I kept getting:
Fatal error: Uncaught mysqli_sql_exception: Access denied for user 'Adeleke'@'localhost' (using password: YES)
I thought it was from XAMPP phpMyAdmin so I installed Laragon and tried testing the software on it; below is what it shows:
Fatal error: Uncaught mysqli_sql_exception: Access denied for user 'Adeleke'@'localhost' (using password: YES) in C:\laragon\www\MyQueerDate\install\index.php:47 Stack trace: #0 C:\laragon\www\MyQueerDate\install\index.php(47): mysqli_connect('127.0.0.1', 'Adeleke', 'Badvibe019!', 'Laragon.MySQL') #1 {main} thrown in C:\laragon\www\MyQueerDate\install\index.php on line 47
Here is the code on MyQueerDate\install\index.php on line 46 - 49:
if (!empty($_POST['install'])) {
    $con = mysqli_connect($_POST['sql_host'], $_POST['sql_user'], $_POST['sql_pass'], $_POST['sql_name']);
    if (mysqli_connect_errno()) {
        $ServerErrors[] = "Failed to connect to MySQL: " . mysqli_connect_error();
I was trying to check the output of the software and view the dashboard and the admin panel, but it kept on showing:
Fatal error: Uncaught mysqli_sql_exception: Access denied for user 'Adeleke'@'localhost' (using password: YES) in C:\laragon\www\MyQueerDate\install\index.php:47 Stack trace: #0 C:\laragon\www\MyQueerDate\install\index.php(47): mysqli_connect('127.0.0.1', 'Adeleke', 'Badvibe019!', 'Laragon.MySQL') #1 {main} thrown in C:\laragon\www\MyQueerDate\install\index.php on line 47
I tried it on cPanel and it works, but I don't want to be testing software on cPanel; I can't afford to pay hosting fees for now and I want to use localhost.
[ "\nCheck user and login permissions\nselect user,host from mysql.user;\nIf you are in this list, you can connect normally\nCheck whether the password is correct\nYou can try to use mysql -uusername -ppassword to log in on the mysql server, you can better get the cause of the error, and then improve\n\n" ]
[ 0 ]
[]
[]
[ "mysql" ]
stackoverflow_0074670167_mysql.txt
Q: How do I center this loader inside the SVG I would like to center this loader inside the grey SVG vertically and horizontally. I can't use external CSS. Just either inline CSS or another way. I tried doing it myself but struggled for a while. Thanks
<svg viewBox="0 0 2560 1440" fill="none" xmlns="http://www.w3.org/2000/svg">
<path fill="#F5F5F5" d="M0 0h2560v1440H0z"/>
<path opacity=".2" fill="#000" d="M20.201 5.169c-8.254 0-14.946 6.692-14.946 14.946 0 8.255 6.692 14.946 14.946 14.946s14.946-6.691 14.946-14.946c-.001-8.254-6.692-14.946-14.946-14.946zm0 26.58c-6.425 0-11.634-5.208-11.634-11.634 0-6.425 5.209-11.634 11.634-11.634 6.425 0 11.633 5.209 11.633 11.634 0 6.426-5.208 11.634-11.633 11.634z"/>
<path fill="#000" d="m26.013 10.047 1.654-2.866a14.855 14.855 0 0 0-7.466-2.012v3.312c2.119 0 4.1.576 5.812 1.566z">
<animateTransform attributeType="xml" attributeName="transform" type="rotate" from="0 20 20" to="360 20 20" dur="0.75s" repeatCount="indefinite"/>
</path>
</svg>
A: Wrap your spinner in a <symbol> with a viewBox attribute as suggested by @exaneta and place a symbol instance with a specific width and height like so:
body{
 margin:0;
}

*{
 box-sizing: border-box;
}
<svg width="100%" height="100vh" xmlns="http://www.w3.org/2000/svg" style="background:#F5F5F5">
 <symbol id="spinner" viewBox="5 5 30 30">
 <path opacity=".2" fill="#000" d="M20.2 5.17c-8.25 0-14.95 6.69-14.95 14.95s6.69 14.95 14.95 14.95s14.95-6.69 14.95-14.95c0-8.25-6.69-14.95-14.95-14.95zm0 26.58c-6.43 0-11.63-5.21-11.63-11.63s5.21-11.63 11.63-11.63s11.63 5.21 11.63 11.63s-5.21 11.63-11.63 11.63z" />
 <path fill="#000" d="M26.01 10.05l1.65-2.87a14.86 14.86 0 0 0-7.47-2.01v3.31c2.12 0 4.1 0.58 5.81 1.57z">
 <animateTransform attributeType="xml" attributeName="transform" type="rotate" from="0 20 20" to="360 20 20" dur="0.75s" repeatCount="indefinite" />
 </path>
 </symbol>
 <use href="#spinner" x="50%" y="50%" transform="translate(-20)" width="40" height="40"/>
</svg>

The <use> element's placement is adjusted by translate(-20), i.e. minus half its 40-unit width/height.
How do I center this loader inside the SVG
I would like to center this loader inside the grey SVG vertically and horizontally. I can't use external CSS. Just either inline CSS or another way. I tried doing it myself but struggled for a while. Thanks
<svg viewBox="0 0 2560 1440" fill="none" xmlns="http://www.w3.org/2000/svg">
<path fill="#F5F5F5" d="M0 0h2560v1440H0z"/>
<path opacity=".2" fill="#000" d="M20.201 5.169c-8.254 0-14.946 6.692-14.946 14.946 0 8.255 6.692 14.946 14.946 14.946s14.946-6.691 14.946-14.946c-.001-8.254-6.692-14.946-14.946-14.946zm0 26.58c-6.425 0-11.634-5.208-11.634-11.634 0-6.425 5.209-11.634 11.634-11.634 6.425 0 11.633 5.209 11.633 11.634 0 6.426-5.208 11.634-11.633 11.634z"/>
<path fill="#000" d="m26.013 10.047 1.654-2.866a14.855 14.855 0 0 0-7.466-2.012v3.312c2.119 0 4.1.576 5.812 1.566z">
<animateTransform attributeType="xml" attributeName="transform" type="rotate" from="0 20 20" to="360 20 20" dur="0.75s" repeatCount="indefinite"/>
</path>
</svg>
[ "Wrap your spinner in a <symbol> with a viewBox attribute as suggested by @exaneta and place a symbol instance with a specific width and height like so:\n\n\nbody{\n margin:0;\n}\n\n*{\n border-box:border-box;\n}\n<svg width=\"100%\" height=\"100vh\" xmlns=\"http://www.w3.org/2000/svg\" style=\"background:#F5F5F5\">\n <symbol id=\"spinner\" viewBox=\"5 5 30 30\">\n <path opacity=\".2\" fill=\"#000\" d=\"M20.2 5.17c-8.25 0-14.95 6.69-14.95 14.95s6.69 14.95 14.95 14.95s14.95-6.69 14.95-14.95c0-8.25-6.69-14.95-14.95-14.95zm0 26.58c-6.43 0-11.63-5.21-11.63-11.63s5.21-11.63 11.63-11.63s11.63 5.21 11.63 11.63s-5.21 11.63-11.63 11.63z\" />\n <path fill=\"#000\" d=\"M26.01 10.05l1.65-2.87a14.86 14.86 0 0 0-7.47-2.01v3.31c2.12 0 4.1 0.58 5.81 1.57z\">\n <animateTransform attributeType=\"xml\" attributeName=\"transform\" type=\"rotate\" from=\"0 20 20\" to=\"360 20 20\" dur=\"0.75s\" repeatCount=\"indefinite\" />\n </path>\n </symbol>\n <use href=\"#spinner\" x=\"50%\" y=\"50%\" transform=\"translate(-20)\" width=\"40\" height=\"40\">\n</svg>\n\n\n\nThe <use> element's placement is adjusted by translate(-20) (height or width/2)\n" ]
[ 0 ]
[]
[]
[ "css", "svg" ]
stackoverflow_0074670494_css_svg.txt
Q: How to remove outliers for variable versus another variable in imported dataset in R? I was asked to make a boxplot of variable SAW for the 2 surgical intervention types defined by HSW; the dataset name is mydata. Then I was asked to check if there are any outliers in the boxplot. I found outliers but I can't remove them; I tried multiple ways but all ended in failure. Could you please help me with that issue? This is my boxplot:
boxplot(mydata$SAW~mydata$HSW,main="SAW for two surgical")

no_outliers <- subset(mydata, mydata$SAW > (Q1 - 1.5*IQR) & mydata$HSW < (Q3 + 1.5*IQR))

This was my last attempt but it gave me an error saying: Error in surgery$SAW : $ operator is invalid for atomic vectors A: One way would be to use the boxplot object itself:
old <- boxplot(disp~am,mtcars)
 # old$out has the outlier values stored
 # filter the df using those values
new <- mtcars[!mtcars$disp %in% old$out,]
## new boxplot without outliers
boxplot(disp~am,new)
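Applied to the asker's data, the same pattern would look like this (column names SAW and HSW taken from the question; untested against the actual dataset):

old <- boxplot(SAW ~ HSW, data = mydata, main = "SAW for two surgical")
# old$out holds the outlier values; keep only rows whose SAW is not among them
no_outliers <- mydata[!mydata$SAW %in% old$out, ]
boxplot(SAW ~ HSW, data = no_outliers, main = "SAW without outliers")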
How to remove outliers for variable versus another variable in imported dataset in R?
I was asked to make a boxplot of variable SAW for the 2 surgical intervention types defined by HSW; the dataset name is mydata. Then I was asked to check if there are any outliers in the boxplot. I found outliers but I can't remove them; I tried multiple ways but all ended in failure. Could you please help me with that issue? This is my boxplot:
boxplot(mydata$SAW~mydata$HSW,main="SAW for two surgical")

no_outliers <- subset(mydata, mydata$SAW > (Q1 - 1.5*IQR) & mydata$HSW < (Q3 + 1.5*IQR))

This was my last attempt but it gave me an error saying: Error in surgery$SAW : $ operator is invalid for atomic vectors
[ "On way would be to use the boxplot object itself-\nold <- boxplot(disp~am,mtcars)\n # old$out has the outlier values stored\n # filter the df using those values\nnew <- mtcars[!mtcars$disp %in% old$out,]\n## new boxplot withut ouliers..\nboxplot(disp~am,new)\n\n\n\n" ]
[ 0 ]
[]
[]
[ "atomic", "boxplot", "dollar_sign", "outliers", "r" ]
stackoverflow_0074672108_atomic_boxplot_dollar_sign_outliers_r.txt
Q: Advanced Custom Fields checkbox storing data in single string instead of array I want to get the checked boxes in my field in an array, but get_field of the checkbox field returns the values in an array of size 1 with the labels of the checked boxes in a single string, separated by ' | '. I can't seem to figure out how to get ACF to give me an array. So I want get_field('checkboxes') to return this
Array ( [0] => checkbox1 [1] => checkbox2 [2] => checkbox3 )

But I'm getting this
Array ( [0] => checkbox1 | checkbox2 | checkbox3 )

I have the return format of the field set to Value; I've checked the ACF documentation for checkboxes and I'm doing exactly as it says, but it's not working. A: When the return format is set to 'Value', your output will be:
$field = get_field_object('checkboxes');
$choices = $field['value'];

<?php if( $choices ): ?>

    <?php foreach( $choices as $checkbox): ?>
        <?php echo $field['choices'][ $checkbox]; ?>
    <?php endforeach; ?>

<?php endif; ?>
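If the stored value really does come back as one pipe-separated string, a hedged workaround is to split it yourself (field name taken from the question; this treats the symptom rather than the cause):

$raw = get_field('checkboxes');
$values = is_array($raw) ? $raw : array((string) $raw);
// Collapse a single "checkbox1 | checkbox2 | checkbox3" entry into a real array.
if (count($values) === 1 && strpos((string) $values[0], ' | ') !== false) {
    $values = explode(' | ', $values[0]);
}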
Advanced Custom Fields checkbox storing data in single string instead of array
I want to get the checked boxes in my field in an array, but get_field of the checkbox field returns the values in an array of size 1 with the labels of the checked boxes in a single string, separated by ' | '. I can't seem to figure out how to get ACF to give me an array. So I want get_field('checkboxes') to return this
Array ( [0] => checkbox1 [1] => checkbox2 [2] => checkbox3 )

But I'm getting this
Array ( [0] => checkbox1 | checkbox2 | checkbox3 )

I have the return format of the field set to Value; I've checked the ACF documentation for checkboxes and I'm doing exactly as it says, but it's not working.
[ "when set return is 'value' then your output will be:\n$field = get_field_object('checkboxes');\n$choices = $field['value'];\n\n<?php if( $choices ): ?>\n\n <?php foreach( $choices as $checkbox): ?>\n <?php echo $field['choices'][ $checkbox]; ?>\n <?php endforeach; ?>\n\n<?php endif; ?>\n\n" ]
[ 0 ]
[]
[]
[ "advanced_custom_fields", "wordpress" ]
stackoverflow_0074670000_advanced_custom_fields_wordpress.txt
Q: A vim plugin for translators Is there any vim plugin which allows splits for translations? (E.g. 2 vertical splits, one sentence per block on the left side, and my translated text on the right side; even if the translated text is bigger than the original, it should display the proper highlighted block on the left side.) A: Maybe this works for you: 

Use two files, one containing the original texts, one containing the translations. 
Both files have one sentence per line. 
Open both files in a vertically split window (:vs). 
Use :set wrap to see more lines at a time. 
While still on the first line, enter :set scrollbind and :set cursorbind in both windows. This will keep both windows in sync when you jump back and forth with the cursor.

A: There is one new vim plugin: translate-shell.vim. You can try to use it.
A: The OmegaT free translation memory program now has a Vim mode, through a plugin called Vimish. If what you're looking to do is translate texts (or post-edit translations) using familiar Vim keys, this might be worth a look.
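As a convenience, the steps from the first answer can be bundled into a single command; a sketch (assumes both files are already open side by side, e.g. after :vs translation.txt):

" Put this in your vimrc, then run :SyncSplits once both windows are open.
command! SyncSplits windo setlocal wrap scrollbind cursorbind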
A vim plugin for translators
Is there any vim plugin which allows splits for translations? (E.g. 2 vertical splits, one sentence per block on the left side, and my translated text on the right side; even if the translated text is bigger than the original, it should display the proper highlighted block on the left side.)
[ "Maybe this works for you: \n\nUse two files, one containing the original texts, one containing the translations. \nBoth files have one sentence per line. \nOpen both files in a vertically split window (:vs). \nUse :set wrap to see more lines at a time. \nWhile still on the first line, enter :set scrollbind and :set cursorbind in both windows. This will keep both windows in sync when you jump back and forth with the cursor.\n\n", "There is one new vim plugin: translate-shell.vim. You can try to use it.\n", "The OmegaT free translation memory program now has a Vim mode, through a plugin called Vimish. If what you're looking to do is translate texts (or post-edit translations) using familiar Vim keys, this might be worth a look.\n" ]
[ 7, 0, 0 ]
[]
[]
[ "translation", "vim" ]
stackoverflow_0011488241_translation_vim.txt
Q: Printing a nested JSON Array value I've been trying to find ways to print the individual data but can't seem to figure out where I'm going wrong. I started with this but I get no results. Before this I tried nesting loops but also got nowhere either.
$data = curl_exec($ch);
$d = json_decode($data, true);
foreach($d as $k=>$v){
    echo $v['value']['displayName'];
}

Then I tried the following, which only got me some of the results. I'm not sure where I'm going wrong with this.
foreach(json_decode($data,true) as $d){
    foreach($d as $k=>$v){
        foreach($v as $kk=>$vv){
            echo $kk.$vv;
        }
    }
}

The JSON looks like the following:
{ "value": [ { "id": "", "name": "", "etag": "", "type": "Microsoft.SecurityInsights/alertRules", "kind": "Scheduled", "properties": { "incidentConfiguration": { "createIncident": true, "groupingConfiguration": { "enabled": false, "reopenClosedIncident": false, "lookbackDuration": "PT5M", "matchingMethod": "AllEntities", "groupByEntities": [], "groupByAlertDetails": null, "groupByCustomDetails": null } }, "entityMappings": [ { "entityType": "Account", "fieldMappings": [ { "identifier": "FullName", "columnName": "AccountCustomEntity" } ] }, { "entityType": "IP", "fieldMappings": [ { "identifier": "Address", "columnName": "IPCustomEntity" } ] } ], "queryFrequency": "P1D", "queryPeriod": "P1D", "triggerOperator": "GreaterThan", "triggerThreshold": 0, "severity": "Medium", "query": "", "suppressionDuration": "PT1H", "suppressionEnabled": false, "tactics": [ "Reconnaissance", "Discovery" ], "displayName": "MFA disabled for a user", "enabled": true, "description": "Multi-Factor Authentication (MFA) helps prevent credential compromise. This alert identifies when an attempt has been made to diable MFA for a user ", "alertRuleTemplateName": null, "lastModifiedUtc": "2022-11-14T02:20:28.8027697Z" } }, ... ... ... A: Here is how you can get the display name without a loop. Notice that the 0 is the key value of the array since it doesn't have a name.
We start from the value, and we move one layer deeper by selecting the first array 0. Now we need to select the properties and finally, we can get the displayName from there.
$displayName = $d["value"][0]["properties"]["displayName"];
echo($displayName);

/*
Here is a quick demonstration:

value:
{
    0:
    {
        ...
        properties:
        {
            ...
            displayName: [We made it!]
        }
    }

}

*/

And here is a very good post that explains this in more detail
How to decode multi-layers nested JSON String and display in PHP?
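And if the goal is every displayName rather than just the first, a loop over the value array does it (same structure assumed as in the sample JSON):

foreach ($d["value"] as $rule) {
    // Each element holds one alert rule; displayName sits under properties.
    echo $rule["properties"]["displayName"] . PHP_EOL;
}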
Printing a nested JSON Array value
I've been trying to find ways to print the individual data but can't seem to figure out where I'm going wrong. I started with this but I get no results. Before this I tried nesting loops but also got nowhere either.
$data = curl_exec($ch);
$d = json_decode($data, true);
foreach($d as $k=>$v){
    echo $v['value']['displayName'];
}

Then I tried the following, which only got me some of the results. I'm not sure where I'm going wrong with this.
foreach(json_decode($data,true) as $d){
    foreach($d as $k=>$v){
        foreach($v as $kk=>$vv){
            echo $kk.$vv;
        }
    }
}

The JSON looks like the following:
{ "value": [ { "id": "", "name": "", "etag": "", "type": "Microsoft.SecurityInsights/alertRules", "kind": "Scheduled", "properties": { "incidentConfiguration": { "createIncident": true, "groupingConfiguration": { "enabled": false, "reopenClosedIncident": false, "lookbackDuration": "PT5M", "matchingMethod": "AllEntities", "groupByEntities": [], "groupByAlertDetails": null, "groupByCustomDetails": null } }, "entityMappings": [ { "entityType": "Account", "fieldMappings": [ { "identifier": "FullName", "columnName": "AccountCustomEntity" } ] }, { "entityType": "IP", "fieldMappings": [ { "identifier": "Address", "columnName": "IPCustomEntity" } ] } ], "queryFrequency": "P1D", "queryPeriod": "P1D", "triggerOperator": "GreaterThan", "triggerThreshold": 0, "severity": "Medium", "query": "", "suppressionDuration": "PT1H", "suppressionEnabled": false, "tactics": [ "Reconnaissance", "Discovery" ], "displayName": "MFA disabled for a user", "enabled": true, "description": "Multi-Factor Authentication (MFA) helps prevent credential compromise. This alert identifies when an attempt has been made to diable MFA for a user ", "alertRuleTemplateName": null, "lastModifiedUtc": "2022-11-14T02:20:28.8027697Z" } }, ... ... ...
[ "Here is how you can get the display name without a loop. Notice that the 0 is the key value of the array since it doesn't have a name.\nWe start from the value, and we move one layer deeper by selecting the first array 0. Now we need to select the properties and finally, we can get the displayName from there.\n$displayName = $d[\"value\"][0][\"properties\"][\"displayName\"];\necho($displayName);\n\n/*\nHere is a quick demonstration:\n\nvalue:\n{\n 0:\n {\n ...\n properties:\n {\n ...\n displayName: [We made it!]\n }\n }\n\n}\n\n*/\n\nAnd here is a very good post that explains this in more detail\nHow to decode multi-layers nested JSON String and display in PHP?\n" ]
[ 1 ]
[]
[]
[ "arrays", "json", "php" ]
stackoverflow_0074672452_arrays_json_php.txt
Q: Junit (4.12) is not executing after spring-boot 2.6.2 migration I have migrated from Spring to Spring-boot version 2.6.2. mvn clean install is successful but none of the JUnit (version 4.12) tests are executing. After some research I got to know that from Spring Boot 2.4 onwards, JUnit 4 has been removed. I tried the solutions below, which didn't work. After updating to latest Spring boot version, spring-boot-starter-parent 2.6.2, my tests stop executing Spring Boot maven unit tests not being executed A: JUnit 5 = JUnit Platform + JUnit Jupiter + JUnit Vintage. Refer to: Junit5 doc If you import junit-jupiter by depending on the latest spring-boot-starter-test, it will only run the JUnit 5 style test cases. So JUnit provides junit-vintage to run old JUnit 3/JUnit 4 cases, and by default it's not contained in the spring-boot-starter-test dependencies. So there are two solutions to keep running JUnit 4: Depend on both junit-jupiter and junit-vintage to support all JUnit 3/4/5 cases.
<dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>org.junit.vintage</groupId> <artifactId>junit-vintage-engine</artifactId> <scope>test</scope> </dependency>

Exclude junit-jupiter from spring-boot-starter-test to run JUnit 4
<dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> <exclusions> <exclusion> <groupId>org.junit.jupiter</groupId> <artifactId>junit-jupiter</artifactId> </exclusion> </exclusions> </dependency>

A: Exclude the junit-jupiter-engine and junit-vintage-engine from the spring-boot-starter-test dependency and then add the JUnit 4 dependency in your pom:
<groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> <exclusions> <exclusion> <groupId>org.junit.jupiter</groupId> <artifactId>junit-jupiter-engine</artifactId> </exclusion> <exclusion> <groupId>org.junit.vintage</groupId> <artifactId>junit-vintage-engine</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.12</version> <scope>test</scope> </dependency>

A: I think you ought to migrate to JUnit 5. A: As of Spring Boot v2.2, Spring only supports JUnit 5 by default and has removed backward compatibility with JUnit 4. 
It clearly states that in the Spring 2.2 release notes: https://github.com/spring-projects/spring-boot/wiki/Spring-Boot-2.2-Release-Notes "You can’t use the junit-vintage-engine and you’ll need to explicitly roll back to JUnit 4:"
<dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> <exclusions> <exclusion> <groupId>org.junit.jupiter</groupId> <artifactId>junit-jupiter</artifactId> </exclusion> <exclusion> <groupId>org.junit.vintage</groupId> <artifactId>junit-vintage-engine</artifactId> </exclusion> <exclusion> <groupId>org.mockito</groupId> <artifactId>mockito-junit-jupiter</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.12</version> </dependency> </dependencies>

The Spring Boot 2.4 release notes mention this POM dependency change:
<dependency> <groupId>org.junit.vintage</groupId> <artifactId>junit-vintage-engine</artifactId> <scope>test</scope> <exclusions> <exclusion> <groupId>org.hamcrest</groupId> <artifactId>hamcrest-core</artifactId> </exclusion> </exclusions> </dependency>

https://github.com/spring-projects/spring-boot/wiki/Spring-Boot-2.4-Release-Notes#junit-5s-vintage-engine-removed-from-spring-boot-starter-test A: I can run both JUnit 4/JUnit 5 using Spring Boot 3 (this will work for Spring Boot 2.x.x as well) Step 1 (I am using Gradle as the build tool, but Maven would be similar): testImplementation 'org.springframework.boot:spring-boot-starter-test' testImplementation("org.junit.vintage:junit-vintage-engine") //This is for Junit4 Step 2 package com.example; //Change the imports accordingly to toggle between JUnit4/5 import static org.junit.Assert.*; import org.junit.Test; public class ATest { @Test public void test01() { assertEquals("Optimizer", "Optimizer"); } }
Junit (4.12) is not executing after spring-boot 2.6.2 migration
I have migrated from Spring to Spring-boot version 2.6.2. mvn clean install is successful but none of the JUnit (version 4.12) tests are executing. After some research I got to know that from Spring Boot 2.4 onwards, JUnit 4 has been removed. I tried the solutions below, which didn't work. After updating to latest Spring boot version, spring-boot-starter-parent 2.6.2, my tests stop executing Spring Boot maven unit tests not being executed
[ "JUnit 5 = JUnit Platform + JUnit Jupiter + JUnit Vintage.\nRefer to: Junit5 doc\nIf you import junit-jupiter by depending on latest spring-boot-starter-test, it will only run the Junit5 style test cases. So Junit provides junit-vintage to run old Junit3/Junit4 cases and by default it's not contained in the spring-boot-starter-test dependencies.\nSo there are two solutions to keep running Junit4:\n\nDepend on both of junit-jupiter and junit-vintage to support all Junit3/4/5 cases.\n <dependency>\n <groupId>org.springframework.boot</groupId>\n <artifactId>spring-boot-starter-test</artifactId>\n <scope>test</scope>\n </dependency>\n <dependency>\n <groupId>org.junit.vintage</groupId>\n <artifactId>junit-vintage-engine</artifactId>\n <scope>test</scope>\n </dependency>\n\n\nExclude junit-jupiter from spring-boot-starter-test to run Junit4\n <dependency>\n <groupId>org.springframework.boot</groupId>\n <artifactId>spring-boot-starter-test</artifactId>\n <scope>test</scope>\n <exclusions>\n <exclusion>\n <groupId>org.junit.jupiter</groupId>\n <artifactId>junit-jupiter</artifactId>\n </exclusion>\n </exclusions>\n </dependency>\n\n\n\n", "Exclude the junit-jupiter-engine and junit-vintage-engine from the spring-boot-starter-test dependency and then add JUnit 4 dependency in your pom:\n<groupId>org.springframework.boot</groupId>\n <artifactId>spring-boot-starter-test</artifactId>\n <scope>test</scope>\n <exclusions>\n <exclusion>\n <groupId>org.junit.jupiter</groupId>\n <artifactId>junit-jupiter-engine</artifactId>\n </exclusion>\n <exclusion>\n <groupId>org.junit.vintage</groupId>\n <artifactId>junit-vintage-engine</artifactId>\n </exclusion>\n </exclusions>\n</dependency>\n<dependency>\n <groupId>junit</groupId>\n <artifactId>junit</artifactId>\n <version>4.12</version>\n <scope>test</scope>\n</dependency>\n\n", "I think you ought to migrate to JUnit 5.\n", "As of SpringBoot V2.2, Spring only support Junit 5 by default and have removed backward compatibility with JUnit 4.\nIt clearly states that in Spring 2.2 release notes :\nhttps://github.com/spring-projects/spring-boot/wiki/Spring-Boot-2.2-Release-Notes\n\"You can’t use the junit-vintage-engine and you’ll need to explicitly roll back to JUnit 4:\"\n<dependencies>\n <dependency>\n <groupId>org.springframework.boot</groupId>\n <artifactId>spring-boot-starter-test</artifactId>\n <scope>test</scope>\n <exclusions>\n <exclusion>\n <groupId>org.junit.jupiter</groupId>\n <artifactId>junit-jupiter</artifactId>\n </exclusion>\n <exclusion>\n <groupId>org.junit.vintage</groupId>\n <artifactId>junit-vintage-engine</artifactId>\n </exclusion>\n <exclusion>\n <groupId>org.mockito</groupId>\n <artifactId>mockito-junit-jupiter</artifactId>\n </exclusion>\n </exclusions>\n </dependency>\n <dependency>\n <groupId>junit</groupId>\n <artifactId>junit</artifactId>\n <version>4.12</version>\n </dependency>\n</dependencies>\n\nSpringboot 2.4 release notes mentions this POM dependency change :\n<dependency>\n <groupId>org.junit.vintage</groupId>\n <artifactId>junit-vintage-engine</artifactId>\n <scope>test</scope>\n <exclusions>\n <exclusion>\n <groupId>org.hamcrest</groupId>\n <artifactId>hamcrest-core</artifactId>\n </exclusion>\n </exclusions>\n</dependency>\n\nhttps://github.com/spring-projects/spring-boot/wiki/Spring-Boot-2.4-Release-Notes#junit-5s-vintage-engine-removed-from-spring-boot-starter-test\n", "I can run both JUnit4/JUnit5 using Spring Boot 3(This will work for Spring Boot 2.x.x as well)\nStep 1 (I am using Gradle as build tool, but Maven would be 
similar):\ntestImplementation 'org.springframework.boot:spring-boot-starter-test'\ntestImplementation(\"org.junit.vintage:junit-vintage-engine\") //This is for Junit4\n\nStep 2\npackage com.example;\n\n//Change the imports accordingly to toggle between JUnit4/5\nimport static org.junit.Assert.*;\nimport org.junit.Test;\n\npublic class ATest {\n @Test\n public void test01() {\n assertEquals(\"Optimizer\", \"Optimizer\");\n }\n}\n\n" ]
[ 6, 2, 0, 0, 0 ]
[]
[]
[ "java", "junit", "junit4", "spring", "spring_boot" ]
stackoverflow_0070892138_java_junit_junit4_spring_spring_boot.txt
Q: Unable to run ios simulator with react native cli Screenshot of npx react-native doctor: As you can see, the iOS is all green ticks. I run npm start in a terminal tab to start the React Native Metro server. I then open another terminal and run npm run ios. Yes, the simulator opens straight away, but then I get a long error: https://pastebin.com/6bFTMDTr I ran cd ios then pod install and got this: Anyone have any idea? A: Have you tried installing the pod dependency manually?
$ cd iOS && pod install

upd.
Looks like you haven't installed Ruby
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
brew install ruby
echo 'export PATH="/usr/local/opt/ruby/bin:$PATH"' >> ~/.bash_profile
source ~/.bash_profile

after that, install the watchman dependency
$ brew update
$ brew install watchman

upd2.
1) brew reinstall cocoapods
(error message will come up regarding linking)
2) brew link --overwrite cocoapods
(to fix the link)
Unable to run ios simulator with react native cli
Screenshot of npx react-native doctor: As you can see, the iOS is all green ticks. I run npm start in a terminal tab to start the React Native Metro server. I then open another terminal and run npm run ios. Yes, the simulator opens straight away, but then I get a long error: https://pastebin.com/6bFTMDTr I ran cd ios then pod install and got this: Anyone have any idea?
[ "Have you tried installing the pod dependency manually?\n\n$ cd iOS && pod install\n\nupd.\nLooks like you haven't installed Ruby\n/usr/bin/ruby -e \"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)\"\nbrew install ruby\necho 'export PATH=\"/usr/local/opt/ruby/bin:$PATH\"' >> ~/.bash_profile\nsource ~/.bash_profile\n\nafter that install a watchman depend\n$ brew update\n$ brew install watchman\n\nupd2.\n1) brew reinstall cocoapods\n\n(error message will come up regarding linking)\n2) brew link --overwrite cocoapods\n\n(to fix the link)\n" ]
[ 0 ]
[]
[]
[ "react_native" ]
stackoverflow_0074672507_react_native.txt
Q: FlatList not getting rerendered with new data after axios/API call I have a MyList Component where I search based on the date specified and render the detail using FlatList
const MyList =( {date}) =>{
    const [list, setList] = useState([]);

    useEffect(() => {
        const url = `https://some-service.com/list?date=${date}`
        axios.get(url).then((res) => {
            setList(res.data.result);
        });
    }, [date])

    return (
        <FlatList
            data={list}
            extraData={list}
            renderItem={({item}) => (<RenderItem item={item}/>)}
            keyExtractor={(item) => item._id}
        ></FlatList>
    )
}

When date is updated in the parent it is passed to the child component MyList
//...some code...//
const [date, setDate] = useState(moment().format('YYYY-MM-DD'));

return (
    <View style={styles.container}>
        <View style={styles.dateSelector}>
            <DateSelector date={date} setDate={setDate} />
        </View>
        <MyList date={date} />
    </View>
);

Now when I change the date, a new set of data is fetched and stored in the list state, but the FlatList is not getting re-rendered to show the changes. NOTE: Below is a sample output of the service call. When we make the REST call, the result array has the same number of elements, with the same "_id" and "name"; the only difference is the value of "somelist". For Date 2022-12-04
{ "status": "Success", "result": [ { "_id": "638b3ddc1b11677f6202eb4c", "name": "John Doe", "somelist": [ { "date": "2022-12-04T00:00:00.000Z", "status": "good" } ] }, { "_id": "638b3ddc1b11677f6202eb4f", "name": "Pappu", "somelist": [ { "date": "2022-12-04T00:00:00.000Z", "status": "bla" } ] } .....

For Date 2022-12-03
{ "status": "Success", "result": [ { "_id": "638b3ddc1b11677f6202eb4c", "name": "John Doe", "somelist": [ { "date": "2022-12-03T00:00:00.000Z", "status": "bad" } ] }, { "_id": "638b3ddc1b11677f6202eb4f", "name": "Pappu", "somelist": [ { "date": "2022-12-04T00:00:00.000Z", "status": "good" } ] } .....

I have tried using extraData, but I still have the same issue. extraData={list} A: useEffect(() => {
    const url = `https://some-service.com/list?date=${date}`
    axios.get(url).then((res) => {
        // spread into a new array so the FlatList sees a new reference
        setList([...res.data.result]);
    });
}, [date])

A: At a glance I believe it may be due to how you’re setting state. So:
setList(res.data.result)
Should be:
setList([...list, ...res.data.result])
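One more hedged possibility worth checking: both dates return items with identical _id values, so a memoized RenderItem can be handed the same keys and props shape and skip re-rendering. Making the key depend on the selected date forces fresh rows (a sketch, untested):

<FlatList
    data={list}
    renderItem={({item}) => (<RenderItem item={item}/>)}
    keyExtractor={(item) => `${date}-${item._id}`}
/>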
FlatList not getting rerendered with new data after axios/API call
I have a MyList Component where I search based on the date specified and render the detail using FlatList
const MyList =( {date}) =>{
    const [list, setList] = useState([]);

    useEffect(() => {
        const url = `https://some-service.com/list?date=${date}`
        axios.get(url).then((res) => {
            setList(res.data.result);
        });
    }, [date])

    return (
        <FlatList
            data={list}
            extraData={list}
            renderItem={({item}) => (<RenderItem item={item}/>)}
            keyExtractor={(item) => item._id}
        ></FlatList>
    )
}

When date is updated in the parent it is passed to the child component MyList
//...some code...//
const [date, setDate] = useState(moment().format('YYYY-MM-DD'));

return (
    <View style={styles.container}>
        <View style={styles.dateSelector}>
            <DateSelector date={date} setDate={setDate} />
        </View>
        <MyList date={date} />
    </View>
);

Now when I change the date, a new set of data is fetched and stored in the list state, but the FlatList is not getting re-rendered to show the changes. NOTE: Below is a sample output of the service call. When we make the REST call, the result array has the same number of elements, with the same "_id" and "name"; the only difference is the value of "somelist". For Date 2022-12-04
{ "status": "Success", "result": [ { "_id": "638b3ddc1b11677f6202eb4c", "name": "John Doe", "somelist": [ { "date": "2022-12-04T00:00:00.000Z", "status": "good" } ] }, { "_id": "638b3ddc1b11677f6202eb4f", "name": "Pappu", "somelist": [ { "date": "2022-12-04T00:00:00.000Z", "status": "bla" } ] } .....

For Date 2022-12-03
{ "status": "Success", "result": [ { "_id": "638b3ddc1b11677f6202eb4c", "name": "John Doe", "somelist": [ { "date": "2022-12-03T00:00:00.000Z", "status": "bad" } ] }, { "_id": "638b3ddc1b11677f6202eb4f", "name": "Pappu", "somelist": [ { "date": "2022-12-04T00:00:00.000Z", "status": "good" } ] } .....

I have tried using extraData, but I still have the same issue. extraData={list}
[ "useEffect(() => {\n \n const url = `https://some-service.com/list?date=${date}`\n\n axios.get(url).then((res) => {\n\n setList(res.data.result);\n });\n \n return [...list]\n\n}, [date])\n\n", "At glance I believe it may be due to how you’re setting state. So:\nsetList(res.data.result)\nShould be:\nsetList([…list, res.data.result])\n" ]
[ 0, 0 ]
[]
[]
[ "axios", "javascript", "react_native", "react_native_flatlist" ]
stackoverflow_0074671589_axios_javascript_react_native_react_native_flatlist.txt
Q: Ubuntu Server how to start a service only when VPN is connected and restart it every time i got disconnected I created a test server where I have a service that needs to be started on boot after I am connected to NordVPN. Anyway, I found that if I also get disconnected, the service needs to be restarted after the connection to the VPN is restored. Can you help me with this? Thanks a lot. I created a service and delayed its start at boot:
[Unit]
Description=qBittorrent-nox service
Documentation=man:qbittorrent-nox(1)
Wants=network-online.target
After=network-online.target nss-lookup.target

[Service]
ExecStartPre=/bin/sleep 60
# if you have systemd < 240 (Ubuntu 18.10 and earlier, for example), you probably want to use Type=>
Type=exec
# change user as needed
User=root
# The -d flag should not be used in this setup
ExecStart=/usr/bin/qbittorrent-nox
Restart=on-failure
RestartSec=1s
# uncomment this for versions of qBittorrent < 4.2.0 to set the maximum number of open files to unl>
#LimitNOFILE=infinity
# uncomment this to use "Network interface" and/or "Optional IP address to bind to" options
# without this binding will fail and qBittorrent's traffic will go through the default route
# AmbientCapabilities=CAP_NET_RAW

[Install]
WantedBy=multi-user.target

A: The Restart directive specifies the conditions under which the service should be restarted, based only on how the service itself exits; your unit uses on-failure, which means the service is restarted if it exits with a non-zero exit code. There is no VPN-aware value for this directive (the valid values are no, on-success, on-failure, on-abnormal, on-watchdog, on-abort and always), so "restart when the VPN comes back" has to be expressed through unit dependencies instead.
Note that you will need a unit that manages the VPN connection, declared as a dependency of your service's unit file. This ensures the service is only started after the VPN connection is established.
For example, you could add the following lines to your service's unit file to make it dependent on a VPN connection:
[Unit]
...
After=vpn.service
Wants=vpn.service

This will ensure that your service is only started after the vpn.service unit has been started successfully; see the sketch after this answer for also stopping and restarting the service together with the VPN unit. You will need to create the vpn.service unit file and specify the necessary details for managing the VPN connection.
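A sketch of the dependency wiring (vpn.service is a placeholder for whatever unit actually manages the connection; with NordVPN's own daemon, connection state is not a unit state, so treating it as one is an assumption):

[Unit]
Description=qBittorrent-nox service
# Start only after the VPN unit is up:
After=vpn.service
# Stop this service whenever the VPN unit stops:
BindsTo=vpn.service
# Propagate restarts of the VPN unit to this service:
PartOf=vpn.service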
Ubuntu Server how to start a service only when VPN is connected and restart it every time i got disconnected
I created a test server where I have a service that needs to be started on boot after I am connected to NordVPN. Anyway, I found that if I also get disconnected, the service needs to be restarted after the connection to the VPN is restored. Can you help me with this? Thanks a lot. I created a service and delayed its start at boot:
[Unit]
Description=qBittorrent-nox service
Documentation=man:qbittorrent-nox(1)
Wants=network-online.target
After=network-online.target nss-lookup.target

[Service]
ExecStartPre=/bin/sleep 60
# if you have systemd < 240 (Ubuntu 18.10 and earlier, for example), you probably want to use Type=>
Type=exec
# change user as needed
User=root
# The -d flag should not be used in this setup
ExecStart=/usr/bin/qbittorrent-nox
Restart=on-failure
RestartSec=1s
# uncomment this for versions of qBittorrent < 4.2.0 to set the maximum number of open files to unl>
#LimitNOFILE=infinity
# uncomment this to use "Network interface" and/or "Optional IP address to bind to" options
# without this binding will fail and qBittorrent's traffic will go through the default route
# AmbientCapabilities=CAP_NET_RAW

[Install]
WantedBy=multi-user.target
[ "The Restart directive specifies the conditions under which the service should be restarted. The default value for this directive is on-failure, which means that the service will only be restarted if it exits with a non-zero exit code. In your case, you can modify this directive to specify that the service should be restarted whenever the VPN connection is restored.\nHere is an example of how you could modify the Restart directive in your unit file:\n[Service]\n...\nRestart=on-failure-or-vpn-restored\n\nWith this configuration, your service will be restarted whenever it exits with a non-zero exit code, or whenever the VPN connection is restored.\nNote that you will need to create a separate unit file to manage the VPN connection and specify this unit file as a dependency for your service's unit file. This will ensure that the service is only started after the VPN connection is established.\nFor example, you could add the following lines to your service's unit file to make it dependent on a VPN connection:\n[Unit]\n...\nAfter=vpn.service\nWants=vpn.service\n\nThis will ensure that your service is only started after the vpn.service unit has been started successfully. You will need to create the vpn.service unit file and specify the necessary details for managing the VPN connection.\n" ]
[ 0 ]
[]
[]
[ "ubuntu_server" ]
stackoverflow_0074666970_ubuntu_server.txt
Q: Java Writing Data to Excel - WorkbookFactory I am trying to add some data to an already existing file which I created with the copyFileNIO(fromFile, toFile) method. To add data to the already existing file I am using this code block:
try {
    copyFileNIO(fromFile, toFile);
    System.out.println("Copy file is done.");

    // Creating file object of existing excel file
    File xlsxFile = new File(toFile);
    System.out.println("ok");
    // Creating input stream
    InputStream inputStream = new FileInputStream(xlsxFile);
    System.out.println("okkk");
    // Creating workbook from input stream
    Workbook wb = WorkbookFactory.create(inputStream);
    System.out.println("okkkk");
    // Reading first sheet of excel file
    Sheet sheet = wb.getSheetAt(0);
    // Getting age cell of first row from the sheet
    Cell cell = sheet.getRow(1).getCell(3);
    // Updating the cell value with new data
    cell.setCellValue(30);
} catch (IOException e) {
    e.printStackTrace();
}
System.out.println("Copy file is done.");
}

However, Workbook is giving an error; I also tried XSSF and it's not working. I do not know what causes this. You can see the lambda$7 exception: at application.Sbt.lambda$7(Sbt.java:492) -> which leads to Workbook wb = WorkbookFactory.create(inputStream); I added some System.out.println calls, as can be seen in the code; the output of these is
Copy file is done.
ok
okkk
Exception ...

How can I solve this problem? Thank you. I confirmed my file exists and the path is correct. Changed the input stream and workbook to -> Workbook wb = WorkbookFactory.create(new File(toFile)); Still the same error. Full error message and stack trace:
at org.apache.poi.poifs.filesystem.FileMagic.valueOf(FileMagic.java:177)
at org.apache.poi.ss.usermodel.WorkbookFactory.create(WorkbookFactory.java:309)
at org.apache.poi.ss.usermodel.WorkbookFactory.create(WorkbookFactory.java:277)
at org.apache.poi.ss.usermodel.WorkbookFactory.create(WorkbookFactory.java:255)
at application.Sbt.lambda$7(Sbt.java:491)

Apache POI -> 5.2.3 A: It looks like the WorkbookFactory.create() method is throwing an exception when it is called on line 492 of your code. This means that there is something wrong with the input file that is being passed to the create() method. The most common cause of this error is that the input file is not a valid Excel file.
There are a few different things you can try to solve this problem:
Make sure that the toFile variable points to a valid Excel file that can be read by the WorkbookFactory.create() method. You can do this by checking the file's path and filename and verifying that it is correct.
Check what POI actually detects in the file. The stack trace ends in FileMagic.valueOf(), which fails when the first bytes of the file match no known workbook format; an empty or partially copied file is the usual culprit after a manual copy routine. (Note that there is no WorkbookFactory.load() method in POI; create() already accepts a File directly, as you tried.)
If the input file is not a valid Excel file, try using a different file. If the file was created using the copyFileNIO() method, make sure that the fromFile variable points to a valid Excel file.
If you continue to have problems, you can try to catch the exception that is being thrown by the WorkbookFactory.create() method and print out its stack trace. This can give you more information about what is causing the error and help you to debug the problem.
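A quick diagnostic sketch along those lines (file name taken from the question; a healthy .xlsx should report a non-zero size and FileMagic.OOXML):

import java.io.File;
import java.io.IOException;
import org.apache.poi.poifs.filesystem.FileMagic;

static void inspect(String toFile) throws IOException {
    File xlsxFile = new File(toFile);
    // 0 bytes here would mean copyFileNIO never wrote any content.
    System.out.println("size  = " + xlsxFile.length());
    // UNKNOWN here means the bytes are not a recognizable workbook.
    System.out.println("magic = " + FileMagic.valueOf(xlsxFile));
}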
Java Writing Data to Excel - WorkbookFactory
I am trying to add some data to an already existing file which I created with the copyFileNIO(fromFile, toFile) method. To add data to the already existing file I am using this code block:
try {
    copyFileNIO(fromFile, toFile);
    System.out.println("Copy file is done.");

    // Creating file object of existing excel file
    File xlsxFile = new File(toFile);
    System.out.println("ok");
    // Creating input stream
    InputStream inputStream = new FileInputStream(xlsxFile);
    System.out.println("okkk");
    // Creating workbook from input stream
    Workbook wb = WorkbookFactory.create(inputStream);
    System.out.println("okkkk");
    // Reading first sheet of excel file
    Sheet sheet = wb.getSheetAt(0);
    // Getting age cell of first row from the sheet
    Cell cell = sheet.getRow(1).getCell(3);
    // Updating the cell value with new data
    cell.setCellValue(30);
} catch (IOException e) {
    e.printStackTrace();
}
System.out.println("Copy file is done.");
}

However, Workbook is giving an error; I also tried XSSF and it's not working. I do not know what causes this. You can see the lambda$7 exception: at application.Sbt.lambda$7(Sbt.java:492) -> which leads to Workbook wb = WorkbookFactory.create(inputStream); I added some System.out.println calls, as can be seen in the code; the output of these is
Copy file is done.
ok
okkk
Exception ...

How can I solve this problem? Thank you. I confirmed my file exists and the path is correct. Changed the input stream and workbook to -> Workbook wb = WorkbookFactory.create(new File(toFile)); Still the same error. Full error message and stack trace:
at org.apache.poi.poifs.filesystem.FileMagic.valueOf(FileMagic.java:177)
at org.apache.poi.ss.usermodel.WorkbookFactory.create(WorkbookFactory.java:309)
at org.apache.poi.ss.usermodel.WorkbookFactory.create(WorkbookFactory.java:277)
at org.apache.poi.ss.usermodel.WorkbookFactory.create(WorkbookFactory.java:255)
at application.Sbt.lambda$7(Sbt.java:491)

Apache POI -> 5.2.3
[ "It looks like the WorkbookFactory.create() method is throwing an exception when it is called on line 492 of your code. This means that there is something wrong with the input file that is being passed to the create() method. The most common cause of this error is that the input file is not a valid Excel file.\nThere are a few different things you can try to solve this problem:\nMake sure that the toFile variable points to a valid Excel file that can be read by the WorkbookFactory.create() method. You can do this by checking the file's path and filename and verifying that it is correct.\nTry using a different method to read the Excel file. Instead of using the WorkbookFactory.create() method, you can try using the WorkbookFactory.load() method, which allows you to specify a File object instead of an InputStream. This method may be more forgiving if the input file is not in the exact format that WorkbookFactory.create() expects.\nIf the input file is not a valid Excel file, try using a different file. If the file was created using the copyFileNIO() method, make sure that the fromFile variable points to a valid Excel file.\nIf you continue to have problems, you can try to catch the exception that is being thrown by the WorkbookFactory.create() method and print out its stack trace. This can give you more information about what is causing the error and help you to debug the problem.\n" ]
[ 0 ]
[]
[]
[ "java" ]
stackoverflow_0074672526_java.txt
Q: How to use tokens from 2 time range inputs in single Splunk dashboard query? I'm using Splunk classic dashboards where I have 2 time range inputs. I want to compare data for 2 time frames in a single table. Essentially, I want to perform a query which counts errors by type for period A and B, then join the searches by error type so that I can see how many errors of each type there were in period A as opposed to period B. I added a panel as follows, because I want to use tokens from both time inputs for the query:
(index=myindex) earliest="$runATimeInput.earliest$" latest="$runATimeInput.latest$" environment="$runAEnvironment$" level=ERROR
| spath input=message
| stats count by logIdentifier
| sort count desc
| join left=L right=R where L.logIdentifier = R.logIdentifier
[| search (index=myindex) earliest="$runBTimeInput.earliest$" latest="$runBTimeInput.latest$" environment="$runBEnvironment$" level=ERROR
| spath input=message
| stats count by logIdentifier
]

The problem is that the query doesn't return any results although it should. The main query returns results:
(index=myindex) earliest="$runATimeInput.earliest$" latest="$runATimeInput.latest$" environment="$runAEnvironment$" level=ERROR
| spath input=message
| stats count by logIdentifier
| sort count desc

However, the subsearch query doesn't return any results (although a separate search for the same period in a new tab returns results):
[| search (index=myindex) earliest="$runBTimeInput.earliest$" latest="$runBTimeInput.latest$" environment="$runBEnvironment$" level=ERROR
| spath input=message
| stats count by logIdentifier
]

When I click on Run Search in the Splunk panel in order to open the search in a new tab, I see strange values for the earliest/latest tokens. For the main query the values are: earliest="1669500000" latest="1669506493.677" where 1669500000 is Tue Jan 20 1970 09:45:00 and 1669506493.677 is Sun Nov 27 2022 01:48:13, whereas the timeframe for period 1 was Sun Nov 27 2022 00:00:00 - Sun Nov 27 2022 01:48:13. That being said, the main query works and respects the original time frame. The values for the second query are earliest="1669813200" latest="1669816444.909" where 1669813200 is Tue Jan 20 1970 09:45:00 and 1669816444.909 is Wed Nov 30 2022 15:54:04, whereas the period 2 timeframe was Wed Nov 30 2022 15:00:04 - Wed Nov 30 2022 15:54:04. Am I doing something wrong in the panel settings or the query? Or maybe there's another way to do this in Splunk? 
Below is the dashboard XML:
<form>
 <label>My Dashboard</label>
 <description>My Dashboard</description>
 <fieldset submitButton="false" autoRun="true">
 <input type="time" token="runATimeInput" searchWhenChanged="true">
 <label>Run A</label>
 <default>
 <earliest>-24h@h</earliest>
 <latest>now</latest>
 </default>
 </input>
 <input type="dropdown" token="runAEnvironment" searchWhenChanged="true">
 <label>Run A Environment</label>
 <choice value="prod">prod</choice>
 <default>prod</default>
 </input>
 <input type="time" token="runBTimeInput" searchWhenChanged="true">
 <label>Run B</label>
 <default>
 <earliest>-24h@h</earliest>
 <latest>now</latest>
 </default>
 </input>
 <input type="dropdown" token="runBEnvironment" searchWhenChanged="true">
 <label>Run B Environment</label>
 <choice value="prod">prod</choice>
 <default>prod</default>
 </input>
 </fieldset>
 <row>
 <panel>
 <title>Top Exceptions</title>
 <table>
 <title>Top Exceptions</title>
 <search>
 <query>(index=distapps) earliest="$runATimeInput.earliest$" latest="$runATimeInput.latest$" environment="$runAEnvironment$" level=ERROR | spath input=message | stats count by logIdentifier | sort count desc | join left=L right=R where L.logIdentifier = R.logIdentifier [| search (index=myindex) earliest="$runBTimeInput.earliest$" latest="$runBTimeInput.latest$" environment="$runBEnvironment$" level=ERROR | spath input=message | stats count by logIdentifier ]</query>
 <earliest>$runATimeInput.earliest$</earliest>
 <latest>$runBTimeInput.latest$</latest>
 </search>
 <option name="drilldown">none</option>
 <option name="refresh.display">progressbar</option>
 </table>
 </panel>
 </row>
</form>

A: Don't use any tokens or time selector on the panel itself. You should be able to reference your two time tokens' .earliest and .latest just fine in any searches on the dashboard.
A: Here's a test dashboard I created that uses two timepickers. It produces results for both time periods. How is yours different? Could it be the count field is used in both the main and subsearches?
<form version="1.1">
 <label>test</label>
 <fieldset submitButton="false">
 <input type="time" token="runATimeInput">
 <label>A</label>
 <default>
 <earliest>-24h@h</earliest>
 <latest>now</latest>
 </default>
 </input>
 <input type="time" token="runBTimeInput">
 <label>B</label>
 <default>
 <earliest>-48h@h</earliest>
 <latest>-24h@h</latest>
 </default>
 </input>
 </fieldset>
 <row>
 <panel>
 <table>
 <search>
 <query>(index=_internal) earliest="$runATimeInput.earliest$" latest="$runATimeInput.latest$"
| stats count as countA by component
| join component
 [| search (index=_internal) earliest="$runBTimeInput.earliest$" latest="$runBTimeInput.latest$"
 | stats count as countB by component ]</query>
 <earliest>$runATimeInput.earliest$</earliest>
 <latest>$runATimeInput.latest$</latest>
 <sampleRatio>1</sampleRatio>
 </search>
 <option name="drilldown">none</option>
 <option name="refresh.display">progressbar</option>
 </table>
 </panel>
 </row>
</form>
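Following the second answer's hint about the colliding count field, the asker's query could be adapted like this (a sketch, untested; renaming the fields also makes the A/B columns readable):

(index=myindex) earliest="$runATimeInput.earliest$" latest="$runATimeInput.latest$" environment="$runAEnvironment$" level=ERROR
| spath input=message
| stats count as countA by logIdentifier
| join logIdentifier
    [ search (index=myindex) earliest="$runBTimeInput.earliest$" latest="$runBTimeInput.latest$" environment="$runBEnvironment$" level=ERROR
    | spath input=message
    | stats count as countB by logIdentifier ]
| sort countA desc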
How to use tokens from 2 time range inputs in a single Splunk dashboard query?
I'm using Splunk classic dashboards where I have 2 time range inputs. I want to compare data for 2 time frames in a single table. Essentially, I want to perform a query which counts errors by type for periods A and B, then join the searches by error type so that I can see how many errors of each type there were in period A as opposed to period B. I added a panel as follows, because I want to use tokens from both time inputs for the query: (index=myindex) earliest="$runATimeInput.earliest$" latest="$runATimeInput.latest$" environment="$runAEnvironment$" level=ERROR | spath input=message | stats count by logIdentifier | sort count desc | join left=L right=R where L.logIdentifier = R.logIdentifier [| search (index=myindex) earliest="$runBTimeInput.earliest$" latest="$runBTimeInput.latest$" environment="$runBEnvironment$" level=ERROR | spath input=message | stats count by logIdentifier ] The problem is that the query doesn't return any results although it should. The main query returns results: (index=myindex) earliest="$runATimeInput.earliest$" latest="$runATimeInput.latest$" environment="$runAEnvironment$" level=ERROR | spath input=message | stats count by logIdentifier | sort count desc However, the subsearch query doesn't return any results (although a separate search for the same period in a new tab returns results): [| search (index=myindex) earliest="$runBTimeInput.earliest$" latest="$runBTimeInput.latest$" environment="$runBEnvironment$" level=ERROR | spath input=message | stats count by logIdentifier ] When I click on Run Search in the Splunk panel in order to open the search in a new tab, I see strange values for the earliest/latest tokens. For the main query the values are: earliest="1669500000" latest="1669506493.677" where 1669500000 is Tue Jan 20 1970 09:45:00 and 1669506493.677 is Sun Nov 27 2022 01:48:13, whereas the timeframe for period 1 was Sun Nov 27 2022 00:00:00 - Sun Nov 27 2022 01:48:13. That being said, the main query works and respects the original time frame. The values for the second query are earliest="1669813200" latest="1669816444.909" where 1669813200 is Tue Jan 20 1970 09:45:00 and 1669816444.909 is Wed Nov 30 2022 15:54:04, whereas the period 2 timeframe was Wed Nov 30 2022 15:00:04 - Wed Nov 30 2022 15:54:04. Am I doing something wrong in the panel settings or the query? Or maybe there's another way to do this in Splunk? 
Below is the dashboard XML: <form> <label>My Dashboard</label> <description>My Dashboard</description> <fieldset submitButton="false" autoRun="true"> <input type="time" token="runATimeInput" searchWhenChanged="true"> <label>Run A</label> <default> <earliest>-24h@h</earliest> <latest>now</latest> </default> </input> <input type="dropdown" token="runAEnvironment" searchWhenChanged="true"> <label>Run A Environment</label> <choice value="prod">prod</choice> <default>prod</default> </input> <input type="time" token="runBTimeInput" searchWhenChanged="true"> <label>Run B</label> <default> <earliest>-24h@h</earliest> <latest>now</latest> </default> </input> <input type="dropdown" token="runBEnvironment" searchWhenChanged="true"> <label>Run B Environment</label> <choice value="prod">prod</choice> <default>prod</default> </input> </fieldset> <row> <panel> <title>Top Exceptions</title> <table> <title>Top Exceptions</title> <search> <query>(index=distapps) earliest="$runATimeInput.earliest$" latest="$runATimeInput.latest$" environment="$runAEnvironment$" level=ERROR | spath input=message | stats count by logIdentifier | sort count desc | join left=L right=R where L.logIdentifier = R.logIdentifier [| search (index=myindex) earliest="$runBTimeInput.earliest$" latest="$runBTimeInput.latest$" environment="$runBEnvironment$" level=ERROR | spath input=message | stats count by logIdentifier ]</query> <earliest>$runATimeInput.earliest$</earliest> <latest>$runBTimeInput.latest$</latest> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> </row> </form>
[ "Don't use any tokens or time selector on the panel itself\nYou should be able to reference your two time tokens' .earliest and .latest just fine in any searches on the dashboard\n", "Here's a test dashboard I created that uses two timepickers. It produces results for both time periods. How is yours different? Could it be the count field is used in both the main and subsearches?\n<form version=\"1.1\">\n <label>test</label>\n <fieldset submitButton=\"false\">\n <input type=\"time\" token=\"runATimeInput\">\n <label>A</label>\n <default>\n <earliest>-24h@h</earliest>\n <latest>now</latest>\n </default>\n </input>\n <input type=\"time\" token=\"runBTimeInput\">\n <label>B</label>\n <default>\n <earliest>-48h@h</earliest>\n <latest>-24h@h</latest>\n </default>\n </input>\n </fieldset>\n <row>\n <panel>\n <table>\n <search>\n <query>(index=_internal) earliest=\"$runATimeInput.earliest$\" latest=\"$runATimeInput.latest$\"\n| stats count as countA by component \n| join component [| search (index=_internal) earliest=\"$runBTimeInput.earliest$\" latest=\"$runBTimeInput.latest$\" \n | stats count as countB by component ]</query>\n <earliest>$runATimeInput.earliest$</earliest>\n <latest>$runATimeInput.latest$</latest>\n <sampleRatio>1</sampleRatio>\n </search>\n <option name=\"drilldown\">none</option>\n <option name=\"refresh.display\">progressbar</option>\n </table>\n </panel>\n </row>\n</form>\n\n" ]
[ 0, 0 ]
[]
[]
[ "splunk", "splunk_dashboard", "splunk_query" ]
stackoverflow_0074657676_splunk_splunk_dashboard_splunk_query.txt
Q: Flutter crossAxisAlignment vs mainAxisAlignment I'm confused about crossAxisAlignment and mainAxisAlignment. Can anyone please explain it in simple words? A: For Row: mainAxisAlignment = Horizontal Axis crossAxisAlignment = Vertical Axis For Column: mainAxisAlignment = Vertical Axis crossAxisAlignment = Horizontal Axis Image source A: These two pictures clearly show the meaning of MainAxisAlignment and CrossAxisAlignment. (Pictures are from the network) A: Row/Column are associated with an axis: Horizontal for Row Vertical for Column mainAxisAlignment is how items are aligned on that axis. crossAxisAlignment is how items are aligned on the other axis. A: When you use a Row, its children are laid out in a row, that is, horizontally. So a Row's main axis is horizontal. Using mainAxisAlignment in a Row lets you align the row's children horizontally (e.g. left, right). The cross axis to a Row's main axis is vertical. So using crossAxisAlignment in a Row lets you define how its children are aligned vertically. In a Column, it's the opposite. The children of a column are laid out vertically, from top to bottom (per default). So its main axis is vertical. This means that using mainAxisAlignment in a Column aligns its children vertically (e.g. top, bottom) and crossAxisAlignment defines how the children are aligned horizontally in that Column. A: It depends on how you want to place your content on screen; we need to use the mainAxis & crossAxis alignment properties accordingly. For more basic layout concepts: https://flutter.dev/docs/codelabs/layout-basics A: In a column, to center (or align) vertically, mainAxisAlignment is used; to center (or align) horizontally, crossAxisAlignment is used. In a row, to center (or align) horizontally, mainAxisAlignment is used; to center (or align) vertically, crossAxisAlignment is used. A: In a Row, main axis alignment runs horizontally and cross axis alignment runs vertically. In a Column, main axis alignment runs vertically and cross axis alignment runs horizontally.
Flutter crossAxisAlignment vs mainAxisAlignment
I'm confused about crossAxisAlignment and mainAxisAlignment. Can anyone please explain it in simple words?
[ "For Row:\nmainAxisAlignment = Horizontal Axis\ncrossAxisAlignment = Vertical Axis\n\n\nFor Column:\nmainAxisAlignment = Vertical Axis\ncrossAxisAlignment = Horizontal Axis\n\nImage source\n", "This two pictures are clear to show the meaning of MainAxisAlignment and CrossAxisAlignment.\n\n\n(Pictures are from Network)\n", "Row/Column are associated to an axis:\n\nHorizontal for Row\nVertical for Column\n\nmainAxisAlignment is how items are aligned on that axis. crossAxisAlignment is how items are aligned on the other axis.\n", "When you use a Row, its children are laid out in a row, which is horizontally. So a Row's main axis is horizontal. \nUsing mainAxisAlignment in a Row lets you align the row's children horizontally (e.g. left, right). \nThe cross axis to a Row's main axis is vertical. So using crossAxisAlignment in a Row lets you define, how its children are aligned vertically.\nIn a Column, it's the opposite. The children of a column are laid out vertically, from top to bottom (per default). So its main axis is vertical. This means, using mainAxisAlignment in a Column aligns its children vertically (e.g. top, bottom) and crossAxisAlignment defines how the children are aligned horizontally in that Column.\n", "\nDepends on you how you wanna put your content on screen. we need to use mainAxis & CrossAxis alignment properties.\nFor more basic layout concepts: https://flutter.dev/docs/codelabs/layout-basics\n", "In a column, \n\nto center(or align) vertically, mainAxisAlignment is used.\nto center(or align) horizontally, crossAxisAlignment is used.\n\nIn a row, \n\nto center(or align) horizontally, mainAxisAlignment is used.\nto center(or align) vertically, crossAxisAlignment is used.\n\n", "In Row Main Axis Alignment run horizontally and Cross Axis Alignment run vertically.\nIn Column Main Axis Alignment run vertically and Cross Axis Alignment run Horizontally.\n" ]
[ 173, 13, 6, 4, 4, 2, 0 ]
[]
[]
[ "flutter", "flutter_layout" ]
stackoverflow_0053850149_flutter_flutter_layout.txt
Q: Count the number of times a word is repeated in a text file I need to write a program that prompts for the name of a text file and prints the words with the maximum and minimum frequency, along with their frequency (separated by a space). This is my text I am Sam Sam I am That Sam-I-am That Sam-I-am I do not like that Sam-I-am Do you like green eggs and ham I do not like them Sam-I-am I do not like green eggs and ham file = open(fname,'r') dict1 = [] for line in file: line = line.lower() x = line.split(' ') if x in dict1: dict1[x] += 1 else: dict1[x] = 1 Then I wanted to iterate over the keys and values and find out which one was the max and min frequency; however, up to that point my console says "TypeError: list indices must be integers or slices, not list" and I don't know what that means either. For this problem the expected result is: Max frequency: i 5 Min frequency: you 1 A: You are using a list instead of a dictionary to store the word frequencies. You can't use a list to store key-value pairs like this; you need to use a dictionary instead. Here is how you could modify your code to use a dictionary to store the word frequencies: file = open(fname,'r') word_frequencies = {} # use a dictionary to store the word frequencies for line in file: line = line.lower() words = line.split(' ') for word in words: if word in word_frequencies: word_frequencies[word] += 1 else: word_frequencies[word] = 1 Then to iterate over the keys and find the min and max frequency: # iterate over the keys and values in the word_frequencies dictionary # and find the word with the max and min frequency max_word = None min_word = None max_frequency = 0 min_frequency = float('inf') for word, frequency in word_frequencies.items(): if frequency > max_frequency: max_word = word max_frequency = frequency if frequency < min_frequency: min_word = word min_frequency = frequency Print the results: print("Max frequency:", max_word, max_frequency) print("Min frequency:", min_word, min_frequency)
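As a compact alternative to the hand-rolled dictionary in the answer above, Python's collections.Counter does the counting and the lookup in a few lines. A minimal sketch (assuming the file exists; ties between equally frequent words are resolved arbitrarily, and note that split() without an argument also discards the empty tokens that split(' ') can produce):

from collections import Counter

fname = input("Enter file name: ")
with open(fname) as fh:
    counts = Counter(fh.read().lower().split())

# pick the entries with the largest and smallest counts
max_word, max_freq = max(counts.items(), key=lambda kv: kv[1])
min_word, min_freq = min(counts.items(), key=lambda kv: kv[1])
print("Max frequency:", max_word, max_freq)
print("Min frequency:", min_word, min_freq)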
Count the number of times a word is repeated in a text file
I need to write a program that prompts for the name of a text file and prints the words with the maximum and minimum frequency, along with their frequency (separated by a space). This is my text I am Sam Sam I am That Sam-I-am That Sam-I-am I do not like that Sam-I-am Do you like green eggs and ham I do not like them Sam-I-am I do not like green eggs and ham file = open(fname,'r') dict1 = [] for line in file: line = line.lower() x = line.split(' ') if x in dict1: dict1[x] += 1 else: dict1[x] = 1 Then I wanted to iterate over the keys and values and find out which one was the max and min frequency however up to that point my console says "TypeError: list indices must be integers or slices, not list" I don't know what that means either for this problem the expected result is: Max frequency: i 5 Min frequency: you 1
[ "you are using a list instead of a dictionary to store the word frequencies. You can't use a list to store key-value pairs like this, you need to use a dictionary instead. Here is how you could modify your code to use a dictionary to store the word frequencies:\nfile = open(fname,'r')\nword_frequencies = {} # use a dictionary to store the word frequencies\n\nfor line in file:\n line = line.lower()\n words = line.split(' ')\n for word in words:\n if word in word_frequencies:\n word_frequencies[word] += 1\n else:\n word_frequencies[word] = 1\n\nThen to iterate over the keys and find the min and max frequency\n# iterate over the keys and values in the word_frequencies dictionary\n# and find the word with the max and min frequency\nmax_word = None\nmin_word = None\nmax_frequency = 0\nmin_frequency = float('inf')\n\nfor word, frequency in word_frequencies.items():\n if frequency > max_frequency:\n max_word = word\n max_frequency = frequency\n if frequency < min_frequency:\n min_word = word\n min_frequency = frequency\n\nPrint the results\nprint(\"Max frequency:\", max_word, max_frequency)\nprint(\"Min frequency:\", min_word, min_frequency)\n\n" ]
[ 0 ]
[]
[]
[ "list", "loops", "python_3.x", "text_files" ]
stackoverflow_0074672514_list_loops_python_3.x_text_files.txt
Q: Eslint error causing create-react-app failed to compile I am building a website using create-react-app and have just installed eslint to it. For some reason what was supposed to be shown as eslint warnings are showing up as errors and causing npm run start to fail. How can I bypass this issue and have them shown as warnings like before ? My .eslintrc.js env: { browser: true, es6: true, jest: true, }, extends: [ 'airbnb-typescript', 'plugin:@typescript-eslint/recommended', 'prettier/react', 'prettier/@typescript-eslint', 'plugin:prettier/recommended', ], globals: { Atomics: 'readonly', SharedArrayBuffer: 'readonly', }, parser: '@typescript-eslint/parser', parserOptions: { ecmaFeatures: { jsx: true, }, ecmaVersion: 2018, sourceType: 'module', project: './tsconfig.json', }, plugins: ['react', '@typescript-eslint'], rules: { 'class-methods-use-this': 'off', 'additional-rule': 'warn', }, ignorePatterns: ['**/node_modules/*', '**/build/*', 'config-overrides.js'], }; My package.json "name": "device-protection-renewal-web", "version": "0.1.0", "private": true, "dependencies": { "@testing-library/jest-dom": "^5.11.5", "@testing-library/react": "^11.1.0", "@testing-library/user-event": "^12.1.10", "@types/jest": "^26.0.15", "@types/node": "^12.19.3", "@types/react": "^16.9.55", "@types/react-dom": "^16.9.9", "babel-polyfill": "^6.26.0", "core-js": "^3.6.5", "i18next": "^19.8.3", "react": "^17.0.1", "react-app-polyfill": "^2.0.0", "react-dom": "^17.0.1", "react-i18next": "^11.7.3", "react-scripts": "4.0.0", "web-vitals": "^0.2.4" }, "scripts": { "start": "react-scripts start", "build": "react-scripts build", "test": "react-scripts test", "eject": "react-scripts eject" }, "eslintConfig": { "extends": [ "react-app", "react-app/jest" ] }, "browserslist": { "production": [ ">0.2%", "not dead", "not op_mini all", "ie >= 9" ], "development": [ "last 1 chrome version", "last 1 firefox version", "last 1 safari version", "ie >= 9" ] }, "devDependencies": { "@babel/plugin-transform-arrow-functions": "^7.12.1", "@typescript-eslint/eslint-plugin": "^4.6.1", "@typescript-eslint/parser": "^4.6.1", "eslint": "^7.11.0", "eslint-config-airbnb-typescript": "^9.0.0", "eslint-config-prettier": "^6.11.0", "eslint-plugin-import": "^2.22.0", "eslint-plugin-jsx-a11y": "^6.3.1", "eslint-plugin-prettier": "^3.1.4", "eslint-plugin-react": "^7.20.6", "eslint-plugin-react-hooks": "^4.1.0", "prettier": "^2.1.2", "typescript": "^4.0.5" } } [1]: https://i.stack.imgur.com/WUKcz.png A: I assume that you have installed ESLint using npm install eslint --save-dev and defined a default configuration with node_modules/.bin/eslint --init answering the questions in the prompt. I noticed that in your .eslintrc.js file, the eslint:recommended entry is missing in the extends option: extends: [ 'eslint:recommended', 'airbnb-typescript', 'plugin:@typescript-eslint/recommended', 'prettier/react', 'prettier/@typescript-eslint', 'plugin:prettier/recommended', ], Also, in the package.json it is recommended that ESLint have its own script that you run using npm run lint, combined with an eslint-plugin in your favorite code editor: { "scripts": { "start": "react-scripts start", // ... "lint": "eslint ." }, } Probably you will build your application at some point, so you should create a .eslintignore file and inside of it add build, since files in the build directory also get checked by default when the command is run. 
Source: https://fullstackopen.com/en/part3/validation_and_es_lint#lint A: This part in your package.json is unnecessary; since you have an eslint config file, it should be moved to .eslintrc.js. "eslintConfig": { "extends": [ "react-app", "react-app/jest" ] } Which would then turn into this: extends: [ 'react-app', 'react-app/jest', 'airbnb-typescript', 'plugin:@typescript-eslint/recommended', 'prettier/react', 'prettier/@typescript-eslint', 'plugin:prettier/recommended' ] However, in the latest versions of the eslint-config-react-app plugin at this time (7.31.11), I'm getting a jest plugin conflict error with a project that worked prior to updating unless I remove react-app/jest from my eslint config's extends section. Which is how I ended up here, currently trying to find out what caused this. Update: So my issue was caused because eslint-config-react-app depends on eslint-plugin-jest^25.7.0 and I was using the latest eslint-plugin-jest^27.1.6. I removed my package.json's dependency and will use the version included with eslint-config-react-app so there aren't any conflicts there, but if I need features of the newer plugin version, a quick npm i -D on it, changing eslint configuration from automatic to manual and specifying the path to the local node_modules version and .eslintrc.js should work as well, aside from any conflicts with the config plugin.
Eslint error causing create-react-app failed to compile
I am building a website using create-react-app and have just installed eslint to it. For some reason what was supposed to be shown as eslint warnings are showing up as errors and causing npm run start to fail. How can I bypass this issue and have them shown as warnings like before ? My .eslintrc.js env: { browser: true, es6: true, jest: true, }, extends: [ 'airbnb-typescript', 'plugin:@typescript-eslint/recommended', 'prettier/react', 'prettier/@typescript-eslint', 'plugin:prettier/recommended', ], globals: { Atomics: 'readonly', SharedArrayBuffer: 'readonly', }, parser: '@typescript-eslint/parser', parserOptions: { ecmaFeatures: { jsx: true, }, ecmaVersion: 2018, sourceType: 'module', project: './tsconfig.json', }, plugins: ['react', '@typescript-eslint'], rules: { 'class-methods-use-this': 'off', 'additional-rule': 'warn', }, ignorePatterns: ['**/node_modules/*', '**/build/*', 'config-overrides.js'], }; My package.json "name": "device-protection-renewal-web", "version": "0.1.0", "private": true, "dependencies": { "@testing-library/jest-dom": "^5.11.5", "@testing-library/react": "^11.1.0", "@testing-library/user-event": "^12.1.10", "@types/jest": "^26.0.15", "@types/node": "^12.19.3", "@types/react": "^16.9.55", "@types/react-dom": "^16.9.9", "babel-polyfill": "^6.26.0", "core-js": "^3.6.5", "i18next": "^19.8.3", "react": "^17.0.1", "react-app-polyfill": "^2.0.0", "react-dom": "^17.0.1", "react-i18next": "^11.7.3", "react-scripts": "4.0.0", "web-vitals": "^0.2.4" }, "scripts": { "start": "react-scripts start", "build": "react-scripts build", "test": "react-scripts test", "eject": "react-scripts eject" }, "eslintConfig": { "extends": [ "react-app", "react-app/jest" ] }, "browserslist": { "production": [ ">0.2%", "not dead", "not op_mini all", "ie >= 9" ], "development": [ "last 1 chrome version", "last 1 firefox version", "last 1 safari version", "ie >= 9" ] }, "devDependencies": { "@babel/plugin-transform-arrow-functions": "^7.12.1", "@typescript-eslint/eslint-plugin": "^4.6.1", "@typescript-eslint/parser": "^4.6.1", "eslint": "^7.11.0", "eslint-config-airbnb-typescript": "^9.0.0", "eslint-config-prettier": "^6.11.0", "eslint-plugin-import": "^2.22.0", "eslint-plugin-jsx-a11y": "^6.3.1", "eslint-plugin-prettier": "^3.1.4", "eslint-plugin-react": "^7.20.6", "eslint-plugin-react-hooks": "^4.1.0", "prettier": "^2.1.2", "typescript": "^4.0.5" } }``` [1]: https://i.stack.imgur.com/WUKcz.png
[ "I assume that you have installed ESLint using npm install eslint --save-dev and defined a default configuration with node_modules/.bin/eslint --init answering the questions in the prompt.\nI noticed that in your .eslintrc.js file, the ESLint settings is missing in the extends option:\nextends: [\n 'eslint:recommended',\n 'airbnb-typescript',\n 'plugin:@typescript-eslint/recommended',\n 'prettier/react',\n 'prettier/@typescript-eslint',\n 'plugin:prettier/recommended',\n ],\n\nAlso in the package.json is recommended ESLint to have its own script that you run using npm run lint and use it combined with a eslint-plugin in your favorite code editor:\n{\n \"scripts\": {\n \"start\": \"react-scripts start\",\n // ...\n \"lint\": \"eslint .\"\n },\n}\n\nProbably you will build your application at some point, so you should create a .eslintignore file and inside of it add build since files in the build directory also get checked by default when the command is ran.\nSource: https://fullstackopen.com/en/part3/validation_and_es_lint#lint\n", "This part in your package.json is unnecessary; since you have an aslant config file, it should be moved to .eslintrc.js.\n\"eslintConfig\": {\n \"extends\": [\n \"react-app\",\n \"react-app/jest\"\n ]\n}\n\nWhich would then turn into this:\nextends: [\n 'react-app',\n 'react-app/jest',\n 'airbnb-typescript',\n 'plugin:@typescript-eslint/recommended',\n 'prettier/react',\n 'prettier/@typescript-eslint',\n 'plugin:prettier/recommended'\n]\n\nHowever; in the latest versions of the eslint-config-react-app plugin at this time (7.31.11), I'm getting a jest plugin conflict error with a project that worked prior to updating unless I remove react-app/jest from my eslint configs extends section.\nWhich is how I ended up here, currently trying to find out what caused this.\nUpdate: So my issue was caused because eslint-config-react-app depends on eslint-plugin-jest^25.7.0 and I was using the latest eslint-plugin-jest^27.1.6. I removed my package.json's dependency and will use the version included with eslint-config-react-app so there aren't any conflicts there, but if I need features of the newer plugin version, a quick npm i -D on it, changing eslint configuration from automatic to manual and specifying the path to the local node_modules version and .eslintrc.js should work as well; aside from any conflicts with the config plugin.\n" ]
[ 5, 0 ]
[]
[]
[ "eslint", "eslintrc", "reactjs", "typescript_eslint" ]
stackoverflow_0064657876_eslint_eslintrc_reactjs_typescript_eslint.txt
Q: Cannot call recognize in TesseractOcr react-native I am implementing a feature that gets text from an image, and I used "react-native-tesseract-ocr". Although I read the documentation and followed it, I still get the error. When I print TesseractOcr, it is null. I cannot call recognize in TesseractOcr (TesseractOcr.recognize) (https://i.stack.imgur.com/L2kQn.png) How can I fix it? (https://i.stack.imgur.com/zrYDL.png) A: The problem may be that you have not linked the package $ react-native link react-native-tesseract-ocr Add the import at the start: import TesseractOcr, { LANG_ENGLISH } from 'react-native-tesseract-ocr'; Then test this function: await TesseractOcr.recognize(imageSource, LANG_ENGLISH, {});
Cannot call recognize in TesseractOcr react-native
I am implementing a feature that gets text from an image, and I used "react-native-tesseract-ocr". Although I read the documentation and followed it, I still get the error. When I print TesseractOcr, it is null. I cannot call recognize in TesseractOcr (TesseractOcr.recognize) (https://i.stack.imgur.com/L2kQn.png) How can I fix it? (https://i.stack.imgur.com/zrYDL.png)
[ "the problem may be that you have not linked the package\n$ react-native link react-native-tesseract-ocr\n\nadd the import to start\nimport TesseractOcr, { LANG_ENGLISH } from 'react-native-tesseract-ocr';\n\ntest this finction\nawait TesseractOcr.recognize(imageSource, LANG_ENGLISH, {});\n\n" ]
[ 0 ]
[]
[]
[ "orc", "react_native" ]
stackoverflow_0074672501_orc_react_native.txt
Q: Can I turn on extended regular expressions support in Vim? The characters for extended regular expressions are invaluable; is there a way to turn them on so that I don't have to escape them in my Vim regex, much like the -E flag I can pass to grep(1)? A: Do :help magic in vim and you'll see there are four levels (very magic, magic, nomagic, and very nomagic) but only the two central ones can be set globally (the default is magic, and with :set commands you can only toggle between magic and nomagic); start your RE with \v to make all the rest of it "very magic" ("all ASCII characters except '0'-'9', 'a'-'z', 'A'-'Z' and '_' have a special meaning") -- but that applies only to that one specific RE!-) A: A workaround is to remap / to prefix searches with "very magic" automatically: nnoremap / /\v vnoremap / /\v A: On the topic of wanting to do the same thing with substitutions, be aware of :sm.
Can I turn on extended regular expressions support in Vim?
The characters for extended regular expressions are invaluable; is there a way to turn them on so that I don't have to escape them in my Vim regex, much like the -E flag I can pass to grep(1)?
[ "Do :help magic in vim and you'll see there are four levels (very magic, magic, nomagic, and very nomagic) but only the two central ones can be set globally (the default is magic, and with :set commands you can only toggle between magic and nomagic); start your RE with \\v to make all the rest of it \"very magic\" (\"all ASCII characters except '0'-'9', 'a'-'z', 'A'-'Z' and '_' have a special meaning\") -- but that applies only to that one specific RE!-)\n", "A workaround is to remap / to prefix searches with \"very magic\" automatically:\nnnoremap / /\\v\nvnoremap / /\\v\n\n", "On the topic of wanting to do the same thing with substitutions, be aware of :sm.\n" ]
[ 58, 15, 0 ]
[]
[]
[ "command_line", "regex", "vim" ]
stackoverflow_0001623160_command_line_regex_vim.txt
Q: How to change hovers? I want to change the font size and color of the a when I hover over p. It is not working. Probably there is a simple solution, but I have been struggling with this for a few hours. If anyone has not-too-complicated links related to this topic, I would also be happy. <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8" /> <meta http-equiv="X-UA-Compatible" content="IE=edge" /> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> <title>Document</title> <style> p:hover div nav a { color: blue; font-size: 22px; } </style> </head> <body> <div> <nav> <p>Ceramics</p> <a href="">One</a> <a href="">Two</a> <a href="">Three</a> </nav> </div> </body> </html> A: Your CSS selector p:hover div nav a is incorrect. This would refer to an <a/> within a <nav/> within a <div/> within a <p/> that is being hovered. You can fix this with the change I made below. <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8" /> <meta http-equiv="X-UA-Compatible" content="IE=edge" /> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> <title>Document</title> <style> nav p { /* selects all <p/> within a <nav/> */ color: blue; font-size: 22px; } </style> </head> <body> <div> <nav> <p>Ceramics</p> <a href="">One</a> <a href="">Two</a> <a href="">Three</a> </nav> </div> </body> </html> A: Hi there, I have a simple solution for you. <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8" /> <meta http-equiv="X-UA-Compatible" content="IE=edge" /> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> <title>Document</title> <style> p:hover + .hv { color: blue; font-size: 22px; } </style> </head> <body> <div> <nav> <p>Ceramics</p> <div class="hv"> <a href="">One</a> <a href="">Two</a> <a href="">Three</a> </div> </nav> </div> </body> </html> A: Hi there, OK, so if I understand you correctly, this is what you want: <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8" /> <meta http-equiv="X-UA-Compatible" content="IE=edge" /> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> <title>Document</title> <style> .hv a:hover{ color: blue; font-size: 22px; } </style> </head> <body> <div> <nav> <p>Ceramics</p> <div class="hv"> <a href="">One</a> <a href="">Two</a> <a href="">Three</a> </div> </nav> </div> </body> </html> A: Try something like: p:hover, a:hover {color: blue;font-size: 22px;}
How to change hovers?
I want to change the font size and color of the a when I hover over p. It is not working. Probably there is a simple solution, but I have been struggling with this for a few hours. If anyone has not-too-complicated links related to this topic, I would also be happy. <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8" /> <meta http-equiv="X-UA-Compatible" content="IE=edge" /> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> <title>Document</title> <style> p:hover div nav a { color: blue; font-size: 22px; } </style> </head> <body> <div> <nav> <p>Ceramics</p> <a href="">One</a> <a href="">Two</a> <a href="">Three</a> </nav> </div> </body> </html>
[ "Your CSS selector p:hover div nav a is incorrect. This would refer to an <a/> within a <nav/> within a <div/> within a <p/> that is being hovered. You can fix this with the change I made below.\n\n\n<!DOCTYPE html>\n<html lang=\"en\">\n <head>\n <meta charset=\"UTF-8\" />\n <meta http-equiv=\"X-UA-Compatible\" content=\"IE=edge\" />\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\" />\n <title>Document</title>\n <style>\n nav p { /*selects all hovered <p/> within a <nav/>*/\n color: blue;\n font-size: 22px;\n }\n </style>\n </head>\n <body>\n <div>\n <nav>\n <p>Ceramics</p>\n <a href=\"\">One</a>\n <a href=\"\">Two</a>\n <a href=\"\">Three</a>\n </nav>\n </div>\n </body>\n</html>\n\n\n\n", "Hi there i get the simple solution for you .\n\n\n<!DOCTYPE html>\n<html lang=\"en\">\n <head>\n <meta charset=\"UTF-8\" />\n <meta http-equiv=\"X-UA-Compatible\" content=\"IE=edge\" />\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\" />\n <title>Document</title>\n <style>\n \n \n p:hover + .hv {\n \n color: blue;\n font-size: 22px;\n \n }\n \n </style>\n </head>\n <body>\n <div>\n <nav>\n <p>Ceramics</p>\n <div class=\"hv\">\n <a href=\"\">One</a>\n <a href=\"\">Two</a>\n <a href=\"\">Three</a>\n </div>\n </nav>\n </div>\n </body>\n</html>\n\n\n\n", "Hi there ok so if i might get you . you want to get\n\n\n<!DOCTYPE html>\n<html lang=\"en\">\n <head>\n <meta charset=\"UTF-8\" />\n <meta http-equiv=\"X-UA-Compatible\" content=\"IE=edge\" />\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\" />\n <title>Document</title>\n <style>\n \n \n .hv a:hover{\n \n color: blue;\n font-size: 22px;\n \n }\n \n \n </style>\n </head>\n <body>\n <div>\n <nav>\n <p>Ceramics</p>\n <div class=\"hv\">\n <a href=\"\">One</a>\n <a href=\"\">Two</a>\n <a href=\"\">Three</a>\n </div>\n </nav>\n </div>\n </body>\n</html>\n\n\n\n", "Try something like:\np:hover, a:hover {color: blue;font-size: 22px;}\n" ]
[ 0, 0, 0, 0 ]
[]
[]
[ "css", "hover", "html" ]
stackoverflow_0074671432_css_hover_html.txt
Q: pset2 - Caesar. Outputs match but fails check50 Sorry, I know this has been asked before but I have read all the answers and nothing works! Please help. When I check actual and expected outputs, they match, but in check50 it gives an error message. Here is the link for check50 results: https://submit.cs50.io/check50/6d939efb8a55e3fadec1c60952311f6198cd0eb0 #include <cs50.h> #include <stdio.h> #include <ctype.h> #include <stdlib.h> #include <string.h> char crypt(int k,char w) { if ('a'<= w && w <='z') { return (((w-'a')+ k)%26+'a'); } else if ('A' <= w && w <= 'Z') { return (((w -'A') + k) % 26 + 'A'); } else { return (w); } } int main(int argc, string argv[]) { //input check if (argc != 2) { printf("Usage: ./caesar key\n"); return 1; } int lngg = strlen(argv[1]); for (int i = 0; i < lngg; i++) { if (isdigit(argv[1][i]) == 0) { printf("Usage: ./caesar key\n"); return 1; } } int key = atoi(argv[1])%26; //input string plain = get_string("plaintext: "); //crypt starts printf("ciphertext: "); int lng = strlen(plain)+1; for (int a = 0; a < lng ;a++) { printf("%c", crypt(key, plain[a])); } printf("\n"); } Hi, when I look at the expected and actual outputs, everything seems perfect, but it errors. A: The problem is that you are also printing the terminating null character to standard output using printf. This character is not printable, so you cannot see it, but check50 does detect it and therefore reports that your program failed. The loop for(int a=0; a<lng ;a++) { printf("%c",crypt(key,plain[a])); } will run for strlen(plain)+1 iterations, because that is the value to which you set the variable lng. You should instead set the value of lng to strlen(plain), in order to prevent the terminating null character from being printed. A: @w is correct. Also, code can avoid 2 passes down the string and do only 1. // int lng = strlen(plain)+1; // for (int a = 0; a < lng ;a++) for (int a = 0; plain[a] != '\0'; a++)
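For readers who want to check the rotation arithmetic outside of C, the same crypt helper can be sketched in Python (illustrative only, not part of the CS50 submission; the plaintext below is made up, and Python strings carry no terminating null, which is exactly the extra byte the original loop printed):

def crypt(k, w):
    # rotate letters by k positions, pass every other character through
    if 'a' <= w <= 'z':
        return chr((ord(w) - ord('a') + k) % 26 + ord('a'))
    if 'A' <= w <= 'Z':
        return chr((ord(w) - ord('A') + k) % 26 + ord('A'))
    return w

plain = "attack at dawn"  # hypothetical plaintext
print("ciphertext:", "".join(crypt(2, c) for c in plain))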
pset2 - Caesar. Outputs match but fails check50
Sorry, I know this has been asked before but I have read all the answers and nothing works! Please help. When I check actual and expected outputs, they match, but in check50 it gives an error message. Here is the link for check50 results: https://submit.cs50.io/check50/6d939efb8a55e3fadec1c60952311f6198cd0eb0 #include <cs50.h> #include <stdio.h> #include <ctype.h> #include <stdlib.h> #include <string.h> char crypt(int k,char w) { if ('a'<= w && w <='z') { return (((w-'a')+ k)%26+'a'); } else if ('A' <= w && w <= 'Z') { return (((w -'A') + k) % 26 + 'A'); } else { return (w); } } int main(int argc, string argv[]) { //input check if (argc != 2) { printf("Usage: ./caesar key\n"); return 1; } int lngg = strlen(argv[1]); for (int i = 0; i < lngg; i++) { if (isdigit(argv[1][i]) == 0) { printf("Usage: ./caesar key\n"); return 1; } } int key = atoi(argv[1])%26; //input string plain = get_string("plaintext: "); //crypt starts printf("ciphertext: "); int lng = strlen(plain)+1; for (int a = 0; a < lng ;a++) { printf("%c", crypt(key, plain[a])); } printf("\n"); } Hi, when I look at the expected and actual outputs, everything seems perfect, but it errors.
[ "The problem is that you are also printing the terminating null character to standard output using printf. This character is not printable, so you cannot see it, but check50 does detect it and therefore reports that your program failed.\nThe loop\nfor(int a=0; a<lng ;a++)\n{\n printf(\"%c\",crypt(key,plain[a]));\n}\n\nwill run for strlen(plain)+1 iterations, because that is the value to which you set the variable lng.\nYou should instead set the value of lng to strlen(plain), in order to prevent the terminating null character from being printed.\n", "@w is correct.\nAlso, code can avoid 2 passes down the string and do only 1.\n// int lng = strlen(plain)+1;\n// for (int a = 0; a < lng ;a++)\n\nfor (int a = 0; plain[a] != '\\0'; a++)\n\n" ]
[ 1, 0 ]
[]
[]
[ "c", "cs50" ]
stackoverflow_0074672168_c_cs50.txt
Q: self is not defined error when I am using jodit-react text editor in Next.js project self is not defined error when I use jodit-react in a Next.js project import React, { useState, useRef, useMemo } from "react"; import Dashborad from "./Dashborad"; import JoditEditor from "jodit-react"; import dynamic from "next/dynamic"; export default function edit() { const editor = useRef(); const [content, setContent] = useState(""); return ( <Dashborad> <JoditEditor ref={editor} value={content} tabIndex={1} // tabIndex of textarea onBlur={(newContent) => setContent(newContent)} // preferred to use only this option to update the content for performance reasons onChange={(newContent) => setContent(newContent)} /> </Dashborad> ); } How do I solve this error? A: This error can occur when the JoditEditor component is used in a Next.js project because Next.js uses server-side rendering, which means that the code is executed on the server rather than in the browser. The JoditEditor component relies on certain browser APIs that are not available on the server, which can cause this error to occur. To fix this error, you can use the dynamic component provided by Next.js to wrap the JoditEditor component. This will ensure that the JoditEditor component is only loaded and rendered on the client-side, after the initial render has been completed on the server. Here is an example of how you can use the dynamic component to fix the "self is not defined" error: import React, { useState, useRef, useMemo } from "react"; import Dashborad from "./Dashborad"; import dynamic from "next/dynamic"; const JoditEditor = dynamic(() => import("jodit-react"), { ssr: false, }); export default function edit() { const editor = useRef(); const [content, setContent] = useState(""); return ( <Dashborad> <JoditEditor ref={editor} value={content} tabIndex={1} // tabIndex of textarea onBlur={(newContent) => setContent(newContent)} // preferred to use only this option to update the content for performance reasons onChange={(newContent) => setContent(newContent)} /> </Dashborad> ); } By wrapping the JoditEditor component in a dynamic component and setting the ssr option to false, you can ensure that the JoditEditor component is only loaded and rendered on the client-side, which should fix the "self is not defined" error.
self is not defined error when I am using jodit-react text editor in Next.js project
self is not defined error when I use jodit-react in a Next.js project import React, { useState, useRef, useMemo } from "react"; import Dashborad from "./Dashborad"; import JoditEditor from "jodit-react"; import dynamic from "next/dynamic"; export default function edit() { const editor = useRef(); const [content, setContent] = useState(""); return ( <Dashborad> <JoditEditor ref={editor} value={content} tabIndex={1} // tabIndex of textarea onBlur={(newContent) => setContent(newContent)} // preferred to use only this option to update the content for performance reasons onChange={(newContent) => setContent(newContent)} /> </Dashborad> ); } How do I solve this error?
[ "This error can occur when the JoditEditor component is used in a Next.js project because Next.js uses server-side rendering, which means that the code is executed on the server rather than in the browser. The JoditEditor component relies on certain browser APIs that are not available on the server, which can cause this error to occur.\nTo fix this error, you can use the dynamic component provided by Next.js to wrap the JoditEditor component. This will ensure that the JoditEditor component is only loaded and rendered on the client-side, after the initial render has been completed on the server.\nHere is an example of how you can use the dynamic component to fix the \"self is not defined\" error:\nimport React, { useState, useRef, useMemo } from \"react\";\nimport Dashborad from \"./Dashborad\";\nimport dynamic from \"next/dynamic\";\n\nconst JoditEditor = dynamic(() => import(\"jodit-react\"), {\n ssr: false,\n});\n\nexport default function edit() {\n const editor = useRef();\n const [content, setContent] = useState(\"\");\n\n return (\n <Dashborad>\n <JoditEditor\n ref={editor}\n value={content}\n tabIndex={1} // tabIndex of textarea\n onBlur={(newContent) => setContent(newContent)} // preferred to use only this option to update the content for performance reasons\n onChange={(newContent) => setContent(newContent)}\n />\n </Dashborad>\n );\n}\n\nBy wrapping the JoditEditor component in a dynamic component and setting the ssr option to false, you can ensure that the JoditEditor component is only loaded and rendered on the client-side, which should fix the \"self is not defined\" error.\n" ]
[ 0 ]
[]
[]
[ "angularjs", "javascript", "next.js", "node.js", "reactjs" ]
stackoverflow_0074672539_angularjs_javascript_next.js_node.js_reactjs.txt
Q: Prolog writeln with variables I need to use writeln specifically in Prolog with a variable in it. What I am trying to make is an error that takes the first element of a list, and format does what I need almost perfectly, but again it specifically needs to be writeln. I experimented for a while and tried using '+' to concatenate the string like how it works in other languages, and when I use this writeln("ERROR: \"" + Head + "\" is invalid.") I almost succeed in what I want, and it prints ERROR: " + a + " is invalid. with the variable 'a' highlighted (another requirement) when I am trying to get ERROR: "a" is invalid. But I am unable to print it without using characters such as +, -, or | to contain the variable. I don't really understand what is going on and I haven't been able to find a reason on my own. Using string_concat twice makes the proper string, but the variable is not highlighted like it is supposed to be. A: As you already mentioned, using string_concat solves your problem as you want. String concatenation does not work like this in Prolog as it does in different languages. The reason why it still prints something when using the + while it throws without it is that the + is an infix operator and is displayed as such when using writeln because it also writes out the predicate names. It will write the + as infix even when provided in prefix form, such as in this example: writeln(+(3,2)). It does not work without the + because then you simply have three different values one after another: Quoted Atom, Variable filled with an Atom, Quoted Atom. Prolog expects a term though, so you run into a syntax error.
Prolog writeln with variables
I need to use writeln specifically in Prolog with a variable in it. What I am trying to make is an error that takes the first element of a list, and format does what I need almost perfectly, but again it specifically needs to be writeln. I experimented for a while and tried using '+' to concatenate the string like how it works in other languages, and when I use this writeln("ERROR: \"" + Head + "\" is invalid.") I almost succeed in what I want, and it prints ERROR: " + a + " is invalid. with the variable 'a' highlighted (another requirement) when I am trying to get ERROR: "a" is invalid. But I am unable to print it without using characters such as +, -, or | to contain the variable. I don't really understand what is going on and I haven't been able to find a reason on my own. Using string_concat twice makes the proper string, but the variable is not highlighted like it is supposed to be.
[ "As you already mentioned, using string_concat solves your problem as you want. String concatenation does not work like this in Prolog as it does in different languages. The reason why it still prints something when using the + while it throws without it is that the + is in an infix operator and is displayed as such in when using writeln because it also writes out the predicate names.\nIt will write the + infix when provided as prefix such as in this example:\nwriteln(+(3,2)).\nIt does not work without the + because then you simply have three different values after another. Quoted Atom, Variable filled with an Atom, Quoted Atom. Prolog expects a term though, so you run into a syntax error.\n" ]
[ 0 ]
[]
[]
[ "prolog" ]
stackoverflow_0074672425_prolog.txt
Q: asio tcp server hanging I know this is probably a really simple problem, but I've been trying to get the asio examples to work correctly for over a week now. Whenever I run the program, the terminal hangs and doesn't print anything and doesn't send any info to the client. I'm using Ubuntu Linux and a basic compiler command g++ main.cpp -o main.exe -I include #define ASIO_STANDALONE; #include <ctime> #include <iostream> #include <string> #include <asio.hpp> using asio::ip::tcp; int main() { try { asio::io_context io_context; tcp::acceptor acceptor(io_context, tcp::endpoint(tcp::v4(), 1326)); for (;;) { std::cout << "hi"; tcp::socket socket(io_context); acceptor.accept(socket); std::string message = "e"; asio::error_code ignored_error; asio::write(socket, asio::buffer(message), ignored_error); break; } } catch (std::exception& e) { std::cerr << e.what() << std::endl; } return 0; } Any help would be much appreciated. A: the terminal hangs and doesn't print anything and doesn't send any info to the client You need to connect a client first, because the first thing you do is a blocking accept which never completes unless a connection arrives. I've compiled your program (with minor modification for Boost Asio): Live On Coliru //#define ASIO_STANDALONE #include <boost/asio.hpp> #include <ctime> #include <iostream> #include <string> namespace asio = boost::asio; using asio::ip::tcp; using boost::system::error_code; int main() { try { asio::io_context io_context; tcp::acceptor acceptor(io_context, tcp::endpoint(tcp::v4(), 1326)); for (;;) { tcp::socket socket(io_context); acceptor.accept(socket); std::cout << "hi " << socket.remote_endpoint() << std::endl; std::string message = "server message works\n"; error_code ignored_error; asio::write(socket, asio::buffer(message), ignored_error); break; } } catch (std::exception const& e) { std::cerr << e.what() << std::endl; } } Using netcat to emulate a client: nc 127.0.0.1 1326 -w 1 <<< "Hello world" We see: hi 127.0.0.1:45448 server message works Or more clearly in separate terminals:
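If netcat is not available, a few lines of Python can stand in as the test client (a sketch assuming the example server above is running locally on port 1326):

import socket

# connect, read the server's one-line message, and print it
with socket.create_connection(("127.0.0.1", 1326), timeout=5) as s:
    data = s.recv(1024)
print(data.decode(errors="replace"))

Like the nc invocation, this should print the server's message and exit once the server closes the connection.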
asio tcp server hanging
I know this is probably a really simple problem, but I've been trying to get the asio examples to work correctly for over a week now. Whenever I run the program, the terminal hangs and doesn't print anything and doesn't send any info to the client. I'm using Ubuntu Linux and a basic compiler command g++ main.cpp -o main.exe -I include #define ASIO_STANDALONE; #include <ctime> #include <iostream> #include <string> #include <asio.hpp> using asio::ip::tcp; int main() { try { asio::io_context io_context; tcp::acceptor acceptor(io_context, tcp::endpoint(tcp::v4(), 1326)); for (;;) { std::cout << "hi"; tcp::socket socket(io_context); acceptor.accept(socket); std::string message = "e"; asio::error_code ignored_error; asio::write(socket, asio::buffer(message), ignored_error); break; } } catch (std::exception& e) { std::cerr << e.what() << std::endl; } return 0; } Any help would be much appreciated.
[ "\nthe terminal hangs and dosent print anything and dosent send any info to the client\n\nYou need to connect a client first, because the first thing you do is a blocking accept which never completes unless a connection arrives.\nI've compiled your program (with minor modification for Boost Asio):\nLive On Coliru\n//#define ASIO_STANDALONE\n#include <boost/asio.hpp>\n#include <ctime>\n#include <iostream>\n#include <string>\n\nnamespace asio = boost::asio;\nusing asio::ip::tcp;\nusing boost::system::error_code;\n\nint main() {\n try {\n asio::io_context io_context;\n\n tcp::acceptor acceptor(io_context, tcp::endpoint(tcp::v4(), 1326));\n\n for (;;) {\n tcp::socket socket(io_context);\n acceptor.accept(socket);\n std::cout << \"hi \" << socket.remote_endpoint() << std::endl;\n\n std::string message = \"server message works\\n\";\n\n error_code ignored_error;\n asio::write(socket, asio::buffer(message), ignored_error);\n break;\n }\n } catch (std::exception const& e) {\n std::cerr << e.what() << std::endl;\n }\n}\n\nUsing netcat to emulate a client:\nnc 127.0.0.1 1326 -w 1 <<< \"Hello world\"\n\nWe see:\nhi 127.0.0.1:45448\nserver message works\n\nOr more clearly in separate terminals:\n" ]
[ 0 ]
[]
[]
[ "asio", "boost_asio", "c++", "networking" ]
stackoverflow_0074672502_asio_boost_asio_c++_networking.txt
Q: Is there any way to find out where the Xcode build description path comes from? The attached image is a Prebuild log in Xcode, which is enabled by doing this in the Terminal: defaults write com.apple.dt.Xcode IDEShowPrebuildLogs -bool YES. I'd like to know where the "Build description path" comes from so that I can modify it. To make it clearer, I'm working on a big legacy Xcode project, the path to the XCBuildData folder in the attached image is not ~/Library/Developer/Xcode/DerivedData, and I want to modify the path so that XCBuildData is in ~/Library/Developer/Xcode/DerivedData. A: The "Build description path" in the Prebuild log in Xcode is the path to the directory where the project's build artifacts are stored. In a default Xcode installation, this path is ~/Library/Developer/Xcode/DerivedData/[ProjectName]-[UniqueID]/Build/Intermediates.noindex/[ProjectName].build, where [ProjectName] is the name of your project and [UniqueID] is a unique identifier for the project. You can modify the path to the build artifacts directory by changing the "Derived Data Location" in the Xcode project settings. To do this, follow these steps: In Xcode, select your project in the Project navigator on the left side of the window. In the main Xcode window, click the "File" menu and select "Project Settings". In the "Project Settings" dialog that appears, select the "Build System" tab. Under the "Derived Data" section, you should see a drop-down menu labeled "Derived Data Location". Click the drop-down menu and select "Custom". In the "Custom Location" text field that appears, enter the path to the directory where you want to store your project's build artifacts. After making these changes, Xcode will store your project's build artifacts in the directory you specified. This will also change the "Build description path" in the Prebuild log to the new location.
Is there any way to find out where the Xcode build description path comes from?
The attached image is a Prebuild log in Xcode, which is enabled by doing this in the Terminal: defaults write com.apple.dt.Xcode IDEShowPrebuildLogs -bool YES. I'd like to know where the "Build description path" comes from so that I can modify it. To make it clearer, I'm working on a big legacy Xcode project, the path to the XCBuildData folder in the attached image is not ~/Library/Developer/Xcode/DerivedData, and I want to modify the path so that XCBuildData is in ~/Library/Developer/Xcode/DerivedData.
[ "The \"Build description path\" in the Prebuild log in Xcode is the path to the directory where the project's build artifacts are stored. In a default Xcode installation, this path is ~/Library/Developer/Xcode/DerivedData/[ProjectName]-[UniqueID]/Build/Intermediates.noindex/[ProjectName].build, where [ProjectName] is the name of your project and [UniqueID] is a unique identifier for the project.\nYou can modify the path to the build artifacts directory by changing the \"Derived Data Location\" in the Xcode project settings. To do this, follow these steps:\n\nIn Xcode, select your project in the Project navigator on the left\nside of the window.\nIn the main Xcode window, click the \"File\" menu and select \"Project Settings\". In the \"Project Settings\" dialog that appears, select the \"Build System\" tab.\nUnder the \"Derived Data\" section, you should see a drop-down menu labeled \"Derived Data Location\".\nClick the drop-down menu and select \"Custom\".\nIn the \"Custom Location\" text field that appears, enter the path to the directory where you want to store your project's build artifacts.\n\nAfter making these changes, Xcode will store your project's build artifacts in the directory you specified. This will also change the \"Build description path\" in the Prebuild log to the new location.\n" ]
[ 0 ]
[]
[]
[ "prebuild", "xcode" ]
stackoverflow_0074672474_prebuild_xcode.txt
Q: beautiful soup to grab data from table I had recently asked for help using Beautiful Soup to grab forex prices from a site. The data was hidden in a span. I was lucky enough to get help from two people who were amazing and helped me work through it. I have since found a different site that I want to scrape from; this time there is no span, and the text is in the tr and td elements of the table. https://www.wsj.com/market-data/quotes/fx/AUDNZD/historical-prices is the website. As you can see, the high and low prices go back, I believe, 30 days in this table. I would like to grab the whole table so I can use the data as needed for different calculations. When I attempt to grab the data, it's still just coming back as an empty list, and I have tried a lot of different places to grab it from. Can someone not only help me get what I want but also explain what I am doing wrong, so I can learn to use Beautiful Soup for myself and won't have to keep asking for help? Last time, when I grabbed from the span, it saved the data in a list of lists that I was able to use and save as variables for different days and then do calculations with. This is what I am attempting to do again. '''import requests from bs4 import BeautifulSoup import re result = [] URL = "https://www.wsj.com/market-data/quotes/fx/AUDNZD/historical-prices" page = requests.get(URL) soup = BeautifulSoup(page.content, "html.parser") table = soup.select('cr_dataTable') print(table)''' I did not save all my attempts at the different ways I tried. I literally got down to this super basic attempt to just try to get a response back from somewhere I'm grabbing, so I could then continue breaking it down to just the text. Everything I put in that soup.select() came back as an empty list, so I kind of just got to a point where I decided I must not be doing any of this right. The soup is grabbing the HTML, though. My find_all, find(), and soup.select attempts: nothing seemed to work or get a response back. Please advise on how I am going about this wrong. This simple code here should come back with lots of data for all the code in the table, correct? Then I can go through it to grab text and grab what I want? '''import requests from bs4 import BeautifulSoup import re result = [] URL = "https://www.wsj.com/market-data/quotes/fx/AUDNZD/historical-prices" page = requests.get(URL) soup = BeautifulSoup(page.content, "html.parser") table = soup.find('table', class_='cr_dataTable') print(table)''' It comes back None! A: You hadn't added headers, thus the request was fetching output meant for robots.
Full Code import requests from bs4 import BeautifulSoup import json import os result = [] headers = { 'user-agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Mobile Safari/537.36', } r = URL = "https://www.wsj.com/market-data/quotes/fx/AUDNZD/historical-prices" page = requests.get(URL, headers=headers) soup = BeautifulSoup(page.content, "html.parser") div = soup.find('div', {"id": "historical_data_table"}) table = div.find('table', {"class": "cr_dataTable"}) for i in table.findAll("tr"): row = i row = row.findAll("td") DATE = row[0].text OPEN = row[1].text HIGH = row[2].text LOW = row[3].text CLOSE = row[4].text output = {"DATE": DATE, "OPEN": OPEN, "HIGH": HIGH, "LOW": LOW, "CLOSE": CLOSE} result.append(output) if (os.path.exists("Data.json") == False): f = open("Data.json", "w") json.dump(result, f, indent=4) else: with open('Data.json', 'w') as f: json.dump(result, f, indent=4) Output [ { "DATE": "12/02/22", "OPEN": "1.0691", "HIGH": "1.0709", "LOW": "1.0568", "CLOSE": "1.0602" }, { "DATE": "12/01/22", "OPEN": "1.0768", "HIGH": "1.0792", "LOW": "1.0669", "CLOSE": "1.0692" }, { "DATE": "11/30/22", "OPEN": "1.0787", "HIGH": "1.0813", "LOW": "1.0737", "CLOSE": "1.0783" }, { "DATE": "11/29/22", "OPEN": "1.0794", "HIGH": "1.0820", "LOW": "1.0773", "CLOSE": "1.0788" }, { "DATE": "11/28/22", "OPEN": "1.0807", "HIGH": "1.0815", "LOW": "1.0752", "CLOSE": "1.0792" }, { "DATE": "11/25/22", "OPEN": "1.0805", "HIGH": "1.0822", "LOW": "1.0782", "CLOSE": "1.0804" }, { "DATE": "11/24/22", "OPEN": "1.0787", "HIGH": "1.0819", "LOW": "1.0765", "CLOSE": "1.0797" }, { "DATE": "11/23/22", "OPEN": "1.0801", "HIGH": "1.0837", "LOW": "1.0747", "CLOSE": "1.0781" }, { "DATE": "11/22/22", "OPEN": "1.0826", "HIGH": "1.0838", "LOW": "1.0781", "CLOSE": "1.0804" }, { "DATE": "11/21/22", "OPEN": "1.0891", "HIGH": "1.0891", "LOW": "1.0799", "CLOSE": "1.0828" }, { "DATE": "11/18/22", "OPEN": "1.0915", "HIGH": "1.0934", "LOW": "1.0833", "CLOSE": "1.0849" }, { "DATE": "11/17/22", "OPEN": "1.0964", "HIGH": "1.0981", "LOW": "1.0912", "CLOSE": "1.0917" }, { "DATE": "11/16/22", "OPEN": "1.0971", "HIGH": "1.0997", "LOW": "1.0941", "CLOSE": "1.0963" }, { "DATE": "11/15/22", "OPEN": "1.0995", "HIGH": "1.1002", "LOW": "1.0946", "CLOSE": "1.0975" }, { "DATE": "11/14/22", "OPEN": "1.0957", "HIGH": "1.1015", "LOW": "1.0953", "CLOSE": "1.0994" }, { "DATE": "11/11/22", "OPEN": "1.0987", "HIGH": "1.1046", "LOW": "1.0949", "CLOSE": "1.0965" }, { "DATE": "11/10/22", "OPEN": "1.0927", "HIGH": "1.0992", "LOW": "1.0913", "CLOSE": "1.0986" }, { "DATE": "11/09/22", "OPEN": "1.0927", "HIGH": "1.0975", "LOW": "1.0901", "CLOSE": "1.0929" }, { "DATE": "11/08/22", "OPEN": "1.0908", "HIGH": "1.0928", "LOW": "1.0882", "CLOSE": "1.0919" }, { "DATE": "11/07/22", "OPEN": "1.0863", "HIGH": "1.0977", "LOW": "1.0863", "CLOSE": "1.0910" }, { "DATE": "11/04/22", "OPEN": "1.0896", "HIGH": "1.0960", "LOW": "1.0877", "CLOSE": "1.0909" }, { "DATE": "11/03/22", "OPEN": "1.0914", "HIGH": "1.0937", "LOW": "1.0883", "CLOSE": "1.0898" }, { "DATE": "11/02/22", "OPEN": "1.0945", "HIGH": "1.0957", "LOW": "1.0902", "CLOSE": "1.0913" }, { "DATE": "11/01/22", "OPEN": "1.1003", "HIGH": "1.1033", "LOW": "1.0930", "CLOSE": "1.0944" }, { "DATE": "10/31/22", "OPEN": "1.1031", "HIGH": "1.1348", "LOW": "1.0989", "CLOSE": "1.1004" }, { "DATE": "10/28/22", "OPEN": "1.1070", "HIGH": "1.1084", "LOW": "1.1012", "CLOSE": "1.1032" }, { "DATE": "10/27/22", "OPEN": "1.1140", "HIGH": "1.1154", "LOW": "1.1058", 
"CLOSE": "1.1072" }, { "DATE": "10/26/22", "OPEN": "1.1130", "HIGH": "1.1176", "LOW": "1.1092", "CLOSE": "1.1133" }, { "DATE": "10/25/22", "OPEN": "1.1089", "HIGH": "1.1122", "LOW": "1.1065", "CLOSE": "1.1111" }, { "DATE": "10/24/22", "OPEN": "1.1124", "HIGH": "1.1124", "LOW": "1.1020", "CLOSE": "1.1085" }, { "DATE": "10/21/22", "OPEN": "1.1063", "HIGH": "1.1102", "LOW": "1.1044", "CLOSE": "1.1077" }, { "DATE": "10/20/22", "OPEN": "1.1056", "HIGH": "1.1094", "LOW": "1.1023", "CLOSE": "1.1062" }, { "DATE": "10/19/22", "OPEN": "1.1100", "HIGH": "1.1107", "LOW": "1.1052", "CLOSE": "1.1055" }, { "DATE": "10/18/22", "OPEN": "1.1151", "HIGH": "1.1210", "LOW": "1.1071", "CLOSE": "1.1101" }, { "DATE": "10/17/22", "OPEN": "1.1138", "HIGH": "1.1193", "LOW": "1.1137", "CLOSE": "1.1161" }, { "DATE": "10/14/22", "OPEN": "1.1176", "HIGH": "1.1191", "LOW": "1.1121", "CLOSE": "1.1151" }, { "DATE": "10/13/22", "OPEN": "1.1192", "HIGH": "1.1215", "LOW": "1.1157", "CLOSE": "1.1163" }, { "DATE": "10/12/22", "OPEN": "1.1235", "HIGH": "1.1244", "LOW": "1.1172", "CLOSE": "1.1188" }, { "DATE": "10/11/22", "OPEN": "1.1318", "HIGH": "1.1328", "LOW": "1.1195", "CLOSE": "1.1237" }, { "DATE": "10/10/22", "OPEN": "1.1367", "HIGH": "1.1370", "LOW": "1.1266", "CLOSE": "1.1317" }, { "DATE": "10/07/22", "OPEN": "1.1322", "HIGH": "1.1376", "LOW": "1.1301", "CLOSE": "1.1358" }, { "DATE": "10/06/22", "OPEN": "1.1309", "HIGH": "1.1355", "LOW": "1.1244", "CLOSE": "1.1327" }, { "DATE": "10/05/22", "OPEN": "1.1348", "HIGH": "1.1381", "LOW": "1.1242", "CLOSE": "1.1308" }, { "DATE": "10/04/22", "OPEN": "1.1386", "HIGH": "1.1426", "LOW": "1.1306", "CLOSE": "1.1349" }, { "DATE": "10/03/22", "OPEN": "1.1460", "HIGH": "1.1460", "LOW": "1.1362", "CLOSE": "1.1388" }, { "DATE": "09/30/22", "OPEN": "1.1387", "HIGH": "1.1444", "LOW": "1.1320", "CLOSE": "1.1439" }, { "DATE": "09/29/22", "OPEN": "1.1382", "HIGH": "1.1417", "LOW": "1.1346", "CLOSE": "1.1350" }, { "DATE": "09/28/22", "OPEN": "1.1417", "HIGH": "1.1495", "LOW": "1.1290", "CLOSE": "1.1385" }, { "DATE": "09/27/22", "OPEN": "1.1453", "HIGH": "1.1466", "LOW": "1.1370", "CLOSE": "1.1419" }, { "DATE": "09/26/22", "OPEN": "1.1365", "HIGH": "1.1465", "LOW": "1.1328", "CLOSE": "1.1454" }, { "DATE": "09/23/22", "OPEN": "1.1365", "HIGH": "1.1378", "LOW": "1.1323", "CLOSE": "1.1373" }, { "DATE": "09/22/22", "OPEN": "1.1329", "HIGH": "1.1373", "LOW": "1.1303", "CLOSE": "1.1366" }, { "DATE": "09/21/22", "OPEN": "1.1342", "HIGH": "1.1363", "LOW": "1.1315", "CLOSE": "1.1334" }, { "DATE": "09/20/22", "OPEN": "1.1284", "HIGH": "1.1365", "LOW": "1.1273", "CLOSE": "1.1347" }, { "DATE": "09/19/22", "OPEN": "1.1221", "HIGH": "1.1295", "LOW": "1.1206", "CLOSE": "1.1289" }, { "DATE": "09/16/22", "OPEN": "1.1232", "HIGH": "1.1256", "LOW": "1.1197", "CLOSE": "1.1218" }, { "DATE": "09/15/22", "OPEN": "1.1247", "HIGH": "1.1261", "LOW": "1.1212", "CLOSE": "1.1233" }, { "DATE": "09/14/22", "OPEN": "1.1255", "HIGH": "1.1255", "LOW": "1.1201", "CLOSE": "1.1239" }, { "DATE": "09/13/22", "OPEN": "1.1218", "HIGH": "1.1259", "LOW": "1.1194", "CLOSE": "1.1223" }, { "DATE": "09/12/22", "OPEN": "1.1186", "HIGH": "1.1240", "LOW": "1.1181", "CLOSE": "1.1225" }, { "DATE": "09/09/22", "OPEN": "1.1156", "HIGH": "1.1215", "LOW": "1.1139", "CLOSE": "1.1212" }, { "DATE": "09/08/22", "OPEN": "1.1142", "HIGH": "1.1157", "LOW": "1.1115", "CLOSE": "1.1151" }, { "DATE": "09/07/22", "OPEN": "1.1153", "HIGH": "1.1181", "LOW": "1.1134", "CLOSE": "1.1141" }, { "DATE": "09/06/22", "OPEN": "1.1150", "HIGH": "1.1173", "LOW": "1.1127", 
"CLOSE": "1.1152" }, { "DATE": "09/05/22", "OPEN": "1.1113", "HIGH": "1.1167", "LOW": "1.1113", "CLOSE": "1.1153" } ]
beautiful soup to grab data from table
I recently asked for help using Beautiful Soup to grab forex prices from a site; the data was hidden in a span, and I was lucky enough to get help from two people who were amazing and helped me work through it. I have since found a different site that I want to scrape. This time there is no span; the text is in tr and td elements of the table. https://www.wsj.com/market-data/quotes/fx/AUDNZD/historical-prices is the website. As you can see, the high and low prices go back (I believe) 30 days in this table. I would like to grab the whole table so I can use the data as needed for different calculations, but when I attempt to grab the data it still just comes back as an empty list, and I have tried a lot of different places to grab it from. Can someone not only help me get what I want, but also explain what I am doing wrong, so I can learn to use Beautiful Soup myself and don't have to keep asking for help? Last time, the span data was saved into a list of lists that I was able to use, save as variables for different days, and then do calculations with; this is what I am attempting to do again. '''import requests from bs4 import BeautifulSoup import re result = [] URL = "https://www.wsj.com/market-data/quotes/fx/AUDNZD/historical-prices" page = requests.get(URL) soup = BeautifulSoup(page.content, "html.parser") table = soup.select('cr_dataTable') print(table)''' I did not save all my attempts at different approaches; I literally got down to this super basic attempt, just trying to get a response back from something I was grabbing, so I could then continue breaking it down to just the text. Everything I put in that soup.select() came back as an empty list, so I got to the point where I decided I must not be doing any of this right, even though the soup is grabbing the HTML. My find_all(), find() and soup.select() attempts never seemed to work or get a response back. Please advise on where I am going wrong. This simple code here should come back with lots of data for all the markup in the table, correct? Then I can go through it to grab the text I want. '''import requests from bs4 import BeautifulSoup import re result = [] URL = "https://www.wsj.com/market-data/quotes/fx/AUDNZD/historical-prices" page = requests.get(URL) soup = BeautifulSoup(page.content, "html.parser") table = soup.find('table', class_='cr_dataTable') print(table)''' comes back None!
[ "You hadn't added headers thus the request was fetching output for robots.\nFull Code\nimport requests\nfrom bs4 import BeautifulSoup\nimport json\nimport os\nresult = []\nheaders = {\n 'user-agent':\n 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Mobile Safari/537.36',\n}\nr = URL = \"https://www.wsj.com/market-data/quotes/fx/AUDNZD/historical-prices\"\npage = requests.get(URL, headers=headers)\nsoup = BeautifulSoup(page.content, \"html.parser\")\ndiv = soup.find('div', {\"id\": \"historical_data_table\"})\ntable = div.find('table', {\"class\": \"cr_dataTable\"})\nfor i in table.findAll(\"tr\"):\n row = i\n row = row.findAll(\"td\")\n DATE = row[0].text\n OPEN = row[1].text\n HIGH = row[2].text\n LOW = row[3].text\n CLOSE = row[4].text\n output = {\"DATE\": DATE, \"OPEN\": OPEN,\n \"HIGH\": HIGH, \"LOW\": LOW, \"CLOSE\": CLOSE}\n result.append(output)\nif (os.path.exists(\"Data.json\") == False):\n f = open(\"Data.json\", \"w\")\n json.dump(result, f, indent=4)\nelse:\n with open('Data.json', 'w') as f:\n json.dump(result, f, indent=4)\n\nOutput\n[\n {\n \"DATE\": \"12/02/22\",\n \"OPEN\": \"1.0691\",\n \"HIGH\": \"1.0709\",\n \"LOW\": \"1.0568\",\n \"CLOSE\": \"1.0602\"\n },\n {\n \"DATE\": \"12/01/22\",\n \"OPEN\": \"1.0768\",\n \"HIGH\": \"1.0792\",\n \"LOW\": \"1.0669\",\n \"CLOSE\": \"1.0692\"\n },\n {\n \"DATE\": \"11/30/22\",\n \"OPEN\": \"1.0787\",\n \"HIGH\": \"1.0813\",\n \"LOW\": \"1.0737\",\n \"CLOSE\": \"1.0783\"\n },\n {\n \"DATE\": \"11/29/22\",\n \"OPEN\": \"1.0794\",\n \"HIGH\": \"1.0820\",\n \"LOW\": \"1.0773\",\n \"CLOSE\": \"1.0788\"\n },\n {\n \"DATE\": \"11/28/22\",\n \"OPEN\": \"1.0807\",\n \"HIGH\": \"1.0815\",\n \"LOW\": \"1.0752\",\n \"CLOSE\": \"1.0792\"\n },\n {\n \"DATE\": \"11/25/22\",\n \"OPEN\": \"1.0805\",\n \"HIGH\": \"1.0822\",\n \"LOW\": \"1.0782\",\n \"CLOSE\": \"1.0804\"\n },\n {\n \"DATE\": \"11/24/22\",\n \"OPEN\": \"1.0787\",\n \"HIGH\": \"1.0819\",\n \"LOW\": \"1.0765\",\n \"CLOSE\": \"1.0797\"\n },\n {\n \"DATE\": \"11/23/22\",\n \"OPEN\": \"1.0801\",\n \"HIGH\": \"1.0837\",\n \"LOW\": \"1.0747\",\n \"CLOSE\": \"1.0781\"\n },\n {\n \"DATE\": \"11/22/22\",\n \"OPEN\": \"1.0826\",\n \"HIGH\": \"1.0838\",\n \"LOW\": \"1.0781\",\n \"CLOSE\": \"1.0804\"\n },\n {\n \"DATE\": \"11/21/22\",\n \"OPEN\": \"1.0891\",\n \"HIGH\": \"1.0891\",\n \"LOW\": \"1.0799\",\n \"CLOSE\": \"1.0828\"\n },\n {\n \"DATE\": \"11/18/22\",\n \"OPEN\": \"1.0915\",\n \"HIGH\": \"1.0934\",\n \"LOW\": \"1.0833\",\n \"CLOSE\": \"1.0849\"\n },\n {\n \"DATE\": \"11/17/22\",\n \"OPEN\": \"1.0964\",\n \"HIGH\": \"1.0981\",\n \"LOW\": \"1.0912\",\n \"CLOSE\": \"1.0917\"\n },\n {\n \"DATE\": \"11/16/22\",\n \"OPEN\": \"1.0971\",\n \"HIGH\": \"1.0997\",\n \"LOW\": \"1.0941\",\n \"CLOSE\": \"1.0963\"\n },\n {\n \"DATE\": \"11/15/22\",\n \"OPEN\": \"1.0995\",\n \"HIGH\": \"1.1002\",\n \"LOW\": \"1.0946\",\n \"CLOSE\": \"1.0975\"\n },\n {\n \"DATE\": \"11/14/22\",\n \"OPEN\": \"1.0957\",\n \"HIGH\": \"1.1015\",\n \"LOW\": \"1.0953\",\n \"CLOSE\": \"1.0994\"\n },\n {\n \"DATE\": \"11/11/22\",\n \"OPEN\": \"1.0987\",\n \"HIGH\": \"1.1046\",\n \"LOW\": \"1.0949\",\n \"CLOSE\": \"1.0965\"\n },\n {\n \"DATE\": \"11/10/22\",\n \"OPEN\": \"1.0927\",\n \"HIGH\": \"1.0992\",\n \"LOW\": \"1.0913\",\n \"CLOSE\": \"1.0986\"\n },\n {\n \"DATE\": \"11/09/22\",\n \"OPEN\": \"1.0927\",\n \"HIGH\": \"1.0975\",\n \"LOW\": \"1.0901\",\n \"CLOSE\": \"1.0929\"\n },\n {\n \"DATE\": \"11/08/22\",\n \"OPEN\": \"1.0908\",\n \"HIGH\": 
\"1.0928\",\n \"LOW\": \"1.0882\",\n \"CLOSE\": \"1.0919\"\n },\n {\n \"DATE\": \"11/07/22\",\n \"OPEN\": \"1.0863\",\n \"HIGH\": \"1.0977\",\n \"LOW\": \"1.0863\",\n \"CLOSE\": \"1.0910\"\n },\n {\n \"DATE\": \"11/04/22\",\n \"OPEN\": \"1.0896\",\n \"HIGH\": \"1.0960\",\n \"LOW\": \"1.0877\",\n \"CLOSE\": \"1.0909\"\n },\n {\n \"DATE\": \"11/03/22\",\n \"OPEN\": \"1.0914\",\n \"HIGH\": \"1.0937\",\n \"LOW\": \"1.0883\",\n \"CLOSE\": \"1.0898\"\n },\n {\n \"DATE\": \"11/02/22\",\n \"OPEN\": \"1.0945\",\n \"HIGH\": \"1.0957\",\n \"LOW\": \"1.0902\",\n \"CLOSE\": \"1.0913\"\n },\n {\n \"DATE\": \"11/01/22\",\n \"OPEN\": \"1.1003\",\n \"HIGH\": \"1.1033\",\n \"LOW\": \"1.0930\",\n \"CLOSE\": \"1.0944\"\n },\n {\n \"DATE\": \"10/31/22\",\n \"OPEN\": \"1.1031\",\n \"HIGH\": \"1.1348\",\n \"LOW\": \"1.0989\",\n \"CLOSE\": \"1.1004\"\n },\n {\n \"DATE\": \"10/28/22\",\n \"OPEN\": \"1.1070\",\n \"HIGH\": \"1.1084\",\n \"LOW\": \"1.1012\",\n \"CLOSE\": \"1.1032\"\n },\n {\n \"DATE\": \"10/27/22\",\n \"OPEN\": \"1.1140\",\n \"HIGH\": \"1.1154\",\n \"LOW\": \"1.1058\",\n \"CLOSE\": \"1.1072\"\n },\n {\n \"DATE\": \"10/26/22\",\n \"OPEN\": \"1.1130\",\n \"HIGH\": \"1.1176\",\n \"LOW\": \"1.1092\",\n \"CLOSE\": \"1.1133\"\n },\n {\n \"DATE\": \"10/25/22\",\n \"OPEN\": \"1.1089\",\n \"HIGH\": \"1.1122\",\n \"LOW\": \"1.1065\",\n \"CLOSE\": \"1.1111\"\n },\n {\n \"DATE\": \"10/24/22\",\n \"OPEN\": \"1.1124\",\n \"HIGH\": \"1.1124\",\n \"LOW\": \"1.1020\",\n \"CLOSE\": \"1.1085\"\n },\n {\n \"DATE\": \"10/21/22\",\n \"OPEN\": \"1.1063\",\n \"HIGH\": \"1.1102\",\n \"LOW\": \"1.1044\",\n \"CLOSE\": \"1.1077\"\n },\n {\n \"DATE\": \"10/20/22\",\n \"OPEN\": \"1.1056\",\n \"HIGH\": \"1.1094\",\n \"LOW\": \"1.1023\",\n \"CLOSE\": \"1.1062\"\n },\n {\n \"DATE\": \"10/19/22\",\n \"OPEN\": \"1.1100\",\n \"HIGH\": \"1.1107\",\n \"LOW\": \"1.1052\",\n \"CLOSE\": \"1.1055\"\n },\n {\n \"DATE\": \"10/18/22\",\n \"OPEN\": \"1.1151\",\n \"HIGH\": \"1.1210\",\n \"LOW\": \"1.1071\",\n \"CLOSE\": \"1.1101\"\n },\n {\n \"DATE\": \"10/17/22\",\n \"OPEN\": \"1.1138\",\n \"HIGH\": \"1.1193\",\n \"LOW\": \"1.1137\",\n \"CLOSE\": \"1.1161\"\n },\n {\n \"DATE\": \"10/14/22\",\n \"OPEN\": \"1.1176\",\n \"HIGH\": \"1.1191\",\n \"LOW\": \"1.1121\",\n \"CLOSE\": \"1.1151\"\n },\n {\n \"DATE\": \"10/13/22\",\n \"OPEN\": \"1.1192\",\n \"HIGH\": \"1.1215\",\n \"LOW\": \"1.1157\",\n \"CLOSE\": \"1.1163\"\n },\n {\n \"DATE\": \"10/12/22\",\n \"OPEN\": \"1.1235\",\n \"HIGH\": \"1.1244\",\n \"LOW\": \"1.1172\",\n \"CLOSE\": \"1.1188\"\n },\n {\n \"DATE\": \"10/11/22\",\n \"OPEN\": \"1.1318\",\n \"HIGH\": \"1.1328\",\n \"LOW\": \"1.1195\",\n \"CLOSE\": \"1.1237\"\n },\n {\n \"DATE\": \"10/10/22\",\n \"OPEN\": \"1.1367\",\n \"HIGH\": \"1.1370\",\n \"LOW\": \"1.1266\",\n \"CLOSE\": \"1.1317\"\n },\n {\n \"DATE\": \"10/07/22\",\n \"OPEN\": \"1.1322\",\n \"HIGH\": \"1.1376\",\n \"LOW\": \"1.1301\",\n \"CLOSE\": \"1.1358\"\n },\n {\n \"DATE\": \"10/06/22\",\n \"OPEN\": \"1.1309\",\n \"HIGH\": \"1.1355\",\n \"LOW\": \"1.1244\",\n \"CLOSE\": \"1.1327\"\n },\n {\n \"DATE\": \"10/05/22\",\n \"OPEN\": \"1.1348\",\n \"HIGH\": \"1.1381\",\n \"LOW\": \"1.1242\",\n \"CLOSE\": \"1.1308\"\n },\n {\n \"DATE\": \"10/04/22\",\n \"OPEN\": \"1.1386\",\n \"HIGH\": \"1.1426\",\n \"LOW\": \"1.1306\",\n \"CLOSE\": \"1.1349\"\n },\n {\n \"DATE\": \"10/03/22\",\n \"OPEN\": \"1.1460\",\n \"HIGH\": \"1.1460\",\n \"LOW\": \"1.1362\",\n \"CLOSE\": \"1.1388\"\n },\n {\n \"DATE\": \"09/30/22\",\n \"OPEN\": \"1.1387\",\n \"HIGH\": \"1.1444\",\n \"LOW\": \"1.1320\",\n 
\"CLOSE\": \"1.1439\"\n },\n {\n \"DATE\": \"09/29/22\",\n \"OPEN\": \"1.1382\",\n \"HIGH\": \"1.1417\",\n \"LOW\": \"1.1346\",\n \"CLOSE\": \"1.1350\"\n },\n {\n \"DATE\": \"09/28/22\",\n \"OPEN\": \"1.1417\",\n \"HIGH\": \"1.1495\",\n \"LOW\": \"1.1290\",\n \"CLOSE\": \"1.1385\"\n },\n {\n \"DATE\": \"09/27/22\",\n \"OPEN\": \"1.1453\",\n \"HIGH\": \"1.1466\",\n \"LOW\": \"1.1370\",\n \"CLOSE\": \"1.1419\"\n },\n {\n \"DATE\": \"09/26/22\",\n \"OPEN\": \"1.1365\",\n \"HIGH\": \"1.1465\",\n \"LOW\": \"1.1328\",\n \"CLOSE\": \"1.1454\"\n },\n {\n \"DATE\": \"09/23/22\",\n \"OPEN\": \"1.1365\",\n \"HIGH\": \"1.1378\",\n \"LOW\": \"1.1323\",\n \"CLOSE\": \"1.1373\"\n },\n {\n \"DATE\": \"09/22/22\",\n \"OPEN\": \"1.1329\",\n \"HIGH\": \"1.1373\",\n \"LOW\": \"1.1303\",\n \"CLOSE\": \"1.1366\"\n },\n {\n \"DATE\": \"09/21/22\",\n \"OPEN\": \"1.1342\",\n \"HIGH\": \"1.1363\",\n \"LOW\": \"1.1315\",\n \"CLOSE\": \"1.1334\"\n },\n {\n \"DATE\": \"09/20/22\",\n \"OPEN\": \"1.1284\",\n \"HIGH\": \"1.1365\",\n \"LOW\": \"1.1273\",\n \"CLOSE\": \"1.1347\"\n },\n {\n \"DATE\": \"09/19/22\",\n \"OPEN\": \"1.1221\",\n \"HIGH\": \"1.1295\",\n \"LOW\": \"1.1206\",\n \"CLOSE\": \"1.1289\"\n },\n {\n \"DATE\": \"09/16/22\",\n \"OPEN\": \"1.1232\",\n \"HIGH\": \"1.1256\",\n \"LOW\": \"1.1197\",\n \"CLOSE\": \"1.1218\"\n },\n {\n \"DATE\": \"09/15/22\",\n \"OPEN\": \"1.1247\",\n \"HIGH\": \"1.1261\",\n \"LOW\": \"1.1212\",\n \"CLOSE\": \"1.1233\"\n },\n {\n \"DATE\": \"09/14/22\",\n \"OPEN\": \"1.1255\",\n \"HIGH\": \"1.1255\",\n \"LOW\": \"1.1201\",\n \"CLOSE\": \"1.1239\"\n },\n {\n \"DATE\": \"09/13/22\",\n \"OPEN\": \"1.1218\",\n \"HIGH\": \"1.1259\",\n \"LOW\": \"1.1194\",\n \"CLOSE\": \"1.1223\"\n },\n {\n \"DATE\": \"09/12/22\",\n \"OPEN\": \"1.1186\",\n \"HIGH\": \"1.1240\",\n \"LOW\": \"1.1181\",\n \"CLOSE\": \"1.1225\"\n },\n {\n \"DATE\": \"09/09/22\",\n \"OPEN\": \"1.1156\",\n \"HIGH\": \"1.1215\",\n \"LOW\": \"1.1139\",\n \"CLOSE\": \"1.1212\"\n },\n {\n \"DATE\": \"09/08/22\",\n \"OPEN\": \"1.1142\",\n \"HIGH\": \"1.1157\",\n \"LOW\": \"1.1115\",\n \"CLOSE\": \"1.1151\"\n },\n {\n \"DATE\": \"09/07/22\",\n \"OPEN\": \"1.1153\",\n \"HIGH\": \"1.1181\",\n \"LOW\": \"1.1134\",\n \"CLOSE\": \"1.1141\"\n },\n {\n \"DATE\": \"09/06/22\",\n \"OPEN\": \"1.1150\",\n \"HIGH\": \"1.1173\",\n \"LOW\": \"1.1127\",\n \"CLOSE\": \"1.1152\"\n },\n {\n \"DATE\": \"09/05/22\",\n \"OPEN\": \"1.1113\",\n \"HIGH\": \"1.1167\",\n \"LOW\": \"1.1113\",\n \"CLOSE\": \"1.1153\"\n }\n]\n\n" ]
[ 0 ]
[]
[]
[ "beautifulsoup", "python" ]
stackoverflow_0074672389_beautifulsoup_python.txt
Q: How to make a kind of synoptic table Excuse me for having to ask, but the following topic is giving me a lot of trouble. I want to make a table with connections between its boxes, but I can't find the right approach, and I couldn't find any example; I tried to find an example and copy its code, but it doesn't convince me either. This is what I want to do: (screenshot omitted) The idea is as shown; this is how I currently have it: ` import React from "react"; import { useState, useEffect } from "react"; import { makeStyles } from "@material-ui/core/styles"; import Card from "@material-ui/core/Card"; import CardActions from "@material-ui/core/CardActions"; import CardContent from "@material-ui/core/CardContent"; import Button from "@material-ui/core/Button"; import Typography from "@material-ui/core/Typography"; import Grid from "@material-ui/core/Grid"; import Paper from "@material-ui/core/Paper"; import MundialButtons from "../../componentes/mundial-buttons"; const useStyles = makeStyles((theme) => ({ root: { minWidth: 100, maxWidth: 250, flexGrow: 1 }, bullet: { display: "inline-block", margin: "0 2px", transform: "scale(0.8)" }, title: { fontSize: 14 }, pos: { marginBottom: 12 }, paper: { padding: theme.spacing(2), textAlign: "center", color: theme.palette.text.primary }, logo: { float: "left" } })); export default function Two() { const url = "https://adad1EUmIOwosuGTI7L2DD6S02RjOG7vbxU3FjVVD1u-iYiw/a!A1:Z1000"; const [todos, setTodos] = useState(); const fetchApi = async () => { const response = await fetch(url); const responseJSON = await response.json(); setTodos(responseJSON); }; useEffect(() => { fetchApi(); }, []); const classes = useStyles(); const styleRed2 = [ { marginTop: "90px" }, { marginTop: "180px" }, { marginTop: "270px" } ]; return ( <div id="all"> <MundialButtons /> <div className={classes.root}> <Grid container spacing={3}> <Grid item xs={12}> <Paper className={classes.paper}> <Typography>asd</Typography> </Paper> </Grid> </Grid> </div> <div id="all"> {!todos ? "Cargando..." : todos.map((todo, index) => { return ( <Grid item xs={6}> <div className={classes.root} id="octavos"> {todo.Local === undefined || todo.Local === "" || todo.Visitante === undefined || todo.Visitante === "" ? ( <div></div> ) : ( <Card className={classes.root} variant="outlined"> <CardContent> <Typography className={classes.title} color="textSecondary" gutterBottom > Octavos: </Typography> <Typography variant="h5" component="h2"> test </Typography> <Typography variant="h5" component="h2"> test </Typography> {/*<Typography className={classes.pos} color="textSecondary" > hora </Typography>*/} </CardContent> </Card> )} <p></p> </div> </Grid> ); })} </div> </div> ); } ` (image: a box with connections between them) A: It's probably best to lean on open-source libraries for this, no need to reinvent the wheel. As it happens, what you require kind of already exists: https://www.npmjs.com/package/@g-loot/react-tournament-brackets. Demo.
How to make a kind of synoptic table
Excuse me for having to ask, but the following topic is giving me a lot of trouble. I want to make a table with connections between its boxes, but I can't find the right approach, and I couldn't find any example; I tried to find an example and copy its code, but it doesn't convince me either. This is what I want to do: (screenshot omitted) The idea is as shown; this is how I currently have it: ` import React from "react"; import { useState, useEffect } from "react"; import { makeStyles } from "@material-ui/core/styles"; import Card from "@material-ui/core/Card"; import CardActions from "@material-ui/core/CardActions"; import CardContent from "@material-ui/core/CardContent"; import Button from "@material-ui/core/Button"; import Typography from "@material-ui/core/Typography"; import Grid from "@material-ui/core/Grid"; import Paper from "@material-ui/core/Paper"; import MundialButtons from "../../componentes/mundial-buttons"; const useStyles = makeStyles((theme) => ({ root: { minWidth: 100, maxWidth: 250, flexGrow: 1 }, bullet: { display: "inline-block", margin: "0 2px", transform: "scale(0.8)" }, title: { fontSize: 14 }, pos: { marginBottom: 12 }, paper: { padding: theme.spacing(2), textAlign: "center", color: theme.palette.text.primary }, logo: { float: "left" } })); export default function Two() { const url = "https://adad1EUmIOwosuGTI7L2DD6S02RjOG7vbxU3FjVVD1u-iYiw/a!A1:Z1000"; const [todos, setTodos] = useState(); const fetchApi = async () => { const response = await fetch(url); const responseJSON = await response.json(); setTodos(responseJSON); }; useEffect(() => { fetchApi(); }, []); const classes = useStyles(); const styleRed2 = [ { marginTop: "90px" }, { marginTop: "180px" }, { marginTop: "270px" } ]; return ( <div id="all"> <MundialButtons /> <div className={classes.root}> <Grid container spacing={3}> <Grid item xs={12}> <Paper className={classes.paper}> <Typography>asd</Typography> </Paper> </Grid> </Grid> </div> <div id="all"> {!todos ? "Cargando..." : todos.map((todo, index) => { return ( <Grid item xs={6}> <div className={classes.root} id="octavos"> {todo.Local === undefined || todo.Local === "" || todo.Visitante === undefined || todo.Visitante === "" ? ( <div></div> ) : ( <Card className={classes.root} variant="outlined"> <CardContent> <Typography className={classes.title} color="textSecondary" gutterBottom > Octavos: </Typography> <Typography variant="h5" component="h2"> test </Typography> <Typography variant="h5" component="h2"> test </Typography> {/*<Typography className={classes.pos} color="textSecondary" > hora </Typography>*/} </CardContent> </Card> )} <p></p> </div> </Grid> ); })} </div> </div> ); } ` (image: a box with connections between them)
[ "It's probably best to lean on open-source libraries for this, no need to reinvent the wheel. As it happens, what you require kind of already exists: https://www.npmjs.com/package/@g-loot/react-tournament-brackets. Demo.\n" ]
[ 0 ]
[]
[]
[ "reactjs" ]
stackoverflow_0074672377_reactjs.txt
Q: 'str' object has no attribute 'decode' on djangorestframework_simplejwt I was trying to follow this quick start from the djangorestframework-simplejwt documentation, at the link https://django-rest-framework-simplejwt.readthedocs.io/en/latest/getting_started.html But I have a problem when trying to obtain a token; it always returns this error: 'str' object has no attribute 'decode' Edited: this is my code in urls.py from django.contrib import admin from django.urls import path from rest_framework_simplejwt import views as jwt_views from core.views import HelloView urlpatterns = [ path('admin/', admin.site.urls), path('api/token/', jwt_views.TokenObtainPairView.as_view(), name='token_obtain_pair'), path('api/token/refresh/', jwt_views.TokenRefreshView.as_view(), name='token_refresh'), path('hello/', HelloView.as_view(), name='hello'), ] settings.py ... INSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'rest_framework', ] ... REST_FRAMEWORK = { 'DEFAULT_AUTHENTICATION_CLASSES': [ 'rest_framework_simplejwt.authentication.JWTAuthentication', ], } ... views.py from django.shortcuts import render # Create your views here. from rest_framework.views import APIView from rest_framework.response import Response from rest_framework.permissions import IsAuthenticated class HelloView(APIView): permission_classes = (IsAuthenticated,) def get(self, request): content = {'message': 'Hello, World!'} return Response(content) A: Downgrading PyJWT did the job for me. To achieve that, change the corresponding line in your requirements.txt to PyJWT==v1.7.1 or install the specified package with: pip install pyjwt==v1.7.1 A: For me, downgrading JWT to version 1.7.1 worked. To do that, update the corresponding line in requirements.txt, or if there is no PyJWT line, add this line: PyJWT==v1.7.1 A: Remove .decode('utf-8') in "/venv/lib/python3.9/site-packages/rest_framework_jwt/utils.py" like # /venv/lib/python3.9/site-packages/rest_framework_jwt/utils.py def jwt_encode_handler(payload): key = api_settings.JWT_PRIVATE_KEY or jwt_get_secret_key(payload) return jwt.encode( payload, key, api_settings.JWT_ALGORITHM )#.decode('utf-8') ==> delete this jwt2.3.0-utils-encode_handler As you installed simple-jwt, jwt was upgraded from 1.7.1 to version 2.3.0 (maybe). And by this (see jwt2.3.0-api_jwt-encode), "-> str:" was added to the method "encode", which means "encode" now returns a "str". So we don't need .decode('utf-8') anymore in utils.py in rest_framework_jwt. And if you run the server again you may encounter the problem below. except jwt.ExpiredSignature: rest_framework.request.WrappedAttributeError: module 'jwt' has no attribute 'ExpiredSignature' You can fix this as below, # /venv/lib/python3.9/site-packages/rest_framework_jwt/authentication.py try: payload = jwt_decode_handler(jwt_value) except jwt.ExpiredSignatureError: # ExpiredSignature => no more exists msg = _('Signature has expired.') raise exceptions.AuthenticationFailed(msg) except jwt.DecodeError: msg = _('Error decoding signature.') raise exceptions.AuthenticationFailed(msg) jwt2.3.0-authentication-except This problem has occurred because the jwt version was upgraded to 2.3.0, and ExpiredSignature no longer exists. But we recommend you downgrade the jwt version to 1.7.1, as we don't know all the changes made by the upgrade. jwt2.3.0-init.py jwt1.7.1-init.py A: I had PyJWT-2.3.0 installed and I was not even guessing that this issue could be related to the version. 
The above answers helped me to crack this, so I am just writing the same thing with some extra log details. pip install PyJWT==1.7.1 (venv) ip-192-168-1-36:django_proj rishi$ pip install PyJWT==1.7.1 Collecting PyJWT==1.7.1 Using cached PyJWT-1.7.1-py2.py3-none-any.whl (18 kB) Installing collected packages: PyJWT Attempting uninstall: PyJWT Found existing installation: PyJWT 2.3.0 Uninstalling PyJWT-2.3.0: Successfully uninstalled PyJWT-2.3.0 Successfully installed PyJWT-1.7.1 Finally my problem got resolved. Thanks to the previous answers for helping me with this. A: For the updated version of PyJWT (2.6.0), refer to the documentation. It shows how to encode and decode with the latest version of PyJWT. jwt.decode(encoded_jwt, "secret", algorithms=["HS256"]) Notice how you pass in the encoded JWT.
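As a quick illustration of the 2.x behaviour (my own sketch, with a hypothetical payload and secret, not taken from the answers above): in PyJWT >= 2.0, encode() already returns a str, which is exactly why the old .decode('utf-8') call fails.
import jwt  # PyJWT 2.x

payload = {"user_id": 42}  # hypothetical payload, purely for illustration
secret = "secret"          # hypothetical signing key

# encode() returns a str in PyJWT 2.x, so no .decode('utf-8') is needed
token = jwt.encode(payload, secret, algorithm="HS256")

# decoding takes the encoded token plus the list of accepted algorithms
decoded = jwt.decode(token, secret, algorithms=["HS256"])
print(decoded)  # {'user_id': 42}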
'str' object has no attribute 'decode' on djangorestframework_simplejwt
I was trying to follow this quick start from the djangorestframework-simplejwt documentation, at the link https://django-rest-framework-simplejwt.readthedocs.io/en/latest/getting_started.html But I have a problem when trying to obtain a token; it always returns this error: 'str' object has no attribute 'decode' Edited: this is my code in urls.py from django.contrib import admin from django.urls import path from rest_framework_simplejwt import views as jwt_views from core.views import HelloView urlpatterns = [ path('admin/', admin.site.urls), path('api/token/', jwt_views.TokenObtainPairView.as_view(), name='token_obtain_pair'), path('api/token/refresh/', jwt_views.TokenRefreshView.as_view(), name='token_refresh'), path('hello/', HelloView.as_view(), name='hello'), ] settings.py ... INSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'rest_framework', ] ... REST_FRAMEWORK = { 'DEFAULT_AUTHENTICATION_CLASSES': [ 'rest_framework_simplejwt.authentication.JWTAuthentication', ], } ... views.py from django.shortcuts import render # Create your views here. from rest_framework.views import APIView from rest_framework.response import Response from rest_framework.permissions import IsAuthenticated class HelloView(APIView): permission_classes = (IsAuthenticated,) def get(self, request): content = {'message': 'Hello, World!'} return Response(content)
[ "Downgrading PyJWT did the job for me.\nTo achieve that, change the corresponding line in your requirements.txt to\nPyJWT==v1.7.1\n\nor install the specified package with:\npip install pyjwt==v1.7.1\n\n", "for me let downgrade jwt to version 1.7.1\nto do that update corresponding line requirement.txt\nor if not PyJWT line than add this line\nPyJWT==v1.7.1\n\n", "Remove .decode('utf-8') in \"/venv/lib/python3.9/site-packages/rest_framework_jwt/utils.py\" like\n# /venv/lib/python3.9/site-packages/rest_framework_jwt/utils.py\ndef jwt_encode_handler(payload):\nkey = api_settings.JWT_PRIVATE_KEY or jwt_get_secret_key(payload)\nreturn jwt.encode(\n payload,\n key,\n api_settings.JWT_ALGORITHM\n)#.decode('utf-8') ==> delete this\n\njwt2.3.0-utils-encode_handler\nAs you installed simple-jwt, jwt has been upgraded to version-2.3.0 from 1.7.1(maybe).\nand by this,\njwt2.3.0-api_jwt-encode\n\"-> str:\" has been added at the method \"encode\", and this means \"encode\" returns \"str\".\nso we don't need .decode('utf-8') anymore in utils.py in rest_framework_jwt.\nand if you run server again you may encounter the problem below.\nexcept jwt.ExpiredSignature:\nrest_framework.request.WrappedAttributeError: module 'jwt' has no attribute 'ExpiredSignature'\nyou can fix this as below,\n# /venv/lib/python3.9/site-packages/rest_framework_jwt/authentication.py\ntry:\n payload = jwt_decode_handler(jwt_value)\nexcept jwt.ExpiredSignatureError: # ExpiredSignature => no more exists\n msg = _('Signature has expired.')\n raise exceptions.AuthenticationFailed(msg)\nexcept jwt.DecodeError:\n msg = _('Error decoding signature.')\n raise exceptions.AuthenticationFailed(msg)\n\njwt2.3.0-authentication-except\nthis problem has occured because the jwt-version has been upgraded to 2.3.0, and ExpiredSignature doesn't exist anymore.\nBut we recommend you to downgrade jwt version to 1.7.1 as we don't know all the changes made by upgrade.\njwt2.3.0-init.py\njwt1.7.1-init.py\n", "I had PyJWT-2.3.0 installed and I was not even guessing if this issue could be related to the version.\nThe above answers helped me to crack this. So I am just writing the same thing with some extra log details.\n\npip install PyJWT==1.7.1\n\n(venv) ip-192-168-1-36:django_proj rishi$ pip install PyJWT==1.7.1\nCollecting PyJWT==1.7.1\n Using cached PyJWT-1.7.1-py2.py3-none-any.whl (18 kB)\nInstalling collected packages: PyJWT\n Attempting uninstall: PyJWT\n Found existing installation: PyJWT 2.3.0\n Uninstalling PyJWT-2.3.0:\n Successfully uninstalled PyJWT-2.3.0\nSuccessfully installed PyJWT-1.7.1\n\nFinally my problem got resolved. Thanks to the previous answers to help me on this.\n", "For the updated version of PyJWT (2.6.0), refer to the documentation.\nIt shows how to encode and decode with the latest version of PyJWT.\njwt.decode(encoded_jwt, \"secret\", algorithms=[\"HS256\"])\n\nNotice, how you pass in the encoded jwt.\n\n" ]
[ 21, 6, 2, 1, 0 ]
[]
[]
[ "django", "django_rest_framework", "jwt" ]
stackoverflow_0067089855_django_django_rest_framework_jwt.txt
Q: MongoDB - How to get last inserted record from a collection asynchronously I found a way to do it synchronously but I am unable to do it asynchronously. public async Task<UserModel> GetLastCreatedUser() { return _users .Find(_ => true) .SortByDescending(u => u.DateCreated) .Limit(1); } The synchronous way gives me this error: Error CS0266 Cannot implicitly convert type 'MongoDB.Driver.IFindFluent<BankingAppLibrary.Models.UserModel, BankingAppLibrary.Models.UserModel>' to 'BankingAppLibrary.Models.UserModel'. An explicit conversion exists (are you missing a cast?) BankingAppLibrary C:\Users\lucas\source\repos\BankingApp\BankingAppLibrary\DataAccess\MongoUserData.cs 36 Active A: You need to add .FirstOrDefaultAsync() at the end of IFindFluent<UserModel, UserModel> in order to return the value with Task<UserModel>. And since your method is an asynchronous method, don't forget to add await as well. Your code should be as below: public async Task<UserModel> GetLastCreatedUser() { return await _users .Find(_ => true) .SortByDescending(u => u.DateCreated) .Limit(1) .FirstOrDefaultAsync(); }
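For comparison only (my own sketch, not part of the answer, and assuming the same DateCreated field): the equivalent "latest document" query written asynchronously in Python with Motor would look roughly like this; the connection string and database/collection names are placeholders.
import asyncio
from motor.motor_asyncio import AsyncIOMotorClient

async def get_last_created_user():
    client = AsyncIOMotorClient("mongodb://localhost:27017")  # hypothetical connection string
    users = client["bank"]["users"]  # hypothetical database and collection names
    # sort descending on DateCreated and take the first match,
    # mirroring SortByDescending + Limit(1) + FirstOrDefaultAsync
    return await users.find_one({}, sort=[("DateCreated", -1)])

print(asyncio.run(get_last_created_user()))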
MongoDB - How to get last inserted record from a collection asynchronously
I found a way to do it synchronously but I am unable to do it asynchronously. public async Task<UserModel> GetLastCreatedUser() { return _users .Find(_ => true) .SortByDescending(u => u.DateCreated) .Limit(1); } The synchronous way gives me this error: Error CS0266 Cannot implicitly convert type 'MongoDB.Driver.IFindFluent<BankingAppLibrary.Models.UserModel, BankingAppLibrary.Models.UserModel>' to 'BankingAppLibrary.Models.UserModel'. An explicit conversion exists (are you missing a cast?) BankingAppLibrary C:\Users\lucas\source\repos\BankingApp\BankingAppLibrary\DataAccess\MongoUserData.cs 36 Active
[ "You need to add .FirstOrDefaultAsync() at the end of IFindFluent<UserModel, UserModel> in order to return the value with Task<UserModel>.\nAnd since your method is an asynchronous method, don't forget to add await as well.\nYour code should be as below:\npublic async Task<UserModel> GetLastCreatedUser()\n{\n return await _users\n .Find(_ => true)\n .SortByDescending(u => u.DateCreated)\n .Limit(1)\n .FirstOrDefaultAsync();\n}\n\n" ]
[ 1 ]
[]
[]
[ "asynchronous", "c#", "mongodb", "mongodb_.net_driver" ]
stackoverflow_0074669396_asynchronous_c#_mongodb_mongodb_.net_driver.txt
Q: How to change the position of each PVector object in an ArrayList that is made of ArrayLists? I am currently working on an ArrayList of ArrayLists, and I want each entry of the outer ArrayList to have a different location. So I tried to use translate(); to distinguish the positions of the outer ArrayList's entries, but it doesn't work: on screen it is still showing all the ArrayLists in one place. I have also tried to multiply the point locations, as in point(v3.x * gap, v3.y * gap, v3.z * gap); but that doesn't work either. Does anyone know if there is a way to alter the location of the outer ArrayList's entries? Here is the code I am working on: //import import peasy.*; //import variables PeasyCam acam; //position variable for pointSphere float x; float y; float z; float gap = 500; //arraylist of arraylist ArrayList<ArrayList <PVector>> sphereList = new ArrayList<ArrayList<PVector>>(); //empty arraylist ArrayList<PVector> sphere1 = new ArrayList<PVector>(); void setup() { size(1920, 1080, P3D); //camera setting acam = new PeasyCam(this, 500, 500, 500, 2000); acam.rotateX(0); acam.rotateY(0); acam.rotateZ(0); pointSphere(270, 27, 27); } void draw() { background(0); for (int i = 0; i < sphere1.size(); i++) { PVector v1; v1 = sphere1.get(i); stroke(255); strokeWeight(random(2, 10)); point(v1.x, v1.y, v1.z); for (int j = 0; j < sphereList.size(); j++) { PVector v3; v3=sphereList.get(i).get(j); pushMatrix(); gap = gap + 500; translate(1000 + gap, 1000 + gap, 1000 + gap); point(v3.x, v3.y, v3.z); popMatrix(); } } } void pointSphere (float r, float uAmount, float wAmount) { for (float u= 0; u < 180; u += 180/uAmount) { for (float w = 0; w < 360; w += 360/wAmount) { //define PVector PVector vS1 = new PVector(r*sin(radians(u))*cos(radians(w)), r*sin(radians(u))*sin(radians(w)), r*cos(radians(u))); //storing the location of Pvector into sphere1 sphere1.add(vS1); sphereList.add(sphere1); } } } A: You have most of the "ingredients" in place (sphere points, a scene to render into, translate to offset each sphere, etc.), however there are two sections that are confusing: 1.ArrayList<ArrayList <PVector>> sphereList = new ArrayList<ArrayList<PVector>>(); this would store a list of lists of points. Do you need to render multiple spheres each with a different set of points (e.g. different float r, float uAmount, float wAmount values ?) If so, this makes sense. However if you simply want to render the same sphere in different locations, you can get away with 1 list of points (i.e. just sphere1) and render it in multiple locations. The nested for loops in draw(): the first loop accesses and draws the sphere points. The second loop iterates for every sphere point (which is a lot). This second loop would iterate through each point and translate. If simply rendering the same sphere 3 times in 3 locations, you can translate the group of points (i.e. before the for loop rendering each point). Here's one way of writing it: I recommend tweaking the pointSphere() points function so it returns the points (instead of using the hardcoded list to add points to). The advantage is you can calculate sphere points with various settings that can be stored into independent lists of points (as opposed to the single hardcoded list). (Additionally, you could easily reuse this function in other sketches, as it only requires copying the function, independent of the hardcoded list of points) e.g. 
ArrayList<PVector> pointSpherePoints(float r, float uAmount, float wAmount) { ArrayList<PVector> points = new ArrayList<PVector>(); for (float u= 0; u < 180; u += 180/uAmount) { for (float w = 0; w < 360; w += 360/wAmount) { //define PVector PVector vS1 = new PVector(r*sin(radians(u))*cos(radians(w)), r*sin(radians(u))*sin(radians(w)), r*cos(radians(u))); //storing the location of Pvector points.add(vS1); } } return points; } you can write a function that renders a list of points which can be reused. void drawSpherePoints(ArrayList<PVector> points){ for (int i = 0 ; i < points.size(); i++) { PVector v = points.get(i); stroke(255); strokeWeight(random(2, 10)); point(v.x, v.y, v.z); } } Putting the two together makes it simple to render the sphere in different places: //import import peasy.*; //import variables PeasyCam acam; ArrayList<PVector> sphere1 = pointSpherePoints(270, 27, 27); void setup() { size(1920, 1080, P3D); //camera setting acam = new PeasyCam(this, 500, 500, 500, 2000); acam.rotateX(0); acam.rotateY(0); acam.rotateZ(0); } void draw() { background(0); translate(width * 0.5, height * 0.5); float gap = -1000; for(int i = 0 ; i < 3; i++){ gap = gap + 500; translate(1000 + gap, 1000 + gap, 1000 + gap); drawSpherePoints(sphere1); } } void drawSpherePoints(ArrayList<PVector> points){ for (int i = 0 ; i < points.size(); i++) { PVector v = points.get(i); stroke(255); strokeWeight(random(2, 10)); point(v.x, v.y, v.z); } } ArrayList<PVector> pointSpherePoints(float r, float uAmount, float wAmount) { ArrayList<PVector> points = new ArrayList<PVector>(); for (float u= 0; u < 180; u += 180/uAmount) { for (float w = 0; w < 360; w += 360/wAmount) { //define PVector PVector vS1 = new PVector(r*sin(radians(u))*cos(radians(w)), r*sin(radians(u))*sin(radians(w)), r*cos(radians(u))); //storing the location of Pvector points.add(vS1); } } return points; } The encapsulation makes it easier to draw spheres with independent parameters (radius, detail level): //import import peasy.*; //import variables PeasyCam acam; int numSpheres = 3; ArrayList<ArrayList<PVector>> spheres = new ArrayList<ArrayList<PVector>>(); void setup() { size(1920, 1080, P3D); //camera setting acam = new PeasyCam(this, 500, 500, 500, 2000); for(int i = 0 ; i < numSpheres ; i++){ int detailLevel = i + 1; spheres.add(pointSpherePoints(30 * (detailLevel * 2), 9 * detailLevel, 9 * detailLevel)); } } void draw() { background(0); translate(width * 0.5, height * 0.5); for(int i = 0 ; i < numSpheres ; i++){ ArrayList<PVector> sphere = spheres.get(i); translate(100 * i, 0, 0); drawSpherePoints(sphere); } } void drawSpherePoints(ArrayList<PVector> points){ for (int i = 0 ; i < points.size(); i++) { PVector v = points.get(i); stroke(255); strokeWeight(random(2, 10)); point(v.x, v.y, v.z); } } ArrayList<PVector> pointSpherePoints(float r, float uAmount, float wAmount) { ArrayList<PVector> points = new ArrayList<PVector>(); for (float u= 0; u < 180; u += 180/uAmount) { for (float w = 0; w < 360; w += 360/wAmount) { //define PVector PVector vS1 = new PVector(r*sin(radians(u))*cos(radians(w)), r*sin(radians(u))*sin(radians(w)), r*cos(radians(u))); //storing the location of Pvector points.add(vS1); } } return points; } or even a sphere of spheres: //import import peasy.*; //import variables PeasyCam acam; ArrayList<PVector> sphere1 = pointSpherePoints(270, 27, 27); void setup() { size(1920, 1080, P3D); //camera setting acam = new PeasyCam(this, 500, 500, 500, 2000); } void draw() { background(0); translate(width * 0.5, height * 0.5); for(int i 
= 0 ; i < sphere1.size(); i += 10){ // scale vector point PVector v = PVector.mult(sphere1.get(i), 10); pushMatrix(); translate(v.x, v.y, v.z); drawSpherePoints(sphere1); popMatrix(); } } void drawSpherePoints(ArrayList<PVector> points){ for (int i = 0 ; i < points.size(); i++) { PVector v = points.get(i); stroke(255); strokeWeight(random(2, 10)); point(v.x, v.y, v.z); } } ArrayList<PVector> pointSpherePoints(float r, float uAmount, float wAmount) { ArrayList<PVector> points = new ArrayList<PVector>(); for (float u= 0; u < 180; u += 180/uAmount) { for (float w = 0; w < 360; w += 360/wAmount) { //define PVector PVector vS1 = new PVector(r*sin(radians(u))*cos(radians(w)), r*sin(radians(u))*sin(radians(w)), r*cos(radians(u))); //storing the location of Pvector points.add(vS1); } } return points; }
How to change the position of each PVector object in an ArrayList that is made of ArrayLists?
I am currently working on an ArrayList of ArrayLists, and I want each entry of the outer ArrayList to have a different location. So I tried to use translate(); to distinguish the positions of the outer ArrayList's entries, but it doesn't work: on screen it is still showing all the ArrayLists in one place. I have also tried to multiply the point locations, as in point(v3.x * gap, v3.y * gap, v3.z * gap); but that doesn't work either. Does anyone know if there is a way to alter the location of the outer ArrayList's entries? Here is the code I am working on: //import import peasy.*; //import variables PeasyCam acam; //position variable for pointSphere float x; float y; float z; float gap = 500; //arraylist of arraylist ArrayList<ArrayList <PVector>> sphereList = new ArrayList<ArrayList<PVector>>(); //empty arraylist ArrayList<PVector> sphere1 = new ArrayList<PVector>(); void setup() { size(1920, 1080, P3D); //camera setting acam = new PeasyCam(this, 500, 500, 500, 2000); acam.rotateX(0); acam.rotateY(0); acam.rotateZ(0); pointSphere(270, 27, 27); } void draw() { background(0); for (int i = 0; i < sphere1.size(); i++) { PVector v1; v1 = sphere1.get(i); stroke(255); strokeWeight(random(2, 10)); point(v1.x, v1.y, v1.z); for (int j = 0; j < sphereList.size(); j++) { PVector v3; v3=sphereList.get(i).get(j); pushMatrix(); gap = gap + 500; translate(1000 + gap, 1000 + gap, 1000 + gap); point(v3.x, v3.y, v3.z); popMatrix(); } } } void pointSphere (float r, float uAmount, float wAmount) { for (float u= 0; u < 180; u += 180/uAmount) { for (float w = 0; w < 360; w += 360/wAmount) { //define PVector PVector vS1 = new PVector(r*sin(radians(u))*cos(radians(w)), r*sin(radians(u))*sin(radians(w)), r*cos(radians(u))); //storing the location of Pvector into sphere1 sphere1.add(vS1); sphereList.add(sphere1); } } }
[ "You have most of the \"ingredients\" in place (sphere points, a scene to render into, translate to offset each sphere, etc.), however there are two sections that are confusing:\n1.ArrayList<ArrayList <PVector>> sphereList = new ArrayList<ArrayList<PVector>>(); this would store a list of lists of points. Do you need to render multiple spheres each with a different set of points (e.g. different float r, float uAmount, float wAmount values ?) If so, this makes sense. However if you simply want to render the same sphere in different locations, you can get away with 1 list of points (i.e. just sphere1) and render it in multiple locations.\n\nThe nested for loops in draw(): the first loop accesses and draws the sphere points. The second loop iterates for every sphere point (which is a lot). This second loop would iterate through each point and translate.\n\nIf simply rendering the same sphere 3 times in 3 locations, you can translate the group of points (i.e. before the for loop rendering each point).\nHere's one way of writing it:\n\nI recommend tweaking the pointSphere() points function so it returns the points (instead of using the hardcoded list to add points to). The advantage is you can calculate sphere points with various settings that can be stored into independent lists of points (as opposed to the single hardcoded list). (Additionally, you could easily reuse this function in other sketches as it requires copything the function indepdent of the hardcoded list of points)\ne.g.\n\nArrayList<PVector> pointSpherePoints(float r, float uAmount, float wAmount) {\n ArrayList<PVector> points = new ArrayList<PVector>();\n \n for (float u= 0; u < 180; u += 180/uAmount) {\n for (float w = 0; w < 360; w += 360/wAmount) {\n\n //define PVector\n PVector vS1 = new PVector(r*sin(radians(u))*cos(radians(w)), r*sin(radians(u))*sin(radians(w)), r*cos(radians(u)));\n //storing the location of Pvector\n points.add(vS1);\n }\n }\n \n return points;\n}\n\n\nyou can write a function that renders a list of points which can be reused.\n\nvoid drawSpherePoints(ArrayList<PVector> points){\n for (int i = 0 ; i < points.size(); i++) {\n PVector v = points.get(i);\n stroke(255);\n strokeWeight(random(2, 10));\n point(v.x, v.y, v.z);\n }\n}\n\nPutting the two together makes it simple to render the sphere in different places:\n//import\nimport peasy.*;\n\n//import variables\nPeasyCam acam;\n\nArrayList<PVector> sphere1 = pointSpherePoints(270, 27, 27);\n\nvoid setup() {\n size(1920, 1080, P3D);\n\n\n //camera setting\n acam = new PeasyCam(this, 500, 500, 500, 2000);\n acam.rotateX(0);\n acam.rotateY(0);\n acam.rotateZ(0);\n}\n\nvoid draw() {\n background(0);\n translate(width * 0.5, height * 0.5);\n \n float gap = -1000;\n for(int i = 0 ; i < 3; i++){\n gap = gap + 500;\n translate(1000 + gap, 1000 + gap, 1000 + gap);\n drawSpherePoints(sphere1);\n }\n}\n\nvoid drawSpherePoints(ArrayList<PVector> points){\n for (int i = 0 ; i < points.size(); i++) {\n PVector v = points.get(i);\n stroke(255);\n strokeWeight(random(2, 10));\n point(v.x, v.y, v.z);\n }\n}\n\n\nArrayList<PVector> pointSpherePoints(float r, float uAmount, float wAmount) {\n ArrayList<PVector> points = new ArrayList<PVector>();\n \n for (float u= 0; u < 180; u += 180/uAmount) {\n for (float w = 0; w < 360; w += 360/wAmount) {\n\n //define PVector\n PVector vS1 = new PVector(r*sin(radians(u))*cos(radians(w)), r*sin(radians(u))*sin(radians(w)), r*cos(radians(u)));\n //storing the location of Pvector\n points.add(vS1);\n }\n }\n \n return points;\n}\n\nThe 
encapsulation makes it easier to draw spheres with independent parameters (radius, detail level):\n//import\nimport peasy.*;\n\n//import variables\nPeasyCam acam;\n\nint numSpheres = 3;\nArrayList<ArrayList<PVector>> spheres = new ArrayList<ArrayList<PVector>>();\n\nvoid setup() {\n size(1920, 1080, P3D);\n\n\n //camera setting\n acam = new PeasyCam(this, 500, 500, 500, 2000);\n \n for(int i = 0 ; i < numSpheres ; i++){\n int detailLevel = i + 1;\n spheres.add(pointSpherePoints(30 * (detailLevel * 2), 9 * detailLevel, 9 * detailLevel));\n }\n}\n\nvoid draw() {\n background(0);\n translate(width * 0.5, height * 0.5);\n \n for(int i = 0 ; i < numSpheres ; i++){\n ArrayList<PVector> sphere = spheres.get(i);\n translate(100 * i, 0, 0);\n drawSpherePoints(sphere);\n }\n}\n\nvoid drawSpherePoints(ArrayList<PVector> points){\n for (int i = 0 ; i < points.size(); i++) {\n PVector v = points.get(i);\n stroke(255);\n strokeWeight(random(2, 10));\n point(v.x, v.y, v.z);\n }\n}\n\n\nArrayList<PVector> pointSpherePoints(float r, float uAmount, float wAmount) {\n ArrayList<PVector> points = new ArrayList<PVector>();\n \n for (float u= 0; u < 180; u += 180/uAmount) {\n for (float w = 0; w < 360; w += 360/wAmount) {\n\n //define PVector\n PVector vS1 = new PVector(r*sin(radians(u))*cos(radians(w)), r*sin(radians(u))*sin(radians(w)), r*cos(radians(u)));\n //storing the location of Pvector\n points.add(vS1);\n }\n }\n \n return points;\n}\n\nor even a sphere of spheres:\n//import\nimport peasy.*;\n\n//import variables\nPeasyCam acam;\n\nArrayList<PVector> sphere1 = pointSpherePoints(270, 27, 27);\n\nvoid setup() {\n size(1920, 1080, P3D);\n\n\n //camera setting\n acam = new PeasyCam(this, 500, 500, 500, 2000);\n}\n\nvoid draw() {\n background(0);\n translate(width * 0.5, height * 0.5);\n \n for(int i = 0 ; i < sphere1.size(); i += 10){\n // scale vector point\n PVector v = PVector.mult(sphere1.get(i), 10);\n pushMatrix();\n translate(v.x, v.y, v.z);\n drawSpherePoints(sphere1);\n popMatrix();\n }\n}\n\nvoid drawSpherePoints(ArrayList<PVector> points){\n for (int i = 0 ; i < points.size(); i++) {\n PVector v = points.get(i);\n stroke(255);\n strokeWeight(random(2, 10));\n point(v.x, v.y, v.z);\n }\n}\n\n\nArrayList<PVector> pointSpherePoints(float r, float uAmount, float wAmount) {\n ArrayList<PVector> points = new ArrayList<PVector>();\n \n for (float u= 0; u < 180; u += 180/uAmount) {\n for (float w = 0; w < 360; w += 360/wAmount) {\n\n //define PVector\n PVector vS1 = new PVector(r*sin(radians(u))*cos(radians(w)), r*sin(radians(u))*sin(radians(w)), r*cos(radians(u)));\n //storing the location of Pvector\n points.add(vS1);\n }\n }\n \n return points;\n}\n\n" ]
[ 0 ]
[]
[]
[ "processing" ]
stackoverflow_0074660131_processing.txt
Q: No builds available in TestFlight for internal users I've uploaded my build to App Store Connect and it's "Waiting for approval"; however, my intention is to test with a few internal users. I've got them all in an internal group, and they should be able to test even if the app is not reviewed. When I go to invite them, it states there are no builds available, and I don't understand why. I'm adding a picture of what my TestFlight looks like. Is there anything I have to do? Thank you for any help. A: Following the discussion here, it seems to be a bug on Apple's side that can be worked around by adding the "ITSAppUsesNonExemptEncryption" key to your Info.plist. A: @kjyv is correct. The alternate approach: delete all the testers within your internal testers group who are not receiving access to your latest build, then reinvite them. Give it a few minutes for Apple to propagate the changes, and you should receive an invite via TestFlight to the latest build. A: After the uploaded build was processed, there was an option for me in the App Store Connect window to add the build, and after adding it there, it started appearing in TestFlight as well.
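For reference, the workaround from the first answer amounts to adding this entry to Info.plist (false is the usual value when the app uses no non-exempt encryption; set it according to your app):
<key>ITSAppUsesNonExemptEncryption</key>
<false/>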
No builds available in TestFlight for internal users
I've uploaded my build to App Store Connect and it's "Waiting for approval"; however, my intention is to test with a few internal users. I've got them all in an internal group, and they should be able to test even if the app is not reviewed. When I go to invite them, it states there are no builds available, and I don't understand why. I'm adding a picture of what my TestFlight looks like. Is there anything I have to do? Thank you for any help.
[ "Following the discussion here, it seems to be a bug on Apple's side that can be worked around by adding the \"ITSAppUsesNonExemptEncryption\" key to your Info.plist\n", "@kjyv is correct.\nThe alternate approach:\nDelete all the testers within your internal testers group who is not receiving access to your latest build, then reinvite them.\nGive it a few minutes for Apple to propogate the changes and you should receive an invite via TestFlight to the latest build.\n", "After the uploaded build was processed, there was an option for me in the app store window to add the build, and after adding it there, it started appearing in test flight as well.\n" ]
[ 0, 0, 0 ]
[]
[]
[ "build", "ios", "testflight", "user_testing" ]
stackoverflow_0067109730_build_ios_testflight_user_testing.txt
Q: Django - How to access prefetch_related fields? students = Student.objects.prefetch_related('user__applications').all() students.user__applications # Error So a student has a foreign key to a User object which is associated with a list of applications. But how do I access the list of applications from the Student object? A: Neither prefetch_related nor select_related change the way you access the related data; you do it via the fields or reverse relations, just as you would if you didn't use those methods. In this case, you have a queryset composed of students; each one of those will have a user attribute, which gives a User object which in turn has an applications field which gives another queryset. So, for example: students[0].user.applications.all()[0] A: You need to have the related_name attribute set in your ForeignKey field in the Application model in order to call it from the Student model. For example: class Student(models.Model): email = models.CharField(max_length=100, unique=True) created = models.DateTimeField(auto_now_add=True) class Application(models.Model): student = models.ForeignKey(Student,related_name="applications", on_delete=models.CASCADE) """other fields""" Then you can call it from your Student model like this: students = Student.objects.all().prefetch_related('applications') You will have a list of students, so you need to access each student object; then you can access that particular student's applications like this: for student in students: app = student.applications.all() A: While prefetch_related doesn't change the way you access the related data, if you only care about the model at the end of the "joining chain" in a prefetch_related call, you can use QuerySet.union(). Something like this: students = Student.objects.prefetch_related('user__applications').all() application_qs_list = [student.user.applications.all() for student in students] applications = application_qs_list[0].union(*application_qs_list[1:]) By default, this will remove duplicate entries. If you want to change this behavior, you can pass all=True to union().
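A minimal sketch of the models this question seems to assume (my own guess, based on the user__applications lookup; your real fields will differ):
from django.contrib.auth.models import User
from django.db import models

class Student(models.Model):
    # each student points at a user account
    user = models.ForeignKey(User, on_delete=models.CASCADE)

class Application(models.Model):
    # the reverse name 'applications' is what makes user.applications work
    user = models.ForeignKey(User, related_name="applications", on_delete=models.CASCADE)

# with this shape, the prefetch and the access pattern line up:
students = Student.objects.prefetch_related("user__applications")
for student in students:
    apps = student.user.applications.all()  # served from the prefetch cache, no extra query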
Django - How to access prefetch_related fields?
students = Student.objects.prefetch_related('user__applications').all() students.user__applications # Error So a student has a foreign key to a User object which is associated with a list of applications. But how do I access the list of applications from the Student object?
[ "Neither prefetch_related nor select_related change the way you access the related data; you do it via the fields or reverse relations, just as you would if you didn't use those methods.\nIn this case, you have a queryset composed of students; each one of those will have a user attribute, which gives a User object which in turn has an applications field which gives another queryset. So, for example:\nstudents[0].user.applications.all()[0]\n\n", "You need to have related_name attribute set in your ForeignKey field in Application model in order to call it from Student Model. For example:\nclass Student(models.Model):\n email = models.CharField(max_length=100, unique=True)\n created = models.DateTimeField(auto_now_add=True)\n\nclass Application(models.Model):\n student = models.ForeignKey(Student,related_name=\"applications\", on_delete=models.CASCADE)\n \"\"\"other fields\"\"\"\n\nThen you can call it from your student model like this:\nstudents = Student.objects.all().prefetch_related('applications')\n\nYou will have list of students. So you need to access each student object and then you can access that particular student's applications like this:\nfor student in students:\n app = student.applications\n\n", "While prefetch_related doesn't change the way you access the related data, if you only care about the model at the end of the \"joining chain\" in a prefetch_related call, you can use QuerySet.union().\nSomething like this:\nstudents = Student.objects.prefetch_related('user__applications').all()\napplication_qs_list = [student.user.applications.all() for student in students]\napplications = application_qs_list[0].union(application_qs_list[1:])\n\nBy default, this will remove duplicate entries. If you want to change this behavior, you can pass all=True to union().\n" ]
[ 3, 0, 0 ]
[]
[]
[ "django" ]
stackoverflow_0050258861_django.txt
Q: CUDA atomicAdd being run too many times I am trying to initialise a numpy matrix with a preset initial cache size, and then have each CUDA thread run atomicAdd at most once, hopefully as long as the accumulated sum is still within the initial cache size. The problem here is that when the initial cache size (500) is smaller than the number of threads (1024), it returns a very unexpected number: the accumulated sum becomes very large (1140850688). Could anyone kindly advise why it does not work, or how to make it work? Ideally I would like the "if (result_count[0] < InitialResultCacheSize - 1)" to stop atomicAdd from accumulating beyond the initial cache size (InitialResultCacheSize). import os _path = r"C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.33.31629\bin\Hostx64\x64" if os.system("cl.exe"): os.environ['PATH'] += ';' + _path if os.system("cl.exe"): raise RuntimeError("cl.exe still not found, path probably incorrect") import pycuda.driver as cuda import pycuda.autoinit from pycuda.compiler import SourceModule import pandas as pd import numpy as np RESULT_COLUMN_COUNT = 4 InitialResultCacheSize = 10000 # InitialResultCacheSize = 500 number_matrix = np.zeros(InitialResultCacheSize * RESULT_COLUMN_COUNT) number_matrix = number_matrix.astype(np.float32) number_matrix_gpu = cuda.mem_alloc(number_matrix.nbytes) cuda.memcpy_htod(number_matrix_gpu, number_matrix) result_count = np.int32(0) result_count_gpu = cuda.mem_alloc(result_count.nbytes) cuda.memcpy_htod(result_count_gpu, result_count) mod = SourceModule(""" #include <cstdlib> __global__ void test_cuda_utilisation(int InitialResultCacheSize, int RESULT_COLUMN_COUNT, float *number_matrix, int *result_count) { int result_index, result_index_offset; if (result_count[0] < InitialResultCacheSize - 1) { result_index = atomicAdd(result_count,1); result_index_offset = result_index * RESULT_COLUMN_COUNT; number_matrix[result_index_offset + 0] = result_index; number_matrix[result_index_offset + 1] = result_count[0]; number_matrix[result_index_offset + 2] = InitialResultCacheSize; number_matrix[result_index_offset + 3] = RESULT_COLUMN_COUNT; } } """) func = mod.get_function("test_cuda_utilisation") func(np.int32(InitialResultCacheSize), np.int32(RESULT_COLUMN_COUNT), number_matrix_gpu, result_count_gpu, block=(4,16,16)) result_count_out = np.empty_like(result_count) cuda.memcpy_dtoh(result_count_out, result_count_gpu) print('result_count_out = ' + str(result_count_out) + ' and InitialResultCacheSize is ' + str(InitialResultCacheSize)) number_matrix_out = np.empty((result_count_out, RESULT_COLUMN_COUNT), dtype=np.float32) cuda.memcpy_dtoh(number_matrix_out, number_matrix_gpu) print('number_matrix_out is with len ' + str(len(number_matrix_out)) + ' x ' + str(len(number_matrix_out[0]))) print(number_matrix_out) Result of InitialResultCacheSize = 10000 'cl.exe' is not recognized as an internal or external command, operable program or batch file. Microsoft (R) C/C++ Optimizing Compiler Version 19.33.31629 for x64 Copyright (C) Microsoft Corporation. All rights reserved. result_count_out = 1024 and InitialResultCacheSize is 10000 number_matrix_out is with len 1024 x 4 [[0.000e+00 1.024e+03 1.000e+04 4.000e+00] [1.000e+00 1.024e+03 1.000e+04 4.000e+00] [2.000e+00 1.024e+03 1.000e+04 4.000e+00] ... 
[1.021e+03 1.024e+03 1.000e+04 4.000e+00] [1.022e+03 1.024e+03 1.000e+04 4.000e+00] [1.023e+03 1.024e+03 1.000e+04 4.000e+00]] Result of InitialResultCacheSize = 500 'cl.exe' is not recognized as an internal or external command, operable program or batch file. Microsoft (R) C/C++ Optimizing Compiler Version 19.33.31629 for x64 Copyright (C) Microsoft Corporation. All rights reserved. result_count_out = 1140850688 and InitialResultCacheSize is 500 Traceback (most recent call last): File C:\PythonProjects\TradeAnalysis\Test\TestCUDAUtilisation.py:67 in <module> cuda.memcpy_dtoh(number_matrix_out, number_matrix_gpu) LogicError: cuMemcpyDtoH failed: invalid argument Trial on another approach, not sure if it is the legal behaviour approach import os # _path = r"D:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.28.29910\bin\Hostx64\x64" _path = r"C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.33.31629\bin\Hostx64\x64" # _path = r"C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.33.31629\bin\Hostx64" # _path = r"C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.33.31629\bin\Hostx64\x64\" if os.system("cl.exe"): os.environ['PATH'] += ';' + _path if os.system("cl.exe"): raise RuntimeError("cl.exe still not found, path probably incorrect") import pycuda.driver as cuda import pycuda.autoinit from pycuda.compiler import SourceModule import pandas as pd import numpy as np RESULT_COLUMN_COUNT = 4 # InitialResultCacheSize = 10000 InitialResultCacheSize = 500 number_matrix = np.zeros(InitialResultCacheSize * RESULT_COLUMN_COUNT) number_matrix = number_matrix.astype(np.float32) number_matrix_gpu = cuda.mem_alloc(number_matrix.nbytes) cuda.memcpy_htod(number_matrix_gpu, number_matrix) result_count = 0 mod = SourceModule(""" #include <cstdlib> __global__ void test_cuda_utilisation(int InitialResultCacheSize, int RESULT_COLUMN_COUNT, int result_count, float *number_matrix) { int result_index, result_index_offset; result_index = atomicAdd(&result_count,1); if (result_index < InitialResultCacheSize - 1) { result_index_offset = result_index * RESULT_COLUMN_COUNT; number_matrix[result_index_offset + 0] = result_index; number_matrix[result_index_offset + 1] = result_count; number_matrix[result_index_offset + 2] = InitialResultCacheSize; number_matrix[result_index_offset + 3] = RESULT_COLUMN_COUNT; } } """) func = mod.get_function("test_cuda_utilisation") func(np.int32(InitialResultCacheSize), np.int32(RESULT_COLUMN_COUNT), np.int32(result_count), number_matrix_gpu, block=(4,16,16)) print('result_count = ' + str(result_count) + ' and InitialResultCacheSize = ' + str(InitialResultCacheSize)) which result with the below. Still cannot get the count of how many times atomicAdd has been run result_count = 0 and InitialResultCacheSize = 500 A: You changed several things, most incorrectly (e.g. you cannot do atomics on a local variable, you are not actually copying any results back to the host, etc.) in between your first and second postings, more than just the one thing I suggested you change. 
If we start with your first listing, here are the changes I suggest to address the usage of atomics: $ cat t35.py import pycuda.driver as cuda import pycuda.autoinit from pycuda.compiler import SourceModule import numpy as np RESULT_COLUMN_COUNT = 4 InitialResultCacheSize = 500 number_matrix = np.zeros(InitialResultCacheSize * RESULT_COLUMN_COUNT) number_matrix = number_matrix.astype(np.float32) number_matrix_gpu = cuda.mem_alloc(number_matrix.nbytes) cuda.memcpy_htod(number_matrix_gpu, number_matrix) result_count = np.zeros(1, dtype=np.int32) result_count_gpu = cuda.mem_alloc(result_count.nbytes) cuda.memcpy_htod(result_count_gpu, result_count) mod = SourceModule(""" #include <cstdlib> __global__ void test_cuda_utilisation(int InitialResultCacheSize, int RESULT_COLUMN_COUNT, float *number_matrix, int *result_count) { int result_index, result_index_offset; result_index = atomicAdd(result_count, 1); if (result_index < InitialResultCacheSize - 1) { result_index_offset = result_index * RESULT_COLUMN_COUNT; number_matrix[result_index_offset + 0] = result_index; number_matrix[result_index_offset + 1] = result_count[0]; number_matrix[result_index_offset + 2] = InitialResultCacheSize; number_matrix[result_index_offset + 3] = RESULT_COLUMN_COUNT; } } """) func = mod.get_function("test_cuda_utilisation") func(np.int32(InitialResultCacheSize), np.int32(RESULT_COLUMN_COUNT), number_matrix_gpu, result_count_gpu, block=(4,16,16)) result_count_out = np.empty_like(result_count) cuda.memcpy_dtoh(result_count_out, result_count_gpu) print('result_count_out = ' + str(result_count_out) + ' and InitialResultCacheSize is ' + str(InitialResultCacheSize)) if InitialResultCacheSize < result_count_out: result_count_out = InitialResultCacheSize number_matrix_out = np.empty((result_count_out, RESULT_COLUMN_COUNT), dtype=np.float32) cuda.memcpy_dtoh(number_matrix_out, number_matrix_gpu) print('number_matrix_out is with len ' + str(len(number_matrix_out)) + ' x ' + str(len(number_matrix_out[0]))) print(number_matrix_out) $ python t35.py result_count_out = [1024] and InitialResultCacheSize is 500 number_matrix_out is with len 500 x 4 [[ 0.00000000e+00 1.02400000e+03 5.00000000e+02 4.00000000e+00] [ 1.00000000e+00 1.02400000e+03 5.00000000e+02 4.00000000e+00] [ 2.00000000e+00 1.02400000e+03 5.00000000e+02 4.00000000e+00] ..., [ 4.97000000e+02 1.02400000e+03 5.00000000e+02 4.00000000e+00] [ 4.98000000e+02 1.02400000e+03 5.00000000e+02 4.00000000e+00] [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]] $ You'll note that the reported output of the atomic variable is 1024, but since there are only 500 "slots" available for output, we are limiting ourselves (both in kernel/device code, and in your host code) to the number of output slots available. So 1024 atomic ops were done, one per thread, because you are launching 1024 threads. But we are limiting the kernel to only write to 500 rows of output, because that's all that are allocated. Likewise, when retrieving results to the host, we must acknowledge that if the reported number of atomic ops is larger than the InitialResultCacheSize, then we must limit ourselves to the lower number.
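As an aside, the reserve-then-check pattern the answer relies on can be sanity-checked without a GPU. Here is a minimal pure-Python sketch (purely illustrative, not part of the answer) in which itertools.count stands in for the atomic counter; it shows why testing the value returned by the reservation, rather than the live shared counter, caps the number of writes:
import itertools

counter = itertools.count()   # plays the role of the device-side atomic counter
cache_size = 500              # InitialResultCacheSize
n_threads = 1024
written = []

for _ in range(n_threads):          # each iteration mimics one thread
    result_index = next(counter)    # "atomicAdd": every caller gets a unique old value
    if result_index < cache_size:   # check the reserved slot, not the shared counter
        written.append(result_index)

print(next(counter), len(written))  # 1024 reservations made, but only 500 rows written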
CUDA atomicAdd being run too many times
I am trying to initialise a numpy matrix with a preset initial cache size, and then have each CUDA thread run atomicAdd at most once, hopefully as long as the accumulated sum is still within the initial cache size. The problem here is that when the initial cache size (500) is smaller than the number of threads (1024), it returns a very unexpected number: the accumulated sum becomes very large (1140850688). Could anyone kindly advise why it does not work, or how to make it work? Ideally I would like the "if (result_count[0] < InitialResultCacheSize - 1)" to stop atomicAdd from accumulating to over the initial cache size (InitialResultCacheSize).
import os
_path = r"C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.33.31629\bin\Hostx64\x64"
if os.system("cl.exe"):
    os.environ['PATH'] += ';' + _path
if os.system("cl.exe"):
    raise RuntimeError("cl.exe still not found, path probably incorrect")

import pycuda.driver as cuda
import pycuda.autoinit
from pycuda.compiler import SourceModule

import pandas as pd
import numpy as np

RESULT_COLUMN_COUNT = 4

InitialResultCacheSize = 10000
# InitialResultCacheSize = 500

number_matrix = np.zeros(InitialResultCacheSize * RESULT_COLUMN_COUNT)
number_matrix = number_matrix.astype(np.float32)

number_matrix_gpu = cuda.mem_alloc(number_matrix.nbytes)
cuda.memcpy_htod(number_matrix_gpu, number_matrix)

result_count = np.int32(0)
result_count_gpu = cuda.mem_alloc(result_count.nbytes)
cuda.memcpy_htod(result_count_gpu, result_count)

mod = SourceModule("""
    #include <cstdlib>

    __global__ void test_cuda_utilisation(int InitialResultCacheSize, int RESULT_COLUMN_COUNT, float *number_matrix, int *result_count)
    {
        int result_index, result_index_offset;
        if (result_count[0] < InitialResultCacheSize - 1) {
            result_index = atomicAdd(result_count,1);
            result_index_offset = result_index * RESULT_COLUMN_COUNT;
            number_matrix[result_index_offset + 0] = result_index;
            number_matrix[result_index_offset + 1] = result_count[0];
            number_matrix[result_index_offset + 2] = InitialResultCacheSize;
            number_matrix[result_index_offset + 3] = RESULT_COLUMN_COUNT;
        }
    }
    """)

func = mod.get_function("test_cuda_utilisation")
func(np.int32(InitialResultCacheSize), np.int32(RESULT_COLUMN_COUNT), number_matrix_gpu, result_count_gpu, block=(4,16,16))

result_count_out = np.empty_like(result_count)
cuda.memcpy_dtoh(result_count_out, result_count_gpu)
print('result_count_out = ' + str(result_count_out) + ' and InitialResultCacheSize is ' + str(InitialResultCacheSize))
number_matrix_out = np.empty((result_count_out, RESULT_COLUMN_COUNT), dtype=np.float32)
cuda.memcpy_dtoh(number_matrix_out, number_matrix_gpu)

print('number_matrix_out is with len ' + str(len(number_matrix_out)) + ' x ' + str(len(number_matrix_out[0])))
print(number_matrix_out)

Result of InitialResultCacheSize = 10000
'cl.exe' is not recognized as an internal or external command, operable program or batch file.
Microsoft (R) C/C++ Optimizing Compiler Version 19.33.31629 for x64
Copyright (C) Microsoft Corporation. All rights reserved.

result_count_out = 1024 and InitialResultCacheSize is 10000
number_matrix_out is with len 1024 x 4
[[0.000e+00 1.024e+03 1.000e+04 4.000e+00]
 [1.000e+00 1.024e+03 1.000e+04 4.000e+00]
 [2.000e+00 1.024e+03 1.000e+04 4.000e+00]
 ...
 [1.021e+03 1.024e+03 1.000e+04 4.000e+00]
 [1.022e+03 1.024e+03 1.000e+04 4.000e+00]
 [1.023e+03 1.024e+03 1.000e+04 4.000e+00]]

Result of InitialResultCacheSize = 500
'cl.exe' is not recognized as an internal or external command, operable program or batch file.
Microsoft (R) C/C++ Optimizing Compiler Version 19.33.31629 for x64
Copyright (C) Microsoft Corporation. All rights reserved.

result_count_out = 1140850688 and InitialResultCacheSize is 500
Traceback (most recent call last):
  File C:\PythonProjects\TradeAnalysis\Test\TestCUDAUtilisation.py:67 in <module>
    cuda.memcpy_dtoh(number_matrix_out, number_matrix_gpu)
LogicError: cuMemcpyDtoH failed: invalid argument

Trial of another approach (not sure if this approach is legal):
import os
# _path = r"D:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.28.29910\bin\Hostx64\x64"
_path = r"C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.33.31629\bin\Hostx64\x64"
# _path = r"C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.33.31629\bin\Hostx64"
# _path = r"C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.33.31629\bin\Hostx64\x64\"
if os.system("cl.exe"):
    os.environ['PATH'] += ';' + _path
if os.system("cl.exe"):
    raise RuntimeError("cl.exe still not found, path probably incorrect")

import pycuda.driver as cuda
import pycuda.autoinit
from pycuda.compiler import SourceModule

import pandas as pd
import numpy as np

RESULT_COLUMN_COUNT = 4

# InitialResultCacheSize = 10000
InitialResultCacheSize = 500

number_matrix = np.zeros(InitialResultCacheSize * RESULT_COLUMN_COUNT)
number_matrix = number_matrix.astype(np.float32)

number_matrix_gpu = cuda.mem_alloc(number_matrix.nbytes)
cuda.memcpy_htod(number_matrix_gpu, number_matrix)

result_count = 0

mod = SourceModule("""
    #include <cstdlib>

    __global__ void test_cuda_utilisation(int InitialResultCacheSize, int RESULT_COLUMN_COUNT, int result_count, float *number_matrix)
    {
        int result_index, result_index_offset;
        result_index = atomicAdd(&result_count,1);
        if (result_index < InitialResultCacheSize - 1) {
            result_index_offset = result_index * RESULT_COLUMN_COUNT;
            number_matrix[result_index_offset + 0] = result_index;
            number_matrix[result_index_offset + 1] = result_count;
            number_matrix[result_index_offset + 2] = InitialResultCacheSize;
            number_matrix[result_index_offset + 3] = RESULT_COLUMN_COUNT;
        }
    }
    """)

func = mod.get_function("test_cuda_utilisation")
func(np.int32(InitialResultCacheSize), np.int32(RESULT_COLUMN_COUNT), np.int32(result_count), number_matrix_gpu, block=(4,16,16))

print('result_count = ' + str(result_count) + ' and InitialResultCacheSize = ' + str(InitialResultCacheSize))

which results in the output below. I still cannot get the count of how many times atomicAdd has been run:
result_count = 0 and InitialResultCacheSize = 500
[ "You changed several things, most incorrectly (e.g. you cannot do atomics on a local variable, you are not actually copying any results back to the host, etc.) in between your first and second postings, more than just the one thing I suggested you change.\nIf we start with your first listing, here are the changes I suggest to address the usage of atomics:\n$ cat t35.py\nimport pycuda.driver as cuda\nimport pycuda.autoinit\nfrom pycuda.compiler import SourceModule\nimport numpy as np\n\nRESULT_COLUMN_COUNT = 4\n\nInitialResultCacheSize = 500\n\nnumber_matrix = np.zeros(InitialResultCacheSize * RESULT_COLUMN_COUNT)\nnumber_matrix = number_matrix.astype(np.float32)\n\nnumber_matrix_gpu = cuda.mem_alloc(number_matrix.nbytes)\ncuda.memcpy_htod(number_matrix_gpu, number_matrix)\n\nresult_count = np.zeros(1, dtype=np.int32)\nresult_count_gpu = cuda.mem_alloc(result_count.nbytes)\ncuda.memcpy_htod(result_count_gpu, result_count)\n\nmod = SourceModule(\"\"\"\n #include <cstdlib>\n\n __global__ void test_cuda_utilisation(int InitialResultCacheSize, int RESULT_COLUMN_COUNT, float *number_matrix, int *result_count)\n {\n int result_index, result_index_offset;\n result_index = atomicAdd(result_count, 1);\n if (result_index < InitialResultCacheSize - 1) {\n result_index_offset = result_index * RESULT_COLUMN_COUNT;\n number_matrix[result_index_offset + 0] = result_index;\n number_matrix[result_index_offset + 1] = result_count[0];\n number_matrix[result_index_offset + 2] = InitialResultCacheSize;\n number_matrix[result_index_offset + 3] = RESULT_COLUMN_COUNT;\n }\n }\n \"\"\")\n\nfunc = mod.get_function(\"test_cuda_utilisation\")\nfunc(np.int32(InitialResultCacheSize), np.int32(RESULT_COLUMN_COUNT), number_matrix_gpu, result_count_gpu, block=(4,16,16))\n\nresult_count_out = np.empty_like(result_count)\ncuda.memcpy_dtoh(result_count_out, result_count_gpu)\nprint('result_count_out = ' + str(result_count_out) + ' and InitialResultCacheSize is ' + str(InitialResultCacheSize))\nif InitialResultCacheSize < result_count_out:\n result_count_out = InitialResultCacheSize\nnumber_matrix_out = np.empty((result_count_out, RESULT_COLUMN_COUNT), dtype=np.float32)\ncuda.memcpy_dtoh(number_matrix_out, number_matrix_gpu)\n\nprint('number_matrix_out is with len ' + str(len(number_matrix_out)) + ' x ' + str(len(number_matrix_out[0])))\nprint(number_matrix_out)\n$ python t35.py\nresult_count_out = [1024] and InitialResultCacheSize is 500\nnumber_matrix_out is with len 500 x 4\n[[ 0.00000000e+00 1.02400000e+03 5.00000000e+02 4.00000000e+00]\n [ 1.00000000e+00 1.02400000e+03 5.00000000e+02 4.00000000e+00]\n [ 2.00000000e+00 1.02400000e+03 5.00000000e+02 4.00000000e+00]\n ...,\n [ 4.97000000e+02 1.02400000e+03 5.00000000e+02 4.00000000e+00]\n [ 4.98000000e+02 1.02400000e+03 5.00000000e+02 4.00000000e+00]\n [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]]\n$\n\nYou'll note that the reported output of the atomic variable is 1024, but since there are only 500 \"slots\" available for output, we are limiting ourselves (both in kernel/device code, and in your host code) to the number of output slots available.\nSo 1024 atomic ops were done, one per thread, because you are launching 1024 threads. But we are limiting the kernel to only write to 500 rows of output, because that's all that are allocated. Likewise, when retrieving results to the host, we must acknowledge that if the reported number of atomic ops is larger than the InitialResultCacheSize, then we must limit ourselves to the lower number.\n" ]
[ 1 ]
[]
[]
[ "cuda", "python" ]
stackoverflow_0074662945_cuda_python.txt
Q: Failed to build ta-lib ERROR: Could not build wheels for ta-lib, which is required to install pyproject.toml-based project I'm getting the below error while pip installing ta-lib.
I used the command: !pip install ta-lib
Please provide a solution.
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Collecting ta-lib
Using cached TA-Lib-0.4.25.tar.gz (271 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: numpy in /usr/local/lib/python3.8/dist-packages (from ta-lib) (1.21.6)
Building wheels for collected packages: ta-lib
error: subprocess-exited-with-error

× Building wheel for ta-lib (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
Building wheel for ta-lib (pyproject.toml) ... error
ERROR: Failed building wheel for ta-lib
Failed to build ta-lib
ERROR: Could not build wheels for ta-lib, which is required to install pyproject.toml-based projects

I tried the following commands:
pip install --upgrade pip setuptools wheel
pip install pep517
!pip3 install --upgrade pip
!pip install pyproject-toml
pip install TA_Lib‑0.4.10‑cp35‑cp35m‑win_amd64.whl
!pip install ta-lib

A: From https://www.lfd.uci.edu/~gohlke/pythonlibs/#ta-lib download the ta-lib .whl file that matches your Python version, and then pip install that file.
Failed to build ta-lib ERROR: Could not build wheels for ta-lib, which is required to install pyproject.toml-based project
I'm getting the below error while pip installing ta-lib.
I used the command: !pip install ta-lib
Please provide a solution.
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Collecting ta-lib
Using cached TA-Lib-0.4.25.tar.gz (271 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: numpy in /usr/local/lib/python3.8/dist-packages (from ta-lib) (1.21.6)
Building wheels for collected packages: ta-lib
error: subprocess-exited-with-error

× Building wheel for ta-lib (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
Building wheel for ta-lib (pyproject.toml) ... error
ERROR: Failed building wheel for ta-lib
Failed to build ta-lib
ERROR: Could not build wheels for ta-lib, which is required to install pyproject.toml-based projects

I tried the following commands:
pip install --upgrade pip setuptools wheel
pip install pep517
!pip3 install --upgrade pip
!pip install pyproject-toml
pip install TA_Lib‑0.4.10‑cp35‑cp35m‑win_amd64.whl
!pip install ta-lib
[ "https://www.lfd.uci.edu/~gohlke/pythonlibs/#ta-lib\nin this web download ta-lib.whl and then pip install\ngotch\n" ]
[ 0 ]
[]
[]
[ "algorithmic_trading", "artificial_intelligence", "python", "technical_indicator" ]
stackoverflow_0074651107_algorithmic_trading_artificial_intelligence_python_technical_indicator.txt
Q: I lose leading zeros when copying data from a dataframe to an openpyxl workbook I use openpyxl and pandas to fill row colors based on a specified condition. Everything works fine but in some cells I lose leading zeros (like 0345 -> output 345), and I don't want that. How can I get the exact data?
dt = pd.read_excel(file_luu, sheet_name="Sheet1")
dt = pd.DataFrame(dt)
dinhDanh = len(dt.columns) - 1
wb = load_workbook(file_luu)
print(type(wb))
ws = wb['Sheet1']
for i in range(0, dt.shape[1]):
    ws.cell(row=1, column=i + 1).value = dt.columns[i]
for row in range(dt.shape[0]):
    for col in range(dt.shape[1] ):
        ws.cell(row + 2, col + 1).value = str(dt.iat[row, col]) if (str(dt.iat[row, col]) != "nan") else " "
        if dt.iat[row, dinhDanh] == True:
            ws.cell(row + 2, col + 1).fill = PatternFill(start_color='FFD970', end_color='FFD970', fill_type="solid")  # used hex code for brown color
ws.delete_cols(1)
ws.delete_cols(dinhDanh)
wb.save(file_luu)

I want to copy all characters exactly.

A: To prevent losing leading zeros when writing data to an Excel file with openpyxl and pandas, you can specify that the cell should be formatted as a string by setting the number_format property of the cell to @. This tells Excel that the cell should be treated as a string, and any leading zeros will be preserved.
dt = pd.read_excel(file_luu, sheet_name="Sheet1")
dt = pd.DataFrame(dt)
dinhDanh = len(dt.columns) - 1
wb = load_workbook(file_luu)
print(type(wb))
ws = wb['Sheet1']
for i in range(0, dt.shape[1]):
    ws.cell(row=1, column=i + 1).value = dt.columns[i]
for row in range(dt.shape[0]):
    for col in range(dt.shape[1]):
        # Fetch the cell once and format it as text ("@") so Excel keeps leading zeros
        cell = ws.cell(row + 2, col + 1)
        cell.number_format = "@"
        # Set the cell's value and fill color
        cell.value = str(dt.iat[row, col]) if (str(dt.iat[row, col]) != "nan") else " "
        if dt.iat[row, dinhDanh] == True:
            cell.fill = PatternFill(start_color='FFD970', end_color='FFD970', fill_type="solid")  # used hex code for brown color
ws.delete_cols(1)
ws.delete_cols(dinhDanh)
wb.save(file_luu)
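A further note, beyond the answer above: if the zeros are already missing before openpyxl runs, they were most likely dropped when pandas parsed the sheet, because read_excel infers a cell like 0345 as the integer 345. Assuming the source cells are stored as text in the workbook, a minimal sketch of a read-time fix is:
import pandas as pd

# Hypothetical variant of the question's read call: parse every column as text
# so pandas never coerces "0345" to the integer 345.
dt = pd.read_excel(file_luu, sheet_name="Sheet1", dtype=str)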
I lose leading zeros when copying data from a dataframe to an openpyxl workbook
I use openpyxl and pandas to fill row colors based on a specified condition. Everything works fine but in some cells I lose leading zeros (like 0345 -> output 345), and I don't want that. How can I get the exact data?
dt = pd.read_excel(file_luu, sheet_name="Sheet1")
dt = pd.DataFrame(dt)
dinhDanh = len(dt.columns) - 1
wb = load_workbook(file_luu)
print(type(wb))
ws = wb['Sheet1']
for i in range(0, dt.shape[1]):
    ws.cell(row=1, column=i + 1).value = dt.columns[i]
for row in range(dt.shape[0]):
    for col in range(dt.shape[1] ):
        ws.cell(row + 2, col + 1).value = str(dt.iat[row, col]) if (str(dt.iat[row, col]) != "nan") else " "
        if dt.iat[row, dinhDanh] == True:
            ws.cell(row + 2, col + 1).fill = PatternFill(start_color='FFD970', end_color='FFD970', fill_type="solid")  # used hex code for brown color
ws.delete_cols(1)
ws.delete_cols(dinhDanh)
wb.save(file_luu)

I want to copy all characters exactly.
[ "To prevent losing leading zeros when writing data to an Excel file with openpyxl and pandas, you can specify that the cell should be formatted as a string by setting the number_format property of the cell to @. This tells Excel that the cell should be treated as a string, and any leading zeros will be preserved.\n# Import the openpyxl Workbook and cell classes\nfrom openpyxl.workbook import Workbook\nfrom openpyxl.cell import Cell\n\ndt = pd.read_excel(file_luu, sheet_name=\"Sheet1\")\ndt = pd.DataFrame(dt)\ndinhDanh = len(dt.columns) - 1\nwb = load_workbook(file_luu)\nprint(type(wb))\nws = wb['Sheet1']\nfor i in range(0, dt.shape[1]):\n ws.cell(row=1, column=i + 1).value = dt.columns[i]\nfor row in range(dt.shape[0]):\n for col in range(dt.shape[1] ):\n\n # Create a cell with the value from the DataFrame, and specify that it should be formatted as a string\n cell = Cell(ws, row + 2, col + 1, value=dt.iat[row, col], number_format=\"@\")\n\n # Set the cell's value and fill color\n ws.cell(row + 2, col + 1).value = str(dt.iat[row, col]) if (str(dt.iat[row, col]) != \"nan\") else \" \"\n if dt.iat[row, dinhDanh] == True:\n ws.cell(row + 2, col + 1).fill = PatternFill(start_color='FFD970', end_color='FFD970',\n fill_type=\"solid\") # used hex code for brown color\n\nws.delete_\n\n" ]
[ 0 ]
[]
[]
[ "openpyxl", "pandas", "python" ]
stackoverflow_0074672592_openpyxl_pandas_python.txt
Q: How do I join a char vector in Rust I'm doing the rustlings exercises and I tried this to make a capitalize function. But the join part does not work. It says: "the method join exists for struct Vec<char>, but its trait bounds were not satisfied the following trait bounds were not satisfied: <[char] as Join<_>>::Output = _" which I don't know what means. What would be the right way to join a char vector? pub fn capitalize_first(input: &str) -> String { let mut c = input.chars(); match c.next() { None => String::new(), Some(first) => { let upper = first.to_ascii_uppercase(); let mut v = c.collect::<Vec<char>>(); v[0] = upper; v.join("") }, } } A: To answer your question, the best way to get a string from a vec of chars is to iter and collect: let my_string = my_char_vec.iter().collect(); But there are other problems in your code: you're taking the first char to check the string isn't empty, then you build a string from the rest of the iteration, and you make the first char replace the first char of that string... losing this char which was the second one of the initial str you're building a useless vec and iterating again. Those are expensive steps that you don't need You can fix those problems by adapting the code to directly write from the iteration into the string: pub fn capitalize_first(input: &str) -> String { let mut chars = input.chars(); let mut string = String::new(); if let Some(first) = chars.next() { string.push(first.to_ascii_uppercase()); for c in chars { string.push(c); } } string } Note that you're using a function dedicated to ASCII characters. This is fine when you're sure you're only dealing with ASCII but if you want something which works in an international context, you want to use the more general to_uppercase. As an unicode uppercased character may be several characters, the code is a little more complex: pub fn capitalize_first(input: &str) -> String { let mut chars = input.chars(); let mut string = String::new(); if let Some(first) = chars.next() { let first = first.to_uppercase(); for c in first { string.push(c); } for c in chars { string.push(c); } } string } If you're sure you can use to_ascii_upercase, then there's another solution. Because ASCII chars are just one byte in lowercase and uppercase, you can change them in place in the UTF8 string: pub fn capitalize_first(input: &str) -> String { let mut string = input.to_string(); if !string.is_empty() { string[..1].make_ascii_uppercase(); } string } This second approach could be used on a mutable string with zero allocation. But it would panic if the first char weren't one byte long. A: There's a constraint in the return type of the join method that the char type does not meet, since it doesn't have a static lifetime: pub fn join<Separator>(&self, sep: Separator) -> <Self as Join<Separator>>::Output where Self: Join<Separator>, { Join::join(self, sep) } You should use String::from_iter instead: pub fn capitalize_first(input: &str) -> String { let mut c = input.chars(); match c.next() { None => String::new(), Some(first) => { let upper = first.to_ascii_uppercase(); let mut v = c.collect::<Vec<char>>(); v[0] = upper; String::from_iter(v) }, } } A: You can just use as_str() pub fn capitalize_first(input: &str) -> String { let mut c = input.chars(); match c.next() { None => String::new(), Some(first) => first.to_uppercase().to_string() + c.as_str(), } }
How do I join a char vector in Rust
I'm doing the rustlings exercises and I tried this to make a capitalize function. But the join part does not work. It says: "the method join exists for struct Vec<char>, but its trait bounds were not satisfied the following trait bounds were not satisfied: <[char] as Join<_>>::Output = _" which I don't know what means. What would be the right way to join a char vector? pub fn capitalize_first(input: &str) -> String { let mut c = input.chars(); match c.next() { None => String::new(), Some(first) => { let upper = first.to_ascii_uppercase(); let mut v = c.collect::<Vec<char>>(); v[0] = upper; v.join("") }, } }
[ "To answer your question, the best way to get a string from a vec of chars is to iter and collect:\n let my_string = my_char_vec.iter().collect();\n\nBut there are other problems in your code:\n\nyou're taking the first char to check the string isn't empty, then you build a string from the rest of the iteration, and you make the first char replace the first char of that string... losing this char which was the second one of the initial str\nyou're building a useless vec and iterating again. Those are expensive steps that you don't need\n\nYou can fix those problems by adapting the code to directly write from the iteration into the string:\npub fn capitalize_first(input: &str) -> String {\n let mut chars = input.chars();\n let mut string = String::new();\n if let Some(first) = chars.next() {\n string.push(first.to_ascii_uppercase());\n for c in chars {\n string.push(c);\n }\n }\n string\n}\n\nNote that you're using a function dedicated to ASCII characters. This is fine when you're sure you're only dealing with ASCII but if you want something which works in an international context, you want to use the more general to_uppercase.\nAs an unicode uppercased character may be several characters, the code is a little more complex:\npub fn capitalize_first(input: &str) -> String {\n let mut chars = input.chars();\n let mut string = String::new();\n if let Some(first) = chars.next() {\n let first = first.to_uppercase();\n for c in first {\n string.push(c);\n }\n for c in chars {\n string.push(c);\n }\n }\n string\n}\n\nIf you're sure you can use to_ascii_upercase, then there's another solution.\nBecause ASCII chars are just one byte in lowercase and uppercase, you can change them in place in the UTF8 string:\npub fn capitalize_first(input: &str) -> String {\n let mut string = input.to_string();\n if !string.is_empty() {\n string[..1].make_ascii_uppercase();\n }\n string\n}\n\nThis second approach could be used on a mutable string with zero allocation. But it would panic if the first char weren't one byte long.\n", "There's a constraint in the return type of the join method that the char type does not meet, since it doesn't have a static lifetime:\npub fn join<Separator>(&self, sep: Separator) -> <Self as Join<Separator>>::Output\n where\n Self: Join<Separator>,\n {\n Join::join(self, sep)\n }\n\nYou should use String::from_iter instead:\npub fn capitalize_first(input: &str) -> String {\n let mut c = input.chars();\n match c.next() {\n None => String::new(),\n Some(first) => {\n let upper = first.to_ascii_uppercase();\n let mut v = c.collect::<Vec<char>>();\n v[0] = upper;\n String::from_iter(v)\n },\n }\n}\n\n", "You can just use as_str()\npub fn capitalize_first(input: &str) -> String {\n let mut c = input.chars();\n match c.next() {\n None => String::new(),\n Some(first) => first.to_uppercase().to_string() + c.as_str(),\n }\n}\n\n" ]
[ 4, 0, 0 ]
[]
[]
[ "char", "iterator", "rust", "string" ]
stackoverflow_0070050432_char_iterator_rust_string.txt
Q: Given 3 series merge to create a new series with 1 column containing numbers of each dataframe I have 3 series with data like below
s1 = [1,1,3]
s2 = [2,3,2]
s3 = [4,2,1]

I want to create a new series with values such that
s_new = [124,132,321]

please note that
s_new = int(''.join(s1,s2,s3))

I know the above syntax is wrong but you get the idea.

A: You can do it with pandas agg:
s = pd.DataFrame([s1,s2,s3]).astype(str).agg(''.join).astype(int).tolist()
Out[334]: [124, 132, 321]

A: s1 = [1,1,3]
s2 = [2,3,2]
s3 = [4,2,1]
matrix = [s1,s2,s3]
s_new = ["" for x in range(len(matrix))];

for i in range(0,len(matrix)):
    for j in range(0,len(matrix[i])):
        s_new[i]+=str(matrix[j][i])

print(s_new)

Unfortunately, I am not a Python programmer, but this algorithm still works.

A: Using numpy
import numpy as np

s1 = [1, 1, 3]
s2 = [2, 3, 2]
s3 = [4, 2, 1]

s4 = [int("".join(str(i) for i in x)) for x in np.column_stack([s1, s2, s3])]
print(s4)

[124, 132, 321]
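For completeness, the same transposition works with the standard library alone; a minimal sketch, assuming the three series are plain equal-length lists exactly as given in the question:
# zip transposes the three lists column-wise; an f-string glues each digit triple.
s1 = [1, 1, 3]
s2 = [2, 3, 2]
s3 = [4, 2, 1]
s_new = [int(f"{a}{b}{c}") for a, b, c in zip(s1, s2, s3)]
print(s_new)  # [124, 132, 321]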
Given 3 series merge to create a new series with 1 column containing numbers of each dataframe
I have 3 series with data like below s1 = [1,1,3] s2 = [2,3,2] s3 = [4,2,1] I want to create a new series with values such that s_new = [124,132,321] please note that s_new = int(''.join(s1,s2,s3)) I know the above syntax is wrong but you get the idea.
[ "You can do with pandas agg\ns = pd.DataFrame([s1,s2,s3]).astype(str).agg(''.join).astype(int).tolist()\nOut[334]: [124, 132, 321]\n\n", "s1 = [1,1,3]\ns2 = [2,3,2]\ns3 = [4,2,1]\nmatrix = [s1,s2,s3]\ns_new = [\"\" for x in range(len(matrix))];\n\nfor i in range(0,len(matrix)):\n for j in range(0,len(matrix[i])):\n s_new[i]+=str(matrix[j][i])\n \nprint(s_new)\n\nunfortunately, I am not pyton programmer but this algo still works\n", "Using numpy\nimport numpy as np\n\n\ns1 = [1, 1, 3]\ns2 = [2, 3, 2]\ns3 = [4, 2, 1]\n\ns4 = [int(\"\".join(str(i) for i in x)) for x in np.column_stack([s1, s2, s3])]\nprint(s4)\n\n[124, 132, 321]\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "pandas", "python_3.x" ]
stackoverflow_0074672405_pandas_python_3.x.txt
Q: Laravel and Xdebug - I can't debug because of Fatal error: Can't find Controller class that all controllers extend it So I managed to configure Xdebug (2.4.0) for PHP 7 (7.0.4). However I can't use it in my Laravel project. I am trying to debug a block of code inside my CartController. However it says that there is an error because it can't find the Controller that my CartController extends. This is what I get in my PhpStorm console: C:\xampp\php\php.exe -dxdebug.remote_enable=1 -dxdebug.remote_mode=req -dxdebug.remote_port=9000 -dxdebug.remote_host=127.0.0.1 C:\Users\Nikolay\Dropbox\store\app\Http\Controllers\CartController.php PHP Fatal error: Class 'App\Http\Controllers\Controller' not found in C:\Users\Nikolay\Dropbox\store\app\Http\Controllers\CartController.php on line 14 PHP Stack trace: PHP 1. {main}() C:\Users\Nikolay\Dropbox\store\app\Http\Controllers\CartController.php:0 Fatal error: Class 'App\Http\Controllers\Controller' not found in C:\Users\Nikolay\Dropbox\store\app\Http\Controllers\CartController.php on line 14 Call Stack: 2.1491 376944 1. {main}() C:\Users\Nikolay\Dropbox\store\app\Http\Controllers\CartController.php:0 Process finished with exit code 255 There is no problem in the application itself when I run it on a server. How can I fix that and why does it occur? Edit - 28/05/2016 - Here is the CartController: <?php namespace App\Http\Controllers; use App\Cart; use App\CartItem; use App\Product; use Illuminate\Http\Request; use App\Http\Requests; use Illuminate\Support\Facades\Auth; use Illuminate\Support\Facades\Session; class CartController extends Controller { public function showCart() {...} public function addItem(Request $request, $product_id) {...} public function deleteItem($product_id, $size) {...} public function showCheckout() {...} private function calculateTotalPrice() {...} } A: I'm probably mistaken ... but it almost sounds like you're trying to debug just the controller without going through bootstrapping/routing etc? You application is working one the server because all of the bootstrapping, instantiation, dependency injection etc. is done. To debug the application flow you could debug public/index.php with breakpoints in your controller methods and either manually set/override $_SERVER['REQUEST_URI'] with the route you want to debug in index.php before ... or you install the xdebug browser extension, start the debug listener in PHPStorm and the visit your route in the browser. A: I'm working with VsCode. In the root of the Laravel project, I created a .vscode folder. Inside this new folder I created a launch.json file Containing the following content: { "version": "0.2.0", "configurations": [ { "name": "Listen for XDebug", "type": "php", "request": "launch", "port": 9000, "pathMappings": { "/var/www/html": "${workspaceFolder}/" }, "xdebugSettings": { "max_data": -1, "show_hidden": 1, "max_children": 100, "max_depth": 5 } } ] } A: I had the same issue, using any IDE editor (either using netbeans or VS code). The problem was I started debugging from files other than main index.php (auto loader file). When I started from index.php file everything started to work. I would anyway explain in stepwise how to correct Locate the proper php.ini file. Create a info.php file in /var/www/html/ and write then browse this file in browser and locate php.ini. Also locate which xdebug version your system have from there. Configure first the xdebug (in this php.ini file). There are major 2 (as of the time of writing) xdebug versions. a. XDebug 2 b. 
XDebug 3
Create an [xdebug] section at the very end of this php.ini file:
[xdebug]
XDebug 2 uses these settings:
zend_extension=/usr/lib/php/20170718/xdebug.so
xdebug.default_enable=1
xdebug.remote_enable=on
xdebug.remote_handler=dbgp
xdebug.remote_host=localhost
xdebug.remote_port=9000
Change the zend_extension path to the xdebug.so you have. Find xdebug using this command:
find /usr/lib/php -iname xdebug.so
If you have multiple versions of PHP installed, you will get more than one xdebug.so. To know which one you need, just remove and reinstall xdebug and watch in which folder it disappears and reappears; that is the xdebug.so you need. E.g. to know which one PHP 7.4 is using, remove the php7.4-xdebug module using
sudo apt remove php7.4-xdebug
and then install it again using
sudo apt install php7.4-xdebug
Xdebug 3 uses these settings instead:
zend_extension=/usr/lib/php/20190902/xdebug.so
xdebug.mode=debug
xdebug.client_host=localhost
xdebug.remote_port=9003
Some settings are the same in both versions:
xdebug.profiler_output_dir=/tmp
xdebug.profiler_output_name=cachegrind.out.%p
xdebug.profiler_enable_trigger=1
xdebug.profiler_enable=0
xdebug.remote_autostart=1
xdebug.idekey="netbeans-xdebug"
Change xdebug.idekey to vsc for VS Code, phpstorm for PhpStorm, etc., to match your IDE.
Start your IDE and open your Laravel project (e.g. in VS Code, open the folder and select your project folder), open your main index.php file in the public folder, and start debugging.
Laravel and Xdebug - I can't debug because of Fatal error: Can't find Controller class that all controllers extend it
So I managed to configure Xdebug (2.4.0) for PHP 7 (7.0.4). However I can't use it in my Laravel project. I am trying to debug a block of code inside my CartController. However it says that there is an error because it can't find the Controller that my CartController extends. This is what I get in my PhpStorm console: C:\xampp\php\php.exe -dxdebug.remote_enable=1 -dxdebug.remote_mode=req -dxdebug.remote_port=9000 -dxdebug.remote_host=127.0.0.1 C:\Users\Nikolay\Dropbox\store\app\Http\Controllers\CartController.php PHP Fatal error: Class 'App\Http\Controllers\Controller' not found in C:\Users\Nikolay\Dropbox\store\app\Http\Controllers\CartController.php on line 14 PHP Stack trace: PHP 1. {main}() C:\Users\Nikolay\Dropbox\store\app\Http\Controllers\CartController.php:0 Fatal error: Class 'App\Http\Controllers\Controller' not found in C:\Users\Nikolay\Dropbox\store\app\Http\Controllers\CartController.php on line 14 Call Stack: 2.1491 376944 1. {main}() C:\Users\Nikolay\Dropbox\store\app\Http\Controllers\CartController.php:0 Process finished with exit code 255 There is no problem in the application itself when I run it on a server. How can I fix that and why does it occur? Edit - 28/05/2016 - Here is the CartController: <?php namespace App\Http\Controllers; use App\Cart; use App\CartItem; use App\Product; use Illuminate\Http\Request; use App\Http\Requests; use Illuminate\Support\Facades\Auth; use Illuminate\Support\Facades\Session; class CartController extends Controller { public function showCart() {...} public function addItem(Request $request, $product_id) {...} public function deleteItem($product_id, $size) {...} public function showCheckout() {...} private function calculateTotalPrice() {...} }
[ "I'm probably mistaken ... but it almost sounds like you're trying to debug just the controller without going through bootstrapping/routing etc?\nYou application is working one the server because all of the bootstrapping, instantiation, dependency injection etc. is done.\nTo debug the application flow you could debug public/index.php with breakpoints in your controller methods and either manually set/override $_SERVER['REQUEST_URI'] with the route you want to debug in index.php before ... or you install the xdebug browser extension, start the debug listener in PHPStorm and the visit your route in the browser.\n", "I'm working with VsCode.\nIn the root of the Laravel project, I created a .vscode folder. Inside this new folder I created a launch.json file\nContaining the following content:\n{\n \"version\": \"0.2.0\",\n \"configurations\": [\n {\n \"name\": \"Listen for XDebug\",\n \"type\": \"php\",\n \"request\": \"launch\",\n \"port\": 9000,\n \"pathMappings\": {\n \"/var/www/html\": \"${workspaceFolder}/\"\n },\n \"xdebugSettings\": {\n \"max_data\": -1,\n \"show_hidden\": 1,\n \"max_children\": 100,\n \"max_depth\": 5\n }\n }\n ]\n}\n\n", "I had the same issue, using any IDE editor (either using netbeans or VS code). The problem was I started debugging from files other than main index.php (auto loader file). When I started from index.php file everything started to work. I would anyway explain in stepwise how to correct\n\nLocate the proper php.ini file. Create a info.php file in /var/www/html/ and write\n\n\nthen browse this file in browser and locate php.ini. Also locate which xdebug version your system have from there.\n\nConfigure first the xdebug (in this php.ini file). There are major 2 (as of the time of writing) xdebug versions.\na. XDebug 2\nb. XDebug 3\n\nCreate an xdebug section in the very last of this php.ini file.\n[xdebug]\n\n\nXDebug 2 had those setting\nzend_extension=/usr/lib/php/20170718/xdebug.so\nxdebug.default_enable=1\nxdebug.remote_enable=on\nxdebug.remote_handler=dbgp\nxdebug.remote_host=localhost\nxdebug.remote_port=9000\n\n\nchange the path to xdebug which you had. Find xdebug using this command\nfind /usr/lib/php -iname xdebug.so\n\nif you had multiple version of php installed you would get more than one xdebug. To know which one you would be using is then just removing and installing xdebug and in which folder it disappear and reappears that xdebug is you need. e.g. to know what version php7.4 is using remove php7.4-xdebug module and then install it; using\nsudo apt remove php7.4-xdebug\n\nand then install it again using\nsudo apt install php7.4-xdebug\n\nwhile Xdebug 3 had those setting instead\n zend_extension=/usr/lib/php/20190902/xdebug.so\n xdebug.mode=debug\n xdebug.client_host=localhost\n xdebug.remote_port=9003\n\nsome setting is the same in both versions\n xdebug.profiler_output_dir=/tmp\n xdebug.profiler_output_name=cachegrind.out.%p\n xdebug.profiler_enable_trigger=1\n xdebug.profiler_enable=0\n xdebug.remote_autostart=1\n xdebug.idekey=\"netbeans-xdebug\"\n\nChange xdebug.idekey to vsc for VS code or phpstorm etc for your ide\n\nstart your ide and open your laravel project. e.g. in VScode open folder and selecting your project folder\n\nopen your main index.php file in public folder\n\n\nand start debugging\n" ]
[ 4, 0, 0 ]
[]
[]
[ "debugging", "laravel", "php", "xdebug" ]
stackoverflow_0037497637_debugging_laravel_php_xdebug.txt
Q: Can I put all the space between two elements in a flex box or grid? I have a header that looks like this: <header> <div>A</div> <div>B</div> <div>C</div> </header> Rather than spacing these items evenly, I want all the space to be between A and B, like this: Is there any way to do this with plain CSS, without adding extra wrapper elements in HTML? I'm very specifically talking about spacing, adding flex-grow: 1 to A is cheating In my case both A and B are clickable, and I don't want them smeared across the entire header. A: You can add margin-left:auto; to B to have it move across. You can see more examples here .container { display:flex; height:100px; width:500px; background-color:red; align-items:center; padding-left:15px; padding-right:15px } .container div { height:90px; font-size:30px; background-color:green; width:90px; margin-left:15px; margin-right:15px; text-align:center; } .container .B { margin-left:auto; } <div class="container"> <div class="A">A</div> <div class="B">B</div> <div class="C">C</div> </div> A: Here are some basic solutions I can think of: For a flex container, use margin-right: auto on "A". For a grid container, set grid-template-column: 1fr repeat(2, min-content) so that "A" will be placed at the 1fr column. Example: header { display: flex; width: 400px; outline: 3px solid #666; } div { width: 50px; height: 50px; background-color: pink; display: flex; justify-content: center; align-items: center; font-size: x-large; outline: 2px solid #000; } header>div:first-of-type { margin-right: auto; } main { display: grid; width: 400px; outline: 3px solid #666; grid-template-columns: 1fr repeat(2, min-content); } <h3> Flex container</h3> <header> <div>A</div> <div>B</div> <div>C</div> </header> <h3> Grid container </h3> <main> <div>A</div> <div>B</div> <div>C</div> </main> And here are some of everyone's favorite weird solutions I can think of, this example uses pseudo element as a spacer (not recommended, just to show the possibility): For a flex container, specify order for the pseudo element spacer to have it placed after "A". For a grid container, specify the pseudo element spacer to take the desired column, so that auto placement can cover the other elements. Example: header { display: flex; width: 400px; outline: 3px solid #666; } header::before { content: "I'm a pseudo spacer"; display: flex; justify-content: center; align-items: center; font-size: large; order: 2; flex: 1; } div { width: 50px; height: 50px; background-color: pink; display: flex; justify-content: center; align-items: center; font-size: x-large; outline: 2px solid #000; } header > div { order: 3; } header > div:nth-of-type(1) { order: 1; } main { display: grid; width: 400px; outline: 3px solid #666; grid-template-columns: min-content 1fr repeat(2, min-content); grid-auto-flow: row dense; } main::before { content: "I'm a pseudo spacer"; display: flex; justify-content: center; align-items: center; font-size: large; grid-column: 2 / 3; } <h3> Flex but with pseudo element as spacer</h3> <header> <div>A</div> <div>B</div> <div>C</div> </header> <h3> Grid but with pseudo element as spacer</h3> <main> <div>A</div> <div>B</div> <div>C</div> </main>
Can I put all the space between two elements in a flex box or grid?
I have a header that looks like this: <header> <div>A</div> <div>B</div> <div>C</div> </header> Rather than spacing these items evenly, I want all the space to be between A and B, like this: Is there any way to do this with plain CSS, without adding extra wrapper elements in HTML? I'm very specifically talking about spacing, adding flex-grow: 1 to A is cheating In my case both A and B are clickable, and I don't want them smeared across the entire header.
[ "You can add margin-left:auto; to B to have it move across. You can see more examples here\n\n\n.container {\n display:flex;\n height:100px;\n width:500px;\n background-color:red;\n align-items:center;\n padding-left:15px;\n padding-right:15px\n}\n\n.container div {\n height:90px;\n font-size:30px;\n background-color:green;\n width:90px;\n margin-left:15px;\n margin-right:15px;\n text-align:center;\n}\n\n.container .B {\n margin-left:auto;\n}\n<div class=\"container\">\n <div class=\"A\">A</div>\n <div class=\"B\">B</div>\n <div class=\"C\">C</div>\n</div>\n\n\n\n", "Here are some basic solutions I can think of:\nFor a flex container, use margin-right: auto on \"A\".\nFor a grid container, set grid-template-column: 1fr repeat(2, min-content) so that \"A\" will be placed at the 1fr column.\nExample:\n\n\nheader {\n display: flex;\n width: 400px;\n outline: 3px solid #666;\n}\n\ndiv {\n width: 50px;\n height: 50px;\n background-color: pink;\n display: flex;\n justify-content: center;\n align-items: center;\n font-size: x-large;\n outline: 2px solid #000;\n}\n\nheader>div:first-of-type {\n margin-right: auto;\n}\n\nmain {\n display: grid;\n width: 400px;\n outline: 3px solid #666;\n grid-template-columns: 1fr repeat(2, min-content);\n}\n<h3> Flex container</h3>\n<header>\n <div>A</div>\n <div>B</div>\n <div>C</div>\n</header>\n\n<h3> Grid container </h3>\n<main>\n <div>A</div>\n <div>B</div>\n <div>C</div>\n</main>\n\n\n\nAnd here are some of everyone's favorite weird solutions I can think of, this example uses pseudo element as a spacer (not recommended, just to show the possibility):\nFor a flex container, specify order for the pseudo element spacer to have it placed after \"A\".\nFor a grid container, specify the pseudo element spacer to take the desired column, so that auto placement can cover the other elements.\nExample:\n\n\nheader {\n display: flex;\n width: 400px;\n outline: 3px solid #666;\n}\n\nheader::before {\n content: \"I'm a pseudo spacer\";\n display: flex;\n justify-content: center;\n align-items: center;\n font-size: large;\n order: 2;\n flex: 1;\n}\n\ndiv {\n width: 50px;\n height: 50px;\n background-color: pink;\n display: flex;\n justify-content: center;\n align-items: center;\n font-size: x-large;\n outline: 2px solid #000;\n}\n\nheader > div {\n order: 3;\n}\n\nheader > div:nth-of-type(1) {\n order: 1;\n}\n\nmain {\n display: grid;\n width: 400px;\n outline: 3px solid #666;\n grid-template-columns: min-content 1fr repeat(2, min-content);\n grid-auto-flow: row dense;\n}\n\nmain::before {\n content: \"I'm a pseudo spacer\";\n display: flex;\n justify-content: center;\n align-items: center;\n font-size: large;\n grid-column: 2 / 3;\n}\n<h3> Flex but with pseudo element as spacer</h3>\n<header>\n <div>A</div>\n <div>B</div>\n <div>C</div>\n</header>\n\n<h3> Grid but with pseudo element as spacer</h3>\n<main>\n <div>A</div>\n <div>B</div>\n <div>C</div>\n</main>\n\n\n\n" ]
[ 1, 1 ]
[]
[]
[ "css", "html" ]
stackoverflow_0074672587_css_html.txt
Q: How can I install @ng-bootstrap/ng-bootstrap on Angular 15? Trying both ng install or npm install fails: The package @ng-bootstrap/[email protected] will be installed and executed. Would you like to proceed? Yes npm ERR! code ERESOLVE npm ERR! ERESOLVE unable to resolve dependency tree npm ERR! npm ERR! While resolving: [email protected] npm ERR! Found: @angular/[email protected] npm ERR! node_modules/@angular/common npm ERR! @angular/common@"^15.0.0" from the root project npm ERR! npm ERR! Could not resolve dependency: npm ERR! peer @angular/common@"^14.1.0" from @ng-bootstrap/[email protected] npm ERR! node_modules/@ng-bootstrap/ng-bootstrap npm ERR! @ng-bootstrap/ng-bootstrap@"13.1.1" from the root project npm ERR! npm ERR! Fix the upstream dependency conflict, or retry npm ERR! this command with --force, or --legacy-peer-deps npm ERR! to accept an incorrect (and potentially broken) dependency resolution. npm ERR! npm ERR! See /home/node/.npm/eresolve-report.txt for a full report. $ npm install @ng-bootstrap/ng-bootstrap npm ERR! code ERESOLVE npm ERR! ERESOLVE unable to resolve dependency tree npm ERR! npm ERR! While resolving: [email protected] npm ERR! Found: @angular/[email protected] npm ERR! node_modules/@angular/common npm ERR! @angular/common@"^15.0.0" from the root project npm ERR! npm ERR! Could not resolve dependency: npm ERR! peer @angular/common@"^14.1.0" from @ng-bootstrap/[email protected] npm ERR! node_modules/@ng-bootstrap/ng-bootstrap npm ERR! @ng-bootstrap/ng-bootstrap@"*" from the root project npm ERR! npm ERR! Fix the upstream dependency conflict, or retry npm ERR! this command with --force, or --legacy-peer-deps npm ERR! to accept an incorrect (and potentially broken) dependency resolution. npm ERR! npm ERR! See /home/node/.npm/eresolve-report.txt for a full report. A: skip the dependency tree checking. npm install @ng-bootstrap/ng-bootstrap --legacy-peer-deps A: It seems like it is a version incompatibility issue. To fix this issue, you need to update the @angular/common package in your Angular project to the version that is compatible with the @ng-bootstrap/ng-bootstrap package. Open a terminal window and navigate to the root directory of your Angular project. Run the following command to update the @angular/common package to the version that is compatible with the @ng-bootstrap/ng-bootstrap package: npm install --save @angular/[email protected]
How can I install @ng-bootstrap/ng-bootstrap on Angular 15?
Trying both ng install or npm install fails: The package @ng-bootstrap/[email protected] will be installed and executed. Would you like to proceed? Yes npm ERR! code ERESOLVE npm ERR! ERESOLVE unable to resolve dependency tree npm ERR! npm ERR! While resolving: [email protected] npm ERR! Found: @angular/[email protected] npm ERR! node_modules/@angular/common npm ERR! @angular/common@"^15.0.0" from the root project npm ERR! npm ERR! Could not resolve dependency: npm ERR! peer @angular/common@"^14.1.0" from @ng-bootstrap/[email protected] npm ERR! node_modules/@ng-bootstrap/ng-bootstrap npm ERR! @ng-bootstrap/ng-bootstrap@"13.1.1" from the root project npm ERR! npm ERR! Fix the upstream dependency conflict, or retry npm ERR! this command with --force, or --legacy-peer-deps npm ERR! to accept an incorrect (and potentially broken) dependency resolution. npm ERR! npm ERR! See /home/node/.npm/eresolve-report.txt for a full report. $ npm install @ng-bootstrap/ng-bootstrap npm ERR! code ERESOLVE npm ERR! ERESOLVE unable to resolve dependency tree npm ERR! npm ERR! While resolving: [email protected] npm ERR! Found: @angular/[email protected] npm ERR! node_modules/@angular/common npm ERR! @angular/common@"^15.0.0" from the root project npm ERR! npm ERR! Could not resolve dependency: npm ERR! peer @angular/common@"^14.1.0" from @ng-bootstrap/[email protected] npm ERR! node_modules/@ng-bootstrap/ng-bootstrap npm ERR! @ng-bootstrap/ng-bootstrap@"*" from the root project npm ERR! npm ERR! Fix the upstream dependency conflict, or retry npm ERR! this command with --force, or --legacy-peer-deps npm ERR! to accept an incorrect (and potentially broken) dependency resolution. npm ERR! npm ERR! See /home/node/.npm/eresolve-report.txt for a full report.
[ "skip the dependency tree checking.\nnpm install @ng-bootstrap/ng-bootstrap --legacy-peer-deps\n", "It seems like it is a version incompatibility issue.\nTo fix this issue, you need to update the @angular/common package in your Angular project to the version that is compatible with the @ng-bootstrap/ng-bootstrap package.\n\nOpen a terminal window and navigate to the root directory of your Angular project.\n\nRun the following command to update the @angular/common package to the version that is compatible with the @ng-bootstrap/ng-bootstrap package:\n\n\nnpm install --save @angular/[email protected]\n" ]
[ 0, 0 ]
[]
[]
[ "angular" ]
stackoverflow_0074672225_angular.txt
Q: How does Nuxt3 know which code to execute on the server-side vs client-side? I am wanting to use Nuxt3. From my understanding, it uses universal rendering, which is like a cross between CSR and SSR. I have a few questions, however, before I get started. How does Nuxt3 determine which code is executed on the client-side vs. the server-side? If I wanted to use JWT auth, will Nuxt3 know that this would be stored on the client-side? If I wanted to use Pinia as my state management library, which will manage state in the client, how would Nuxt3 be able to distinguish that? What about API requests or using 3rd party realtime services like Pusher? A: [assuming that you do have ssr: true in your Nuxt config file] Nuxt runs your app in an isomorphic way, meaning that most of the code should both run on the server and on the client. Here are all the various hooks available for Nuxt3, and an explanation as of where they are supposed to run (server, client or both). This lifecycle for Nuxt2 may be useful because it's more visual overall. Note that some parts of Nuxt can be exclusive to the server (server routes) or client (middleware). Depending on how you use and implement your JWT, you may give some hints to Nuxt. Assuming that you want to use it as a plugin, you could have: /plugins/myPlugin.ts to have it isomorphic /plugins/myPlugin.client.ts to have it only on the client side (reciprocity for the server.ts suffix) It all depends if your package/implementation can be isomorphic or not. There is no need to have everything running on the server if there is no benefit. Please also note that some code can only run on the client (using window) or on the server (using fs). You can also of course use some dirty conditional like if (process.client) { ... into an isomorphic place (middleware, composable, etc). Pinia would probably be used with it's Nuxt module, so you don't really need to worry about it running on the server or the client. I'm not sure if there are parts that could run on both btw. If there are such situations, don't worry: the core team of Vue have already done that work for you. Too vague of a question, I'd say it depends. You need to take that decision, based on how a given NPM package works and what you do want to achieve with it. If it supports a server, that would probably be faster there rather than on the client. For things only available on the client, you can import it globally with a client side-only plugin, or import it into a local component + make a conditional (or use a lifecycle hook that is run only on the client).
How does Nuxt3 know which code to execute on the server-side vs client-side?
I am wanting to use Nuxt3. From my understanding, it uses universal rendering, which is like a cross between CSR and SSR. I have a few questions, however, before I get started. How does Nuxt3 determine which code is executed on the client-side vs. the server-side? If I wanted to use JWT auth, will Nuxt3 know that this would be stored on the client-side? If I wanted to use Pinia as my state management library, which will manage state in the client, how would Nuxt3 be able to distinguish that? What about API requests or using 3rd party realtime services like Pusher?
[ "[assuming that you do have ssr: true in your Nuxt config file]\nNuxt runs your app in an isomorphic way, meaning that most of the code should both run on the server and on the client.\nHere are all the various hooks available for Nuxt3, and an explanation as of where they are supposed to run (server, client or both). This lifecycle for Nuxt2 may be useful because it's more visual overall.\nNote that some parts of Nuxt can be exclusive to the server (server routes) or client (middleware).\n\nDepending on how you use and implement your JWT, you may give some hints to Nuxt. Assuming that you want to use it as a plugin, you could have:\n\n/plugins/myPlugin.ts to have it isomorphic\n/plugins/myPlugin.client.ts to have it only on the client side (reciprocity for the server.ts suffix)\n\nIt all depends if your package/implementation can be isomorphic or not. There is no need to have everything running on the server if there is no benefit.\nPlease also note that some code can only run on the client (using window) or on the server (using fs).\nYou can also of course use some dirty conditional like if (process.client) { ... into an isomorphic place (middleware, composable, etc).\n\nPinia would probably be used with it's Nuxt module, so you don't really need to worry about it running on the server or the client. I'm not sure if there are parts that could run on both btw.\nIf there are such situations, don't worry: the core team of Vue have already done that work for you.\n\nToo vague of a question, I'd say it depends. You need to take that decision, based on how a given NPM package works and what you do want to achieve with it.\nIf it supports a server, that would probably be faster there rather than on the client.\nFor things only available on the client, you can import it globally with a client side-only plugin, or import it into a local component + make a conditional (or use a lifecycle hook that is run only on the client).\n" ]
[ 0 ]
[]
[]
[ "nuxt.js", "nuxtjs3", "pinia", "vue.js" ]
stackoverflow_0074672537_nuxt.js_nuxtjs3_pinia_vue.js.txt
Q: Python combinations of multiple lists of different sizes

I am trying to swap items between multiple lists, and I wanted to know if there is any method to generate combinations between multiple lists of different sizes?
For example, I have these 3 lists:

a = [(0, 0), (1, 0), (2, 0)]
b = [(0, 2), (1, 2), (2, 2)]
c = [(0, 3), (1, 3)]

Expected result:

a : [(0, 3), (0, 2), (0, 0)]
b : [(1, 3), (1, 2), (1, 0)]
c : [(2, 2), (2, 0)]

a : [(0, 3), (0, 2), (0, 0)]
b : [(1, 3), (1, 2), (2, 0)]
c : [(2, 2), (1, 0)]

...

a : [(0, 3), (0, 2)]
b : [(1, 3), (1, 2), (0, 0)]
c : [(2, 2), (2, 0), (1, 0)]

I found this code here (python combinations of multiple list):

import itertools as it
import numpy as np

a = [(0, 0), (1, 0), (2, 0)]
b = [(0, 2), (1, 2), (2, 2)]
c = [(0, 3), (1, 3)]

def combination(first, *rest):
    for i in it.product([first], *(it.permutations(j) for j in rest)):
        yield tuple(zip(*i))

for i in combination(c, b, a):
    print("a :", list(i[0]))
    print("b :", list(i[1]))
    print("c :", list(i[2]))

It works perfectly fine if the lists are the same size.

A: Try

adding None to your lists so that they all have the same length,
using sympy.utilities.iterables.multiset_permutations instead of it.permutations, and
finally filtering out None values from the output.

That should generalize your approach for lists of equal sizes in a natural way:

import itertools as it
from sympy.utilities.iterables import multiset_permutations

a = [(0, 0), (1, 0), (2, 0)]
b = [(0, 2), (1, 2), (2, 2)]
c = [(0, 3), (1, 3), None]

def combination(first, *rest):
    for i in it.product([first], *(multiset_permutations(j) for j in rest)):
        yield tuple(zip(*i))

for i in combination(c, b, a):
    print("a :", [val for val in i[0] if val])
    print("b :", [val for val in i[1] if val])
    print("c :", [val for val in i[2] if val])
Python combinations of multiple lists of different sizes
I am trying to swap items between multiple lists, and I wanted to know if there is any method to generate combinations between multiple lists of different sizes?
For example, I have these 3 lists:

a = [(0, 0), (1, 0), (2, 0)]
b = [(0, 2), (1, 2), (2, 2)]
c = [(0, 3), (1, 3)]

Expected result:

a : [(0, 3), (0, 2), (0, 0)]
b : [(1, 3), (1, 2), (1, 0)]
c : [(2, 2), (2, 0)]

a : [(0, 3), (0, 2), (0, 0)]
b : [(1, 3), (1, 2), (2, 0)]
c : [(2, 2), (1, 0)]

...

a : [(0, 3), (0, 2)]
b : [(1, 3), (1, 2), (0, 0)]
c : [(2, 2), (2, 0), (1, 0)]

I found this code here (python combinations of multiple list):

import itertools as it
import numpy as np

a = [(0, 0), (1, 0), (2, 0)]
b = [(0, 2), (1, 2), (2, 2)]
c = [(0, 3), (1, 3)]

def combination(first, *rest):
    for i in it.product([first], *(it.permutations(j) for j in rest)):
        yield tuple(zip(*i))

for i in combination(c, b, a):
    print("a :", list(i[0]))
    print("b :", list(i[1]))
    print("c :", list(i[2]))

It works perfectly fine if the lists are the same size.
[ "Try\n\nadding None to your lists so that they all have the same length,\nuse sympy.utilities.iterables.multiset_permutations instead of,\nit.permutations, and\nfinally filter out None values from the output.\n\nThat should generalize in a natural way your approach for lists of equal sizes:\nimport itertools as it\nfrom sympy.utilities.iterables import multiset_permutations\n\na = [(0, 0), (1, 0), (2, 0)]\nb = [(0, 2), (1, 2), (2, 2)]\nc = [(0, 3), (1, 3), None]\n\ndef combination(first, *rest):\n for i in it.product([first], *(multiset_permutations(j) for j in rest)):\n yield tuple(zip(*i))\n\nfor i in combination(c, b, a):\n print(\"a :\", [val for val in i[0] if val])\n print(\"b :\", [val for val in i[1] if val])\n print(\"c :\", [val for val in i[2] if val])\n\n" ]
[ 0 ]
[]
[]
[ "combinations", "list", "python", "python_itertools" ]
stackoverflow_0074646518_combinations_list_python_python_itertools.txt
Q: Validating an IP address using bash script

I am very new to bash scripting, so basically I cannot understand it very much, so please can anyone suggest ways that I can learn faster. I am trying to write a bash script to read an IP address and validate it. So please can you tell me what mistakes I am making in the script that I have used.

function valid_ip()
{
    local IPA1=$1
    local stat=1

    if [[ $IPA1 =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]];
    then
        OIFS=$IFS

        IFS='.'
        ip=($ip)
        IFS=$OIFS

        [[ ${ip[0]} -le 255 && ${ip[1]} -le 255 \
            && ${ip[2]} -le 255 && ${ip[3]} -le 255 ]]
        stat=$?
    fi
    return $stat
}

I have also taken this code from the internet, just to understand the concept, but I still cannot get it.

A: Please read the comments:

function valid_ip()
{
    local IPA1=$1
    local stat=1

    if [[ $IPA1 =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]];
    then
        OIFS=$IFS

        IFS='.' # read man, you will understand; this is the internal field separator, which is set to '.'
        ip=($ip) # the IP value is saved as an array
        IFS=$OIFS # setting IFS back to its original value

        [[ ${ip[0]} -le 255 && ${ip[1]} -le 255 \
            && ${ip[2]} -le 255 && ${ip[3]} -le 255 ]] # it's testing whether any part of the IP is more than 255
        stat=$? # if any part of the IP as tested above is more than 255, stat will have a non-zero value
    fi
    return $stat # as expected, returning
}

You can check the default value of IFS with printf '%q' $IFS before setting it to any other value.

A: function valid_ip()
{
    local IPA1=$1
    local stat=1

    if [[ $IPA1 =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]];
    then
        OIFS=$IFS # Save the actual IFS in a var named OIFS
        IFS='.' # IFS (Internal Field Separator) set to .
        ip=($ip) # Converts $ip into an array, saving the IP fields in it?
        IFS=$OIFS # Restore the old IFS

        [[ ${ip[0]} -le 255 && ${ip[1]} -le 255 && ${ip[2]} -le 255 && ${ip[3]} -le 255 ]] # If $ip[0], $ip[1], $ip[2] and $ip[3] are less than or equal to 255 then

        stat=$? # $stat is equal to TRUE if it is a valid IP, or FALSE if it isn't

    fi # End if

    return $stat # Returns $stat
}

A: I have had issues finding a good answer to this as well, but I finally came up with a line that properly validates whether it is a real IP or not (format-wise). Change the IP to a non-valid IP and see no output instead:

echo '255.154.12.231' | grep -E '(([0-9]{1,3})\.){3}([0-9]{1,3}){1}' | grep -vE '25[6-9]|2[6-9][0-9]|[3-9][0-9][0-9]' | grep -Eo '(([0-9]{1,2}|1[0-9]{1,2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]{1,2}|1[0-9]{1,2}|2[0-4][0-9]|25[0-5]){1}'

The 1st part checks for 1-3 digits from 0-9 followed by a dot. It will do it 3 times {3}, then repeats without the .
The 2nd part gets rid of any line containing 256 to 999 in separate stages. You have to be explicit explaining the numbers; so 256-259, 260-299, 300-999. There is probably a way to make this check better with a + somewhere, as to say any number greater than X.
The 3rd part was where I started, and it was grepping the exact term, meaning it was taking with it undesirable characters hanging on either side of it. It is only while writing this that I realized I once again found my answer by getting rid of the impossibles, rather than looking for the possibles, so the last part is not needed. Try this:

echo '255.154.12.231' | grep -E '(([0-9]{1,3})\.){3}([0-9]{1,3}){1}' | grep -vE '25[6-9]|2[6-9][0-9]|[3-9][0-9][0-9]'

A: This is only checking IPv4. The easiest way to check if it's any kind of IP address (IPv4 or IPv6) is with a python one-liner:

#!/bin/bash

IP_ADDR="::1"

if python3 -c "import ipaddress; ipaddress.ip_address('${IP_ADDR}')" 2>/dev/null; then
    echo "IP_ADDR is a valid IPv4 or IPv6 address"
else
    echo "IP_ADDR is not a valid IPv4 or IPv6 address"
fi
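For completeness, here is a hedged, self-contained pure-bash variant that also fixes the $ip/$IPA1 mismatch present in the snippets above (the function argument was never the variable being split); the test addresses are arbitrary:

#!/bin/bash

valid_ip() {
    local ip=$1
    local stat=1
    if [[ $ip =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]]; then
        local OIFS=$IFS
        IFS='.'
        # Split the candidate into its four octets.
        local -a octets=($ip)
        IFS=$OIFS
        [[ ${octets[0]} -le 255 && ${octets[1]} -le 255 &&
           ${octets[2]} -le 255 && ${octets[3]} -le 255 ]]
        stat=$?
    fi
    return $stat
}

for addr in 192.168.0.1 256.1.1.1 abc; do
    if valid_ip "$addr"; then
        echo "$addr is valid"
    else
        echo "$addr is invalid"
    fi
done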
Validating an IP address using bash script
I am very new to bash scripting, so basically I cannot understand it very much, so please can anyone suggest ways that I can learn faster. I am trying to write a bash script to read an IP address and validate it. So please can you tell me what mistakes I am making in the script that I have used.

function valid_ip()
{
    local IPA1=$1
    local stat=1

    if [[ $IPA1 =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]];
    then
        OIFS=$IFS

        IFS='.'
        ip=($ip)
        IFS=$OIFS

        [[ ${ip[0]} -le 255 && ${ip[1]} -le 255 \
            && ${ip[2]} -le 255 && ${ip[3]} -le 255 ]]
        stat=$?
    fi
    return $stat
}

I have also taken this code from the internet, just to understand the concept, but I still cannot get it.
[ "Please read comments: \n function valid_ip()\n{\n local IPA1=$1\n local stat=1\n\n if [[ $IPA1 =~ ^[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}$ ]];\n then\n OIFS=$IFS\n\n IFS='.' #read man, you will understand, this is internal field separator; which is set as '.' \n ip=($ip) # IP value is saved as array\n IFS=$OIFS #setting IFS back to its original value;\n\n [[ ${ip[0]} -le 255 && ${ip[1]} -le 255 \\\n && ${ip[2]} -le 255 && ${ip[3]} -le 255 ]] # It's testing if any part of IP is more than 255\n stat=$? #If any part of IP as tested above is more than 255 stat will have a non zero value\n fi\n return $stat # as expected returning\n\nYou can check the default value of IFS by printf '%q' $IFS before setting it to any other value.\n", "function valid_ip()\n{\n local IPA1=$1\n local stat=1\n\n if [[ $IPA1 =~ ^[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}$ ]];\n then\n OIFS=$IFS # Save the actual IFS in a var named OIFS\n IFS='.' # IFS (Internal Field Separator) set to .\n ip=($ip) # ¿Converts $ip into an array saving ip fields on it?\n IFS=$OIFS # Restore the old IFS\n\n [[ ${ip[0]} -le 255 && ${ip[1]} -le 255 && ${ip[2]} -le 255 && ${ip[3]} -le 255 ]] # If $ip[0], $ip[1], $ip[2] and $ip[3] are minor or equal than 255 then\n\n stat=$? # $stat is equal to TRUE if is a valid IP or FALSE if it isn't\n\n fi # End if\n\n return $stat # Returns $stat\n}\n\n", "I have had issues finding a good answer to this, as well, but I finally came up with a line that properly validates if it is a real IP or not (format-wise). Change the IP to non-valid IP and see no output instead:\necho '255.154.12.231' | grep -E '(([0-9]{1,3})\\.){3}([0-9]{1,3}){1}' | grep -vE '25[6-9]|2[6-9][0-9]|[3-9][0-9][0-9]' | grep -Eo '(([0-9]{1,2}|1[0-9]{1,2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]{1,2}|1[0-9]{1,2}|2[0-4][0-9]|25[0-5]){1}'\n\n1st part checks for 1-3 digits from 0-9 followed by a dot. It will do it 3 times{3}then repeats without the .\n2nd part gets rid of any line containing 256 to 999 in separate stages. You have to be explicit explaining the numbers; so 256-259, 260-299,300-999.\nThere is probably a way to make this check better with a + somewhere, as to say any number greater than X.\n3rd part, was where I started and it was grepping the exact term, meaning it was taking with it undesirable characters hanging on either side of it. It is only while writing this that I realized I once again find my answer by getting rid of the impossibles, rather than looking for the possibles, so the last part is not needed. Try this:\necho '255.154.12.231' | grep -E '(([0-9]{1,3})\\.){3}([0-9]{1,3}){1}' | grep -vE '25[6-9]|2[6-9][0-9]|[3-9][0-9][0-9]'\n\n", "This is only checking IPv4. The easiest way to check if it's any kind of IP address (IPv4 or IPv6) is with a python one-liner:\n#!/bin/bash\n\nIP_ADDR=\"::1\"\n\nif python3 -c \"import ipaddress; ipaddress.ip_address('${IP_ADDR}')\" 2>/dev/null; then\n echo \"IP_ADDR is a valid IPv4 or IPv6 address\"\nelse\n echo \"IP_ADDR is not a valid IPv4 or IPv6 address\"\nfi\n\n" ]
[ 1, 1, 1, 0 ]
[]
[]
[ "bash" ]
stackoverflow_0023675400_bash.txt
Q: next-auth/discord callbacks aren't modifying data

I'm using next-auth/discord; however, when using the session callback to set a user id on the session, it does not set the property.

[...nextauth].js

import NextAuth from "next-auth/next";
import DiscordProvider from "next-auth/providers/discord";

export default NextAuth({
    providers: [
        DiscordProvider({
            ...
            session: {
                strategy: "jwt",
                ...
            },
            callbacks: {
                async session({ session, user }) {
                    session.user.id = user.id;
                    return session;
                }
            }
        })
    ]
});

/api/page.js

import { getSession } from 'next-auth/react';

export default async function handler(req, res) {
    const session = await getSession({ req });
    console.log(session);
}

This logs:

{ user: { name: ..., email: ..., image: ... }, expires: ... }

With no user.id property.

A: Fixed it: the callbacks should have been in the NextAuth object. Also, the callbacks should have been:

async jwt({ token, user }) {
    if (user) {
        token.id = user.id
    }
    return token
},
async session({ session, token }) {
    session.user.id = token.id
    return session
}
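For clarity, here is a minimal sketch of where the blocks belong, with the session and callbacks options at the top level of the NextAuth call rather than inside the provider; the clientId/clientSecret environment variables are placeholders:

import NextAuth from "next-auth/next";
import DiscordProvider from "next-auth/providers/discord";

export default NextAuth({
    providers: [
        DiscordProvider({
            clientId: process.env.DISCORD_CLIENT_ID,         // placeholder
            clientSecret: process.env.DISCORD_CLIENT_SECRET, // placeholder
        }),
    ],
    session: { strategy: "jwt" },
    callbacks: {
        // Persist the user id into the token at sign-in time...
        async jwt({ token, user }) {
            if (user) token.id = user.id;
            return token;
        },
        // ...then expose it on the session object read by getSession().
        async session({ session, token }) {
            session.user.id = token.id;
            return session;
        },
    },
});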
next-auth/discord callbacks aren't modifying data
I'm using next-auth/discord however when using the session callback to set a user id to the session it does not set the property. [...nextauth].js import NextAuth from "next-auth/next"; import DiscordProvider from "next-auth/providers/discord"; export default NextAuth({ providers: [ DiscordProvider({ ... session: { strategy: "jwt", ... }, callbacks: { async session({ session, user }) { session.user.id = user.id; return session; } } }) ] }); /api/page.js import { getSession } from 'next-auth/react'; export default async function handler(req, res) { const session = await getSession({ req }); console.log(session); } This logs: { user: { name: ..., email: ..., image: ... }, expires: ... } With no user.id property.
[ "Fixed it, callbacks should have been in NextAuth object. Also callbacks shouldve been:\nasync jwt({ token, user }) {\n if (user) {\n token.id = user.id\n }\n return token\n },\n async session({ session, token }) {\n session.user.id = token.id\n return session\n }\n\n" ]
[ 0 ]
[]
[]
[ "javascript", "next.js", "next_auth" ]
stackoverflow_0074672451_javascript_next.js_next_auth.txt
Q: How to take a sum (in denominator) for calculating group by weighted average in a dataframe? I have a data frame that looks like this. import pandas as pd import numpy as np data = [ ['A',1,2,3,4], ['A',5,6,7,8], ['A',9,10,11,12], ['B',13,14,15,16], ['B',17,18,19,20], ['B',21,22,23,24], ['B',25,26,27,28], ['C',29,30,31,32], ['C',33,34,35,36], ['C',37,38,39,40], ['D',13,14,15,0], ['D',0,18,19,0], ['D',0,0,23,0], ['D',0,0,0,0], ['E',13,14,15,0], ['E',0,18,19,0], ['F',0,0,23,0], ] df = pd.DataFrame(data, columns=['Name', 'num1', 'num2', 'num3', 'num4']) df Then I have the following code to calculate the group by weighted average. weights = [10,20,30,40] df=df.groupby('Name').agg(lambda g: sum(g*weights[:len(g)])/sum(weights[:len(g)])) The problem lies in sum(weights[:len(g)]) because all the groups do not have equal rows. As you can see above, group A has 3 rows, B has 4 rows, C has 3 rows, D has 4 rows, E has 2 rows and F has 1 row. Depending upon the rows, it needs to calculate the sum. Now, the above code returns me the weighted average by calculating For Group A, the first column calculates the weighted average as (1 X 10+5 X 20+9 X 30)/60 but it should calculate the weighted average as (1 X20+5 X 30+9 X 40)/90 For Group E, the first column calculates the weighted average as (13 X 10+0 X 20)/30 but it should calculate the weighted average as (13 X 30+0 X 40)/70 Current Result Expected result A: i edit your code little bit n = len(weights) df=df.groupby('Name').agg(lambda g: sum(g*weights[n-len(g):])/sum(weights[n-len(g):])) output(df): num1 num2 num3 num4 Name A 5.9 6.9 7.9 8.9 B 21.0 22.0 23.0 24.0 C 33.9 34.9 35.9 36.9 D 1.3 5.0 12.2 0.0 E 5.6 16.3 17.3 0.0 F 0.0 0.0 23.0 0.0 A: @PandaKim's solution suffices; for efficiency, depending on your data size, you may have to take a longer route: n = len(weights) pos = n - df.groupby('Name').size() pos = [weights[posn : n] for posn in pos] pos = np.concatenate(pos) (df .set_index('Name') .mul(pos, axis=0) .assign(wt = pos) .groupby('Name') .sum() .pipe(lambda df: df.filter(like='num') .div(df.wt, axis=0) ) ) num1 num2 num3 num4 Name A 5.888889 6.888889 7.888889 8.888889 B 21.000000 22.000000 23.000000 24.000000 C 33.888889 34.888889 35.888889 36.888889 D 1.300000 5.000000 12.200000 0.000000 E 5.571429 16.285714 17.285714 0.000000 F 0.000000 0.000000 23.000000 0.000000
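As a quick sanity check on the arithmetic described in the question, this standalone snippet recomputes group A's first column by hand under the end-aligned weighting (a sketch, independent of the answers above):

# Group A has 3 rows, so it uses the last 3 of the 4 weights: 20, 30, 40.
values = [1, 5, 9]
weights = [20, 30, 40]

weighted_avg = sum(v * w for v, w in zip(values, weights)) / sum(weights)
print(weighted_avg)  # 5.888..., matching num1 for group A in both answers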
How to take a sum (in denominator) for calculating group by weighted average in a dataframe?
I have a data frame that looks like this. import pandas as pd import numpy as np data = [ ['A',1,2,3,4], ['A',5,6,7,8], ['A',9,10,11,12], ['B',13,14,15,16], ['B',17,18,19,20], ['B',21,22,23,24], ['B',25,26,27,28], ['C',29,30,31,32], ['C',33,34,35,36], ['C',37,38,39,40], ['D',13,14,15,0], ['D',0,18,19,0], ['D',0,0,23,0], ['D',0,0,0,0], ['E',13,14,15,0], ['E',0,18,19,0], ['F',0,0,23,0], ] df = pd.DataFrame(data, columns=['Name', 'num1', 'num2', 'num3', 'num4']) df Then I have the following code to calculate the group by weighted average. weights = [10,20,30,40] df=df.groupby('Name').agg(lambda g: sum(g*weights[:len(g)])/sum(weights[:len(g)])) The problem lies in sum(weights[:len(g)]) because all the groups do not have equal rows. As you can see above, group A has 3 rows, B has 4 rows, C has 3 rows, D has 4 rows, E has 2 rows and F has 1 row. Depending upon the rows, it needs to calculate the sum. Now, the above code returns me the weighted average by calculating For Group A, the first column calculates the weighted average as (1 X 10+5 X 20+9 X 30)/60 but it should calculate the weighted average as (1 X20+5 X 30+9 X 40)/90 For Group E, the first column calculates the weighted average as (13 X 10+0 X 20)/30 but it should calculate the weighted average as (13 X 30+0 X 40)/70 Current Result Expected result
[ "i edit your code little bit\nn = len(weights)\ndf=df.groupby('Name').agg(lambda g: sum(g*weights[n-len(g):])/sum(weights[n-len(g):]))\n\noutput(df):\n num1 num2 num3 num4\nName \nA 5.9 6.9 7.9 8.9\nB 21.0 22.0 23.0 24.0\nC 33.9 34.9 35.9 36.9\nD 1.3 5.0 12.2 0.0\nE 5.6 16.3 17.3 0.0\nF 0.0 0.0 23.0 0.0\n\n", "@PandaKim's solution suffices; for efficiency, depending on your data size, you may have to take a longer route:\nn = len(weights)\npos = n - df.groupby('Name').size()\npos = [weights[posn : n] for posn in pos]\npos = np.concatenate(pos)\n(df\n.set_index('Name')\n.mul(pos, axis=0)\n.assign(wt = pos)\n.groupby('Name')\n.sum()\n.pipe(lambda df: df.filter(like='num')\n .div(df.wt, axis=0)\n )\n)\n\n num1 num2 num3 num4\nName\nA 5.888889 6.888889 7.888889 8.888889\nB 21.000000 22.000000 23.000000 24.000000\nC 33.888889 34.888889 35.888889 36.888889\nD 1.300000 5.000000 12.200000 0.000000\nE 5.571429 16.285714 17.285714 0.000000\nF 0.000000 0.000000 23.000000 0.000000\n\n" ]
[ 2, 0 ]
[]
[]
[ "data_science_experience", "dataframe", "pandas", "python" ]
stackoverflow_0074672338_data_science_experience_dataframe_pandas_python.txt
Q: Flutter: Index out of range error, List Element Availability In Widget

import 'dart:convert';

import 'package:flutter/material.dart';
import 'package:http/http.dart' as http;

import '../data/monthly_cpi.dart';
import '../model/monthlycpi.dart';

class MonthlyCPIList extends StatefulWidget {
  MonthlyCPIList({Key? key}) : super(key: key);

  @override
  _MonthlyCPIListState createState() => _MonthlyCPIListState();
}

class _MonthlyCPIListState extends State<MonthlyCPIList> {
  List<MonthlyCPI> monthlyCPIList = [];

  void getMonthlyCPIfromApi() async {
    var url = Uri.parse('http://127.0.0.1:8000/');
    final response = await http.get(url);
    Iterable decoded_response = json.decode(response.body);
    monthlyCPIList =
        decoded_response.map((model) => MonthlyCPI.fromJson(model)).toList();
  }

  @override
  void initState() {
    super.initState();
    getMonthlyCPIfromApi();
    print(monthlyCPIList.elementAt(0).cpi_description);
  }

My code is above. What I'm not understanding is if I put the statement:

print(monthlyCPIList.elementAt(0).cpi_description);

at the end of the getMonthlyCPIfromApi() method, the element prints fine. However, if I place the print statement at the end of the initState() method, I get an error: Index out of range, no indices are valid: 0.
I feel like I'm missing something fundamental about the context flutter sets around lists (i.e. local vs global etc.). Any help would be much appreciated, thank you!

A:
getMonthlyCPIfromApi() is an async function internally, but for initState it's just another function call; hence the initState code doesn't wait for getMonthlyCPIfromApi() to complete before executing print(monthlyCPIList.elementAt(0).cpi_description);

initState itself is not an async function; if you try to add async and await on initState you will get errors. The print statement would not have thrown any error if the code were something like this, which is not possible since it's an overridden function:

INCORRECT CODE (HYPOTHETICAL EXAMPLE)
@override
void initState() async {
  super.initState();
  await getMonthlyCPIfromApi();
  print(monthlyCPIList.elementAt(0).cpi_description);
}

To make your code work, what can be done is:

getMonthlyCPIfromApi().then((value) {
  print(monthlyCPIList.elementAt(0).cpi_description);});
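A minimal sketch of another common pattern is shown below, moving the await into a separate async helper; it assumes getMonthlyCPIfromApi is changed to return Future<void> so it can be awaited:

@override
void initState() {
  super.initState();
  _loadData(); // fire-and-forget; initState itself cannot be marked async
}

// Hypothetical helper: awaiting here is legal because the method is async.
Future<void> _loadData() async {
  await getMonthlyCPIfromApi();
  if (monthlyCPIList.isNotEmpty) {
    print(monthlyCPIList.elementAt(0).cpi_description);
  }
  setState(() {}); // rebuild now that the data has actually arrived
}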
Flutter: Index out of range error, List Element Availability In Widget
import 'dart:convert'; import 'package:flutter/material.dart'; import 'package:http/http.dart' as http; import '../data/monthly_cpi.dart'; import '../model/monthlycpi.dart'; class MonthlyCPIList extends StatefulWidget { MonthlyCPIList({Key? key}) : super(key: key); @override _MonthlyCPIListState createState() => _MonthlyCPIListState(); } class _MonthlyCPIListState extends State<MonthlyCPIList> { List<MonthlyCPI> monthlyCPIList = []; void getMonthlyCPIfromApi() async { var url = Uri.parse('http://127.0.0.1:8000/'); final response = await http.get(url); Iterable decoded_response = json.decode(response.body); monthlyCPIList = decoded_response.map((model) => MonthlyCPI.fromJson(model)).toList(); } @override void initState() { super.initState(); getMonthlyCPIfromApi(); print(monthlyCPIList.elementAt(0).cpi_description); } My code is above. What I'm not understanding is if I put the statement: print(monthlyCPIList.elementAt(0).cpi_description); at the end of the getMonthlyCPIfromAPI() method, the element prints fine. However, if I place the print statement at the end of the initState() method, I get an error: Index out of range, no indices are valid: 0. I feel like I'm missing something fundamental about the context flutter sets around lists (i.e. local vs global etc.). Any help would be much appreciated, thank you!
[ "\ngetMonthlyCPIfromApi() is an async function internally but for initState it's just another function call, hence initState code doesn't wait for getMonthlyCPIfromApi() to complete and then execute print(monthlyCPIList.elementAt(0).cpi_description);\n\ninitState itself is not an async function, if you try to add async and await on initState you will get errors, print statement would not have thrown any error if the code was something like this which is not possible since its an overridden function\n INCORRECT CODE (HYPOTHETICAL EXAMPLE)\n @override\n void initState() async {\n super.initState();\n await getMonthlyCPIfromApi();\n print(monthlyCPIList.elementAt(0).cpi_description);\n }\n\n\nTo make your code work what can be done is\n getMonthlyCPIfromApi().then((value) {\n print(monthlyCPIList.elementAt(0).cpi_description);});\n\n\n\n" ]
[ 1 ]
[]
[]
[ "dart", "flutter", "web" ]
stackoverflow_0074671219_dart_flutter_web.txt
Q: Convert Iterator of Result to Result of Iterator

Until now, I have used std::fs::read_to_string and then String.lines's std::str::Lines (which is an Iterator<Item = &str>) to read a file "line by line". This obviously reads the whole file into memory, which is not ideal.
So, there's BufRead.lines() to read a file truly line by line. This returns std::io::Lines (which is an Iterator<Item = Result<String>>).
How do I convert from one iterator type to the other without collecting first?

A: You cannot transform an Iterator<Item = Result<_, _>> into Result<Iterator<Item = _>, _> because, if we haven't iterated the iterator yet, we don't know whether it will yield an error.
What you can do is to collect() all items ahead of time into a Result<Vec<_>, _> (which of course you can iterate over) since Result implements FromIterator.
If you're fine with getting Err only for the first Err (and successfully iterating over all items until then), you can also use itertools::process_results():

let result: Result<SomeType, _> = itertools::process_results(iter, |iter| -> SomeType {
    // Here we have `iter` of type `Iterator<Item = _>`. Process it and return some result.
});
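For the file-lines case specifically, here is a minimal self-contained sketch of the collect() approach (the file name is arbitrary):

use std::fs::File;
use std::io::{self, BufRead, BufReader};

fn main() -> io::Result<()> {
    let reader = BufReader::new(File::open("input.txt")?);

    // Result implements FromIterator, so collecting an
    // Iterator<Item = io::Result<String>> yields io::Result<Vec<String>>,
    // failing on the first Err encountered.
    let lines: Vec<String> = reader.lines().collect::<io::Result<Vec<_>>>()?;

    for line in &lines {
        println!("{line}");
    }
    Ok(())
}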
Convert Iterator of Result to Result of Iterator
Until now, I have used std::fs::read_to_string and then String.lines's std::str::Lines (which is an Iterator<Item = &str>) to read a file "line by line". This obviously reads the whole file into memory, which is not ideal. So, there's BufRead.lines() to read a file truly line by line. This returns std::io::Lines (which is an Iterator<Item = Result<String>>). How do I convert from one iterator type to the other without collecting first?
[ "You cannot transform a Iterator<Item = Result<_, _>> into Result<Iterator<Item = _>, _> because if we haven't iterated the iterator yet we don't know whether we yield an error.\nWhat you can do is to collect() all items ahead of time into a Result<Vec<_>, _> (which of course you can iterate over) since Result implements FromIterator.\nIf you're fine with getting Err only for the first Err (and successfully iterating over all items until that), you can also use itertools::process_results():\nlet result: Result<SomeType, _> = itertools::process_results(iter, |iter| -> SomeType {\n // Here we have `iter` of type `Iterator<Item = _>`. Process it and return some result.\n});\n\n" ]
[ 0 ]
[ "You can't there has to be an owner of the values which is the full String in case of String.lines.\nYou can however turn the Iterator<Item = Result<String> into an iterator over Strings:\nlet mut read = BufReader::new(File::open(\"src/main.rs\").unwrap());\nlet lines_iter = read.lines().map(Result::unwrap_or_default);\n\nYou can take an Iterator over items of either String or &str like this:\nfn solve<T: AsRef<str>>(input: impl Iterator<Item = T>) {\n for line in input {\n let line = line.as_ref();\n // do something with line\n }\n}\n\n" ]
[ -1 ]
[ "iterator", "rust" ]
stackoverflow_0074670709_iterator_rust.txt
Q: Can't get attribute to work on a method of a class

Putting the "self.budget" attribute on the buy method returns the error 'Shopper' object has no attribute 'budget'; this also happens when calling the gifts list to append the additional gift bought through the buy method. As such, the list of gifts is not adjusted, the budget remains unchanged and the quantity is not updated.

class Shopper():
    def __init__(self, gifts, quantity, budget):
        self.buy(gifts, quantity)
        self.budget = budget
        self.quantity=0
        self.gifts=gifts
        #list to store the gifts bought
        self.gifts=[]
        if self.budget < quantity * 100:
            self.budget = budget
            print("Insufficient budget")
        else:
            self.gifts.extend(gifts)
            self.quantity+=quantity
            self.budget-=quantity*100

    #method for buying an additional gift at a certain quantity
    def buy(self, gift, quantity):
        if self.budget < quantity * 100:
            self.budget = budget
            print("Insufficient budget")
        else:
            self.gifts.extend(gift)
            self.quantity+=quantity
            self.budget-=quantity*100

#the other part prints the list of gifts bought, the budget left, and the total number of gifts bought. It works when the buy method is empty or is removed due to attribute errors.

#Input
#Shopper1 = Shopper(['Toys', 'Clothes', 'Foods'], 10, 5000)
#Shopper1.enlist("book", 1)

#Gives error, "Attribute Error: 'Shopper' object has no attribute 'budget'".

A: It looks like you've named both classes the same thing, and the first definition doesn't have the 'budget' attribute.
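Independent of the duplicate-naming point above, the same traceback arises whenever a method that reads self.budget runs before __init__ has assigned it. A minimal sketch of a safe ordering, trimmed to the relevant parts:

class Shopper:
    def __init__(self, gifts, quantity, budget):
        # Assign every attribute before calling any method that reads them.
        self.budget = budget
        self.quantity = 0
        self.gifts = []
        self.buy(gifts, quantity)  # now self.budget exists

    def buy(self, gifts, quantity):
        if self.budget < quantity * 100:
            print("Insufficient budget")
        else:
            self.gifts.extend(gifts)
            self.quantity += quantity
            self.budget -= quantity * 100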
Can't get attribute to work on a method of a class
Putting the "self.budget" attribute on the buy method returns the error 'Shopper' object has no attribute 'budget'; this also happens when calling the gifts list to append the additional gift bought through the buy method. As such, the list of gifts is not adjusted, the budget remains unchanged and the quantity is not updated.

class Shopper():
    def __init__(self, gifts, quantity, budget):
        self.buy(gifts, quantity)
        self.budget = budget
        self.quantity=0
        self.gifts=gifts
        #list to store the gifts bought
        self.gifts=[]
        if self.budget < quantity * 100:
            self.budget = budget
            print("Insufficient budget")
        else:
            self.gifts.extend(gifts)
            self.quantity+=quantity
            self.budget-=quantity*100

    #method for buying an additional gift at a certain quantity
    def buy(self, gift, quantity):
        if self.budget < quantity * 100:
            self.budget = budget
            print("Insufficient budget")
        else:
            self.gifts.extend(gift)
            self.quantity+=quantity
            self.budget-=quantity*100

#the other part prints the list of gifts bought, the budget left, and the total number of gifts bought. It works when the buy method is empty or is removed due to attribute errors.

#Input
#Shopper1 = Shopper(['Toys', 'Clothes', 'Foods'], 10, 5000)
#Shopper1.enlist("book", 1)

#Gives error, "Attribute Error: 'Shopper' object has no attribute 'budget'".
[ "It looks like you've named both classes the same thing, and the first definition doesn't have the 'budget' attribute.\n" ]
[ 0 ]
[]
[]
[ "class", "inheritance", "methods", "oop", "python" ]
stackoverflow_0074672632_class_inheritance_methods_oop_python.txt
Q: How does a for loop inside of an array (square brackets) work?

I need to increase the size of a list using a for loop, and I figured out how to do it, I just don't understand the math and the logic behind it.

from random import randint

random_values = randint(0,5)
size = 5
list = [ random_values for i in range(size)]

This will create a list (array) with 5 random values. I just don't understand the logic behind the for loop in the square brackets. How does this increase the size and add commas to the list (array)? Please let me know, it will help a lot.

A: That's a "list comprehension" - a way of writing a for loop that generates a list. The term itself is from way back in the day, and I've never really comprehended why it's called a comprehension, but let's just go with it.
You start with an iterable on the right side of the for and an expression on the left: [expression for value in iterable]. Python will iterate the values from the iterator, run the expression on each, and build a list. It works the same as if you used a regular for loop and appended to an existing list.
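A short side-by-side may make the mechanics concrete; note that calling randint inside the comprehension draws a fresh value on each iteration, whereas a variable computed once beforehand would simply be repeated:

from random import randint

size = 5

# List comprehension: the expression is evaluated once per value yielded
# by the iterable on the right of the `for`.
values = [randint(0, 5) for i in range(size)]

# The same thing written as a plain loop with append().
values_loop = []
for i in range(size):
    values_loop.append(randint(0, 5))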
How does a for loop inside of an array (square brackets) work?
I need to increase the size of a list using a for loop, and I figured out how to do it, I just don't understand the math and the logic behind it.

from random import randint

random_values = randint(0,5)
size = 5
list = [ random_values for i in range(size)]

This will create a list (array) with 5 random values. I just don't understand the logic behind the for loop in the square brackets. How does this increase the size and add commas to the list (array)? Please let me know, it will help a lot.
[ "That's a \"list comprehension\" - a way of writing a for loop that generates a list. The term itself is from way back in the day, and I've never really comprehended why its called a comprehension, but lets just go with it.\nYou start with an iterable on the right side of the for and an expression on the left: [expression for value in iterable]. Python will iterate the values from the iterator, run the expression on each, and build a list. It works the same as if you used a regular for loop and appended to an existing list.\n" ]
[ 1 ]
[]
[]
[ "list_comprehension", "python" ]
stackoverflow_0074672606_list_comprehension_python.txt
Q: Using jq to count Using jq-1.5 if I have a file of JSON that looks like [{... ,"sapm_score":40.776, ...} {..., "spam_score":17.376, ...} ...] How would I get a count of the ones where sapm_score > 40? Thanks, Dan Update: I looked at the input file and the format is actually {... ,"sapm_score":40.776, ...} {..., "spam_score":17.376, ...} ... Does this change how one needs to count? A: [UPDATE: If the input is not an array, see the last section below.] count/1 I'd recommend defining a count filter (and maybe putting it in your ~/.jq), perhaps as follows: def count(s): reduce s as $_ (0;.+1); With this, assuming the input is an array, you'd write: count(.[] | select(.sapm_score > 40)) or slightly more efficiently: count(.[] | (.sapm_score > 40) // empty) This approach (counting items in a stream) is usually preferable to using length as it avoids the costs associated with constructing an array. count/2 Here's another definition of count that you might like to use (and perhaps add to ~/.jq as well): def count(stream; cond): count(stream | cond // empty); This counts the elements of the stream for which cond is neither false nor null. Now, assuming the input consists of an array, you can simply write: count(.[]; .sapm_score > 40) "sapm_score" vs "spam_score" If the point is that you want to normalize "sapm_score" to "spam_score", then (for example) you could use count/2 as defined above, like so: count(.[]; .spam_score > 40 or .sapm_score > 40) This assumes all the items in the array are JSON objects. If that is not the case, then you might want to try adding "?" after the key names: count(.[]; .spam_score? > 40 or .sapm_score? > 40) Of course all the above assumes the input is valid JSON. If that is not the case, then please see https://github.com/stedolan/jq/wiki/FAQ#processing-not-quite-valid-json If the input is a stream of JSON objects ... The revised question indicates the input consists of a stream of JSON objects (whereas originally the input was said to be an array of JSON objects). If the input consists of a stream of JSON objects, then the above solutions can easily be adapted, depending on the version of jq that you have. If your version of jq has inputs then (2) is recommended. (1) All versions: use the -s command-line option. (2) If your jq has inputs: use the -n command line option, and change .[] above to inputs, e.g. count(inputs; .spam_score? > 40 or .sapm_score? > 40) A: Filter the items that satisfy the condition then get the length. map(select(.sapm_score > 40)) | length A: Here is one way: reduce .[] as $s(0; if $s.spam_score > 40 then .+1 else . end) Try it online at jqplay.org If instead of an array the input is a sequence of newline delimited objects (jsonlines) reduce inputs as $s(0; if $s.spam_score > 40 then .+1 else . end) will work if jq is invoked with the -n flag. Here is an example: $ cat data.json { "spam_score":40.776 } { "spam_score":17.376 } $ jq -Mn 'reduce inputs as $s(0; if $s.spam_score > 40 then .+1 else . end)' data.json 1 Try it online at tio.run A: cat input.json | jq -c '. | select(.sapm_score > 40)' | wc -l should do it. The -c option prints a one-liner compact json representation of each match, and we count the number of lines jq prints.
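As a quick end-to-end check of the map/select approach, here is a tiny demo with an inline document; the scores are invented for the example:

echo '[{"spam_score":40.776},{"spam_score":17.376},{"spam_score":55.1}]' \
  | jq 'map(select(.spam_score > 40)) | length'
# prints 2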
Using jq to count
Using jq-1.5 if I have a file of JSON that looks like [{... ,"sapm_score":40.776, ...} {..., "spam_score":17.376, ...} ...] How would I get a count of the ones where sapm_score > 40? Thanks, Dan Update: I looked at the input file and the format is actually {... ,"sapm_score":40.776, ...} {..., "spam_score":17.376, ...} ... Does this change how one needs to count?
[ "[UPDATE: If the input is not an array, see the last section below.]\ncount/1\nI'd recommend defining a count filter (and maybe putting it in your ~/.jq), perhaps as follows:\n def count(s): reduce s as $_ (0;.+1);\n\nWith this, assuming the input is an array, you'd write:\n count(.[] | select(.sapm_score > 40))\n\nor slightly more efficiently:\n count(.[] | (.sapm_score > 40) // empty)\n\nThis approach (counting items in a stream) is usually preferable to using length as it avoids the costs associated with constructing an array.\ncount/2\nHere's another definition of count that you might like to use (and perhaps add to ~/.jq as well):\ndef count(stream; cond): count(stream | cond // empty);\n\nThis counts the elements of the stream for which cond is neither false nor null.\nNow, assuming the input consists of an array, you can simply write:\ncount(.[]; .sapm_score > 40)\n\n\"sapm_score\" vs \"spam_score\"\nIf the point is that you want to normalize \"sapm_score\" to \"spam_score\", then (for example) you could use count/2 as defined above, like so:\n count(.[]; .spam_score > 40 or .sapm_score > 40)\n\nThis assumes all the items in the array are JSON objects. If that is not the case, then you might want to try adding \"?\" after the key names:\ncount(.[]; .spam_score? > 40 or .sapm_score? > 40)\n\nOf course all the above assumes the input is valid JSON. If that is not the case, then please see https://github.com/stedolan/jq/wiki/FAQ#processing-not-quite-valid-json\nIf the input is a stream of JSON objects ...\nThe revised question indicates the input consists of a stream of JSON objects (whereas originally the input was said to be an array of JSON objects). If the input consists of a stream of JSON objects, then the above solutions can easily be adapted, depending on the version of jq that you have. If your version of jq has inputs then (2) is recommended.\n(1) All versions: use the -s command-line option. \n(2) If your jq has inputs: use the -n command line option, and change .[] above to inputs, e.g.\ncount(inputs; .spam_score? > 40 or .sapm_score? > 40)\n\n", "Filter the items that satisfy the condition then get the length.\nmap(select(.sapm_score > 40)) | length\n\n", "Here is one way:\nreduce .[] as $s(0; if $s.spam_score > 40 then .+1 else . end)\n\nTry it online at jqplay.org\nIf instead of an array the input is a sequence of newline delimited objects (jsonlines) \nreduce inputs as $s(0; if $s.spam_score > 40 then .+1 else . end)\n\nwill work if jq is invoked with the -n flag. Here is an example:\n$ cat data.json\n{ \"spam_score\":40.776 }\n{ \"spam_score\":17.376 }\n\n$ jq -Mn 'reduce inputs as $s(0; if $s.spam_score > 40 then .+1 else . end)' data.json\n1\n\nTry it online at tio.run\n", "cat input.json | jq -c '. | select(.sapm_score > 40)' | wc -l\n\nshould do it.\nThe -c option prints a one-liner compact json representation of each match, and we count the number of lines jq prints.\n" ]
[ 6, 3, 0, 0 ]
[]
[]
[ "conditional_statements", "count", "jq", "json" ]
stackoverflow_0047063311_conditional_statements_count_jq_json.txt
Q: Power BI Deneb - how to increase white space between bars

I am new to Deneb. I started with a simple bar chart and want to increase the space between the bars. I've read a lot of documentation and searched a lot, but cannot find anything. I am using Vega-Lite.
Is it possible? How?

A: Switch from the Specification tab to the Config tab in Deneb and add these lines to the "bar": {}, section:

  "bar": {
    "discreteBandSize": {"band": 0.8},
    "continuousBandSize": {"band": 0.8}
  },

See https://vega.github.io/vega-lite/docs/bar.html#config in the doc.
Power BI Deneb - how to increase white space between bars
I am new to Deneb. I started with a simple bar chart and want to increase the space between the bars. I've read a lot of documentation and searched a lot, but cannot find anything. I am using Vega-Lite.
Is it possible? How?
[ "Switch from the Specification tab to the Config tab in Deneb and add these lines to the \"bar\": {}, section:\n \"bar\": {\n \"discreteBandSize\": {\"band\": 0.8},\n \"continuousBandSize\": {\"band\": 0.8}\n },\n\nSee https://vega.github.io/vega-lite/docs/bar.html#config in the doc.\n\n" ]
[ 0 ]
[]
[]
[ "deneb", "powerbi" ]
stackoverflow_0071527985_deneb_powerbi.txt
Q: Number of foods that scored "true" in being good, grouped by culture SQL

Okay, so I've been driving myself crazy trying to get this to display in SQL. I have a table that stores types of food, the culture they come from, a score, and a boolean value about whether or not they are good. I want to display a record of how many "goods" each culture racks up. Here's the table (don't ask about the database name):
So I've tried:

SELECT count(good = 1), culture FROM animals_db.foods group by culture;

Or

SELECT count(good = true), culture FROM animals_db.foods group by culture;

But it doesn't present the correct results; it seems to include anything that has any "good" value (1 or 0) at all. How do I get the data I want?

A: Instead of count, use sum.

SELECT sum(good), culture FROM animals_db.foods group by culture; -- assumes the good column has an integer data type, where good is represented as 1 and otherwise 0

Another way is using count:

select count(case when good=1 then 1 end), culture from animals_db.foods group by culture;

A: If the purpose is to count the number of good=1 for each culture, this works:

select culture,
       count(*)
  from foods
 where good=1
 group by 1
 order by 1;

Result:

culture |count(*)|
--------+--------+
        |       1|
American|       1|
Chinese |       1|
European|       1|
Italian |       2|

The reason your first query doesn't return the result can be explained as below:

select culture,
       good=1 as is_good
  from foods
 order by 1;

You actually get:

culture |is_good|
--------+-------+
        |      1|
American|      0|
American|      1|
Chinese |      1|
European|      1|
French  |      0|
French  |      0|
German  |      0|
Italian |      1|
Italian |      1|

After applying group by culture and count(good=1), you're actually counting the number of NOT NULL values in good=1. For example:

select culture,
       count(good=0) as c0,
       count(good=1) as c1,
       count(good=2) as c2,
       count(good) as c3,
       count(null) as c4
  from foods
 group by culture
 order by culture;

Outcome:

culture |c0|c1|c2|c3|c4|
--------+--+--+--+--+--+
        | 1| 1| 1| 1| 0|
American| 2| 2| 2| 2| 0|
Chinese | 1| 1| 1| 1| 0|
European| 1| 1| 1| 1| 0|
French  | 2| 2| 2| 2| 0|
German  | 1| 1| 1| 1| 0|
Italian | 2| 2| 2| 2| 0|

Update: This is similar to your question: Is it possible to specify condition in Count()?.
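If the database engine supports the standard aggregate FILTER clause (PostgreSQL does; MySQL does not), the same count can be written without a CASE expression. This is an alternative sketch, not a drop-in for every engine:

SELECT culture,
       COUNT(*) FILTER (WHERE good = 1) AS good_count
FROM animals_db.foods
GROUP BY culture;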
Number of foods that scored "true" in being good, grouped by culture SQL
Okay, so I've been driving myself crazy trying to get this to display in SQL. I have a table that stores types of food, the culture they come from, a score, and a boolean value about whether or not they are good. I want to display a record of how many "goods" each culture racks up. Here's the table (don't ask about the database name): So I've tried: SELECT count(good = 1), culture FROM animals_db.foods group by culture; Or SELECT count(good = true), culture FROM animals_db.foods group by culture; But it doesn't present the correct results, it seems to include anything that has any "good" value (1 or 0) at all. How do I get the data I want?
[ "instead of count , use sum.\nSELECT sum(good), culture FROM animals_db.foods group by culture; -- assume good column value have integer data type and good value is represent as 1 otherwise 0\n\nor other way is using count\nselect count( case when good=1 then 1 end) , culture from animals_db.foods group by culture;\n\n", "If the purpose is to count the number of good=1 for each culture, this works:\nselect culture,\n count(*)\n from foods\n where good=1\n group by 1\n order by 1;\n\nResult:\nculture |count(*)|\n--------+--------+\n | 1|\nAmerican| 1|\nChinese | 1|\nEuropean| 1|\nItalian | 2|\n\nThe reason your first query doesn't return the result can be explained as below:\nselect culture,\n good=1 as is_good\n from foods\n order by 1;\n\nYou actually get:\nculture |is_good|\n--------+-------+\n | 1|\nAmerican| 0|\nAmerican| 1|\nChinese | 1|\nEuropean| 1|\nFrench | 0|\nFrench | 0|\nGerman | 0|\nItalian | 1|\nItalian | 1|\n\nAfter applied group by culture and count(good=1), you're actually counting the number of NOT NULL values in good=1. For example:\nselect culture,\n count(good=0) as c0,\n count(good=1) as c1,\n count(good=2) as c2,\n count(good) as c3,\n count(null) as c4\n from foods\n group by culture\n order by culture;\n\nOutcome:\nculture |c0|c1|c2|c3|c4|\n--------+--+--+--+--+--+\n | 1| 1| 1| 1| 0|\nAmerican| 2| 2| 2| 2| 0|\nChinese | 1| 1| 1| 1| 0|\nEuropean| 1| 1| 1| 1| 0|\nFrench | 2| 2| 2| 2| 0|\nGerman | 1| 1| 1| 1| 0|\nItalian | 2| 2| 2| 2| 0|\n\nUpdate: This is similar to your question: Is it possible to specify condition in Count()?.\n" ]
[ 0, 0 ]
[]
[]
[ "count", "sql" ]
stackoverflow_0074672423_count_sql.txt
Q: how to make a random matrix in bash in that program #!/bin/bash # How to make a random matrix in bash in that program, # I don't understand how to make a random matrix in shell script. # read the matrix order read -p "Give rows and columns: " n # accept elements echo "Matrix element:" let i=0 while [ $i -lt $n ] do let j=0 while [ $j -lt $n ] do read x[$(($n*$i+$j))] j=$(($j+1)) done i=$(($i+1)) done A: If you are only looking to display numbers arranged as if they were in a n X n matrix, then the following will give you some ideas on how to get the random numbers and passing those to the "formatting" portion of the script. #!/bin/sh # read the matrix order read -p "Enter dimension of symmetric matrix: " n count=$(( $n * $n )) NUMBERS="./numbers.txt" shuf -n ${count} --input-range=0-99 >"${NUMBERS}" # accept elements echo "Matrix element:" i=0 while [ $i -lt $n ] do j=0 while [ $j -lt $n ] do read pos echo "\t${pos}\c" j=$(($j+1)) done echo "" i=$(($i+1)) done <"${NUMBERS}"
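If an external file and shuf are not wanted, the same display can be produced with bash's built-in $RANDOM. A minimal sketch, with the 0-99 range chosen arbitrarily to match the answer above:

#!/bin/bash

read -p "Give rows and columns: " n

for (( i = 0; i < n; i++ )); do
    for (( j = 0; j < n; j++ )); do
        # $RANDOM yields 0..32767; take it modulo 100 for two-digit entries.
        printf "%4d" $(( RANDOM % 100 ))
    done
    printf "\n"
done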
how to make a random matrix in bash in that program
#!/bin/bash # How to make a random matrix in bash in that program, # I don't understand how to make a random matrix in shell script. # read the matrix order read -p "Give rows and columns: " n # accept elements echo "Matrix element:" let i=0 while [ $i -lt $n ] do let j=0 while [ $j -lt $n ] do read x[$(($n*$i+$j))] j=$(($j+1)) done i=$(($i+1)) done
[ "If you are only looking to display numbers arranged as if they were in a n X n matrix, then the following will give you some ideas on how to get the random numbers and passing those to the \"formatting\" portion of the script.\n#!/bin/sh\n\n# read the matrix order\nread -p \"Enter dimension of symmetric matrix: \" n\n\ncount=$(( $n * $n ))\n\nNUMBERS=\"./numbers.txt\"\nshuf -n ${count} --input-range=0-99 >\"${NUMBERS}\"\n\n# accept elements\necho \"Matrix element:\"\n\ni=0\nwhile [ $i -lt $n ]\ndo\n j=0\n while [ $j -lt $n ]\n do\n read pos\n echo \"\\t${pos}\\c\"\n\n j=$(($j+1))\n done\n echo \"\"\n i=$(($i+1))\ndone <\"${NUMBERS}\"\n\n" ]
[ 0 ]
[]
[]
[ "bash", "matrix", "random", "shell" ]
stackoverflow_0074665434_bash_matrix_random_shell.txt
Q: loader-utils is vulnerable to Regular Expression Denial of Service (ReDoS)

I got some errors in my VSCode terminal in my Angular App:

loader-utils 3.0.0 - 3.2.0
Severity: high
loader-utils is vulnerable to Regular Expression Denial of Service (ReDoS) via url variable - https://github.com/advisories/GHSA-3rfm-jhwj-7488
loader-utils is vulnerable to Regular Expression Denial of Service (ReDoS) - https://github.com/advisories/GHSA-hhq3-ff78-jv3g
fix available via `npm audit fix`
node_modules/@angular-devkit/build-angular/node_modules/loader-utils
  @angular-devkit/build-angular 13.0.0-next.0 - 13.3.9 || 14.0.0-next.0 - 14.2.9 || 15.0.0-next.0 - 15.0.0-rc.5
  Depends on vulnerable versions of loader-utils
  node_modules/@angular-devkit/build-angular

2 high severity vulnerabilities

I tried to use npm audit fix but it didn't help. How can I fix it safely (I am quite new to Angular)? I attach a screenshot from the terminal.
Thank you for your help!

A: To fix the vulnerabilities in your Angular app, you need to update the @angular-devkit/build-angular package and its dependencies to the latest version.
Open a terminal window and navigate to the root directory of your Angular app.
Run the following command to update the @angular-devkit/build-angular package and its dependencies to the latest version:

npm update @angular-devkit/build-angular

Run the following command to verify that the vulnerabilities have been fixed:

npm audit

You should see a message that indicates that the vulnerabilities have been fixed, and that there are no more vulnerabilities in your Angular app.
Alternatively, you can use the npm audit fix --force command to automatically fix the vulnerabilities without manually updating the packages. However, this may cause other issues or conflicts in your Angular app, so it is recommended to update the packages manually.
It is also important to regularly update your Angular app and its dependencies to the latest version to avoid security vulnerabilities and other issues. You can use the npm outdated command to check for outdated packages in your Angular app, and update them using the npm update command.
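If updating @angular-devkit/build-angular is not yet an option, npm 8.3+ also allows pinning the transitive dependency through an overrides block in package.json. The version below is an assumption; use whichever loader-utils release the advisory lists as patched:

{
  "overrides": {
    "loader-utils": "^3.2.1"
  }
}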
loader-utils is vulnerable to Regular Expression Denial of Service (ReDoS)
I got some errors in my VSCode terminal in my Angular App:

loader-utils 3.0.0 - 3.2.0
Severity: high
loader-utils is vulnerable to Regular Expression Denial of Service (ReDoS) via url variable - https://github.com/advisories/GHSA-3rfm-jhwj-7488
loader-utils is vulnerable to Regular Expression Denial of Service (ReDoS) - https://github.com/advisories/GHSA-hhq3-ff78-jv3g
fix available via `npm audit fix`
node_modules/@angular-devkit/build-angular/node_modules/loader-utils
  @angular-devkit/build-angular 13.0.0-next.0 - 13.3.9 || 14.0.0-next.0 - 14.2.9 || 15.0.0-next.0 - 15.0.0-rc.5
  Depends on vulnerable versions of loader-utils
  node_modules/@angular-devkit/build-angular

2 high severity vulnerabilities

I tried to use npm audit fix but it didn't help. How can I fix it safely (I am quite new to Angular)? I attach a screenshot from the terminal.
Thank you for your help!
[ "To fix the vulnerabilities in your Angular app, you need to update the @angular-devkit/build-angular package and its dependencies to the latest version.\nOpen a terminal window and navigate to the root directory of your Angular app.\nRun the following command to update the @angular-devkit/build-angular package and its dependencies to the latest version:\nnpm update @angular-devkit/build-angular\n\nRun the following command to verify that the vulnerabilities have been fixed:\nnpm audit\n\nYou should see a message that indicates that the vulnerabilities have been fixed, and that there are no more vulnerabilities in your Angular app.\nAlternatively, you can use the npm audit fix --force command to automatically fix the vulnerabilities without manually updating the packages. However, this may cause other issues or conflicts in your Angular app, so it is recommended to update the packages manually.\nIt is also important to regularly update your Angular app and its dependencies to the latest version to avoid security vulnerabilities and other issues. You can use the npm outdated command to check for outdated packages in your Angular app, and update them using the npm update command.\n" ]
[ 1 ]
[]
[]
[ "angular", "node.js", "npm_vulnerabilities" ]
stackoverflow_0074670996_angular_node.js_npm_vulnerabilities.txt
Q: Assign value numbers for alphabet in Python I have alphabets that I want to assign as follows: lowercase items a-z have value of 1-26 uppercase items A-Z have value of 27-52 What is the shortest way to implement this [a,B,h,R] Expected Output: [1,28,8,44] How can we go about doing this in Python Thank you A: The python string module is perfect for this. from string import ascii_letters print([ascii_letters.index(letter) + 1 for letter in ["a", "B", "h", "R"]]) A: I think I recognize an Advent of Code question! I developed the alphabet to score mapping as follows: import string from collections import OrderedDict lower_priorities = OrderedDict(zip(string.ascii_lowercase, range(1,27))) upper_priorities = OrderedDict(zip(string.ascii_uppercase, range(27,53))) You can then call the dictionary by the letter value you are interested in after checking whether is is uppercase or lowercase and then sorting it to the correct dictionary. Otherwise, combine the two dictionaries and just query the combined dictionary, i.e. lower_priorities["a"] would return 1. Loop through your array and obtain your outputs. Can't guarantee it's the shortest, but I can say it works! A: This is a way that you can implement what you want: print([ord(item) - 38 if ord(item) < 97 else ord(item) - 96 for item in ['a','B','h','R']]) converting each item into an int value and finding which positioning they are in (Capitalized letters come before lowercase) https://appdividend.com/2022/06/15/how-to-convert-python-char-to-int/
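Another possible variant, sketched here with the same sample input: precompute a lookup table once and map each character through it, which avoids the repeated index() scans of the first answer:

from string import ascii_letters

# 'a'..'z' -> 1..26, 'A'..'Z' -> 27..52, built once.
letter_value = {ch: i for i, ch in enumerate(ascii_letters, start=1)}

print([letter_value[c] for c in ["a", "B", "h", "R"]])  # [1, 28, 8, 44]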
Assign value numbers for alphabet in Python
I have alphabets that I want to assign as follows: lowercase items a-z have value of 1-26 uppercase items A-Z have value of 27-52 What is the shortest way to implement this [a,B,h,R] Expected Output: [1,28,8,44] How can we go about doing this in Python Thank you
[ "The python string module is perfect for this.\nfrom string import ascii_letters\nprint([ascii_letters.index(letter) + 1 for letter in [\"a\", \"B\", \"h\", \"R\"]])\n\n", "I think I recognize an Advent of Code question! I developed the alphabet to score mapping as follows:\nimport string\nfrom collections import OrderedDict\nlower_priorities = OrderedDict(zip(string.ascii_lowercase, range(1,27)))\nupper_priorities = OrderedDict(zip(string.ascii_uppercase, range(27,53)))\n\nYou can then call the dictionary by the letter value you are interested in after checking whether is is uppercase or lowercase and then sorting it to the correct dictionary. Otherwise, combine the two dictionaries and just query the combined dictionary, i.e. lower_priorities[\"a\"] would return 1. Loop through your array and obtain your outputs. Can't guarantee it's the shortest, but I can say it works!\n", "This is a way that you can implement what you want:\nprint([ord(item) - 38 if ord(item) < 97 else ord(item) - 96 for item in ['a','B','h','R']])\n\nconverting each item into an int value and finding which positioning they are in (Capitalized letters come before lowercase)\nhttps://appdividend.com/2022/06/15/how-to-convert-python-char-to-int/\n" ]
[ 3, 1, 0 ]
[]
[]
[ "list", "python", "python_3.x" ]
stackoverflow_0074672541_list_python_python_3.x.txt
Q: Why does soup.get_text() include comments in some situations? I have an HTML document that I've created by exporting a MS Word doc. In Word, I saved as the Web Page (.HTM) format, not the Web Page, Filtered (.HTM) format. When I run get_text on the doc, it includes the comments in the style tag for some reason. This is unexpected. BS4 does ignore the 2nd comment in the body as expected. I've tried with both the lxml and html.parser parsers. Same result. Python 3.9.12, IPython 8.4.0, BS 4.8.2 (though when I use pkg_resources.get_distribution("bs4").version, it shows 0.0.1) html = """<html> <head> <meta http-equiv=Content-Type content="text/html; charset=utf-8"> <meta name=Generator content="Microsoft Word 15 (filtered)"> <style> <!-- /* Font Definitions */ @font-face {font-family:Helvetica; panose-1:2 11 6 4 2 2 2 2 2 4;} /* Style Definitions */ p.MsoNormal, li.MsoNormal, div.MsoNormal {margin:0in; text-align:justify; text-justify:inter-ideograph; font-size:12.0pt; font-family:"Times New Roman",serif;} /* Page Definitions */ @page WordSection1 {size:8.5in 11.0in; margin:1.0in 1.25in 1.0in 1.25in;} div.WordSection1 {page:WordSection1;} /* List Definitions */ ol {margin-bottom:0in;} --> </style> </head> <body lang=EN-US link=blue vlink=purple style='word-wrap:break-word'> <div class=WordSection1> <div style='border-top:double windowtext 2.25pt;border-left:none;border-bottom: double windowtext 2.25pt;border-right:none;padding:1.0pt 0in 1.0pt 0in'> <p class=MsoNormal style='margin-bottom:.25in;border:none;padding:0in'><span style='font-size:9.0pt'>&nbsp;</span></p> <!-- Some other random comment that get_text ignores --> </div> </body> </html>""" soup = BeautifulSoup(html, "lxml") soup.get_text() In [5]: soup.get_text() Out[5]: '\n\n\n\n\n<!--\n /* Font Definitions */\n @font-face\n\t{font-family:Helvetica;\n\tpanose-1:2 11 6 4 2 2 2 2 2 4;}\n /* Style Definitions */\n p.MsoNormal, li.MsoNormal, div.MsoNormal\n\t{margin:0in;\n\ttext-align:justify;\n\ttext-justify:inter-ideograph;\n\tfont-size:12.0pt;\n\tfont-family:"Times New Roman",serif;}\n /* Page Definitions */\n @page WordSection1\n\t{size:8.5in 11.0in;\n\tmargin:1.0in 1.25in 1.0in 1.25in;}\ndiv.WordSection1\n\t{page:WordSection1;}\n /* List Definitions */\n ol\n\t{margin-bottom:0in;}\n-->\n\n\n\n\n\n\xa0\n\n\n\n\n' A: I can't reproduce the issue [running your code just returns \n\n\n\n\n\n\n\n\n\xa0\n\n\n\n for me (bs4 4.11.1, python 3.7.15, IPython 7.9.0)], but I expect it's because everything inside the style tag is stored as one Stylesheet element rather than being further parsed into Tag/NavigableString/Comment/etc I don't think that's a valid html anyway - stylesheet comments should be like /*Some Comment*/, and you can't just put html inside a stylesheet like that...
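If the goal is simply the text without any stylesheet contents, one hedged workaround (reusing the html variable from the question) is to drop style and script tags before calling get_text(); decompose() mutates the soup in place:

from bs4 import BeautifulSoup

soup = BeautifulSoup(html, "lxml")

# Remove elements whose contents should never appear in extracted text.
for tag in soup(["style", "script"]):
    tag.decompose()

print(soup.get_text())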
Why does soup.get_text() include comments in some situations?
I have an HTML document that I've created by exporting a MS Word doc. In Word, I saved as the Web Page (.HTM) format, not the Web Page, Filtered (.HTM) format. When I run get_text on the doc, it includes the comments in the style tag for some reason. This is unexpected. BS4 does ignore the 2nd comment in the body as expected. I've tried with both the lxml and html.parser parsers. Same result. Python 3.9.12, IPython 8.4.0, BS 4.8.2 (though when I use pkg_resources.get_distribution("bs4").version, it shows 0.0.1) html = """<html> <head> <meta http-equiv=Content-Type content="text/html; charset=utf-8"> <meta name=Generator content="Microsoft Word 15 (filtered)"> <style> <!-- /* Font Definitions */ @font-face {font-family:Helvetica; panose-1:2 11 6 4 2 2 2 2 2 4;} /* Style Definitions */ p.MsoNormal, li.MsoNormal, div.MsoNormal {margin:0in; text-align:justify; text-justify:inter-ideograph; font-size:12.0pt; font-family:"Times New Roman",serif;} /* Page Definitions */ @page WordSection1 {size:8.5in 11.0in; margin:1.0in 1.25in 1.0in 1.25in;} div.WordSection1 {page:WordSection1;} /* List Definitions */ ol {margin-bottom:0in;} --> </style> </head> <body lang=EN-US link=blue vlink=purple style='word-wrap:break-word'> <div class=WordSection1> <div style='border-top:double windowtext 2.25pt;border-left:none;border-bottom: double windowtext 2.25pt;border-right:none;padding:1.0pt 0in 1.0pt 0in'> <p class=MsoNormal style='margin-bottom:.25in;border:none;padding:0in'><span style='font-size:9.0pt'>&nbsp;</span></p> <!-- Some other random comment that get_text ignores --> </div> </body> </html>""" soup = BeautifulSoup(html, "lxml") soup.get_text() In [5]: soup.get_text() Out[5]: '\n\n\n\n\n<!--\n /* Font Definitions */\n @font-face\n\t{font-family:Helvetica;\n\tpanose-1:2 11 6 4 2 2 2 2 2 4;}\n /* Style Definitions */\n p.MsoNormal, li.MsoNormal, div.MsoNormal\n\t{margin:0in;\n\ttext-align:justify;\n\ttext-justify:inter-ideograph;\n\tfont-size:12.0pt;\n\tfont-family:"Times New Roman",serif;}\n /* Page Definitions */\n @page WordSection1\n\t{size:8.5in 11.0in;\n\tmargin:1.0in 1.25in 1.0in 1.25in;}\ndiv.WordSection1\n\t{page:WordSection1;}\n /* List Definitions */\n ol\n\t{margin-bottom:0in;}\n-->\n\n\n\n\n\n\xa0\n\n\n\n\n'
[ "I can't reproduce the issue [running your code just returns \\n\\n\\n\\n\\n\\n\\n\\n\\n\\xa0\\n\\n\\n\\n for me (bs4 4.11.1, python 3.7.15, IPython 7.9.0)], but I expect it's because everything inside the style tag is stored as one Stylesheet element rather than being further parsed into Tag/NavigableString/Comment/etc\n\nI don't think that's a valid html anyway - stylesheet comments should be like /*Some Comment*/, and you can't just put html inside a stylesheet like that...\n" ]
[ 0 ]
[]
[]
[ "beautifulsoup", "lxml", "ms_word" ]
stackoverflow_0074647276_beautifulsoup_lxml_ms_word.txt
Q: How to delete image path from list of currently selected photo? I need a button to delete the currently viewed image. The image paths are stored in a List<String>. The images appear via CarouselSlider from ImagePicker. I've tried to see if I can delete the image via a hardcoded index with .removeAt() and that works but, again, I can't figure out how to get the current image. What is the right way to grab the current index of the on-screen image? I've tried with single images and it works fine, but getting the current index of the currently viewed photo is where I'm running into trouble. This is how I am getting my Images: List<String> pickedImageList = []; final imagePicker = ImagePicker(); Future pickImage() async { final userPickedImages = await imagePicker.pickMultiImage(); for (var image in userPickedImages!) { pickedImageList.add(image.path); } setState(() {}); } This is how I am showing the images. The button to delete the currently selected photo is at the bottom with a generic function: Column( children: [ ElevatedButton( onPressed: pickImage, child: const Text('Add another image')), CarouselSlider( options: CarouselOptions( // height: 200, viewportFraction: .9, enlargeCenterPage: true, ), items: pickedImageList .map( (item) => SizedBox( width: 200, height: 200, child: ClipRRect( borderRadius: BorderRadius.circular(10), child: Image.asset( item, height: 200, width: 200, )), ), // color: Colors.green, ) .toList(), ), ElevatedButton( onPressed: () { setState(() { removeImage(); }); }, child: const Text('Remove')) ], ), All of the delete buttons that people have solved on SO involve single images, which is not the problem. I thought maybe a GestureDetector counting up and down with left and right swipes, resetting the counter when it reaches the length of the list. Negatives would be multiplied by themselves to get a usable current index. A: To get the index of the currently viewed photo in the CarouselSlider widget, you can use the onPageChanged callback property to update a _currentPage variable with the current page index. Then, you can use the _currentPage variable to delete the currently viewed photo from the pickedImageList list. Here's an example of how you could implement this: // Initialize the _currentPage variable with the first photo in the list int _currentPage = 0; // In the CarouselSlider widget, set the onPageChanged callback property CarouselSlider( options: CarouselOptions( // height: 200, viewportFraction: .9, enlargeCenterPage: true, // Set the onPageChanged callback property onPageChanged: (index, reason) { // Update the _currentPage variable with the current page index setState(() { _currentPage = index; }); }, ), items: pickedImageList .map( (item) => SizedBox( width: 200, height: 200, child: ClipRRect( borderRadius: BorderRadius.circular(10), child: Image.asset( item, height: 200, width: 200, )), ), // color: Colors.green, ) .toList(), ), // In the delete button, use the _currentPage variable to remove the currently viewed photo from the pickedImageList list ElevatedButton( onPressed: () { setState(() { pickedImageList.removeAt(_currentPage); }); }, child: const Text('Remove') ) In this example, the onPageChanged callback will be called every time the user swipes to a new photo in the CarouselSlider widget. The callback will update the _currentPage variable with the index of the current page, which you can then use to delete the currently viewed photo from the pickedImageList list. I hope this helps! Let me know if you have any other questions.
How to delete image path from list of currently selected photo?
I need a button to delete the currently viewed image. The image paths are stored in a List<String>. The images appear via CarouselSlider from ImagePicker. I've tried to see if I can delete the image via a hardcoded index with .removeAt() and that works but, again, I can't figure out how to get the current image. What is the right way to grab the current index of the on-screen image? I've tried with single images and it works fine, but getting the current index of the currently viewed photo is where I'm running into trouble. This is how I am getting my Images: List<String> pickedImageList = []; final imagePicker = ImagePicker(); Future pickImage() async { final userPickedImages = await imagePicker.pickMultiImage(); for (var image in userPickedImages!) { pickedImageList.add(image.path); } setState(() {}); } This is how I am showing the images. The button to delete the currently selected photo is at the bottom with a generic function: Column( children: [ ElevatedButton( onPressed: pickImage, child: const Text('Add another image')), CarouselSlider( options: CarouselOptions( // height: 200, viewportFraction: .9, enlargeCenterPage: true, ), items: pickedImageList .map( (item) => SizedBox( width: 200, height: 200, child: ClipRRect( borderRadius: BorderRadius.circular(10), child: Image.asset( item, height: 200, width: 200, )), ), // color: Colors.green, ) .toList(), ), ElevatedButton( onPressed: () { setState(() { removeImage(); }); }, child: const Text('Remove')) ], ), All of the delete buttons that people have solved on SO involve single images, which is not the problem. I thought maybe a GestureDetector counting up and down with left and right swipes, resetting the counter when it reaches the length of the list. Negatives would be multiplied by themselves to get a usable current index.
[ "To get the index of the currently viewed photo in the CarouselSlider widget, you can use the onPageChanged callback property to update a _currentPage variable with the current page index. Then, you can use the _currentPage variable to delete the currently viewed photo from the pickedImageList list.\nHere's an example of how you could implement this:\n// Initialize the _currentPage variable with the first photo in the list\nint _currentPage = 0;\n\n// In the CarouselSlider widget, set the onPageChanged callback property\nCarouselSlider(\n options: CarouselOptions(\n // height: 200,\n viewportFraction: .9,\n enlargeCenterPage: true,\n // Set the onPageChanged callback property\n onPageChanged: (index, reason) {\n // Update the _currentPage variable with the current page index\n setState(() {\n _currentPage = index;\n });\n },\n ),\n items: pickedImageList\n .map(\n (item) => SizedBox(\n width: 200,\n height: 200,\n child: ClipRRect(\n borderRadius: BorderRadius.circular(10),\n child: Image.asset(\n item,\n height: 200,\n width: 200,\n )),\n ),\n // color: Colors.green,\n )\n .toList(),\n),\n\n// In the delete button, use the _currentPage variable to remove the currently viewed photo from the pickedImageList list\nElevatedButton(\n onPressed: () {\n setState(() {\n pickedImageList.removeAt(_currentPage);\n });\n },\n child: const Text('Remove')\n)\n\nIn this example, the onPageChanged callback will be called every time the user swipes to a new photo in the CarouselSlider widget. The callback will update the _currentPage variable with the index of the current page, which you can then use to delete the currently viewed photo from the pickedImageList list.\nI hope this helps! Let me know if you have any other questions.\n" ]
[ 1 ]
[]
[]
[ "dart", "flutter" ]
stackoverflow_0074672572_dart_flutter.txt
Q: ERROR:root:can't pickle fasttext_pybind.fasttext objects I am using gunicorn with multiple workers for my machine learning project. But the problem is when I send a train request only the worker getting the training request gets updated with the latest model after training is done. Here it is worth mentioning that, to make the inference faster, I have programmed it to load the model once after each training. This is why the only worker which is used for the current training operation loads the latest model and the other workers still keep the previously loaded model. Right now the model file (binary format) is loaded once after each training in a global dictionary variable where the key is the model name and the value is the model file. Obviously, this problem won't occur if I program it to load the model every time from disk for each prediction, but I cannot do it, as it will make the prediction slower. I studied global variables further and the investigation shows that, in a multi-processing environment, all the workers (processes) create their own copies of global variables. Apart from the binary model file, I also have some other global variables (of dictionary type) that need to be synced across all processes. So, how to handle this situation? TL;DR: I need some approach which can help me store variables which will be common across all the processes (workers). Any way to do this? With multiprocessing.Manager, dill etc.? Update 1: I have multiple machine learning algorithms in my project and they have their own model files, which are being loaded to memory in a dictionary where the key is the model name and the value is the corresponding model object. I need to share all of them (in other words, I need to share the dictionary). But some of the models are not pickle-serializable, like FastText. So, when I try to use a proxy variable (in my case a dictionary to hold models) with multiprocessing.Manager I get an error for those non-pickle-serializable objects while assigning the loaded model file to this dictionary. Like: can't pickle fasttext_pybind.fasttext objects. More information on multiprocessing.Manager can be found here: Proxy Objects Following is a summary of what I have done: import multiprocessing import fasttext mgr = multiprocessing.Manager() model_dict = mgr.dict() model_file = fasttext.load_model("path/to/model/file/which/is/in/.bin/format") model_dict["fasttext"] = model_file # This line throws this error Error: can't pickle fasttext_pybind.fasttext objects I printed the model_file which I am trying to assign, it is: <fasttext.FastText._FastText object at 0x7f86e2b682e8> Update 2: According to this answer I modified my code a little bit: import fasttext from multiprocessing.managers import SyncManager def Manager(): m = SyncManager() m.start() return m # As the model file has a type of "<fasttext.FastText._FastText object at 0x7f86e2b682e8>" so, using "fasttext.FastText._FastText" as the class of it SyncManager.register("fast", fasttext.FastText._FastText) # Now this is the Manager as a replacement of the old one. mgr = Manager() ft = mgr.fast() # This line gives error. This gives me EOFError. Update 3: I tried using dill both with multiprocessing and multiprocess. The summary of changes is as follows: import multiprocessing import multiprocess import dill # Any one of the following two lines mgr = multiprocessing.Manager() # Or, mgr = multiprocess.Manager() model_dict = mgr.dict() ... ... ... ... ... ... 
model_file = dill.dumps(model_file) # This line throws the error model_dict["fasttext"] = model_file ... ... ... ... ... ... # During loading model_file = dill.loads(model_dict["fasttext"]) But still getting the error: can't pickle fasttext_pybind.fasttext objects. Update 4: This time I am using another library called jsonpickle. It seems that serialization and de-serialization occur properly (as it is not reporting any issue while running). But surprisingly enough, after de-serialization, whenever I am making a prediction it hits a segmentation fault. More details and the steps to reproduce it can be found here: Segmentation fault (core dumped) Update 5: Tried cloudpickle, srsly, but couldn't make the program work. A: For the sake of completeness I am providing the solution that worked for me. All the approaches I have tried to serialize FastText went in vain. Finally, as @MedetTleukabiluly mentioned in the comment, I managed to share the message of loading the model from the disk with other workers with redis-pubsub. Obviously, it is not actually sharing the model from the same memory space, rather, just sharing the message to other workers to inform them they should load the model from the disk (as a new training just happened). Following is the general solution: # redis_pubsub.py import logging import os import fasttext import socket import threading import time """The whole purpose of GLOBAL_NAMESPACE is to keep the whole pubsub mechanism separate. As this might be a case another service also publishing in the same channel. """ GLOBAL_NAMESPACE = "SERVICE_0" def get_ip(): s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) try: # doesn't even have to be reachable s.connect(('10.255.255.255', 1)) IP = s.getsockname()[0] except Exception: IP = '127.0.0.1' finally: s.close() return IP class RedisPubSub: def __init__(self): self.redis_client = get_redis_client() #TODO: A SAMPLE METHOD WHICH CAN RETURN YOUR REDIS CLIENT (you have to implement) # Unique ID is used, to identify which worker from which server is the publisher. Just to avoid updating # getting a message which message is indeed sent by itself. self.unique_id = "IP_" + get_ip() + "__" + str(GLOBAL_NAMESPACE) + "__" + "PID_" + str(os.getpid()) def listen_to_channel_and_update_models(self, channel): try: pubsub = self.redis_client.pubsub() pubsub.subscribe(channel) except Exception as exception: logging.error(f"REDIS_ERROR: Model Update Listening: {exception}") while True: try: message = pubsub.get_message() # Successful operation gives 1 and unsuccessful gives 0 # ..we are not interested to receive these flags if message and message["data"] != 1 and message["data"] != 0: message = message["data"].decode("utf-8") message = str(message) splitted_msg = message.split("__SEPERATOR__") # Not only making sure the message is coming from another worker # but also we have to make sure the message sender and receiver (i.e, both of the workers) are under the same namespace if (splitted_msg[0] != self.unique_id) and (splitted_msg[0].split('__')[1] == GLOBAL_NAMESPACE): algo_name = splitted_msg[1] model_path = splitted_msg[2] # Fasttext if "fasttext" in algo_name: try: #TODO: YOU WILL GET THE LOADED NEW FILE IN model_file. USE IT TO UPDATE THE OLD ONE. 
model_file = fasttext.load_model(model_path + '.bin') except Exception as exception: logging.error(exception) else: logging.info(f"{algo_name} model is updated for process with unique_id: {self.unique_id} by process with unique_id: {splitted_msg[0]}") time.sleep(1) # sleeping for 1 second to avoid hammering the CPU too much except Exception as exception: time.sleep(1) logging.error(f"PUBSUB_ERROR: Model or component update: {exception}") def publish_to_channel(self, channel, algo_name, model_path): def _publish_to_channel(): try: message = self.unique_id + '__SEPERATOR__' + str(algo_name) + '__SEPERATOR__' + str(model_path) time.sleep(3) self.redis_client.publish(channel, message) except Exception as exception: logging.error(f"PUBSUB_ERROR: Model or component publishing: {exception}") # As the delay before pubsub can pause the next activities which are independent, hence, doing this publishing in another thread. thread = threading.Thread(target = _publish_to_channel) thread.start() Also you have to start the listener: from redis_pubsub import RedisPubSub pubsub = RedisPubSub() # start the listener: thread = threading.Thread(target = pubsub.listen_to_channel_and_update_models, args = ("sync-ml-models", )) thread.start() From fasttext training module, when you finish the training, publish this message to other workers, such that the other workers get a chance to re-load the model from the disk: # fasttext_api.py from redis_pubsub import RedisPubSub pubsub = RedisPubSub() pubsub.publish_to_channel(channel = "sync-ml-models", # a sample name for the channel algo_name = f"fasttext", model_path = "path/to/fasttext/model")
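As a side note, the get_redis_client() call above is left as a TODO in the original answer; a hypothetical minimal implementation with the redis-py package (host, port and db values are assumptions) could look like:

import redis

def get_redis_client():
    # a plain connection to a local Redis instance; adjust for your deployment
    return redis.Redis(host="localhost", port=6379, db=0)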
ERROR:root:can't pickle fasttext_pybind.fasttext objects
I am using gunicorn with multiple workers for my machine learning project. But the problem is when I send a train request only the worker getting the training request gets updated with the latest model after training is done. Here it is worth mentioning that, to make the inference faster, I have programmed it to load the model once after each training. This is why the only worker which is used for the current training operation loads the latest model and the other workers still keep the previously loaded model. Right now the model file (binary format) is loaded once after each training in a global dictionary variable where the key is the model name and the value is the model file. Obviously, this problem won't occur if I program it to load the model every time from disk for each prediction, but I cannot do it, as it will make the prediction slower. I studied global variables further and the investigation shows that, in a multi-processing environment, all the workers (processes) create their own copies of global variables. Apart from the binary model file, I also have some other global variables (of dictionary type) that need to be synced across all processes. So, how to handle this situation? TL;DR: I need some approach which can help me store variables which will be common across all the processes (workers). Any way to do this? With multiprocessing.Manager, dill etc.? Update 1: I have multiple machine learning algorithms in my project and they have their own model files, which are being loaded to memory in a dictionary where the key is the model name and the value is the corresponding model object. I need to share all of them (in other words, I need to share the dictionary). But some of the models are not pickle-serializable, like FastText. So, when I try to use a proxy variable (in my case a dictionary to hold models) with multiprocessing.Manager I get an error for those non-pickle-serializable objects while assigning the loaded model file to this dictionary. Like: can't pickle fasttext_pybind.fasttext objects. More information on multiprocessing.Manager can be found here: Proxy Objects Following is a summary of what I have done: import multiprocessing import fasttext mgr = multiprocessing.Manager() model_dict = mgr.dict() model_file = fasttext.load_model("path/to/model/file/which/is/in/.bin/format") model_dict["fasttext"] = model_file # This line throws this error Error: can't pickle fasttext_pybind.fasttext objects I printed the model_file which I am trying to assign, it is: <fasttext.FastText._FastText object at 0x7f86e2b682e8> Update 2: According to this answer I modified my code a little bit: import fasttext from multiprocessing.managers import SyncManager def Manager(): m = SyncManager() m.start() return m # As the model file has a type of "<fasttext.FastText._FastText object at 0x7f86e2b682e8>" so, using "fasttext.FastText._FastText" as the class of it SyncManager.register("fast", fasttext.FastText._FastText) # Now this is the Manager as a replacement of the old one. mgr = Manager() ft = mgr.fast() # This line gives error. This gives me EOFError. Update 3: I tried using dill both with multiprocessing and multiprocess. The summary of changes is as follows: import multiprocessing import multiprocess import dill # Any one of the following two lines mgr = multiprocessing.Manager() # Or, mgr = multiprocess.Manager() model_dict = mgr.dict() ... ... ... ... ... ... model_file = dill.dumps(model_file) # This line throws the error model_dict["fasttext"] = model_file ... ... ... ... ... ... 
# During loading model_file = dill.loads(model_dict["fasttext"]) But still getting the error: can't pickle fasttext_pybind.fasttext objects. Update 4: This time I am using another library called jsonpickle. It seems that serialization and de-serialization occur properly (as it is not reporting any issue while running). But surprisingly enough, after de-serialization, whenever I am making a prediction it hits a segmentation fault. More details and the steps to reproduce it can be found here: Segmentation fault (core dumped) Update 5: Tried cloudpickle, srsly, but couldn't make the program work.
[ "For the sake of completeness I am providing the solution that worked for me. All the approaches I have tried to serialize FastText went in vain. Finally, as @MedetTleukabiluly mentioned in the comment, I managed to share the message of loading the model from the disk with other workers with redis-pubsub. Obviously, it is not actually sharing the model from the same memory space, rather, just sharing the message to other workers to inform them they should load the model from the disk (as a new training just happened). Following is the general solution:\n# redis_pubsub.py\n\nimport logging\nimport os\nimport fasttext\nimport socket\nimport threading\nimport time\n\n\"\"\"The whole purpose of GLOBAL_NAMESPACE is to keep the whole pubsub mechanism separate.\nAs this might be a case another service also publishing in the same channel.\n\"\"\"\nGLOBAL_NAMESPACE = \"SERVICE_0\"\n\ndef get_ip():\n s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)\n try:\n # doesn't even have to be reachable\n s.connect(('10.255.255.255', 1))\n IP = s.getsockname()[0]\n except Exception:\n IP = '127.0.0.1'\n finally:\n s.close()\n return IP\n\n\nclass RedisPubSub:\n def __init__(self):\n self.redis_client = get_redis_client() #TODO: A SAMPLE METHOD WHICH CAN RETURN YOUR REDIS CLIENT (you have to implement)\n # Unique ID is used, to identify which worker from which server is the publisher. Just to avoid updating\n # getting a message which message is indeed sent by itself.\n self.unique_id = \"IP_\" + get_ip() + \"__\" + str(GLOBAL_NAMESPACE) + \"__\" + \"PID_\" + str(os.getpid())\n\n\n def listen_to_channel_and_update_models(self, channel):\n try:\n pubsub = self.redis_client.pubsub()\n pubsub.subscribe(channel)\n except Exception as exception:\n logging.error(f\"REDIS_ERROR: Model Update Listening: {exception}\")\n\n while True:\n try:\n message = pubsub.get_message()\n\n # Successful operation gives 1 and unsuccessful gives 0\n # ..we are not interested to receive these flags\n if message and message[\"data\"] != 1 and message[\"data\"] != 0: \n message = message[\"data\"].decode(\"utf-8\")\n message = str(message)\n splitted_msg = message.split(\"__SEPERATOR__\")\n\n\n # Not only making sure the message is coming from another worker\n # but also we have to make sure the message sender and receiver (i.e, both of the workers) are under the same namespace\n if (splitted_msg[0] != self.unique_id) and (splitted_msg[0].split('__')[1] == GLOBAL_NAMESPACE):\n algo_name = splitted_msg[1]\n model_path = splitted_msg[2]\n\n # Fasttext\n if \"fasttext\" in algo_name:\n try:\n #TODO: YOU WILL GET THE LOADED NEW FILE IN model_file. 
USE IT TO UPDATE THE OLD ONE.\n model_file = fasttext.load_model(model_path + '.bin')\n except Exception as exception:\n logging.error(exception)\n else:\n logging.info(f\"{algo_name} model is updated for process with unique_id: {self.unique_id} by process with unique_id: {splitted_msg[0]}\")\n\n\n time.sleep(1) # sleeping for 1 second to avoid hammering the CPU too much\n\n except Exception as exception:\n time.sleep(1)\n logging.error(f\"PUBSUB_ERROR: Model or component update: {exception}\")\n\n\n def publish_to_channel(self, channel, algo_name, model_path):\n def _publish_to_channel():\n try:\n message = self.unique_id + '__SEPERATOR__' + str(algo_name) + '__SEPERATOR__' + str(model_path)\n time.sleep(3)\n self.redis_client.publish(channel, message)\n except Exception as exception:\n logging.error(f\"PUBSUB_ERROR: Model or component publishing: {exception}\")\n\n # As the delay before pubsub can pause the next activities which are independent, hence, doing this publishing in another thread.\n thread = threading.Thread(target = _publish_to_channel)\n thread.start()\n\nAlso you have to start the listener:\nfrom redis_pubsub import RedisPubSub\npubsub = RedisPubSub()\n\n\n# start the listener:\nthread = threading.Thread(target = pubsub.listen_to_channel_and_update_models, args = (\"sync-ml-models\", ))\nthread.start()\n\nFrom fasttext training module, when you finish the training, publish this message to other workers, such that the other workers get a chance to re-load the model from the disk:\n# fasttext_api.py\n\nfrom redis_pubsub import RedisPubSub\npubsub = RedisPubSub()\n\npubsub.publish_to_channel(channel = \"sync-ml-models\", # a sample name for the channel\n algo_name = f\"fasttext\",\n model_path = \"path/to/fasttext/model\")\n\n\n" ]
[ 0 ]
[]
[]
[ "dill", "fasttext", "gunicorn", "multiprocessing", "python" ]
stackoverflow_0069430747_dill_fasttext_gunicorn_multiprocessing_python.txt
Q: waiting for user answer in python telegram bot with inlinekeyboardbutton I have a telegram bot and I use the python-telegram-bot library. What I am trying to do is: when a user, for example, presses the trade button, I want to ask him for some details, so I have to wait for the user input and then process the answer. I have this ConversationHandler with a CommandHandler entry point which is working well, but it works just with commands: ConversationHandler( entry_points=[CommandHandler('trade', trade)], states={ NAME_adduser: [MessageHandler(Filters.text, callback=binance)], }, fallbacks=[CommandHandler('quit', quit)] ) I searched a lot but found nothing about using an InlineKeyboardButton value as a command A: It sounds like you want to use the value from an inline keyboard button as a command in your Telegram bot. To do this, you can use the callback_data parameter of the InlineKeyboardButton class in the python-telegram-bot library. When a user clicks on the button, the callback_data value will be sent to your bot in the form of a CallbackQuery object. You can then use this value to determine which command to run. Here's an example of how you might use the callback_data parameter in your code: from telegram import InlineKeyboardButton, InlineKeyboardMarkup def start(update, context): # Create an inline keyboard with a button that says "Trade" trade_button = InlineKeyboardButton("Trade", callback_data="trade") keyboard = [[trade_button]] reply_markup = InlineKeyboardMarkup(keyboard) # Send the keyboard to the user update.message.reply_text("Press the 'Trade' button to start a trade:", reply_markup=reply_markup) def trade(update, context): # When the user clicks the "Trade" button, this function will be called query = update.callback_query query.answer() # Ask the user for more details about the trade query.edit_message_text("Enter the details of the trade:") You can then use a CallbackQueryHandler to handle the CallbackQuery object that is sent when the user clicks on the button. For example from telegram.ext import CallbackQueryHandler # Set up a CallbackQueryHandler to handle the "trade" command trade_handler = CallbackQueryHandler(trade, pattern="trade") # Add the handler to the dispatcher dispatcher.add_handler(trade_handler) When the user clicks on the "Trade" button, the trade function will be called, and you can use the information from the CallbackQuery object to process the user's input and continue the conversation.
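For the ConversationHandler part of the question specifically, a sketch of using the inline button itself as the entry point (python-telegram-bot v13-style API, matching the Filters.text usage above; the state name and the trade/binance/quit callbacks are the ones from the question, and trade() must return NAME_adduser for the state to be reached):

from telegram.ext import (CallbackQueryHandler, CommandHandler,
                          ConversationHandler, MessageHandler, Filters)

conv_handler = ConversationHandler(
    # the button press, not a /trade command, starts the conversation
    entry_points=[CallbackQueryHandler(trade, pattern="^trade$")],
    states={
        NAME_adduser: [MessageHandler(Filters.text & ~Filters.command, binance)],
    },
    fallbacks=[CommandHandler("quit", quit)],
)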
waiting for user answer in python telegram bot with inlinekeyboardbutton
I have a telegram bot and I use the python-telegram-bot library. What I am trying to do is: when a user, for example, presses the trade button, I want to ask him for some details, so I have to wait for the user input and then process the answer. I have this ConversationHandler with a CommandHandler entry point which is working well, but it works just with commands: ConversationHandler( entry_points=[CommandHandler('trade', trade)], states={ NAME_adduser: [MessageHandler(Filters.text, callback=binance)], }, fallbacks=[CommandHandler('quit', quit)] ) I searched a lot but found nothing about using an InlineKeyboardButton value as a command
[ "It sounds like you want to use the value from an inline keyboard button as a command in your Telegram bot. To do this, you can use the callback_data parameter of the InlineKeyboardButton class in the python-telegram-bot library. When a user clicks on the button, the callback_data value will be sent to your bot in the form of a CallbackQuery object. You can then use this value to determine which command to run.\nHere's an example of how you might use the callback_data parameter in your code:\nfrom telegram import InlineKeyboardButton, InlineKeyboardMarkup\n\ndef start(update, context):\n # Create an inline keyboard with a button that says \"Trade\"\n trade_button = InlineKeyboardButton(\"Trade\", callback_data=\"trade\")\n keyboard = [[trade_button]]\n reply_markup = InlineKeyboardMarkup(keyboard)\n\n # Send the keyboard to the user\n update.message.reply_text(\"Press the 'Trade' button to start a trade:\", reply_markup=reply_markup)\n\ndef trade(update, context):\n # When the user clicks the \"Trade\" button, this function will be called\n query = update.callback_query\n query.answer()\n\n # Ask the user for more details about the trade\n query.edit_message_text(\"Enter the details of the trade:\")\n\nYou can then use a CallbackQueryHandler to handle the CallbackQuery object that is sent when the user clicks on the button. For example\nfrom telegram.ext import CallbackQueryHandler\n\n# Set up a CallbackQueryHandler to handle the \"trade\" command\ntrade_handler = CallbackQueryHandler(trade, pattern=\"trade\")\n\n# Add the handler to the dispatcher\ndispatcher.add_handler(trade_handler)\n\nWhen the user clicks on the \"Trade\" button, the trade function will be called, and you can use the information from the CallbackQuery object to process the user's input and continue the conversation.\n" ]
[ 0 ]
[]
[]
[ "bots", "python_telegram_bot", "telegram" ]
stackoverflow_0074672620_bots_python_telegram_bot_telegram.txt
Q: PrestaShop admin login I'm new to PrestaShop and I've created my account. When I log in on the Admin page it doesn't take me to another one. It stays on the same login page, with my email and password cleared, like a refresh, but when I type a wrong password I get this message: There is one error. The employee does not exist, or the password provided is incorrect
PrestaShop admin login
I'm new to PrestaShop and I've created my account. When I log in on the Admin page it doesn't take me to another one. It stays on the same login page, with my email and password cleared, like a refresh, but when I type a wrong password I get this message: There is one error. The employee does not exist, or the password provided is incorrect
[]
[]
[ "sudo cat /home/bitnami/bitnami_credentials in Linux\n\nThe default username and password is '[email protected]' and '1CdDjm7lzdFv'.\n\n" ]
[ -1 ]
[ "authentication", "prestashop", "prestashop_1.7" ]
stackoverflow_0048665563_authentication_prestashop_prestashop_1.7.txt
Q: Selenium driver - not able to close the popup, element not found is the main issue Using the Selenium find_element (using XPath) method to close the popup, but it is not able to detect it. time.sleep(10) driver.find_element(By.XPATH,"XPATH").close() I have also used time.sleep and WebDriverWait methods but it is not working. Website: www.multcloud.com time.sleep(10), WebDriverWait with EC (expected conditions) method. Also tried find_elements but it returns an empty list. Tried find_element using class_name, xpath, full xpath, CSS selector, link text A: Try the below code driver.get("https://www.multcloud.com/") sleep(4) driver.switch_to.frame("layui-layer-iframe1") You could use explicit wait in the place of sleep sleep(3) button = driver.find_element(By.XPATH,"//div[@onclick='if (!window.__cfRLUnblockHandlers) return false; closePopup()']") button.click() Imports from time import sleep from selenium import webdriver from selenium.webdriver.common.by import By
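A hedged variant of the answer above without fixed sleeps, using explicit waits (the frame id and the onclick XPath are taken from the answer and may change if the site updates):

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By

wait = WebDriverWait(driver, 10)
# wait for the popup iframe to exist, then switch into it
wait.until(EC.frame_to_be_available_and_switch_to_it((By.ID, "layui-layer-iframe1")))
# wait for the close control to become clickable, then click it
wait.until(EC.element_to_be_clickable(
    (By.XPATH, "//div[contains(@onclick, 'closePopup()')]"))).click()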
Selenium driver - not able to close the popup, element not found is the main issue
Using the Selenium find_element (using XPath) method to close the popup, but it is not able to detect it. time.sleep(10) driver.find_element(By.XPATH,"XPATH").close() I have also used time.sleep and WebDriverWait methods but it is not working. Website: www.multcloud.com time.sleep(10), WebDriverWait with EC (expected conditions) method. Also tried find_elements but it returns an empty list. Tried find_element using class_name, xpath, full xpath, CSS selector, link text
[ "Try the below code\ndriver.get(\"https://www.multcloud.com/\")\nsleep(4)\ndriver.switch_to.frame(\"layui-layer-iframe1\")\n\nYou could use explicit wait in the place of sleep\nsleep(3)\nbutton = driver.find_element(By.XPATH,\"//div[@onclick='if (!window.__cfRLUnblockHandlers) return false; closePopup()']\")\n\nbutton.click()\n\nImports\nfrom time import sleep\nfrom selenium import webdriver\nfrom selenium.webdriver.common.by import By\n\n" ]
[ 0 ]
[]
[]
[ "popup", "python_3.x", "selenium", "selenium_webdriver", "webdriver" ]
stackoverflow_0074672133_popup_python_3.x_selenium_selenium_webdriver_webdriver.txt
Q: Guard Node in torrc file Hello, I have some questions related to Tor. How to disable the Guard node in the torrc file or by using stem. Is there any method in stem where I can specify my Exit Node? I know a method in the torrc file but I don't know how to do it in stem or using the controller, for example. I want this because I want my entry node to change for every circuit and the Exit to stay the same controller.set_options({'__DisablePredictedCircuits': '1', 'MaxOnionsPending': '0', 'newcircuitperiod': '999999999', 'maxcircuitdirtiness': '999999999'}) Also, if it is possible in this part, that would be good too; I mean, if I could pass my nodes as an argument here controller.new_circuit() A: To disable Guard nodes in the torrc file, you can add the following lines: UseEntryGuards 0 NumEntryGuards 0 To specify an Exit node in the torrc file, you can add the following line: ExitNodes $fingerprint where $fingerprint is the fingerprint of the Exit node you want to use. You can use the set_options method of the Controller class to set the UseEntryGuards, NumEntryGuards, and ExitNodes options as follows: from stem.control import Controller with Controller.from_port() as controller: controller.authenticate() controller.set_options({ 'UseEntryGuards': '0', 'NumEntryGuards': '0', 'ExitNodes': '$fingerprint', }) To specify the Exit node when creating a new circuit, you can use the extend_circuit method of the Controller class as follows: from stem.control import Controller with Controller.from_port() as controller: controller.authenticate() controller.new_circuit() controller.extend_circuit('$fingerprint') where $fingerprint is the fingerprint of the Exit node you want to use.
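On the "pass my nodes as an argument" part: stem's new_circuit() does accept an explicit path of relay fingerprints, so a sketch of a fixed exit with a fresh entry per circuit (the fingerprint variables are placeholders you would fill in yourself) is:

from stem.control import Controller

with Controller.from_port(port=9051) as controller:
    controller.authenticate()
    # entry_fp changes on every call, exit_fp stays the same
    path = [entry_fp, middle_fp, exit_fp]
    circuit_id = controller.new_circuit(path=path, await_build=True)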
Guard Node in torrc file
Hello, I have some questions related to Tor. How to disable the Guard node in the torrc file or by using stem. Is there any method in stem where I can specify my Exit Node? I know a method in the torrc file but I don't know how to do it in stem or using the controller, for example. I want this because I want my entry node to change for every circuit and the Exit to stay the same controller.set_options({'__DisablePredictedCircuits': '1', 'MaxOnionsPending': '0', 'newcircuitperiod': '999999999', 'maxcircuitdirtiness': '999999999'}) Also, if it is possible in this part, that would be good too; I mean, if I could pass my nodes as an argument here controller.new_circuit()
[ "To disable Guard nodes in the torrc file, you can add the following lines:\nUseEntryGuards 0\nNumEntryGuards 0\n\nTo specify an Exit node in the torrc file, you can add the following line:\nExitNodes $fingerprint\n\nwhere $fingerprint is the fingerprint of the Exit node you want to use.\nYou can use the set_options method of the Controller class to set the UseEntryGuards, NumEntryGuards, and ExitNodes options as follows:\nfrom stem import Controller\n\nwith Controller.from_port() as controller:\n controller.authenticate()\n \n controller.set_options({\n 'UseEntryGuards': '0',\n 'NumEntryGuards': '0',\n 'ExitNodes': '$fingerprint',\n })\n\nTo specify the Exit node when creating a new circuit, you can use the extend_circuit method of the Controller class as follows:\nfrom stem import Controller\n\nwith Controller.from_port() as controller:\n controller.authenticate()\n \n controller.new_circuit()\n controller.extend_circuit('$fingerprint')\n\nwhere $fingerprint is the fingerprint of the Exit node you want to use.\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x", "stem" ]
stackoverflow_0074672521_python_python_3.x_stem.txt
Q: Cost of operations in Azure SQL Database I have an SQL Database in Azure with General Purpose type, really a basic one: It is not used frequently, only sometimes when I test some things on my website, which is why I didn't delete this resource. Recently, I noticed that the database management cost increased, but I didn't use the database at that time: Is there any way to investigate what caused these spikes on the diagram (Nov 22 - Nov 28)? I tried to find information about operations that were executed at that time with no success. Maybe there is some kind of log in Azure that can help me with this?
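One hedged starting point for the investigation: Azure SQL keeps per-query history in Query Store, which you can read with a few lines of Python (the connection string and the time window are assumptions; the sys.query_store_* views are standard):

import pyodbc

conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                      "SERVER=your-server.database.windows.net;"
                      "DATABASE=your-db;UID=your-user;PWD=your-password")
rows = conn.execute("""
    SELECT TOP 20 qt.query_sql_text, rs.count_executions, rs.last_execution_time
    FROM sys.query_store_query_text qt
    JOIN sys.query_store_query q ON q.query_text_id = qt.query_text_id
    JOIN sys.query_store_plan p ON p.query_id = q.query_id
    JOIN sys.query_store_runtime_stats rs ON rs.plan_id = p.plan_id
    WHERE rs.last_execution_time BETWEEN '2022-11-22' AND '2022-11-28'
    ORDER BY rs.count_executions DESC;
""").fetchall()
for row in rows:
    print(row)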
Cost of operations in Azure SQL Database
I have an SQL Database in Azure with General Purpose type, really a basic one: It is not used frequently, only sometimes when I test some things on my website, which is why I didn't delete this resource. Recently, I noticed that the database management cost increased, but I didn't use the database at that time: Is there any way to investigate what caused these spikes on the diagram (Nov 22 - Nov 28)? I tried to find information about operations that were executed at that time with no success. Maybe there is some kind of log in Azure that can help me with this?
[]
[]
[ "Please consider to open Azure Portal and access your Azure SQL Database, on the left panel you will see \"Query Performance Insights\", use that option. Use sliders or zoom icons to change the observed interval. Read step-by-step procedure here.\nWhile you Investigate this issue, please consider the following causes also:\n\nMake sure you did not enable temporarily a tool to monitor or make sure your web site is up and running.\nDid you enable a feature temporarily on Azure portal for Azure SQL? Azure SQL Database features like geo-replication, failover group, long-term backup retention, Azure SQL Data Sync, Elastic Jobs create activity on the database.\nDid you enable temporarily features using T-SQL like Full-Text Search, that constantly generate queries against the database?\nIf your database is serverless, did you left accidentally a tool like SQL Server Management Tool or Visual Studio connected a couple of dates to the database until you shutdown the client computer.\n\nMy suggestion, if you rarely use this database and you have not set it up as serverless, it is a good time to try serverless.\n" ]
[ -1 ]
[ "azure", "azure_sql_database" ]
stackoverflow_0074670536_azure_azure_sql_database.txt
Q: Dequeue function is not working correctly #include<stdio.h> #include<string.h> #include<stdlib.h> #define max 100 int enqueue(); int dequeue(); int peek(); int main() { char name[max][80], data[80]; int front = 0; int rear = 0; int value; int ch; printf("------------------------------\n"); printf("\tMenu"); printf("\n------------------------------"); printf("\n [1] ENQUEUE"); printf("\n [2] DEQUEUE"); printf("\n [3] PEEK"); printf("\n [4] DISPLAY"); printf("\n------------------------------\n"); while(1) { printf("Choice : "); scanf("%d", &ch); switch(ch) { case 1 : // insert printf("\nEnter the Name : "); scanf("%s",data); value = enqueue(name, &rear, data); if(value == -1 ) printf("\n QUEUE is Full \n"); else printf("\n'%s' is inserted in QUEUE.\n\n",data); break; case 2 : // delete value = dequeue(name, &front, &rear, data); if( value == -1 ) printf("\n QUEUE is Empty \n"); else printf("\n Deleted Name from QUEUE is : %s\n", data); printf("\n"); break; case 3: value = peek(name, &front, &rear, data); if(value != -1) { printf("\n The front is: %s\n", data); } break; case 5 : exit(0); default: printf("Invalid Choice \n"); } } return 0; } int enqueue(char name[max][80], int *rear, const char data[80]) { if(*rear + 1 == max) return -1; strcpy(name[*rear], data); (*rear)++; return 1; } int dequeue(char name[max][80], int *front, int *rear, char data[80]) { if(*front == *rear) return(-1); else { (*front)++; strcpy(data, name[*front]); return(1); } } int peek(char name[max][80], int *front, int *rear, char data[80]) { if(*front == *rear) { printf(" QUEUE IS EMPTY\n"); return -1; } strcpy(data, name[*front]); return 1; } Student here. My dequeue is not working correctly. The dequeue function is not deleting the first element but the second element. For example, The user, first inputs the name "Jennie" and then the second is "Lisa", when the user selects the dequeue function, "Jennie" should be deleted, but my program deletes the second element which is "Lisa". How to fix this? A: You need to increment the front after you dequeue the value: int dequeue(char name[max][80], int *front, int *rear, char data[80]) { if(*front == *rear) return -1; strcpy(data, name[(*front)++]); return 1; }
Dequeue function is not working correctly
#include<stdio.h> #include<string.h> #include<stdlib.h> #define max 100 int enqueue(); int dequeue(); int peek(); int main() { char name[max][80], data[80]; int front = 0; int rear = 0; int value; int ch; printf("------------------------------\n"); printf("\tMenu"); printf("\n------------------------------"); printf("\n [1] ENQUEUE"); printf("\n [2] DEQUEUE"); printf("\n [3] PEEK"); printf("\n [4] DISPLAY"); printf("\n------------------------------\n"); while(1) { printf("Choice : "); scanf("%d", &ch); switch(ch) { case 1 : // insert printf("\nEnter the Name : "); scanf("%s",data); value = enqueue(name, &rear, data); if(value == -1 ) printf("\n QUEUE is Full \n"); else printf("\n'%s' is inserted in QUEUE.\n\n",data); break; case 2 : // delete value = dequeue(name, &front, &rear, data); if( value == -1 ) printf("\n QUEUE is Empty \n"); else printf("\n Deleted Name from QUEUE is : %s\n", data); printf("\n"); break; case 3: value = peek(name, &front, &rear, data); if(value != -1) { printf("\n The front is: %s\n", data); } break; case 5 : exit(0); default: printf("Invalid Choice \n"); } } return 0; } int enqueue(char name[max][80], int *rear, const char data[80]) { if(*rear + 1 == max) return -1; strcpy(name[*rear], data); (*rear)++; return 1; } int dequeue(char name[max][80], int *front, int *rear, char data[80]) { if(*front == *rear) return(-1); else { (*front)++; strcpy(data, name[*front]); return(1); } } int peek(char name[max][80], int *front, int *rear, char data[80]) { if(*front == *rear) { printf(" QUEUE IS EMPTY\n"); return -1; } strcpy(data, name[*front]); return 1; } Student here. My dequeue is not working correctly. The dequeue function is not deleting the first element but the second element. For example, The user, first inputs the name "Jennie" and then the second is "Lisa", when the user selects the dequeue function, "Jennie" should be deleted, but my program deletes the second element which is "Lisa". How to fix this?
[ "You need to increment the front after you dequeue the value:\nint dequeue(char name[max][80], int *front, int *rear, char data[80])\n{\n if(*front == *rear)\n return -1;\n strcpy(data, name[(*front)++]);\n return 1;\n}\n\n" ]
[ 2 ]
[]
[]
[ "c", "queue" ]
stackoverflow_0074672558_c_queue.txt
Q: Why should I still add function keyword when I configured a vue3 project with Typescript support? I come from an Angular background, in which we don't use the function keyword when we declare methods inside a component. This is because I'm using TypeScript, which is ECMAScript6 compliant. Now, I'm learning vue3 (composition API) with TypeScript enabled. But if I omit the function keyword when declaring functions inside a component, it's not being recognized and the compiler throws an error. I'm trying to understand why there is this difference but am unable to figure it out. Thank you for your help in making me understand the concepts. // Doesn't work updateName(fName: string, lName: string) {} // works function updateName(fName: string, lName: string) {}
Why should I still add function keyword when I configured a vue3 project with Typescript support?
I come from an Angular background, in which we don't use the function keyword when we declare methods inside a component. This is because I'm using TypeScript, which is ECMAScript6 compliant. Now, I'm learning vue3 (composition API) with TypeScript enabled. But if I omit the function keyword when declaring functions inside a component, it's not being recognized and the compiler throws an error. I'm trying to understand why there is this difference but am unable to figure it out. Thank you for your help in making me understand the concepts. // Doesn't work updateName(fName: string, lName: string) {} // works function updateName(fName: string, lName: string) {}
[ "In Angular, we define methods in a class-based component:\n@Component()\nclass MyAngularComponent {\n // Method inside a class\n myClassMethod() {}\n}\n\nThis uses the ES6 method definition in a class.\nBut in Vue 3 Composition API, we just write and call a script, there is no longer a wrapping object (like in Vue 2 Options API) or class (like in vue-class-component, Angular or React class-based components):\n<script setup>\n// No wrapping object or class\n// Functions are directly in the script body\nfunction someHelperFunction() {}\n</script>\n\nTherefore the programming language syntax (whether JavaScript or TypeScript) does not allow the method definition syntax in this case.\n" ]
[ 0 ]
[]
[]
[ "typescript", "vue_composition_api", "vuejs3" ]
stackoverflow_0074672529_typescript_vue_composition_api_vuejs3.txt
Q: JavaScript Swiper Native Navigation Function is not working I'm using Swiper to make a slider on my website. Unfortunately the navigation isn't working in Chrome. The buttons appear but don't do anything. This is my code: <div class="swiper-container"> <div class="swiper-wrapper"> <div class="swiper-slide"> </div> <div class="swiper-slide"> </div> <div class="swiper-slide"> </div> <div class="swiper-slide"> </div> <div class="swiper-slide"> </div> </div> <div class="swiper-button-next"></div> <div class="swiper-button-prev"></div> </div> <script src="js/swiper/swiper.min.js"></script> <script> var swiper = new Swiper('.swiper-container', { navigation: { nextEl: '.swiper-button-next', prevEl: '.swiper-button-prev', }, slidesPerView: 3, spaceBetween: 5, loop: true, centeredSlides: true, }); </script> I hope someone can help me, since I could not find any information relating to this topic.
JavaScript Swiper Native Navigation Function is not working
I'm using Swiper to make a slider on my website. Unfortunately the navigation isn't working in Chrome. The buttons appear but don't do anything. This is my code: <div class="swiper-container"> <div class="swiper-wrapper"> <div class="swiper-slide"> </div> <div class="swiper-slide"> </div> <div class="swiper-slide"> </div> <div class="swiper-slide"> </div> <div class="swiper-slide"> </div> </div> <div class="swiper-button-next"></div> <div class="swiper-button-prev"></div> </div> <script src="js/swiper/swiper.min.js"></script> <script> var swiper = new Swiper('.swiper-container', { navigation: { nextEl: '.swiper-button-next', prevEl: '.swiper-button-prev', }, slidesPerView: 3, spaceBetween: 5, loop: true, centeredSlides: true, }); </script> I hope someone can help me, since I could not find any information relating to this topic.
[ "Try importing Navigation from the Swiper lib. And then Swiper.use()\nimport Swiper, { Navigation } from 'swiper';\n\nSwiper.use([Navigation]);\n\nconst swiper = new Swiper(...);\n\n", "I had this problem while working with Next.JS,React. I spent almost a day figuring out what is wrong. Until I found that I should import the library using SwiperCore. The documentation is kinda straight forward to it.\nimport SwiperCore, { Navigation } from 'swiper';\nSwiperCore.use([Navigation]);\n\nSo basically, there's no fancy in here, it is just the library splice its code into smaller pieces so the things that you only really need will be include to your bundled. (Though tree-shaking is already a thing now).\nSo I added this to my _app.tsx, and the native function will be included to all the swiper using a navigation.\n", "I had exactly the same issue and could not understand.\nThis question helped me understand that the problem was there until the windows was resized.\nAdding:\nobserver: true, \nobserveParents: true\n\nTo the Swiper config solved the problem for me\n", "As said in documentation\n\nBy default Swiper exports only core version without additional modules\n(like Navigation, Pagination, etc.). So you need to import and\nconfigure them too:\n\n// core version + navigation, pagination modules:\nimport Swiper, { Navigation, Pagination } from 'swiper';\n\n// configure Swiper to use modules\nSwiper.use([Navigation, Pagination]);\n\n" ]
[ 30, 16, 14, 11 ]
[ "if you are writing java Script in different file like script.js and adding it to the main html ,..and then you are using you are using swiper cdn, you have to add the cdn before custom js file....\n" ]
[ -1 ]
[ "javascript", "swiper.js" ]
stackoverflow_0050009818_javascript_swiper.js.txt
Q: What is the purpose of X-Sender and its official definition? I see X-Sender is in the header of some email raw messages. I don't find a definition of it. Could anybody show me where its definition is? What is its purpose? A: The X-Sender header is not a standardized header defined in any official specification. It is typically added to an email message by the email client or server that is sending the message, and its purpose is to identify the sender of the message. The X-Sender header is not part of the original email protocol, which only includes a From header to identify the sender. The X-Sender header is used as an additional means of identifying the sender, and it may be used in some email clients or servers to help prevent email spoofing or for other purposes. The exact format and content of the X-Sender header may vary depending on the email client or server that is adding it to the message. In some cases, it may include the sender's email address, while in other cases it may contain the sender's name or other identifying information. Because the X-Sender header is not a standardized part of the email protocol, it is not guaranteed to be present in all email messages, and its content may not always be reliable or accurate. It is generally not recommended to rely on the X-Sender header for any critical information about the sender of an email message.
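For completeness, a minimal sketch of checking the header yourself with Python's standard email module (raw_message is assumed to hold the raw RFC 822 message text):

from email import message_from_string

msg = message_from_string(raw_message)
# returns None when the header is absent, which is common
print(msg.get("X-Sender"))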
What is the purpose of X-Sender and its official definition?
I see X-Sender is in the header of some email raw messages. I don't find a definition of it. Could anybody show me where its definition is? What is its purpose?
[ "The X-Sender header is not a standardized header defined in any official specification. It is typically added to an email message by the email client or server that is sending the message, and its purpose is to identify the sender of the message.\nThe X-Sender header is not part of the original email protocol, which only includes a From header to identify the sender. The X-Sender header is used as an additional means of identifying the sender, and it may be used in some email clients or servers to help prevent email spoofing or for other purposes.\nThe exact format and content of the X-Sender header may vary depending on the email client or server that is adding it to the message. In some cases, it may include the sender's email address, while in other cases it may contain the sender's name or other identifying information.\nBecause the X-Sender header is not a standardized part of the email protocol, it is not guaranteed to be present in all email messages, and its content may not always be reliable or accurate. It is generally not recommended to rely on the X-Sender header for any critical information about the sender of an email message.\n" ]
[ 0 ]
[]
[]
[ "email" ]
stackoverflow_0074672664_email.txt
Q: unreal engine 4 open level (by name) does not work? Hi, I am new to the Unreal Engine platform. Now, I am studying a first person shooter project. I did everything but I could not make the start button work. I tried a lot of things to make the start button work. Firstly, I fixed the name of the map because the map names should be the same. Secondly, I entered the map files into the packaging settings in the project settings. But I could not find anything else to fix the mistake. There are no errors except building color errors. Is it related to that, or is there anything else to fix? Also, it works for other maps. However, it does not work for the first person shooter map. What should I do? I really worked hard on this project and I was so excited.
unreal engine 4 open level (by name) does not work?
Hi, I am new to the Unreal Engine platform. Now, I am studying a first person shooter project. I did everything but I could not make the start button work. I tried a lot of things to make the start button work. Firstly, I fixed the name of the map because the map names should be the same. Secondly, I entered the map files into the packaging settings in the project settings. But I could not find anything else to fix the mistake. There are no errors except building color errors. Is it related to that, or is there anything else to fix? Also, it works for other maps. However, it does not work for the first person shooter map. What should I do? I really worked hard on this project and I was so excited.
[ "Make sure capitalization is correct, if not try opening level by string instead of object reference. Also try putting a print string after your button is clicked to make sure the program is actually running. If the program is not running than make sure that the z order is at like 10 in the add to viewport block. As a last resort, add the open level to your player character, if that dosen't work, consider making a new project, verifying the engine (epic games launcher only) or install a later engine version (i.e 4.27 or 4.26) if you are on an older one (though I think open level by object reference is only in 4.27)\nhere are some opening levels with widget examples:\nOpening level by object reference\nOpen level by name\nEdit: Click the blue text (hyperlinks) to open the imgur website with the images (idk why there isnt a preview)\n", "Thank you so much, I have been spending few hours on this, but never I thought I should try to not use the name,\nMy issue was in UE5, but it also has the open level by Object reference\nthis solved my issue, thank you!\nspeaking of long names, there is something in the editor settings > Enable support for long paths ( >260 characters) - even in 4.27\n" ]
[ 0, 0 ]
[]
[]
[ "game_development", "unreal_engine4" ]
stackoverflow_0072651885_game_development_unreal_engine4.txt