Dataset schema: content (string, 86 to 88.9k chars) · title (string, 0 to 150 chars) · question (string, 1 to 35.8k chars) · answers (sequence) · answers_scores (sequence) · non_answers (sequence) · non_answers_scores (sequence) · tags (sequence) · name (string, 30 to 130 chars)
Key shortcut to cycle between two windows in Vim
Is there an easy way to cycle between two windows (buffers!) in Vim? With vim file1 file2 one can use :n and :N to go back and forth. But this is cumbersome and leads to errors if either :n or :N is typed twice. I'm hoping for something as simple as ZZ to go back and forth with a single key.
[ "\nIs there an easy way to cycle between two windows (buffers!)\n\nWell… \"windows\", \"buffers\", or, since you are using :n and :N, \"files\"? They are not the same thing at all.\nFiles\nAs was mentioned in the comments, <C-^> (or <C-6> on some keyboards) can be used to switch between two files but, since it relies on the notion of \"alternate file\", you must do :next first in order to establish the relationship between the two files.\nThe problem with the argument list, the list where the files you passed as argument to Vim are stored, is that it is quite low-level. :n and :N don't wrap around, the alternate file is not set automatically, etc. Bummer.\nExample:\n$ vim file1 file2 # open two files, file1 is current\n:n \" switch to file2\n<C-^> \" switch to file1\n<C-^> \" switch to file2\n<C-^> \" switch to file1\n…\n\nSee :help argument-list.\nBuffers\nEvery \"file\" you open in Vim (from within Vim or from the shell) becomes a buffer so, in addition to the low-level argument list, you have the buffer list which operates at a slightly higher level.\nSince you only opened two files, you only have two buffers, between which you can switch with :help :bnext because, unlike :n and :N, :bn (and :bN) wrap around.\nExample:\n$ vim file1 file2 # open two files and therefore two buffers, file1 is current\n:bn \" switch to file2\n:bn \" switch to file1\n:bn \" switch to file2\n:bn \" switch to file1\n…\n\nNote that you can map :bn to something easier if you want.\n<C-^> can also be used in this context because the boundary between \"file\" and \"buffer\" is a bit murky, but you still have to establish the relationship first by doing a manual switch.\nWindows\nYou can switch between two windows with <C-w>p: :help CTRL-W_p.\nGoing forward\n<C-^> is quite useful but you need to establish a relationship between two files/buffers for it to work and Vim unfortunately doesn't do that at startup. It is possible to force it, though, but it is not exactly intuitive. YMMV:\n$ vim +n +N file1 file2\n\nSee :help -+c.\n" ]
[ 1 ]
[]
[]
[ "vim" ]
stackoverflow_0074669673_vim.txt
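A sketch of the mapping the answer above alludes to, for a .vimrc — the chosen keys are just one possibility, not a convention:

    " cycle through buffers with Tab in normal mode
    nnoremap <Tab> :bnext<CR>
    " jump straight to the alternate buffer once it has been established
    nnoremap <BS> <C-^>

With such a mapping in place, a single keypress toggles between the two buffers, which is the single-key behaviour the question asks for.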
ABP version update
Hi guys, I am trying to update the ABP version from 5.2.2 to 6.0.1. It is a single-layer web app with Blazor Server. I executed abp update, then did the schema, Blazor, npm, and yarn updates, and now when I run the app I get this error message in the browser console:

Error: System.InvalidOperationException: Unable to set property 'Clicked' on object of type 'Blazorise.BarDropdownItem'. The error was: Unable to cast object of type 'Microsoft.AspNetCore.Components.EventCallback' to type 'Microsoft.AspNetCore.Components.EventCallback`1[Microsoft.AspNetCore.Components.Web.MouseEventArgs]'.
 ---> System.InvalidCastException: Unable to cast object of type 'Microsoft.AspNetCore.Components.EventCallback' to type 'Microsoft.AspNetCore.Components.EventCallback`1[Microsoft.AspNetCore.Components.Web.MouseEventArgs]'.
   at Microsoft.AspNetCore.Components.Reflection.PropertySetter.CallPropertySetter[TTarget,TValue](Action`2 setter, Object target, Object value)
   at Microsoft.AspNetCore.Components.Reflection.PropertySetter.SetValue(Object target, Object value)
   at Microsoft.AspNetCore.Components.Reflection.ComponentProperties.<SetProperties>g__SetProperty|3_0(Object target, PropertySetter writer, String parameterName, Object value)
   --- End of inner exception stack trace ---
   at Microsoft.AspNetCore.Components.Reflection.ComponentProperties.<SetProperties>g__SetProperty|3_0(Object target, PropertySetter writer, String parameterName, Object value)
   at Microsoft.AspNetCore.Components.Reflection.ComponentProperties.SetProperties(ParameterView& parameters, Object target)
   at Microsoft.AspNetCore.Components.ParameterView.SetParameterProperties(Object target)
   at Microsoft.AspNetCore.Components.ComponentBase.SetParametersAsync(ParameterView parameters)
   at Blazorise.BaseComponent.SetParametersAsync(ParameterView parameters)
   at Microsoft.AspNetCore.Components.Rendering.ComponentState.SupplyCombinedParameters(ParameterView directAndCascadingParameters)
[ "after creating new project with abp cli i copied packages versions and it worked. issue was with\n\"Blazorise.Bootstrap5\" Version=\"1.0.4\" />\n\"Blazorise.Icons.FontAwesome\" Version=\"1.0.4\" />\n\nthis is correct version\n" ]
[ 0 ]
[]
[]
[ "abp", "blazorise" ]
stackoverflow_0074671774_abp_blazorise.txt
How do I query the ng-model table with XPath in Selenium and Java
I am trying to get a table from a DOCTYPE Html page through XPath or by className in Selenium/Java, but I cannot find the locator. How can I get the table through Selenium Java? Both paths below are not working. The table screenshot is at Table screenshot.

WebElement tableElement = driver.findElement(By.className("table table-striped"));
WebElement tableElement1 = driver.findElement(By.xpath("/html/body/div[2]/div[2]/div/div/div/div/table/tbody"));

I would like to get all the rows in the table through Selenium. I am getting:

Exception in thread "main" org.openqa.selenium.NoSuchElementException: no such element: Unable to locate element: {"method":"xpath","selector":"/html/body/div[2]/div[2]/div/div/div/div/table/tbody"}
Exception in thread "main" org.openqa.selenium.NoSuchElementException: no such element: Unable to locate element: {"method":"css selector","selector":".table\.table\-striped"}
[ "Well, that's a very interesting question because you highlight a well-know phenomena (which, for some reason, skip the radar of many developers).\nToday, we are heavily relying on modern web stacks (Yes, angularjs is still considered as modern) and thus, exposed to the \"elements\" of it.\nAs users, it's fun and intuitive when page loades in a fancy way (aka lazy loaded) which means: the page loads it's essential resources first, and only then the rest. You mentioned a table, which probably query the data via an API call and may not be rendered when the page load. For us, humans, it's seems pretty fast but - the element is not there when the page first initiate.\nHow to check it?\ngo to you page, wait for everything to load properly, then hit the right-click mouse button and choose view page source. Is the table there? if not - it is lazy loaded and you need to.. wait for it\nHow to fix this?\nLet's start with reading the documentation and fully understand what we are doing (and when.. and why). Then we may use a mechanism proposed there - itmay look something like this:\nWebElement table = new WebDriverWait(driver, Duration.ofSeconds(3))\n .until(driver -> driver.findElement(By.cssSelector(\"table .table-striped\")));\n\nI assume that the .table-stripped element is inside a table element - but you can play with it according to your html structure until you get the proper selector for your table\nYou may need to use this approach more than you think, so - take your time to properly educate yourself and feel comfortable before rushing into the next problem. Happy coding\n" ]
[ 0 ]
[]
[]
[ "angularjs", "java", "javascript", "selenium", "xpath" ]
stackoverflow_0074673821_angularjs_java_javascript_selenium_xpath.txt
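Building on the answer above, a sketch that waits for the lazily rendered table and then collects its rows — the selector, the timeout, and an already-created driver are assumptions to adapt:

import java.time.Duration;
import java.util.List;
import org.openqa.selenium.By;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

// wait until the lazily loaded table is actually present in the DOM
WebElement table = new WebDriverWait(driver, Duration.ofSeconds(10))
        .until(ExpectedConditions.presenceOfElementLocated(By.cssSelector("table.table-striped")));

// collect every row of the table body and print its text
List<WebElement> rows = table.findElements(By.cssSelector("tbody tr"));
for (WebElement row : rows) {
    System.out.println(row.getText());
}

The explicit wait replaces the immediate findElement call from the question, which fails precisely because the table has not been rendered yet when the page first loads.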
Arduino Mega with L298N and motors with encoders not registering encoders
I am trying to follow a tutorial from YouTube on using ROS with Arduino to control motors. I have connected my L298N with the battery precisely as the video describes, and I have uploaded sketch 1 with the supporting folder and it loads properly. The Arduino is powered properly via USB, but that connection is not shown in the diagram. When I type the "e" command, I get the proper response of "0 0", and when I do "o 255 255" it says "OK" and drives properly, but upon using "e" to recheck the encoders I am getting the same "0 0". If anyone can spot something wrong with this, I would really appreciate the help in fixing it. Diagram and code below.
Code:

#define USE_BASE      // Enable the base controller code
//#undef USE_BASE     // Disable the base controller code

/* Define the motor controller and encoder library you are using */
#ifdef USE_BASE
   /* The Pololu VNH5019 dual motor driver shield */
   //#define POLOLU_VNH5019

   /* The Pololu MC33926 dual motor driver shield */
   //#define POLOLU_MC33926

   /* The RoboGaia encoder shield */
   //#define ROBOGAIA

   /* Encoders directly attached to Arduino board */
   #define ARDUINO_ENC_COUNTER

   /* L298 Motor driver*/
   #define L298_MOTOR_DRIVER
#endif

//#define USE_SERVOS  // Enable use of PWM servos as defined in servos.h
#undef USE_SERVOS     // Disable use of PWM servos

/* Serial port baud rate */
#define BAUDRATE 57600

/* Maximum PWM signal */
#define MAX_PWM 255

#if defined(ARDUINO) && ARDUINO >= 100
#include "Arduino.h"
#else
#include "WProgram.h"
#endif

/* Include definition of serial commands */
#include "commands.h"

/* Sensor functions */
#include "sensors.h"

/* Include servo support if required */
#ifdef USE_SERVOS
   #include <Servo.h>
   #include "servos.h"
#endif

#ifdef USE_BASE
  /* Motor driver function definitions */
  #include "motor_driver.h"

  /* Encoder driver function definitions */
  #include "encoder_driver.h"

  /* PID parameters and functions */
  #include "diff_controller.h"

  /* Run the PID loop at 30 times per second */
  #define PID_RATE 30     // Hz

  /* Convert the rate into an interval */
  const int PID_INTERVAL = 1000 / PID_RATE;

  /* Track the next time we make a PID calculation */
  unsigned long nextPID = PID_INTERVAL;

  /* Stop the robot if it hasn't received a movement command in this number of milliseconds */
  #define AUTO_STOP_INTERVAL 2000
  long lastMotorCommand = AUTO_STOP_INTERVAL;
#endif

/* Variable initialization */

// A pair of variables to help parse serial commands (thanks Fergs)
int arg = 0;
int index = 0;

// Variable to hold an input character
char chr;

// Variable to hold the current single-character command
char cmd;

// Character arrays to hold the first and second arguments
char argv1[16];
char argv2[16];

// The arguments converted to integers
long arg1;
long arg2;

/* Clear the current command parameters */
void resetCommand() {
  cmd = NULL;
  memset(argv1, 0, sizeof(argv1));
  memset(argv2, 0, sizeof(argv2));
  arg1 = 0;
  arg2 = 0;
  arg = 0;
  index = 0;
}

/* Run a command.  Commands are defined in commands.h */
int runCommand() {
  int i = 0;
  char *p = argv1;
  char *str;
  int pid_args[4];
  arg1 = atoi(argv1);
  arg2 = atoi(argv2);

  switch(cmd) {
  case GET_BAUDRATE:
    Serial.println(BAUDRATE);
    break;
  case ANALOG_READ:
    Serial.println(analogRead(arg1));
    break;
  case DIGITAL_READ:
    Serial.println(digitalRead(arg1));
    break;
  case ANALOG_WRITE:
    analogWrite(arg1, arg2);
    Serial.println("OK");
    break;
  case DIGITAL_WRITE:
    if (arg2 == 0) digitalWrite(arg1, LOW);
    else if (arg2 == 1) digitalWrite(arg1, HIGH);
    Serial.println("OK");
    break;
  case PIN_MODE:
    if (arg2 == 0) pinMode(arg1, INPUT);
    else if (arg2 == 1) pinMode(arg1, OUTPUT);
    Serial.println("OK");
    break;
  case PING:
    Serial.println(Ping(arg1));
    break;
#ifdef USE_SERVOS
  case SERVO_WRITE:
    servos[arg1].setTargetPosition(arg2);
    Serial.println("OK");
    break;
  case SERVO_READ:
    Serial.println(servos[arg1].getServo().read());
    break;
#endif

#ifdef USE_BASE
  case READ_ENCODERS:
    Serial.print(readEncoder(LEFT));
    Serial.print(" ");
    Serial.println(readEncoder(RIGHT));
    break;
  case RESET_ENCODERS:
    resetEncoders();
    resetPID();
    Serial.println("OK");
    break;
  case MOTOR_SPEEDS:
    /* Reset the auto stop timer */
    lastMotorCommand = millis();
    if (arg1 == 0 && arg2 == 0) {
      setMotorSpeeds(0, 0);
      resetPID();
      moving = 0;
    }
    else moving = 1;
    leftPID.TargetTicksPerFrame = arg1;
    rightPID.TargetTicksPerFrame = arg2;
    Serial.println("OK");
    break;
  case MOTOR_RAW_PWM:
    /* Reset the auto stop timer */
    lastMotorCommand = millis();
    resetPID();
    moving = 0; // Sneaky way to temporarily disable the PID
    setMotorSpeeds(arg1, arg2);
    Serial.println("OK");
    break;
  case UPDATE_PID:
    while ((str = strtok_r(p, ":", &p)) != '\0') {
      pid_args[i] = atoi(str);
      i++;
    }
    Kp = pid_args[0];
    Kd = pid_args[1];
    Ki = pid_args[2];
    Ko = pid_args[3];
    Serial.println("OK");
    break;
#endif
  default:
    Serial.println("Invalid Command");
    break;
  }
}

/* Setup function--runs once at startup. */
void setup() {
  Serial.begin(BAUDRATE);

// Initialize the motor controller if used */
#ifdef USE_BASE
  #ifdef ARDUINO_ENC_COUNTER
    //set as inputs
    DDRD &= ~(1<<LEFT_ENC_PIN_A);
    DDRD &= ~(1<<LEFT_ENC_PIN_B);
    DDRC &= ~(1<<RIGHT_ENC_PIN_A);
    DDRC &= ~(1<<RIGHT_ENC_PIN_B);

    //enable pull up resistors
    PORTD |= (1<<LEFT_ENC_PIN_A);
    PORTD |= (1<<LEFT_ENC_PIN_B);
    PORTC |= (1<<RIGHT_ENC_PIN_A);
    PORTC |= (1<<RIGHT_ENC_PIN_B);

    // tell pin change mask to listen to left encoder pins
    PCMSK2 |= (1 << LEFT_ENC_PIN_A)|(1 << LEFT_ENC_PIN_B);
    // tell pin change mask to listen to right encoder pins
    PCMSK1 |= (1 << RIGHT_ENC_PIN_A)|(1 << RIGHT_ENC_PIN_B);

    // enable PCINT1 and PCINT2 interrupt in the general interrupt mask
    PCICR |= (1 << PCIE1) | (1 << PCIE2);
  #endif
  initMotorController();
  resetPID();
#endif

/* Attach servos if used */
#ifdef USE_SERVOS
  int i;
  for (i = 0; i < N_SERVOS; i++) {
    servos[i].initServo(
        servoPins[i],
        stepDelay[i],
        servoInitPosition[i]);
  }
#endif
}

/* Enter the main loop.  Read and parse input from the serial port
   and run any valid commands. Run a PID calculation at the target
   interval and check for auto-stop conditions. */
void loop() {
  while (Serial.available() > 0) {
    // Read the next character
    chr = Serial.read();

    // Terminate a command with a CR
    if (chr == 13) {
      if (arg == 1) argv1[index] = NULL;
      else if (arg == 2) argv2[index] = NULL;
      runCommand();
      resetCommand();
    }
    // Use spaces to delimit parts of the command
    else if (chr == ' ') {
      // Step through the arguments
      if (arg == 0) arg = 1;
      else if (arg == 1) {
        argv1[index] = NULL;
        arg = 2;
        index = 0;
      }
      continue;
    }
    else {
      if (arg == 0) {
        // The first arg is the single-letter command
        cmd = chr;
      }
      else if (arg == 1) {
        // Subsequent arguments can be more than one character
        argv1[index] = chr;
        index++;
      }
      else if (arg == 2) {
        argv2[index] = chr;
        index++;
      }
    }
  }

// If we are using base control, run a PID calculation at the appropriate intervals
#ifdef USE_BASE
  if (millis() > nextPID) {
    updatePID();
    nextPID += PID_INTERVAL;
  }

  // Check to see if we have exceeded the auto-stop interval
  if ((millis() - lastMotorCommand) > AUTO_STOP_INTERVAL) {;
    setMotorSpeeds(0, 0);
    moving = 0;
  }
#endif

// Sweep servos
#ifdef USE_SERVOS
  int i;
  for (i = 0; i < N_SERVOS; i++) {
    servos[i].doSweep();
  }
#endif
}

Encoder Pin Designations:

/* *************************************************************
   Encoder driver function definitions - by James Nugen
   ************************************************************ */

#ifdef ARDUINO_ENC_COUNTER
  //below can be changed, but should be PORTD pins;
  //otherwise additional changes in the code are required
  #define LEFT_ENC_PIN_A PD2  //pin 2
  #define LEFT_ENC_PIN_B PD3  //pin 3

  //below can be changed, but should be PORTC pins
  #define RIGHT_ENC_PIN_A PC4  //pin A4
  #define RIGHT_ENC_PIN_B PC5  //pin A5
#endif

long readEncoder(int i);
void resetEncoder(int i);
void resetEncoders();

Encoder Driver:

/* *************************************************************
   Encoder definitions

   Add an "#ifdef" block to this file to include support for
   a particular encoder board or library. Then add the appropriate
   #define near the top of the main ROSArduinoBridge.ino file.
   ************************************************************ */

#ifdef USE_BASE

#ifdef ROBOGAIA
  /* The Robogaia Mega Encoder shield */
  #include "MegaEncoderCounter.h"

  /* Create the encoder shield object */
  MegaEncoderCounter encoders = MegaEncoderCounter(4); // Initializes the Mega Encoder Counter in the 4X Count mode

  /* Wrap the encoder reading function */
  long readEncoder(int i) {
    if (i == LEFT) return encoders.YAxisGetCount();
    else return encoders.XAxisGetCount();
  }

  /* Wrap the encoder reset function */
  void resetEncoder(int i) {
    if (i == LEFT) return encoders.YAxisReset();
    else return encoders.XAxisReset();
  }
#elif defined(ARDUINO_ENC_COUNTER)
  volatile long left_enc_pos = 0L;
  volatile long right_enc_pos = 0L;
  static const int8_t ENC_STATES [] = {0,1,-1,0,-1,0,0,1,1,0,0,-1,0,-1,1,0};  //encoder lookup table

  /* Interrupt routine for LEFT encoder, taking care of actual counting */
  ISR (PCINT2_vect){
    static uint8_t enc_last=0;

    enc_last <<=2; //shift previous state two places
    enc_last |= (PIND & (3 << 2)) >> 2; //read the current state into lowest 2 bits

    left_enc_pos += ENC_STATES[(enc_last & 0x0f)];
  }

  /* Interrupt routine for RIGHT encoder, taking care of actual counting */
  ISR (PCINT1_vect){
    static uint8_t enc_last=0;

    enc_last <<=2; //shift previous state two places
    enc_last |= (PINC & (3 << 4)) >> 4; //read the current state into lowest 2 bits

    right_enc_pos += ENC_STATES[(enc_last & 0x0f)];
  }

  /* Wrap the encoder reading function */
  long readEncoder(int i) {
    if (i == LEFT) return left_enc_pos;
    else return right_enc_pos;
  }

  /* Wrap the encoder reset function */
  void resetEncoder(int i) {
    if (i == LEFT){
      left_enc_pos=0L;
      return;
    } else {
      right_enc_pos=0L;
      return;
    }
  }
#else
  #error A encoder driver must be selected!
#endif

/* Wrap the encoder reset function */
void resetEncoders() {
  resetEncoder(LEFT);
  resetEncoder(RIGHT);
}

#endif
[ "I think if u use a Mega instead of an Uno, the pin ports are different.\nSo change the port from PD4 to PE4 and PD3 to PE5. Also change PC4 to PF4 and PC5 to PF5.\nIn the Encoder.ino you've to change also the ports accordingly.\nEncoder.h\n #define LEFT_ENC_PIN_A PE4 //pin 2\n #define LEFT_ENC_PIN_B PE5 //pin 3\n \n //below can be changed, but should be PORTC pins\n #define RIGHT_ENC_PIN_A PF5 //pin A4\n #define RIGHT_ENC_PIN_B PF5 //pin A5\n\nEncoder.ino\n /* Interrupt routine for LEFT encoder, taking care of actual counting */\n ISR (PCINT2_vect){\n static uint8_t enc_last=0;\n \n enc_last <<=2; //shift previous state two places\n enc_last |= (PINE & (3 << 2)) >> 2; //read the current state into lowest 2 bits\n \n left_enc_pos += ENC_STATES[(enc_last & 0x0f)];\n }\n \n /* Interrupt routine for RIGHT encoder, taking care of actual counting */\n ISR (PCINT1_vect){\n static uint8_t enc_last=0;\n \n enc_last <<=2; //shift previous state two places\n enc_last |= (PINF & (3 << 4)) >> 4; //read the current state into lowest 2 bits\n \n right_enc_pos += ENC_STATES[(enc_last & 0x0f)];\n }\n\nPlease leave a feedback, while I will run into the same problem... :)\nBR Thomas\n" ]
[ 0 ]
[]
[]
[ "arduino", "arduino_c++", "encoder", "motordriver", "robotics" ]
stackoverflow_0074481393_arduino_arduino_c++_encoder_motordriver_robotics.txt
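The answer above keeps the port-register/PCINT approach. For reference, a board-portable sketch of the same counting idea using the stock Arduino external-interrupt API — the pin numbers, x2 decoding, and direction sign are assumptions to adapt to the actual wiring. On a Mega, external interrupts are available on pins 2, 3, 18, 19, 20 and 21, while the Mega's pin-change interrupts are wired to a different, limited set of pins, which is one reason the Uno port code does not carry over directly:

volatile long left_enc_pos = 0L;

// quadrature decode on channel A edges: if A and B read equal after the
// edge, the wheel turned one way; if they differ, the other way
// (swap ++/-- if the count runs backwards for your wiring)
void leftEncoderISR() {
  if (digitalRead(2) == digitalRead(3)) left_enc_pos++;
  else left_enc_pos--;
}

void setup() {
  pinMode(2, INPUT_PULLUP);   // encoder channel A (assumed wiring)
  pinMode(3, INPUT_PULLUP);   // encoder channel B (assumed wiring)
  attachInterrupt(digitalPinToInterrupt(2), leftEncoderISR, CHANGE);
}

void loop() {}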
How to melt and unpivot a multi-header dataframe?
I have this data that I want to unpivot and melt into columns. The data is a multi-header table. I have a sample dictionary of the data.
Edit: I don't know how to convert a dictionary with multiple keys like I had shown previously into a df, so let's restructure the dictionary like so...

data = {
    "id": {0: "month", 1: "11/30/2021", 2: "12/31/2021", 3: "1/31/2022", 4: "2/28/2022", 5: "3/31/2022"},
    "A48": {0: "storage", 1: "0", 2: "29", 3: "35", 4: "33", 5: "30"},
    "A48.1": {0: "use", 1: "0", 2: "1", 3: "0", 4: "0", 5: "0"},
    "A62": {0: "direct", 1: "0", 2: "0", 3: "2", 4: "3", 5: "2"},
    "A62.1": {0: "storage", 1: "0", 2: "57", 3: "69", 4: "65", 5: "59"},
    "A62.2": {0: "use", 1: "0", 2: "1", 3: "0", 4: "0", 5: "0"},
}

Now let's get the DataFrame...

dfc = pd.DataFrame.from_dict(data)
dfc.columns = pd.MultiIndex.from_arrays([dfc.columns, dfc.iloc[0]])
dfc = dfc.iloc[2:].reset_index(drop=True)

Which looks like this:

           id     A48  A48.1    A62   A62.1  A62.2
        month storage    use direct storage    use
0  12/31/2021      29      1      0      57      1
1   1/31/2022      35      0      2      69      0
2   2/28/2022      33      0      3      65      0
3   3/31/2022      30      0      2      59      0

What I am looking for is a table like this:

month       id   direct  storage  use
11/30/2021  A48  NaN     0        0
12/31/2021  A48  NaN     29       1
1/31/2022   A48  NaN     35       0
2/28/2022   A48  NaN     33       0
3/31/2022   A48  NaN     30       0
11/30/2021  A62  0       0        0
12/31/2021  A62  0       57       1
1/31/2022   A62  2       69       0
2/28/2022   A62  3       65       0
3/31/2022   A62  2       59       0
[ "Define for later use the following helper function:\nimport pandas as pd\n\ndef helper(df):\n return df.pipe(\n lambda df_: df_.rename(columns={\"col1\": df_[\"col0\"].unique()[0]})\n .drop(columns=\"col0\")\n .reset_index(drop=True)\n )\n\nThen, with Pandas melt, concat and merge methods:\n# Setup\nn = dfc.shape[0]\n\n# Melt dataframe and cleanup\nmelted_dfc = dfc.melt()\nmelted_dfc.columns = [\"id\", \"col0\", \"col1\"]\nmelted_dfc[\"id\"] = melted_dfc[\"id\"].replace(r\"[.]\\d+\", \"\", regex=True)\n\n# Get intermediate dataframes\nmonth_df = helper(melted_dfc.loc[: n - 1, :]).drop(columns=\"id\")\nsub_dfs = [\n pd.concat([month_df, helper(df)], axis=1)\n for df in [\n melted_dfc.loc[i : i + n - 1, :] for i in range(n, melted_dfc.shape[0], n)\n ]\n]\n\n# Merge intermediate dataframes\nfinal_df = sub_dfs[0]\nfor sub_df in sub_dfs[1:]:\n final_df = pd.merge(\n left=final_df, right=sub_df, how=\"outer\", on=[\"month\", \"id\"]\n ).fillna(0)\n\n# Cleanup temporary columns created during merge\ncolumns_to_merge = set(\n col[:-2] for col in final_df.columns if col.endswith((\"_x\", \"_y\"))\n)\nfor col in columns_to_merge:\n final_df[col] = final_df[f\"{col}_x\"].astype(int) + final_df[f\"{col}_y\"].astype(int)\n final_df = final_df.drop(columns=[f\"{col}_x\", f\"{col}_y\"])\n\nFinally:\nprint(final_df)\n# Output\n month id direct storage use\n0 12/31/2021 A48 0 29 1\n1 1/31/2022 A48 0 35 0\n2 2/28/2022 A48 0 33 0\n3 3/31/2022 A48 0 30 0\n4 12/31/2021 A62 0 57 1\n5 1/31/2022 A62 2 69 0\n6 2/28/2022 A62 3 65 0\n7 3/31/2022 A62 2 59 0\n\n" ]
[ 0 ]
[]
[]
[ "pandas", "pandas_melt" ]
stackoverflow_0074618706_pandas_pandas_melt.txt
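An alternative worth noting for the record above: because the data already carries a two-level column index, pandas' stack can produce the tidy shape more directly. A sketch, assuming dfc is built exactly as in the question (the column-name handling is the fragile part, and stack's NaN handling differs slightly across pandas versions):

import pandas as pd

# fold the numbered duplicates back together: A48.1 -> A48, A62.2 -> A62, ...
cols0 = dfc.columns.get_level_values(0).str.replace(r"\.\d+$", "", regex=True)
tidy = dfc.copy()
tidy.columns = pd.MultiIndex.from_arrays(
    [cols0, dfc.columns.get_level_values(1)], names=["id", None]
)

# month becomes the row index, then the id level pivots into rows
tidy = tidy.set_index(tidy[("id", "month")].rename("month"))
tidy = tidy.drop(columns="id", level="id")
out = (tidy.stack(level="id")
           .rename_axis(["month", "id"])
           .reset_index()
           .sort_values(["id", "month"]))

Unlike the fillna(0) merge above, this keeps direct as NaN for A48, which matches the layout the question asked for.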
SQL INSERT INTO from table 1 and 2 using Inner join
This is table Students:

StudentID  Firstname  Lastname
-------------------------------------------
1          John       Doe
2          Jane       Doenot

This is table Subjects:

SubjectID  Subject  Description
--------------------------------------------------------
1          EVEDRI   Event Driven Programming
2          DATSYS   Database Systems

I also created an empty table StudSubs with columns StudentID (FK to Students) and SubjectID (FK to Subjects).
My question is: I want to insert data from the Students and Subjects tables into StudSubs, so that the StudSubs table would look like this:

StudentID  Firstname  Lastname  Subject
-------------------------------------------------------
1          John       Doe       EVEDRI
1          John       Doe       DATSYS
2          Jane       Doenot    EVEDRI
2          Jane       Doenot    DATSYS

What is the query code for my stored procedure to insert this data into StudSubs?
[ "As I understand your question after your changes and your comment above, you don't need any JOIN at all.\nYou just want to select the data from both tables:\nSELECT \nst.studentID,\nst.FirstName,\nst.LastName,\nsj.Subject\nFROM students st, subjects sj\nORDER BY st.studentID;\n\nThis will produce following result for your sample data:\nStudentID Firstname Lastname Subject\n1 John Doe EVEDRI\n1 John Doe DATSYS\n2 Jane Doenot EVEDRI\n2 Jane Doenot DATSYS\n\nSo your insert command would be this:\nINSERT INTO StudSubs\nSELECT \nst.studentID,\nst.FirstName,\nst.LastName,\nsj.Subject\nFROM students st, subjects sj;\n\nTry out: db<>fiddle\n" ]
[ -1 ]
[]
[]
[ "inner_join", "select", "sql", "sql_insert", "stored_procedures" ]
stackoverflow_0074673862_inner_join_select_sql_sql_insert_stored_procedures.txt
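One caveat on the record above: the question describes StudSubs as having only the two key columns, while the desired output shows four. If the two-column junction-table schema is the real one (an assumption), the insert would store just the IDs and the four-column view would be produced on read — a sketch:

-- populate the junction table with every student/subject pair
INSERT INTO StudSubs (StudentID, SubjectID)
SELECT st.StudentID, sj.SubjectID
FROM Students st
CROSS JOIN Subjects sj;

-- read it back with names and subject codes
SELECT ss.StudentID, st.Firstname, st.Lastname, sj.Subject
FROM StudSubs ss
JOIN Students st ON st.StudentID = ss.StudentID
JOIN Subjects sj ON sj.SubjectID = ss.SubjectID
ORDER BY ss.StudentID, sj.Subject;

Keeping only the foreign keys in StudSubs avoids duplicating the name columns and preserves the FK constraints the question set up.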
How to replace method calls in VS Code?
We are replacing Redux with useContext, so the dispatch() call needs to be replaced with auth. How? We have this:

dispatch(
  updateClientAndServer({
    keyPath: [organizationShortId, 'posts', postId, 'text'],
    value: v,
    operation: 'setValue',
  })
)

and we would like to have this:

auth.updateClientAndServer({
  keyPath: [organizationShortId, 'posts', postId, 'text'],
  value: v,
  operation: 'setValue',
})

Is it possible to make a find and replace like this? The challenge is the closing bracket. :) I hope I do not have to replace them manually. Of course the content, the object, can be arbitrary.
[]
[]
[ "Yes, You can simply replace the entire code with find and replace;\ncopy and paste the entire dispatch block in find and the auth block in replace\n" ]
[ -1 ]
[ "regexp_replace", "visual_studio_code" ]
stackoverflow_0074673865_regexp_replace_visual_studio_code.txt
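Since the question says the object content can be arbitrary, a regex handles the closing brackets that a literal find-and-replace cannot. A sketch for VS Code's find-and-replace with the regex toggle enabled — it assumes the argument object never itself contains a ')' immediately followed (after whitespace) by another ')', and multi-line matching needs a reasonably recent VS Code:

Find:    dispatch\(\s*updateClientAndServer\(([\s\S]*?)\)\s*\)
Replace: auth.updateClientAndServer($1)

The lazy ([\s\S]*?) group captures the whole object literal across lines, stopping at the pair of closing parentheses, and $1 re-emits it inside the new call.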
Where to put my SQLAlchemy join queries in the MVC pattern?
I hope you are doing well. I have two models, UserModel and PostModel, in separate files inside a Model folder. The SQLAlchemy queries are within those models and my controllers call those methods:
UserModel

class UserModel(db.Model):
    __tablename__ = 'user'

    id = db.Column(db.String(36), primary_key = True)
    username = db.Column(db.String(50), unique = True)
    password = db.Column(db.String(100))

    def __init__(self, id: str = None, username: str = None, password: str = None) -> None:
        self.id = id
        self.username = username
        self.password = password

    def serialize(self) -> Dict[str, str]:
        return {
            'id': self.id,
            'username': self.username,
            'password': self.password
        }

    @classmethod
    def get_users(cls):
        return cls.query.all()

    @classmethod
    def get_user(cls, id):
        return cls.query.filter_by(id = id).one()

PostModel

class PostModel(db.Model):
    __tablename__ = 'post'

    id = db.Column(db.String(36), primary_key=True)
    uid = db.Column(db.String(36), db.ForeignKey('user.id'), nullable=False)
    title = db.Column(db.String())
    body = db.Column(db.String())

    def __init__(self, id: str = None, uid: str = None, title: str = None, body: str = None) -> None:
        self.id = id
        self.uid = uid
        self.title = title
        self.body = body

    def serialize(self) -> Dict[str, str]:
        return {
            'id': self.id,
            'uid': self.uid,
            'title': self.title,
            'body': self.body
        }

    @classmethod
    def get_posts(cls):
        return cls.query.all()

    @classmethod
    def get_post_by_id(cls, id: str):
        return cls.query.filter_by(id = id).one()

    @classmethod
    def get_post_by_uid(cls, uid: str):
        return cls.query.filter_by(uid = uid).all()

Suppose I want to join post with user by uid; if this were in my PostModel, I would likely do something like:

cls.query.join(UserModel, UserModel.id == PostModel.uid).all()

But to achieve that I would need to import my UserModel into my PostModel, which is bad because they should be independent from each other. I thought of having a separate file named "queries.py" or something like that and putting all my queries there, and then have the controllers import from this file, but I'm not sure if that's good or bad practice. If it's good, should it be inside my Models folder? Is there another workaround? Thanks
[ "Yeah thats totally fine to have all models in seperate file.\nSee some boiler plate examples below\nhttps://github.com/realpython/flask-boilerplate\nhttps://github.com/wilson-boca/flask-boilerplate-blueprint/tree/master/flaskapp\nhttps://github.com/abstractkitchen/flask-backbone\n" ]
[ 0 ]
[]
[]
[ "flask_sqlalchemy", "model_view_controller", "python_3.x" ]
stackoverflow_0074673800_flask_sqlalchemy_model_view_controller_python_3.x.txt
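A sketch of the queries.py idea from the question above — the module path and names are assumptions; the point is that the cross-model join lives in one place that imports both models, so neither model file has to import the other:

# queries.py - hypothetical module imported by the controllers
from Model.user_model import UserModel   # assumed file layout
from Model.post_model import PostModel   # assumed file layout


def get_posts_with_users():
    """Join posts to their authors via the uid foreign key."""
    return (
        PostModel.query
        .join(UserModel, UserModel.id == PostModel.uid)
        .all()
    )

Whether that module sits inside the Model folder or next to the controllers is mostly taste; keeping the join out of the model classes is what avoids the circular import.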
Product > Archive error in Xcode related to Signing & Capabilities
I have an Xcode project that I want to deploy to the App Store. When I run Product > Archive, I receive an "Archive failed" error in the Signing & Capabilities section. (Please see the screenshots.) I tried both "Automatically Manage Signing" checked and unchecked. When "Automatically Manage Signing" is unchecked, I receive 2 errors: "Failed to create provisioning profile. There are no devices registered in your account on the developer website..." and "No profiles for 'net.myprojectname' were found. Xcode could not find any iOS development provisioning profiles matching 'net.myprojectname'". When "Automatically Manage Signing" is checked, I receive those same 2 errors even before I run Product > Archive. I do have a provisioning profile that appears to be visible to Xcode when Automatically Manage Signing is unchecked. When I try to just build the project and run it on the simulator, it works fine. Again, all I want is to deploy the project to the App Store. Do I really need a registered iPhone in order to do so? (I don't have one.) Or is there another way to solve it? Thank you in advance for any help. I tried Automatically Manage Signing checked and unchecked. I have created (and recreated) a provisioning profile that matches my project name. I have tried running a regular build beforehand. None of these actions solved the problem.
[ "Be sure you did those steps first:\n\nRegister that net.myprojectname identifier in the Developer section Certificates, Identifiers & Profiles.\nHave an app created using that identifier in the App Store Connect.\n\nIf you did those the automatic signing management should work for you with no issue. Also registering a device is pretty straight forward (if you own one, obviously), just run the app on a device and it gets registered for you.\n", "I found the solution: my provisioning profile was for release, but Xcode (for reasons unknown to me) was trying to use a (non-existent) development provisioning profile.\nOnce I tweaked Xcode to use the release profile, things started to work and I was able to build the archive.\n" ]
[ 0, 0 ]
[]
[]
[ "archive", "macos", "provisioning_profile", "signing", "xcode" ]
stackoverflow_0074612639_archive_macos_provisioning_profile_signing_xcode.txt
Q: Saxon out of memory when processing OpenStreetMap notes file from Planet I am trying to process the OpenStreetMap notes file from the Planet that contains the whole history of notes (more than 3 million notes), and all of them are in a huge XML: https://planet.openstreetmap.org/notes/ The XML is a bit more than a 1 GB size and I can only process it with Saxon HE in big machines with more than 6 GB of RAM; otherwise, I hit the Out of memory exception in Java. The command I am running is this: java -Xmx6000m -cp saxon-he-11.4.jar net.sf.saxon.Transform \ -s:"planet-notes-latest.osn.xml" -xsl:"notes-csv.xslt" -o:"planet-notes.csv" But it requires 6 GB of RAM, which is a lot. How can I configure Saxon to use the memory better from the Command line? Ideally, I need to run on a Raspberry 4. Or what other tool can I use to process this file with a simple structure? The whole code is at: https://github.com/OSMLatam/OSM-Notes-profile The XSD file is: <?xml version="1.0" encoding="UTF-8"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:output method="text" /> <xsl:template match="/"> <xsl:for-each select="osm-notes/note"><xsl:value-of select="@id"/>,<xsl:value-of select="@lat"/>,<xsl:value-of select="@lon"/>,"<xsl:value-of select="@created_at"/>",<xsl:choose><xsl:when test="@closed_at != ''">"<xsl:value-of select="@closed_at"/>","close" </xsl:when><xsl:otherwise>,"open"<xsl:text> </xsl:text></xsl:otherwise></xsl:choose> </xsl:for-each> </xsl:template> </xsl:stylesheet> A: A simple strip-space e.g. <?xml version="1.0" encoding="UTF-8"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:strip-space elements="*"/> <xsl:output method="text" /> <xsl:template match="/"> <xsl:for-each select="osm-notes/note"><xsl:value-of select="@id"/>,<xsl:value-of select="@lat"/>,<xsl:value-of select="@lon"/>,"<xsl:value-of select="@created_at"/>",<xsl:choose><xsl:when test="@closed_at != ''">"<xsl:value-of select="@closed_at"/>","close" </xsl:when><xsl:otherwise>,"open"<xsl:text> </xsl:text></xsl:otherwise></xsl:choose> </xsl:for-each> </xsl:template> </xsl:stylesheet> might help create a tree with less memory, on my machine Saxon HE 11.4 reports "Memory used: 4967Mb" and "Execution time: 19.101996s (19101.996ms)". Now compare that to Saxon EE 11.4 and streaming <?xml version="1.0" encoding="UTF-8"?> <xsl:stylesheet version="3.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:mode streamable="yes"/> <xsl:strip-space elements="*"/> <xsl:output method="text" /> <xsl:template match="/"> <xsl:for-each select="osm-notes/note"><xsl:value-of select="@id"/>,<xsl:value-of select="@lat"/>,<xsl:value-of select="@lon"/>,"<xsl:value-of select="@created_at"/>",<xsl:choose><xsl:when test="@closed_at != ''">"<xsl:value-of select="@closed_at"/>","close" </xsl:when><xsl:otherwise>,"open"<xsl:text> </xsl:text></xsl:otherwise></xsl:choose> </xsl:for-each> </xsl:template> </xsl:stylesheet> and the memory used drops to "Memory used: 196Mb" and with less time "Execution time: 16.3387564s (16338.7564ms)". 
It seems using xsl:iterate and xsl:value-of separator reduces the memory footprint with streaming even more ("Memory used: 111Mb"): <?xml version="1.0" encoding="UTF-8"?> <xsl:stylesheet version="3.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xs="http://www.w3.org/2001/XMLSchema" exclude-result-prefixes="#all"> <xsl:mode streamable="yes"/> <xsl:output method="text"/> <xsl:template match="/"> <xsl:iterate select="osm-notes/note"> <xsl:value-of select="@id, @lat, @lon, '&quot;' || @created_at || '&quot;', if (@closed_at != '') then ('&quot;' || @closed_at || '&quot;', '&quot;close&quot;') else '&quot;open&quot;'" separator=","/> <xsl:text>&#10;</xsl:text> </xsl:iterate> </xsl:template> </xsl:stylesheet> Your second stylesheet converted to XSLT 3 and to use streaming is <?xml version="1.0" encoding="UTF-8"?> <xsl:stylesheet version="3.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:mode streamable="yes"/> <xsl:output method="text"/> <xsl:strip-space elements="*"/> <xsl:template match="/"> <xsl:for-each select="osm-notes/note"> <xsl:variable name="note_id"><xsl:value-of select="@id"/></xsl:variable> <xsl:for-each select="comment"> <xsl:choose> <xsl:when test="@uid != ''"> <xsl:copy-of select="$note_id" />,'<xsl:value-of select="@action" />','<xsl:value-of select="@timestamp"/>',<xsl:value-of select="@uid"/>,'<xsl:value-of select="replace(@user,'''','''''')"/>'<xsl:text> </xsl:text></xsl:when><xsl:otherwise> <xsl:copy-of select="$note_id" />,'<xsl:value-of select="@action" />','<xsl:value-of select="@timestamp"/>',,<xsl:text> </xsl:text></xsl:otherwise> </xsl:choose> </xsl:for-each> </xsl:for-each> </xsl:template> </xsl:stylesheet> and consumes only "Memory used: 218Mb" with Saxon EE that way.
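For the "what other tool" part of the question, a streaming sketch in Python with xml.etree.ElementTree.iterparse — no Saxon EE licence needed, and memory stays low because each note element is cleared after it is written. The attribute names come from the XSLT above; quoting of embedded special characters is not handled, so treat this as a sketch rather than a drop-in replacement:

import xml.etree.ElementTree as ET

with open("planet-notes.csv", "w", encoding="utf-8") as out:
    # iterparse streams the file; we only react to each closing </note> tag
    for event, elem in ET.iterparse("planet-notes-latest.osn.xml", events=("end",)):
        if elem.tag == "note":
            closed = elem.get("closed_at", "")
            status = '"%s","close"' % closed if closed else ',"open"'
            out.write('%s,%s,%s,"%s",%s\n' % (
                elem.get("id"), elem.get("lat"), elem.get("lon"),
                elem.get("created_at"), status))
            elem.clear()  # drop the element's contents so memory does not grow with the file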
Saxon out of memory when processing OpenStreetMap notes file from Planet
I am trying to process the OpenStreetMap notes file from the Planet that contains the whole history of notes (more than 3 million notes), and all of them are in a huge XML: https://planet.openstreetmap.org/notes/ The XML is a bit more than 1 GB in size and I can only process it with Saxon HE on big machines with more than 6 GB of RAM; otherwise, I hit the Out of memory exception in Java. The command I am running is this: java -Xmx6000m -cp saxon-he-11.4.jar net.sf.saxon.Transform \ -s:"planet-notes-latest.osn.xml" -xsl:"notes-csv.xslt" -o:"planet-notes.csv" But it requires 6 GB of RAM, which is a lot. How can I configure Saxon to use the memory better from the command line? Ideally, I need to run on a Raspberry Pi 4. Or what other tool can I use to process this file with a simple structure? The whole code is at: https://github.com/OSMLatam/OSM-Notes-profile The XSLT file is: <?xml version="1.0" encoding="UTF-8"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:output method="text" /> <xsl:template match="/"> <xsl:for-each select="osm-notes/note"><xsl:value-of select="@id"/>,<xsl:value-of select="@lat"/>,<xsl:value-of select="@lon"/>,"<xsl:value-of select="@created_at"/>",<xsl:choose><xsl:when test="@closed_at != ''">"<xsl:value-of select="@closed_at"/>","close" </xsl:when><xsl:otherwise>,"open"<xsl:text> </xsl:text></xsl:otherwise></xsl:choose> </xsl:for-each> </xsl:template> </xsl:stylesheet>
[ "A simple strip-space e.g.\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<xsl:stylesheet version=\"1.0\"\nxmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\">\n<xsl:strip-space elements=\"*\"/>\n<xsl:output method=\"text\" />\n<xsl:template match=\"/\">\n <xsl:for-each select=\"osm-notes/note\"><xsl:value-of select=\"@id\"/>,<xsl:value-of select=\"@lat\"/>,<xsl:value-of select=\"@lon\"/>,\"<xsl:value-of select=\"@created_at\"/>\",<xsl:choose><xsl:when test=\"@closed_at != ''\">\"<xsl:value-of select=\"@closed_at\"/>\",\"close\"\n</xsl:when><xsl:otherwise>,\"open\"<xsl:text>\n</xsl:text></xsl:otherwise></xsl:choose>\n </xsl:for-each>\n</xsl:template>\n</xsl:stylesheet>\n\nmight help create a tree with less memory, on my machine Saxon HE 11.4 reports \"Memory used: 4967Mb\" and \"Execution time: 19.101996s (19101.996ms)\".\nNow compare that to Saxon EE 11.4 and streaming\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<xsl:stylesheet version=\"3.0\"\nxmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\">\n<xsl:mode streamable=\"yes\"/>\n<xsl:strip-space elements=\"*\"/>\n<xsl:output method=\"text\" />\n<xsl:template match=\"/\">\n <xsl:for-each select=\"osm-notes/note\"><xsl:value-of select=\"@id\"/>,<xsl:value-of select=\"@lat\"/>,<xsl:value-of select=\"@lon\"/>,\"<xsl:value-of select=\"@created_at\"/>\",<xsl:choose><xsl:when test=\"@closed_at != ''\">\"<xsl:value-of select=\"@closed_at\"/>\",\"close\"\n</xsl:when><xsl:otherwise>,\"open\"<xsl:text>\n</xsl:text></xsl:otherwise></xsl:choose>\n </xsl:for-each>\n</xsl:template>\n</xsl:stylesheet>\n\nand the memory used drops to \"Memory used: 196Mb\" and with less time \"Execution time: 16.3387564s (16338.7564ms)\".\nIt seems using xsl:iterate and xsl:value-of separator reduces the memory footprint with streaming even more (\"Memory used: 111Mb\"):\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<xsl:stylesheet version=\"3.0\"\n xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\"\n xmlns:xs=\"http://www.w3.org/2001/XMLSchema\"\n exclude-result-prefixes=\"#all\">\n\n<xsl:mode streamable=\"yes\"/>\n\n<xsl:output method=\"text\"/>\n\n<xsl:template match=\"/\">\n <xsl:iterate select=\"osm-notes/note\">\n <xsl:value-of \n select=\"@id, \n @lat, \n @lon, \n '&quot;' || @created_at || '&quot;', \n if (@closed_at != '') \n then ('&quot;' || @closed_at || '&quot;', '&quot;close&quot;') \n else '&quot;open&quot;'\"\n separator=\",\"/>\n <xsl:text>&#10;</xsl:text>\n </xsl:iterate>\n</xsl:template>\n\n</xsl:stylesheet>\n\nYour second stylesheet converted to XSLT 3 and to use streaming is\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<xsl:stylesheet version=\"3.0\"\nxmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\">\n<xsl:mode streamable=\"yes\"/>\n<xsl:output method=\"text\"/>\n<xsl:strip-space elements=\"*\"/>\n<xsl:template match=\"/\">\n <xsl:for-each select=\"osm-notes/note\">\n <xsl:variable name=\"note_id\"><xsl:value-of select=\"@id\"/></xsl:variable>\n <xsl:for-each select=\"comment\">\n<xsl:choose> <xsl:when test=\"@uid != ''\"> <xsl:copy-of select=\"$note_id\" />,'<xsl:value-of select=\"@action\" />','<xsl:value-of select=\"@timestamp\"/>',<xsl:value-of select=\"@uid\"/>,'<xsl:value-of select=\"replace(@user,'''','''''')\"/>'<xsl:text>\n</xsl:text></xsl:when><xsl:otherwise>\n<xsl:copy-of select=\"$note_id\" />,'<xsl:value-of select=\"@action\" />','<xsl:value-of select=\"@timestamp\"/>',,<xsl:text>\n</xsl:text></xsl:otherwise> </xsl:choose>\n </xsl:for-each>\n </xsl:for-each>\n</xsl:template>\n</xsl:stylesheet>\n\nand consumes only \"Memory used: 218Mb\" 
with Saxon EE that way.\n" ]
[ 1 ]
[]
[]
[ "openstreetmap", "saxon", "xml", "xslt", "xslt_2.0" ]
stackoverflow_0074672609_openstreetmap_saxon_xml_xslt_xslt_2.0.txt
Q: How to change the modified time of a wordpress post? We are able to make use of ajax to update our post_meta as we wanted. However, it does not change the modified_time of the post. We depend on the get_modified_time to show users when the post was last updated. (The newer the better) I have searched around, and I don't see anyone using this technique yet. Does anyone have an answer? Thanks! A: I used wpdb::query() to do this: global $wpdb; //eg. time one year ago.. $time = time() - DAY_IN_SECONDS * 365; $mysql_time_format= "Y-m-d H:i:s"; $post_modified = gmdate( $mysql_time_format, $time ); $post_modified_gmt = gmdate( $mysql_time_format, ( $time + get_option( 'gmt_offset' ) * HOUR_IN_SECONDS ) ); $post_id = /*the post id*/; $wpdb->query("UPDATE $wpdb->posts SET post_modified = '{$post_modified}', post_modified_gmt = '{$post_modified_gmt}' WHERE ID = {$post_id}" ); Note: You can't use wp_update_post() if you want to explicity set the modified date(s) on the post, because it calls wp_insert_post(), which determines that the post exists and sets the post_modified and post_modified variables to the current date. A: Very simple in PHP, where 80 is the post number: // update post_modified and post_modified_gmt `datetime` on a post $update = array( 'ID' => 80 ); wp_update_post( $update ); A: If you want to change for set of posts, better use Query Loop to get each post id. A: Assuming that you have previously collected the post ID from your ajax request you can then update post_modified and post_modified_gmt using the regular wp_update_post function: $date = current_time('mysql'); wp_update_post(array( 'ID' => $post_id, 'post_modified' => $date, 'post_modified_gmt' => get_gmt_from_date($date), )); Resources https://developer.wordpress.org/reference/functions/wp_update_post/
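A variation on the raw-SQL approach from the first answer, sketched with $wpdb->update() so the values go through WordPress's own escaping instead of being interpolated into the SQL string. The post ID 123 is a placeholder:

global $wpdb;

$time = time() - DAY_IN_SECONDS * 365; // e.g. one year ago
$format = 'Y-m-d H:i:s';
$post_modified_gmt = gmdate( $format, $time ); // UTC
$post_modified = gmdate( $format, $time + get_option( 'gmt_offset' ) * HOUR_IN_SECONDS ); // site-local time

$wpdb->update(
    $wpdb->posts,
    array( 'post_modified' => $post_modified, 'post_modified_gmt' => $post_modified_gmt ),
    array( 'ID' => 123 ) // placeholder post ID
);
clean_post_cache( 123 ); // so cached reads pick up the new dates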
How to change the modified time of a wordpress post?
We are able to make use of ajax to update our post_meta as we wanted. However, it does not change the modified_time of the post. We depend on the get_modified_time to show users when the post was last updated. (The newer the better) I have searched around, and I don't see anyone using this technique yet. Does anyone have an answer? Thanks!
[ "I used wpdb::query() to do this:\nglobal $wpdb;\n\n//eg. time one year ago..\n$time = time() - DAY_IN_SECONDS * 365;\n\n$mysql_time_format= \"Y-m-d H:i:s\";\n\n$post_modified = gmdate( $mysql_time_format, $time );\n\n$post_modified_gmt = gmdate( $mysql_time_format, ( $time + get_option( 'gmt_offset' ) * HOUR_IN_SECONDS ) );\n\n$post_id = /*the post id*/;\n\n$wpdb->query(\"UPDATE $wpdb->posts SET post_modified = '{$post_modified}', post_modified_gmt = '{$post_modified_gmt}' WHERE ID = {$post_id}\" );\n\nNote: You can't use wp_update_post() if you want to explicity set the modified date(s) on the post, because it calls wp_insert_post(), which determines that the post exists and sets the post_modified and post_modified variables to the current date.\n", "Very simple in PHP, where 80 is the post number:\n// update post_modified and post_modified_gmt `datetime` on a post\n$update = array( 'ID' => 80 );\nwp_update_post( $update );\n\n", "If you want to change for set of posts, better use Query Loop to get each post id.\n", "Assuming that you have previously collected the post ID from your ajax request you can then update post_modified and post_modified_gmt using the regular wp_update_post function:\n$date = current_time('mysql');\n \nwp_update_post(array(\n \n 'ID' => $post_id,\n 'post_modified' => $date,\n 'post_modified_gmt' => get_gmt_from_date($date),\n));\n\nResources\nhttps://developer.wordpress.org/reference/functions/wp_update_post/\n" ]
[ 16, 5, 0, 0 ]
[]
[]
[ "wordpress" ]
stackoverflow_0017412202_wordpress.txt
Q: ASP.NET Core MVC route based localization with areas I have created a project in ASP.NET Core and wanted the language to be detected based on the url: https://localhost:7090/en If controllers and actions are used in the url, everything works as planned. However, when the registration or login page is called, it does not work (default asp.net identity registration page). Works: https://localhost:7090/en/Home/Index Does not work: https://localhost:7090/en/Identity/Account/Register In the startup, I configured the following for MVC routing: builder.Services .AddLocalization() .AddMvc(options => options.EnableEndpointRouting = false); builder.Services.Configure<RequestLocalizationOptions>(options => { var supportedCultures = CultureHelper.GetSupportedCultures(); options.DefaultRequestCulture = new RequestCulturne("en"); options.SupportedCultures = supportedCultures; options.SupportedUICultures = supportedCultures; var provider = new RouteDataRequestCultureProvider { RouteDataStringKey = "culture", UIRouteDataStringKey = "culture", Options = options }; options.RequestCultureProviders = new[] { provider }; }); builder.Services.Configure<RouteOptions>(options => { options.ConstraintMap.Add("culture", typeof(LanguageRouteConstraint)); }); var options = app.Services.GetService<IOptions<RequestLocalizationOptions>>(); app.UseRequestLocalization(options.Value); app.UseMvc(routes => { app.MapRazorPages(); routes.MapRoute( name: "LocalizedDefault", template: "{culture:culture}/{controller=Home}/{action=Index}/{id?}" ); }); The language constraint is then used to set the CurrentCulture and CurrentUICulture. Here is a code snippet for calling the login/register pages: <li class="nav-item"> <a class="nav-link" asp-area="Identity" asp-page="/Account/Register"> @Language.register </a> </li> <li class="nav-item"> <a class="nav-link" asp-area="Identity" asp-page="/Account/Login"> @Language.login </a> </li> I've tried pretty much everything I've found on Google, but nothing seems to work.... I just think knowing that it does not work because of the pages [UPDATE] I was able to get the url to be valid with the following code: builder.Services .AddLocalization(options => options.ResourcesPath = "Resources") .AddMvc(options => options.EnableEndpointRouting = false) .AddRazorPagesOptions(options => { options.Conventions.Add(new LanguageRouteModelConversion()); }); public class LanguageRouteModelConversion : IPageRouteModelConvention { public void Apply(PageRouteModel pageRouteModel) { var selectorModels = new List<SelectorModel>(); foreach (var selector in pageRouteModel.Selectors.ToList()) { var template = selector.AttributeRouteModel?.Template; selectorModels.Add(new SelectorModel() { AttributeRouteModel = new AttributeRouteModel { Template = "/{culture}/" + template } }); } foreach (var model in selectorModels) pageRouteModel.Selectors.Add(model); } } But i still don't know how to call the page properly A: You said you have no Account folder in Identity directory. If so then how you do you expect register,login page router will work? builder.Services.AddDefaultIdentity<IdentityUser>(options => options.SignIn.RequireConfirmedAccount = true) .AddEntityFrameworkStores<YourDbContextName>(); Then Scaffold Register, Login, LogOut page etc. After scaffolded, you will find a separate Account folder in Pages folder. and Your Account folder actually contains login,register page. It should resolve your issue. 
A: builder.Services .AddLocalization(options => options.ResourcesPath = "Resources") .AddMvc(options => options.EnableEndpointRouting = false) .AddRazorPagesOptions(options => { options.Conventions.AddAreaFolderRouteModelConvention("Identity", "/", pageRouteModel => { foreach (var selectorModel in pageRouteModel.Selectors) selectorModel.AttributeRouteModel.Template = "{culture:culture}/" + selectorModel.AttributeRouteModel.Template; }); }); This did the trick for me, i finally figured it out...
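The question refers to a LanguageRouteConstraint without showing it; for completeness, a minimal sketch of what such a constraint could look like — the supported culture list here is an assumption, not the asker's actual code:

using System;
using System.Linq;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Routing;

public class LanguageRouteConstraint : IRouteConstraint
{
    // Assumed set of supported culture codes
    private static readonly string[] Cultures = { "en", "de", "fr" };

    public bool Match(HttpContext httpContext, IRouter route, string routeKey,
        RouteValueDictionary values, RouteDirection routeDirection)
    {
        // Match the route only when the {culture} segment holds a supported code
        return values.TryGetValue(routeKey, out var value)
            && value is string culture
            && Cultures.Contains(culture, StringComparer.OrdinalIgnoreCase);
    }
}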
ASP.NET Core MVC route based localization with areas
I have created a project in ASP.NET Core and wanted the language to be detected based on the url: https://localhost:7090/en If controllers and actions are used in the url, everything works as planned. However, when the registration or login page is called, it does not work (default asp.net identity registration page). Works: https://localhost:7090/en/Home/Index Does not work: https://localhost:7090/en/Identity/Account/Register In the startup, I configured the following for MVC routing: builder.Services .AddLocalization() .AddMvc(options => options.EnableEndpointRouting = false); builder.Services.Configure<RequestLocalizationOptions>(options => { var supportedCultures = CultureHelper.GetSupportedCultures(); options.DefaultRequestCulture = new RequestCulture("en"); options.SupportedCultures = supportedCultures; options.SupportedUICultures = supportedCultures; var provider = new RouteDataRequestCultureProvider { RouteDataStringKey = "culture", UIRouteDataStringKey = "culture", Options = options }; options.RequestCultureProviders = new[] { provider }; }); builder.Services.Configure<RouteOptions>(options => { options.ConstraintMap.Add("culture", typeof(LanguageRouteConstraint)); }); var options = app.Services.GetService<IOptions<RequestLocalizationOptions>>(); app.UseRequestLocalization(options.Value); app.UseMvc(routes => { app.MapRazorPages(); routes.MapRoute( name: "LocalizedDefault", template: "{culture:culture}/{controller=Home}/{action=Index}/{id?}" ); }); The language constraint is then used to set the CurrentCulture and CurrentUICulture. Here is a code snippet for calling the login/register pages: <li class="nav-item"> <a class="nav-link" asp-area="Identity" asp-page="/Account/Register"> @Language.register </a> </li> <li class="nav-item"> <a class="nav-link" asp-area="Identity" asp-page="/Account/Login"> @Language.login </a> </li> I've tried pretty much everything I've found on Google, but nothing seems to work. I suspect it does not work because the Identity pages are Razor Pages rather than MVC controllers. [UPDATE] I was able to get the url to be valid with the following code: builder.Services .AddLocalization(options => options.ResourcesPath = "Resources") .AddMvc(options => options.EnableEndpointRouting = false) .AddRazorPagesOptions(options => { options.Conventions.Add(new LanguageRouteModelConversion()); }); public class LanguageRouteModelConversion : IPageRouteModelConvention { public void Apply(PageRouteModel pageRouteModel) { var selectorModels = new List<SelectorModel>(); foreach (var selector in pageRouteModel.Selectors.ToList()) { var template = selector.AttributeRouteModel?.Template; selectorModels.Add(new SelectorModel() { AttributeRouteModel = new AttributeRouteModel { Template = "/{culture}/" + template } }); } foreach (var model in selectorModels) pageRouteModel.Selectors.Add(model); } } But I still don't know how to call the page properly
[ "You said you have no Account folder in Identity directory. If so then how you do you expect register,login page router will work?\n builder.Services.AddDefaultIdentity<IdentityUser>(options => options.SignIn.RequireConfirmedAccount = true)\n .AddEntityFrameworkStores<YourDbContextName>();\n\nThen Scaffold Register, Login, LogOut page etc. After scaffolded, you will find a separate Account folder in Pages folder. and Your Account folder actually contains login,register page. It should resolve your issue.\n", "builder.Services\n.AddLocalization(options => options.ResourcesPath = \"Resources\")\n.AddMvc(options => options.EnableEndpointRouting = false)\n.AddRazorPagesOptions(options =>\n{\n options.Conventions.AddAreaFolderRouteModelConvention(\"Identity\", \"/\", pageRouteModel =>\n {\n foreach (var selectorModel in pageRouteModel.Selectors)\n selectorModel.AttributeRouteModel.Template = \"{culture:culture}/\" + selectorModel.AttributeRouteModel.Template;\n });\n});\n\nThis did the trick for me, i finally figured it out...\n" ]
[ 0, 0 ]
[]
[]
[ "asp.net_core", "asp.net_core_localization", "asp.net_mvc" ]
stackoverflow_0074659075_asp.net_core_asp.net_core_localization_asp.net_mvc.txt
Q: Keycloak - 401 response (USER_INFO_REQUEST_ERROR) when obtaining userinfo via /realms/{realm}/protocol/openid-connect/userinfo I have a Keycloak deployed locally with the following Docker command: docker run -p 8080:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin quay.io/keycloak/keycloak:20.0.1 start-dev I get a token from Keycloak. Example: eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJMZjRfWHJjWkpTaVJYWlFLS254VS1NdU9FTHA4d3NaaHlLMDQ0UjRIRjdnIn0.eyJleHAiOjE2NzAwODc1MDgsImlhdCI6MTY3MDA4NzIwOCwiYXV0aF90aW1lIjoxNjcwMDg2NDcwLCJqdGkiOiIyYWQxODQ5ZC0xMjI0LTQ4YjYtYWZjYy01ZmFjMWZjODY3ZjQiLCJpc3MiOiJodHRwOi8vbG9jYWxob3N0OjgwODAvcmVhbG1zL2RpYWxvZy1mZWF0IiwiYXVkIjoiYWNjb3VudCIsInN1YiI6IjRkYjdiNjg1LTRkYTAtNGZjMy1iNjI1LTgyZmM1MTdjNjA3NiIsInR5cCI6IkJlYXJlciIsImF6cCI6InNvbWV4NSIsIm5vbmNlIjoiR0tNb1JWRTVDajZSVjJMcFQ1Mjg5eVQ3RUdWeFMzZk4iLCJzZXNzaW9uX3N0YXRlIjoiMTY4Y2JmZGQtMmFmYS00Mjk5LWI4YmUtMmExM2FjMjI2NzJiIiwiYWNyIjoiMCIsInJlYWxtX2FjY2VzcyI6eyJyb2xlcyI6WyJvZmZsaW5lX2FjY2VzcyIsInVtYV9hdXRob3JpemF0aW9uIiwiZGVmYXVsdC1yb2xlcy1kaWFsb2ctZmVhdCJdfSwicmVzb3VyY2VfYWNjZXNzIjp7ImFjY291bnQiOnsicm9sZXMiOlsibWFuYWdlLWFjY291bnQiLCJtYW5hZ2UtYWNjb3VudC1saW5rcyIsInZpZXctcHJvZmlsZSJdfX0sInNjb3BlIjoib3BlbmlkIHByb2ZpbGUgZW1haWwiLCJzaWQiOiIxNjhjYmZkZC0yYWZhLTQyOTktYjhiZS0yYTEzYWMyMjY3MmIiLCJlbWFpbF92ZXJpZmllZCI6dHJ1ZSwibmFtZSI6IkpvaG4gU25vdyIsInByZWZlcnJlZF91c2VybmFtZSI6ImpvaG4uc25vdyIsImdpdmVuX25hbWUiOiJKb2huIiwiZmFtaWx5X25hbWUiOiJTbm93IiwiZW1haWwiOiJqb2huLnNub3dAeDUucnUifQ.j_rFqVxICtj7NR-myEsWhSkwBeCABplFrmlBuRMAhF4N8HzdOOtExdmw_mXdx60snKTaE5GJHPofjllpM353lY8H9NGxaczUgL20GjVmMhwtihGGBLpiw7TXyGQGkfdBXdweCuS0W1avegXrhRYvCYlFGJMoxsdmskYkDt4DjuESlTkMEOndVjv5LBp3rLB6lRopq0Qg3Abp_rv57KvlVeeul24OKoisFohnZ4VfsiDPAuVW1u1xaYmjCRDlBwIcGosdwasL_WNAgvJkaKdVtvu7NU-ghPa1vQkWJkMZrVIZDsCc5LKZqwspw3U2iOcUc5EDC6FumBWdfvWCx8cszw Its payload: { "exp": 1670087508, "iat": 1670087208, "auth_time": 1670086470, "jti": "2ad1849d-1224-48b6-afcc-5fac1fc867f4", "iss": "http://localhost:8080/realms/dialog-feat", "aud": "account", "sub": "4db7b685-4da0-4fc3-b625-82fc517c6076", "typ": "Bearer", "azp": "somex5", "nonce": "GKMoRVE5Cj6RV2LpT5289yT7EGVxS3fN", "session_state": "168cbfdd-2afa-4299-b8be-2a13ac22672b", "acr": "0", "realm_access": { "roles": [ "offline_access", "uma_authorization", "default-roles-dialog-feat" ] }, "resource_access": { "account": { "roles": [ "manage-account", "manage-account-links", "view-profile" ] } }, "scope": "openid profile email", "sid": "168cbfdd-2afa-4299-b8be-2a13ac22672b", "email_verified": true, "name": "John Snow", "preferred_username": "john.snow", "given_name": "John", "family_name": "Snow", "email": "[email protected]" } It seems valid. 
Then I'm making a request to http://127.0.0.1:8080/realms/dialog-feat/protocol/openid-connect/userinfo with the token: curl --location --request GET 'http://127.0.0.1:8080/realms/dialog-feat/protocol/openid-connect/userinfo' --header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJMZjRfWHJjWkpTaVJYWlFLS254VS1NdU9FTHA4d3NaaHlLMDQ0UjRIRjdnIn0.eyJleHAiOjE2NzAwODc1MDgsImlhdCI6MTY3MDA4NzIwOCwiYXV0aF90aW1lIjoxNjcwMDg2NDcwLCJqdGkiOiIyYWQxODQ5ZC0xMjI0LTQ4YjYtYWZjYy01ZmFjMWZjODY3ZjQiLCJpc3MiOiJodHRwOi8vbG9jYWxob3N0OjgwODAvcmVhbG1zL2RpYWxvZy1mZWF0IiwiYXVkIjoiYWNjb3VudCIsInN1YiI6IjRkYjdiNjg1LTRkYTAtNGZjMy1iNjI1LTgyZmM1MTdjNjA3NiIsInR5cCI6IkJlYXJlciIsImF6cCI6InNvbWV4NSIsIm5vbmNlIjoiR0tNb1JWRTVDajZSVjJMcFQ1Mjg5eVQ3RUdWeFMzZk4iLCJzZXNzaW9uX3N0YXRlIjoiMTY4Y2JmZGQtMmFmYS00Mjk5LWI4YmUtMmExM2FjMjI2NzJiIiwiYWNyIjoiMCIsInJlYWxtX2FjY2VzcyI6eyJyb2xlcyI6WyJvZmZsaW5lX2FjY2VzcyIsInVtYV9hdXRob3JpemF0aW9uIiwiZGVmYXVsdC1yb2xlcy1kaWFsb2ctZmVhdCJdfSwicmVzb3VyY2VfYWNjZXNzIjp7ImFjY291bnQiOnsicm9sZXMiOlsibWFuYWdlLWFjY291bnQiLCJtYW5hZ2UtYWNjb3VudC1saW5rcyIsInZpZXctcHJvZmlsZSJdfX0sInNjb3BlIjoib3BlbmlkIHByb2ZpbGUgZW1haWwiLCJzaWQiOiIxNjhjYmZkZC0yYWZhLTQyOTktYjhiZS0yYTEzYWMyMjY3MmIiLCJlbWFpbF92ZXJpZmllZCI6dHJ1ZSwibmFtZSI6IkpvaG4gU25vdyIsInByZWZlcnJlZF91c2VybmFtZSI6ImpvaG4uc25vdyIsImdpdmVuX25hbWUiOiJKb2huIiwiZmFtaWx5X25hbWUiOiJTbm93IiwiZW1haWwiOiJqb2huLnNub3dAeDUucnUifQ.j_rFqVxICtj7NR-myEsWhSkwBeCABplFrmlBuRMAhF4N8HzdOOtExdmw_mXdx60snKTaE5GJHPofjllpM353lY8H9NGxaczUgL20GjVmMhwtihGGBLpiw7TXyGQGkfdBXdweCuS0W1avegXrhRYvCYlFGJMoxsdmskYkDt4DjuESlTkMEOndVjv5LBp3rLB6lRopq0Qg3Abp_rv57KvlVeeul24OKoisFohnZ4VfsiDPAuVW1u1xaYmjCRDlBwIcGosdwasL_WNAgvJkaKdVtvu7NU-ghPa1vQkWJkMZrVIZDsCc5LKZqwspw3U2iOcUc5EDC6FumBWdfvWCx8cszw' But I get a 401 status code returned. For example: type=USER_INFO_REQUEST_ERROR, realmId=(...), clientId=null, userId=null, ipAddress=(...), error=access_denied, auth_method=validate_access_token How to fix this? My Keycloak settings: A: The problem seems to be a mismatch between the issuer of the access token sent to the userinfo endpoint (i.e., "iss": "http://localhost:8080/realms/dialog-feat") and the issuer that the access token validator triggered by the userinfo endpoint is expecting. Instead of: Then I'm making a request to http://127.0.0.1:8080/realms/dialog-feat/protocol/openid-connect/userinfo with the token (...): Use the same hostname in the userinfo endpoint has the one that you have used to acquire the access token, for instance: curl http://localhost:8080/realms/dialog-feat/protocol/openid-connect/userinfo -H "Authorization: Bearer (..<your access token..)" If the problem still persistes then you also facing the issues related with the Keycloak endpoint implementation described in UserInfo endpoint not fully standards compliant. In short in your request for a the access token explicitly add the parameter scope=openid. An example: curl --request POST \ --url "http://localhost:8080/realms/dialog-feat/protocol/openid-connect/token" \ --data client_id=somex5 \ --data username=john.snow \ --data password=...<the password..> \ --data grant_type=password \ --data scope=openid
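A complementary angle to the accepted answer: the issuer baked into the token depends on which hostname Keycloak sees, so another option is to pin it at the server. A sketch of the original docker command with Keycloak's hostname option added (assuming localhost is the canonical name you want in iss; the exact behaviour of the hostname option can vary between Keycloak versions):

docker run -p 8080:8080 \
  -e KEYCLOAK_ADMIN=admin \
  -e KEYCLOAK_ADMIN_PASSWORD=admin \
  -e KC_HOSTNAME=localhost \
  quay.io/keycloak/keycloak:20.0.1 start-dev

With a pinned hostname, the token request and the userinfo request should resolve to the same issuer even if one client uses 127.0.0.1 and another uses localhost.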
Keycloak - 401 response (USER_INFO_REQUEST_ERROR) when obtaining userinfo via /realms/{realm}/protocol/openid-connect/userinfo
I have a Keycloak deployed locally with the following Docker command: docker run -p 8080:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin quay.io/keycloak/keycloak:20.0.1 start-dev I get a token from Keycloak. Example: eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJMZjRfWHJjWkpTaVJYWlFLS254VS1NdU9FTHA4d3NaaHlLMDQ0UjRIRjdnIn0.eyJleHAiOjE2NzAwODc1MDgsImlhdCI6MTY3MDA4NzIwOCwiYXV0aF90aW1lIjoxNjcwMDg2NDcwLCJqdGkiOiIyYWQxODQ5ZC0xMjI0LTQ4YjYtYWZjYy01ZmFjMWZjODY3ZjQiLCJpc3MiOiJodHRwOi8vbG9jYWxob3N0OjgwODAvcmVhbG1zL2RpYWxvZy1mZWF0IiwiYXVkIjoiYWNjb3VudCIsInN1YiI6IjRkYjdiNjg1LTRkYTAtNGZjMy1iNjI1LTgyZmM1MTdjNjA3NiIsInR5cCI6IkJlYXJlciIsImF6cCI6InNvbWV4NSIsIm5vbmNlIjoiR0tNb1JWRTVDajZSVjJMcFQ1Mjg5eVQ3RUdWeFMzZk4iLCJzZXNzaW9uX3N0YXRlIjoiMTY4Y2JmZGQtMmFmYS00Mjk5LWI4YmUtMmExM2FjMjI2NzJiIiwiYWNyIjoiMCIsInJlYWxtX2FjY2VzcyI6eyJyb2xlcyI6WyJvZmZsaW5lX2FjY2VzcyIsInVtYV9hdXRob3JpemF0aW9uIiwiZGVmYXVsdC1yb2xlcy1kaWFsb2ctZmVhdCJdfSwicmVzb3VyY2VfYWNjZXNzIjp7ImFjY291bnQiOnsicm9sZXMiOlsibWFuYWdlLWFjY291bnQiLCJtYW5hZ2UtYWNjb3VudC1saW5rcyIsInZpZXctcHJvZmlsZSJdfX0sInNjb3BlIjoib3BlbmlkIHByb2ZpbGUgZW1haWwiLCJzaWQiOiIxNjhjYmZkZC0yYWZhLTQyOTktYjhiZS0yYTEzYWMyMjY3MmIiLCJlbWFpbF92ZXJpZmllZCI6dHJ1ZSwibmFtZSI6IkpvaG4gU25vdyIsInByZWZlcnJlZF91c2VybmFtZSI6ImpvaG4uc25vdyIsImdpdmVuX25hbWUiOiJKb2huIiwiZmFtaWx5X25hbWUiOiJTbm93IiwiZW1haWwiOiJqb2huLnNub3dAeDUucnUifQ.j_rFqVxICtj7NR-myEsWhSkwBeCABplFrmlBuRMAhF4N8HzdOOtExdmw_mXdx60snKTaE5GJHPofjllpM353lY8H9NGxaczUgL20GjVmMhwtihGGBLpiw7TXyGQGkfdBXdweCuS0W1avegXrhRYvCYlFGJMoxsdmskYkDt4DjuESlTkMEOndVjv5LBp3rLB6lRopq0Qg3Abp_rv57KvlVeeul24OKoisFohnZ4VfsiDPAuVW1u1xaYmjCRDlBwIcGosdwasL_WNAgvJkaKdVtvu7NU-ghPa1vQkWJkMZrVIZDsCc5LKZqwspw3U2iOcUc5EDC6FumBWdfvWCx8cszw Its payload: { "exp": 1670087508, "iat": 1670087208, "auth_time": 1670086470, "jti": "2ad1849d-1224-48b6-afcc-5fac1fc867f4", "iss": "http://localhost:8080/realms/dialog-feat", "aud": "account", "sub": "4db7b685-4da0-4fc3-b625-82fc517c6076", "typ": "Bearer", "azp": "somex5", "nonce": "GKMoRVE5Cj6RV2LpT5289yT7EGVxS3fN", "session_state": "168cbfdd-2afa-4299-b8be-2a13ac22672b", "acr": "0", "realm_access": { "roles": [ "offline_access", "uma_authorization", "default-roles-dialog-feat" ] }, "resource_access": { "account": { "roles": [ "manage-account", "manage-account-links", "view-profile" ] } }, "scope": "openid profile email", "sid": "168cbfdd-2afa-4299-b8be-2a13ac22672b", "email_verified": true, "name": "John Snow", "preferred_username": "john.snow", "given_name": "John", "family_name": "Snow", "email": "[email protected]" } It seems valid. 
Then I'm making a request to http://127.0.0.1:8080/realms/dialog-feat/protocol/openid-connect/userinfo with the token: curl --location --request GET 'http://127.0.0.1:8080/realms/dialog-feat/protocol/openid-connect/userinfo' --header 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJMZjRfWHJjWkpTaVJYWlFLS254VS1NdU9FTHA4d3NaaHlLMDQ0UjRIRjdnIn0.eyJleHAiOjE2NzAwODc1MDgsImlhdCI6MTY3MDA4NzIwOCwiYXV0aF90aW1lIjoxNjcwMDg2NDcwLCJqdGkiOiIyYWQxODQ5ZC0xMjI0LTQ4YjYtYWZjYy01ZmFjMWZjODY3ZjQiLCJpc3MiOiJodHRwOi8vbG9jYWxob3N0OjgwODAvcmVhbG1zL2RpYWxvZy1mZWF0IiwiYXVkIjoiYWNjb3VudCIsInN1YiI6IjRkYjdiNjg1LTRkYTAtNGZjMy1iNjI1LTgyZmM1MTdjNjA3NiIsInR5cCI6IkJlYXJlciIsImF6cCI6InNvbWV4NSIsIm5vbmNlIjoiR0tNb1JWRTVDajZSVjJMcFQ1Mjg5eVQ3RUdWeFMzZk4iLCJzZXNzaW9uX3N0YXRlIjoiMTY4Y2JmZGQtMmFmYS00Mjk5LWI4YmUtMmExM2FjMjI2NzJiIiwiYWNyIjoiMCIsInJlYWxtX2FjY2VzcyI6eyJyb2xlcyI6WyJvZmZsaW5lX2FjY2VzcyIsInVtYV9hdXRob3JpemF0aW9uIiwiZGVmYXVsdC1yb2xlcy1kaWFsb2ctZmVhdCJdfSwicmVzb3VyY2VfYWNjZXNzIjp7ImFjY291bnQiOnsicm9sZXMiOlsibWFuYWdlLWFjY291bnQiLCJtYW5hZ2UtYWNjb3VudC1saW5rcyIsInZpZXctcHJvZmlsZSJdfX0sInNjb3BlIjoib3BlbmlkIHByb2ZpbGUgZW1haWwiLCJzaWQiOiIxNjhjYmZkZC0yYWZhLTQyOTktYjhiZS0yYTEzYWMyMjY3MmIiLCJlbWFpbF92ZXJpZmllZCI6dHJ1ZSwibmFtZSI6IkpvaG4gU25vdyIsInByZWZlcnJlZF91c2VybmFtZSI6ImpvaG4uc25vdyIsImdpdmVuX25hbWUiOiJKb2huIiwiZmFtaWx5X25hbWUiOiJTbm93IiwiZW1haWwiOiJqb2huLnNub3dAeDUucnUifQ.j_rFqVxICtj7NR-myEsWhSkwBeCABplFrmlBuRMAhF4N8HzdOOtExdmw_mXdx60snKTaE5GJHPofjllpM353lY8H9NGxaczUgL20GjVmMhwtihGGBLpiw7TXyGQGkfdBXdweCuS0W1avegXrhRYvCYlFGJMoxsdmskYkDt4DjuESlTkMEOndVjv5LBp3rLB6lRopq0Qg3Abp_rv57KvlVeeul24OKoisFohnZ4VfsiDPAuVW1u1xaYmjCRDlBwIcGosdwasL_WNAgvJkaKdVtvu7NU-ghPa1vQkWJkMZrVIZDsCc5LKZqwspw3U2iOcUc5EDC6FumBWdfvWCx8cszw' But I get a 401 status code returned. For example: type=USER_INFO_REQUEST_ERROR, realmId=(...), clientId=null, userId=null, ipAddress=(...), error=access_denied, auth_method=validate_access_token How to fix this? My Keycloak settings:
[ "The problem seems to be a mismatch between the issuer of the access token sent to the userinfo endpoint (i.e., \"iss\": \"http://localhost:8080/realms/dialog-feat\") and the issuer that the access token validator triggered by the userinfo endpoint is expecting.\nInstead of:\n\nThen I'm making a request to\nhttp://127.0.0.1:8080/realms/dialog-feat/protocol/openid-connect/userinfo\nwith the token (...):\n\nUse the same hostname in the userinfo endpoint has the one that you have used to acquire the access token, for instance:\ncurl http://localhost:8080/realms/dialog-feat/protocol/openid-connect/userinfo -H \"Authorization: Bearer (..<your access token..)\"\n\nIf the problem still persistes then you also facing the issues related with the Keycloak endpoint implementation described in UserInfo endpoint not fully standards compliant.\nIn short in your request for a the access token explicitly add the parameter scope=openid. An example:\ncurl --request POST \\\n --url \"http://localhost:8080/realms/dialog-feat/protocol/openid-connect/token\" \\\n --data client_id=somex5 \\\n --data username=john.snow \\\n --data password=...<the password..> \\\n --data grant_type=password \\\n --data scope=openid\n\n" ]
[ 1 ]
[]
[]
[ "keycloak" ]
stackoverflow_0074668939_keycloak.txt
Q: How to Deploy Laravel project on VPS and How to manage changes? I'm using Windows with XAMPP when developing web applications (with Laravel). Also, I use Git for version control. When I finish a project, I'll have to deploy it on VPS (LAMP). How to do it? For now, two ideas come to my mind: SFTP - For example, I would use MobaXterm's Graphical SFTP browser, I would just copy my project (files)... and then I would import MySQL database (or run migrations). Git/GitHub - on my VPS I would install Git and then I would: create a remote repository on GitHub (should it be private?) git push (from localhost to GitHub) and then, on VPS, I would do git clone (from GitHub to VPS) finally, I just need to import MySQL database (or run migrations). Do you work in this way, or there is a better solution? I suppose that the second way (Git/GitHub) is better than the first (SFTP) because if I have to add some new features or fix bugs - all I will have to do on the server is: git pull (from GitHub). EDIT: Now I see that there are services such as envoyer.io and forge, but they are not free. So, what are the disadvantages of the second way (2. Git/GitHub ) that I described in my question, which is free? A: This is a question that will attract primarily opinion based answers as everyone prefers to work in different ways. The most basic, and free way would be to do this using Git. This will provide you with strong version control and allow you to push all the changes you make in your local development to your repository and then pull them down on to your VPS. You could even set up webhooks to automatically update the version on your VPS every time you push or merge changes to your master branch. Doing it via SFTP can be quite slow due to the process it takes, and you lose out on version control. Which means that if you accidentally broke something then you couldn't easily undo it. With Git you could just roll back to a previous commit. If you are wanting to make private repos and don't wish to pay for them then you could consider using GitLab instead of GitHub. GitLab allow you to either host the repository with them or you can deploy your own GitLab instance on your VPS and host it all of your VPS. There's tonnes of options here though, and the best approach is really just what you deem to be fit for purpose. A: There are lots of tools to help you deploy Laravel or pretty much any application. I've used https://deploybot.com/ and https://envoyer.io/ in the past. You can also use https://forge.laravel.com/ to manage your VPS and deploy your laravel application as well.
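To make the Git workflow from the answers concrete: a minimal deploy script you could keep on the VPS and run after each push — the path, branch, and app details are placeholders:

#!/usr/bin/env bash
set -e
cd /var/www/my-laravel-app    # placeholder path

git pull origin main          # bring in the latest code
composer install --no-dev --optimize-autoloader
php artisan migrate --force   # apply new migrations non-interactively
php artisan config:cache      # rebuild cached config
php artisan route:cache
php artisan view:cache

A GitHub webhook or a cron job can trigger the script so the pull happens without logging in.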
How to Deploy Laravel project on VPS and How to manage changes?
I'm using Windows with XAMPP when developing web applications (with Laravel). Also, I use Git for version control. When I finish a project, I'll have to deploy it on a VPS (LAMP). How do I do it? For now, two ideas come to my mind: SFTP - For example, I would use MobaXterm's graphical SFTP browser, I would just copy my project (files)... and then I would import the MySQL database (or run migrations). Git/GitHub - on my VPS I would install Git and then I would: create a remote repository on GitHub (should it be private?) git push (from localhost to GitHub) and then, on VPS, I would do git clone (from GitHub to VPS) finally, I just need to import the MySQL database (or run migrations). Do you work in this way, or is there a better solution? I suppose that the second way (Git/GitHub) is better than the first (SFTP) because if I have to add some new features or fix bugs - all I will have to do on the server is: git pull (from GitHub). EDIT: Now I see that there are services such as envoyer.io and forge, but they are not free. So, what are the disadvantages of the second way (2. Git/GitHub) that I described in my question, which is free?
[ "This is a question that will attract primarily opinion based answers as everyone prefers to work in different ways.\nThe most basic, and free way would be to do this using Git. This will provide you with strong version control and allow you to push all the changes you make in your local development to your repository and then pull them down on to your VPS.\nYou could even set up webhooks to automatically update the version on your VPS every time you push or merge changes to your master branch. \nDoing it via SFTP can be quite slow due to the process it takes, and you lose out on version control. Which means that if you accidentally broke something then you couldn't easily undo it. With Git you could just roll back to a previous commit.\nIf you are wanting to make private repos and don't wish to pay for them then you could consider using GitLab instead of GitHub. GitLab allow you to either host the repository with them or you can deploy your own GitLab instance on your VPS and host it all of your VPS.\nThere's tonnes of options here though, and the best approach is really just what you deem to be fit for purpose.\n", "There are lots of tools to help you deploy Laravel or pretty much any application.\nI've used https://deploybot.com/ and https://envoyer.io/ in the past.\nYou can also use https://forge.laravel.com/ to manage your VPS and deploy your laravel application as well.\n" ]
[ 3, 1 ]
[ "You can also try deploying your projects with https://appsailer.com\n" ]
[ -2 ]
[ "git", "github", "lamp", "laravel", "php" ]
stackoverflow_0038416834_git_github_lamp_laravel_php.txt
Q: Is there any possibility to speed the nested for loop in pandas dataframe? Is there any possibility to speed the nested for loop in pandas dataframe? I have tried itertuples instead of iterrows. But the expected outcome(speed) was not good enough. How to use list comprehension and vectorization in this code. lst3 = [] for i,j in enumerate(df2.itertuples()): Tagging1=False #if ("Con" in str(j["CMS Classification"])): #print(j) if ("Con" in j._9): for k,l in enumerate(df3.itertuples()): #print(1) if(str(j._8) in str(l._1)): print(2) Tagging1=True lst3.append(str(l._11)) continue elif("Amen" in str(j["CMS Classification"])): for m,n in enumerate(df3.itertuples(index= False)): print(n) if(str(j['Tagged ID']) in str(n['Amendment ID'])): Tagging1=True #print(n['Amend Vetting Year Qtr']) lst3.append(str(n['Amend Vetting Year Qtr'])) continue if(Tagging1==False): lst3.append("") df2['Contract/ Amend Start Qtr']=lst3 A: Yes, it is possible to improve the performance of the nested for loop in your code by using vectorized operations and list comprehension in Pandas. Instead of using for loops to iterate over the rows of the DataFrame, you can use the apply() method and a lambda function to apply a function to each row of the DataFrame, which can be much faster than using a for loop. Here is an example of how you can use the apply() method and a lambda function to vectorize the nested for loop in your code: def get_start_qtr(row): Tagging1 = False if "Con" in row["CMS Classification"]: matches = df3[df3["Tagged ID"].str.contains(row["Tagged ID"])] if matches.shape[0] > 0: Tagging1 = True return matches.iloc[0]["Contract Vetting Year Qtr"] elif "Amen" in row["CMS Classification"]: matches = df3[df3["Amendment ID"].str.contains(row["Tagged ID"])] if matches.shape[0] > 0: Tagging1 = True return matches.iloc[0]["Amend Vetting Year Qtr"] if Tagging1 == False: return "" df2["Contract/ Amend Start Qtr"] = df2.apply(lambda row: get_start_qtr(row), axis=1) In this example, the get_start_qtr() function takes a row of the df2 DataFrame as input and returns the corresponding value from the df3 DataFrame based on the values in the CMS Classification and Tagged ID columns. The apply() method is used to apply the get_start_qtr() function to each row of the df2 DataFrame, and the resulting values
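If the IDs match exactly rather than by substring — an assumption, since the original code tests containment with in — the Python-level loop can be dropped entirely with a left merge. A sketch for the "Amen" branch only, using the column names that appear in the question; the "Con" branch would merge on its own ID column in the same way:

import pandas as pd

# One merge replaces the whole inner loop over df3
merged = df2.merge(
    df3[["Amendment ID", "Amend Vetting Year Qtr"]],
    left_on="Tagged ID", right_on="Amendment ID", how="left")
merged["Contract/ Amend Start Qtr"] = merged["Amend Vetting Year Qtr"].fillna("")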
Is there any possibility to speed the nested for loop in pandas dataframe?
Is there any possibility to speed the nested for loop in pandas dataframe? I have tried itertuples instead of iterrows. But the expected outcome(speed) was not good enough. How to use list comprehension and vectorization in this code. lst3 = [] for i,j in enumerate(df2.itertuples()): Tagging1=False #if ("Con" in str(j["CMS Classification"])): #print(j) if ("Con" in j._9): for k,l in enumerate(df3.itertuples()): #print(1) if(str(j._8) in str(l._1)): print(2) Tagging1=True lst3.append(str(l._11)) continue elif("Amen" in str(j["CMS Classification"])): for m,n in enumerate(df3.itertuples(index= False)): print(n) if(str(j['Tagged ID']) in str(n['Amendment ID'])): Tagging1=True #print(n['Amend Vetting Year Qtr']) lst3.append(str(n['Amend Vetting Year Qtr'])) continue if(Tagging1==False): lst3.append("") df2['Contract/ Amend Start Qtr']=lst3
[ "Yes, it is possible to improve the performance of the nested for loop in your code by using vectorized operations and list comprehension in Pandas. Instead of using for loops to iterate over the rows of the DataFrame, you can use the apply() method and a lambda function to apply a function to each row of the DataFrame, which can be much faster than using a for loop.\nHere is an example of how you can use the apply() method and a lambda function to vectorize the nested for loop in your code:\ndef get_start_qtr(row):\n Tagging1 = False\n if \"Con\" in row[\"CMS Classification\"]:\n matches = df3[df3[\"Tagged ID\"].str.contains(row[\"Tagged ID\"])]\n if matches.shape[0] > 0:\n Tagging1 = True\n return matches.iloc[0][\"Contract Vetting Year Qtr\"]\n elif \"Amen\" in row[\"CMS Classification\"]:\n matches = df3[df3[\"Amendment ID\"].str.contains(row[\"Tagged ID\"])]\n if matches.shape[0] > 0:\n Tagging1 = True\n return matches.iloc[0][\"Amend Vetting Year Qtr\"]\n if Tagging1 == False:\n return \"\"\n\ndf2[\"Contract/ Amend Start Qtr\"] = df2.apply(lambda row: get_start_qtr(row), axis=1)\n\nIn this example, the get_start_qtr() function takes a row of the df2 DataFrame as input and returns the corresponding value from the df3 DataFrame based on the values in the CMS Classification and Tagged ID columns. The apply() method is used to apply the get_start_qtr() function to each row of the df2 DataFrame, and the resulting values\n" ]
[ 0 ]
[]
[]
[ "list", "numpy", "pandas", "python" ]
stackoverflow_0074673201_list_numpy_pandas_python.txt
Q: Python function to get the t-statistic I am looking for a Python function (or to write my own if there is not one) to get the t-statistic in order to use in a confidence interval calculation. I have found tables that give answers for various probabilities / degrees of freedom like this one, but I would like to be able to calculate this for any given probability. For anyone not already familiar with this degrees of freedom is the number of data points (n) in your sample -1 and the numbers for the column headings at the top are probabilities (p) e.g. a 2 tailed significance level of 0.05 is used if you are looking up the t-score to use in the calculation for 95% confidence that if you repeated n tests the result would fall within the mean +/- the confidence interval. I have looked into using various functions within scipy.stats, but none that I can see seem to allow for the simple inputs I described above. Excel has a simple implementation of this e.g. to get the t-score for a sample of 1000, where I need to be 95% confident I would use: =TINV(0.05,999) and get the score ~1.96 Here is the code that I have used to implement confidence intervals so far, as you can see I am using a very crude way of getting the t-score at present (just allowing a few values for perc_conf and warning that it is not accurate for samples < 1000): # -*- coding: utf-8 -*- from __future__ import division import math def mean(lst): # μ = 1/N Σ(xi) return sum(lst) / float(len(lst)) def variance(lst): """ Uses standard variance formula (sum of each (data point - mean) squared) all divided by number of data points """ # σ² = 1/N Σ((xi-μ)²) mu = mean(lst) return 1.0/len(lst) * sum([(i-mu)**2 for i in lst]) def conf_int(lst, perc_conf=95): """ Confidence interval - given a list of values compute the square root of the variance of the list (v) divided by the number of entries (n) multiplied by a constant factor of (c). This means that I can be confident of a result +/- this amount from the mean. The constant factor can be looked up from a table, for 95% confidence on a reasonable size sample (>=500) 1.96 is used. """ if perc_conf == 95: c = 1.96 elif perc_conf == 90: c = 1.64 elif perc_conf == 99: c = 2.58 else: c = 1.96 print 'Only 90, 95 or 99 % are allowed for, using default 95%' n, v = len(lst), variance(lst) if n < 1000: print 'WARNING: constant factor may not be accurate for n < ~1000' return math.sqrt(v/n) * c Here is an example call for the above code: # Example: 1000 coin tosses on a fair coin. What is the range that I can be 95% # confident the result will f all within. # list of 1000 perfectly distributed... perc_conf_req = 95 n, p = 1000, 0.5 # sample_size, probability of heads for each coin l = [0 for i in range(int(n*(1-p)))] + [1 for j in range(int(n*p))] exp_heads = mean(l) * len(l) c_int = conf_int(l, perc_conf_req) print 'I can be '+str(perc_conf_req)+'% confident that the result of '+str(n)+ \ ' coin flips will be within +/- '+str(round(c_int*100,2))+'% of '+\ str(int(exp_heads)) x = round(n*c_int,0) print 'i.e. between '+str(int(exp_heads-x))+' and '+str(int(exp_heads+x))+\ ' heads (assuming a probability of '+str(p)+' for each flip).' The output for this is: I can be 95% confident that the result of 1000 coin flips will be within +/- 3.1% of 500 i.e. between 469 and 531 heads (assuming a probability of 0.5 for each flip). 
I also looked into calculating the t-distribution for a range and then returning the t-score that got the probability closest to that required, but I had issues implementing the formula. Let me know if this is relevant and you want to see the code, but I have assumed not as there is probably an easier way. A: Have you tried scipy? You will need to installl the scipy library...more about installing it here: http://www.scipy.org/install.html Once installed, you can replicate the Excel functionality like such: from scipy import stats #Studnt, n=999, p<0.05, 2-tail #equivalent to Excel TINV(0.05,999) print stats.t.ppf(1-0.025, 999) #Studnt, n=999, p<0.05%, Single tail #equivalent to Excel TINV(2*0.05,999) print stats.t.ppf(1-0.05, 999) You can also read about installing the library here: how to install scipy for python? A: Try the following code: from scipy import stats #Studnt, n=22, 2-tail #stats.t.ppf(1-0.025, df) # df=n-1=22-1=21 print (stats.t.ppf(1-0.025, 21)) A: You can try this code: # for small samples (<50) we use t-statistics # n = 9, degree of freedom = 9-1 = 8 # for 99% confidence interval, alpha = 1% = 0.01 and alpha/2 = 0.005 from scipy import stats ci = 99 n = 9 t = stats.t.ppf(1- ((100-ci)/2/100), n-1) # 99% CI, t8,0.005 print(t) # 3.36 A: scipy.stats.t has another method isf that directly returns the quantile that corresponds to the upper tail probability alpha. This is an implementation of the inverse survival function and returns the exact same value as t.ppf(1-alpha, dof). from scipy import stats alpha, dof = 0.05, 999 stats.t.isf(alpha, dof) # 1.6463803454275356 For two-tailed, halve alpha: stats.t.isf(alpha/2, dof) # 1.962341461133449
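Tying the answers back to the original helper: a sketch of conf_int with the hard-coded lookup table replaced by scipy, so any confidence level and sample size work (Python 3 syntax, unlike the Python 2 code above):

import math
from scipy import stats

def conf_int(lst, perc_conf=95):
    """Half-width of the two-tailed confidence interval for the mean."""
    n = len(lst)
    mu = sum(lst) / n
    var = sum((x - mu) ** 2 for x in lst) / n
    alpha = 1 - perc_conf / 100             # e.g. 0.05 for 95% confidence
    t = stats.t.ppf(1 - alpha / 2, n - 1)   # two-tailed t score, df = n - 1
    return math.sqrt(var / n) * t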
Python function to get the t-statistic
I am looking for a Python function (or to write my own if there is not one) to get the t-statistic in order to use in a confidence interval calculation. I have found tables that give answers for various probabilities / degrees of freedom like this one, but I would like to be able to calculate this for any given probability. For anyone not already familiar with this degrees of freedom is the number of data points (n) in your sample -1 and the numbers for the column headings at the top are probabilities (p) e.g. a 2 tailed significance level of 0.05 is used if you are looking up the t-score to use in the calculation for 95% confidence that if you repeated n tests the result would fall within the mean +/- the confidence interval. I have looked into using various functions within scipy.stats, but none that I can see seem to allow for the simple inputs I described above. Excel has a simple implementation of this e.g. to get the t-score for a sample of 1000, where I need to be 95% confident I would use: =TINV(0.05,999) and get the score ~1.96 Here is the code that I have used to implement confidence intervals so far, as you can see I am using a very crude way of getting the t-score at present (just allowing a few values for perc_conf and warning that it is not accurate for samples < 1000): # -*- coding: utf-8 -*- from __future__ import division import math def mean(lst): # μ = 1/N Σ(xi) return sum(lst) / float(len(lst)) def variance(lst): """ Uses standard variance formula (sum of each (data point - mean) squared) all divided by number of data points """ # σ² = 1/N Σ((xi-μ)²) mu = mean(lst) return 1.0/len(lst) * sum([(i-mu)**2 for i in lst]) def conf_int(lst, perc_conf=95): """ Confidence interval - given a list of values compute the square root of the variance of the list (v) divided by the number of entries (n) multiplied by a constant factor of (c). This means that I can be confident of a result +/- this amount from the mean. The constant factor can be looked up from a table, for 95% confidence on a reasonable size sample (>=500) 1.96 is used. """ if perc_conf == 95: c = 1.96 elif perc_conf == 90: c = 1.64 elif perc_conf == 99: c = 2.58 else: c = 1.96 print 'Only 90, 95 or 99 % are allowed for, using default 95%' n, v = len(lst), variance(lst) if n < 1000: print 'WARNING: constant factor may not be accurate for n < ~1000' return math.sqrt(v/n) * c Here is an example call for the above code: # Example: 1000 coin tosses on a fair coin. What is the range that I can be 95% # confident the result will f all within. # list of 1000 perfectly distributed... perc_conf_req = 95 n, p = 1000, 0.5 # sample_size, probability of heads for each coin l = [0 for i in range(int(n*(1-p)))] + [1 for j in range(int(n*p))] exp_heads = mean(l) * len(l) c_int = conf_int(l, perc_conf_req) print 'I can be '+str(perc_conf_req)+'% confident that the result of '+str(n)+ \ ' coin flips will be within +/- '+str(round(c_int*100,2))+'% of '+\ str(int(exp_heads)) x = round(n*c_int,0) print 'i.e. between '+str(int(exp_heads-x))+' and '+str(int(exp_heads+x))+\ ' heads (assuming a probability of '+str(p)+' for each flip).' The output for this is: I can be 95% confident that the result of 1000 coin flips will be within +/- 3.1% of 500 i.e. between 469 and 531 heads (assuming a probability of 0.5 for each flip). I also looked into calculating the t-distribution for a range and then returning the t-score that got the probability closest to that required, but I had issues implementing the formula. 
Let me know if this is relevant and you want to see the code, but I have assumed not as there is probably an easier way.
[ "Have you tried scipy?\nYou will need to installl the scipy library...more about installing it here: http://www.scipy.org/install.html\nOnce installed, you can replicate the Excel functionality like such:\nfrom scipy import stats\n#Studnt, n=999, p<0.05, 2-tail\n#equivalent to Excel TINV(0.05,999)\nprint stats.t.ppf(1-0.025, 999)\n\n#Studnt, n=999, p<0.05%, Single tail\n#equivalent to Excel TINV(2*0.05,999)\nprint stats.t.ppf(1-0.05, 999)\n\nYou can also read about installing the library here: how to install scipy for python?\n", "Try the following code:\nfrom scipy import stats\n#Studnt, n=22, 2-tail\n#stats.t.ppf(1-0.025, df)\n# df=n-1=22-1=21\nprint (stats.t.ppf(1-0.025, 21))\n\n", "You can try this code:\n# for small samples (<50) we use t-statistics\n# n = 9, degree of freedom = 9-1 = 8\n# for 99% confidence interval, alpha = 1% = 0.01 and alpha/2 = 0.005\nfrom scipy import stats\n\nci = 99\nn = 9\nt = stats.t.ppf(1- ((100-ci)/2/100), n-1) # 99% CI, t8,0.005\nprint(t) # 3.36\n\n", "scipy.stats.t has another method isf that directly returns the quantile that corresponds to the upper tail probability alpha. This is an implementation of the inverse survival function and returns the exact same value as t.ppf(1-alpha, dof).\nfrom scipy import stats\nalpha, dof = 0.05, 999\n\nstats.t.isf(alpha, dof) \n# 1.6463803454275356\n\nFor two-tailed, halve alpha:\nstats.t.isf(alpha/2, dof)\n# 1.962341461133449\n\n" ]
[ 60, 3, 0, 0 ]
[]
[]
[ "confidence_interval", "python", "python_2.7", "statistics" ]
stackoverflow_0019339305_confidence_interval_python_python_2.7_statistics.txt
Q: Homebrew installation is not working using curl on macOS Monterey I am trying to install Homebrew by following the steps they have shared: /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" Note: I am not connected to any VPN. Please guide me to resolve this. I tried the command /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" but got the following error: curl: (28) Failed to connect to raw.githubusercontent.com port 443 after 75306 ms: Operation timed out A: I tried setting up DNS for my network and was able to resolve the issue.
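For readers who hit the same timeout, the DNS fix from the answer can be done from the terminal on macOS — a sketch assuming the active network service is named "Wi-Fi" (check the list first, since yours may differ):

# See the exact service names on this machine
networksetup -listallnetworkservices

# Point the service at public resolvers, then retry the Homebrew install
sudo networksetup -setdnsservers Wi-Fi 8.8.8.8 1.1.1.1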
Homebrew installation is not working using curl on macOS Monterey
I am trying to install Homebrew by following the steps they have shared: /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" Note: I am not connected to any VPN. Please guide me to resolve this. I tried the command /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" but got the following error: curl: (28) Failed to connect to raw.githubusercontent.com port 443 after 75306 ms: Operation timed out
[ "I tried setting up DNS for my network and able to resolve the issue\n" ]
[ 0 ]
[]
[]
[ "homebrew", "macos", "macos_monterey" ]
stackoverflow_0074667239_homebrew_macos_macos_monterey.txt
Q: How to Convert Datetime value from lookup activity output to only a Date value into Set variable activity using AzureDataFactory Can you please help me format the Lookup activity output value from Datetime to Date type and pass it into the Set Variable activity. Step 1: I am using a query in the Lookup activity as SELECT CAST(MAX([DWHModifiedDate]) AS DATE) AS DWHModifiedDate FROM [Schema].[TableName] The output from the Lookup activity is like "DWHModifiedDate": "2022-11-18T00:00:00Z" Step 2: Now I added a Set Variable activity and I want to store only the date from the Lookup activity output; for example, the variable value should be only "2022-11-18". Can you please help how to achieve this. A: You can also just try to extract the required date directly using the @split() function. The following is a sample; the output of the Lookup activity looks as shown below: You can split on T and extract the 0th index to get the required output as well (since the date is already in the format of yyyy-MM-dd). @split(activity('Lookup1').output.firstRow.dt,'T')[0] This would give the required output in yyyy-MM-dd format. Using formatDateTime also gives the desired output.
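The answer mentions formatDateTime without spelling it out; the Set Variable value could be an expression like the following — assuming the Lookup activity is named Lookup1 and returns the DWHModifiedDate field from the question:

@formatDateTime(activity('Lookup1').output.firstRow.DWHModifiedDate, 'yyyy-MM-dd')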
How to Convert Datetime value from lookup activity output to only a Date value into Set variable activity using AzureDataFactory
Can you please help me format the Lookup activity output value from Datetime to Date type and pass it into a Set variable activity? Step 1: I am using a query in the Lookup activity: SELECT CAST(MAX([DWHModifiedDate]) AS DATE) AS DWHModifiedDate FROM [Schema].[TableName] The output from the Lookup activity looks like "DWHModifiedDate": "2022-11-18T00:00:00Z" Step 2: Now I added a Set variable activity and I want to store only the date from the Lookup activity output; for example, the variable value should be just "2022-11-18". Can you please help me achieve this?
[ "You can also just try to extract the required date directly using @split() function. The following is a sample and the output of look up activity looks as shown below:\n\n\nYou can split on T and extract the 0th index to get the required output as well (since the date is already in the format of yyyy-MM-dd).\n\n@split(activity('Lookup1').output.firstRow.dt,'T')[0]\n\n\n\nThis would give the required output like yyyy-MM-dd format.\n\n\n\nUsing formatDateTime also gives desired output.\n\n" ]
[ 0 ]
[ "@formatDateTime(activity('lookupactivity').output.firstRow.DWHModifiedDate,'yyyy-MM-dd')\n" ]
[ -1 ]
[ "azure_data_factory" ]
stackoverflow_0074551756_azure_data_factory.txt
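Outside of Data Factory, the two transformations above (split on 'T' versus parse-and-reformat) can be sanity-checked in a few lines of Python. A sketch assuming the ISO timestamp shape shown in the question:

from datetime import datetime

value = "2022-11-18T00:00:00Z"

# Equivalent of @split(..., 'T')[0]
print(value.split("T")[0])                          # 2022-11-18

# Equivalent of formatDateTime(..., 'yyyy-MM-dd')
parsed = datetime.strptime(value, "%Y-%m-%dT%H:%M:%SZ")
print(parsed.strftime("%Y-%m-%d"))                  # 2022-11-18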
Q: How to Change the Format of a DateTimeField Object when it is Displayed in HTML through Ajax? models.py class Log(models.Model): source = models.CharField(max_length=1000, default='') date = models.DateTimeField(default=datetime.now, blank = True) views.py The objects in the Log model are filtered so that only those with source names that match a specific account name are considered. The values of these valid objects will then be listed and returned using a JsonResponse. def backlog_list(request): account_name = request.POST['account_name'] access_log = Log.objects.filter(source=account_name) return JsonResponse({"access_log":list(access_log.values())}) dashboard.html This Ajax script is the one that brings back the account name to the views.py. If there are no valid objects, the HTML will be empty; however, it will display it like this otherwise. <h3>You scanned the QR code during these times.</h3> <div id="display"> </div> <script> $(document).ready(function(){ setInterval(function(){ $.ajax({ type: 'POST', url : "/backlog_list", data:{ account_name:$('#account_name').val(), csrfmiddlewaretoken:$('input[name=csrfmiddlewaretoken]').val(), }, success: function(response){ console.log(response); $("#display").empty(); for (var key in response.access_log) { var temp="<div class='container darker'><span class='time-left'>"+response.access_log[key].date+"</span></div>"; $("#display").append(temp); } }, error: function(response){ alert('An error occurred') } }); },1000); }) </script> My goal is to have the Date and time displayed like "Jan. 10, 2000, 9:30:20 A.M." I've tried changing the format directly from the models.py by adding "strftime" but the error response is triggered. A: You're trying to format the date in the HTML by appending it to a string. Unfortunately, this won't work because the date value will be treated as a string and not as a date object. To format the date in the desired way, you will need to convert it to a date object in JavaScript and then use a date formatting function to convert it to the desired string format. Here is an example of how you could do this: // Parse the date value from the response into a date object var date = new Date(response.access_log[key].date); // Use the toLocaleDateString() function to format the date as "Jan. 10, 2000" var dateString = date.toLocaleDateString('en-US', { month: 'short', day: 'numeric', year: 'numeric' }); // Use the toLocaleTimeString() function to format the time as "9:30:20 A.M." var timeString = date.toLocaleTimeString('en-US', { hour: 'numeric', minute: 'numeric', second: 'numeric', hour12: true }); // Append the formatted date and time to the HTML var temp="<div class='container darker'><span class='time-left'>" + dateString + ", " + timeString + "</span></div>"; $("#display").append(temp); You can read more about the toLocaleDateString() and toLocaleTimeString() functions in the JavaScript documentation: toLocaleDateString() toLocaleTimeString() A: One way to set the format you need is via Javascript, Tharun posted an example in his answer. Alternatively, you can specify the format you need in views.py: def backlog_list(request): ... dates = [ val.strftime('%b. %d, %Y, %I:%M:%S %p') for val in access_log.values_list("date", flat=True) ] return JsonResponse({"access_log":[{"date": d} for d in dates]}) Format string reference - https://docs.python.org/3/library/datetime.html#strftime-and-strptime-format-codes
How to Change the Format of a DateTimeField Object when it is Displayed in HTML through Ajax?
models.py class Log(models.Model): source = models.CharField(max_length=1000, default='') date = models.DateTimeField(default=datetime.now, blank = True) views.py The objects in the Log model are filtered so that only those with source names that match a specific account name are considered. The values of these valid objects will then be listed and returned using a JsonResponse. def backlog_list(request): account_name = request.POST['account_name'] access_log = Log.objects.filter(source=account_name) return JsonResponse({"access_log":list(access_log.values())}) dashboard.html This Ajax script is the one that brings back the account name to the views.py. If there are no valid objects, the HTML will be empty; however, it will display it like this otherwise. <h3>You scanned the QR code during these times.</h3> <div id="display"> </div> <script> $(document).ready(function(){ setInterval(function(){ $.ajax({ type: 'POST', url : "/backlog_list", data:{ account_name:$('#account_name').val(), csrfmiddlewaretoken:$('input[name=csrfmiddlewaretoken]').val(), }, success: function(response){ console.log(response); $("#display").empty(); for (var key in response.access_log) { var temp="<div class='container darker'><span class='time-left'>"+response.access_log[key].date+"</span></div>"; $("#display").append(temp); } }, error: function(response){ alert('An error occurred') } }); },1000); }) </script> My goal is to have the Date and time displayed like "Jan. 10, 2000, 9:30:20 A.M." I've tried changing the format directly from the models.py by adding "strftime" but the error response is triggered.
[ "You're trying to format the date in the HTML by appending it to a string. Unfortunately, this won't work because the date value will be treated as a string and not as a date object.\nTo format the date in the desired way, you will need to convert it to a date object in JavaScript and then use a date formatting function to convert it to the desired string format.\nHere is an example of how you could do this:\n// Parse the date value from the response into a date object\nvar date = new Date(response.access_log[key].date);\n\n// Use the toLocaleDateString() function to format the date as \"Jan. 10, 2000\"\nvar dateString = date.toLocaleDateString('en-US', {\n month: 'short',\n day: 'numeric',\n year: 'numeric'\n});\n\n// Use the toLocaleTimeString() function to format the time as \"9:30:20 A.M.\"\nvar timeString = date.toLocaleTimeString('en-US', {\n hour: 'numeric',\n minute: 'numeric',\n second: 'numeric',\n hour12: true\n});\n\n// Append the formatted date and time to the HTML\nvar temp=\"<div class='container darker'><span class='time-left'>\" + dateString + \", \" + timeString + \"</span></div>\";\n$(\"#display\").append(temp);\n\n\nYou can read more about the toLocaleDateString() and toLocaleTimeString() functions in the JavaScript documentation:\n\ntoLocaleDateString()\ntoLocaleTimeString()\n\n", "One way to set the format you need is via Javascript, Tharun posted an example in his answer.\nAlternatively, you can specify the format you need in views.py:\ndef backlog_list(request): \n ...\n dates = [\n val.strftime('%b. %d, %Y, %I:%M:%S %p') \n for val in access_log.values_list(\"date\", flat=True)\n ]\n return JsonResponse({\"access_log\":[{\"date\": d} for d in dates]})\n\nFormat string reference - https://docs.python.org/3/library/datetime.html#strftime-and-strptime-format-codes\n" ]
[ 0, 0 ]
[]
[]
[ "ajax", "datetime", "django", "python" ]
stackoverflow_0074673906_ajax_datetime_django_python.txt
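Either answer above can be verified quickly. As a sketch, the server-side strftime format string maps to the asker's target like this (note that %I zero-pads the hour, so "9:30" renders as "09:30"):

from datetime import datetime

d = datetime(2000, 1, 10, 9, 30, 20)
print(d.strftime("%b. %d, %Y, %I:%M:%S %p"))  # Jan. 10, 2000, 09:30:20 AM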
Q: How to call my app api through google cloud scheduler I am able to call public endpoints, but I need to call a private endpoint. I can add an authentication header, but I am trying to find out whether I can automate the process through some other Google Cloud service. A: We recommend that you use the Google-provided client libraries to call this service. When making API requests, use the following information if your application must use your own libraries to call this service. The Cloud Scheduler API uses service account credentials as described in https://cloud.google.com/docs/authentication/production . You only need to grant that service account permission to interact with Cloud Scheduler via IAM if you are running the code to interact with the Cloud Scheduler API on App Engine, Cloud Functions, or Cloud Run. The service account is already built into those platforms. The documentation includes simplified instructions for setting up the Cloud Scheduler client libraries. This document explains how to implement OAuth 2.0 authorization to access Google APIs
How to call my app api through google cloud scheduler
I am able to call public endpoints, but I need to call a private endpoint. I can add an authentication header, but I am trying to find out whether I can automate the process through some other Google Cloud service.
[ "We recommend that you use the Google-provided client libraries to call this service.When making API requests, use the following information if your application must use your own libraries to call this service.\nThe Cloud Scheduler API uses service account credentials as described in https://cloud.google.com/docs/authentication/production\n. You only need to grant that service account permission to interact with Cloud Scheduler via IAM if you are running the code to interact with the Cloud Scheduler API on App Engine, Cloud Functions, or Cloud Run. The service account is already built into those platforms.\nThe documentation includes simplified instructions for setting up the Cloud Scheduler client libraries.\nThis document explains how to implement OAuth 2.0 authorization to access Google APIs\n" ]
[ 2 ]
[]
[]
[ "google_cloud_functions", "google_cloud_platform", "google_cloud_scheduler" ]
stackoverflow_0074657994_google_cloud_functions_google_cloud_platform_google_cloud_scheduler.txt
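For a concrete picture of the service-account approach described above: a Cloud Scheduler HTTP job can carry an OIDC token minted for a service account, so the private endpoint can verify the caller. This is only a hypothetical sketch with the google-cloud-scheduler client; the project, region, URL and service-account email are placeholders, and the exact field names should be confirmed against the library docs.

from google.cloud import scheduler_v1

client = scheduler_v1.CloudSchedulerClient()
parent = client.common_location_path("my-project", "us-central1")  # placeholders

job = scheduler_v1.Job(
    name=f"{parent}/jobs/call-private-endpoint",
    schedule="*/15 * * * *",
    time_zone="Etc/UTC",
    http_target=scheduler_v1.HttpTarget(
        uri="https://my-app.example.com/private",  # placeholder endpoint
        http_method=scheduler_v1.HttpMethod.POST,
        oidc_token=scheduler_v1.OidcToken(
            # This service account needs permission to invoke the target
            service_account_email="scheduler@my-project.iam.gserviceaccount.com",
        ),
    ),
)
client.create_job(parent=parent, job=job)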
Q: Expected an operand but found const const error in JSR223 sampler in JMeter I am running JavaScript code in a JSR223 sampler in JMeter const allCapsAlpha = [..."ABCDEFGHIJKLMNOPQRSTUVWXYZ"]; const allLowerAlpha = [..."abcdefghijklmnopqrstuvwxyz"]; const allNumbers = [..."0123456789"]; const base = [...allCapsAlpha, ...allNumbers, ...allLowerAlpha]; const characters ='ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz'; const generator = (base, len) and I am getting the error below: Problem in JSR223 script JSR223 Sampler, message: Expected an operand but found const const all Caps Alpha = [..."ABCDEFGHIJKLMNOPQRSTUVWXYZ"]; A: The keyword const is not implemented in the Nashorn JavaScript engine, see JDK-8024712 for more details. In general it's recommended to use the Groovy language for scripting, the reasons being: Groovy is way faster compared to other JMeter scripting engines; the Nashorn engine has been removed as of Java 15. In particular, your case can be implemented using the following line: generator = org.apache.commons.lang3.RandomStringUtils.randomAlphanumeric(len)
Expected an operand but found const const error in JSR223 sampler in JMeter
I am running JavaScript code in a JSR223 sampler in JMeter const allCapsAlpha = [..."ABCDEFGHIJKLMNOPQRSTUVWXYZ"]; const allLowerAlpha = [..."abcdefghijklmnopqrstuvwxyz"]; const allNumbers = [..."0123456789"]; const base = [...allCapsAlpha, ...allNumbers, ...allLowerAlpha]; const characters ='ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz'; const generator = (base, len) and I am getting the error below: Problem in JSR223 script JSR223 Sampler, message: Expected an operand but found const const all Caps Alpha = [..."ABCDEFGHIJKLMNOPQRSTUVWXYZ"];
[ "The keyword const is not implemented in Nashorn JavaScript engine, see JDK-8024712 for more details.\nIn general it's recommended to use Groovy language for scripting, the reasons are in:\n\nGroovy is way faster comparing to other JMeter scripting engines\nNashorn engine has been removed as of Java 15\n\nParticular your case can be implemented using the next line:\ngenerator = org.apache.commons.lang3.RandomStringUtils.randomAlphanumeric(len)\n\n\n" ]
[ 0 ]
[]
[]
[ "jmeter" ]
stackoverflow_0074673673_jmeter.txt
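For comparison, the Groovy one-liner above does the same job as this plain-Python sketch of an alphanumeric generator, which may help if the surrounding test tooling is Python rather than JMeter (random.choices is standard library):

import random
import string

def random_alphanumeric(length: int) -> str:
    # Same idea as Apache Commons RandomStringUtils.randomAlphanumeric
    alphabet = string.ascii_letters + string.digits
    return "".join(random.choices(alphabet, k=length))

print(random_alphanumeric(16))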
Q: Calculate time span between two specific statuses on the database for each ID I have a table in the database that contains status updates for each vehicle I have. I want to calculate how many days each vehicle spends between the two specific statuses 'Maintenance' and 'Ready'. My table looks something like this, and I want the result to be like this: only show the number of days a vehicle spends in maintenance before becoming ready on a specific day. The code I wrote looks like this drop table if exists #temps1 select VehicleId, json_value(VehiclesHistoryStatusID.text,'$.en') as VehiclesHistoryStatus, VehiclesHistory.CreationTime, datediff(day, VehiclesHistory.CreationTime , lead(VehiclesHistory.CreationTime ) over (order by VehiclesHistory.CreationTime ) ) as days, lag(json_value(VehiclesHistoryStatusID.text,'$.en')) over (order by VehiclesHistory.CreationTime) as PrevStatus, case when (lag(json_value(VehiclesHistoryStatusID.text,'$.en')) over (order by VehiclesHistory.CreationTime) <> json_value(VehiclesHistoryStatusID.text,'$.en')) THEN datediff(day, VehiclesHistory.CreationTime , (lag(VehiclesHistory.CreationTime ) over (order by VehiclesHistory.CreationTime ))) else 0 end as testing into #temps1 from fleet.VehicleHistory VehiclesHistory left join Fleet.Lookups as VehiclesHistoryStatusID on VehiclesHistoryStatusID.Id = VehiclesHistory.StatusId where (year(VehiclesHistory.CreationTime) > 2021 and (VehiclesHistory.StatusId = 140 Or VehiclesHistory.StatusId = 144) ) group by VehiclesHistory.VehicleId ,VehiclesHistory.CreationTime , VehiclesHistoryStatusID.text order by VehicleId desc drop table if exists #temps2 select * into #temps2 from #temps1 where testing <> 0 select * from #temps2 A: Try this SELECT innerQ.VehichleID,innerQ.CreationDate,innerQ.Status ,SUM(DATEDIFF(DAY,innerQ.PrevMaintenance,innerQ.CreationDate)) AS DayDuration FROM ( SELECT t1.VehichleID,t1.CreationDate,t1.Status, (SELECT top(1) t2.CreationDate FROM dbo.Test t2 WHERE t1.VehichleID=t2.VehichleID AND t2.CreationDate<t1.CreationDate AND t2.Status='Maintenance' ORDER BY t2.CreationDate Desc) AS PrevMaintenance FROM dbo.Test t1 WHERE t1.Status='Ready' ) innerQ WHERE innerQ.PrevMaintenance IS NOT NULL GROUP BY innerQ.VehichleID,innerQ.CreationDate,innerQ.Status In this query we first find the most recent 'Maintenance' date before each 'Ready' date in the innermost query (if one exists). Then we calculate the time span with DATEDIFF and sum all these spans for each vehicle.
Calculate time span between two specific statuses on the database for each ID
I have a table in the database that contains status updates for each vehicle I have. I want to calculate how many days each vehicle spends between the two specific statuses 'Maintenance' and 'Ready'. My table looks something like this, and I want the result to be like this: only show the number of days a vehicle spends in maintenance before becoming ready on a specific day. The code I wrote looks like this drop table if exists #temps1 select VehicleId, json_value(VehiclesHistoryStatusID.text,'$.en') as VehiclesHistoryStatus, VehiclesHistory.CreationTime, datediff(day, VehiclesHistory.CreationTime , lead(VehiclesHistory.CreationTime ) over (order by VehiclesHistory.CreationTime ) ) as days, lag(json_value(VehiclesHistoryStatusID.text,'$.en')) over (order by VehiclesHistory.CreationTime) as PrevStatus, case when (lag(json_value(VehiclesHistoryStatusID.text,'$.en')) over (order by VehiclesHistory.CreationTime) <> json_value(VehiclesHistoryStatusID.text,'$.en')) THEN datediff(day, VehiclesHistory.CreationTime , (lag(VehiclesHistory.CreationTime ) over (order by VehiclesHistory.CreationTime ))) else 0 end as testing into #temps1 from fleet.VehicleHistory VehiclesHistory left join Fleet.Lookups as VehiclesHistoryStatusID on VehiclesHistoryStatusID.Id = VehiclesHistory.StatusId where (year(VehiclesHistory.CreationTime) > 2021 and (VehiclesHistory.StatusId = 140 Or VehiclesHistory.StatusId = 144) ) group by VehiclesHistory.VehicleId ,VehiclesHistory.CreationTime , VehiclesHistoryStatusID.text order by VehicleId desc drop table if exists #temps2 select * into #temps2 from #temps1 where testing <> 0 select * from #temps2
[ "Try this\nSELECT innerQ.VehichleID,innerQ.CreationDate,innerQ.Status\n,SUM(DATEDIFF(DAY,innerQ.PrevMaintenance,innerQ.CreationDate)) AS DayDuration\n\nFROM\n(\nSELECT t1.VehichleID,t1.CreationDate,t1.Status,\n(SELECT top(1) t2.CreationDate FROM dbo.Test t2 \n WHERE t1.VehichleID=t2.VehichleID\n AND t2.CreationDate<t1.CreationDate\n AND t2.Status='Maintenance'\n ORDER BY t2.CreationDate Desc) AS PrevMaintenance\nFROM\ndbo.Test t1 WHERE t1.Status='Ready'\n) innerQ\nWHERE innerQ.PrevMaintenance IS NOT NULL \nGROUP BY innerQ.VehichleID,innerQ.CreationDate,innerQ.Status\n\nIn this query first we are finding the most recent 'maintenance' date before each 'ready' date in the inner most query (if exists). Then calculate the time span with DATEDIFF and sum all this spans for each vehicle.\n" ]
[ 1 ]
[]
[]
[ "sql", "sql_server" ]
stackoverflow_0074674003_sql_sql_server.txt
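The correlated-subquery idea in the answer can be exercised end to end with SQLite's in-memory engine. A sketch (julianday stands in for SQL Server's DATEDIFF, and the sample rows are invented):

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE history (vehicle_id INTEGER, status TEXT, created TEXT);
INSERT INTO history VALUES
  (1, 'Maintenance', '2022-01-01'),
  (1, 'Ready',       '2022-01-05'),
  (1, 'Maintenance', '2022-02-10'),
  (1, 'Ready',       '2022-02-12');
""")

rows = con.execute("""
SELECT r.vehicle_id, r.created,
       CAST(julianday(r.created) - julianday(
            (SELECT MAX(m.created) FROM history m
             WHERE m.vehicle_id = r.vehicle_id
               AND m.status = 'Maintenance'
               AND m.created < r.created)) AS INTEGER) AS days
FROM history r
WHERE r.status = 'Ready'
""").fetchall()

print(rows)  # [(1, '2022-01-05', 4), (1, '2022-02-12', 2)]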
Q: Glowing border animation with CSS doesn't have a fluid transition I'm following a tutorial on YouTube on how to create a glowing border animation with CSS. I tried to implement it myself and was pretty successful; however, I encountered a problem which I'm unable to solve. When I view my animation there is an uneven transition. It looks as if two images are stuck together where the colour transition is cut off. How can I fix this so that my transition looks smooth? I created a JSFiddle to display what I mean: * { margin: 0; padding: 0; } body { height: 100vh; display: flex; align-items: center; justify-content: center; background: #151320; } .box { position: relative; width: 300px; height: 300px; color: #fff; font: 300 2rem 'Montserrat'; text-align: center; text-transform: uppercase; display: flex; align-items: center; } .box::before, .box::after { content: ''; z-index: -1; position: absolute; width: calc(100% + 30px); height: calc(100% + 30px); top: -15px; left: -15px; background: linear-gradient(45deg, #0096FF, #0047AB, #000000, #6082B6, #87CEEB, #00008B, #145DA0, #00008B, #145DA0, #0096FF, #0047AB, #000000, #6082B6, #87CEEB); background-repeat: repeat; border-radius: 5px; background-size: 600%; animation: border 12s linear infinite; } .box::after { filter: blur(25px); } @keyframes border { 0% { background-position: 0% 0%; } 100% { background-position: 250% 250%; } } <link href='https://fonts.googleapis.com/css?family=Montserrat' rel='stylesheet'> <div class="box"> Greetings fellow developer! </div> Note: The animation looks smooth at first, but after about 7 seconds you encounter the "cut off" where the transition doesn't line up. A: Your gradient needs to have a kind of repetition to achieve such an effect. Make its size 200% 200%, then use a repeating gradient where the first color starts at 0% and the last one at 50%. Notice how the list of colors is repeated twice but in the opposite order. body { background: #151320; } .box { position: relative; width: 300px; height: 300px; } .box::before, .box::after { content: ''; z-index: -1; position: absolute; inset: -15px; background: repeating-linear-gradient(45deg, #0096FF 0%, #0047AB, #6082B6, #87CEEB, #00008B, #00008B, #87CEEB, #6082B6,#0047AB,#0096FF 50%); border-radius: 5px; background-size: 200% 200%; animation: border 2s linear infinite; } .box::after { filter: blur(25px); } @keyframes border { 0% { background-position: bottom left; } 100% { background-position: top right; } } <div class="box"> </div>
Glowing border animation with CSS doesn't have a fluid transition
I'm following a tutorial on YouTube on how to create a glowing border animation with CSS. I tried to implement it myself and was pretty successful; however, I encountered a problem which I'm unable to solve. When I view my animation there is an uneven transition. It looks as if two images are stuck together where the colour transition is cut off. How can I fix this so that my transition looks smooth? I created a JSFiddle to display what I mean: * { margin: 0; padding: 0; } body { height: 100vh; display: flex; align-items: center; justify-content: center; background: #151320; } .box { position: relative; width: 300px; height: 300px; color: #fff; font: 300 2rem 'Montserrat'; text-align: center; text-transform: uppercase; display: flex; align-items: center; } .box::before, .box::after { content: ''; z-index: -1; position: absolute; width: calc(100% + 30px); height: calc(100% + 30px); top: -15px; left: -15px; background: linear-gradient(45deg, #0096FF, #0047AB, #000000, #6082B6, #87CEEB, #00008B, #145DA0, #00008B, #145DA0, #0096FF, #0047AB, #000000, #6082B6, #87CEEB); background-repeat: repeat; border-radius: 5px; background-size: 600%; animation: border 12s linear infinite; } .box::after { filter: blur(25px); } @keyframes border { 0% { background-position: 0% 0%; } 100% { background-position: 250% 250%; } } <link href='https://fonts.googleapis.com/css?family=Montserrat' rel='stylesheet'> <div class="box"> Greetings fellow developer! </div> Note: The animation looks smooth at first, but after about 7 seconds you encounter the "cut off" where the transition doesn't line up.
[ "Your gradient need to have a kind of repetition to achieve such effect. Make its size 200% 200% then use a repeating gradient where the first color start at 0% and the last one at 50%. Notice how the list of color is repeated twice but in the opposite order.\n\n\nbody {\n background: #151320;\n}\n\n.box {\n position: relative;\n width: 300px;\n height: 300px;\n}\n\n.box::before,\n.box::after {\n content: '';\n z-index: -1;\n position: absolute;\n inset: -15px;\n background: \n repeating-linear-gradient(45deg,\n #0096FF 0%, #0047AB, #6082B6, #87CEEB, #00008B,\n #00008B, #87CEEB, #6082B6,#0047AB,#0096FF 50%);\n border-radius: 5px;\n background-size: 200% 200%;\n animation: border 2s linear infinite;\n}\n\n.box::after {\n filter: blur(25px);\n}\n\n@keyframes border {\n 0% {\n background-position: bottom left;\n }\n 100% {\n background-position: top right;\n }\n}\n<div class=\"box\">\n</div>\n\n\n\n" ]
[ 0 ]
[]
[]
[ "css", "html" ]
stackoverflow_0074671337_css_html.txt
Q: How can I remove the values on top of the grouped bars with the bar_plot using axes.bar in matplotlib? I want to remove the percentage values on top of each plot, or possibly round them width = 0.2 x = np.arange(len(labels)) fig2,ax = plt.subplots() rects1 = ax.bar(x - width/2, precision_data, width, label='precision',color ='firebrick') rects2 = ax.bar(x + width/2 , recall_data, width, label='recall',color = 'royalblue') ax.set_ylabel('Score %') ax.set_title('precision-recall average classifiers scores') ax.set_xticks(x, labels) ax.legend() ax.bar_label(rects1) ax.bar_label(rects2) A: To remove the text on top of your bars, simply comment out ax.bar_label(rects1) and ax.bar_label(rects2): To round the labels, you may use the fmt argument: ax.bar_label(rects1, fmt='%.2f')
How can I remove the values on top of the grouped bars with the bar_plot using axes.bar in matplotlib?
I want to remove the percentage values on top of each plot, or possibly round them width = 0.2 x = np.arange(len(labels)) fig2,ax = plt.subplots() rects1 = ax.bar(x - width/2, precision_data, width, label='precision',color ='firebrick') rects2 = ax.bar(x + width/2 , recall_data, width, label='recall',color = 'royalblue') ax.set_ylabel('Score %') ax.set_title('precision-recall average classifiers scores') ax.set_xticks(x, labels) ax.legend() ax.bar_label(rects1) ax.bar_label(rects2)
[ "\nTo remove the text on top of your bars, simply comment out ax.bar_label(rects1) and ax.bar_label(rects2):\n\nTo round the labels, you may use the fmt argument: ax.bar_label(labels, fmt='%.2f')\n\n\n" ]
[ 1 ]
[]
[]
[ "matplotlib", "python" ]
stackoverflow_0074674124_matplotlib_python.txt
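Both options, dropping the labels and rounding them, in one runnable sketch (bar_label needs matplotlib 3.4+; the data here is made up):

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
rects = ax.bar(["precision", "recall"], [91.2345, 87.6543])

# Option 1: hide the values entirely by simply never calling ax.bar_label(rects)

# Option 2: keep the values but round them to two decimals
ax.bar_label(rects, fmt="%.2f")

fig.savefig("scores.png")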
Q: How to disable horizontal scrolling on a website for iOS (CSS) I can't seem to disable horizontal scrolling in CSS on iOS. I need the extra width on the side to make room for my navbar, so it is essential for me to disable it. I've tried html, body { overflow: hidden; overflow-x: hidden; max-width: 100%; -ms-overflow-style: none; scrollbar-width: none; } html::-webkit-scrollbar { display: none; } body::-webkit-scrollbar { display: none; } But I can still scroll horizontally on my iPhone. A: Have you tried *{ overflow: hidden; } I believe the overflow attribute is enough.
How to disable horizontal scrolling on a website for iOS (CSS)
I can't seem to disable horizontal scrolling in CSS on iOS. I need the extra width on the side to make room for my navbar, so it is essential for me to disable it. I've tried html, body { overflow: hidden; overflow-x: hidden; max-width: 100%; -ms-overflow-style: none; scrollbar-width: none; } html::-webkit-scrollbar { display: none; } body::-webkit-scrollbar { display: none; } But I can still scroll horizontally on my iPhone.
[ "Have you tried\n*{\n overflow: hidden;\n}\n\nI believe overflow attribute is enough.\n" ]
[ 0 ]
[]
[]
[ "css", "horizontal_scrolling", "html", "javascript", "overflow" ]
stackoverflow_0074674097_css_horizontal_scrolling_html_javascript_overflow.txt
Q: Could I reuse an existing protobuf binary, when marshaling a message including it?(protobuf3) The protobuf definitions are like this: syntax = "proto3" message HugeMessage { // omitted } message Request { string name = 1; HugeMessage payload = 2; } In one situation I receive a HugeMessage from somebody, and I want to pack it with additional fields and then transmit the message to someone else. So I have to Unmarshal the HugeMessage binary into a Go structure, pack it into Request, and Marshal again. Due to the huge size of HugeMessage, the cost of Unmarshal and Marshal is unaffordable, so could I reuse the HugeMessage binary without changing the protobuf definitions? func main() { // receive it from file or network, not important. bins, _ := os.ReadFile("hugeMessage.dump") var message HugeMessage _ = proto.Unmarshal(bins, &message) // slow request := Request{ name: "xxxx", payload: message, } requestBinary, _ := proto.Marshal(&request) // slow // send it. os.WriteFile("request.dump", requestBinary, 0644) } A: The short answer is: no, there is no simple or standard way to achieve this. The most obvious strategy is to do as you currently have - unmarshal the HugeMessage, set it into Request, then marshal again. The golang protobuf API surface doesn't really provide a means to do much beyond that - with good reason. That said, there are ways to achieve what you're looking to do. But these aren't necessarily safe or reliable, so you have to weigh that cost vs the cost of what you have now. One way you can avoid the unmarshal is to take advantage of the way a message is normally serialized; message Request { string name = 1; HugeMessage payload = 2; } .. is equivalent to message Request { string name = 1; bytes payload = 2; } .. where payload contains the result of calling Marshal(...) against some HugeMessage. So, if we have the following definitions: syntax = "proto3"; message HugeMessage { bytes field1 = 1; string field2 = 2; int64 field3 = 3; } message Request { string name = 1; HugeMessage payload = 2; } message RawRequest { string name = 1; bytes payload = 2; } The following code: req1, err := proto.Marshal(&pb.Request{ Name: "name", Payload: &pb.HugeMessage{ Field1: []byte{1, 2, 3}, Field2: "test", Field3: 948414, }, }) if err != nil { panic(err) } huge, err := proto.Marshal(&pb.HugeMessage{ Field1: []byte{1, 2, 3}, Field2: "test", Field3: 948414, }) if err != nil { panic(err) } req2, err := proto.Marshal(&pb.RawRequest{ Name: "name", Payload: huge, }) if err != nil { panic(err) } fmt.Printf("equal? %t\n", bytes.Equal(req1, req2)) outputs equal? true Whether this "quirk" is entirely reliable isn't clear, and there are no guarantees it will continue to work indefinitely. And obviously the RawRequest type has to fully mirror the Request type, which isn't ideal. Another alternative is to construct the message in a more manual fashion, i.e. using the protowire package - again, haphazard, caution advised. A: In short, it can be done via protowire, and it is not really hard if the reused structure isn't complex. I asked this question not long ago, and I finally worked it out, inspired by @nj_ 's post. According to the encoding chapter of protobuf, a protocol buffer message is a series of field-value pairs, and the order of those pairs doesn't matter. An obvious idea comes to mind: just work like the protoc compiler, make up the embedded field by hand and append it to the end of the request. In this situation, we want to reuse the HugeMessage in Request, so the key-value pair of the field would be 2:{${HugeMessageBinary}}. So the code (a little different) could be: func binaryEmbeddingImplementation(messageBytes []byte, name string) (requestBytes []byte, err error) { // 1. create a request with all ready except the payload. and marshal it. request := protodef.Request{ Name: name, } requestBytes, err = proto.Marshal(&request) if err != nil { return nil, err } // 2. manually append the payload to the request, by protowire. requestBytes = protowire.AppendTag(requestBytes, 2, protowire.BytesType) // embedded message is same as a bytes field, in wire view. requestBytes = protowire.AppendBytes(requestBytes, messageBytes) return requestBytes, nil } Tell it the field number, the field type and the bytes; that's all. The common way looks like this. func commonImplementation(messageBytes []byte, name string) (requestBytes []byte, err error) { // receive it from file or network, not important. var message protodef.HugeMessage _ = proto.Unmarshal(messageBytes, &message) // slow request := protodef.Request{ Name: name, Payload: &message, } return proto.Marshal(&request) // slow } Some benchmarks: $ go test -bench=a -benchtime 10s ./pkg/ goos: darwin goarch: arm64 pkg: pbembedding/pkg BenchmarkCommon-8 49 288026442 ns/op BenchmarkEmbedding-8 201 176032133 ns/op PASS ok pbembedding/pkg 80.196s package pkg import ( "github.com/stretchr/testify/assert" "golang.org/x/exp/rand" "google.golang.org/protobuf/proto" "pbembedding/pkg/protodef" "testing" ) var hugeMessageSample = receiveHugeMessageFromSomewhere() func TestEquivalent(t *testing.T) { requestBytes1, _ := commonImplementation(hugeMessageSample, "xxxx") requestBytes2, _ := binaryEmbeddingImplementation(hugeMessageSample, "xxxx") // They are not always equal in bytes. You should compare them in message view instead of binary form // due to: https://developers.google.com/protocol-buffers/docs/encoding#implications // I'm Lazy. assert.NotEmpty(t, requestBytes1) assert.Equal(t, requestBytes1, requestBytes2) var request protodef.Request err := proto.Unmarshal(requestBytes1, &request) assert.NoError(t, err) assert.Equal(t, "xxxx", request.Name) } // actually mock one. func receiveHugeMessageFromSomewhere() []byte { buffer := make([]byte, 1024*1024*1024) _, _ = rand.Read(buffer) message := protodef.HugeMessage{ Data: buffer, } res, _ := proto.Marshal(&message) return res } func BenchmarkCommon(b *testing.B) { b.ResetTimer() for i := 0; i < b.N; i++ { _, err := commonImplementation(hugeMessageSample, "xxxx") if err != nil { panic(err) } } } func BenchmarkEmbedding(b *testing.B) { b.ResetTimer() for i := 0; i < b.N; i++ { _, err := binaryEmbeddingImplementation(hugeMessageSample, "xxxx") if err != nil { panic(err) } } }
Could I reuse an existing protobuf binary, when marshaling a message including it?(protobuf3)
The protobuf definitions are like this: syntax = "proto3" message HugeMessage { // omitted } message Request { string name = 1; HugeMessage payload = 2; } In one situation I receive a HugeMessage from somebody, and I want to pack it with additional fields and then transmit the message to someone else. So I have to Unmarshal the HugeMessage binary into a Go structure, pack it into Request, and Marshal again. Due to the huge size of HugeMessage, the cost of Unmarshal and Marshal is unaffordable, so could I reuse the HugeMessage binary without changing the protobuf definitions? func main() { // receive it from file or network, not important. bins, _ := os.ReadFile("hugeMessage.dump") var message HugeMessage _ = proto.Unmarshal(bins, &message) // slow request := Request{ name: "xxxx", payload: message, } requestBinary, _ := proto.Marshal(&request) // slow // send it. os.WriteFile("request.dump", requestBinary, 0644) }
[ "The short answer is: no, there is no simple or standard way to achieve this.\nThe most obvious strategy is to do as you currently have - unmarshal the HugeMessage, set it into Request, then marshal again. The golang protobuf API surface doesn't really provide a means to do much beyond that - with good reason.\nThat said, there are ways to achieve what you're looking to do. But these aren't necessarily safe or reliable, so you have to weigh that cost vs the cost of what you have now.\nOne way you can avoid the unmarshal is to take advantage of the way a message is normally serialized;\nmessage Request {\n string name = 1;\n HugeMessage payload = 2;\n}\n\n.. is equivalent to\nmessage Request {\n string name = 1;\n bytes payload = 2;\n}\n\n.. where payload contains the result of calling Marshal(...) against some HugeMessage.\nSo, if we have the following definitions:\nsyntax = \"proto3\";\n\nmessage HugeMessage {\n bytes field1 = 1;\n string field2 = 2;\n int64 field3 = 3;\n}\n\nmessage Request {\n string name = 1;\n HugeMessage payload = 2;\n}\n\nmessage RawRequest {\n string name = 1;\n bytes payload = 2;\n}\n\nThe following code:\nreq1, err := proto.Marshal(&pb.Request{\n Name: \"name\",\n Payload: &pb.HugeMessage{\n Field1: []byte{1, 2, 3},\n Field2: \"test\",\n Field3: 948414,\n },\n})\nif err != nil {\n panic(err)\n}\n\nhuge, err := proto.Marshal(&pb.HugeMessage{\n Field1: []byte{1, 2, 3},\n Field2: \"test\",\n Field3: 948414,\n})\nif err != nil {\n panic(err)\n}\n\nreq2, err := proto.Marshal(&pb.RawRequest{\n Name: \"name\",\n Payload: huge,\n})\nif err != nil {\n panic(err)\n}\n\nfmt.Printf(\"equal? %t\\n\", bytes.Equal(req1, req2))\n\noutputs equal? true\nWhether this \"quirk\" is entirely reliable isn't clear, and there is no guarantees it will continue to work indefinitely. And obviously the RawRequest type has to fully mirror the Request type, which isn't ideal.\nAnother alternative is to construct the message in a more manual fashion, i.e. using the protowire package - again, haphazard, caution advised.\n", "Shortly, it could be done via protowire, and not really hard if structure reused isn't complex.\nI asked this question not long ago, and I finally work it out inspired by @nj_ 's post. According to the encoding chapter of protobuf, a protocol buffer message is a series of field-value pairs, and the order of those pairs doesn't matter. An obvious idea comes to me: just works like the protoc compiler, make up the embedded field handly and append it to the end of the request.\nIn this situation, we want to reuse the HugeMessage in Request, so the key-value pair of the field would be 2:{${HugeMessageBinary}}. So the code(a little different) could be:\nfunc binaryEmbeddingImplementation(messageBytes []byte, name string) (requestBytes []byte, err error) {\n // 1. create a request with all ready except the payload. and marshal it.\n request := protodef.Request{\n Name: name,\n }\n requestBytes, err = proto.Marshal(&request)\n if err != nil {\n return nil, err\n }\n // 2. manually append the payload to the request, by protowire.\n requestBytes = protowire.AppendTag(requestBytes, 2, protowire.BytesType) // embedded message is same as a bytes field, in wire view.\n requestBytes = protowire.AppendBytes(requestBytes, messageBytes)\n return requestBytes, nil\n}\n\n\nTell the field number, field type and the bytes, That's all. 
Commom way is like that.\nfunc commonImplementation(messageBytes []byte, name string) (requestBytes []byte, err error) {\n // receive it from file or network, not important.\n var message protodef.HugeMessage\n _ = proto.Unmarshal(messageBytes, &message) // slow\n request := protodef.Request{\n Name: name,\n Payload: &message,\n }\n return proto.Marshal(&request) // slow\n}\n\nSome benchmark.\n$ go test -bench=a -benchtime 10s ./pkg/ \ngoos: darwin\ngoarch: arm64\npkg: pbembedding/pkg\nBenchmarkCommon-8 49 288026442 ns/op\nBenchmarkEmbedding-8 201 176032133 ns/op\nPASS\nok pbembedding/pkg 80.196s\n\n\npackage pkg\n\nimport (\n \"github.com/stretchr/testify/assert\"\n \"golang.org/x/exp/rand\"\n \"google.golang.org/protobuf/proto\"\n \"pbembedding/pkg/protodef\"\n \"testing\"\n)\n\nvar hugeMessageSample = receiveHugeMessageFromSomewhere()\n\nfunc TestEquivalent(t *testing.T) {\n requestBytes1, _ := commonImplementation(hugeMessageSample, \"xxxx\")\n requestBytes2, _ := binaryEmbeddingImplementation(hugeMessageSample, \"xxxx\")\n // They are not always equal int bytes. you should compare them in message view instead of binary from\n // due to: https://developers.google.com/protocol-buffers/docs/encoding#implications\n // I'm Lazy.\n assert.NotEmpty(t, requestBytes1)\n assert.Equal(t, requestBytes1, requestBytes2)\n var request protodef.Request\n err := proto.Unmarshal(requestBytes1, &request)\n assert.NoError(t, err)\n assert.Equal(t, \"xxxx\", request.Name)\n}\n\n// actually mock one.\nfunc receiveHugeMessageFromSomewhere() []byte {\n buffer := make([]byte, 1024*1024*1024)\n _, _ = rand.Read(buffer)\n message := protodef.HugeMessage{\n Data: buffer,\n }\n res, _ := proto.Marshal(&message)\n return res\n}\n\nfunc BenchmarkCommon(b *testing.B) {\n b.ResetTimer()\n for i := 0; i < b.N; i++ {\n _, err := commonImplementation(hugeMessageSample, \"xxxx\")\n if err != nil {\n panic(err)\n }\n }\n}\n\nfunc BenchmarkEmbedding(b *testing.B) {\n b.ResetTimer()\n for i := 0; i < b.N; i++ {\n _, err := binaryEmbeddingImplementation(hugeMessageSample, \"xxxx\")\n if err != nil {\n panic(err)\n }\n }\n}\n\n\n" ]
[ 1, 0 ]
[]
[]
[ "go", "proto3", "protobuf_go", "protocol_buffers" ]
stackoverflow_0074486451_go_proto3_protobuf_go_protocol_buffers.txt
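The protowire trick above leans entirely on protobuf's wire format: a length-delimited field is just varint(tag) + varint(len) + payload, so a serialized submessage can be spliced in byte for byte. A language-neutral sketch of that encoding in plain Python, with no protobuf library needed (the field numbers mirror the Request message above and the payload bytes are placeholders):

def encode_varint(n: int) -> bytes:
    # Little-endian base-128 varint, continuation bit in the high bit
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        out.append(byte | (0x80 if n else 0))
        if not n:
            return bytes(out)

def append_embedded(buf: bytearray, field_number: int, payload: bytes) -> None:
    # Wire type 2 = length-delimited (bytes, strings, embedded messages)
    buf += encode_varint((field_number << 3) | 2)
    buf += encode_varint(len(payload))
    buf += payload

request = bytearray()
append_embedded(request, 1, b"xxxx")          # name = 1 (string)
append_embedded(request, 2, b"<huge bytes>")  # payload = 2, spliced as-is
print(bytes(request))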
Q: How can I configure metro to resolve modules outside of my project directory? For reasons that are out of my control, I need to resolve a module that is outside of my react-native project directory. So, consider the following directory structure: react-native-project/ ├─ App.jsx ├─ babel.config.js external-directory/ ├─ Foo.jsx I would like any import Foo from 'Foo' inside of react-native-project to resolve ../external-directory/Foo.jsx. My first attempt at this was to use babel-plugin-module-loader with the following configuration: plugins: [ [ 'module-resolver', { alias: { Foo: '/absolute/path/to/external-directory/Foo', }, }, ], ], This doesn't work, with metro emitting the following error: error: Error: Unable to resolve module /absolute/path/to/external-directory/Foo from /absolute/path/to/react-native-project/App.jsx: None of these files exist: * ../external-directory/Foo(.native|.ios.js|.native.js|.js|.ios.jsx|.native.jsx|.jsx|.ios.json|.native.json|.json|.ios.ts|.native.ts|.ts|.ios.tsx|.native.tsx|.tsx) * ../external-directory/Foo/index(.native|.ios.js|.native.js|.js|.ios.jsx|.native.jsx|.jsx|.ios.json|.native.json|.json|.ios.ts|.native.ts|.ts|.ios.tsx|.native.tsx|.tsx) This error message is wrong: ../external-directory/Foo.jsx does exist. I've verified this numerous times. I've also set up a standalone babel package to test an identical import scenario, and babel correctly resolves the external module. The other approach I took was to add a custom resolveRequest function in my metro.config.js: const defaultResolver = require('metro-resolver').resolve; module.exports = { ... resolver: { resolveRequest: (context, moduleName, platform, realModuleName) => { if (moduleName === 'Foo') { return { filePath: '/absolute/path/to/external-directory/Foo.jsx', type: 'sourceFile', }; } else { return defaultResolver( { ...context, resolveRequest: null, }, moduleName, platform, realModuleName, ); } }, }, }; This also doesn't work, emitting the following error message: error: ReferenceError: SHA-1 for file /absolute/path/to/external-directory/Foo.jsx (/absolute/path/to/external-directory/Foo.jsx) is not computed. Potential causes: 1) You have symlinks in your project - watchman does not follow symlinks. 2) Check `blockList` in your metro.config.js and make sure it isn't excluding the file path. The potential causes do not apply in this scenario: There are no symlinks nor does the blockList contain the external directory (I explicitly configured blockList: null to verify). Is there any way to accomplish what I'm trying to do? Or does metro (either by design or incidentally) prevent this? A: You can use a metro bundler build in option - extraNodeModules and watchFolders. const path = require('path'); module.exports = { resolver: { ..., extraNodeModules: { app: path.resolve(__dirname + '/../app') } }, ..., watchFolders: [ path.resolve(__dirname + '/../app') ] };
How can I configure metro to resolve modules outside of my project directory?
For reasons that are out of my control, I need to resolve a module that is outside of my react-native project directory. So, consider the following directory structure: react-native-project/ ├─ App.jsx ├─ babel.config.js external-directory/ ├─ Foo.jsx I would like any import Foo from 'Foo' inside of react-native-project to resolve ../external-directory/Foo.jsx. My first attempt at this was to use babel-plugin-module-loader with the following configuration: plugins: [ [ 'module-resolver', { alias: { Foo: '/absolute/path/to/external-directory/Foo', }, }, ], ], This doesn't work, with metro emitting the following error: error: Error: Unable to resolve module /absolute/path/to/external-directory/Foo from /absolute/path/to/react-native-project/App.jsx: None of these files exist: * ../external-directory/Foo(.native|.ios.js|.native.js|.js|.ios.jsx|.native.jsx|.jsx|.ios.json|.native.json|.json|.ios.ts|.native.ts|.ts|.ios.tsx|.native.tsx|.tsx) * ../external-directory/Foo/index(.native|.ios.js|.native.js|.js|.ios.jsx|.native.jsx|.jsx|.ios.json|.native.json|.json|.ios.ts|.native.ts|.ts|.ios.tsx|.native.tsx|.tsx) This error message is wrong: ../external-directory/Foo.jsx does exist. I've verified this numerous times. I've also set up a standalone babel package to test an identical import scenario, and babel correctly resolves the external module. The other approach I took was to add a custom resolveRequest function in my metro.config.js: const defaultResolver = require('metro-resolver').resolve; module.exports = { ... resolver: { resolveRequest: (context, moduleName, platform, realModuleName) => { if (moduleName === 'Foo') { return { filePath: '/absolute/path/to/external-directory/Foo.jsx', type: 'sourceFile', }; } else { return defaultResolver( { ...context, resolveRequest: null, }, moduleName, platform, realModuleName, ); } }, }, }; This also doesn't work, emitting the following error message: error: ReferenceError: SHA-1 for file /absolute/path/to/external-directory/Foo.jsx (/absolute/path/to/external-directory/Foo.jsx) is not computed. Potential causes: 1) You have symlinks in your project - watchman does not follow symlinks. 2) Check `blockList` in your metro.config.js and make sure it isn't excluding the file path. The potential causes do not apply in this scenario: There are no symlinks nor does the blockList contain the external directory (I explicitly configured blockList: null to verify). Is there any way to accomplish what I'm trying to do? Or does metro (either by design or incidentally) prevent this?
[ "You can use a metro bundler build in option - extraNodeModules and watchFolders.\nconst path = require('path');\n\nmodule.exports = {\n resolver: {\n ...,\n extraNodeModules: {\n app: path.resolve(__dirname + '/../app')\n }\n },\n ...,\n watchFolders: [\n path.resolve(__dirname + '/../app')\n ]\n};\n\n" ]
[ 0 ]
[]
[]
[ "metro_bundler", "react_native" ]
stackoverflow_0074032259_metro_bundler_react_native.txt
Q: Why am I getting this selenium.common.exceptions.ElementClickInterceptedException: Message: element click intercepted: Element I know answers have already been posted for this same question, but I tried them and they are not working for me, because there have also been some updates to the Selenium code. I am getting this error: selenium.common.exceptions.ElementClickInterceptedException: Message: element click intercepted: Element <div class="up-typeahead-fake" data-test="up-c-typeahead-input-fake">...</div> is not clickable at point (838, 0). Other element would receive the click: <div class="up-modal-header">...</div> when trying to send my search keyword into the input labeled "Skills Search" in the advanced search pop-up form. Here is the URL: https://www.upwork.com/nx/jobs/search/modals/advanced-search?sort=recency&pageTitle=Advanced%20Search&_navType=modal&_modalInfo=%5B%7B%22navType%22%3A%22modal%22,%22title%22%3A%22Advanced%20Search%22,%22modalId%22%3A%221670133126002%22,%22channelName%22%3A%22advanced-search-modal%22%7D%5D Here is my code: from selenium import webdriver from selenium.webdriver.chrome.service import Service from selenium.webdriver.chrome.options import Options from selenium.webdriver.common.keys import Keys from selenium.webdriver.common.by import By from selenium.webdriver.common.proxy import Proxy, ProxyType import time from fake_useragent import UserAgent import pyttsx3 from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC def main(): options = Options() service = Service('F:\\work\\chromedriver_win32\\chromedriver.exe') options.add_argument("start-maximized") options.add_argument('--disable-blink-features=AutomationControlled') #Adding the argument options.add_experimental_option("excludeSwitches",["enable-automation"])#Disable chrome contrlled message (Exclude the collection of enable-automation switches) options.add_experimental_option('useAutomationExtension', False) #Turn-off useAutomationExtension options.add_experimental_option('useAutomationExtension', False) #Turn-off useAutomationExtension prefs = {"credentials_enable_service": False, "profile.password_manager_enabled": False} options.add_experimental_option("prefs", prefs) ua = UserAgent() userAgent = ua.random options.add_argument(f'user-agent={userAgent}') driver = webdriver.Chrome(service=service , options=options) url = 'https://www.upwork.com/nx/jobs/search/?sort=recency' driver.get(url) time.sleep(7) advsearch = driver.find_element(By.XPATH,'//button[contains(@title,"Advanced Search")]') advsearch.click() time.sleep(10) skill = WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH,'//div[contains(@class,"up-typeahead")]'))) skill.click() time.sleep(10) keys = ["Web Scraping","Selenium WebDriver", "Data Scraping", "selenium", "Web Crawling", "Beautiful Soup", "Scrapy", "Data Extraction", "Automation"] for i in range(len(keys)): skill.send_keys(Keys[i],Keys.ENTER) time.sleep (2) main() I try to send keys to the input field but it gives me an ElementClickInterceptedException. I tried the answers from previous Stack Overflow questions related to this error, but they are not working for me, because there have also been some updates to the Selenium code. A: That error indicates that you have to click using JS execution like: import time skill = WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH,'//div[contains(@class,"up-typeahead")]'))) driver.execute_script("arguments[0].click();" ,skill) time.sleep(1) A: Clicking the "Advanced Search" button opens an advanced search modal dialog. So, when this dialog is opened you cannot insert your search input into the regular search field, only into that modal dialog's input. Then you need to click the search button on that dialog to perform the search. The following code is working: import time from selenium import webdriver from selenium.webdriver.chrome.service import Service from selenium.webdriver.chrome.options import Options from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC options = Options() options.add_argument("start-maximized") webdriver_service = Service('C:\webdrivers\chromedriver.exe') driver = webdriver.Chrome(options=options, service=webdriver_service) wait = WebDriverWait(driver, 10) url = "https://www.upwork.com/nx/jobs/search/?sort=recency" driver.get(url) keys = ["Web Scraping","Selenium WebDriver", "Data Scraping", "selenium", "Web Crawling", "Beautiful Soup", "Scrapy", "Data Extraction", "Automation"] wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, 'button#onetrust-accept-btn-handler'))) time.sleep(5) wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, 'button#onetrust-accept-btn-handler'))).click() for i in range(len(keys)): wait.until(EC.element_to_be_clickable((By.XPATH, '//button[contains(@title,"Advanced Search")]'))).click() advanced_search_input = wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR,'[data-test="modal-advanced-search-and_terms"]'))) advanced_search_input.clear() advanced_search_input.send_keys(keys[i]) wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR,'[data-test="modal-advanced-search-search-btn"]'))).click() Also, when using Selenium you should never use JavaScript clicks unless you have no alternative, since Selenium imitates human GUI actions, while JavaScript clicks can click invisible or covered elements, etc. In this case, when the dialog is opened, as a user you cannot click on elements covered by that dialog. So, when performing GUI testing with Selenium (which is what Selenium is for) you should not force clicks on such elements with JavaScript.
Why am I getting this selenium.common.exceptions.ElementClickInterceptedException: Message: element click intercepted: Element
I know answers have already been posted for this same question, but I tried them and they are not working for me, because there have also been some updates to the Selenium code. I am getting this error: selenium.common.exceptions.ElementClickInterceptedException: Message: element click intercepted: Element <div class="up-typeahead-fake" data-test="up-c-typeahead-input-fake">...</div> is not clickable at point (838, 0). Other element would receive the click: <div class="up-modal-header">...</div> when trying to send my search keyword into the input labeled "Skills Search" in the advanced search pop-up form. Here is the URL: https://www.upwork.com/nx/jobs/search/modals/advanced-search?sort=recency&pageTitle=Advanced%20Search&_navType=modal&_modalInfo=%5B%7B%22navType%22%3A%22modal%22,%22title%22%3A%22Advanced%20Search%22,%22modalId%22%3A%221670133126002%22,%22channelName%22%3A%22advanced-search-modal%22%7D%5D Here is my code: from selenium import webdriver from selenium.webdriver.chrome.service import Service from selenium.webdriver.chrome.options import Options from selenium.webdriver.common.keys import Keys from selenium.webdriver.common.by import By from selenium.webdriver.common.proxy import Proxy, ProxyType import time from fake_useragent import UserAgent import pyttsx3 from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC def main(): options = Options() service = Service('F:\\work\\chromedriver_win32\\chromedriver.exe') options.add_argument("start-maximized") options.add_argument('--disable-blink-features=AutomationControlled') #Adding the argument options.add_experimental_option("excludeSwitches",["enable-automation"])#Disable chrome contrlled message (Exclude the collection of enable-automation switches) options.add_experimental_option('useAutomationExtension', False) #Turn-off useAutomationExtension options.add_experimental_option('useAutomationExtension', False) #Turn-off useAutomationExtension prefs = {"credentials_enable_service": False, "profile.password_manager_enabled": False} options.add_experimental_option("prefs", prefs) ua = UserAgent() userAgent = ua.random options.add_argument(f'user-agent={userAgent}') driver = webdriver.Chrome(service=service , options=options) url = 'https://www.upwork.com/nx/jobs/search/?sort=recency' driver.get(url) time.sleep(7) advsearch = driver.find_element(By.XPATH,'//button[contains(@title,"Advanced Search")]') advsearch.click() time.sleep(10) skill = WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH,'//div[contains(@class,"up-typeahead")]'))) skill.click() time.sleep(10) keys = ["Web Scraping","Selenium WebDriver", "Data Scraping", "selenium", "Web Crawling", "Beautiful Soup", "Scrapy", "Data Extraction", "Automation"] for i in range(len(keys)): skill.send_keys(Keys[i],Keys.ENTER) time.sleep (2) main() I try to send keys to the input field but it gives me an ElementClickInterceptedException. I tried the answers from previous Stack Overflow questions related to this error, but they are not working for me, because there have also been some updates to the Selenium code.
[ "That error indicates that you have to click using JS execution like:\n import time\n\n skill = WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH,'//div[contains(@class,\"up-typeahead\")]')))\n driver.execute_script(\"arguments[0].click();\" ,skill)\n time.sleep(1)\n\n", "By clicking on \"Advanced search\" button an advanced search modal dialog is opened. So, when this dialog is opened you can not insert your search inputs into the regular search input, only in that modal dialog input. Then you need to close the button on that dialog to perform the search.\nThe following code is working:\nimport time\n\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium.webdriver.chrome.options import Options\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support import expected_conditions as EC\n\noptions = Options()\noptions.add_argument(\"start-maximized\")\n\nwebdriver_service = Service('C:\\webdrivers\\chromedriver.exe')\ndriver = webdriver.Chrome(options=options, service=webdriver_service)\nwait = WebDriverWait(driver, 10)\n\nurl = \"https://www.upwork.com/nx/jobs/search/?sort=recency\"\ndriver.get(url)\n\nkeys = [\"Web Scraping\",\"Selenium WebDriver\", \"Data Scraping\", \"selenium\", \"Web Crawling\", \"Beautiful Soup\", \"Scrapy\", \"Data Extraction\", \"Automation\"]\nwait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, 'button#onetrust-accept-btn-handler')))\ntime.sleep(5)\nwait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, 'button#onetrust-accept-btn-handler'))).click()\nfor i in range(len(keys)):\n wait.until(EC.element_to_be_clickable((By.XPATH, '//button[contains(@title,\"Advanced Search\")]'))).click()\n advanced_search_input = wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR,'[data-test=\"modal-advanced-search-and_terms\"]')))\n advanced_search_input.clear()\n advanced_search_input.send_keys(keys[i])\n wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR,'[data-test=\"modal-advanced-search-search-btn\"]'))).click()\n\nAlso, when using Selenium you should never use JavaScript clicks until you have no alternatives since Selenium imitates human GUI actions while JavaScript clicks can perform clicks on invisible, covered elements etc.\nIn this case, when the dialog is opened as a user you can not click on elements covered by that dialog. So, when performing GUI testing with Selenium (this is what Selenium for) you should not perform force clicks on such elements with the use of JavaScript.\n" ]
[ 1, 0 ]
[]
[]
[ "automation", "python", "selenium", "selenium_webdriver", "xpath" ]
stackoverflow_0074673772_automation_python_selenium_selenium_webdriver_xpath.txt
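A reusable helper distilled from the two answers: wait for clickability first, and fall back to a JavaScript click only if a normal click is intercepted (a sketch; the locator tuple is whatever your page needs).

from selenium.common.exceptions import ElementClickInterceptedException
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def click_when_ready(driver, locator, timeout=10):
    element = WebDriverWait(driver, timeout).until(
        EC.element_to_be_clickable(locator)
    )
    try:
        element.click()  # prefer the real GUI click
    except ElementClickInterceptedException:
        # Last resort: a JS click ignores overlays, so use it knowingly
        driver.execute_script("arguments[0].click();", element)
    return element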
Q: Generate unique ID in sequential order (Format: xxx-xx) I want to implement this logic in a stored procedure. I want to generate unique IDs in sequential order in SQL Server. Format: xxx-xx For example, 001-01 001-02 001-03 001-04 . . . 002-01 002-02 002-03 . . . 019-10 020-01 020-02 . . . 128-04 128-05 200-01 200-02 Can anyone please help me here? A: Here is my sample: DECLARE @Ids TABLE ( id NVARCHAR(6) ); INSERT @Ids ( id ) VALUES (N'001-01'); DECLARE @first_part INT; DECLARE @second_part INT; SELECT TOP (1) @first_part = LEFT(i.id, 3), @second_part = RIGHT(i.id, 2) FROM @Ids AS i ORDER BY CAST(LEFT(i.id, 3) AS INT) DESC, CAST(RIGHT(i.id, 2) AS INT) DESC; DECLARE @next_firstpart INT = IIF(@second_part = 99, @first_part + 1, @first_part) DECLARE @next_secondpart INT = IIF(@second_part = 99, 1, @second_part + 1) DECLARE @next_id NVARCHAR(6) = CONCAT(REPLICATE('0', 3 - LEN(@next_firstpart)), @next_firstpart, '-', REPLICATE('0', 2 - LEN(@next_secondpart)), @next_secondpart); INSERT @Ids ( id ) VALUES (@next_id) SELECT * FROM @Ids AS i
Generate unique ID in sequential order (Format: xxx-xx)
I want to perform a logic stored procedure. I want to generate a unique ID in sequential order in the SQL Server. Format: xxx-xx For example, 001-01 001-02 001-03 001-04 . . . 002-01 002-02 002-03 . . . 019-10 020-01 020-02 . . . 128-04 128-05 200-01 200-02 Can anyone please help me here?
[ "Here is my sample:\nDECLARE @Ids TABLE (\n id NVARCHAR(6)\n);\nINSERT @Ids (\n id\n)\nVALUES (N'001-01');\n\nDECLARE @first_part INT;\nDECLARE @second_part INT;\nSELECT TOP (1)\n @first_part = LEFT(i.id, 3),\n @second_part = RIGHT(i.id, 2)\nFROM @Ids AS i\nORDER BY CAST(LEFT(i.id, 3) AS INT) DESC,\n CAST(RIGHT(i.id, 2) AS INT) DESC;\n\nDECLARE @next_firstpart INT = IIF(@second_part = 99, @first_part + 1, @second_part)\nDECLARE @next_secondpart INT = IIF(@second_part = 99, 1, @second_part + 1)\n\nDECLARE @next_id NVARCHAR(6) = CONCAT(REPLICATE('0', 3 - LEN(@next_firstpart + @next_firstpart)), @next_firstpart, '-', REPLICATE('0', 2 - LEN(@next_secondpart)), @next_secondpart);\n\nINSERT @Ids (\n id\n)\nVALUES (@next_id)\n\nSELECT * FROM @Ids AS i\n\n" ]
[ -2 ]
[]
[]
[ "sql", "sql_server", "sql_server_2008", "sql_server_2012" ]
stackoverflow_0074674034_sql_sql_server_sql_server_2008_sql_server_2012.txt
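The increment-and-zero-pad rule in the answer is easier to sanity-check outside T-SQL. A small Python sketch of the same logic (this assumes, as the answer does, that the suffix rolls over after 99; the question's own examples leave the rollover point ambiguous):

def next_id(current: str) -> str:
    # split "xxx-xx" into its numeric parts
    first, second = int(current[:3]), int(current[-2:])
    if second == 99:          # suffix exhausted: bump the prefix
        first, second = first + 1, 1
    else:
        second += 1
    return f"{first:03d}-{second:02d}"  # zero-pad both parts

print(next_id("001-01"))  # 001-02
print(next_id("001-99"))  # 002-01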
Q: Problem in Displaying/Hiding Visual based upon a Slicer value, using Measure I have a problem where I have to hide a visual and show another visual based upon Slicer Selection. I followed this tutorial: https://exceleratorbi.com.au/show-or-hide-a-power-bi-visual-based-on-selection/#:~:text=Click%20on%2.... My problem is, I have a slicer for Capacity. If a user Selects "All" Capacity, then it should show a Bar Graph with Capacity on X-Axis. If a user selects any particular capacity, then the bar graph shown should have Operation on X-axis To solve this issue, I created two Bar Graphs. I created a Measure that checks whether capacity is filtered or not, Is Capacity Selected = IF( ISFILTERED('Main Sheet'[Capacity]), 1, 2 ) And added this to both Graph Visual Filters. The problem I am facing is when I select 2, the Bar Graph with Operation at X-Axis disappears (as expected), but the bar graph with Capacity at X-Axis does not show. I also added a card to check the value of Measure and it's also 2, which means that the bar graph with Capacity at X-Axis should show when I select "All" from the Capacity Filter. Even more interestingly, if I change the X-Axis to any attribute other than Capacity, then this bar graph works totally fine. Can anyone help me out in this? How can I show the Visual of Bar Graph containing Capacity at X-Axis, whenever "All" is selected from Capacity Slicer. Here's the Power BI Workbook that you can download and use: https://drive.google.com/file/d/1T8YAYZ8spOLKlA9HLE1coN17mnDyTA1w/view?usp=sharing I also uploaded a small video on Youtube showing the expected behavior of what I am doing and where is it causing the problem, https://youtu.be/1-teUkPKZ8Q As you can see, when using BillingPool (any attribute other than Capacity), I get the expected Results. But as soon as I select Capacity on X-Axis, the same behavior doesn't happen. A: You're misunderstanding how the DAX is being evaluated. It is being evaluated per data point. Convert the chart to a table and place the measure in a column to see what I mean. To achieve your desired behaviour, create a new table as follows named Selection: Change your measure: Is Capacity Selected = IF( SELECTEDVALUE(Selection[Column1]) == "First Capacity" || SELECTEDVALUE(Selection[Column1]) == "Second Capacity" , 1, 2 ) Change the slicer to use the new table you just created and everything should work as desired.
Problem in Displaying/Hiding Visual based upon a Slicer value, using Measure
I have a problem where I have to hide a visual and show another visual based upon Slicer Selection. I followed this tutorial: https://exceleratorbi.com.au/show-or-hide-a-power-bi-visual-based-on-selection/#:~:text=Click%20on%2.... My problem is, I have a slicer for Capacity. If a user Selects "All" Capacity, then it should show a Bar Graph with Capacity on X-Axis. If a user selects any particular capacity, then the bar graph shown should have Operation on X-axis To solve this issue, I created two Bar Graphs. I created a Measure that checks whether capacity is filtered or not, Is Capacity Selected = IF( ISFILTERED('Main Sheet'[Capacity]), 1, 2 ) And added this to both Graph Visual Filters. The problem I am facing is when I select 2, the Bar Graph with Operation at X-Axis disappears (as expected), but the bar graph with Capacity at X-Axis does not show. I also added a card to check the value of Measure and it's also 2, which means that the bar graph with Capacity at X-Axis should show when I select "All" from the Capacity Filter. Even more interestingly, if I change the X-Axis to any attribute other than Capacity, then this bar graph works totally fine. Can anyone help me out in this? How can I show the Visual of Bar Graph containing Capacity at X-Axis, whenever "All" is selected from Capacity Slicer. Here's the Power BI Workbook that you can download and use: https://drive.google.com/file/d/1T8YAYZ8spOLKlA9HLE1coN17mnDyTA1w/view?usp=sharing I also uploaded a small video on Youtube showing the expected behavior of what I am doing and where is it causing the problem, https://youtu.be/1-teUkPKZ8Q As you can see, when using BillingPool (any attribute other than Capacity), I get the expected Results. But as soon as I select Capacity on X-Axis, the same behavior doesn't happen.
[ "You're misunderstanding how the DAX is being evaluated. It is being evaluated per data point. Convert the chart to a table and place the measure in a column to see what I mean.\nTo achieve your desired behaviour, create a new table as follows named Selection:\n\nChange your measure:\nIs Capacity Selected = \n IF(\n SELECTEDVALUE(Selection[Column1]) == \"First Capacity\" || SELECTEDVALUE(Selection[Column1]) == \"Second Capacity\" , \n 1, \n 2\n )\n\nChange the slicer to use the new table you just created and everything should work as desired.\n" ]
[ 1 ]
[]
[]
[ "dax", "powerbi" ]
stackoverflow_0074671974_dax_powerbi.txt
Q: Python: How to print a looping nested list as a Matrix I want to print a matrix of p*p (where p is an input taken from the user). The matrix should be in a format of [m,n] i.e [[[3,0],[3,1],[3,2],[3,3]],[2,0],[2,1],[2,2],[2,3]]... and so on. a = int(input()) l1 = [] for i in range(a): l1.append([]) for j in range(a): l1[i] = [j,i] print(l1) I tried using this code and realized it is wrong, what can I do to achieve the desired output. A: # Take input from the user p = int(input()) # Create an empty list l1 = [] # Iterate over the range 0 to p for i in range(p): # Create a new empty sublist for each iteration of the outer loop l1.append([]) # Iterate over the range 0 to p for j in range(p): # Append the values [j, i] to the sublist l1[i].append([j, i]) # Print the matrix print(l1)
Python: How to print a looping nested list as a Matrix
I want to print a matrix of p*p (where p is an input taken from the user). The matrix should be in a format of [m,n] i.e [[[3,0],[3,1],[3,2],[3,3]],[2,0],[2,1],[2,2],[2,3]]... and so on. a = int(input()) l1 = [] for i in range(a): l1.append([]) for j in range(a): l1[i] = [j,i] print(l1) I tried using this code and realized it is wrong, what can I do to achieve the desired output.
[ "# Take input from the user\np = int(input())\n\n# Create an empty list\nl1 = []\n\n# Iterate over the range 0 to p\nfor i in range(p):\n # Create a new empty sublist for each iteration of the outer loop\n l1.append([])\n\n # Iterate over the range 0 to p\n for j in range(p):\n # Append the values [j, i] to the sublist\n l1[i].append([j, i])\n\n# Print the matrix\nprint(l1)\n\n" ]
[ 0 ]
[]
[]
[ "list", "loops", "matrix", "python" ]
stackoverflow_0074672951_list_loops_matrix_python.txt
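For reference, the same matrix can be built with a nested list comprehension, the idiomatic equivalent of the corrected loops in the answer (an editorial sketch, not from the original post):

p = int(input())
# one row per i, each row holding the [j, i] pairs for that row
l1 = [[[j, i] for j in range(p)] for i in range(p)]
print(l1)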
Q: How to redirected to the main page using an elevated button on alert dialog without clicking on the button? How to redirected to the main page using an elevated button on alert dialog without clicking on the button? I just want to hover the mouse on the button and it will do the specific action and also redirected to the main page without clicking on the button that will appear on alert dialog box. A: You can use the onHover property of your Button class. Check the documentation of the ElevatedButton page: onHover → ValueChanged<bool>? Called when a pointer enters or exits the button response area. final, inherited
How to redirected to the main page using an elevated button on alert dialog without clicking on the button?
How to redirected to the main page using an elevated button on alert dialog without clicking on the button? I just want to hover the mouse on the button and it will do the specific action and also redirected to the main page without clicking on the button that will appear on alert dialog box.
[ "You can use onHover method of your Button class.\nCheck documentation example of ElevatedButton page\nonHover → ValueChanged<bool>?\nCalled when a pointer enters or exits the button response area.\nfinal, inherited\n\n" ]
[ 0 ]
[]
[]
[ "dart", "flutter" ]
stackoverflow_0074673940_dart_flutter.txt
Q: How can I protect a class member variable pointer? I am looking for a way to protect a class from receiving a NULL pointer at compile time. class B { // gives some API } class A { private: B* ptrB_; public: A(B* ptrB) { // How can I prevent the class from being created with a null pointer? ptrB_ = ptrB; } // multiple member functions using: ptrB_ void A::Func1(void) { if(!ptrB_ ) return; // I don't want to add it in every function. ... return; } } int main() { B tempB = NULL; A tempA(&tempB); } How can I force the user to always pass a pointer (not NULL) to A's constructor? I was looking at constexpr, but this forces the use of static, and that's something I am trying to avoid. A: To avoid the risk of NULL pointers, don't pass a pointer at all, instead use references (there are no 'null' references). e.g. A(B* ptrB) { ptrB_ = ptrB; } //instead of passing the address of an object A(B& ptrB) { ptrB_ = &ptrB; } //take and store the address of the reference
How can I protect a class member variable pointer?
I am looking for a way to protect a class from receiving a NULL pointer at compile time. class B { // gives some API } class A { private: B* ptrB_; public: A(B* ptrB) { // How can I prevent the class from being created with a null pointer? ptrB_ = ptrB; } // multiple member functions using: ptrB_ void A::Func1(void) { if(!ptrB_ ) return; // I don't want to add it in every function. ... return; } } int main() { B tempB = NULL; A tempA(&tempB); } How can I force the user to always pass a pointer (not NULL) to A's constructor? I was looking at constexpr, but this forces the use of static, and that's something I am trying to avoid.
[ "To avoid the risk of NULL pointers, don't pass a pointer at all, instead use references (there are no 'null' references).\ne.g.\nA(B* ptrB) { ptrB_ = ptrB; } //instead of passing the address of an object\nA(B& ptrB) { ptrB_ = &ptrB; } //take and store the address of the reference\n\n" ]
[ 0 ]
[]
[]
[ "c++" ]
stackoverflow_0074674125_c++.txt
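C++ references give this guarantee at compile time, which dynamically typed languages cannot. The closest Python analogue, shown here purely as an editorial aside, is failing fast in the constructor so the invariant is checked once instead of in every method:

class A:
    def __init__(self, b):
        if b is None:
            raise ValueError("A requires a non-None collaborator")
        self._b = b  # every method may now assume self._b is set

    def func1(self):
        return self._b  # no per-method None check needed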
Q: How to use multiprocessing in Python for for loop? I'm new to Python and multiprocessing, I would like to speed up my current code processing speed as it takes around 8 mins for 80 images. I only show 1 image for this code for reference purpose. I got into know that multiprocessing helps on this and gave it a try but somehow not working as what I expected. import numpy as np import cv2 import time import os import multiprocessing img = cv2.imread("C://Users/jason/Desktop/test.bmp") gry = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) _,blackMask = cv2.threshold(gry, 0, 255, cv2.THRESH_BINARY_INV) x1 = [] y1 = [] def verticle(mask, y, x): vertiPixel = 0 while(y < mask.shape[0]): if (y + 1) == mask.shape[0]: break else: if(mask[y + 1][x] == 255): vertiPixel += 1 y += 1 else: break y1.append(vertiPixel) def horizontal(mask, y, x): horiPixel = 0 while(x < mask.shape[1]): if (x + 1) == mask.shape[1]: break else: if(mask[y][x + 1] == 255): horiPixel += 1 x += 1 else: break x1.append(horiPixel) def mask(mask): for y in range (mask.shape[0]): for x in range (mask.shape[1]): if(mask[y][x] == 255): verticle(mask, y, x) horizontal(mask, y, x) mask(blackMask) print(np.average(x1), np.average(y1)) This is what I tried to work on my side. Although it's not working, added pool class for multiprocessing but getting None result. import numpy as np import cv2 import time import os from multiprocessing import Pool folderDir = "C://Users/ruler/Desktop/testseg/" total = [] with open('readme.txt', 'w') as f: count = 0 for allImages in os.listdir(folderDir): if (allImages.startswith('TRAIN_SET') and allImages.endswith(".bmp")): img = cv2.imread(os.path.join(folderDir, allImages)) gry = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) _,blackMask = cv2.threshold(gry, 0, 255, cv2.THRESH_BINARY_INV) x1 = [] y1 = [] def verticle(mask, y, x): vertiPixel = 0 while(y < mask.shape[0]): if (y + 1) == mask.shape[0]: break else: if(mask[y + 1][x] == 255): vertiPixel += 1 y += 1 else: break y1.append(vertiPixel) def horizontal(mask, y, x): horiPixel = 0 while(x < mask.shape[1]): if (x + 1) == mask.shape[1]: break else: if(mask[y][x + 1] == 255): horiPixel += 1 x += 1 else: break x1.append(horiPixel) def mask(mask): for y in range (mask.shape[0]): for x in range (mask.shape[1]): if(mask[y][x] == 255): verticle(mask, y, x) horizontal(mask, y, x) equation(y,x) def equation(y,x): a = np.average(y) * (9.9 / 305) c = np.average(x) * (9.9 / 305) final = (a + c) / 2 total.append(final) if __name__ == "__main__": pool = Pool(8) print(pool.map(mask, [blackMask] * 3)) pool.close() A: To use multiprocessing to speed up your code, you can use the Pool class from the multiprocessing module. The Pool class allows you to run multiple processes in parallel, which can help speed up your code. To use the Pool class, you need to first create a Pool object and then use the map method to apply a function to each element in a list in parallel. For example, to use the Pool class to speed up your code, you could do the following: # Import the Pool class from the multiprocessing module from multiprocessing import Pool # Create a Pool object with the desired number of processes pool = Pool(8) # Use the map method to apply the mask function to each element in a list in parallel pool.map(mask, [blackMask] * 80) # Close the pool when finished pool.close() This will create a Pool object with 8 processes, and then apply the mask function to 80 copies of the blackMask image in parallel. This should speed up your code by running multiple processes in parallel. 
However, note that using multiprocessing can be complex and may not always result in significant speedups, especially for relatively small and simple tasks like the one in your code. It may be worth trying to optimize your code in other ways before resorting to multiprocessing.
How to use multiprocessing in Python for for loop?
I'm new to Python and multiprocessing, I would like to speed up my current code processing speed as it takes around 8 mins for 80 images. I only show 1 image for this code for reference purpose. I got into know that multiprocessing helps on this and gave it a try but somehow not working as what I expected. import numpy as np import cv2 import time import os import multiprocessing img = cv2.imread("C://Users/jason/Desktop/test.bmp") gry = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) _,blackMask = cv2.threshold(gry, 0, 255, cv2.THRESH_BINARY_INV) x1 = [] y1 = [] def verticle(mask, y, x): vertiPixel = 0 while(y < mask.shape[0]): if (y + 1) == mask.shape[0]: break else: if(mask[y + 1][x] == 255): vertiPixel += 1 y += 1 else: break y1.append(vertiPixel) def horizontal(mask, y, x): horiPixel = 0 while(x < mask.shape[1]): if (x + 1) == mask.shape[1]: break else: if(mask[y][x + 1] == 255): horiPixel += 1 x += 1 else: break x1.append(horiPixel) def mask(mask): for y in range (mask.shape[0]): for x in range (mask.shape[1]): if(mask[y][x] == 255): verticle(mask, y, x) horizontal(mask, y, x) mask(blackMask) print(np.average(x1), np.average(y1)) This is what I tried to work on my side. Although it's not working, added pool class for multiprocessing but getting None result. import numpy as np import cv2 import time import os from multiprocessing import Pool folderDir = "C://Users/ruler/Desktop/testseg/" total = [] with open('readme.txt', 'w') as f: count = 0 for allImages in os.listdir(folderDir): if (allImages.startswith('TRAIN_SET') and allImages.endswith(".bmp")): img = cv2.imread(os.path.join(folderDir, allImages)) gry = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) _,blackMask = cv2.threshold(gry, 0, 255, cv2.THRESH_BINARY_INV) x1 = [] y1 = [] def verticle(mask, y, x): vertiPixel = 0 while(y < mask.shape[0]): if (y + 1) == mask.shape[0]: break else: if(mask[y + 1][x] == 255): vertiPixel += 1 y += 1 else: break y1.append(vertiPixel) def horizontal(mask, y, x): horiPixel = 0 while(x < mask.shape[1]): if (x + 1) == mask.shape[1]: break else: if(mask[y][x + 1] == 255): horiPixel += 1 x += 1 else: break x1.append(horiPixel) def mask(mask): for y in range (mask.shape[0]): for x in range (mask.shape[1]): if(mask[y][x] == 255): verticle(mask, y, x) horizontal(mask, y, x) equation(y,x) def equation(y,x): a = np.average(y) * (9.9 / 305) c = np.average(x) * (9.9 / 305) final = (a + c) / 2 total.append(final) if __name__ == "__main__": pool = Pool(8) print(pool.map(mask, [blackMask] * 3)) pool.close()
[ "To use multiprocessing to speed up your code, you can use the Pool class from the multiprocessing module. The Pool class allows you to run multiple processes in parallel, which can help speed up your code.\nTo use the Pool class, you need to first create a Pool object and then use the map method to apply a function to each element in a list in parallel. For example, to use the Pool class to speed up your code, you could do the following:\n# Import the Pool class from the multiprocessing module\nfrom multiprocessing import Pool\n\n# Create a Pool object with the desired number of processes\npool = Pool(8)\n\n# Use the map method to apply the mask function to each element in a list in parallel\npool.map(mask, [blackMask] * 80)\n\n# Close the pool when finished\npool.close()\n\nThis will create a Pool object with 8 processes, and then apply the mask function to 80 copies of the blackMask image in parallel. This should speed up your code by running multiple processes in parallel.\nHowever, note that using multiprocessing can be complex and may not always result in significant speedups, especially for relatively small and simple tasks like the one in your code. It may be worth trying to optimize your code in other ways before resorting to multiprocessing.\n" ]
[ 1 ]
[]
[]
[ "multiprocessing", "python", "python_3.x", "python_multiprocessing" ]
stackoverflow_0074674131_multiprocessing_python_python_3.x_python_multiprocessing.txt
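One detail the answer glosses over: worker processes do not share the parent's memory, so appending to module-level lists like x1, y1, or total inside a worker is lost. Return values from the worker and collect them with map instead. A minimal sketch with placeholder names (process_image and the file list are illustrative, not from the post):

from multiprocessing import Pool

def process_image(path):
    # stand-in for the real per-image work; must live at module top level
    # so multiprocessing can pickle it
    return len(path)

if __name__ == "__main__":
    paths = ["img1.bmp", "img2.bmp", "img3.bmp"]  # placeholder file list
    with Pool(8) as pool:
        results = pool.map(process_image, paths)  # one result per input
    print(results)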
Q: libmpg123 force floating point output when I read mp3 with integer encoding I use libsndfile to read audio files but MP3 isn't available. So, I want to use libmp123 to read mp3 files. I found easily how to read a "short int" encoding file and then convert datas read to floating point [-1.0 ... 1.0]. My question is: "libmp123 can do this automatically like libsndfile?" A: Did you look at the example program? You can use mpg123_format_none to clear the list of allowed format/encoding pairs and then use mpg123_format to add new format/encoding pairs that only contain the encoding you want (MPG123_ENC_FLOAT_32, for example). A: It does work for me. I use libmp123 inside my own program. I wish to load mp3 datas as float but with, for example, a mp3 'short int' encoded. int err; err = mpg123_format_none(m_handle); std::cout << err << std::endl; err = mpg123_format(m_handle,m_rate,m_channels, MPG123_ENC_FLOAT_32); std::cout << err << std::endl; ... if (mpg123_read(m_handle, (unsigned char*)titi, nb_framesToRead*mpg123_encsize(m_encoding), &nb_framesRead) != MPG123_OK) { std::cerr << mpg123_strerror(m_handle); } titi is a preallocated float table. m_encoding = 208(SIGNED_16) To sum up, I wish to have an equivalent to "sf_readf_float" http://www.mega-nerd.com/libsndfile/api.html#readf A: This worked for me: int ret; mpg123_handle *m = mpg123_new(NULL, &ret); mpg123_param(m, MPG123_ADD_FLAGS, MPG123_FORCE_FLOAT, 0.);
libmpg123 force floating point output when I read mp3 with integer encoding
I use libsndfile to read audio files but MP3 isn't available. So, I want to use libmpg123 to read mp3 files. I easily found how to read a "short int" encoded file and then convert the data read to floating point [-1.0 ... 1.0]. My question is: can libmpg123 do this automatically, like libsndfile?
[ "Did you look at the example program?\nYou can use mpg123_format_none to clear the list of allowed format/encoding pairs and then use mpg123_format to add new format/encoding pairs that only contain the encoding you want (MPG123_ENC_FLOAT_32, for example).\n", "It does work for me.\nI use libmp123 inside my own program.\nI wish to load mp3 datas as float but with, for example, a mp3 'short int' encoded.\nint err;\nerr = mpg123_format_none(m_handle);\nstd::cout << err << std::endl;\nerr = mpg123_format(m_handle,m_rate,m_channels, MPG123_ENC_FLOAT_32);\nstd::cout << err << std::endl;\n\n... \nif (mpg123_read(m_handle,\n (unsigned char*)titi,\n nb_framesToRead*mpg123_encsize(m_encoding),\n &nb_framesRead) != MPG123_OK) {\n std::cerr << mpg123_strerror(m_handle);\n}\n\ntiti is a preallocated float table.\nm_encoding = 208(SIGNED_16)\nTo sum up, I wish to have an equivalent to \"sf_readf_float\" http://www.mega-nerd.com/libsndfile/api.html#readf\n", "This worked for me:\n int ret;\n mpg123_handle *m = mpg123_new(NULL, &ret);\n mpg123_param(m, MPG123_ADD_FLAGS, MPG123_FORCE_FLOAT, 0.);\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "c++", "mp3" ]
stackoverflow_0059264157_c++_mp3.txt
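For comparison, the manual short-int-to-float conversion the question mentions is a one-liner once the PCM bytes are in hand; this NumPy sketch is an editorial aside and has nothing to do with the libmpg123 C API itself:

import numpy as np

def pcm16_to_float(raw: bytes) -> np.ndarray:
    # interpret the buffer as interleaved signed 16-bit samples
    samples = np.frombuffer(raw, dtype=np.int16)
    # scale into [-1.0, 1.0) as libsndfile's float readers do
    return samples.astype(np.float32) / 32768.0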
Q: webgl how to draw a ring As shown below, all I can think of is to describe all the points on a circle and then use triangulation to draw a ring with width I can also think of a way to use overlay. First draw a circle and then draw a circle with a smaller radius const TAU_SEGMENTS = 360; const TAU = Math.PI * 2; export function arc(x0: number, y0: number, radius: number, startAng = 0, endAng = Math.PI * 2) { const ang = Math.min(TAU, endAng - startAng); const ret = ang === TAU ? [] : [x0, y0]; const segments = Math.round(TAU_SEGMENTS * ang / TAU); for(let i = 0; i <= segments; i++) { const x = x0 + radius * Math.cos(startAng + ang * i / segments); const y = y0 + radius * Math.sin(startAng + ang * i / segments); ret.push(x, y); } return ret; } const gl = container.current.getContext("webgl2"); const program = initProgram(gl); const position = new Float32Array(arc(0, 0, 1).concat(...arc(0, 0, 0.9))); let buffer = gl.createBuffer(); gl.bindBuffer(gl.ARRAY_BUFFER, buffer); gl.bufferData(gl.ARRAY_BUFFER, position, gl.STATIC_DRAW); let gl_position = gl.getAttribLocation(program, "position"); gl.vertexAttribPointer(gl_position, 2, gl.FLOAT, false, 0, 0); gl.enableVertexAttribArray(gl_position); gl.drawArrays(gl.LINE_STRIP, 0, position.length / 2); The final effect of the code I wrote is as follows. May I ask how I should modify it to become the same as the above picture A: You have to add a color attributes for the vertices and you have to draw a gl.TRIANGLE_STRIP primitive instead of a gl.LINE_STRIP primitive. The color can be calculated from the angle. Map the angle from the range [0, PI] to the range [0, 1] and use the formula for the hue value from the HSL and HSV color space: function HUEtoRGB(hue) { return [ Math.min(1, Math.max(0, Math.abs(hue * 6.0 - 3.0) - 1.0)), Math.min(1, Math.max(0, 2.0 - Math.abs(hue * 6.0 - 2.0))), Math.min(1, Math.max(0, 2.0 - Math.abs(hue * 6.0 - 4.0))) ]; } Create vertices in pairs for the inner and outer arcs with the corresponding color attribute: const TAU_SEGMENTS = 360; const TAU = Math.PI * 2; function arc(x0, y0, innerRadius, outerRadius, startAng = 0, endAng = Math.PI * 2) { const ang = Math.min(TAU, endAng - startAng); const position = ang === TAU ? 
[] : [x0, y0]; const color = [] const segments = Math.round(TAU_SEGMENTS * ang / TAU); for(let i = 0; i <= segments; i++) { const angle = startAng + ang * i / segments; const x1 = x0 + innerRadius * Math.cos(angle); const y1 = y0 + innerRadius * Math.sin(angle); const x2 = x0 + outerRadius * Math.cos(angle); const y2 = y0 + outerRadius * Math.sin(angle); position.push(x1, y1, x2, y2); let hue = (Math.PI/2 - angle) / (2 * Math.PI); if (hue < 0) hue += 1; const rgb = HUEtoRGB(hue); color.push(...rgb); color.push(...rgb); } return { 'position': position, 'color': color }; } Create a shader with a color attribute and pass the color attribute from the vertex to the fragment shader: #version 300 es precision highp float; in vec2 position; in vec4 color; out vec4 vColor; void main() { vColor = color; gl_Position = vec4(position.xy, 0.0, 1.0); } #version 300 es precision highp float; in vec4 vColor; out vec4 fragColor; void main() { fragColor = vColor; } Vertex specification: const attributes = arc(0, 0, 0.6, 0.9); position = new Float32Array(attributes.position); color = new Float32Array(attributes.color); let position_buffer = gl.createBuffer(); gl.bindBuffer(gl.ARRAY_BUFFER, position_buffer); gl.bufferData(gl.ARRAY_BUFFER, position, gl.STATIC_DRAW); let gl_position = gl.getAttribLocation(program, "position"); gl.vertexAttribPointer(gl_position, 2, gl.FLOAT, false, 0, 0); gl.enableVertexAttribArray(gl_position); let color_buffer = gl.createBuffer(); gl.bindBuffer(gl.ARRAY_BUFFER, color_buffer); gl.bufferData(gl.ARRAY_BUFFER, color, gl.STATIC_DRAW); let gl_color = gl.getAttribLocation(program, "color"); gl.vertexAttribPointer(gl_color, 3, gl.FLOAT, false, 0, 0); gl.enableVertexAttribArray(gl_color); Complet and runnable example: const canvas = document.getElementById( "ogl-canvas"); const gl = canvas.getContext("webgl2"); const program = gl.createProgram(); for (let i = 0; i < 2; ++i) { let source = document.getElementById(i==0 ? "draw-shader-vs" : "draw-shader-fs").text; let shaderObj = gl.createShader(i==0 ? gl.VERTEX_SHADER : gl.FRAGMENT_SHADER); gl.shaderSource(shaderObj, source); gl.compileShader(shaderObj); let status = gl.getShaderParameter(shaderObj, gl.COMPILE_STATUS); if (!status) alert(gl.getShaderInfoLog(shaderObj)); gl.attachShader(program, shaderObj); gl.linkProgram(program); } status = gl.getProgramParameter(program, gl.LINK_STATUS); if ( !status ) alert(gl.getProgramInfoLog(program)); gl.useProgram(program); function HUEtoRGB(hue) { return [ Math.min(1, Math.max(0, Math.abs(hue * 6.0 - 3.0) - 1.0)), Math.min(1, Math.max(0, 2.0 - Math.abs(hue * 6.0 - 2.0))), Math.min(1, Math.max(0, 2.0 - Math.abs(hue * 6.0 - 4.0))) ]; } const TAU_SEGMENTS = 360; const TAU = Math.PI * 2; function arc(x0, y0, innerRadius, outerRadius, startAng = 0, endAng = Math.PI * 2) { const ang = Math.min(TAU, endAng - startAng); const position = ang === TAU ? 
[] : [x0, y0]; const color = [] const segments = Math.round(TAU_SEGMENTS * ang / TAU); for(let i = 0; i <= segments; i++) { const angle = startAng + ang * i / segments; const x1 = x0 + innerRadius * Math.cos(angle); const y1 = y0 + innerRadius * Math.sin(angle); const x2 = x0 + outerRadius * Math.cos(angle); const y2 = y0 + outerRadius * Math.sin(angle); position.push(x1, y1, x2, y2); let hue = (Math.PI/2 - angle) / (2 * Math.PI); if (hue < 0) hue += 1; const rgb = HUEtoRGB(hue); color.push(...rgb); color.push(...rgb); } return { 'position': position, 'color': color }; } const attributes = arc(0, 0, 0.6, 0.9); position = new Float32Array(attributes.position); color = new Float32Array(attributes.color); let position_buffer = gl.createBuffer(); gl.bindBuffer(gl.ARRAY_BUFFER, position_buffer); gl.bufferData(gl.ARRAY_BUFFER, position, gl.STATIC_DRAW); let gl_position = gl.getAttribLocation(program, "position"); gl.vertexAttribPointer(gl_position, 2, gl.FLOAT, false, 0, 0); gl.enableVertexAttribArray(gl_position); let color_buffer = gl.createBuffer(); gl.bindBuffer(gl.ARRAY_BUFFER, color_buffer); gl.bufferData(gl.ARRAY_BUFFER, color, gl.STATIC_DRAW); let gl_color = gl.getAttribLocation(program, "color"); gl.vertexAttribPointer(gl_color, 3, gl.FLOAT, false, 0, 0); gl.enableVertexAttribArray(gl_color); gl.enable( gl.DEPTH_TEST ); gl.clearColor( 0.0, 0.0, 0.0, 1.0 ); //vp_size = [gl.drawingBufferWidth, gl.drawingBufferHeight]; vp_size = [window.innerWidth, window.innerHeight]; vp_size = [256, 256] canvas.width = vp_size[0]; canvas.height = vp_size[1]; gl.viewport( 0, 0, canvas.width, canvas.height ); gl.clearColor(1, 1, 1, 1); gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT); gl.drawArrays(gl.TRIANGLE_STRIP, 0, attributes.position.length / 2); <script id="draw-shader-vs" type="x-shader/x-vertex">#version 300 es precision highp float; in vec2 position; in vec4 color; out vec4 vColor; void main() { vColor = color; gl_Position = vec4(position.xy, 0.0, 1.0); } </script> <script id="draw-shader-fs" type="x-shader/x-fragment">#version 300 es precision highp float; in vec4 vColor; out vec4 fragColor; void main() { fragColor = vColor; } </script> <canvas id="ogl-canvas" style="border: none"></canvas>
webgl how to draw a ring
As shown below, all I can think of is to describe all the points on a circle and then use triangulation to draw a ring with width I can also think of a way to use overlay. First draw a circle and then draw a circle with a smaller radius const TAU_SEGMENTS = 360; const TAU = Math.PI * 2; export function arc(x0: number, y0: number, radius: number, startAng = 0, endAng = Math.PI * 2) { const ang = Math.min(TAU, endAng - startAng); const ret = ang === TAU ? [] : [x0, y0]; const segments = Math.round(TAU_SEGMENTS * ang / TAU); for(let i = 0; i <= segments; i++) { const x = x0 + radius * Math.cos(startAng + ang * i / segments); const y = y0 + radius * Math.sin(startAng + ang * i / segments); ret.push(x, y); } return ret; } const gl = container.current.getContext("webgl2"); const program = initProgram(gl); const position = new Float32Array(arc(0, 0, 1).concat(...arc(0, 0, 0.9))); let buffer = gl.createBuffer(); gl.bindBuffer(gl.ARRAY_BUFFER, buffer); gl.bufferData(gl.ARRAY_BUFFER, position, gl.STATIC_DRAW); let gl_position = gl.getAttribLocation(program, "position"); gl.vertexAttribPointer(gl_position, 2, gl.FLOAT, false, 0, 0); gl.enableVertexAttribArray(gl_position); gl.drawArrays(gl.LINE_STRIP, 0, position.length / 2); The final effect of the code I wrote is as follows. May I ask how I should modify it to become the same as the above picture
[ "You have to add a color attributes for the vertices and you have to draw a gl.TRIANGLE_STRIP primitive instead of a gl.LINE_STRIP primitive.\nThe color can be calculated from the angle. Map the angle from the range [0, PI] to the range [0, 1] and use the formula for the hue value from the HSL and HSV color space:\nfunction HUEtoRGB(hue) {\n return [\n Math.min(1, Math.max(0, Math.abs(hue * 6.0 - 3.0) - 1.0)),\n Math.min(1, Math.max(0, 2.0 - Math.abs(hue * 6.0 - 2.0))),\n Math.min(1, Math.max(0, 2.0 - Math.abs(hue * 6.0 - 4.0)))\n ];\n}\n\nCreate vertices in pairs for the inner and outer arcs with the corresponding color attribute:\nconst TAU_SEGMENTS = 360;\nconst TAU = Math.PI * 2;\nfunction arc(x0, y0, innerRadius, outerRadius, startAng = 0, endAng = Math.PI * 2) {\n const ang = Math.min(TAU, endAng - startAng);\n const position = ang === TAU ? [] : [x0, y0];\n const color = []\n const segments = Math.round(TAU_SEGMENTS * ang / TAU);\n for(let i = 0; i <= segments; i++) {\n const angle = startAng + ang * i / segments;\n const x1 = x0 + innerRadius * Math.cos(angle);\n const y1 = y0 + innerRadius * Math.sin(angle);\n const x2 = x0 + outerRadius * Math.cos(angle);\n const y2 = y0 + outerRadius * Math.sin(angle);\n position.push(x1, y1, x2, y2);\n let hue = (Math.PI/2 - angle) / (2 * Math.PI);\n if (hue < 0) hue += 1;\n const rgb = HUEtoRGB(hue);\n color.push(...rgb);\n color.push(...rgb);\n }\n return { 'position': position, 'color': color };\n}\n\nCreate a shader with a color attribute and pass the color attribute from the vertex to the fragment shader:\n#version 300 es\nprecision highp float;\nin vec2 position;\nin vec4 color;\nout vec4 vColor;\n\nvoid main()\n{\n vColor = color;\n gl_Position = vec4(position.xy, 0.0, 1.0);\n}\n\n#version 300 es\nprecision highp float;\nin vec4 vColor;\nout vec4 fragColor;\n\nvoid main() \n{\n fragColor = vColor;\n}\n\nVertex specification:\nconst attributes = arc(0, 0, 0.6, 0.9);\nposition = new Float32Array(attributes.position);\ncolor = new Float32Array(attributes.color);\n\nlet position_buffer = gl.createBuffer();\ngl.bindBuffer(gl.ARRAY_BUFFER, position_buffer);\ngl.bufferData(gl.ARRAY_BUFFER, position, gl.STATIC_DRAW);\nlet gl_position = gl.getAttribLocation(program, \"position\");\ngl.vertexAttribPointer(gl_position, 2, gl.FLOAT, false, 0, 0);\ngl.enableVertexAttribArray(gl_position);\n\nlet color_buffer = gl.createBuffer();\ngl.bindBuffer(gl.ARRAY_BUFFER, color_buffer);\ngl.bufferData(gl.ARRAY_BUFFER, color, gl.STATIC_DRAW);\nlet gl_color = gl.getAttribLocation(program, \"color\");\ngl.vertexAttribPointer(gl_color, 3, gl.FLOAT, false, 0, 0);\ngl.enableVertexAttribArray(gl_color);\n\n\nComplet and runnable example:\n\n\nconst canvas = document.getElementById( \"ogl-canvas\");\nconst gl = canvas.getContext(\"webgl2\");\n\nconst program = gl.createProgram();\nfor (let i = 0; i < 2; ++i) {\n let source = document.getElementById(i==0 ? \"draw-shader-vs\" : \"draw-shader-fs\").text;\n let shaderObj = gl.createShader(i==0 ? 
gl.VERTEX_SHADER : gl.FRAGMENT_SHADER);\n gl.shaderSource(shaderObj, source);\n gl.compileShader(shaderObj);\n let status = gl.getShaderParameter(shaderObj, gl.COMPILE_STATUS);\n if (!status) alert(gl.getShaderInfoLog(shaderObj));\n gl.attachShader(program, shaderObj);\n gl.linkProgram(program);\n}\nstatus = gl.getProgramParameter(program, gl.LINK_STATUS);\nif ( !status ) alert(gl.getProgramInfoLog(program));\ngl.useProgram(program);\n\nfunction HUEtoRGB(hue) {\n return [\n Math.min(1, Math.max(0, Math.abs(hue * 6.0 - 3.0) - 1.0)),\n Math.min(1, Math.max(0, 2.0 - Math.abs(hue * 6.0 - 2.0))),\n Math.min(1, Math.max(0, 2.0 - Math.abs(hue * 6.0 - 4.0)))\n ];\n}\n\nconst TAU_SEGMENTS = 360;\nconst TAU = Math.PI * 2;\nfunction arc(x0, y0, innerRadius, outerRadius, startAng = 0, endAng = Math.PI * 2) {\n const ang = Math.min(TAU, endAng - startAng);\n const position = ang === TAU ? [] : [x0, y0];\n const color = []\n const segments = Math.round(TAU_SEGMENTS * ang / TAU);\n for(let i = 0; i <= segments; i++) {\n const angle = startAng + ang * i / segments;\n const x1 = x0 + innerRadius * Math.cos(angle);\n const y1 = y0 + innerRadius * Math.sin(angle);\n const x2 = x0 + outerRadius * Math.cos(angle);\n const y2 = y0 + outerRadius * Math.sin(angle);\n position.push(x1, y1, x2, y2);\n let hue = (Math.PI/2 - angle) / (2 * Math.PI);\n if (hue < 0) hue += 1;\n const rgb = HUEtoRGB(hue);\n color.push(...rgb);\n color.push(...rgb);\n }\n return { 'position': position, 'color': color };\n}\n\nconst attributes = arc(0, 0, 0.6, 0.9);\nposition = new Float32Array(attributes.position);\ncolor = new Float32Array(attributes.color);\n\nlet position_buffer = gl.createBuffer();\ngl.bindBuffer(gl.ARRAY_BUFFER, position_buffer);\ngl.bufferData(gl.ARRAY_BUFFER, position, gl.STATIC_DRAW);\nlet gl_position = gl.getAttribLocation(program, \"position\");\ngl.vertexAttribPointer(gl_position, 2, gl.FLOAT, false, 0, 0);\ngl.enableVertexAttribArray(gl_position);\n\nlet color_buffer = gl.createBuffer();\ngl.bindBuffer(gl.ARRAY_BUFFER, color_buffer);\ngl.bufferData(gl.ARRAY_BUFFER, color, gl.STATIC_DRAW);\nlet gl_color = gl.getAttribLocation(program, \"color\");\ngl.vertexAttribPointer(gl_color, 3, gl.FLOAT, false, 0, 0);\ngl.enableVertexAttribArray(gl_color);\n\ngl.enable( gl.DEPTH_TEST );\ngl.clearColor( 0.0, 0.0, 0.0, 1.0 );\n\n//vp_size = [gl.drawingBufferWidth, gl.drawingBufferHeight];\nvp_size = [window.innerWidth, window.innerHeight];\nvp_size = [256, 256]\ncanvas.width = vp_size[0];\ncanvas.height = vp_size[1];\n\ngl.viewport( 0, 0, canvas.width, canvas.height );\ngl.clearColor(1, 1, 1, 1);\ngl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);\n\ngl.drawArrays(gl.TRIANGLE_STRIP, 0, attributes.position.length / 2);\n<script id=\"draw-shader-vs\" type=\"x-shader/x-vertex\">#version 300 es\nprecision highp float;\n\nin vec2 position;\nin vec4 color;\nout vec4 vColor;\n\nvoid main()\n{\n vColor = color;\n gl_Position = vec4(position.xy, 0.0, 1.0);\n}\n</script>\n\n<script id=\"draw-shader-fs\" type=\"x-shader/x-fragment\">#version 300 es\nprecision highp float;\n\nin vec4 vColor;\nout vec4 fragColor;\n\nvoid main() \n{\n fragColor = vColor;\n}\n</script>\n<canvas id=\"ogl-canvas\" style=\"border: none\"></canvas>\n\n\n\n" ]
[ 1 ]
[]
[]
[ "javascript", "webgl", "webgl2" ]
stackoverflow_0074673371_javascript_webgl_webgl2.txt
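The HUEtoRGB formula in the answer is self-contained and worth sanity-checking on its own; a direct Python port (editorial, for verification only) reproduces the expected primaries:

def hue_to_rgb(hue):
    r = min(1.0, max(0.0, abs(hue * 6.0 - 3.0) - 1.0))
    g = min(1.0, max(0.0, 2.0 - abs(hue * 6.0 - 2.0)))
    b = min(1.0, max(0.0, 2.0 - abs(hue * 6.0 - 4.0)))
    return (r, g, b)

print(hue_to_rgb(0.0))    # (1.0, 0.0, 0.0) -> red
print(hue_to_rgb(1 / 3))  # (0.0, 1.0, 0.0) -> green
print(hue_to_rgb(2 / 3))  # (0.0, 0.0, 1.0) -> blue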
Q: Data won't populate into a subdocument using mongoose/mongodb This is my first question so thanks and sorry if I don't get the format for asking questions perfect. I'm trying to put together a database for a quiz application and I can't quite figure out why I can't get the data from the API I'm trying to put together to populate in the "possibleanswers" array in my database. Image of how the data currently comes back Here is my models file. const mongoose = require("mongoose"); const { Schema } = mongoose; const possibleAnswersSchema = new Schema({ a: { type: String, required: true, }, b: { type: String, required: true, }, c: { type: String, required: true, }, d: { type: String, required: true, }, }); const questionSchema = new Schema({ description: { type: String, required: true, trim: true, }, question: { type: String, required: true, }, possibleAnswers: [possibleAnswersSchema], level: { type: String, required: true, }, questionType: { type: String, required: true, }, }); const Question = mongoose.model("question", questionSchema); module.exports = Question; And this is my "seeds" file await Question.deleteMany(); const possibleAnswersData = [{ a: "した", b: "じた", c: "しだ", d: "ちた"}]; const questions = await Question.insertMany( { description: "What is the reading of the Kanji below?", question: "上", possibleanswers: possibleAnswersData, level: "N5", questionType: "kanji", }, (err, data) => { if (err) { console.log(err); } else { console.log(data) } }); console.log("questions seeded"); Thanks for all your help! A: From your schema declaration, each possibleAnswers should be an object with 4 properties (a, b, c and d). You are trying to add 4 objects with one properties (which won't work because the properties are all required). Try with: const questions = await Question.insertMany([ { description: 'What is the reading of the Kanji below?', question: '上', possibleAnswers: [{ a: 'した', b: 'じた', c: 'しだ', d: 'ちた' }], level: 'N5', questionType: 'kanji', }, { description: 'What is the reading of the Kanji below?', question: '下', possibleAnswers: [{ a: 'した', b: 'じた', c: 'しだ', d: 'ちた' }], level: 'N5', questionType: 'kanji', }, ]);
Data won't populate into a subdocument using mongoose/mongodb
This is my first question so thanks and sorry if I don't get the format for asking questions perfect. I'm trying to put together a database for a quiz application and I can't quite figure out why I can't get the data from the API I'm trying to put together to populate in the "possibleanswers" array in my database. Image of how the data currently comes back Here is my models file. const mongoose = require("mongoose"); const { Schema } = mongoose; const possibleAnswersSchema = new Schema({ a: { type: String, required: true, }, b: { type: String, required: true, }, c: { type: String, required: true, }, d: { type: String, required: true, }, }); const questionSchema = new Schema({ description: { type: String, required: true, trim: true, }, question: { type: String, required: true, }, possibleAnswers: [possibleAnswersSchema], level: { type: String, required: true, }, questionType: { type: String, required: true, }, }); const Question = mongoose.model("question", questionSchema); module.exports = Question; And this is my "seeds" file await Question.deleteMany(); const possibleAnswersData = [{ a: "した", b: "じた", c: "しだ", d: "ちた"}]; const questions = await Question.insertMany( { description: "What is the reading of the Kanji below?", question: "上", possibleanswers: possibleAnswersData, level: "N5", questionType: "kanji", }, (err, data) => { if (err) { console.log(err); } else { console.log(data) } }); console.log("questions seeded"); Thanks for all your help!
[ "From your schema declaration, each possibleAnswers should be an object with 4 properties (a, b, c and d).\nYou are trying to add 4 objects with one properties (which won't work because the properties are all required).\nTry with:\nconst questions = await Question.insertMany([\n {\n description: 'What is the reading of the Kanji below?',\n question: '上',\n possibleAnswers: [{ a: 'した', b: 'じた', c: 'しだ', d: 'ちた' }],\n level: 'N5',\n questionType: 'kanji',\n },\n {\n description: 'What is the reading of the Kanji below?',\n question: '下',\n possibleAnswers: [{ a: 'した', b: 'じた', c: 'しだ', d: 'ちた' }],\n level: 'N5',\n questionType: 'kanji',\n },\n]);\n\n" ]
[ 0 ]
[]
[]
[ "mongodb", "mongoose", "mongoose_schema" ]
stackoverflow_0074673677_mongodb_mongoose_mongoose_schema.txt
Q: Morris js chart loads twice I'm using Morris js to draw basic line chart. Here is my code: function getChart(range) { $.ajax({ type: 'GET', url: "page.php?doChart=1&range=" + range, dataType: 'json' }).done(function(json) { Morris.Line({ element: 'chart', data: json.data, xkey: 'month', ykeys: json.xkey, labels: json.label, parseTime: false }); }); } $(document).ready(function() { getChart('all'); $("#timeRange").on('click', function() { getChart($(this).data('value')) }); }); The above code works just fine on page load, the problem is when I try to load chart for different period, using on click event. Original container id #chart is being replaced, but for some reason the same instances of chart is being created just below #chart div. A: Try it like this: function getChart(range) { $("#chart").empty(); $.ajax({ ... etc Every time you run the Morris.Line function, it inserts an <svg> element into your chosen element ("chart" in your case). It doesn't overwrite the previous one, it adds an extra one. So you need to clear the old chart out first. See this demo to demonstrate the duplication issue: http://jsbin.com/yelonizoce/1/edit?html,js,output And this one to demonstrate the use of .empty(); http://jsbin.com/xinegovoqo/1/edit?html,js,output A: Before create new chart, old chart should be destroyed. For this use Jquery .empty() function. $('#chartID').empty(); more information in this post.
Morris js chart loads twice
I'm using Morris js to draw basic line chart. Here is my code: function getChart(range) { $.ajax({ type: 'GET', url: "page.php?doChart=1&range=" + range, dataType: 'json' }).done(function(json) { Morris.Line({ element: 'chart', data: json.data, xkey: 'month', ykeys: json.xkey, labels: json.label, parseTime: false }); }); } $(document).ready(function() { getChart('all'); $("#timeRange").on('click', function() { getChart($(this).data('value')) }); }); The above code works just fine on page load, the problem is when I try to load chart for different period, using on click event. Original container id #chart is being replaced, but for some reason the same instances of chart is being created just below #chart div.
[ "Try it like this:\nfunction getChart(range) {\n $(\"#chart\").empty();\n $.ajax({\n... etc\n\nEvery time you run the Morris.Line function, it inserts an <svg> element into your chosen element (\"chart\" in your case). It doesn't overwrite the previous one, it adds an extra one. So you need to clear the old chart out first.\nSee this demo to demonstrate the duplication issue:\nhttp://jsbin.com/yelonizoce/1/edit?html,js,output\nAnd this one to demonstrate the use of .empty();\nhttp://jsbin.com/xinegovoqo/1/edit?html,js,output\n", "Before create new chart, old chart should be destroyed. For this use Jquery .empty() function.\n$('#chartID').empty();\n\nmore information in this post.\n" ]
[ 3, 1 ]
[]
[]
[ "jquery", "morris.js" ]
stackoverflow_0038533484_jquery_morris.js.txt
Q: Parser unrecognized arguments I accept a file path as an argument for my .huy file type python editor but when i change to exe and run it it says: Editor.exe: error: unrecognized arguments: C:\Users\Doan 1\Desktop\test.huy but when i run the python file: Editor.py -f "C:\Users\Doan 1\Desktop\test.huy" it works how do i fix this? this was the parser part: #get arguments parser = argparse.ArgumentParser(description='test') parser.add_argument('-f', metavar='FILE') args = parser.parse_args() location = str(args)[13:-2] if location and location != 'on': load(location) A: To fix this issue, you need to pass the -f flag and the file path to the EXE file when you run it from the command line, just like you do when running the Python file. Here is an example of how you can run the EXE file and pass the required arguments: Editor.exe -f "C:\Users\Doan 1\Desktop\test.huy" Make sure to include the -f flag and the file path in quotes if the file path contains spaces. This should allow the EXE file to parse the arguments and access the file at the specified location. Here is the updated code for the parser part of your script: #get arguments parser = argparse.ArgumentParser(description='test') parser.add_argument('-f', metavar='FILE') args = parser.parse_args() location = args.f if location and location != 'on': load(location) I have removed the str() and slicing operations from the location variable, and instead directly accessed the f attribute of the args object. This should fix the issue and allow the EXE file to parse the arguments correctly.
Parser unrecognized arguments
I accept a file path as an argument for my .huy file type python editor but when i change to exe and run it it says: Editor.exe: error: unrecognized arguments: C:\Users\Doan 1\Desktop\test.huy but when i run the python file: Editor.py -f "C:\Users\Doan 1\Desktop\test.huy" it works how do i fix this? this was the parser part: #get arguments parser = argparse.ArgumentParser(description='test') parser.add_argument('-f', metavar='FILE') args = parser.parse_args() location = str(args)[13:-2] if location and location != 'on': load(location)
[ "To fix this issue, you need to pass the -f flag and the file path to the EXE file when you run it from the command line, just like you do when running the Python file.\nHere is an example of how you can run the EXE file and pass the required arguments:\nEditor.exe -f \"C:\\Users\\Doan 1\\Desktop\\test.huy\"\n\nMake sure to include the -f flag and the file path in quotes if the file path contains spaces. This should allow the EXE file to parse the arguments and access the file at the specified location.\nHere is the updated code for the parser part of your script:\n#get arguments\nparser = argparse.ArgumentParser(description='test')\nparser.add_argument('-f', metavar='FILE')\nargs = parser.parse_args()\nlocation = args.f\nif location and location != 'on':\n load(location)\n\nI have removed the str() and slicing operations from the location variable, and instead directly accessed the f attribute of the args object. This should fix the issue and allow the EXE file to parse the arguments correctly.\n" ]
[ 0 ]
[]
[]
[ "argparse", "python", "python_3.x" ]
stackoverflow_0074672656_argparse_python_python_3.x.txt
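The underlying cause is that an unquoted path containing a space is split into two arguments before argparse ever sees it. You can reproduce both cases without launching the EXE by handing parse_args an explicit argv list (editorial sketch):

import argparse

parser = argparse.ArgumentParser(description="test")
parser.add_argument("-f", metavar="FILE")

# quoted path -> one argument, parses fine
args = parser.parse_args(["-f", r"C:\Users\Doan 1\Desktop\test.huy"])
print(args.f)

# unquoted path -> the shell splits it, and the extra piece is unrecognized
# parser.parse_args(["-f", r"C:\Users\Doan", r"1\Desktop\test.huy"])  # raises SystemExit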
Q: why posting data to firebase using flutter bloc is not emitting? I'm creating an app with firebase as a database. After sending data to firebase, app screen should pop out for that I had bloclistener on the screen but after sending the data to firestore database, nothing is happening, flow is stopped after coming to loaded state in bloc file why? check my code so that you will know. I can see my data in firebase but it is not popping out because flow is not coming to listener. state: class SampletestInitial extends SampletestState { @override List<Object> get props => []; } class SampletestLoaded extends SampletestState { SampletestLoaded(); @override List<Object> get props => []; } class SampletestError extends SampletestState { final error; SampletestError({required this.error}); @override List<Object> get props => [error]; } bloc: class SampletestBloc extends Bloc<SampletestEvent, SampletestState> { SampletestBloc() : super(SampletestInitial()) { on<SampletestPostData>((event, emit) async { emit(SampletestInitial()); try { await Repo().sampleTesting(event.des); emit(SampletestLoaded()); } catch (e) { emit(SampletestError(error: e.toString())); print(e); } }); } } Repo: ---- Firebase post data Future<void> sampleTesting(String des) async { final docTicket = FirebaseFirestore.instance.collection('sample').doc(); final json = {'Same': des}; await docTicket.set(json); } TicketScreen: //After clicking the button --- BlocProvider<SampletestBloc>.value( value: BlocProvider.of<SampletestBloc>(context, listen: false) ..add(SampletestPostData(description.text)), child: BlocListener<SampletestBloc, SampletestState>( listener: (context, state) { if (state is SampletestLoaded) { Navigator.pop(context); print("Popped out"); } }, ), ); A: im not sure but i think that you have the same hash of: AllData? data; try to remove AllData? data; and create new data variable so you can be sure that you has a new hash code every time you call createTicket method; final AllData data = await repo.createTicket(AllData( A: Check your AllData class properties. BLoC will not show a new state if it not unique. You need to check whether all fields of the AllData class are specified in the props field. And check your BlocProvider. For what you set listen: false ? BlocProvider.of<SampletestBloc>(context, listen: false)
why posting data to firebase using flutter bloc is not emitting?
I'm creating an app with firebase as a database. After sending data to firebase, app screen should pop out for that I had bloclistener on the screen but after sending the data to firestore database, nothing is happening, flow is stopped after coming to loaded state in bloc file why? check my code so that you will know. I can see my data in firebase but it is not popping out because flow is not coming to listener. state: class SampletestInitial extends SampletestState { @override List<Object> get props => []; } class SampletestLoaded extends SampletestState { SampletestLoaded(); @override List<Object> get props => []; } class SampletestError extends SampletestState { final error; SampletestError({required this.error}); @override List<Object> get props => [error]; } bloc: class SampletestBloc extends Bloc<SampletestEvent, SampletestState> { SampletestBloc() : super(SampletestInitial()) { on<SampletestPostData>((event, emit) async { emit(SampletestInitial()); try { await Repo().sampleTesting(event.des); emit(SampletestLoaded()); } catch (e) { emit(SampletestError(error: e.toString())); print(e); } }); } } Repo: ---- Firebase post data Future<void> sampleTesting(String des) async { final docTicket = FirebaseFirestore.instance.collection('sample').doc(); final json = {'Same': des}; await docTicket.set(json); } TicketScreen: //After clicking the button --- BlocProvider<SampletestBloc>.value( value: BlocProvider.of<SampletestBloc>(context, listen: false) ..add(SampletestPostData(description.text)), child: BlocListener<SampletestBloc, SampletestState>( listener: (context, state) { if (state is SampletestLoaded) { Navigator.pop(context); print("Popped out"); } }, ), );
[ "im not sure but i think that you have the same hash of:\nAllData? data;\n\ntry to remove AllData? data; and create new data variable so you can be sure that you has a new hash code every time you call createTicket method;\nfinal AllData data = await repo.createTicket(AllData(\n\n", "Check your AllData class properties.\nBLoC will not show a new state if it not unique.\nYou need to check whether all fields of the AllData class are specified in the props field.\nAnd check your BlocProvider. For what you set listen: false ?\nBlocProvider.of<SampletestBloc>(context, listen: false)\n\n" ]
[ 0, 0 ]
[]
[]
[ "bloc", "dart", "firebase", "flutter", "google_cloud_firestore" ]
stackoverflow_0074673631_bloc_dart_firebase_flutter_google_cloud_firestore.txt
Q: Heroku: Login system - authentication loop failure I am trying to login to my heroku account. I keep getting an error message that says "There was a problem with your login". There are no details of what the problem is. I tried changing my password through the forgot password action and I still get directed back around to the above error message. I can't contact Heroku's support team because I can't login. Has anyone found this problem and found a way around it - or even a way to contact Heroku? A: I had the same problem, couldn't login even after resetting my password. I use the Last Pass chrome extension to fill in forms. When I entered the (same) credentials in manually I was able to login. A: I started getting this error very recently. I believe it's linked to a recent email that I got regarding password requirement changes: Heroku will start resetting user account passwords today, May 4, 2022, as mentioned in our previous notification. We recommend that you reset your user account password in advance here and follow the best practices below: Minimum of 16 characters Minimum complexity of 3 out of 4: Uppercase, Lowercase, Numeric, Symbol Don't just add a letter or a 1 digit number to the existing password while changing Passwords may not be duplicated across accounts As mentioned elsewhere, resetting my password and ensuring LastPass included symbols resolved it. A: I reset my password and it helped. A: After a research I found that Last Pass auto generated password was not strong enough as per Heroku password reset requirement. I solved it by opening password reset link on different browser (in my case safari). enter strong password (ex: 51lxgpf2F52PgOBAPdAM@) A: I had this problem on "Opera", then I went to "Chrome", and still the error, but in the end it worked on "Microsoft Edge". So try changing your browser to this one)
Heroku: Login system - authentication loop failure
I am trying to login to my heroku account. I keep getting an error message that says "There was a problem with your login". There are no details of what the problem is. I tried changing my password through the forgot password action and I still get directed back around to the above error message. I can't contact Heroku's support team because I can't login. Has anyone found this problem and found a way around it - or even a way to contact Heroku?
[ "I had the same problem, couldn't login even after resetting my password. I use the Last Pass chrome extension to fill in forms. When I entered the (same) credentials in manually I was able to login. \n", "I started getting this error very recently. I believe it's linked to a recent email that I got regarding password requirement changes:\n\nHeroku will start resetting user account passwords today, May 4, 2022, as mentioned in our previous notification. We recommend that you reset your user account password in advance here and follow the best practices below:\n\nMinimum of 16 characters\nMinimum complexity of 3 out of 4: Uppercase, Lowercase, Numeric, Symbol\nDon't just add a letter or a 1 digit number to the existing password while changing\nPasswords may not be duplicated across accounts\n\n\nAs mentioned elsewhere, resetting my password and ensuring LastPass included symbols resolved it.\n", "I reset my password and it helped.\n", "After a research I found that Last Pass auto generated password was not strong enough as per Heroku password reset requirement. \nI solved it by opening password reset link on different browser (in my case safari). enter strong password (ex: 51lxgpf2F52PgOBAPdAM@)\n", "I had this problem on \"Opera\", then I went to \"Chrome\", and still the error, but in the end it worked on \"Microsoft Edge\". So try changing your browser to this one)\n" ]
[ 15, 14, 8, 3, 0 ]
[]
[]
[ "authentication", "heroku", "heroku_toolbelt" ]
stackoverflow_0044610158_authentication_heroku_heroku_toolbelt.txt
Q: Discord bot cannot connect to a voice channel I'm trying to make a Discord bot for the first time, but the bot can't connect to the voice channel, and no error is raised. The command runs successfully, but the bot does not connect to my voice channel when it enters the 'else' branch. ` class music_cog(commands.Cog): def __init__(self, bot): self.bot = bot @commands.command() async def join(self, ctx): if not ctx.author.voice: await ctx.send("You are not in a voice channel") else: print('join') channel = ctx.author.voice.channel await channel.connect() ` I tried giving my bot administrator permissions in my server, but it still didn't work. A: Simple: all you need to do is install PyNaCl: pip install PyNaCl. That missing dependency is the cause of the error you got.
Discord bot cannot connect to a voice channel
I'm trying to make a Discord bot for the first time, but the bot can't connect to the voice channel, and no error is raised. The command runs successfully, but the bot does not connect to my voice channel when it enters the 'else' branch. ` class music_cog(commands.Cog): def __init__(self, bot): self.bot = bot @commands.command() async def join(self, ctx): if not ctx.author.voice: await ctx.send("You are not in a voice channel") else: print('join') channel = ctx.author.voice.channel await channel.connect() ` I tried giving my bot administrator permissions in my server, but it still didn't work.
[ "Simple all u need to do is to download PyNaCl,\npip install PyNaCl\n\nhere is the error that u got\n" ]
[ 0 ]
[]
[]
[ "discord", "discord.py", "python" ]
stackoverflow_0074672935_discord_discord.py_python.txt
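To make the PyNaCl answer above concrete, here is a minimal self-contained sketch in Python. It assumes discord.py 2.x with PyNaCl installed (pip install -U discord.py PyNaCl); the cog and command names mirror the question, and the token string is a placeholder assumption, not a real value:

import asyncio
import discord
from discord.ext import commands

intents = discord.Intents.default()
intents.message_content = True   # privileged intent; must also be enabled in the dev portal
intents.voice_states = True      # needed so ctx.author.voice is populated

bot = commands.Bot(command_prefix="!", intents=intents)

class MusicCog(commands.Cog):
    def __init__(self, bot):
        self.bot = bot

    @commands.command()
    async def join(self, ctx):
        if not ctx.author.voice:
            await ctx.send("You are not in a voice channel")
            return
        # Without PyNaCl installed, this connect() call is what fails:
        # discord.py needs PyNaCl for the voice transport.
        channel = ctx.author.voice.channel
        await channel.connect()

async def main():
    async with bot:
        await bot.add_cog(MusicCog(bot))
        await bot.start("YOUR_BOT_TOKEN")  # placeholder, not a real token

asyncio.run(main())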
Q: Error while executing bash for loop from python subprocess I want to run this command from python mentioned here: ffmpeg -f concat -safe 0 -i <(for f in ./*.wav; do echo "file '$PWD/$f'"; done) -c copy output.wav But i can't even run this: subprocess.run('for i in {1..3}; do echo $i; done'.split(), capture_output=True) Error: Traceback (most recent call last): File "/media/russich555/hdd/Programming/Freelance/YouDo/21.intercom_record/test.py", line 36, in <module> pr = subprocess.run( ^^^^^^^^^^^^^^^ File "/usr/lib/python3.11/subprocess.py", line 546, in run with Popen(*popenargs, **kwargs) as process: ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.11/subprocess.py", line 1022, in __init__ self._execute_child(args, executable, preexec_fn, close_fds, File "/usr/lib/python3.11/subprocess.py", line 1899, in _execute_child raise child_exception_type(errno_num, err_msg, err_filename) FileNotFoundError: [Errno 2] No such file or directory: 'for' Process finished with exit code 1 Also tried add shell=True: subprocess.run('for i in {1..3}; do echo $i; done'.split(), capture_output=True, shell=True) stderr output: i: 1: Syntax error: Bad for loop variable Also tried pass /bin/bash, because documentation says that shell=True using /bin/sh subprocess.run('/bin/bash for i in {1..3}; do echo $i; done'.split(), capture_output=True) stderr output: /bin/bash: for: No such file or catalog A: There are two errors here, or really, three; You are trying to use shell features without shell=True You are trying to use Bash features, but the default shell on non-Windows platforms is POSIX sh; you can fix that with executable='/bin/bash' (obviously, adjust the path if necessary). More fundamentally, though, you want to avoid using a subprocess when Python can perform the loop natively. from pathlib import Path import subprocess subprocess.run( ['ffmpeg', '-f', 'concat', '-safe', '0', '-i', '/dev/stdin', '-c', 'copy', 'output.wav'], input="".join(f"file '{x}'\n" for x in Path.cwd().glob("*.wav")), text=True, capture_output=True) Relying on /dev/stdin for the input file is somewhat platform-dependent; in the worst case, you'll need to refactor to use a temporary file, or fall back to using the shell after all. subprocess.run(r"""ffmpeg -f concat -safe 0 -i <(printf "file '%s'\n" $PWD/*.wav) -c copy output.wav""", shell=True, executable='/bin/bash', text=True, capture_output=True) As noted in comments, you should either use shell=True and pass in a single string as the first argument for the shell to parse, or else pass in a list of tokens without shell=True and with no shell features like wildcard expansion, command substitution, variable interpolation, redirection, shell builtins, etc etc. If you really wanted to explicitly run Bash, the syntax for that would look like subprocess.run( ['bash', '-c', r"""ffmpeg -f concat -safe 0 -i <(printf "file '%s'\n" $PWD/*.wav) -c copy output.wav"""], text=True, capture_output=True) (The syntax bash for loop etc tries to find a file named for and run it with Bash, passing in loop and etc as arguments.) It's not clear why you are using capture_output=True here; in order for that to be useful, you need to examine the .stdout (and/or perhaps .stderr) attributes of the object returned by subprocess.run. If you just want to discard the output, use stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL
Error while executing bash for loop from python subprocess
I want to run this command (mentioned here) from Python: ffmpeg -f concat -safe 0 -i <(for f in ./*.wav; do echo "file '$PWD/$f'"; done) -c copy output.wav But I can't even run this: subprocess.run('for i in {1..3}; do echo $i; done'.split(), capture_output=True) Error: Traceback (most recent call last): File "/media/russich555/hdd/Programming/Freelance/YouDo/21.intercom_record/test.py", line 36, in <module> pr = subprocess.run( ^^^^^^^^^^^^^^^ File "/usr/lib/python3.11/subprocess.py", line 546, in run with Popen(*popenargs, **kwargs) as process: ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.11/subprocess.py", line 1022, in __init__ self._execute_child(args, executable, preexec_fn, close_fds, File "/usr/lib/python3.11/subprocess.py", line 1899, in _execute_child raise child_exception_type(errno_num, err_msg, err_filename) FileNotFoundError: [Errno 2] No such file or directory: 'for' Process finished with exit code 1 I also tried adding shell=True: subprocess.run('for i in {1..3}; do echo $i; done'.split(), capture_output=True, shell=True) stderr output: i: 1: Syntax error: Bad for loop variable I also tried passing /bin/bash, because the documentation says that shell=True uses /bin/sh: subprocess.run('/bin/bash for i in {1..3}; do echo $i; done'.split(), capture_output=True) stderr output: /bin/bash: for: No such file or catalog
[ "There are two errors here, or really, three;\n\nYou are trying to use shell features without shell=True\nYou are trying to use Bash features, but the default shell on non-Windows platforms is POSIX sh; you can fix that with executable='/bin/bash' (obviously, adjust the path if necessary).\n\nMore fundamentally, though, you want to avoid using a subprocess when Python can perform the loop natively.\nfrom pathlib import Path\nimport subprocess\n\nsubprocess.run(\n ['ffmpeg', '-f', 'concat', '-safe', '0',\n '-i', '/dev/stdin', '-c', 'copy', 'output.wav'],\n input=\"\".join(f\"file '{x}'\\n\" for x in Path.cwd().glob(\"*.wav\")),\n text=True, capture_output=True)\n\nRelying on /dev/stdin for the input file is somewhat platform-dependent; in the worst case, you'll need to refactor to use a temporary file, or fall back to using the shell after all.\nsubprocess.run(r\"\"\"ffmpeg -f concat -safe 0 -i <(printf \"file '%s'\\n\" $PWD/*.wav) -c copy output.wav\"\"\",\n shell=True, executable='/bin/bash',\n text=True, capture_output=True)\n\nAs noted in comments, you should either use shell=True and pass in a single string as the first argument for the shell to parse, or else pass in a list of tokens without shell=True and with no shell features like wildcard expansion, command substitution, variable interpolation, redirection, shell builtins, etc etc.\nIf you really wanted to explicitly run Bash, the syntax for that would look like\nsubprocess.run(\n ['bash', '-c',\n r\"\"\"ffmpeg -f concat -safe 0 -i <(printf \"file '%s'\\n\" $PWD/*.wav) -c copy output.wav\"\"\"],\n text=True, capture_output=True)\n\n(The syntax bash for loop etc tries to find a file named for and run it with Bash, passing in loop and etc as arguments.)\nIt's not clear why you are using capture_output=True here; in order for that to be useful, you need to examine the .stdout (and/or perhaps .stderr) attributes of the object returned by subprocess.run. If you just want to discard the output, use stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL\n" ]
[ 1 ]
[]
[]
[ "bash", "python", "subprocess" ]
stackoverflow_0074673644_bash_python_subprocess.txt
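The answer above mentions falling back to a temporary file when /dev/stdin is not usable, without showing it; here is a short sketch of that variant in Python, assuming only that ffmpeg is on PATH and that the .wav files sit in the current directory:

import subprocess
import tempfile
from pathlib import Path

wavs = sorted(Path.cwd().glob("*.wav"))
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
    # ffmpeg's concat demuxer expects one "file '<path>'" line per input
    tmp.write("".join(f"file '{p}'\n" for p in wavs))
    list_path = tmp.name

subprocess.run(
    ["ffmpeg", "-f", "concat", "-safe", "0",
     "-i", list_path, "-c", "copy", "output.wav"],
    check=True,
)
Path(list_path).unlink()  # clean up the temporary list file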
Q: 'tsc command not found' in compiling typescript I want to install typescript, so I used the following command: npm install -g typescript and test tsc --version, but it just show 'tsc command not found'. I have tried many ways as suggested in stackoverflow, github and other sites. but it doesn't work. How could I know typescript is installed and where it is. my OS is Unix, OS X El Capitan 10.11.6, node version is 4.4.3, npm version is 3.10.5 A: A few tips in order restart the terminal restart the machine reinstall nodejs + then run npm install typescript -g If it still doesn't work run npm config get prefix to see where npm install -g is putting files (append bin to the output) and make sure that they are in the path (the node js setup does this. Maybe you forgot to tick that option). A: You are all messing with the global installations and -path files. Just a little error might damage every project you have ever written, and you will spend the rest of the night trying to get a console.log('hi') to work again. If you have run npm i typescript --save-dev in your project - just try to run: npx tsc And see if it works before messing with global stuff (unless of course you really know what you are doing) A: I had to do this: npx tsc app.ts A: After finding all solutions for this small issue for macOS only. Finally, I got my TSC works on my MacBook pro. This might be the best solution I found out. For all macOS users, instead of installing TypeScript using NPM, you can install TypeScript using homebrew. brew install typescript Please see attached screencap for reference. A: Globally installing TypeScript package worked for me. npm install typescript -g A: If your TSC command is not found in MacOS after proper installation of TypeScript (using the following command: $ sudo npm install -g typescript, then ensure Node /bin path is added to the PATH variable in .bash_profile. Open .bash_profile using terminal: $ open ~/.bash_profile; Edit/Verify bash profile to include the following line (using your favorite text editor): export PATH="$PATH:"/usr/local/lib/node_modules/node/bin""; Load the latest bash profile using terminal: source ~/.bash_profile; Lastly, try the command: $ tsc --version. A: Easy fix for Mac I found. Just run these commands: sudo npm install -g concurrently sudo npm install -g lite-server sudo npm install -g typescript Nothing worked except this for me. A: This answer is specific for iTermV2 on MAC First of all, I needed to instal as sudo (admin) during NPM install sudo npm install -g typescript NPM installs the libraries under /usr/local/Cellar/node/<your latest version>/lib/node_modules/typescript folder and symlinks at /usr/local/Cellar/node/<your latest version>/bin/tsc hence I went ~/.zshrc ( .bashrc, if you use bash)and added /usr/local/Cellar/node/<your latest version>/bin to the $PATH. reload the shell profile via source ~/.zshrc (.bashrc in your case) A: I had this same problem on Ubuntu 19.10 LTS. To solve this I ran the following command: $ sudo apt install node-typescript After that, I was able to use tsc. 
A: For mac users, you don't need to restart your laptop or doing any other commands Use brew install typescript A: The only solution that work for me was put npx tsc -v or for the compiling npx tsc salida.ts "salida.ts" is the name of the file A: For windows and yarn user, try yarn tsc --init A: Check your npm version If it's not properly installed, then install it first run this command npm install typescript -g now tsc <file_name>.ts It'll create a corresponding .js file. eg <file_name>.js now try node <file_name>.js A: None of above worked for me. I tried this as well, yum install typescript was able to compile by hook and crook as follows. Not recommended but just a workaround. Just install locally using npm, as npm install typescript and verify in node_module folder, if its downloaded. and then run, ./node_modules/typescript/bin/tsc --help ./node_modules/typescript/bin/tsc //this line actually runs and compile and generate the compiled file. A: Non-admin solution I do not have admin privileges since this machine was issued by my job. get path of where node modules are being installed and copy to clipboard npm config get prefix | clip don't have clip? just copy output from npm config get prefix add copied path to environment variables my preferred method (Windows) (Ctrl + R), paste rundll32 sysdm.cpl,EditEnvironmentVariables under User Variables, double-click on Path > New > Paste copied path A: None of above answer solve my problem. The fact is that my project did not have type script installed. But locally I had run npm install -g typescript. So I did not notice that typescript node dependency was not in my package json. When I pushed it to server side, and run npm install, then npx tsc I get a tsc not found. In facts remote server did not have typescript installed. That was hidden because of my local global typescript install. A: On Windows 10 i solved it by adding %APPDATA%\npm to the path A: I have tried a lot to deploy the Node.js typescript project on Heroku and I have tried different solutions but none of them working for me. So, I have implemented a solution that is to create a build locally which is a dist folder, and just only push dist folder with package.json files, you don't need to push your src folder to Heroku. and in your script add "start": "node dist/index.js" Here are my project structure: .gitignore file: package.json file: "start": "node dist/index.js", "deploy": "tsc && git add . && git commit -m Heroku && git push heroku master", "dev": "ts-node-dev --respawn --pretty --transpile-only src/index.ts" just need to add these scripts: A: I was having trouble with this because I didn't want to globally install typescript. I found I had to add a script to the package.json that called tsc for me. The solution can be found here - https://stackoverflow.com/a/41446584/6301243 A: Use: npm rebuild typescript This will rebuild the tsc link on your machine. A: In package.json "scripts": { "tsc": "./node_modules/typescript/bin/tsc", "postinstall": "npm run tsc" }, Works for me for Heroku deployment. Installing typescript npm install -D typescript and writing tsc in the build script "build": "tsc", does not work for me. A: First Install typescript by running this command npm install typescript or npm install -g typescript [ for installing typescript globally ] then run npx tsc --version for checking version of Typescript, instead of tsc --version. 
Same goes to running any typescript file, just run npx tsc <filename>.ts For example, for a file named hello.ts, run npx tsc hello.ts. A: Don't forget to use sudo if you are using Linux. sudo npm install -g typescript
'tsc command not found' in compiling typescript
I want to install TypeScript, so I used the following command: npm install -g typescript and then tested with tsc --version, but it just shows 'tsc command not found'. I have tried many fixes suggested on Stack Overflow, GitHub and other sites, but none of them work. How can I check that TypeScript is installed and where it is? My OS is OS X El Capitan 10.11.6, my Node version is 4.4.3, and my npm version is 3.10.5.
[ "A few tips in order\n\nrestart the terminal \nrestart the machine\nreinstall nodejs + then run npm install typescript -g\n\nIf it still doesn't work run npm config get prefix to see where npm install -g is putting files (append bin to the output) and make sure that they are in the path (the node js setup does this. Maybe you forgot to tick that option).\n", "You are all messing with the global installations and -path files. Just a little error might damage every project you have ever written, and you will spend the rest of the night trying to get a console.log('hi') to work again.\nIf you have run npm i typescript --save-dev in your project - just try to run:\nnpx tsc \n\nAnd see if it works before messing with global stuff (unless of course you really know what you are doing)\n", "I had to do this:\nnpx tsc app.ts\n\n", "After finding all solutions for this small issue for macOS only.\nFinally, I got my TSC works on my MacBook pro.\nThis might be the best solution I found out.\nFor all macOS users, instead of installing TypeScript using NPM, you can install TypeScript using homebrew.\nbrew install typescript\n\nPlease see attached screencap for reference.\n\n", "Globally installing TypeScript package worked for me.\nnpm install typescript -g\n\n", "If your TSC command is not found in MacOS after proper installation of TypeScript (using the following command: $ sudo npm install -g typescript, then ensure Node /bin path is added to the PATH variable in .bash_profile.\nOpen .bash_profile using terminal: $ open ~/.bash_profile;\nEdit/Verify bash profile to include the following line (using your favorite text editor):\nexport PATH=\"$PATH:\"/usr/local/lib/node_modules/node/bin\"\";\n\nLoad the latest bash profile using terminal: source ~/.bash_profile;\nLastly, try the command: $ tsc --version.\n", "Easy fix for Mac I found. Just run these commands:\nsudo npm install -g concurrently\nsudo npm install -g lite-server\nsudo npm install -g typescript\n\nNothing worked except this for me.\n", "This answer is specific for iTermV2 on MAC\n\nFirst of all, I needed to instal as sudo (admin) during NPM install\nsudo npm install -g typescript\nNPM installs the libraries under /usr/local/Cellar/node/<your latest version>/lib/node_modules/typescript folder and symlinks at /usr/local/Cellar/node/<your latest version>/bin/tsc\n\nhence I went ~/.zshrc ( .bashrc, if you use bash)and added /usr/local/Cellar/node/<your latest version>/bin to the $PATH. \n\nreload the shell profile via source ~/.zshrc (.bashrc in your case)\n\n", "I had this same problem on Ubuntu 19.10 LTS.\nTo solve this I ran the following command:\n$ sudo apt install node-typescript\n\nAfter that, I was able to use tsc.\n", "For mac users, you don't need to restart your laptop or doing any other commands\nUse brew install typescript\n", "The only solution that work for me was put \nnpx tsc -v\nor for the compiling\nnpx tsc salida.ts\n\"salida.ts\" is the name of the file\n", "For windows and yarn user, try yarn tsc --init\n", "\nCheck your npm version\n\nIf it's not properly installed, then install it first\n\nrun this command npm install typescript -g\n\nnow tsc <file_name>.ts\n\nIt'll create a corresponding .js file. eg <file_name>.js\n\nnow try node <file_name>.js \n\n\n", "None of above worked for me.\nI tried this as well,\nyum install typescript \n\nwas able to compile by hook and crook as follows. 
Not recommended but just a workaround.\nJust install locally using npm, as npm install typescript and verify in node_module folder, if its downloaded. and then run,\n./node_modules/typescript/bin/tsc --help\n./node_modules/typescript/bin/tsc //this line actually runs and compile and generate the compiled file. \n\n", "Non-admin solution\nI do not have admin privileges since this machine was issued by my job.\n\nget path of where node modules are being installed and copy to clipboard\n\n\nnpm config get prefix | clip\ndon't have clip? just copy output from npm config get prefix\n\nadd copied path to environment variables\n\n\nmy preferred method (Windows)\n(Ctrl + R), paste rundll32 sysdm.cpl,EditEnvironmentVariables\nunder User Variables, double-click on Path > New > Paste copied path\n\n\n", "None of above answer solve my problem.\nThe fact is that my project did not have type script installed.\nBut locally I had run npm install -g typescript. So I did not notice that typescript node dependency was not in my package json.\nWhen I pushed it to server side, and run npm install, then npx tsc I get a tsc not found. In facts remote server did not have typescript installed. That was hidden because of my local global typescript install.\n", "On Windows 10 i solved it by adding %APPDATA%\\npm to the path\n", "I have tried a lot to deploy the Node.js typescript project on Heroku and I have tried different solutions but none of them working for me. So, I have implemented a solution that is to create a build locally which is a dist folder, and just only push dist folder with package.json files, you don't need to push your src folder to Heroku. and in your script add \"start\": \"node dist/index.js\"\nHere are my project structure:\n\n.gitignore file:\n\npackage.json file:\n\"start\": \"node dist/index.js\",\n\"deploy\": \"tsc && git add . && git commit -m Heroku && git push heroku master\",\n\"dev\": \"ts-node-dev --respawn --pretty --transpile-only src/index.ts\"\n\njust need to add these scripts:\n\n", "I was having trouble with this because I didn't want to globally install typescript. I found I had to add a script to the package.json that called tsc for me. The solution can be found here - https://stackoverflow.com/a/41446584/6301243\n", "Use:\nnpm rebuild typescript\n\nThis will rebuild the tsc link on your machine.\n", "In package.json\n \"scripts\": {\n \"tsc\": \"./node_modules/typescript/bin/tsc\",\n \"postinstall\": \"npm run tsc\"\n },\n\nWorks for me for Heroku deployment.\nInstalling typescript npm install -D typescript and writing tsc in the build script \"build\": \"tsc\", does not work for me.\n", "First Install typescript by running this command npm install typescript\nor npm install -g typescript [ for installing typescript globally ] then run npx tsc --version for checking version of Typescript, instead of tsc --version.\nSame goes to running any typescript file, just run npx tsc <filename>.ts\nFor example, for a file named hello.ts, run npx tsc hello.ts.\n", "Don't forget to use sudo if you are using Linux.\nsudo npm install -g typescript \n\n" ]
[ 233, 123, 82, 29, 15, 10, 4, 4, 3, 3, 2, 2, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "For windows:\nAdd the path by using command as below in command prompt:\npath=%path%;C:\\Users\\\\npm\nAs in my case, the above path was not registered for command.\n%userprofile% in run windows, will give you path to C:\\users\\\n", "I solved this on my machine by just running sudo npm install in the directory that I was getting the error.\n", "This works perfectly on Mac. Tested on macOS High Sierra\nsudo npm install -g concurrently\nsudo npm install -g lite-server\nsudo npm install -g typescript\ntsc --init\n\nThis generates the tsconfig.json file.\n" ]
[ -1, -1, -3 ]
[ "npm", "tsc", "typescript" ]
stackoverflow_0039404922_npm_tsc_typescript.txt
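Since most of the answers above come down to PATH problems, a small diagnostic sketch in Python can narrow this down. It assumes only that npm itself is installed and reachable (on Windows you may need shell=True for the npm call):

import shutil
import subprocess

# Is tsc visible on PATH at all?
tsc = shutil.which("tsc")
print("tsc found at:", tsc or "NOT FOUND on PATH")

# Where does npm put globally installed binaries?
prefix = subprocess.run(
    ["npm", "config", "get", "prefix"],
    capture_output=True, text=True, check=True,
).stdout.strip()
print("npm global prefix:", prefix)
print("expect global binaries under:", prefix + "/bin (or the prefix itself on Windows)")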
Q: How to remove numbered newlines from string? I'm cleaning some text data and I've come across a problem associated with removing newline text. For this data, there are not merely \n strings in the text, but \n\n strings, as well as numbered newlines such as: \n2 and \n\n2. The latter are my problem. How does one remove this using regex? I'm working in R. Here is some sample text and what I've used, so far: #string string <- "There is a square in the apartment. \n\n4Great laughs, which I hear from the other room. 4 laughs. Several. 9 times ten.\n2" #code attempt gsub("[\r\\n0-9]", '', string) The problem with this regex code is that it removes numbers and matches with the letter n. I would like to have the following output: "There is a square in the apartment. Great laughs, which I hear from the other room. 4 laughs. Several. 9 times ten." I'm using regexr for reference. A: To remove newlines and numbers from your string, you can use the following regular expression: gsub("\\n[\\n]?[0-9]?", '', string) This will remove any \n characters that are followed by an optional \n character and a number. Note that the backslashes in the regex need to be escaped in the string, so we use two backslashes for each one in the regex. Here's an example of using this regex in R: #string string <- "There is a square in the apartment. \n\n4Great laughs, which I hear from the other room. 4 laughs. Several. 9 times ten.\n2" #code attempt gsub("\\n[\\n]?[0-9]?", '', string) This will output the following string: "There is a square in the apartment. Great laughs, which I hear from the other room. 4 laughs. Several. 9 times ten." A: Writing the pattern like this [\r\\n0-9] matches either a carriage return, one of the chars \ or n or a digit 0-9 You could write the pattern matching 1 or more carriage returns or newlines, followed by optional digits: [\r\n]+[0-9]* Example: string <- "There is a square in the apartment. \n\n4Great laughs, which I hear from the other room. 4 laughs. Several. 9 times ten.\n2" gsub("[\r\n]+[0-9]*", '', string) Output [1] "There is a square in the apartment. Great laughs, which I hear from the other room. 4 laughs. Several. 9 times ten." See a R demo.
How to remove numbered newlines from string?
I'm cleaning some text data and I've come across a problem associated with removing newline text. For this data, there are not merely \n strings in the text, but \n\n strings, as well as numbered newlines such as: \n2 and \n\n2. The latter are my problem. How does one remove this using regex? I'm working in R. Here is some sample text and what I've used, so far: #string string <- "There is a square in the apartment. \n\n4Great laughs, which I hear from the other room. 4 laughs. Several. 9 times ten.\n2" #code attempt gsub("[\r\\n0-9]", '', string) The problem with this regex code is that it removes numbers and matches with the letter n. I would like to have the following output: "There is a square in the apartment. Great laughs, which I hear from the other room. 4 laughs. Several. 9 times ten." I'm using regexr for reference.
[ "To remove newlines and numbers from your string, you can use the following regular expression:\ngsub(\"\\\\n[\\\\n]?[0-9]?\", '', string)\n\nThis will remove any \\n characters that are followed by an optional \\n character and a number. Note that the backslashes in the regex need to be escaped in the string, so we use two backslashes for each one in the regex.\nHere's an example of using this regex in R:\n#string\nstring <- \"There is a square in the apartment. \\n\\n4Great laughs, which I hear from the other room. 4 laughs. Several. 9 times ten.\\n2\"\n#code attempt\ngsub(\"\\\\n[\\\\n]?[0-9]?\", '', string)\n\nThis will output the following string:\n\"There is a square in the apartment. Great laughs, which I hear from the other room. 4 laughs. Several. 9 times ten.\"\n\n", "Writing the pattern like this [\\r\\\\n0-9] matches either a carriage return, one of the chars \\ or n or a digit 0-9\nYou could write the pattern matching 1 or more carriage returns or newlines, followed by optional digits:\n[\\r\\n]+[0-9]*\n\nExample:\nstring <- \"There is a square in the apartment. \\n\\n4Great laughs, which I hear from the other room. 4 laughs. Several. 9 times ten.\\n2\"\ngsub(\"[\\r\\n]+[0-9]*\", '', string)\n\nOutput\n[1] \"There is a square in the apartment. Great laughs, which I hear from the other room. 4 laughs. Several. 9 times ten.\"\n\nSee a R demo.\n" ]
[ 0, 0 ]
[]
[]
[ "r", "regex", "string" ]
stackoverflow_0074673857_r_regex_string.txt
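The pattern in the second answer above is not R-specific; as a cross-check, the same regex produces the desired output in Python's re module, assuming only the question's sample string:

import re

string = ("There is a square in the apartment. \n\n4Great laughs, which I hear "
          "from the other room. 4 laughs. Several. 9 times ten.\n2")

# One or more CR/LF characters, followed by any digits that trail them
cleaned = re.sub(r"[\r\n]+[0-9]*", "", string)
print(cleaned)
# -> There is a square in the apartment. Great laughs, which I hear from the
#    other room. 4 laughs. Several. 9 times ten.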
Q: How to prevent json-server from reloading the page after a POST? I'm using json-server for mocking API requests and I'm experiencing an annoying behavior: each POST that I make causes the page to reload... I've read their API documentation but didn't come up with anything. I'm using a simple jQuery AJAX request that looks like this: $.ajax({ url: 'http://localhost:3000/list', type: 'post', data: itemObj, dataType: 'json', success: function (itemObj) { todoListArr.push(itemObj); createMarkup(); $(this).val(''); // clear input return false; } }); I've tried e.preventDefault(), but it has nothing to do with the AJAX call - it's the json-server that causes it... The command I'm using to run the server is: npx json-server --watch db.json I also tried npx json-server A: I had the same issue. If you're using VS Code, try not using the Live Server extension (disable it), and it should work fine. A: It may be because of a live-server extension or something similar that refreshes HTML pages, for example the VS Code Live Server extension. Another possibility: if you're using a form and something inside or outside the form is listening to the same submit event, event bubbling may cause this, so you may try e.stopPropagation(); as well.
How to prevent json-server from reloading the page after a POST?
I'm using json-server for mocking API requests and I'm experiencing an annoying behavior: each POST that I make causes the page to reload... I've read their API documentation but didn't come up with anything. I'm using a simple jQuery AJAX request that looks like this: $.ajax({ url: 'http://localhost:3000/list', type: 'post', data: itemObj, dataType: 'json', success: function (itemObj) { todoListArr.push(itemObj); createMarkup(); $(this).val(''); // clear input return false; } }); I've tried e.preventDefault(), but it has nothing to do with the AJAX call - it's the json-server that causes it... The command I'm using to run the server is: npx json-server --watch db.json I also tried npx json-server
[ "I had the same issue, if your using VS code try not use the live server extension(disable it), and it should work fine.\n", "It may be because of live server extension or something like that refresh html pages if you're using for example vs code live server. Another possibility is if you're using a form and if something inside the form or outside the form is listening to the same submit event that may cause event bubbling so you may try e.stopPropagation(); as well.\n" ]
[ 2, 0 ]
[ "I have the same issue, too. Finally, I think it is because of JSON-server after trying lots of ways.\nI knew JSON-server after using autogenerate API site: https://mockapi.io/, and I prefer mockap.io more.\n" ]
[ -1 ]
[ "jquery", "json_server" ]
stackoverflow_0066836919_jquery_json_server.txt
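One way to verify the point made in the answers above - that json-server itself never reloads a page, and that the reload comes from the browser tooling - is to POST to it from outside any browser. A sketch assuming the server from the question (npx json-server --watch db.json with a "list" collection on localhost:3000) and the requests package installed; the item fields are hypothetical:

import requests

resp = requests.post(
    "http://localhost:3000/list",
    json={"title": "buy milk", "done": False},  # hypothetical item shape
)
print(resp.status_code)  # 201 on success; no page exists here to reload
print(resp.json())       # the stored object, including its generated id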
Q: Sentiment Analysis in R for German language I'm trying to conduct the sentiment analysis in German in R. However, the output does not seem promising as I could not find a way to make it in German language. Would you have any suggestions for me? #libraries library(tidyverse) library(tokenizers) library(stopwords) library(sentimentr) #load data data <- tribble( ~content, "Nimmt euch in Acht✌️#tage #periode #blu #hände #rot #blute #wald #fy #viral", "ich liebe uns #wortwitze #Periode #Tage #couplegoals", "Mit KadeZyklus bei Krämpfen gibt es jetzt endlich ein pflanzliches Helferlein gegen leichte Unterleibskrämpfe!", "Es ist wie es ist Jungs" ) # count freq of words words_as_tokens <- setNames(lapply(sapply(data$content, tokenize_words, stopwords = stopwords(language = "en", source = "smart")), function(x) as.data.frame(sort(table(x), TRUE), stringsAsFactors = F)), data$content) # tidyverse's job stop_german <- data.frame(word = stopwords::stopwords("de"), stringsAsFactors = FALSE) df <- words_as_tokens %>% bind_rows(, .id = "content") %>% rename(word = x) %>% anti_join(stop_german, by = c("word")) #sentiment df$sentiment_score <- sapply(df$content, function(x) mean(sentiment(x)$sentiment)) A: You have specified the wrong source for stopwords and the wrong language. smart as source does not contain de as language. If you do stopwords_getsources() you get all available sources for stopwords. With stopwords_getlanguages(source = 'snowball') you'll see that this contains de. Change your stopwords accordingly and it will work. # count freq of words words_as_tokens <- setNames(lapply( sapply(data$content, tokenize_words, stopwords = stopwords(language = "de", source = "snowball") ), function(x) as.data.frame(sort(table(x), TRUE), stringsAsFactors = F) ), data$content)
Sentiment Analysis in R for German language
I'm trying to conduct sentiment analysis in German in R. However, the output does not seem promising, as I could not find a way to make it work for the German language. Would you have any suggestions for me? #libraries library(tidyverse) library(tokenizers) library(stopwords) library(sentimentr) #load data data <- tribble( ~content, "Nimmt euch in Acht✌️#tage #periode #blu #hände #rot #blute #wald #fy #viral", "ich liebe uns #wortwitze #Periode #Tage #couplegoals", "Mit KadeZyklus bei Krämpfen gibt es jetzt endlich ein pflanzliches Helferlein gegen leichte Unterleibskrämpfe!", "Es ist wie es ist Jungs" ) # count freq of words words_as_tokens <- setNames(lapply(sapply(data$content, tokenize_words, stopwords = stopwords(language = "en", source = "smart")), function(x) as.data.frame(sort(table(x), TRUE), stringsAsFactors = F)), data$content) # tidyverse's job stop_german <- data.frame(word = stopwords::stopwords("de"), stringsAsFactors = FALSE) df <- words_as_tokens %>% bind_rows(, .id = "content") %>% rename(word = x) %>% anti_join(stop_german, by = c("word")) #sentiment df$sentiment_score <- sapply(df$content, function(x) mean(sentiment(x)$sentiment))
[ "You have specified the wrong source for stopwords and the wrong language. smart as source does not contain de as language. If you do stopwords_getsources() you get all available sources for stopwords. With stopwords_getlanguages(source = 'snowball') you'll see that this contains de.\nChange your stopwords accordingly and it will work.\n# count freq of words\nwords_as_tokens <- setNames(lapply(\n sapply(data$content,\n tokenize_words,\n stopwords = stopwords(language = \"de\", source = \"snowball\")\n ),\n function(x) as.data.frame(sort(table(x), TRUE), stringsAsFactors = F)\n), data$content)\n\n" ]
[ 0 ]
[]
[]
[ "dplyr", "r", "sentiment_analysis" ]
stackoverflow_0074670729_dplyr_r_sentiment_analysis.txt
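The fix above - matching the stopword source and language to the text - applies beyond R. For comparison, a hedged Python analogue using NLTK, assuming nltk is installed and the stopwords corpus has been downloaded via nltk.download("stopwords"):

from nltk.corpus import stopwords

# Using "english" here would silently miss every German stopword,
# which is the same mismatch as language = "en" in the question.
german_stops = set(stopwords.words("german"))

text = "ich liebe uns #wortwitze #Periode #Tage #couplegoals"
tokens = [w for w in text.lower().split() if w not in german_stops]
print(tokens)  # stopwords like "ich" and "uns" are removed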
Q: Changing text value after button is clicked I have an input, text and button component. I want to change the text value with input value when button is clicked. I searched on stackoverflow but they only change text after input text is change by using onChangeText prop of textinput. A: Use onPress prop of the button component. This prop takes a function that will be called when the button is clicked. In that function, you can use the setState method to update the state of your component with the new text value from the input. This will trigger a re-render of your component and update the text value. class MyComponent extends React.Component { constructor(props) { super(props); this.state = { textValue: '', }; } onButtonPress = () => { const { inputValue } = this.state; this.setState({ textValue: inputValue, }); } render() { const { textValue } = this.state; return ( <View> <TextInput value={inputValue} onChangeText={inputValue => this.setState({ inputValue })} /> <Button onPress={this.onButtonPress} title="Update Text" /> <Text>{textValue}</Text> </View> ); } } onButtonPress function is called when the button is clicked and it updates the textValue state with the current inputValue, which update text with the new value assigned. A: To change the text value of a Text component based on the value of an Input component when a Button is clicked, you can use the onPress prop of the Button component to define an event handler that updates the text value of the Text component. Here is an example (NOTE: just a sample - you didn't provide a code on which I could base it) of how you could do this: import React from 'react'; import { Button, Input, Text } from 'react-native'; class MyApp extends React.Component { constructor(props) { super(props); this.state = { inputValue: '', textValue: '', }; } handleInputChange = (inputValue) => { this.setState({ inputValue }); } handleButtonPress = () => { this.setState({ textValue: this.state.inputValue }); } render() { return ( <> <Input value={this.state.inputValue} onChangeText={this.handleInputChange} /> <Button title="Update text" onPress={this.handleButtonPress} /> <Text>{this.state.textValue}</Text> </> ); } } In this example, the MyApp component maintains the state of the input value and the text value. The handleInputChange event handler is called when the value of the Input component changes, and updates the input value in the component's state. The handleButtonPress event handler is called when the Button is pressed, and updates the text value in the component's state with the current input value. Finally, the Text component is rendered with the current text value from the component's state. By using the onChangeText and onPress props to define event handlers that update the component's state, you can control the text value of a Text component based on the value of an Input component.
Changing text value after button is clicked
I have an Input, a Text and a Button component. I want to replace the text value with the input value when the button is clicked. I searched on Stack Overflow, but the existing answers only change the text as the input text changes, by using the onChangeText prop of TextInput.
[ "Use\n\nonPress\n\nprop of the button component. This prop takes a function that will be called when the button is clicked.\nIn that function, you can use the setState method to update the state of your component with the new text value from the input.\nThis will trigger a re-render of your component and update the text value.\nclass MyComponent extends React.Component {\n constructor(props) {\n super(props);\n this.state = {\n textValue: '',\n };\n }\n\n onButtonPress = () => {\n const { inputValue } = this.state;\n this.setState({\n textValue: inputValue,\n });\n }\n\n render() {\n const { textValue } = this.state;\n return (\n <View>\n <TextInput\n value={inputValue}\n onChangeText={inputValue => this.setState({ inputValue })}\n />\n <Button onPress={this.onButtonPress} title=\"Update Text\" />\n <Text>{textValue}</Text>\n </View>\n );\n }\n}\n\nonButtonPress function is called when the button is clicked and it updates the textValue state with the current inputValue, which update text with the new value assigned.\n", "To change the text value of a Text component based on the value of an Input component when a Button is clicked, you can use the onPress prop of the Button component to define an event handler that updates the text value of the Text component.\nHere is an example (NOTE: just a sample - you didn't provide a code on which I could base it) of how you could do this:\nimport React from 'react';\nimport { Button, Input, Text } from 'react-native';\n\nclass MyApp extends React.Component {\n constructor(props) {\n super(props);\n\n this.state = {\n inputValue: '',\n textValue: '',\n };\n }\n\n handleInputChange = (inputValue) => {\n this.setState({ inputValue });\n }\n\n handleButtonPress = () => {\n this.setState({ textValue: this.state.inputValue });\n }\n\n render() {\n return (\n <>\n <Input\n value={this.state.inputValue}\n onChangeText={this.handleInputChange}\n />\n <Button\n title=\"Update text\"\n onPress={this.handleButtonPress}\n />\n <Text>{this.state.textValue}</Text>\n </>\n );\n }\n}\n\nIn this example, the MyApp component maintains the state of the input value and the text value. The handleInputChange event handler is called when the value of the Input component changes, and updates the input value in the component's state. The handleButtonPress event handler is called when the Button is pressed, and updates the text value in the component's state with the current input value. Finally, the Text component is rendered with the current text value from the component's state.\nBy using the onChangeText and onPress props to define event handlers that update the component's state, you can control the text value of a Text component based on the value of an Input component.\n" ]
[ 0, 0 ]
[]
[]
[ "react_native" ]
stackoverflow_0074674181_react_native.txt
Q: My code is not working. How should I add a JScrollPane on a JTextArea that is in a JTabbedPane? this is the code I have so far public class WindowHelpGui extends JFrame{ JScrollPane scroll; //constructor public WindowHelpGui(){ //add window title super("Help"); //set window layout setLayout(null); JTabbedPane tabbedPane = new JTabbedPane(); tabbedPane.setBounds(10, 10, 750, 550); JPanel panel1 = new JPanel(); JPanel panel2 = new JPanel(); JPanel panel3 = new JPanel(); panel1.add(new JTextArea("\nDRAFT TEXT 101 \ncodnvfdinofndovndofinvinfdivnifdnvidfnviofnivnidfnviofdnvindfinv ivondfviondfiovndfionvifdnvionivfdninfdinfivnidfovniofnviofnvifdnv fndiovnf\nh\ne\nl\nl\no\n\nf\nr\no\nm\n\nt\nh\ne\n\no\nt\nh"+ "e\nr\n\ns\ni\nd\ne\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n nHello")); panel2.add(new JTextArea("panel2 working")); panel3.add(new JTextArea("panel3 working")); JScrollPane scroll = new JScrollPane(panel1, JScrollPane. HORIZONTAL_SCROLLBAR_NEVER, JScrollPane.VERTICAL_SCROLLBAR_AS_NEEDE add(scroll, panel1); tabbedPane.add("How to play", panel1); tabbedPane.add("Seeds", panel2); tabbedPane.add("Tools", panel3); //scroll = new Jscro this.add(tabbedPane); //set size of window setSize(800,600); //set visibility setVisible(true); //set resizable setResizable(false); //set default close setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); } when I run this I dont see any window popping up. Can someone help me please? I have tried also tried using add(scroll, tabbedPane) but it just adds a new tab on my window. A: Call pack() before making your JFrame visible and use setPreferredSize() of the Jframe instead of setSize(). Add a setMinimumSize() as well to the JFrame. Also there can only be one root component of a JFrame. Use a JPanel with a layout to add more components. Don't use a null layout. It will require you to explicitly set position and size of each child component by using setBounds() method of each child. Example code: JFrame frame = new JFrame("Simple GUI"); frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); JLabel textLabel = new JLabel("I'm a label in the window",SwingConstants.CENTER); textLabel.setPreferredSize(new Dimension(300, 100)); frame.getContentPane().add(textLabel, BorderLayout.CENTER); //Display the window. frame.setLocationRelativeTo(null); frame.pack(); frame.setVisible(true); Read more about JFrame and how it works from here. A: The JTextArea must be added directly to the JScrollPane and you can add the scrollpane directly to the tabbedPane: JTextArea textArea = new JTextArea("\nDRAFT TEXT 101\ncodnvfdinofndovndofinvinfdivnifdnvidfnviofnivnidfnviofdnvindfinvivondfviondfiovndfionvifdnvionivfdninfdinfivnidfovniofnviofnvifdnvfndiovnf\nh\ne\nl\nl\no\n\nf\nr\no\nm\n\nt\nh\ne\n\no\nt\nh" + "e\nr\n\ns\ni\nd\ne\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nn\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nnHello"); JScrollPane scrollPane = new JScrollPane(textArea, JScrollPane.VERTICAL_SCROLLBAR_AS_NEEDED, JScrollPane.HORIZONTAL_SCROLLBAR_NEVER); tabbedPane.add("How to play", scrollPane); A: You set the JTextArea into the ScrollPane constructor. 
Here is a runnable example (read comments in code): import java.awt.Dimension; import javax.swing.JFrame; import javax.swing.JPanel; import javax.swing.JScrollPane; import javax.swing.JTabbedPane; import javax.swing.JTextArea; public class WindowHelpGui extends JFrame { private static final long serialVersionUID = 34534L; private JTextArea textArea_1; private JTextArea textArea_2; private JTextArea textArea_3; private JTabbedPane tabbedPane; //constructor public WindowHelpGui() { createForm(); } private void createForm() { //set window layout // setLayout(null); // If possible, don't ever use a Null layout. //set resizable // setResizable(false); // It's nice to be able to resize from time to time. // add window title setTitle("Help"); // set size of window setPreferredSize(new Dimension(800, 600)); //set default close setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); // OPTIONAL - Set form on top of everything. setAlwaysOnTop(true); tabbedPane = new JTabbedPane(); JPanel panel1 = new JPanel(); JPanel panel2 = new JPanel(); JPanel panel3 = new JPanel(); textArea_1 = new JTextArea(); textArea_1.setText("\nDRAFT TEXT 101\ncodnvfdinofndovndofinvinfdivnifdnvidfnviofnivnid" + "fnviofdnvindfinvivondfviondfiovndfionvifdnvionivfdninfdinfivnidfovniofnvio" + "fnvifdnvfndiovnf\nh\ne\nl\nl\no\n\nf\nr\no\nm\n\nt\nh\ne\n\no\nt\nhe\nr\n\n" + "s\ni\nd\ne\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n" + "\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nHello"); textArea_1.setCaretPosition(0); // Set the caret to beginning of document. textArea_2 = new JTextArea(); textArea_2.setText("Seeds Area Working"); textArea_3 = new JTextArea(); textArea_3.setText("Tools Area Working"); JScrollPane scroll_1 = new JScrollPane(); scroll_1.setViewportView(textArea_1); JScrollPane scroll_2 = new JScrollPane(); scroll_2.setViewportView(textArea_2); JScrollPane scroll_3 = new JScrollPane(); scroll_3.setViewportView(textArea_3); tabbedPane.add("How to play", scroll_1); tabbedPane.add("Seeds", scroll_2); tabbedPane.add("Tools", scroll_3); add(tabbedPane); pack(); // Now you can resize or maximize the form and everything will resize with it. setVisible(true); // Set Screen Location (after the pack()) setLocationRelativeTo(null); } public static void main(String[] args) { new WindowHelpGui(); } }
My code is not working. How should I add a JScrollPane on a JTextArea that is in a JTabbedPane?
This is the code I have so far: public class WindowHelpGui extends JFrame{ JScrollPane scroll; //constructor public WindowHelpGui(){ //add window title super("Help"); //set window layout setLayout(null); JTabbedPane tabbedPane = new JTabbedPane(); tabbedPane.setBounds(10, 10, 750, 550); JPanel panel1 = new JPanel(); JPanel panel2 = new JPanel(); JPanel panel3 = new JPanel(); panel1.add(new JTextArea("\nDRAFT TEXT 101 \ncodnvfdinofndovndofinvinfdivnifdnvidfnviofnivnidfnviofdnvindfinvivondfviondfiovndfionvifdnvionivfdninfdinfivnidfovniofnviofnvifdnvfndiovnf\nh\ne\nl\nl\no\n\nf\nr\no\nm\n\nt\nh\ne\n\no\nt\nh"+ "e\nr\n\ns\ni\nd\ne\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nn\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nnHello")); panel2.add(new JTextArea("panel2 working")); panel3.add(new JTextArea("panel3 working")); JScrollPane scroll = new JScrollPane(panel1, JScrollPane.HORIZONTAL_SCROLLBAR_NEVER, JScrollPane.VERTICAL_SCROLLBAR_AS_NEEDED); add(scroll, panel1); tabbedPane.add("How to play", panel1); tabbedPane.add("Seeds", panel2); tabbedPane.add("Tools", panel3); //scroll = new Jscro this.add(tabbedPane); //set size of window setSize(800,600); //set visibility setVisible(true); //set resizable setResizable(false); //set default close setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); } When I run this I don't see any window popping up. Can someone help me please? I have also tried using add(scroll, tabbedPane), but it just adds a new tab in my window.
[ "Call pack() before making your JFrame visible and use setPreferredSize() of the Jframe instead of setSize().\nAdd a setMinimumSize() as well to the JFrame.\nAlso there can only be one root component of a JFrame. Use a JPanel with a layout to add more components.\nDon't use a null layout. It will require you to explicitly set position and size of each child component by using setBounds() method of each child.\nExample code:\nJFrame frame = new JFrame(\"Simple GUI\");\nframe.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);\n \nJLabel textLabel = new JLabel(\"I'm a label in the window\",SwingConstants.CENTER);\ntextLabel.setPreferredSize(new Dimension(300, 100)); \nframe.getContentPane().add(textLabel, BorderLayout.CENTER); \n//Display the window. \nframe.setLocationRelativeTo(null); \nframe.pack(); \nframe.setVisible(true);\n\nRead more about JFrame and how it works from here.\n", "The JTextArea must be added directly to the JScrollPane and you can add the scrollpane directly to the tabbedPane:\n JTextArea textArea = new JTextArea(\"\\nDRAFT TEXT 101\\ncodnvfdinofndovndofinvinfdivnifdnvidfnviofnivnidfnviofdnvindfinvivondfviondfiovndfionvifdnvionivfdninfdinfivnidfovniofnviofnvifdnvfndiovnf\\nh\\ne\\nl\\nl\\no\\n\\nf\\nr\\no\\nm\\n\\nt\\nh\\ne\\n\\no\\nt\\nh\" +\n \"e\\nr\\n\\ns\\ni\\nd\\ne\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nn\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nnHello\");\n JScrollPane scrollPane = new JScrollPane(textArea, JScrollPane.VERTICAL_SCROLLBAR_AS_NEEDED, JScrollPane.HORIZONTAL_SCROLLBAR_NEVER);\n\n tabbedPane.add(\"How to play\", scrollPane);\n\n", "You set the JTextArea into the ScrollPane constructor. Here is a runnable example (read comments in code):\nimport java.awt.Dimension;\nimport javax.swing.JFrame;\nimport javax.swing.JPanel;\nimport javax.swing.JScrollPane;\nimport javax.swing.JTabbedPane;\nimport javax.swing.JTextArea;\n\n\npublic class WindowHelpGui extends JFrame {\n\n private static final long serialVersionUID = 34534L;\n \n private JTextArea textArea_1;\n private JTextArea textArea_2;\n private JTextArea textArea_3;\n \n private JTabbedPane tabbedPane;\n \n\n //constructor\n public WindowHelpGui() {\n createForm();\n }\n\n private void createForm() {\n //set window layout\n// setLayout(null); // If possible, don't ever use a Null layout.\n\n //set resizable \n// setResizable(false); // It's nice to be able to resize from time to time.\n\n // add window title\n setTitle(\"Help\");\n \n // set size of window\n setPreferredSize(new Dimension(800, 600));\n\n //set default close\n setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);\n\n // OPTIONAL - Set form on top of everything.\n setAlwaysOnTop(true);\n\n tabbedPane = new JTabbedPane();\n\n JPanel panel1 = new JPanel();\n JPanel panel2 = new JPanel();\n JPanel panel3 = new JPanel();\n\n textArea_1 = new JTextArea();\n textArea_1.setText(\"\\nDRAFT TEXT 101\\ncodnvfdinofndovndofinvinfdivnifdnvidfnviofnivnid\"\n + \"fnviofdnvindfinvivondfviondfiovndfionvifdnvionivfdninfdinfivnidfovniofnvio\"\n + \"fnvifdnvfndiovnf\\nh\\ne\\nl\\nl\\no\\n\\nf\\nr\\no\\nm\\n\\nt\\nh\\ne\\n\\no\\nt\\nhe\\nr\\n\\n\"\n + \"s\\ni\\nd\\ne\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\"\n + \"\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\nHello\");\n textArea_1.setCaretPosition(0); // Set the caret to beginning of document.\n \n textArea_2 = new JTextArea();\n 
textArea_2.setText(\"Seeds Area Working\");\n \n textArea_3 = new JTextArea();\n textArea_3.setText(\"Tools Area Working\");\n\n JScrollPane scroll_1 = new JScrollPane();\n scroll_1.setViewportView(textArea_1);\n JScrollPane scroll_2 = new JScrollPane();\n scroll_2.setViewportView(textArea_2);\n JScrollPane scroll_3 = new JScrollPane();\n scroll_3.setViewportView(textArea_3);\n \n tabbedPane.add(\"How to play\", scroll_1);\n tabbedPane.add(\"Seeds\", scroll_2);\n tabbedPane.add(\"Tools\", scroll_3);\n\n add(tabbedPane);\n \n pack(); // Now you can resize or maximize the form and everything will resize with it.\n \n setVisible(true);\n // Set Screen Location (after the pack())\n setLocationRelativeTo(null);\n }\n\n public static void main(String[] args) {\n new WindowHelpGui();\n }\n \n}\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "java", "jscrollpane", "jtabbedpane", "swing", "user_interface" ]
stackoverflow_0074673542_java_jscrollpane_jtabbedpane_swing_user_interface.txt
Q: Data stored separately, not simultaneously I am trying to store a historical data through API from front-end(flutter). So far both of the features(clock in and clock out) is working well and I am able to record and stored the data everytime data is sent from front-end(flutter). I separate the clock in and clock out into two different function in the controller and so is when I tested it in postman. The problem is that now, after I clock in, the data is stored in the database table and when I clock out, a new row is created as well on the table as shown here https://paste.pics/f756579f3256f511f828933cbfc52aac . I want it to be when I clock out, the 'time_checkOut', and 'location_checkOut' is stored within the same row and not a new row. At the same time, I want to be able to store a history record like daily record(yesterday, tomorrow, the day after tomorrow, everyday). How can I do this ? Below are the two functions in my controller: clockIn controller public function userClockIn(Request $r) { $result = []; $result['status'] = false; $result['message'] = "something error"; $users = User::where('staff_id', $r->staff_id)->select(['staff_id', 'date_checkIn', 'time_checkIn', 'location_checkIn'])->first(); $mytime = Carbon::now(); $time = $mytime->format('H:i:s'); $date = $mytime->format('Y-m-d'); $users->date_checkIn = $date; $users->time_checkIn = $time; $users->location_checkIn = $r->location_checkIn; $users->save(); // Retrieve current data $currentData = $users->toArray(); // Store current data into attendance record table $attendanceRecord = new AttendanceRecord(); $attendanceRecord->fill($currentData); $attendanceRecord->save(); $result['data'] = $users; $result['status'] = true; $result['message'] = "suksess add data"; return response()->json($result); } clockOut controller public function userClockOut(Request $r) { $result = []; $result['status'] = false; $result['message'] = "something error"; $users = User::where('staff_id', $r->staff_id)->select(['staff_id', 'time_checkOut', 'location_checkOut'])->first(); $mytime = Carbon::now(); $time = $mytime->format('H:i:s'); $users->time_checkOut = $time; $users->location_checkOut = $r->location_checkOut; $users->save(); // Retrieve current data $currentData = $users->toArray(); // Store current data into attendance record table $attendanceRecord = new AttendanceRecord(); $attendanceRecord->fill($currentData); $attendanceRecord->save(); $result['data'] = $users; $result['status'] = true; $result['message'] = "suksess add data"; return response()->json($result); } A: To avoid creating a new row in the AttendanceRecord table when a user clocks out, you can use the updateOrCreate method instead of creating a new instance of AttendanceRecord and calling the save method on it. The updateOrCreate method will update an existing record if it exists, or create a new one if it doesn't. Here's an example of how you could use it: // Retrieve the user's data $users = User::where('staff_id', $r->staff_id) ->select(['staff_id', 'time_checkOut', 'location_checkOut']) ->first(); // Update the user's data with the current time and location $mytime = Carbon::now(); $time = $mytime->format('H:i:s'); $date = $mytime->format('Y-m-d'); $users->time_checkOut = $time; $users->location_checkOut = $r->location_checkOut; // Save the updated data to the database AttendanceRecord::updateOrCreate( ['staff_id' => $users->staff_id, 'date_checkIn' => $date], $users->toArray() );
Data stored separately, not simultaneously
I am trying to store historical data through an API from the front end (Flutter). So far both features (clock in and clock out) are working well, and I am able to record and store the data every time data is sent from the front end (Flutter). I separated clock in and clock out into two different functions in the controller, and tested them the same way in Postman. The problem is that after I clock in, the data is stored in the database table, and when I clock out, a new row is created on the table as well, as shown here https://paste.pics/f756579f3256f511f828933cbfc52aac . I want the 'time_checkOut' and 'location_checkOut' to be stored in the same row when I clock out, not in a new row. At the same time, I want to be able to store a history record, like a daily record (one row per day, every day). How can I do this? Below are the two functions in my controller: clockIn controller public function userClockIn(Request $r) { $result = []; $result['status'] = false; $result['message'] = "something error"; $users = User::where('staff_id', $r->staff_id)->select(['staff_id', 'date_checkIn', 'time_checkIn', 'location_checkIn'])->first(); $mytime = Carbon::now(); $time = $mytime->format('H:i:s'); $date = $mytime->format('Y-m-d'); $users->date_checkIn = $date; $users->time_checkIn = $time; $users->location_checkIn = $r->location_checkIn; $users->save(); // Retrieve current data $currentData = $users->toArray(); // Store current data into attendance record table $attendanceRecord = new AttendanceRecord(); $attendanceRecord->fill($currentData); $attendanceRecord->save(); $result['data'] = $users; $result['status'] = true; $result['message'] = "suksess add data"; return response()->json($result); } clockOut controller public function userClockOut(Request $r) { $result = []; $result['status'] = false; $result['message'] = "something error"; $users = User::where('staff_id', $r->staff_id)->select(['staff_id', 'time_checkOut', 'location_checkOut'])->first(); $mytime = Carbon::now(); $time = $mytime->format('H:i:s'); $users->time_checkOut = $time; $users->location_checkOut = $r->location_checkOut; $users->save(); // Retrieve current data $currentData = $users->toArray(); // Store current data into attendance record table $attendanceRecord = new AttendanceRecord(); $attendanceRecord->fill($currentData); $attendanceRecord->save(); $result['data'] = $users; $result['status'] = true; $result['message'] = "suksess add data"; return response()->json($result); }
[ "To avoid creating a new row in the AttendanceRecord table when a user clocks out, you can use the updateOrCreate method instead of creating a new instance of AttendanceRecord and calling the save method on it. The updateOrCreate method will update an existing record if it exists, or create a new one if it doesn't. Here's an example of how you could use it:\n // Retrieve the user's data\n$users = User::where('staff_id', $r->staff_id)\n ->select(['staff_id', 'time_checkOut', 'location_checkOut'])\n ->first();\n\n// Update the user's data with the current time and location\n$mytime = Carbon::now();\n$time = $mytime->format('H:i:s');\n$date = $mytime->format('Y-m-d');\n$users->time_checkOut = $time;\n$users->location_checkOut = $r->location_checkOut;\n\n// Save the updated data to the database\nAttendanceRecord::updateOrCreate(\n ['staff_id' => $users->staff_id, 'date_checkIn' => $date],\n $users->toArray()\n);\n\n" ]
[ 1 ]
[]
[]
[ "api", "laravel", "php" ]
stackoverflow_0074674161_api_laravel_php.txt
Q: Gradle dependencies: Parentheses vs single/double quotes only When naming dependencies in the Gradle file: What's the difference between putting the string within parentheses or just quotes alone? For example:
implementation("io.coil-kt:coil:2.2.2")

Versus:
implementation 'com.squareup.moshi:moshi-kotlin:1.9.3'

I see both forms all the time. When do I use which?
A: Both notations are available and equivalent in the Groovy DSL: see the explanation here.
Reference to this feature in the Groovy specifications: https://groovy-lang.org/style-guide.html#_omitting_parentheses
In the Kotlin DSL, only the method call version (with parentheses) is available.
"When do I use which?": personally I prefer using the version with parentheses, making it simpler to migrate to the Kotlin DSL later: https://docs.gradle.org/current/userguide/migrating_from_groovy_to_kotlin_dsl.html#prepare_your_groovy_scripts
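To make the equivalence concrete, a small sketch (coordinates taken from the question):
// Groovy DSL (build.gradle) — these two lines call the same method:
implementation('io.coil-kt:coil:2.2.2')   // explicit parentheses
implementation 'io.coil-kt:coil:2.2.2'    // parentheses omitted (Groovy allows this)

// Kotlin DSL (build.gradle.kts) — parentheses and double quotes are required:
implementation("io.coil-kt:coil:2.2.2")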
Gradle dependencies: Parentheses vs single/double quotes only
When naming dependencies in the Gradle file: What's the difference between putting the string within parentheses or just quotes alone? For example:
implementation("io.coil-kt:coil:2.2.2")

Versus:
implementation 'com.squareup.moshi:moshi-kotlin:1.9.3'

I see both forms all the time. When do I use which?
[ "Both notation are available and are equivalent , in the Groovy DSL: see explanation here\nReference to this feature in Groovy specifications: https://groovy-lang.org/style-guide.html#_omitting_parentheses\nIn Koltin DSL, only the method call version (with parenthesis) is available.\n\"When do I use which? \" : personally I prefer using the version with parenthesis , making it simpler to migrate to Kotlin DSL later : https://docs.gradle.org/current/userguide/migrating_from_groovy_to_kotlin_dsl.html#prepare_your_groovy_scripts\n" ]
[ 1 ]
[]
[]
[ "android", "gradle" ]
stackoverflow_0074674145_android_gradle.txt
Q: What are the differences between Rust's `String` and `str`? Why does Rust have String and str? What are the differences between String and str? When does one use String instead of str and vice versa? Is one of them getting deprecated?
A: String is the dynamic heap string type, like Vec: use it when you need to own or modify your string data.
str is an immutable1 sequence of UTF-8 bytes of dynamic length somewhere in memory. Since the size is unknown, one can only handle it behind a pointer. This means that str most commonly2 appears as &str: a reference to some UTF-8 data, normally called a "string slice" or just a "slice". A slice is just a view onto some data, and that data can be anywhere, e.g.
In static storage: a string literal "foo" is a &'static str. The data is hardcoded into the executable and loaded into memory when the program runs.
Inside a heap allocated String: String dereferences to a &str view of the String's data.
On the stack: e.g. the following creates a stack-allocated byte array, and then gets a view of that data as a &str:
use std::str;

let x: &[u8] = &[b'a', b'b', b'c'];
let stack_str: &str = str::from_utf8(x).unwrap();

In summary, use String if you need owned string data (like passing strings to other threads, or building them at runtime), and use &str if you only need a view of a string.
This is identical to the relationship between a vector Vec<T> and a slice &[T], and is similar to the relationship between by-value T and by-reference &T for general types.
1 A str is fixed-length; you cannot write bytes beyond the end, or leave trailing invalid bytes. Since UTF-8 is a variable-width encoding, this effectively forces all strs to be immutable in many cases. In general, mutation requires writing more or fewer bytes than there were before (e.g. replacing an a (1 byte) with an ä (2+ bytes) would require making more room in the str). There are specific methods that can modify a &mut str in place, mostly those that handle only ASCII characters, like make_ascii_uppercase.
2 Dynamically sized types allow things like Rc<str> for a sequence of reference-counted UTF-8 bytes since Rust 1.2. Rust 1.21 allows easily creating these types.
A: I have a C++ background and I found it very useful to think about String and &str in C++ terms:
A Rust String is like a std::string; it owns the memory and does the dirty job of managing memory.
A Rust &str is like a char* (but a little more sophisticated); it points us to the beginning of a chunk in the same way you can get a pointer to the contents of std::string.
Is either of them going to disappear? I do not think so. They serve two purposes:
String keeps the buffer and is very practical to use. &str is lightweight and should be used to "look" into strings. You can search, split, parse, and even replace chunks without needing to allocate new memory.
&str can look inside of a String, just as it can point to some string literal. The following code needs to copy the literal string into the String-managed memory:
let a: String = "hello rust".into();

The following code lets you use the literal itself without a copy (read-only, though):
let a: &str = "hello rust";

A: It is str that is analogous to String, not the slice to it, also known as &str.
A str is a string literal, basically a pre-allocated text:
"Hello World"

This text has to be stored somewhere, so it is stored in the data section of the executable file along with the program's machine code, as a sequence of bytes ([u8]).
Because text can be of any length, it is dynamically sized; its size is known only at run-time:
┌─────┬─────┬─────┬─────┬─────┬─────┬─────┬─────┬─────┬─────┬─────┐
│ H │ e │ l │ l │ o │ │ W │ o │ r │ l │ d │
└─────┴─────┴─────┴─────┴─────┴─────┴─────┴─────┴─────┴─────┴─────┘
┌─────┬─────┬─────┬─────┬─────┬─────┬─────┬─────┬─────┬─────┬─────┐
│ 72 │ 101 │ 108 │ 108 │ 111 │ 32 │ 87 │ 111 │ 114 │ 108 │ 100 │
└─────┴─────┴─────┴─────┴─────┴─────┴─────┴─────┴─────┴─────┴─────┘
We need a way to access a stored text, and that is where the slice comes in.
A slice, [T], is a view into a block of memory. Whether mutable or not, a slice always borrows, and that is why it is always behind a pointer, &.
Let's explain the meaning of being dynamically sized. Some programming languages, like C, append a zero byte (\0) at the end of their strings and keep a record of the starting address. To determine a string's length, the program has to walk through the raw bytes from the starting position until it finds this zero byte. So the length of a text can be of any size, hence it is dynamically sized.
However, Rust takes a different approach: it uses a slice. A slice stores the address where a str starts and how many bytes it takes. It is better than appending a zero byte because the calculation is done in advance, during compilation.
So, the "Hello World" expression returns a fat pointer, containing both the address of the actual data and its length. This pointer will be our handle to the actual data, and it will also be stored in our program. Now the data is behind a pointer and the compiler knows its size at compile time.
Since the text is stored in the source code, it will be valid for the entire lifetime of the running program, hence it will have the static lifetime.
So, the return value of the "Hello World" expression should reflect these two characteristics, and it does:
let s: &'static str = "Hello World";

You may ask why its type is written as str and not as [u8]; it is because the data is always guaranteed to be a valid UTF-8 sequence. Not all UTF-8 characters are a single byte; some take 4 bytes. So [u8] would be inaccurate.
If you disassemble a compiled Rust program and inspect the executable file, you will see multiple strs stored adjacent to each other in the data section without any indication where one starts and the other ends.
The compiler takes it even further. If identical static text is used at multiple locations in your program, the Rust compiler will optimize your program and create a single binary block in the executable's data section, and each slice in your code points to this binary block.
For example, the compiler creates a single continuous binary with the content "Hello World" for the following code, even though we use three different literals with "Hello World":
let x: &'static str = "Hello World";
let y: &'static str = "Hello World";
let z: &'static str = "Hello World";

String, on the other hand, is a specialized type that stores its value as a vector of u8. Here is how the String type is defined in the source code:
pub struct String {
    vec: Vec<u8>,
}

Being a vector means it is heap-allocated and resizable like any other vector value.
Being specialized means it does not permit arbitrary access and enforces certain checks so that the data is always valid UTF-8. Other than that, it is just a vector.
So a String is a resizable buffer holding UTF-8 text. This buffer is allocated on the heap, so it can grow as needed or requested. We can fill this buffer any way we see fit. We can change its content.
If you look carefully, the vec field is kept private to enforce validity. Since it is private, we cannot create a String instance directly. The reason it is kept private is that not every stream of bytes produces valid UTF-8 characters, and direct interaction with the underlying bytes may corrupt the string. We create u8 bytes through methods, and the methods run certain checks. We can say that being private and having controlled interaction via methods provides certain guarantees.
There are several methods defined on the String type to create a String instance; new is one of them:
pub const fn new() -> String {
    String { vec: Vec::new() }
}

We can use it to create a valid String.
let s = String::new();
println!("{}", s);

Unfortunately it does not accept an input parameter, so the result will be a valid but empty string. It will grow like any other vector when its capacity is not enough to hold the assigned value, but application performance will take a hit, as growing requires re-allocation.
We can fill the underlying vector with initial values from different sources:
From a string literal
let a = "Hello World";
let s = String::from(a);

Please note that a str is still created and its content is copied to the heap-allocated vector via String::from. If we check the executable binary, we will see the raw bytes in the data section with the content "Hello World". This is a very important detail some people miss.
From raw parts
let ptr = s.as_mut_ptr();
let len = s.len();
let capacity = s.capacity();

let s = String::from_raw_parts(ptr, len, capacity);

From a character
let ch = 'c';
let s = ch.to_string();

From a vector of bytes
let hello_world = vec![72, 101, 108, 108, 111, 32, 87, 111, 114, 108, 100];
// We know it is a valid sequence, so we can use unwrap
let hello_world = String::from_utf8(hello_world).unwrap();
println!("{}", hello_world); // Hello World

Here we have another important detail. A vector might have any value; there is no guarantee its content will be valid UTF-8, so Rust forces us to take this into consideration by returning a Result<String, FromUtf8Error> rather than a String.
From an input buffer
use std::io::{self, Read};

fn main() -> io::Result<()> {
    let mut buffer = String::new();
    let stdin = io::stdin();
    let mut handle = stdin.lock();

    handle.read_to_string(&mut buffer)?;
    Ok(())
}

Or from any other type that implements the ToString trait.
Since String is a vector under the hood, it will exhibit some vector characteristics:
a pointer: The pointer points to an internal buffer that stores the data.
length: The length is the number of bytes currently stored in the buffer.
capacity: The capacity is the size of the buffer in bytes. So, the length will always be less than or equal to the capacity.
And it delegates some properties and methods to vectors:
pub fn capacity(&self) -> usize {
    self.vec.capacity()
}

Most of the examples use String::from, so people get confused, wondering why one would create a String from another string.
It is a long read; hope it helps.
A: str, only used as &str, is a string slice, a reference to a UTF-8 byte array.
String is what used to be ~str, a growable, owned UTF-8 byte array.
A: They are actually completely different. First off, a str is nothing but a type-level thing; it can only be reasoned about at the type level because it's a so-called dynamically-sized type (DST). The size the str takes up cannot be known at compile time and depends on runtime information — it cannot be stored in a variable because the compiler needs to know at compile time what the size of each variable is.
A str is conceptually just a row of u8 bytes with the guarantee that it forms valid UTF-8. How large is the row? No one knows until runtime, hence it can't be stored in a variable.
The interesting thing is that a &str or any other pointer to a str, like Box<str>, does exist at runtime. This is a so-called "fat pointer"; it's a pointer with extra information (in this case the size of the thing it's pointing at), so it's twice as large. In fact, a &str is quite close to a String (but not to a &String). A &str is two words: one pointer to the first byte of a str, and another number that describes how many bytes long the str is.
Contrary to what is said, a str does not need to be immutable. If you can get a &mut str as an exclusive pointer to the str, you can mutate it, and all the safe functions that mutate it guarantee that the UTF-8 constraint is upheld, because if that is violated then we have undefined behaviour, as the library assumes this constraint is true and does not check for it.
So what is a String? That's three words; two are the same as for &str, but it adds a third word, which is the capacity of the str buffer on the heap — always on the heap (a str is not necessarily on the heap) — that it manages before it's filled and has to re-allocate. The String basically owns a str, as they say; it controls it and can resize it and reallocate it when it sees fit. So a String is, as said, closer to a &str than to a str.
Another thing is a Box<str>; its runtime representation is the same as a &str, but unlike the &str it owns the str. However, it cannot resize it because it does not know its capacity, so basically a Box<str> can be seen as a fixed-length String that cannot be resized (you can always convert it into a String if you want to resize it).
A very similar relationship exists between [T] and Vec<T>, except there is no UTF-8 constraint and it can hold any type whose size is not dynamic.
The use of str at the type level is mostly to create generic abstractions with &str; it exists at the type level to be able to conveniently write traits. In theory, str didn't need to exist as a type — only &str — but that would mean a lot of extra code would have to be written that can now be generic.
&str is super useful for having multiple different substrings of a String without having to copy; as said, a String owns the str on the heap that it manages, and if you could only create a substring of a String as a new String, it would have to be copied, because everything in Rust can have only one single owner, to deal with memory safety. So for instance you can slice a string:
let string: String = "a string".to_string();
let substring1: &str = &string[1..3];
let substring2: &str = &string[2..4];

We have two different substring strs of the same string. string is the one that owns the actual full str buffer on the heap, and the &str substrings are just fat pointers to that buffer on the heap.
A: Rust &str and String
String:
Rust's owned string type; the string itself lives on the heap and therefore is mutable and can alter its size and contents.
Because String is owned, when the variable which owns the string goes out of scope, the memory on the heap will be freed.
Variables of type String are fat pointers (pointer + associated metadata).
The fat pointer is 3 * 8 bytes (word size) long and consists of the following 3 elements:
Pointer to actual data on the heap; it points to the first character
Length of the string (# of characters)
Capacity of the string on the heap
&str:
Rust's non-owned string type, immutable by default. The string itself lives somewhere else in memory, usually on the heap or in 'static memory.
Because &str is non-owned, when a &str variable goes out of scope the memory of the string will not be freed.
Variables of type &str are fat pointers (pointer + associated metadata).
The fat pointer is 2 * 8 bytes (word size) long and consists of the following 2 elements:
Pointer to actual data on the heap; it points to the first character
Length of the string (# of characters)
Example:
use std::mem;

fn main() {
    // on 64 bit architecture:
    println!("{}", mem::size_of::<&str>()); // 16
    println!("{}", mem::size_of::<String>()); // 24

    let string1: &'static str = "abc";
    // string will point to `static memory which lives through the whole program

    let ptr = string1.as_ptr();
    let len = string1.len();

    println!("{}, {}", unsafe { *ptr as char }, len); // a, 3
    // len is 3 characters long so 3
    // pointer to the first character points to letter a

    {
        let mut string2: String = "def".to_string();

        let ptr = string2.as_ptr();
        let len = string2.len();
        let capacity = string2.capacity();
        println!("{}, {}, {}", unsafe { *ptr as char }, len, capacity); // d, 3, 3
        // pointer to the first character points to letter d
        // len is 3 characters long so 3
        // string has now 3 bytes of space on the heap

        string2.push_str("ghijk"); // we can mutate String type, capacity and length will also change
        println!("{}, {}", string2, string2.capacity()); // defghijk, 8

    } // memory of string2 on the heap will be freed here because owner goes out of scope

}

A: std::String is simply a vector of u8. You can find its definition in the source code. It's heap-allocated and growable.
#[derive(PartialOrd, Eq, Ord)]
#[stable(feature = "rust1", since = "1.0.0")]
pub struct String {
    vec: Vec<u8>,
}

str is a primitive type, also called a string slice. A string slice has a fixed size. A literal string like let test = "hello world" has the type &'static str. test is a reference to this statically allocated string.
&str cannot be modified, for example,
let mut word = "hello world";
word[0] = 's';
word.push('\n');

str does have a mutable slice type, &mut str, for example:
pub fn split_at_mut(&mut self, mid: usize) -> (&mut str, &mut str)
let mut s = "Per Martin-Löf".to_string();
{
    let (first, last) = s.split_at_mut(3);
    first.make_ascii_uppercase();
    assert_eq!("PER", first);
    assert_eq!(" Martin-Löf", last);
}
assert_eq!("PER Martin-Löf", s);

But a small change to UTF-8 can change its byte length, and a slice cannot reallocate its referent.
A: In easy words, String is a datatype stored on the heap (just like Vec), and you have access to that location.
&str is a slice type. That means it is just a reference to an already-present String somewhere in the heap.
&str doesn't do any allocation at runtime. So, for memory reasons, you can use &str over String. But keep in mind that when using &str you might have to deal with explicit lifetimes.
A: For C# and Java people:
Rust's String === StringBuilder
Rust's &str === (immutable) string
I like to think of a &str as a view on a string, like an interned string in Java / C# where you can't change it, only create a new one.
A: Some Usages
example_1.rs
fn main(){
    let hello = String::from("hello");
    let any_char = hello[0]; // error: a String cannot be indexed by an integer
}

example_2.rs
fn main(){
    let hello = String::from("hello");
    for c in hello.chars() {
        println!("{}",c);
    }
}

example_3.rs
fn main(){
    let hello = String::from("String are cool");
    let any_char = &hello[5..6]; // = let any_char: &str = &hello[5..6];
    println!("{:?}",any_char);
}

Shadowing
fn main() {
    let s: &str = "hello"; // &str
    let s: String = s.to_uppercase(); // String
    println!("{}", s) // HELLO
}

function
fn say_hello(to_whom: &str) { // type coercion
    println!("Hey {}!", to_whom)
}

fn main(){
    let string_slice: &'static str = "you";
    let string: String = string_slice.into(); // &str => String
    say_hello(string_slice);
    say_hello(&string); // &String
}

Concat
// String is on the heap, and can increase or decrease in size.
// The size of &str is fixed.
fn main(){
    let a = "Foo";
    let b = "Bar";
    let c = a + b; // error: cannot add `&str` to `&str`
    // let c = a.to_string() + b;
}

Note that String and &str are different types and, 99% of the time, you should only care about &str.
A: In Rust, str is a primitive type that represents a sequence of Unicode scalar values, also known as a string slice. This means that it is a read-only view into a string, and it does not own the memory that it points to. On the other hand, String is a growable, mutable, owned string type. This means that when you create a String, it will allocate memory on the heap to store the contents of the string, and it will deallocate this memory when the String goes out of scope. Because String is growable and mutable, you can change the contents of a String after you have created it.
In general, str is used when you want to refer to a string slice that is stored in another data structure, such as a String. String is used when you want to create and own a string value.
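As a combined illustration of the ownership points above — a small sketch, not taken from any single answer, showing a &str-taking function accepting both forms via deref coercion, plus the Box<str> round-trip described earlier:
// &str accepts views of both string literals and Strings.
fn len_of(s: &str) -> usize {
    s.len()
}

fn main() {
    let literal: &'static str = "Hello World"; // view into static storage
    let owned: String = literal.to_string();   // heap-allocated and growable

    // &String coerces to &str, so one signature serves both:
    assert_eq!(len_of(literal), len_of(&owned));

    // Box<str>: owns the bytes but has no spare capacity; convert back
    // to String when you need to grow it again.
    let boxed: Box<str> = owned.into_boxed_str();
    let grown: String = boxed.into_string();
    assert_eq!(grown.capacity(), grown.len()); // no extra room was added
}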
What are the differences between Rust's `String` and `str`?
Why does Rust have String and str? What are the differences between String and str? When does one use String instead of str and vice versa? Is one of them getting deprecated?
[ "String is the dynamic heap string type, like Vec: use it when you need to own or modify your string data.\nstr is an immutable1 sequence of UTF-8 bytes of dynamic length somewhere in memory. Since the size is unknown, one can only handle it behind a pointer. This means that str most commonly2 appears as &str: a reference to some UTF-8 data, normally called a \"string slice\" or just a \"slice\". A slice is just a view onto some data, and that data can be anywhere, e.g.\n\nIn static storage: a string literal \"foo\" is a &'static str. The data is hardcoded into the executable and loaded into memory when the program runs.\n\nInside a heap allocated String: String dereferences to a &str view of the String's data.\n\nOn the stack: e.g. the following creates a stack-allocated byte array, and then gets a view of that data as a &str:\nuse std::str;\n\nlet x: &[u8] = &[b'a', b'b', b'c'];\nlet stack_str: &str = str::from_utf8(x).unwrap();\n\n\n\nIn summary, use String if you need owned string data (like passing strings to other threads, or building them at runtime), and use &str if you only need a view of a string.\nThis is identical to the relationship between a vector Vec<T> and a slice &[T], and is similar to the relationship between by-value T and by-reference &T for general types.\n\n1 A str is fixed-length; you cannot write bytes beyond the end, or leave trailing invalid bytes. Since UTF-8 is a variable-width encoding, this effectively forces all strs to be immutable in many cases. In general, mutation requires writing more or fewer bytes than there were before (e.g. replacing an a (1 byte) with an ä (2+ bytes) would require making more room in the str). There are specific methods that can modify a &mut str in place, mostly those that handle only ASCII characters, like make_ascii_uppercase.\n2 Dynamically sized types allow things like Rc<str> for a sequence of reference counted UTF-8 bytes since Rust 1.2. Rust 1.21 allows easily creating these types.\n", "I have a C++ background and I found it very useful to think about String and &str in C++ terms:\n\nA Rust String is like a std::string; it owns the memory and does the dirty job of managing memory.\nA Rust &str is like a char* (but a little more sophisticated); it points us to the beginning of a chunk in the same way you can get a pointer to the contents of std::string.\n\nAre either of them going to disappear? I do not think so. They serve two purposes:\nString keeps the buffer and is very practical to use. &str is lightweight and should be used to \"look\" into strings. You can search, split, parse, and even replace chunks without needing to allocate new memory. \n&str can look inside of a String as it can point to some string literal. The following code needs to copy the literal string into the String managed memory:\nlet a: String = \"hello rust\".into();\n\nThe following code lets you use the literal itself without copy (read only though)\nlet a: &str = \"hello rust\";\n\n", "It is str that is analogous to String, not the slice to it, also known as &str.\nAn str is a string literal, basically a pre-allocated text:\n\"Hello World\"\n\nThis text has to be stored somewhere, so it is stored in the data section of the executable file along with the program’s machine code, as sequence of bytes ([u8]). 
Because text can be of any length, they are dynamically-sized, their size is known only at run-time:\n┌─────┬─────┬─────┬─────┬─────┬─────┬─────┬─────┬─────┬─────┬─────┐\n│ H │ e │ l │ l │ o │ │ W │ o │ r │ l │ d │\n└─────┴─────┴─────┴─────┴─────┴─────┴─────┴─────┴─────┴─────┴─────┘\n┌─────┬─────┬─────┬─────┬─────┬─────┬─────┬─────┬─────┬─────┬─────┐\n│ 72 │ 101 │ 108 │ 108 │ 111 │ 32 │ 87 │ 111 │ 114 │ 108 │ 100 │\n└─────┴─────┴─────┴─────┴─────┴─────┴─────┴─────┴─────┴─────┴─────┘\n\nWe need a way to access a stored text and that is where the slice comes in.\nA slice,[T], is a view into a block of memory. Whether mutable or not, a slice always borrows and that is why it is always behind a pointer, &.\nLets explain the meaning of being dynamically sized. Some programming languages, like C, appends a zero byte (\\0) at the end of its strings and keeps a record of the starting address. To determine a string's length, program has to walk through the raw bytes from starting position until finding this zero byte. So, length of a text can be of any size hence it is dynamically sized.\nHowever Rust takes a different approach: It uses a slice. A slice stores the address where a str starts and how many byte it takes. It is better than appending zero byte because calculation is done in advance during compilation.\nSo, \"Hello World\" expression returns a fat pointer, containing both the address of the actual data and its length. This pointer will be our handle to the actual data and it will also be stored in our program. Now data is behind a pointer and the compiler knows its size at compile time.\nSince text is stored in the source code, it will be valid for the entire lifetime of the running program, hence will have the static lifetime.\nSo, return value of \"Hello Word\" expression should reflect these two characteristics, and it does:\nlet s: &'static str = \"Hello World\";\n\nYou may ask why its type is written as str but not as [u8], it is because data is always guaranteed to be a valid UTF-8 sequence. Not all UTF-8 characters are single byte, some take 4 bytes. So [u8] would be inaccurate.\nIf you disassemble a compiled Rust program and inspect the executable file, you will see multiple strs are stored adjacent to each other in the data section without any indication where one starts and the other ends.\nCompiler takes it even further. If identical static text is used at multiple locations in your program, Rust compiler will optimize your program and create a single binary block in the executable's data section and each slice in your code point to this binary block.\nFor example, compiler creates a single continuous binary with the content of \"Hello World\" for the following code even though we use three different literals with \"Hello World\":\nlet x: &'static str = \"Hello World\";\nlet y: &'static str = \"Hello World\";\nlet z: &'static str = \"Hello World\";\n\nString, on the other hand, is a specialized type that stores its value as vector of u8. Here is how String type is defined in the source code:\npub struct String {\n vec: Vec<u8>,\n}\n\nBeing vector means it is heap allocated and resizable like any other vector value.\nBeing specialized means it does not permit arbitrary access and enforces certain checks that data is always valid UTF-8. Other than that, it is just a vector.\nSo a String is a resizable buffer holding UTF-8 text. This buffer is allocated on the heap, so it can grow as needed or requested. We can fill this buffer anyway we see fit. 
We can change its content.\nIf you look carefully vec field is kept private to enforce validity. Since it is private, we can not create a String instance directly. The reason why it is kept private because not all stream of bytes produce valid utf-8 characters and direct interaction with the underlying bytes may corrupt the string. We create u8 bytes through methods and methods runs certain checks. We can say that being private and having controlled interaction via methods provides certain guarantees.\nThere are several methods defined on String type to create String instance, new is one of them:\npub const fn new() -> String {\n String { vec: Vec::new() }\n}\n\nWe can use it to create a valid String.\nlet s = String::new();\nprintln(\"{}\", s);\n\nUnfortunately it does not accept input parameter. So result will be valid but an empty string but it will grow like any other vector when capacity is not enough to hold the assigned value. But application performance will take a hit, as growing requires re-allocation.\nWe can fill the underlying vector with initial values from different sources:\nFrom a string literal\nlet a = \"Hello World\";\nlet s = String::from(a);\n\nPlease note that an str is still created and its content is copied to the heap allocated vector via String.from. If we check the executable binary we will see raw bytes in data section with the content \"Hello World\". This is very important detail some people miss.\nFrom raw parts\nlet ptr = s.as_mut_ptr();\nlet len = s.len();\nlet capacity = s.capacity();\n\nlet s = String::from_raw_parts(ptr, len, capacity);\n\nFrom a character\nlet ch = 'c';\nlet s = ch.to_string();\n\nFrom vector of bytes\nlet hello_world = vec![72, 101, 108, 108, 111, 32, 87, 111, 114, 108, 100];\n// We know it is valid sequence, so we can use unwrap\nlet hello_world = String::from_utf8(hello_world).unwrap();\nprintln!(\"{}\", hello_world); // Hello World\n\nHere we have another important detail. A vector might have any value, there is no guarantee its content will be a valid UTF-8, so Rust forces us to take this into consideration by returning a Result<String, FromUtf8Error> rather than a String.\nFrom input buffer\nuse std::io::{self, Read};\n\nfn main() -> io::Result<()> {\n let mut buffer = String::new();\n let stdin = io::stdin();\n let mut handle = stdin.lock();\n\n handle.read_to_string(&mut buffer)?;\n Ok(())\n}\n\nOr from any other type that implements ToString trait\nSince String is a vector under the hood, it will exhibit some vector characteristics:\n\na pointer: The pointer points to an internal buffer that stores the data.\nlength: The length is the number of bytes currently stored in the buffer.\ncapacity: The capacity is the size of the buffer in bytes. So, the length will always be less than or equal to the capacity.\n\nAnd it delegates some properties and methods to vectors:\npub fn capacity(&self) -> usize {\n self.vec.capacity()\n}\n\nMost of the examples uses String::from, so people get confused thinking why create String from another string.\nIt is a long read, hope it helps.\n", "str, only used as &str, is a string slice, a reference to a UTF-8 byte array.\nString is what used to be ~str, a growable, owned UTF-8 byte array.\n", "They are actually completely different. First off, a str is nothing but a type level thing; it can only be reasoned about at the type level because it's a so-called dynamically-sized type (DST). 
The size the str takes up cannot be known at compile time and depends on runtime information — it cannot be stored in a variable because the compiler needs to know at compile time what the size of each variable is. A str is conceptually just a row of u8 bytes with the guarantee that it forms valid UTF-8. How large is the row? No one knows until runtime hence it can't be stored in a variable.\nThe interesting thing is that a &str or any other pointer to a str like Box<str> does exist at runtime. This is a so-called \"fat pointer\"; it's a pointer with extra information (in this case the size of the thing it's pointing at) so it's twice as large. In fact, a &str is quite close to a String (but not to a &String). A &str is two words; one pointer to a the first byte of a str and another number that describes how many bytes long the the str is.\nContrary to what is said, a str does not need to be immutable. If you can get a &mut str as an exclusive pointer to the str, you can mutate it and all the safe functions that mutate it guarantee that the UTF-8 constraint is upheld because if that is violated then we have undefined behaviour as the library assumes this constraint is true and does not check for it.\nSo what is a String? That's three words; two are the same as for &str but it adds a third word which is the capacity of the str buffer on the heap, always on the heap (a str is not necessarily on the heap) it manages before it's filled and has to re-allocate. the String basically owns a str as they say; it controls it and can resize it and reallocate it when it sees fit. So a String is as said closer to a &str than to a str.\nAnother thing is a Box<str>; this also owns a str and its runtime representation is the same as a &str but it also owns the str unlike the &str but it cannot resize it because it does not know its capacity so basically a Box<str> can be seen as a fixed-length String that cannot be resized (you can always convert it into a String if you want to resize it).\nA very similar relationship exists between [T] and Vec<T> except there is no UTF-8 constraint and it can hold any type whose size is not dynamic.\nThe use of str on the type level is mostly to create generic abstractions with &str; it exists on the type level to be able to conveniently write traits. In theory str as a type thing didn't need to exist and only &str but that would mean a lot of extra code would have to be written that can now be generic.\n&str is super useful to be able to to have multiple different substrings of a String without having to copy; as said a String owns the str on the heap it manages and if you could only create a substring of a String with a new String it would have to be copied because everything in Rust can only have one single owner to deal with memory safety. So for instance you can slice a string:\nlet string: String = \"a string\".to_string();\nlet substring1: &str = &string[1..3];\nlet substring2: &str = &string[2..4];\n\nWe have two different substring strs of the same string. 
string is the one that owns the actual full str buffer on the heap and the &str substrings are just fat pointers to that buffer on the heap.\n", "Rust &str and String\n\nString:\n\nRust owned String type, the string itself lives on the heap and therefore is mutable and can alter its size and contents.\nBecause String is owned when the variables which owns the string goes out of scope the memory on the heap will be freed.\nVariables of type String are fat pointers (pointer + associated metadata)\nThe fat pointer is 3 * 8 bytes (wordsize) long consists of the following 3 elements:\n\nPointer to actual data on the heap, it points to the first character\nLength of the string (# of characters)\nCapacity of the string on the heap\n\n\n\n&str:\n\nRust non owned String type and is immutable by default. The string itself lives somewhere else in memory usually on the heap or 'static memory.\nBecause String is non owned when &str variables goes out of scope the memory of the string will not be freed.\nVariables of type &str are fat pointers (pointer + associated metadata)\nThe fat pointer is 2 * 8 bytes (wordsize) long consists of the following 2 elements:\n\nPointer to actual data on the heap, it points to the first character\nLength of the string (# of characters)\n\n\n\nExample:\nuse std::mem;\n\nfn main() {\n // on 64 bit architecture:\n println!(\"{}\", mem::size_of::<&str>()); // 16\n println!(\"{}\", mem::size_of::<String>()); // 24\n\n let string1: &'static str = \"abc\";\n // string will point to `static memory which lives through the whole program\n\n let ptr = string1.as_ptr();\n let len = string1.len();\n\n println!(\"{}, {}\", unsafe { *ptr as char }, len); // a, 3\n // len is 3 characters long so 3\n // pointer to the first character points to letter a\n\n {\n let mut string2: String = \"def\".to_string();\n\n let ptr = string2.as_ptr();\n let len = string2.len();\n let capacity = string2.capacity();\n println!(\"{}, {}, {}\", unsafe { *ptr as char }, len, capacity); // d, 3, 3\n // pointer to the first character points to letter d\n // len is 3 characters long so 3\n // string has now 3 bytes of space on the heap\n\n string2.push_str(\"ghijk\"); // we can mutate String type, capacity and length will aslo change\n println!(\"{}, {}\", string2, string2.capacity()); // defghijk, 8\n\n } // memory of string2 on the heap will be freed here because owner goes out of scope\n\n}\n\n", "std::String is simply a vector of u8. You can find its definition in source code . It's heap-allocated and growable. \n#[derive(PartialOrd, Eq, Ord)]\n#[stable(feature = \"rust1\", since = \"1.0.0\")]\npub struct String {\n vec: Vec<u8>,\n}\n\nstr is a primitive type, also called string slice. A string slice has fixed size. A literal string like let test = \"hello world\" has &'static str type. test is a reference to this statically allocated string. \n&str cannot be modified, for example,\nlet mut word = \"hello world\";\nword[0] = 's';\nword.push('\\n');\n\nstr does have mutable slice &mut str, for example:\npub fn split_at_mut(&mut self, mid: usize) -> (&mut str, &mut str)\nlet mut s = \"Per Martin-Löf\".to_string();\n{\n let (first, last) = s.split_at_mut(3);\n first.make_ascii_uppercase();\n assert_eq!(\"PER\", first);\n assert_eq!(\" Martin-Löf\", last);\n}\nassert_eq!(\"PER Martin-Löf\", s);\n\nBut a small change to UTF-8 can change its byte length, and a slice cannot reallocate its referent. 
\n", "In easy words, String is datatype stored on heap (just like Vec), and you have access to that location.\n&str is a slice type. That means it is just reference to an already present String somewhere in the heap. \n&str doesn't do any allocation at runtime. So, for memory reasons, you can use &str over String. But, keep in mind that when using &str you might have to deal with explicit lifetimes.\n", "For C# and Java people:\n\nRust' String === StringBuilder \nRust's &str === (immutable) string\n\nI like to think of a &str as a view on a string, like an interned string in Java / C# where you can't change it, only create a new one.\n", "Some Usages\nexample_1.rs\nfn main(){\n let hello = String::(\"hello\");\n let any_char = hello[0];//error\n}\n\nexample_2.rs\nfn main(){\n let hello = String::(\"hello\");\n for c in hello.chars() {\n println!(\"{}\",c);\n }\n}\n\nexample_3.rs\nfn main(){\n let hello = String::(\"String are cool\");\n let any_char = &hello[5..6]; // = let any_char: &str = &hello[5..6];\n println!(\"{:?}\",any_char);\n}\n\nShadowing\nfn main() {\n let s: &str = \"hello\"; // &str\n let s: String = s.to_uppercase(); // String\n println!(\"{}\", s) // HELLO\n}\n\nfunction\nfn say_hello(to_whom: &str) { //type coercion\n println!(\"Hey {}!\", to_whom) \n }\n\n\nfn main(){\n let string_slice: &'static str = \"you\";\n let string: String = string_slice.into(); // &str => String\n say_hello(string_slice);\n say_hello(&string);// &String\n }\n\nConcat\n // String is at heap, and can be increase or decrease in its size\n// The size of &str is fixed.\nfn main(){\n let a = \"Foo\";\n let b = \"Bar\";\n let c = a + b; //error\n // let c = a.to_string + b;\n}\n\nNote that String and &str are different types and for 99% of the time, you only should care about &str.\n", "In Rust, str is a primitive type that represents a sequence of Unicode scalar values, also known as a string slice. This means that it is a read-only view into a string, and it does not own the memory that it points to. On the other hand, String is a growable, mutable, owned string type. This means that when you create a String, it will allocate memory on the heap to store the contents of the string, and it will deallocate this memory when the String goes out of scope. Because String is growable and mutable, you can change the contents of a String after you have created it.\nIn general, str is used when you want to refer to a string slice that is stored in another data structure, such as a String. String is used when you want to create and own a string value.\n" ]
[ 956, 214, 86, 63, 63, 23, 16, 10, 4, 4, 0 ]
[ "Here is a quick and easy explanation. \nString - A growable, ownable heap-allocated data structure. It can be coerced to a &str.\nstr - is (now, as Rust evolves) mutable, fixed-length string that lives on the heap or in the binary. You can only interact with str as a borrowed type via a string slice view, such as &str.\nUsage considerations:\nPrefer String if you want to own or mutate a string - such as passing the string to another thread, etc.\nPrefer &str if you want to have a read-only view of a string.\n" ]
[ -10 ]
[ "rust", "string" ]
stackoverflow_0024158114_rust_string.txt
Q: How to style path link (active visited) After I navigate to another page, e.g. "About us", how do I style the navlink (anchor tag) for the path which is active? I tried ::after and :visited, but can't find what I want.
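For illustration, one common approach — a sketch, where the class name and markup are assumptions rather than taken from the question: mark the link that corresponds to the current page, then style that marker. On a plain multi-page site, each page marks its own link:
<!-- on the About page; class/attribute names here are hypothetical -->
<nav>
  <a href="/">Home</a>
  <a href="/about" class="active" aria-current="page">About us</a>
</nav>

/* style whichever link is marked as the current path */
nav a.active,
nav a[aria-current="page"] {
  color: red;
  border-bottom: 2px solid red;
}

If "navlink" refers to React Router's NavLink component, it adds an active class to the link matching the current route automatically, so the CSS rule above is all that's needed.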
How to style path link (active visited)
After I navigate to another page, e.g. "About us", how do I style the navlink (anchor tag) for the path which is active? I tried ::after and :visited, but can't find what I want.
[]
[]
[ "You can add different inline CSS on the different pages.\n<ul>\n <li>\n <a href=\"#\">Home</a>\n </li>\n <li>\n <a href=\"#\" style=\"color: red;\">About us</a>\n </li>\n <li>\n <a href=\"#\">Contact</a>\n </li>\n</ul>\n\nFor the about us page the link will be red. Adjust the style for other links on other pages.\n" ]
[ -1 ]
[ "css", "frontend", "html", "javascript" ]
stackoverflow_0074674189_css_frontend_html_javascript.txt
Q: Generate a card with random images I have a website where I have cards that have images. I have named the images "wp1", "wp2" and so on. I want the src to have wp(number).png generated at random.
<div class="mainWallpapersPanel">
    <div class="wallpaperCard">
        <a href="#"><img src="images/wp18.png" alt="" class="wallpaperIMG"></a>
        <h4><a href="#" class="downloadLink">download</a></h4>
    </div>
</div>

Above is the card which I have created using divs. The main div is "mainWallpapersPanel". I want there to be 18 of these cards in the main div, with each image src having the wp number generated at random.
A: You can use Math.random() with Math.floor() to generate integer numbers.
https://www.w3schools.com/js/js_random.asp
https://www.w3schools.com/jsref/jsref_random.asp
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/random
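To make that concrete, a minimal sketch (the container class, card markup, and the count of 18 are taken from the question; whether duplicate numbers are acceptable is an assumption):
// Build 18 cards, each pointing at a randomly chosen wp1.png … wp18.png.
// Math.random() can repeat numbers; shuffle an array of 1..18 instead if
// every wallpaper should appear exactly once.
const panel = document.querySelector('.mainWallpapersPanel');

for (let i = 0; i < 18; i++) {
  const n = Math.floor(Math.random() * 18) + 1; // integer in 1..18
  const card = document.createElement('div');
  card.className = 'wallpaperCard';
  card.innerHTML = `
    <a href="#"><img src="images/wp${n}.png" alt="" class="wallpaperIMG"></a>
    <h4><a href="#" class="downloadLink">download</a></h4>`;
  panel.appendChild(card);
}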
Generate a card with random images
I have a website where I have cards that have images. I have named the images "wp1", "wp2" and so on. I want the src to have wp(number).png generated at random.
<div class="mainWallpapersPanel">
    <div class="wallpaperCard">
        <a href="#"><img src="images/wp18.png" alt="" class="wallpaperIMG"></a>
        <h4><a href="#" class="downloadLink">download</a></h4>
    </div>
</div>

Above is the card which I have created using divs. The main div is "mainWallpapersPanel". I want there to be 18 of these cards in the main div, with each image src having the wp number generated at random.
[ "You can use Math.random() with Math.floor() to generate integer numbers.\nhttps://www.w3schools.com/js/js_random.asp\nhttps://www.w3schools.com/jsref/jsref_random.asp\nhttps://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/random\n" ]
[ 0 ]
[]
[]
[ "css", "frontend", "html", "javascript" ]
stackoverflow_0074667553_css_frontend_html_javascript.txt
Q: React Native firebase email verification user.user.sendEmailVerification is not a function I'm trying to create an app using Firebase. Here is my code:
const user = await createUserWithEmailAndPassword(auth, email, password)
await user.user.sendEmailVerification()

The user is created in Firebase Authentication, but this is happening: TypeError: user.user.sendEmailVerification is not a function.
A: You can use async-await syntax with try-catch this way:
try {
  const { user } = await auth().createUserWithEmailAndPassword(email, password);
  await user.sendEmailVerification();
  return user;
} catch(e) {
  return e;
}

A: Since Firebase 9:
const auth = getAuth();
createUserWithEmailAndPassword(auth, email, password)
  .then(userCredential => userCredential.user)
  .then(user => { sendEmailVerification(user) })
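Combining the two answers into one modular (v9+) sketch with async/await — the question's call signature shows the modular API, where sendEmailVerification is a standalone function that takes the user, not a method on it:
import { getAuth, createUserWithEmailAndPassword, sendEmailVerification } from 'firebase/auth';

const auth = getAuth();

async function signUp(email, password) {
  // userCredential.user is the newly created user
  const { user } = await createUserWithEmailAndPassword(auth, email, password);
  await sendEmailVerification(user);
  return user;
}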
React Native firebase email verification user.user.sendEmailVerification is not a function
I'm trying to create an app using Firebase. Here is my code:
const user = await createUserWithEmailAndPassword(auth, email, password)
await user.user.sendEmailVerification()

The user is created in Firebase Authentication, but this is happening: TypeError: user.user.sendEmailVerification is not a function.
[ "You can use async-await syntax with try-catch this way :\n\n\ntry {\n const { user } = await auth().createUserWithEmailAndPassword(email, password);\n await user.sendEmailVerification();\n return user;\n} catch(e) {\n return e;\n}\n\n\n\n", "since firebase 9 :\nauth =getAuth();\ncreateUserWithEmailAndPassword(auth,email,password)\n .then(userCredential=>userCredential.user)\n .then(user=>{sendEmailVerification(user)})\n\n" ]
[ 0, 0 ]
[]
[]
[ "javascript", "react_native", "reactjs" ]
stackoverflow_0073680369_javascript_react_native_reactjs.txt
Q: Python - open file in paint with whitespaces I am trying to open an image in Paint with Python; however, the path contains a space, and Paint throws an error saying it cannot find the path, because the string was split at the first space. Can someone tell me how to solve this without changing the path? Here is my code:
import subprocess, os

paintImage = "C:\\Users\\Me\\MY Images\\image.png"

# get the path of Paint:
paintPath = os.path.splitdrive(os.path.expanduser("~"))[0]+r"\WINDOWS\system32\mspaint.exe"

# open the file with Paint
subprocess.Popen("%s %s" % (paintPath, paintImage))

However, Paint opens and says that C:\Users\Me\MY contains an invalid path, because it has not accounted for the space. I have tried replacing the space with %20, but that does not work. Thanks
A: You can rewrite the following line
paintImage = "C:\\Users\\Me\\MY Images\\image.png" 

to
paintImage = "C:\\Users\\Me\\MYImages\\image.png"

MYImages should be the new name of the folder, with no spaces.
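If the folder name cannot be changed — which is what the question asks — passing the program and the file as a list of arguments avoids the string splitting entirely, since each list element is handed to the process as its own argument. A minimal sketch using the paths from the question:
import os
import subprocess

paintImage = "C:\\Users\\Me\\MY Images\\image.png"  # path may contain spaces
paintPath = os.path.splitdrive(os.path.expanduser("~"))[0] + r"\WINDOWS\system32\mspaint.exe"

# A list bypasses shell-style word splitting, so no quoting is needed.
subprocess.Popen([paintPath, paintImage])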
Python - open file in paint with whitespaces
I am trying to open an image in Paint with Python; however, the path contains a space, and Paint throws an error saying it cannot find the path, because the string was split at the first space. Can someone tell me how to solve this without changing the path? Here is my code:
import subprocess, os

paintImage = "C:\\Users\\Me\\MY Images\\image.png"

# get the path of Paint:
paintPath = os.path.splitdrive(os.path.expanduser("~"))[0]+r"\WINDOWS\system32\mspaint.exe"

# open the file with Paint
subprocess.Popen("%s %s" % (paintPath, paintImage))

However, Paint opens and says that C:\Users\Me\MY contains an invalid path, because it has not accounted for the space. I have tried replacing the space with %20, but that does not work. Thanks
[ "You can rewrite the following line\npaintImage = \"C:\\\\Users\\\\Me\\MY Images\\\\image.png\" \n\nto\npaintImage = \"C:\\\\Users\\\\Me\\MYImages\\\\image.png\"\n\nMYImages should be the new name of the folder no spaces.\n" ]
[ 0 ]
[]
[]
[ "image", "paint", "python" ]
stackoverflow_0063423058_image_paint_python.txt
Q: Astro: Eslint was configured to run on I was trying to set my ESLint configuration up with React, TypeScript and Astro, but it seems I can't shake this error off: Here is my .eslintrc.cjs, which looks like:
module.exports = {
  env: {
    browser: true,
    es2021: true
  },
  extends: [
    'standard-with-typescript',
    'plugin:astro/recommended'
  ],
  overrides: [
    {
      files: ['*.astro'],
      parser: 'astro-eslint-parser',
      parserOptions: {
        parser: '@typescript-eslint/parser',
        extraFileExtensions: ['.astro']
      },
      rules: {
      }
    },
    {
      files: ['.jsx', '.tsx'],
      extends: [
        'plugin:react/recommended'
      ],
      plugins: [
        'react'
      ],
      rules: {
        'react/jsx-wrap-multilines': [2, {
          declaration: 'parens-new-line',
          assignment: 'parens-new-line',
          return: 'parens-new-line',
          arrow: 'parens-new-line',
          condition: 'ignore',
          logical: 'ignore',
          prop: 'ignore'
        }],
        'react/react-in-jsx-scope': 'off',
        'react/jsx-indent': [1, 2]
      }
    }
  ],
  parserOptions: {
    ecmaVersion: 'latest',
    sourceType: 'module',
    project: './tsconfig.json'
  },
  rules: {
    indent: 'off',
    '@typescript-eslint/indent': [1, 2],
    'no-tabs': 'off',
    '@typescript-eslint/explicit-function-return-type': 'off',
    '@typescript-eslint/no-unused-vars': 'warn',
    '@typescript-eslint/consistent-type-definitions': ['error', 'type'],
    '@typescript-eslint/naming-convention': 'off',
    '@typescript-eslint/no-floating-promises': 'off',
    '@typescript-eslint/triple-slash-reference': 'off'
  }
}

and here's my tsconfig.json
{
  "extends": "astro/tsconfigs/strictest",
  "compilerOptions": {
    "target": "ESNext",
    "useDefineForClassFields": true,
    "lib": ["DOM", "DOM.Iterable", "ESNext"],
    "allowJs": false,
    "skipLibCheck": true,
    "esModuleInterop": false,
    "baseUrl": "./src/",
    "allowSyntheticDefaultImports": true,
    "strict": true,
    "forceConsistentCasingInFileNames": true,
    "module": "ESNext",
    "moduleResolution": "Node",
    "resolveJsonModule": true,
    "isolatedModules": true,
    "noEmit": true,
    "jsx": "react-jsx"
  },
  "include": ["src"]
}

Any help is appreciated. I tried to follow the Astro ESLint setup guide at https://github.com/ota-meshi/eslint-plugin-astro, but I'm still getting those errors.
A: You also need to add/update "include": ["**/*.ts", "**/*.tsx", "**/*.astro"] in your tsconfig.json
A: You need to add extraFileExtensions: ['.astro'], to parserOptions in the top scope of .eslintrc.cjs!
That's because standard-with-typescript overwrites parserOptions.
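Putting both answers into code — a sketch of just the parts that change, with everything else staying as in the question:
// .eslintrc.cjs — top-level parserOptions (standard-with-typescript
// overwrites these, so the extra extension must be declared here too)
parserOptions: {
  ecmaVersion: 'latest',
  sourceType: 'module',
  project: './tsconfig.json',
  extraFileExtensions: ['.astro']
},

// tsconfig.json — widen "include" so the TS project covers .astro files
"include": ["**/*.ts", "**/*.tsx", "**/*.astro"]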
Astro: Eslint was configured to run on
I was trying to set my ESLint configuration up with React, TypeScript and Astro, but it seems I can't shake this error off: Here is my .eslintrc.cjs, which looks like:
module.exports = {
  env: {
    browser: true,
    es2021: true
  },
  extends: [
    'standard-with-typescript',
    'plugin:astro/recommended'
  ],
  overrides: [
    {
      files: ['*.astro'],
      parser: 'astro-eslint-parser',
      parserOptions: {
        parser: '@typescript-eslint/parser',
        extraFileExtensions: ['.astro']
      },
      rules: {
      }
    },
    {
      files: ['.jsx', '.tsx'],
      extends: [
        'plugin:react/recommended'
      ],
      plugins: [
        'react'
      ],
      rules: {
        'react/jsx-wrap-multilines': [2, {
          declaration: 'parens-new-line',
          assignment: 'parens-new-line',
          return: 'parens-new-line',
          arrow: 'parens-new-line',
          condition: 'ignore',
          logical: 'ignore',
          prop: 'ignore'
        }],
        'react/react-in-jsx-scope': 'off',
        'react/jsx-indent': [1, 2]
      }
    }
  ],
  parserOptions: {
    ecmaVersion: 'latest',
    sourceType: 'module',
    project: './tsconfig.json'
  },
  rules: {
    indent: 'off',
    '@typescript-eslint/indent': [1, 2],
    'no-tabs': 'off',
    '@typescript-eslint/explicit-function-return-type': 'off',
    '@typescript-eslint/no-unused-vars': 'warn',
    '@typescript-eslint/consistent-type-definitions': ['error', 'type'],
    '@typescript-eslint/naming-convention': 'off',
    '@typescript-eslint/no-floating-promises': 'off',
    '@typescript-eslint/triple-slash-reference': 'off'
  }
}

and here's my tsconfig.json
{
  "extends": "astro/tsconfigs/strictest",
  "compilerOptions": {
    "target": "ESNext",
    "useDefineForClassFields": true,
    "lib": ["DOM", "DOM.Iterable", "ESNext"],
    "allowJs": false,
    "skipLibCheck": true,
    "esModuleInterop": false,
    "baseUrl": "./src/",
    "allowSyntheticDefaultImports": true,
    "strict": true,
    "forceConsistentCasingInFileNames": true,
    "module": "ESNext",
    "moduleResolution": "Node",
    "resolveJsonModule": true,
    "isolatedModules": true,
    "noEmit": true,
    "jsx": "react-jsx"
  },
  "include": ["src"]
}

Any help is appreciated. I tried to follow the Astro ESLint setup guide at https://github.com/ota-meshi/eslint-plugin-astro, but I'm still getting those errors.
[ "You need also to add/update \"include\": [\"**/*.ts\", \"**/*.tsx\", \"**/*.astro\"] in your tsconfig.json\n", "You need to add extraFileExtensions: ['.astro'], to parserOptions in top scope of .eslintrc.cjs!\nThat's because standard-with-typescript overwrites parserOptions.\n" ]
[ 0, 0 ]
[]
[]
[ "astrojs", "eslint", "reactjs", "typescript", "typescript_eslint" ]
stackoverflow_0074411996_astrojs_eslint_reactjs_typescript_typescript_eslint.txt
Q: If I am using onclick on a button, why does it redirect me to the same page I am on (reactjs) How can clicking an anchor tag in the card redirect me to another page with more details of the current card? For example, clicking it should open a new tab with the current (clicked) card's details. Here is an API for one item: https://api.npoint.io/d275425a434e02acf2f7/News/0 Snippets of code, and a link that works: https://codesandbox.io/s/sweet-spence-1tl4y5?file=/src/App.js My API: https://api.npoint.io/d275425a434e02acf2f7 For rendering all items in cards:
filteredCat?.map((list) => {
  if (list.showOnHomepage === "yes") {
    const date = format(
      new Date(list.publishedDate),
      "EEE dd MMM yyyy"
    );
    const showCat = news.map((getid) => {
      if (getid.id == list.categoryID) return getid.name;
    });
    // const rec = list.publishedDate.sort((date1, date2) => date1 - date2);

    return (
      <Card
        className=" extraCard col-lg-3"
        style={{ width: "" }}
        id={list.categoryID}
      >
        <Card.Img variant="top" src={list.urlToImage} alt="Image" />
        <Card.Body>
          <Card.Title className="textTitle">
            {list.title}
          </Card.Title>
          <Card.Text></Card.Text>
          <small className="text-muted d-flex">
            <FaRegCalendarAlt
              className="m-1"
              style={{ color: "#0aceff" }}
            />
            {date}
          </small>
          <div
            style={{ color: "#0aceff" }}
            className="d-flex justify-content-between"
          >
            <Button variant="" className={classes["btn-cat"]}>
              {showCat}
            </Button>
            <div>
              <FaRegHeart />
              <FaLink />
            </div>
          </div>
        </Card.Body>
      </Card>
    );
  }
})
}
</div>
}

I tried this technique, but it redirects me to the same page, not a new tab — and the page is empty!
function handleClick(event) {
  event.preventDefault();
  window.location.href = 'src/comp/newsitem';
}

function news() {
  return (
    <a href="#" onClick={handleClick}>
      Click me to redirect!
    </a>
  );
}

A: You can use the window.location.href property to redirect to a new page when an anchor tag is clicked. You can do this by setting the href property in the handleClick function, like this:
function handleClick(event) {
  // Prevent the default behavior of the anchor tag
  event.preventDefault();

  // Set the new page's URL using the window.location.href property
  window.location.href = 'src/comp/newsitem';
}

Then, in your React component, you can add the onClick attribute to the anchor tag and specify the handleClick function as the event handler, like this:
function news() {
  return (
    <a href="#" onClick={handleClick}>
      Click me to redirect!
    </a>
  );
}

This will cause the handleClick function to be called when the anchor tag is clicked, and the window.location.href property will be set to the URL of the page you want to open.
A: React provides SPAs, which means that you can load the content of different pages without any refresh or redirect. So there is no need to redirect to another page unless you really want to open the page in a new tab.
Also, if you want to have multiple page paths, you should use react-router-dom.
So first of all you should add your routes to your app. Add a pages.js file with this content:
Add a pages.js file with this content: import { BrowserRouter, Routes, Route } from 'react-router-dom'; import App from './App'; import News from './News'; import NewsItem from './NewsItem'; function Pages() { return ( <BrowserRouter> <Routes> <Route path='/news' element={<News />} /> <Route path='/newsItem' element={<NewsItem />} /> <Route path='/' element={<App />} /> </Routes> </BrowserRouter> ); } export default Pages; And then import it to your index.js file: import { StrictMode } from "react"; import { createRoot } from "react-dom/client"; import Pages from "./Pages"; const rootElement = document.getElementById("root"); const root = createRoot(rootElement); root.render( <StrictMode> <Pages /> </StrictMode> ); NewsItem file: function NewsItem() { return <div>News Item</div>; } export default NewsItem; And finally when you want to navigate the News page, do this: import { Link } from 'react-router-dom' <Link to='/news' /> Or if you want to open in new tab: <Link to='/news' target='_blank' /> And for navigating to NewsItem page (without any a tag): <Link to="/newsItem">News Item</Link>
If I am using onclick to a button why it redirects me to the same page i am in (reactjs)
how to click on anchor tag in the card and redirects me to another page with more details of the current card example click on opens new tab with current (clicked) card details here is an api for item https://api.npoint.io/d275425a434e02acf2f7/News/0 snippets of code also a link that works https://codesandbox.io/s/sweet-spence-1tl4y5?file=/src/App.js my api https://api.npoint.io/d275425a434e02acf2f7 for rendering all items in cards filteredCat?.map((list) => { if (list.showOnHomepage === "yes") { const date = format( new Date(list.publishedDate), "EEE dd MMM yyyy" ); const showCat = news.map((getid) => { if (getid.id == list.categoryID) return getid.name; }); // const rec = list.publishedDate.sort((date1, date2) => date1 - date2); return ( <Card className=" extraCard col-lg-3" style={{ width: "" }} id={list.categoryID} > <Card.Img variant="top" src={list.urlToImage} alt="Image" /> <Card.Body> <Card.Title className="textTitle"> {list.title} </Card.Title> <Card.Text></Card.Text> <small className="text-muted d-flex"> <FaRegCalendarAlt className="m-1" style={{ color: "#0aceff" }} /> {date} </small> <div style={{ color: "#0aceff" }} className="d-flex justify-content-between" > <Button variant="" className={classes["btn-cat"]}> {showCat} </Button> <div> <FaRegHeart /> <FaLink /> </div> </div> </Card.Body> </Card> ); } }) } </div> } I tried this technique but it does direct me to the same page not the new tab with empty page !! function handleClick(event) { event.preventDefault(); window.location.href = 'src/comp/newsitem'; } function news() { return ( <a href="#" onClick={handleClick}> Click me to redirect! </a> ); }
[ "You can use the window.location.href property to redirect to a new page when an anchor tag is clicked. You can do this by setting the href property in the handleClick function, like this:\nfunction handleClick(event) {\n // Prevent the default behavior of the anchor tag\n event.preventDefault();\n\n // Set the new page's URL using the window.location.href property\n window.location.href = 'src/comp/newsitem';\n}\n\nThen, in your React component, you can add the onClick attribute to the anchor tag and specify the handleClick function as the event handler, like this:\nfunction news() {\n return (\n <a href=\"#\" onClick={handleClick}>\n Click me to redirect!\n </a>\n );\n}\n\nThis will cause the handleClick function to be called when the anchor tag is clicked, and the window.location.href property will be set to the URL of the page you want to open.\n", "React provides SPAs which means that you can load your content of different pages without any refresh or redirect. So no need to redirect to another page unless you really want to open the page in a new tab.\nAlso if you want to have multiple page paths, you should use react-router-dom.\nSo first of all you should add your routes to your app. Add a pages.js file with this content:\nimport { BrowserRouter, Routes, Route } from 'react-router-dom';\nimport App from './App';\nimport News from './News';\nimport NewsItem from './NewsItem';\n\nfunction Pages() {\n\n return (\n <BrowserRouter>\n <Routes>\n <Route path='/news' element={<News />} />\n <Route path='/newsItem' element={<NewsItem />} />\n <Route path='/' element={<App />} />\n </Routes>\n </BrowserRouter>\n );\n}\n\nexport default Pages;\n\nAnd then import it to your index.js file:\nimport { StrictMode } from \"react\";\nimport { createRoot } from \"react-dom/client\";\nimport Pages from \"./Pages\";\n\nconst rootElement = document.getElementById(\"root\");\nconst root = createRoot(rootElement);\n\nroot.render(\n <StrictMode>\n <Pages />\n </StrictMode>\n);\n\nNewsItem file:\nfunction NewsItem() {\n return <div>News Item</div>;\n}\nexport default NewsItem;\n\nAnd finally when you want to navigate the News page, do this:\nimport { Link } from 'react-router-dom' \n\n<Link to='/news' />\n\nOr if you want to open in new tab:\n<Link to='/news' target='_blank' />\n\nAnd for navigating to NewsItem page (without any a tag):\n<Link to=\"/newsItem\">News Item</Link>\n\n" ]
[ 1, 0 ]
[]
[]
[ "reactjs" ]
stackoverflow_0074673863_reactjs.txt
Q: How to use reflection to find annotated Lambda Functions I have an application that has many declared Lambdas. I've added an annotation to them so that I can use reflection to find all the functions marked with the annotation. They are all defined as: @FooFunction("abc") public static Function<Task, Result> myFunc = task -> {... returns new Result} At startup, my application uses reflection to find all of the annotated functions and add them to the hashmap. static HashMap<String, Function<Task, Result>> funcMap = new HashMap<>(); static { Reflections reflections = new Reflections("my.package", Scanners.values()); var annotated = reflections.getFieldsAnnotatedWith(FooFunction.class); annotated.forEach(aField -> { try { var annot = aField.getAnnotation(FooFunction.class); var key = annot.value(); funcMap.put(key, aField.get(null)); } catch (Exception e) { ...; } } The above code definitely won't work, especially on the put since aField.get(null) returns an Object. If I cast the object to Function<Task,Result>, I get an unchecked cast warning. No matter how I circle around it, I can't get rid of the warning (without using Suppress). I've tried changing the Function<Foo, Bar> to something more generic like Function<?,?> but that took me down another rabbit hole. All of the functions are declared as static since they really don't need to belong to a specific class. They are grouped under various classes simply for organizational purposes. The underlying objective is: the API will receive a list of tasks. There are about 100 different Task types. Each Task has an "id" field which is used to determine which Function should be used to process that Task. It looks something like this: var results = Arrays.stream(request.getTasks()) .map(task -> functionMap.getOrDefault(task.getId(), unknownTaskFn).apply(task)) .toList(); My questions: Is this an antipattern? If so, is there a better prescribed pattern? How can I go from an Object to a Function<Task,Result> properly to put it into the map? Thanks A: Casting is inevitable, because Field.get returns Object by design, but it could be done without warnings. I would also suggest defining a custom interface public interface TaskResultFunction extends Function<Task, Result> { } and using it for the lambda declarations @FooFunction("abc") public static TaskResultFunction myFunc = task -> {... returns new Result} (otherwise we will have to deal with ParameterizedTypeReference, but in this case it is not necessary and overcomplicated) Map<String, Function<Task, Result>> funcMap = ... // or more strict Map<String, TaskResultFunction> funcMap = ... //... if (TaskResultFunction.class.isAssignableFrom(field.getType())) { TaskResultFunction fn = (TaskResultFunction) field.get(null); taskResultFunctions.put(key, fn); }
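One prerequisite the snippets above rely on implicitly: the annotation must be retained at runtime, or getFieldsAnnotatedWith will find nothing. A minimal sketch of a compatible declaration (the package name is a placeholder, like the one in the question):

package my.pkg; // must live under the package prefix passed to Reflections

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.RUNTIME) // keep the annotation visible to reflection
@Target(ElementType.FIELD)          // it is placed on the static fields holding the lambdas
public @interface FooFunction {
    String value(); // the dispatch key, e.g. "abc"
}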
How to use reflection to find annotated Lambda Functions
I have an application that has many declared Lambdas. I've added an annotation to them so that I can use reflection to find all the functions marked with the annotation. They are all defined as: @FooFunction("abc") public static Function<Task, Result> myFunc = task -> {... returns new Result} At startup, my application uses reflection to find all of the annotated functions and add them to the hashmap. static HashMap<String, Function<Task, Result>> funcMap = new HashMap<>(); static { Reflections reflections = new Reflections("my.package", Scanners.values()); var annotated = reflections.getFieldsAnnotatedWith(FooFunction.class); annotated.forEach(aField -> { try { var annot = aField.getAnnotation(FooFunction.class); var key = annot.value(); funcMap.put(key, aField.get(null); } catch (Exception e) { ...; } } The above code definitely won't work, especially on the put since aField.get(null) returns an Object. If I cast the object to Function<Task,Result>, I get an unchecked cast warning. No matter how I circle around it, I can't get rid of the warning (without using Suppress). I've tried changing the Function<Foo, Bar> to something more generic like Function<?,?> but that took me down another rabbit hole. All of the functions are declared as static since they really don't need to belong to a specific class. They are grouped under various classes simply for organizational purposes. The underlying objective is: the API will receive a list of tasks. There are about 100 different Task types. Each Task has an "id" field which is used to determine which Function should be used to process that Task. It looks something like this: var results = Arrays.stream(request.getTasks()) .map(task -> functionMap.getOrDefault(task.getId(), unknownTaskFn).apply(task) .toList(); My questions: Is this an antipattern? If so, is there a better prescribed pattern? How can I go from an Object to a Function<Task,Result> properly to put it into the map? Thanks
[ "Casting is inevitable, because Field.get returns Object by design, but it could be done without warnings.\nI would also suggest define a custom interface\npublic interface TaskResultFunction extends Function<Task, Result> {\n}\n\nand use it for lambda declarations\n@FooFunction(\"abc\")\npublic static TaskResultFunction myFunc = task -> {... returns new Result}\n\n(otherwise we will have to deal with ParameterizedTypeReference, but in this case it is not necessary and overcomplicated)\n Map<String, Function<Task, String>> funcMap = ...\n\n // or more strict\n Map<String, TaskResultFunction> funcMap = ...\n\n //...\n\n if (TaskResultFunction.class.isAssignableFrom(field.getType())) {\n TaskResultFunction fn = (TaskResultFunction) field.get(null);\n taskResultFunctions.put(key, fn);\n }\n\n" ]
[ 0 ]
[]
[]
[ "functional_programming", "java", "lambda", "reflection" ]
stackoverflow_0074672812_functional_programming_java_lambda_reflection.txt
Q: How to implement PIP (Picture in Picture) Mode in React Native? I need to implement PIP mode using react native, but it should update the date every second while the user is in PIP mode. I tried using the following packages but they did not work: react-native-pip-android react-native-picture-in-picture A: RN does not provide a built-in API for implementing PIP mode. So your only option is the react-native-video package, which provides a PIP prop that can be used to enable PIP mode for video playback on iOS and Android. import Video from 'react-native-video'; class MyComponent extends React.Component { render() { return ( <Video source={require('./my-video.mp4')} PIP /> ); } } So the PIP prop is set to true to enable PIP mode for the video. But this prop is only supported on iOS and Android, and won't work on other platforms. To update the date every second while the user is in PIP mode, you can use the setInterval method to call a function that updates the date at regular intervals. You can then use the Text component to display the updated date on the screen. class MyComponent extends React.Component { constructor(props) { super(props); this.state = { date: new Date(), }; } componentDidMount() { this.interval = setInterval(() => { this.setState({ date: new Date(), }); }, 1000); } componentWillUnmount() { clearInterval(this.interval); } render() { const { date } = this.state; return ( <View> <Video source={require('./my-video.mp4')} PIP /> <Text>{date.toString()}</Text> </View> ); } } Here the setInterval method is used to call a function that updates the date state every second. This causes the date to be updated on the screen, allowing the user to see the current time while in PIP mode.
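For reference, the same one-second clock can be written as a functional component with hooks; this is a minimal sketch of just the timer logic, with the Video usage from the answer above left out:

import React, { useEffect, useState } from 'react';
import { Text, View } from 'react-native';

function Clock() {
  const [date, setDate] = useState(new Date());

  useEffect(() => {
    // Tick once per second; clear the timer when the component unmounts.
    const interval = setInterval(() => setDate(new Date()), 1000);
    return () => clearInterval(interval);
  }, []);

  return (
    <View>
      <Text>{date.toString()}</Text>
    </View>
  );
}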
How to implement PIP (Picture in Picture) Mode in React Native?
I need to implement PIP mode using react native, but it should update date every second while user enters in PIP mode. I tried using following packages but not worked: react-native-pip-android react-native-picture-in-picture
[ "RN does not provide a built-in API for implementing PIP mode. So your only option is the react-native-video package, which provides a PIP prop that can be used to enable PIP mode for video playback on iOS and Android.\nimport Video from 'react-native-video';\n\nclass MyComponent extends React.Component {\n render() {\n return (\n <Video\n source={require('./my-video.mp4')}\n PIP\n />\n );\n }\n}\n\nSo PIP prop is set to true to enable PIP mode for the video. But this prop is only supported on iOS and Android, and won't animate on other platforms.\nTo update the date every second while the user is in PIP mode, you can use the setInterval method to call a function that updates the date at regular intervals. You can then use the Text component to display the updated date on the screen.\nclass MyComponent extends React.Component {\n constructor(props) {\n super(props);\n this.state = {\n date: new Date(),\n };\n }\n\n componentDidMount() {\n this.interval = setInterval(() => {\n this.setState({\n date: new Date(),\n });\n }, 1000);\n }\n\n componentWillUnmount() {\n clearInterval(this.interval);\n }\n\n render() {\n const { date } = this.state;\n return (\n <View>\n <Video\n source={require('./my-video.mp4')}\n PIP\n />\n <Text>{date.toString()}</Text>\n </View>\n );\n }\n}\n\nthe setInterval method is used to call a function that updates the date state every second. This causes the date to be updated on the screen, allowing the user to see the current time while in PIP mode.\n" ]
[ 0 ]
[]
[]
[ "android", "java", "npm", "picture_in_picture", "react_native" ]
stackoverflow_0074673798_android_java_npm_picture_in_picture_react_native.txt
Q: How can I show the things I searched from Google Search Engine in the card layout? I created a search engine with Google search engine and added it to my project. However, I am currently stuck. I want to stylize the results in a card layout and show them side by side. How can I achieve this? HTML: <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Book Finder</title> <link rel="stylesheet" href="./style.css"> </head> <body> <div class="header"> <div id="title"> Book Finder </div> </div> <link rel="preconnect" href="https://fonts.googleapis.com"><link rel="preconnect" href="https://fonts.gstatic.com" crossorigin><link href="https://fonts.googleapis.com/css2?family=Cormorant:wght@300&family=Fira+Code:wght@500&family=Josefin+Slab:wght@200&family=Kanit:wght@300&family=MedievalSharp&family=Mulish&family=Radio+Canada:wght@300&family=Smythe&family=Zen+Dots&display=swap" rel="stylesheet"> <script async src="https://cse.google.com/cse.js?cx=d4f7eccee00f1434d"> </script> <div class="gcse-search"></div> </body> </html> > .gsc-result-info { > /* background-color: red; */ > font-family: 'Kanit', sans-serif;; > color: blue; > } > > .gs-title { > font-family: 'Kanit', sans-serif; > > height: 100%; > width: 100%; > position: relative; > transition: transform 1500ms; > transform-style: preserve-3d; > > > } > > .gsc-cursor-page { > font-size: 1.5em; > padding: 4px 8px; > border: 2px solid #ccc; > > > } > > .gs-image-box gs-web-image-box gs-web-image-box-portrait { > height: 100%; > width: 100%; > position: relative; > transition: transform 1500ms; > transform-style: preserve-3d; > } My aim is to shape the outputs as I want, put them side by side and display them in the layout. But I can't do what I want. A: Use the following property in the class of the result wrapper. float:left; display:inline-block;
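A slightly fuller sketch of the card idea, assuming Google CSE's default result markup in which each result carries the gsc-webResult and gsc-result classes (verify the exact class names in your browser's inspector, since CSE markup can change):

/* Lay the results out as a wrapping row of cards. */
.gsc-results {
  display: flex;
  flex-wrap: wrap;
  gap: 16px;
}

/* Style each individual result as a card. */
.gsc-webResult.gsc-result {
  width: 280px;
  padding: 12px;
  border: 1px solid #ccc;
  border-radius: 8px;
  box-shadow: 0 2px 6px rgba(0, 0, 0, 0.1);
}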
How can I show the things I searched from Google Search Engine in the card layout?
I created a search engine with Google search engine and added it to my project. However, I am currently stuck. I want to stylize the results in a card layout and show them side by side. How can I achieve this? HTML: <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Book Finder</title> <link rel="stylesheet" href="./style.css"> </head> <body> <div class="header"> <div id="title"> Book Finder </div> </div> <link rel="preconnect" href="https://fonts.googleapis.com"><link rel="preconnect" href="https://fonts.gstatic.com" crossorigin><link href="https://fonts.googleapis.com/css2?family=Cormorant:wght@300&family=Fira+Code:wght@500&family=Josefin+Slab:wght@200&family=Kanit:wght@300&family=MedievalSharp&family=Mulish&family=Radio+Canada:wght@300&family=Smythe&family=Zen+Dots&display=swap" rel="stylesheet"> <script async src="https://cse.google.com/cse.js?cx=d4f7eccee00f1434d"> </script> <div class="gcse-search"></div> </body> </html> > .gsc-result-info { > /* background-color: red; */ > font-family: 'Kanit', sans-serif;; > color: blue; > } > > .gs-title { > font-family: 'Kanit', sans-serif; > > height: 100%; > width: 100%; > position: relative; > transition: transform 1500ms; > transform-style: preserve-3d; > > > } > > .gsc-cursor-page { > font-size: 1.5em; > padding: 4px 8px; > border: 2px solid #ccc; > > > } > > .gs-image-box gs-web-image-box gs-web-image-box-portrait { > height: 100%; > width: 100%; > position: relative; > transition: transform 1500ms; > transform-style: preserve-3d; > } My aim is to shape the outputs as I want, put them side by side and display them in the layout. But I can't do what I want.
[ "Use the following property in the class of the result wrapper.\nfloat:left;\ndisplay:inline-block;\n\n" ]
[ 0 ]
[]
[]
[ "cardlayout", "css", "google_api", "google_custom_search", "javascript" ]
stackoverflow_0074673848_cardlayout_css_google_api_google_custom_search_javascript.txt
Q: Is gdbus part of Bluez, glib, or neither? I'm following the advice of Zimano on using the Bluez client as an example to implement Bluetooth in my Linux application. I have installed: libbluetooth-dev libglib2.0-dev libdbus-1-dev The Bluez client example uses a D-Bus helper library that is included as part of Bluez in a gdbus folder when the source code is downloaded. I have looked at it for a few hours and I think if I want to follow the Bluez client example, I need to add and compile the gdbus source from the Bluez source with my program. My question is, do I have that wrong? Is that gdbus included elsewhere? The naming is so close to files in glib-2.0/gio that I am concerned that I am missing something. A: GDBus is part of GIO, which is distributed with GLib. Based on the package names you've provided I'm guessing you are using a Debian-derived distribution, so libglib2.0-dev is the package you need. A: This is indeed quite a mess for naming. It appears to be that: Official Glib Dbus library is named "GDBus", BUT it is part of "GIO" => thus you would (*) need (<glib.h> and) <gio/gio.h>, not <gdbus.h> Bluez "gdbus.h" is their own local support library, not found anywhere in binary form as such; also, their license is restrictive for free inclusion. (*) But of course the Bluez client still remains dependent on their own "gdbus.h" support library.
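To see the distinction in practice, here is a minimal GIO GDBus program that connects to the system bus (where BlueZ lives) and prints the connection's unique name; it needs nothing from the BlueZ tree:

/* Build with: gcc main.c $(pkg-config --cflags --libs gio-2.0) */
#include <gio/gio.h>
#include <stdio.h>

int main(void) {
    GError *error = NULL;
    /* BlueZ exposes its objects on the system bus, so connect there. */
    GDBusConnection *conn = g_bus_get_sync(G_BUS_TYPE_SYSTEM, NULL, &error);
    if (conn == NULL) {
        fprintf(stderr, "Failed to connect: %s\n", error->message);
        g_error_free(error);
        return 1;
    }
    printf("Connected as %s\n", g_dbus_connection_get_unique_name(conn));
    g_object_unref(conn);
    return 0;
}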
Is gdbus part of Bluez, glib, or neither?
I'm following the advice of Zimano on using the Bluez client as an example to implement Bluetooth in my Linux application. I have installed: libbluetooth-dev libglib2.0-dev libdbus-1-dev The Bluez client example uses a D-Bus helper library that is included as part of Bluez in a gdbus folder when the soure code is downloaded. I have looked at it for a few hours and I think if I want to follow the Bluez client example, I need to add and compile the gdbus source from the Bluez source with my program. My question is, do I have that wrong? Is that gdbus included elsewhere? The naming is so close to files in glib-2.0/gio that I am concerned that I am missing something.
[ "GDBus is part of GIO, which is distributed with GLib.\nBased on the package names you've provided I'm guessing you are using a Debian-derived distribution, so libglib2.0-dev is the package you need.\n", "This is indeed quite a mess for naming. It appears to be that:\n\nOfficial Glib Dbus library is named \"GDBus\", BUT it is part of \"GIO\" => thus you would (*) need (<glib.h> and) <gio/gio.h> , not <gdbus.h>\n\nBluez \"gdbus.h\" is their completely own local support library not found anywhere in binary form as such, also their license is restrictive for free inclusion.\n\n\n(*) But of course the Bluez client still remains dependent on their own \"gdbus.h\" support library.\n" ]
[ 3, 0 ]
[]
[]
[ "bluetooth", "bluez", "dbus", "glib" ]
stackoverflow_0036986621_bluetooth_bluez_dbus_glib.txt
Q: Web3: Trace multiple transactions across multiple chains on NodeJS backend I am trying to handle multiple chains in a single Node.js backend. When I was dealing with one chain, which is Ethereum, I just made sure my web3 was targeting that chain ID and doing the transfer. But now, I would like to target both Ethereum and Polygon, for which I need to ensure the transfer and transfer tracking are working on the correct chain. So I assume before I call the following code: const contract = new web3.eth.Contract(JSON.parse(r.rows[0].contract_abi), contractAddress); I would need to check the current chain ID and switch it accordingly if needed. Otherwise, it would not find the contract as it is not on the current chain. But while I do so, I might need to switch back as my previous transaction is being tracked by repeatedly calling the following code: await web3.eth.getTransactionReceipt(txHash); If I switch the chain, I think all ongoing tracking processes would be affected. Is it possible to keep some web3 on one chain while some other web3 on other chains, so that all processes can go simultaneously? Or would they not affect each other? I wonder how Opensea did this on their platform; they clearly have a tracking system across both chains. A: You can create multiple web3js instances - each of them connected to a different node provider. const web3Ethereum = new Web3("<ethereum_provider_url>"); const web3Polygon = new Web3("<polygon_provider_url>"); console.log( await web3Ethereum.eth.net.getId(), // prints network ID 1 await web3Polygon.eth.net.getId() // prints network ID 137 );
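Building on that, a rough sketch of per-chain receipt polling, so that tracking on one chain never disturbs the other (chain names, provider URLs and the polling interval are placeholders):

const chains = {
  ethereum: new Web3('<ethereum_provider_url>'),
  polygon: new Web3('<polygon_provider_url>'),
};

// Poll for a receipt on one specific chain; each chain keeps its own
// instance, so trackers on different chains run independently.
async function waitForReceipt(chain, txHash, intervalMs = 5000) {
  const web3 = chains[chain];
  for (;;) {
    const receipt = await web3.eth.getTransactionReceipt(txHash);
    if (receipt) return receipt;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}

// e.g. waitForReceipt('polygon', txHash) can run while Ethereum tracking continues.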
Web3: Trace multiple transactions across multiple chains on NodeJS backend
I am trying to handle multiple chains in a single Node.js backend. When I was dealing with one chain, which is Ethereum, I just made sure my web3 was targeting that chain ID and doing the transfer. But now, I would like to target both Ethereum and Polygon, which I need to ensure the transfer and transfer tracking are working on the correct chain. So I assume before I call the following code: const contract = new web3.eth.Contract(JSON.parse(r.rows[0].contract_abi), contractAddress); I would need to check the current chain ID and switch it accordingly if needed. Otherwise, it would not find the contract as it is not on the current chain. But while I do so, I might need to switch back as my previous transaction is being tracked by repeatedly calling the following code: await web3.eth.getTransactionReceipt(txHash); If I switch the chain, I think all ongoing tracking process would be affected. Is it possible to keep some web3 on one chain while some other web3 on other chains, so that all processes can go simultaneously? Or would they not affect each other? I wonder how Opensea did this on their platform, they clearly have a tracking system across both chains.
[ "You can create mutliple web3js instances - each of them connected to different node provider.\nconst web3Ethereum = new Web3(\"<ethereum_provider_url>\");\nconst web3Polygon = new Web3(\"<polygon_provider_url>\");\n\nconsole.log(\n await web3Ethereum.eth.net.getId(), // prints network ID 1\n await web3Polygon.eth.net.getId() // prints network ID 137\n);\n\n" ]
[ 2 ]
[]
[]
[ "ethereum", "polygon", "web3", "web3js" ]
stackoverflow_0074673051_ethereum_polygon_web3_web3js.txt
Q: Error status code 403 even with headers, Python Requests I am sending a request to some url. I copied the curl command and converted it with a curl-to-Python tool. So all the headers are included, but my request is not working and I receive status code 403 on printing and error code 1020 in the html output. The code is import requests headers = { 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:106.0) Gecko/20100101 Firefox/106.0', 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8', 'Accept-Language': 'en-US,en;q=0.5', # 'Accept-Encoding': 'gzip, deflate, br', 'DNT': '1', 'Connection': 'keep-alive', 'Upgrade-Insecure-Requests': '1', 'Sec-Fetch-Dest': 'document', 'Sec-Fetch-Mode': 'navigate', 'Sec-Fetch-Site': 'none', 'Sec-Fetch-User': '?1', } response = requests.get('https://v2.gcchmc.org/book-appointment/', headers=headers) print(response.status_code) print(response.cookies.get_dict()) with open("test.html",'w') as f: f.write(response.text) I also get cookies but not the desired response. I know I can do it with selenium but I want to know the reason behind this. Thanks in advance. Note: I have installed all the libraries with the same versions as on my computer, and it is still not working and throwing a 403 error. A: It works on my machine, so I am not sure what the problem is. However, when I want to send a request which does not work, I often check whether it works using Playwright. Playwright uses a browser driver and thus mimics your actual browser when visiting the page. It can be installed using pip install playwright. When you try it for the first time it may give an error which tells you to install the drivers; just follow the instructions to do so. With playwright you can try the following: from playwright.sync_api import sync_playwright url = 'https://v2.gcchmc.org/book-appointment/' ua = ( "Mozilla/5.0 (Windows NT 10.0; Win64; x64) " "AppleWebKit/537.36 (KHTML, like Gecko) " "Chrome/69.0.3497.100 Safari/537.36" ) with sync_playwright() as p: browser = p.chromium.launch(headless=False) page = browser.new_page(user_agent=ua) page.goto(url) page.wait_for_timeout(1000) html = page.content() print(html) Let me know if this works! A: The site is protected by cloudflare which aims to block, among other things, unauthorized data scraping. From What is data scraping? The process of web scraping is fairly simple, though the implementation can be complex. Web scraping occurs in 3 steps: First the piece of code used to pull the information, which we call a scraper bot, sends an HTTP GET request to a specific website. When the website responds, the scraper parses the HTML document for a specific pattern of data. Once the data is extracted, it is converted into whatever specific format the scraper bot’s author designed. You can use urllib instead of requests; it seems to be able to deal with cloudflare: import urllib.request req = urllib.request.Request('https://v2.gcchmc.org/book-appointment/') req.add_header('User-Agent', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:106.0) Gecko/20100101 Firefox/106.0') req.add_header('Accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8') req.add_header('Accept-Language', 'en-US,en;q=0.5') r = urllib.request.urlopen(req).read().decode('utf-8') with open("test.html", 'w', encoding="utf-8") as f: f.write(r)
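Another option sometimes used for Cloudflare-protected pages is the third-party cloudscraper package, which wraps requests; a minimal sketch (whether it gets through depends on the site's protection level):

import cloudscraper  # pip install cloudscraper

scraper = cloudscraper.create_scraper()  # behaves like a requests.Session
response = scraper.get('https://v2.gcchmc.org/book-appointment/')
print(response.status_code)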
Error status code 403 even with headers, Python Requests
I am sending a request to some url. I Copied the curl url to get the code from curl to python tool. So all the headers are included, but my request is not working and I recieve status code 403 on printing and error code 1020 in the html output. The code is import requests headers = { 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:106.0) Gecko/20100101 Firefox/106.0', 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8', 'Accept-Language': 'en-US,en;q=0.5', # 'Accept-Encoding': 'gzip, deflate, br', 'DNT': '1', 'Connection': 'keep-alive', 'Upgrade-Insecure-Requests': '1', 'Sec-Fetch-Dest': 'document', 'Sec-Fetch-Mode': 'navigate', 'Sec-Fetch-Site': 'none', 'Sec-Fetch-User': '?1', } response = requests.get('https://v2.gcchmc.org/book-appointment/', headers=headers) print(response.status_code) print(response.cookies.get_dict()) with open("test.html",'w') as f: f.write(response.text) I also get cookies but not getting the desired response. I know I can do it with selenium but I want to know the reason behind this. Thanks in advance. Note: I have installed all the libraries installed with request with same version as computer and still not working and throwing 403 error
[ "It works on my machine, so I am not sure what the problem is.\nHowever, when I want send a request which does not work, I often try if it works using playwright. Playwright uses a browser driver and thus mimics your actual browser when visiting the page. It can be installed using pip install playwright. When you try it for the first time it may give an error which tells you to install the drivers, just follow the instruction to do so.\nWith playwright you can try the following:\nfrom playwright.sync_api import sync_playwright\n\n\nurl = 'https://v2.gcchmc.org/book-appointment/'\nua = (\n \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) \"\n \"AppleWebKit/537.36 (KHTML, like Gecko) \"\n \"Chrome/69.0.3497.100 Safari/537.36\"\n)\n\nwith sync_playwright() as p:\n browser = p.chromium.launch(headless=False)\n page = browser.new_page(user_agent=ua)\n page.goto(url)\n page.wait_for_timeout(1000)\n \n html = page.content()\n \nprint(html)\n\nLet me know if this works!\n", "The site is protected by cloudflare which aims to block, among other things, unauthorized data scraping. From What is data scraping?\n\n\nThe process of web scraping is fairly simple, though the\nimplementation can be complex. Web scraping occurs in 3 steps:\n\nFirst the piece of code used to pull the information, which we call a scraper bot, sends an HTTP GET request to a specific website.\nWhen the website responds, the scraper parses the HTML document for a specific pattern of data.\nOnce the data is extracted, it is converted into whatever specific format the scraper bot’s author designed.\n\n\nYou can use urllib instead of requests, it seems to be able to deal with cloudflare\nreq = urllib.request.Request('https://v2.gcchmc.org/book-appointment/')\nreq.add_headers('User-Agent', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:106.0) Gecko/20100101 Firefox/106.0')\nreq.add_header('Accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8')\nreq.add_header('Accept-Language', 'en-US,en;q=0.5')\n\nr = urllib.request.urlopen(req).read().decode('utf-8')\nwith open(\"test.html\", 'w', encoding=\"utf-8\") as f:\n f.write(r)\n\n" ]
[ 1, 1 ]
[]
[]
[ "python", "python_requests" ]
stackoverflow_0074446830_python_python_requests.txt
Q: Parsing error: Unexpected token => when trying to deploy firebase cloud function. I couldn't find any answers on here exports.sendInvite = functions.firestore .document("invites/{phoneNumber}") .onCreate(async (doc) => { //error is here I assume const from = "+<mynumber>"; const to = doc.data().phoneNumber; const text = "You can join the club now"; return client.messages.create(from, to, text); }); my .eslintrc.js module.exports = { root: true, env: { es6: true, node: true, }, extends: [ "eslint:recommended", "google", ], rules: { quotes: ["error", "double"], }, }; My firebase cloud function is throwing this error Parsing error: Unexpected token =>. Does anyone know why this is happening? I am using javascript btw, not TS. A: Arrow functions are an ES6 feature, but here you have an async arrow function. Async functions in general are an ES8 (or 2017) feature. Therefore you need to specify the following setting at the root of your config: parserOptions: { ecmaVersion: 8 // or 2017 } This will let the parser know to expect the => token after async is used. A: Go to your package.json file and change the line to this one. "scripts": { "lint": "eslint", ... }, The generated version will contain "eslint ." A: In VS Code, go to settings.json and enter the following setting "eslint.options":{"setting":true} check here: https://youtu.be/I8D0BObBXyg
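Applying the first answer's fix to the .eslintrc.js from the question, the complete config would look like this:

module.exports = {
  root: true,
  env: {
    es6: true,
    node: true,
  },
  parserOptions: {
    ecmaVersion: 2017, // lets the parser accept async arrow functions
  },
  extends: [
    "eslint:recommended",
    "google",
  ],
  rules: {
    quotes: ["error", "double"],
  },
};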
Parsing error: Unexpected token => when trying to deploy firebase cloud function. I couldn't find any answers on here
exports.sendInvite = functions.firestore .document("invites/{phoneNumber}") .onCreate(async (doc) => { //error is here I assume const from = "+<mynumber>"; const to = doc.data().phoneNumber; const text = "You can join the club now"; return client.messages.create(from, to, text); }); my .eslintrc.js module.exports = { root: true, env: { es6: true, node: true, }, extends: [ "eslint:recommended", "google", ], rules: { quotes: ["error", "double"], }, }; My firebase cloud function is throwing this error Parsing error: Unexpected token =>. Does anyone know why this is happening? I am using javascript btw not TS.
[ "Arrow functions are an ES6 feature, but here you have an async arrow function.\nAsync functions in general are an ES8 (or 2017) feature. Therefore you need to specify the following setting at the root of your config:\nparserOptions: {\n ecmaVersion: 8 // or 2017\n}\n\nThis will let the parser know to expect the => token after async is used.\n", "Go to your file packages.json and change the line to this one.\n\"scripts\": {\n \"lint\": \"eslint\",\n ...\n},\n\nThe generated version it will contain \"eslint .\"\n", "in VScode go to settings.json and enter the following keyword\n\"eslint.options\":{\"settng\":true}\ncheck here:\nhttps://youtu.be/I8D0BObBXyg\n" ]
[ 12, 6, 0 ]
[]
[]
[ "eslint", "javascript" ]
stackoverflow_0066416001_eslint_javascript.txt
Q: initializing a field in an interface I want to initialize a patientId in an interface with the value id from another component My Interface export class Labtest { patientId!:string } PatientDetailsComponent export class PatientDetailsComponent implements OnInit { ... ngOnInit(): void { const id=this._route.snapshot.paramMap.get('id'); console.log(id); } } How can I do this? A: You can't initialize an interface. Interfaces do not have default values either. You can extend a base class that has the default values you want instead.
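For completeness, a minimal sketch of the usual pattern: declare Labtest as an interface (a compile-time type only) and build an object that conforms to it inside the component; the empty-string fallback is just an assumption for when the route has no id:

export interface Labtest {
  patientId: string;
}

export class PatientDetailsComponent implements OnInit {
  labtest!: Labtest;

  ngOnInit(): void {
    const id = this._route.snapshot.paramMap.get('id');
    // An interface cannot hold a value itself; initialize an object of that type.
    this.labtest = { patientId: id ?? '' };
  }
}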
initializing a field in an interface
I want to initialize a patientId in an interface with the value idfrom another component My Interface export class Labtest { patientId!:string } PatientDetailsComponent export class PatientDetailsComponent implements OnInit { ... ngOnInit(): void { const id=this._route.snapshot.paramMap.get('id'); console.log(id); } } How can i do this ?
[ "You can't initialize an interface. Interfaces do not have default values either. You can extend a base class that has the default values you want instead.\n" ]
[ 0 ]
[]
[]
[ "angular", "typescript" ]
stackoverflow_0074674219_angular_typescript.txt
Q: how to pass props to html attributes I have a problem with html attributes and map. How to pass variables to html attrbutes? I have this code: import React from "react"; export const AnimationParams = ({ children, animationparams }) => { animationparams; const number = animationparams.Parameters; const table = [...Array(number)]; return table.map((_, index) => { const row = `Parameters_${index}_name`; const anchor = `Parameters_${index}_value`; return ( <React.Fragment key={row}> <div className="animation" {...{ [animationparams[row]]: animationparams[anchor] }} > {children} </div> </React.Fragment> ); }); }; and on html I get: <div> <div class="animation" data-scroll=""> <h1 class="font-heading max-w-5xl mx-auto my-5 text-6xl ">To jest heading z acf/hero</h1> </div> <div class="animation" data-scroll-direction="horizontal"> <h1 class="font-heading max-w-5xl mx-auto my-5 text-6xl ">To jest heading z acf/hero</h1> </div> <div class="animation" data-scroll-speed="11"> <h1 class="font-heading max-w-5xl mx-auto my-5 text-6xl ">To jest heading z acf/hero</h1> </div> </div> but i want: <div> <div class="animation" data-scroll data-scroll-direction="horizontal" data-scroll-speed="11"\> <h1 class="font-heading max-w-5xl mx-auto my-5 text-6xl "\>To jest heading z acf/hero\</h1\> <div> </div> this is console.log(animationparams) { "heading": "To jest heading z acf/hero", "_heading": "field_638a71fabc28a", "Parameters_0_name": "data-scroll", "_Parameters_0_name": "field_638b5c2392a98", "Parameters_0_value": "", "_Parameters_0_value": "field_638b5c2b92a99", "Parameters_1_name": "data-scroll-direction", "_Parameters_1_name": "field_638b5c2392a98", "Parameters_1_value": "horizontal", "_Parameters_1_value": "field_638b5c2b92a99", "Parameters_2_name": "data-scroll-speed", "_Parameters_2_name": "field_638b5c2392a98", "Parameters_2_value": "11", "_Parameters_2_value": "field_638b5c2b92a99", "Parameters": 3, "_Parameters": "field_638a7b858076a" } A: First of all, the data structure you're using is extremely weird. Is is something you created yourself or does it come from some other component/api/backend/etc? The issue you're having is that you are iterating over Parameters creating div tags when you want to do something like this (I didn't test it!) import React from "react"; export const AnimationParams = ({ children, animationparams }) => { const attributes = {}; [...Array(animationparams.Parameters)].map(index => { attributes[animationparams[`Parameters_${index}_name`]] = animationparams[`Parameters_${index}_value`] }); return ( <div className="animation" {...attributes} > {children} </div> ); };
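One caveat about the sketch in the answer above: Array.prototype.map passes the element as the first argument and the index as the second, so map(index => ...) would receive the (undefined) array element rather than the index. A corrected version of the attribute-building loop:

const attributes = {};
[...Array(animationparams.Parameters)].forEach((_, index) => {
  const name = animationparams[`Parameters_${index}_name`];
  const value = animationparams[`Parameters_${index}_value`];
  if (name) attributes[name] = value;
});

// attributes is now e.g.
// { 'data-scroll': '', 'data-scroll-direction': 'horizontal', 'data-scroll-speed': '11' }
// and can be spread onto a single element:
// <div className="animation" {...attributes}>{children}</div>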
how to pass props to html attributes
I have a problem with html attributes and map. How to pass variables to html attrbutes? I have this code: import React from "react"; export const AnimationParams = ({ children, animationparams }) => { animationparams; const number = animationparams.Parameters; const table = [...Array(number)]; return table.map((_, index) => { const row = `Parameters_${index}_name`; const anchor = `Parameters_${index}_value`; return ( <React.Fragment key={row}> <div className="animation" {...{ [animationparams[row]]: animationparams[anchor] }} > {children} </div> </React.Fragment> ); }); }; and on html I get: <div> <div class="animation" data-scroll=""> <h1 class="font-heading max-w-5xl mx-auto my-5 text-6xl ">To jest heading z acf/hero</h1> </div> <div class="animation" data-scroll-direction="horizontal"> <h1 class="font-heading max-w-5xl mx-auto my-5 text-6xl ">To jest heading z acf/hero</h1> </div> <div class="animation" data-scroll-speed="11"> <h1 class="font-heading max-w-5xl mx-auto my-5 text-6xl ">To jest heading z acf/hero</h1> </div> </div> but i want: <div> <div class="animation" data-scroll data-scroll-direction="horizontal" data-scroll-speed="11"\> <h1 class="font-heading max-w-5xl mx-auto my-5 text-6xl "\>To jest heading z acf/hero\</h1\> <div> </div> this is console.log(animationparams) { "heading": "To jest heading z acf/hero", "_heading": "field_638a71fabc28a", "Parameters_0_name": "data-scroll", "_Parameters_0_name": "field_638b5c2392a98", "Parameters_0_value": "", "_Parameters_0_value": "field_638b5c2b92a99", "Parameters_1_name": "data-scroll-direction", "_Parameters_1_name": "field_638b5c2392a98", "Parameters_1_value": "horizontal", "_Parameters_1_value": "field_638b5c2b92a99", "Parameters_2_name": "data-scroll-speed", "_Parameters_2_name": "field_638b5c2392a98", "Parameters_2_value": "11", "_Parameters_2_value": "field_638b5c2b92a99", "Parameters": 3, "_Parameters": "field_638a7b858076a" }
[ "First of all, the data structure you're using is extremely weird. Is is something you created yourself or does it come from some other component/api/backend/etc?\nThe issue you're having is that you are iterating over Parameters creating div tags when you want to do something like this (I didn't test it!)\nimport React from \"react\";\n\nexport const AnimationParams = ({ children, animationparams }) => {\n const attributes = {};\n [...Array(animationparams.Parameters)].map(index => {\n attributes[animationparams[`Parameters_${index}_name`]] = animationparams[`Parameters_${index}_value`]\n });\n\n return (\n <div\n className=\"animation\"\n {...attributes}\n >\n {children}\n </div>\n );\n};\n\n\n" ]
[ 0 ]
[]
[]
[ "reactjs" ]
stackoverflow_0074674072_reactjs.txt
Q: How to mirror a composable function made by canvas with Modifier? Problem description I'm trying to create a component that simulates a 7-segment display like this: I'm trying to create a component on android using Compose and Canvas that simulates a 7-segment display like this: For that, I adopted a strategy of creating only half of this component and mirroring this part that I created downwards, so I would have the entire display. This is the top part of the 7-segment display: But the problem is when "mirror" the top to bottom. It turns out that when I add the Modifier.rotate(180f) the figure rotates around the origin of the canvas clockwise, and so it doesn't appear on the screen (it would if it were counterclockwise). I don't want to do this solution using a font for this, I would like to solve this problem through the canvas and compose itself. If there is a smarter way to do this on canvas without necessarily needing a mirror I would like to know. My code Below is my code that I'm using to draw this: DisplayComponent.kt @Composable fun DisplayComponent( modifier: Modifier = Modifier, size: Int = 1000, color: Color = MaterialTheme.colors.primary, ) { Column(modifier = modifier) { HalfDisplayComponent(size, color) HalfDisplayComponent( modifier = Modifier.rotate(180f), size = size, color = color ) } } @Composable private fun HalfDisplayComponent( size: Int, color: Color, modifier: Modifier = Modifier, ) { Box(modifier = modifier) { LedModel.values().forEach { LedComponent( ledModel = it, size = size, color = color ) } } } LedModel.kt enum class LedModel(val coordinates: List<Pair<Float, Float>>) { HorizontalTop( listOf( Pair(0.04f, 0.03f), // Point A Pair(0.07f, 0f), // Point B Pair(0.37f, 0f), // Point C Pair(0.4f, 0.03f), // Point D Pair(0.34f, 0.08f), // Point E Pair(0.1f, 0.08f), // Point F ) ), VerticalRight( listOf( Pair(0.41f, 0.04f), // Point A Pair(0.44f, 0.07f), // Point B Pair(0.44f, 0.37f), // Point C Pair(0.41f, 0.4f), // Point D Pair(0.35f, 0.35f), // Point E Pair(0.35f, 0.09f), // Point F ) ), VerticalLeft( listOf( Pair(0.03f, 0.4f), // Point A Pair(0f, 0.37f), // Point B Pair(0f, 0.07f), // Point C Pair(0.03f, 0.04f), // Point D Pair(0.09f, 0.09f), // Point E Pair(0.09f, 0.35f), // Point F ) ), HorizontalBottom( listOf( Pair(0.1f, 0.36f), // Point A Pair(0.34f, 0.36f), // Point B Pair(0.39f, 0.4f), // Point C Pair(0.05f, 0.4f), // Point D ) ), } LedComponent.kt @Composable fun LedComponent( modifier: Modifier = Modifier, size: Int = 30, color: Color = MaterialTheme.colors.primary, ledModel: LedModel = LedModel.HorizontalTop ) = getPath(ledModel.coordinates).let { path -> Canvas(modifier = modifier.scale(size.toFloat())) { drawPath(path, color) } } private fun getPath(coordinates: List<Pair<Float, Float>>): Path = Path().apply { coordinates.map { transformPointCoordinate(it) }.forEachIndexed { index, point -> if (index == 0) moveTo(point.x, point.y) else lineTo(point.x, point.y) } } private fun transformPointCoordinate(point: Pair<Float, Float>) = Offset(point.first.dp.value, point.second.dp.value) My failed attempt As described earlier, I tried adding a Modifier by rotating the composable of the display but it didn't work. 
I did it this way: @Composable fun DisplayComponent( modifier: Modifier = Modifier, size: Int = 1000, color: Color = MaterialTheme.colors.primary, ) { Column(modifier = modifier) { DisplayFABGComponent(size, color) DisplayFABGComponent( modifier = Modifier.rotate(180f), size = size, color = color ) } } A: There are many things wrong with the code you posted above. First of all in Jetpack Compose even if your Canvas has 0.dp size you can still draw anywhere which is the first issue in your question. Your Canvas has no size modifier, which you can verify by printing DrawScope.size as below. fun LedComponent( modifier: Modifier = Modifier, size: Int = 1000, color: Color = MaterialTheme.colorScheme.primary, ledModel: LedModel = LedModel.HorizontalTop ) = getPath(ledModel.coordinates).let { path -> Canvas( modifier = modifier.scale(size.toFloat()) ) { println("CANVAS size: ${this.size}") drawPath(path, color) } } any value you enter makes no difference other than Modifier.size(0f), also this is not how you should build or scale your drawing either. If you set size for your Canvas such as @Composable fun DisplayComponent( modifier: Modifier = Modifier, size: Int = 1000, color: Color = MaterialTheme.colorScheme.primary, ) { Column(modifier = modifier) { HalfDisplayComponent( size, color, Modifier .size(200.dp) .border(2.dp,Color.Red) ) HalfDisplayComponent( modifier = Modifier .size(200.dp) .border(2.dp, Color.Cyan) .rotate(180f), size = size, color = color ) } } Rotation works but what you draw is not symmetric as in image in your question. point.first.dp.value this snippet does nothing. What it does is adds dp to float then gets float. This is not how you do float/dp conversions and which is not necessary either. You can achieve your goal with one Canvas or using Modifier.drawBehind{}. Create a Path using Size as reference for half component then draw again and rotate it or create a path that contains full led component. Or you can have paths for each sections if you wish show LED digits separately. This is a simple example to build only one diamond shape, then translate and rotate it to build hourglass like shape using half component. You can use this sample as demonstration for how to create Path using Size as reference, translate and rotate. fun getHalfPath(path: Path, size: Size) { path.apply { val width = size.width val height = size.height / 2 moveTo(width * 0f, height * .5f) lineTo(width * .3f, height * 0.3f) lineTo(width * .7f, height * 0.3f) lineTo(width * 1f, height * .5f) lineTo(width * .5f, height * 1f) lineTo(width * 0f, height * .5f) } } You need to use aspect ratio of 1/2f to be able to have symmetric drawing. Green border is to show bounds of Box composable. val path = remember { Path() } Box(modifier = Modifier .border(3.dp, Color.Green) .fillMaxWidth(.4f) .aspectRatio(1 / 2f) .drawBehind { if (path.isEmpty) { getHalfPath(path, size) } drawPath( path = path, color = Color.Red, style = Stroke(2.dp.toPx()) ) withTransform( { translate(0f, size.height / 2f) rotate( degrees = 180f, pivot = Offset(center.x, center.y / 2) ) } ) { drawPath( path = path, color = Color.Black, style = Stroke(2.dp.toPx()) ) } } Result
How to mirror a composable function made by canvas with Modifier?
Problem description I'm trying to create a component that simulates a 7-segment display like this: I'm trying to create a component on android using Compose and Canvas that simulates a 7-segment display like this: For that, I adopted a strategy of creating only half of this component and mirroring this part that I created downwards, so I would have the entire display. This is the top part of the 7-segment display: But the problem is when "mirror" the top to bottom. It turns out that when I add the Modifier.rotate(180f) the figure rotates around the origin of the canvas clockwise, and so it doesn't appear on the screen (it would if it were counterclockwise). I don't want to do this solution using a font for this, I would like to solve this problem through the canvas and compose itself. If there is a smarter way to do this on canvas without necessarily needing a mirror I would like to know. My code Below is my code that I'm using to draw this: DisplayComponent.kt @Composable fun DisplayComponent( modifier: Modifier = Modifier, size: Int = 1000, color: Color = MaterialTheme.colors.primary, ) { Column(modifier = modifier) { HalfDisplayComponent(size, color) HalfDisplayComponent( modifier = Modifier.rotate(180f), size = size, color = color ) } } @Composable private fun HalfDisplayComponent( size: Int, color: Color, modifier: Modifier = Modifier, ) { Box(modifier = modifier) { LedModel.values().forEach { LedComponent( ledModel = it, size = size, color = color ) } } } LedModel.kt enum class LedModel(val coordinates: List<Pair<Float, Float>>) { HorizontalTop( listOf( Pair(0.04f, 0.03f), // Point A Pair(0.07f, 0f), // Point B Pair(0.37f, 0f), // Point C Pair(0.4f, 0.03f), // Point D Pair(0.34f, 0.08f), // Point E Pair(0.1f, 0.08f), // Point F ) ), VerticalRight( listOf( Pair(0.41f, 0.04f), // Point A Pair(0.44f, 0.07f), // Point B Pair(0.44f, 0.37f), // Point C Pair(0.41f, 0.4f), // Point D Pair(0.35f, 0.35f), // Point E Pair(0.35f, 0.09f), // Point F ) ), VerticalLeft( listOf( Pair(0.03f, 0.4f), // Point A Pair(0f, 0.37f), // Point B Pair(0f, 0.07f), // Point C Pair(0.03f, 0.04f), // Point D Pair(0.09f, 0.09f), // Point E Pair(0.09f, 0.35f), // Point F ) ), HorizontalBottom( listOf( Pair(0.1f, 0.36f), // Point A Pair(0.34f, 0.36f), // Point B Pair(0.39f, 0.4f), // Point C Pair(0.05f, 0.4f), // Point D ) ), } LedComponent.kt @Composable fun LedComponent( modifier: Modifier = Modifier, size: Int = 30, color: Color = MaterialTheme.colors.primary, ledModel: LedModel = LedModel.HorizontalTop ) = getPath(ledModel.coordinates).let { path -> Canvas(modifier = modifier.scale(size.toFloat())) { drawPath(path, color) } } private fun getPath(coordinates: List<Pair<Float, Float>>): Path = Path().apply { coordinates.map { transformPointCoordinate(it) }.forEachIndexed { index, point -> if (index == 0) moveTo(point.x, point.y) else lineTo(point.x, point.y) } } private fun transformPointCoordinate(point: Pair<Float, Float>) = Offset(point.first.dp.value, point.second.dp.value) My failed attempt As described earlier, I tried adding a Modifier by rotating the composable of the display but it didn't work. I did it this way: @Composable fun DisplayComponent( modifier: Modifier = Modifier, size: Int = 1000, color: Color = MaterialTheme.colors.primary, ) { Column(modifier = modifier) { DisplayFABGComponent(size, color) DisplayFABGComponent( modifier = Modifier.rotate(180f), size = size, color = color ) } }
[ "There are many things wrong with the code you posted above.\nFirst of all in Jetpack Compose even if your Canvas has 0.dp size you can still draw anywhere which is the first issue in your question. Your Canvas has no size modifier, which you can verify by printing DrawScope.size as below.\nfun LedComponent(\n modifier: Modifier = Modifier,\n size: Int = 1000,\n color: Color = MaterialTheme.colorScheme.primary,\n ledModel: LedModel = LedModel.HorizontalTop\n) = getPath(ledModel.coordinates).let { path ->\n Canvas(\n modifier = modifier.scale(size.toFloat())\n ) {\n\n println(\"CANVAS size: ${this.size}\")\n drawPath(path, color)\n }\n}\n\nany value you enter makes no difference other than Modifier.size(0f), also this is not how you should build or scale your drawing either.\nIf you set size for your Canvas such as\n@Composable\nfun DisplayComponent(\n modifier: Modifier = Modifier,\n size: Int = 1000,\n color: Color = MaterialTheme.colorScheme.primary,\n) {\n Column(modifier = modifier) {\n HalfDisplayComponent(\n size,\n color,\n Modifier\n .size(200.dp)\n .border(2.dp,Color.Red)\n )\n HalfDisplayComponent(\n modifier = Modifier\n .size(200.dp)\n .border(2.dp, Color.Cyan)\n .rotate(180f),\n size = size,\n color = color\n )\n }\n}\n\nRotation works but what you draw is not symmetric as in image in your question.\n\npoint.first.dp.value this snippet does nothing. What it does is adds dp to float then gets float. This is not how you do float/dp conversions and which is not necessary either.\nYou can achieve your goal with one Canvas or using Modifier.drawBehind{}. Create a Path using Size as reference for half component then draw again and rotate it or create a path that contains full led component. Or you can have paths for each sections if you wish show LED digits separately.\nThis is a simple example to build only one diamond shape, then translate and rotate it to build hourglass like shape using half component. You can use this sample as demonstration for how to create Path using Size as reference, translate and rotate.\nfun getHalfPath(path: Path, size: Size) {\n path.apply {\n val width = size.width\n val height = size.height / 2\n moveTo(width * 0f, height * .5f)\n lineTo(width * .3f, height * 0.3f)\n lineTo(width * .7f, height * 0.3f)\n lineTo(width * 1f, height * .5f)\n lineTo(width * .5f, height * 1f)\n lineTo(width * 0f, height * .5f)\n }\n}\n\nYou need to use aspect ratio of 1/2f to be able to have symmetric drawing. Green border is to show bounds of Box composable.\nval path = remember {\n Path()\n}\n\nBox(modifier = Modifier\n .border(3.dp, Color.Green)\n .fillMaxWidth(.4f)\n .aspectRatio(1 / 2f)\n .drawBehind {\n if (path.isEmpty) {\n getHalfPath(path, size)\n }\n\n drawPath(\n path = path,\n color = Color.Red,\n style = Stroke(2.dp.toPx())\n )\n\n withTransform(\n {\n translate(0f, size.height / 2f)\n rotate(\n degrees = 180f,\n pivot = Offset(center.x, center.y / 2)\n )\n }\n ) {\n drawPath(\n path = path,\n color = Color.Black,\n style = Stroke(2.dp.toPx())\n )\n }\n }\n\nResult\n\n" ]
[ 0 ]
[]
[]
[ "android", "android_canvas", "android_jetpack_compose", "canvas", "kotlin" ]
stackoverflow_0074673019_android_android_canvas_android_jetpack_compose_canvas_kotlin.txt
Q: Sharing button works perfectly on iPhone but crash on iPad I'm trying to add a button in order to share some sentences on Twitter, Facebook, etc. It all works on all iPhone models, but the simulator crashes with an iPad. This is my code: @IBAction func shareButton(sender: AnyObject) { frase = labelFrases.text! autor = labelAutores.text! var myShare = "\(frase) - \(autor)" let activityVC: UIActivityViewController = UIActivityViewController(activityItems: [myShare], applicationActivities: nil) self.presentViewController(activityVC, animated: true, completion: nil) And this is the error: Terminating app due to uncaught exception 'NSGenericException', reason: 'UIPopoverPresentationController (<_UIAlertControllerActionSheetRegularPresentationController: 0x7c0f9190>) should have a non-nil sourceView or barButtonItem set before the presentation occurs How should I solve it? A: For iPad (iOS > 8.0) you need to set the popoverPresentationController: //check ipad if (UIDevice.currentDevice().userInterfaceIdiom == UIUserInterfaceIdiom.Pad) { //ios > 8.0 if ( activityVC.respondsToSelector(Selector("popoverPresentationController"))){ activityVC.popoverPresentationController?.sourceView = super.view } } self.presentViewController(activityVC, animated: true, completion: nil) More information here: UIActivityViewController crashing on iOS 8 iPads A: Do this instead for Swift 5 to get the share button working on both iPad and iPhone: @IBAction func shareButton(sender: UIButton) { let itemToShare = ["Some Text goes here"] let avc = UIActivityViewController(activityItems: itemToShare, applicationActivities: nil) //Apps to be excluded sharing to avc.excludedActivityTypes = [ UIActivityType.print, UIActivityType.addToReadingList ] // Check if user is on iPad and present popover if UIDevice.current.userInterfaceIdiom == .pad { if avc.responds(to: #selector(getter: UIViewController.popoverPresentationController)) { avc.popoverPresentationController?.sourceView = sender } } // Present share activityView on regular iPhone self.present(avc, animated: true, completion: nil) } Hope this helps! A: Slightly adapted version to make it work on any button, iPad and iPhone. Xcode 13.4.1 (Swift 5.6) let itemToShare = ["Some Text goes here"] let avc = UIActivityViewController(activityItems: itemToShare, applicationActivities: nil) //Apps to be excluded sharing to avc.excludedActivityTypes = [ UIActivity.ActivityType.print, UIActivity.ActivityType.addToReadingList ] // Check if user is on iPad and present popover if UIDevice.current.userInterfaceIdiom == .pad { if avc.responds(to: #selector(getter: UIViewController.popoverPresentationController)) { avc.popoverPresentationController?.sourceView = sender as? UIView } } // Present share activityView on regular iPhone self.present(avc, animated: true, completion: nil)
Sharing button works perfectly on iPhone but crash on iPad
I'm trying to add a button in order to share some sentences in Twitter, Facebook... etc. It all works on all iPhone models but simulator crash with an iPad. This is my code: @IBAction func shareButton(sender: AnyObject) { frase = labelFrases.text! autor = labelAutores.text! var myShare = "\(frase) - \(autor)" let activityVC: UIActivityViewController = UIActivityViewController(activityItems: [myShare], applicationActivities: nil) self.presentViewController(activityVC, animated: true, completion: nil) And this is the error: Terminating app due to uncaught exception 'NSGenericException', reason: 'UIPopoverPresentationController (<_UIAlertControllerActionSheetRegularPresentationController: 0x7c0f9190>) should have a non-nil sourceView or barButtonItem set before the presentation occurs How should I solve it?
[ "For ipad (iOS > 8.0) you need to set popoverPresentationController:\n//check ipad\nif (UIDevice.currentDevice().userInterfaceIdiom == UIUserInterfaceIdiom.Pad)\n{\n //ios > 8.0\n if ( activityVC.respondsToSelector(Selector(\"popoverPresentationController\"))){\n activityVC.popoverPresentationController?.sourceView = super.view\n }\n}\n\nself.presentViewController(activityVC, animated: true, completion: nil)\n\nMore information here:\nUIActivityViewController crashing on iOS 8 iPads\n", "Do this instead for Swift 5 to get share button working on both iPad and iPhone:\n@IBAction func shareButton(sender: UIButton) { {\n let itemToShare = [\"Some Text goes here\"]\n let avc = UIActivityViewController(activityItems: itemToShare, applicationActivities: nil)\n \n //Apps to be excluded sharing to\n avc.excludedActivityTypes = [\n UIActivityType.print,\n UIActivityType.addToReadingList\n ]\n // Check if user is on iPad and present popover\n if UIDevice.current.userInterfaceIdiom == .pad {\n if avc.responds(to: #selector(getter: UIViewController.popoverPresentationController)) {\n avc.popoverPresentationController?.barButtonItem = sender\n }\n }\n // Present share activityView on regular iPhone\n self.present(avc, animated: true, completion: nil)\n}\n\nHope this helps!\n", "Slightly adapted version to make it work on any button, iPad and iPhone.\nXcode 13.4.1 (Swift 5.6)\n let itemToShare = [\"Some Text goes here\"]\n let avc = UIActivityViewController(activityItems: itemToShare, applicationActivities: nil)\n \n //Apps to be excluded sharing to\n avc.excludedActivityTypes = [\n UIActivity.ActivityType.print,\n UIActivity.ActivityType.addToReadingList\n ]\n // Check if user is on iPad and present popover\n if UIDevice.current.userInterfaceIdiom == .pad {\n if avc.responds(to: #selector(getter: UIViewController.popoverPresentationController)) {\n avc.popoverPresentationController?.sourceView = sender as? UIView\n }\n }\n // Present share activityView on regular iPhone\n self.present(avc, animated: true, completion: nil)\n\n" ]
[ 6, 3, 1 ]
[]
[]
[ "share", "swift" ]
stackoverflow_0031506081_share_swift.txt
Q: Does clang apply options by default? For example, when compiling a simple program
clang hello_world.c

Does clang add any options by default, like linked libraries, include search paths, optimization flags such as -O0, or exploit-mitigation flags like -mlvi-cfi? If so, how can I get a full list of the default options?

A: Yes, Clang has default options. The documentation is here
Does clang apply options by default?
For example, when compiling a simple program
clang hello_world.c

Does clang add any options by default, like linked libraries, include search paths, optimization flags such as -O0, or exploit-mitigation flags like -mlvi-cfi? If so, how can I get a full list of the default options?
[ "Yes, Clang has default options. The document is here\n" ]
[ 0 ]
[]
[]
[ "clang" ]
stackoverflow_0074670801_clang.txt
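One practical way to see what clang adds by default is to ask the driver itself; a sketch, assuming a Unix-like shell and the hello_world.c from the question:

clang -v hello_world.c        # prints the effective target, include search paths, and the linker command line
clang -### hello_world.c      # prints every sub-command with all implied flags, without executing anything
clang -E -dM - < /dev/null    # dumps the macros clang predefines at the default optimization level

The -### output is the closest thing to a "full list of default options" for a given invocation, since the defaults depend on the target triple and on how the clang binary was built.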
Q: Make a PSobject showing published modules and their download count by author While I did write something that works 99% of the time, someone surely knows how to do it better than I do; I am just looking to learn how to improve my code
$mymods = @()
Find-Module | Where-Object { $_.Author -eq 'NAME' } | %{$mymods += ($_).name}
$dlCount = @()
$mymods | %{((find-module $_).additionalmetadata).downloadCount} | %{$dlCount += $_}
[int]$max = $mymods.count
if ([int]$dlCount.count -gt [int]$mymods.count) {$max = $dlCount.Count}
$results = for( $i = 0; $i -lt $max; $i++)
{
    Write-Verbose "$($mymods),$($dlCount)"
    [PSCustomObject]@{
        Modules = $mymods[$i]
        Count = $dlCount[$i]
    }
}
$results

A: You can simply do:
Find-Module | 
Select Name,Author,@{N="DownloadCount";E={$_.AdditionalMetadata.downloadCount}}

or:
$Modules | Group Author,{$_.AdditionalMetadata.downloadCount}


I suggest you first save the results of Find-Module to a variable and reuse it instead of loading it on every request; it will perform faster
$Modules = Find-Module
Make a PSobject showing published modules and their download count by author
While I did write something that works 99% of the time, someone surely knows how to do it better than I do; I am just looking to learn how to improve my code
$mymods = @()
Find-Module | Where-Object { $_.Author -eq 'NAME' } | %{$mymods += ($_).name}
$dlCount = @()
$mymods | %{((find-module $_).additionalmetadata).downloadCount} | %{$dlCount += $_}
[int]$max = $mymods.count
if ([int]$dlCount.count -gt [int]$mymods.count) {$max = $dlCount.Count}
$results = for( $i = 0; $i -lt $max; $i++)
{
    Write-Verbose "$($mymods),$($dlCount)"
    [PSCustomObject]@{
        Modules = $mymods[$i]
        Count = $dlCount[$i]
    }
}
$results
[ "You can simply do:\nFind-Module | \nSelect Name,Author,@{N=\"DownloadCount\";E={$_.AdditionalMetadata.downloadCount}}\n\nor:\n$Modules | Group Author,{$_.AdditionalMetadata.downloadCount}\n\n\nI suggest you to first save the results of Find-Module To a variable and use it each time instead of loading it every request, will perform faster\n$Modules = Find-Module\n\n\n" ]
[ 1 ]
[]
[]
[ "powershell" ]
stackoverflow_0074674110_powershell.txt
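Putting the answer's two suggestions together, a sketch that filters by author and emits one object per module ('NAME' is a placeholder, and it assumes every result carries AdditionalMetadata.downloadCount):

$Modules = Find-Module
$Modules |
    Where-Object Author -eq 'NAME' |
    Select-Object Name,
        @{ N = 'DownloadCount'; E = { [int]$_.AdditionalMetadata.downloadCount } } |
    Sort-Object DownloadCount -Descending

This replaces the two parallel arrays with a single pipeline, so module names and counts can never drift out of alignment.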
Q: ENOENT when connecting to Google Cloud SQL from App Engine I'm trying to deploy my Node.js app on Google App Engine and it deployed fine, but it can't connect to Google Cloud SQL for some reason. Here's what it throws:
Error: connect ENOENT /cloudsql/my-project-id:asia-east1:my-sql-instance

Here's how I configured the connection:
if (process.env.INSTANCE_CONNECTION_NAME) {
  exports.mysqlConfig = {
    user: process.env.GCLOUD_SQL_USERNAME,
    password: process.env.GCLOUD_SQL_PASSWORD,
    socketPath: '/cloudsql/' + process.env.INSTANCE_CONNECTION_NAME
  }
} else {
  // Use settings for localhost
}

I'm using the node-mysql module to connect to the database. The App Engine and Cloud SQL are already in the same project. My theory is that the App Engine and the Cloud SQL have to be in the same project and same region, but I'm not sure.

A: Check your logs for any errors during startup using:

the following cmd gcloud app logs tail -s default or,
with the log viewer https://console.cloud.google.com/logs/viewer

Chances are that you have not enabled the Cloud SQL API for your project: https://console.developers.google.com/apis/api/sqladmin/overview

A: make sure you have added the following setting in app.yaml
beta_settings:
  # The connection name of your instance, available by using
  # 'gcloud beta sql instances describe [INSTANCE_NAME]' or from
  # the Instance details page in the Google Cloud Platform Console.
  cloud_sql_instances: YOUR_INSTANCE_CONNECTION_NAME

ref:https://cloud.google.com/appengine/docs/flexible/nodejs/using-cloud-sql-postgres

A: Apparently the order you do things matters...

enable Cloud SQL API
then (re)deploy your app (gcloud app deploy)

When I did deploy -> create databases -> enable sql api I got the ENOENT error

A: For anyone using 2nd gen Cloud Functions - they added a portion in the documentation:

If you're using Cloud Functions (2nd gen) and not Cloud Functions (1st gen), the following are required (also see Configure Cloud Run):

They go on to list the steps required. They're a bit scary, but do work eventually.
(If you find yourself looking for the SQL Connection in the new Cloud Run revision, notice there is a separate "Connections" tab for this)
ENOENT when connecting to Google Cloud SQL from App Engine
I'm trying to deploy my Node.js app on Google App Engine and it deployed fine, but it can't connect to Google Cloud SQL for some reason. Here's what it throws:
Error: connect ENOENT /cloudsql/my-project-id:asia-east1:my-sql-instance

Here's how I configured the connection:
if (process.env.INSTANCE_CONNECTION_NAME) {
  exports.mysqlConfig = {
    user: process.env.GCLOUD_SQL_USERNAME,
    password: process.env.GCLOUD_SQL_PASSWORD,
    socketPath: '/cloudsql/' + process.env.INSTANCE_CONNECTION_NAME
  }
} else {
  // Use settings for localhost
}

I'm using the node-mysql module to connect to the database. The App Engine and Cloud SQL are already in the same project. My theory is that the App Engine and the Cloud SQL have to be in the same project and same region, but I'm not sure.
[ "Check your logs for any errors during startup using:\n\nthe following cmd gcloud app logs tail -s default or,\nwith the log viewer https://console.cloud.google.com/logs/viewer\n\nChances are that you have not enabled the Cloud SQL API for your project: https://console.developers.google.com/apis/api/sqladmin/overview\n", "make sure you have added following setting in app.yaml\nbeta_settings:\n # The connection name of your instance, available by using\n # 'gcloud beta sql instances describe [INSTANCE_NAME]' or from\n # the Instance details page in the Google Cloud Platform Console.\n cloud_sql_instances: YOUR_INSTANCE_CONNECTION_NAME\n\nref:https://cloud.google.com/appengine/docs/flexible/nodejs/using-cloud-sql-postgres\n", "Apparently the order you do things matters...\n\nenable Cloud SQL API\nthen (re)deploy your app (gcloud app deploy)\n\nWhen I did deploy -> create databases -> enable sql ipi I got the ENOENT error\n", "For anyone using 2nd gen Cloud Functions - they added a portion in the documentation:\n\nIf you're using Cloud Functions (2nd gen) and not Cloud Functions (1st\ngen), the following are required (also see Configure Cloud Run):\n\nThey go on to list the steps required. They're a bit scary, but do work eventually.\n(If you find yourself looking for the SQL Connection in the new Cloud Run revision, notice there is a separate \"Connections\" tab for this)\n" ]
[ 20, 11, 1, 0 ]
[]
[]
[ "google_app_engine", "mysql", "node.js" ]
stackoverflow_0041971154_google_app_engine_mysql_node.js.txt
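For the flexible environment, the beta_settings entry and the socket path in the Node.js code have to name the same instance; a sketch of app.yaml using the connection name from the error above (the credentials shown are placeholders, and real secrets belong in a secret store rather than in env_variables):

runtime: nodejs
env: flex
beta_settings:
  cloud_sql_instances: my-project-id:asia-east1:my-sql-instance
env_variables:
  INSTANCE_CONNECTION_NAME: my-project-id:asia-east1:my-sql-instance
  GCLOUD_SQL_USERNAME: db-user
  GCLOUD_SQL_PASSWORD: db-password

With this in place, the question's socketPath of '/cloudsql/' + process.env.INSTANCE_CONNECTION_NAME resolves to the socket App Engine mounts for that instance.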
Q: Zero-scaled displacement image distorts Sprite on container rotation: Pixi.js I have a simple Pixi.js scene where there are 4 Sprites vertically placed. All of them have a displacement image assigned. To begin the sketch, I have set the displacement image to scale 0 so the Sprite doesn't appear distorted by default. The Sprites are perfect rectangles when the parent container is not rotated, but when the parent container is rotated, the Sprite gets some displacement/cropping applied at the corners. How do I remove this displacement at sketch start? I have attached the screenshot and encircled the cropped parts. And this is the code:
let width = window.innerWidth;
let height = window.innerHeight;

const app = new PIXI.Application({
  width: width,
  height: height,
  transparent: false,
  antialias: true
});
app.renderer.backgroundColor = 0x404040;

// making the canvas responsive
window.onresize = () => {
  let width = window.innerWidth;
  let height = window.innerHeight;
  app.renderer.resize(width, height);
}

app.renderer.view.style.position = 'absolute';
document.body.appendChild(app.view);

let pContainer = new PIXI.Container();
pContainer.pivot.set(-width/2, -350);
pContainer.rotation = -0.3; // This rotation distorts the Sprites
app.stage.addChild(pContainer);

for (let i = 0; i < 4; i++) {
  let container = new PIXI.Container();
  container.pivot.y = -i * 210;

  let image = PIXI.Sprite.from('image.jpg');
  image.width = 100;
  image.height = 200;
  image.anchor.set(0.5, 0.5);

  let dispImage = PIXI.Sprite.from('disp.jpg');
  let dispFilter = new PIXI.filters.DisplacementFilter(dispImage);
  dispImage.texture.baseTexture.wrapMode = PIXI.WRAP_MODES.REPEAT;
  container.filters = [dispFilter];

  // Turn disp scale to zero so it doesn't show a distorted image by default
  dispImage.scale.set(0);

  container.addChild(image);
  container.addChild(dispImage);
  pContainer.addChild(container);
}

Thank you.
disp.jpg:

image.jpg

The Sprites' corners getting distorted. Encircled in yellow

A: I had the same problem; the simplest fix is to put a rect behind it that is bigger than the photo. It can be the screen size too.
Zero-scaled displacement image distorts Sprite on container rotation: Pixi.js
I have a simple Pixi.js scene where there are 4 Sprites vertically placed. All of them have a displacement image assigned. To begin the sketch, I have set the displacement image to scale 0 so the Sprite doesn't appear distorted by default. The Sprites are perfect rectangles when the parent container is not rotated, but when the parent container is rotated, the Sprite gets some displacement/cropping applied at the corners. How do I remove this displacement at sketch start? I have attached the screenshot and encircled the cropped parts. And this is the code:
let width = window.innerWidth;
let height = window.innerHeight;

const app = new PIXI.Application({
  width: width,
  height: height,
  transparent: false,
  antialias: true
});
app.renderer.backgroundColor = 0x404040;

// making the canvas responsive
window.onresize = () => {
  let width = window.innerWidth;
  let height = window.innerHeight;
  app.renderer.resize(width, height);
}

app.renderer.view.style.position = 'absolute';
document.body.appendChild(app.view);

let pContainer = new PIXI.Container();
pContainer.pivot.set(-width/2, -350);
pContainer.rotation = -0.3; // This rotation distorts the Sprites
app.stage.addChild(pContainer);

for (let i = 0; i < 4; i++) {
  let container = new PIXI.Container();
  container.pivot.y = -i * 210;

  let image = PIXI.Sprite.from('image.jpg');
  image.width = 100;
  image.height = 200;
  image.anchor.set(0.5, 0.5);

  let dispImage = PIXI.Sprite.from('disp.jpg');
  let dispFilter = new PIXI.filters.DisplacementFilter(dispImage);
  dispImage.texture.baseTexture.wrapMode = PIXI.WRAP_MODES.REPEAT;
  container.filters = [dispFilter];

  // Turn disp scale to zero so it doesn't show a distorted image by default
  dispImage.scale.set(0);

  container.addChild(image);
  container.addChild(dispImage);
  pContainer.addChild(container);
}

Thank you.
disp.jpg:

image.jpg

The Sprites' corners getting distorted. Encircled in yellow
[ "I had the same problem, the simpler fix is to use a rect behind that is bigger than the photo, it can be the screen size too\n" ]
[ 0 ]
[]
[]
[ "javascript", "pixi.js" ]
stackoverflow_0073395163_javascript_pixi.js.txt
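A sketch of that workaround for the loop above: add a transparent rectangle, larger than the sprite, at the bottom of each container so the filter has pixels to sample at the corners after rotation (the 20px margin and the near-zero alpha are assumptions; Pixi filters also expose a padding property that may achieve the same effect):

let pad = new PIXI.Graphics();
pad.beginFill(0x000000, 0.001); // near-invisible fill so the rect still rasterizes
pad.drawRect(-70, -120, 140, 240); // image is 100x200 and anchored at its center
pad.endFill();
container.addChildAt(pad, 0); // behind the image and the displacement sprite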
Q: Using IMPORTRANGE to import from Excel file in Google Drive To import an entire sheet of data from another spreadsheet using IMPORTRANGE I'd do something like this:
=importrange("google-drive-id-for-spreadsheet","A:AR")

This works fine for a Google Sheets spreadsheet source, but if the source file is an Excel spreadsheet, I get a #Ref! error in the cell and the hover comment is:

Error Spreadsheet cannot be found.

I'm presuming this is because IMPORTRANGE doesn't work with Excel files, so how can I achieve the same thing? I don't mind working with scripts but would prefer a formula solution if possible.
Edit: This happens whether I use the full URL or just the spreadsheet key and if I use the sheet name with the range or not. I've tried several files and it always works with the Google Sheets files and never works with Excel files.
Something occurred to me about the ownership and location of these files. Somebody else is the owner of the spreadsheet that I want the IMPORTRANGE formula in. I have full edit permissions. The folder that the spreadsheet resides in is owned by the same guy, it has been shared with me and I have added it to my Drive. In a subfolder of this folder is where the source files are. I am the owner of the subfolder and the source files within, both Excel and Google Sheets files. Could this setup have anything to do with the results I'm getting?
Edit: I've had the ownership of the folders (all the way up the hierarchy) and relevant files transferred to me and it's still doing the same thing.

A: This is clearly only a work around and not an answer, but I had to do something so that I could move on. The only way I could get what I wanted is to code a convert to .gsheet format first and point importrange to that new sheet. Might help someone else get their project pointing in a working direction until this can be answered.

A: There are three ways to fix this problem:

Convert the Excel spreadsheet to a Google spreadsheet. You can then use the importrange() function to import the data from the converted file.
Export the Excel spreadsheet to a CSV file first, then you can use a different function called the importdata() function to import that data into a Google spreadsheet.
Using Add-on: "Sheetgo"

You should watch this video:
How to automatically import Excel Data to Google Sheets?
Using IMPORTRANGE to import from Excel file in Google Drive
To import an entire sheet of data from another spreadsheet using IMPORTRANGE I'd do something like this: =importrange("google-drive-id-for-spreadsheet","A:AR") This works fine for a Google Sheets spreadsheet source, but if the source file is an Excel spreadsheet, I get a #Ref! error in the cell and the hover comment is: Error Spreadsheet cannot be found. I'm presuming this is because IMPORTRANGE doesn't work with Excel files, so how can I achieve the same thing? I don't mind working with scripts but would prefer a formula solution if possible. Edit: This happens whether I use the full URL or just the spreadsheet key and if I use the sheet name with the range or not. I've tried several files and it always works with the Google Sheets files and never works with Excel files. Something occurred to me about the ownership and location of these files. Somebody else is the owner of the spreadsheet that I want the IMPORTRANGE formula in. I have full edit permissions. The folder that the spreadsheet resides in is owned by the same guy, it has been shared with me and I have added it to my Drive. In a subfolder of this folder is where the source files are. I am the owner of the subfolder and the source files within, both Excel and Google Sheets files. Could this setup have anything to do with the results I'm getting? Edit: I've had the ownership of the folders (all the way up the hierarchy) and relevant files transferred to me and it's still doing the same thing.
[ "This is clearly only a work around and not an answer, but I had to do something so that I could move on. The only way I could get what I wanted is to code a convert to .gsheet format first and point imortrange to that new sheet. Might help someone else get their project pointing in a working direction until this can be answered.\n", "There are there way to fix this problem:\n\nConvert the Excel spreadsheet to a Google spreadsheet. You can then use the importrange() function to import the data from the converted file.\nExport the Excel spreadsheet to a CSV file first, then you can use a different function called the importdata() function to import that data into a Google spreadsheet.\nUsing Add-on: \"Sheetgo\"\n\nYou should watch this video:\nHow to automatically import Excel Data to Google Sheets?\n" ]
[ 1, 0 ]
[]
[]
[ "google_sheets", "google_sheets_formula", "import_from_excel", "importrange" ]
stackoverflow_0048250257_google_sheets_google_sheets_formula_import_from_excel_importrange.txt
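The convert-first workaround can be scripted; a sketch in Apps Script, assuming the advanced Drive service (v2) is enabled for the project (the function name and the copy's title are placeholders):

function convertExcelToSheet(excelFileId) {
  var blob = DriveApp.getFileById(excelFileId).getBlob();
  // convert: true asks Drive to create a native Google Sheet from the Excel blob
  var file = Drive.Files.insert({ title: 'converted-source' }, blob, { convert: true });
  return file.id;
}

IMPORTRANGE can then be pointed at the returned id. Note that each run creates a new file with a new id, so a scheduled refresh would also need to update or replace the previous copy.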
Q: How to make Wordpress in subdirectory of Next JS app Hello dear community, I have my web app using Next.js under the domain name example.com. Now I want to have a WordPress blog (wordblog.com/community) under example.com/community. Using Next.js rewrites, I added this to my next config:
async rewrites() {
    return {
      fallback: [
        {
          source: "/community/:path*",
          destination: `https://wordblog.com/community/:path*`,
        },
      ],
    }
  },

In my WordPress installation folder:
1- Added this to .htaccess
<IfModule mod_headers.c>
  <FilesMatch "\.(ttf|ttc|otf|eot|woff|woff2|font.css|css|js)$">
    Header set Access-Control-Allow-Origin "*"
  </FilesMatch>
</IfModule>

2- Added this to wp-config.php:
define('WP_SITEURL', 'https://wordblog.com/community');
define('WP_HOME', 'https://example.com/community');
define('COOKIE_DOMAIN', '.example.com');

3- Added this to wp-content/themes/your-theme/functions.php
remove_filter('template_redirect','redirect_canonical');

add_filter('rest_url', 'serve_rest_url_on_wp_subdomain');
function serve_rest_url_on_wp_subdomain ($url) {
    return str_replace('https://example.com/community', 'https://wordblog.com/community', $url);
}

But when I go to example.com/community, it redirects me to wordblog.com/community. Also some pages of the blog are rendered under wordblog.com/community/pageX and not as example.com/community/pageX

A: You might be able to solve this using the Alias directive in your apache setup: https://httpd.apache.org/docs/2.0/mod/mod_alias.html#alias
This would mean you would have to remove the redirect in Next.js. Apache will know to serve /community from your WordPress installation.
How to make Wordpress in subdirectory of Next JS app
Hello dear community, I have my web app using Next.js under the domain name example.com. Now I want to have a WordPress blog (wordblog.com/community) under example.com/community. Using Next.js rewrites, I added this to my next config:
async rewrites() {
    return {
      fallback: [
        {
          source: "/community/:path*",
          destination: `https://wordblog.com/community/:path*`,
        },
      ],
    }
  },

In my WordPress installation folder:
1- Added this to .htaccess
<IfModule mod_headers.c>
  <FilesMatch "\.(ttf|ttc|otf|eot|woff|woff2|font.css|css|js)$">
    Header set Access-Control-Allow-Origin "*"
  </FilesMatch>
</IfModule>

2- Added this to wp-config.php:
define('WP_SITEURL', 'https://wordblog.com/community');
define('WP_HOME', 'https://example.com/community');
define('COOKIE_DOMAIN', '.example.com');

3- Added this to wp-content/themes/your-theme/functions.php
remove_filter('template_redirect','redirect_canonical');

add_filter('rest_url', 'serve_rest_url_on_wp_subdomain');
function serve_rest_url_on_wp_subdomain ($url) {
    return str_replace('https://example.com/community', 'https://wordblog.com/community', $url);
}

But when I go to example.com/community, it redirects me to wordblog.com/community. Also some pages of the blog are rendered under wordblog.com/community/pageX and not as example.com/community/pageX
[ "You might be able to solve this using the Alias directive in your apache setup: https://httpd.apache.org/docs/2.0/mod/mod_alias.html#alias\nThis would mean you would have to remove the redirect in nextJS. Apache will know to serve /community from your wordpress installation.\n" ]
[ 0 ]
[]
[]
[ "next.js", "php", "wordpress" ]
stackoverflow_0074674045_next.js_php_wordpress.txt
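A sketch of that Alias approach in the Apache config (the filesystem path is an assumption; it must point at the directory that actually holds the WordPress install):

# in httpd.conf or the site's <VirtualHost> block
Alias "/community" "/var/www/wordblog/community"
<Directory "/var/www/wordblog/community">
    AllowOverride All
    Require all granted
</Directory>

With Apache serving /community directly, the rewrites() fallback can be dropped from next.config.js, and WP_SITEURL and WP_HOME can both be set to https://example.com/community so WordPress stops generating wordblog.com links.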
Q: How to use upsert with Mongoose and MongoDB? I am currently trying to upsert some data in a MongoDB array. The only problem is that, while it's just an update, it works; but when the object doesn't exist, I get an error. The function in my backend:
const session = await mongoose.startSession();

await session.withTransaction(async () => {
  await Board.updateOne({ _id: boardId }, { title: req.body.title });

  for (let column of columns) {
    await Board.findOneAndUpdate(
      { "columns._id": column._id },
      {
        $set: {
          "columns.$.title": column.title,
        },
      },
      { new: true, upsert: true }
    );
  }
  return res.status(200).json({ msg: "OK" });
});

session.endSession();

ERROR
MongoServerError: Plan executor error during findAndModify :: caused by :: The positional operator did not find the match needed from the query.

I have also tried to not use the update operator
$set: {
  "columns.title": column.title,
},

ERROR
MongoServerError: Plan executor error during findAndModify :: caused by :: Cannot create field 'title' in element {columns: [ { title: "987", _id: ObjectId('6388bfff30d83d81317a9c54') }, { title: "123", _id: ObjectId('6388bfff30d83d81317a9c55') } ]}

The request I am sending:
{
  id: '6387a6f4472d809c4f299794',
  title: 'Test edit board name UPDATE',
  columns: [
    { title: '987', _id: '6388bfff30d83d81317a9c54' },
    { title: '123', _id: '6388bfff30d83d81317a9c55' },
    { title: 'ERROR' }
  ]
}

Thanks in advance; if more information is required, I'll update this post.

A: Given the fact that you have to deal with potentially incomplete objects in your request, you may think about handling the logic in plain JS and updating the object at the end:
const session = await mongoose.startSession();

await session.withTransaction(async () => {
  const board = await Board.findById(boardId);

  if (!board) return res.status(400).json({ msg: 'Not found' });

  // Update title
  board.title = req.body.title;

  // Update columns
  for (let column of columns) {
    if (column._id === undefined) continue;
    // _id is an ObjectId, so compare as strings
    const colIdx = board.columns.findIndex(c => String(c._id) === String(column._id));
    if (colIdx === -1) continue;
    board.columns[colIdx].title = column.title;
  }

  // Save board
  await board.save();

  return res.status(200).json({ msg: 'OK' });
});

session.endSession();
How to use upsert with Mongoose and MongoDB?
I am currently trying to upsert some data in a MongoDB array. The only problem is that, while it's just an update, it works; but when the object doesn't exist, I get an error. The function in my backend:
const session = await mongoose.startSession();

await session.withTransaction(async () => {
  await Board.updateOne({ _id: boardId }, { title: req.body.title });

  for (let column of columns) {
    await Board.findOneAndUpdate(
      { "columns._id": column._id },
      {
        $set: {
          "columns.$.title": column.title,
        },
      },
      { new: true, upsert: true }
    );
  }
  return res.status(200).json({ msg: "OK" });
});

session.endSession();

ERROR
MongoServerError: Plan executor error during findAndModify :: caused by :: The positional operator did not find the match needed from the query.

I have also tried to not use the update operator
$set: {
  "columns.title": column.title,
},

ERROR
MongoServerError: Plan executor error during findAndModify :: caused by :: Cannot create field 'title' in element {columns: [ { title: "987", _id: ObjectId('6388bfff30d83d81317a9c54') }, { title: "123", _id: ObjectId('6388bfff30d83d81317a9c55') } ]}

The request I am sending:
{
  id: '6387a6f4472d809c4f299794',
  title: 'Test edit board name UPDATE',
  columns: [
    { title: '987', _id: '6388bfff30d83d81317a9c54' },
    { title: '123', _id: '6388bfff30d83d81317a9c55' },
    { title: 'ERROR' }
  ]
}

Thanks in advance; if more information is required, I'll update this post.
[ "Given the fact that you have to deal with potentially incomplete objects in your request you may think about handling the logic in plain JS and updating the object at the end:\nconst session = await mongoose.startSession();\n\nawait session.withTransaction(async () => {\n const board = await Board.findById(boardId);\n \n if (!board) return res.status(400).json({ msg: 'Not found' });\n\n // Update title \n board.title = req.body.title;\n\n // Update columns\n for (let column of columns) {\n if (column._id === undefined) continue;\n const colIdx = board.columns.findIndex(c => c._id === column._id);\n if (colIdx === -1) continue;\n board.columns[colIdx].title = column.title;\n }\n\n // Save board\n await board.save();\n\n return res.status(200).json({ msg: 'OK' });\n});\n\nsession.endSession();\n\n" ]
[ 0 ]
[]
[]
[ "javascript", "mongodb", "mongoose" ]
stackoverflow_0074669921_javascript_mongodb_mongoose.txt
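If you would rather let MongoDB do the matching, the filtered positional operator is an alternative to the plain-JS loop; a sketch (note that, like the positional-operator attempt in the question, this only updates columns that already exist; it does not insert missing ones):

for (const column of columns) {
  if (!column._id) continue;
  await Board.updateOne(
    { _id: boardId },
    { $set: { "columns.$[col].title": column.title } },
    // cast the string id so it matches the stored ObjectId
    { arrayFilters: [{ "col._id": new mongoose.Types.ObjectId(column._id) }] }
  );
}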
Q: Do you get charged for requests that fail an APIG custom authorizer (unauthorized, rate limited)? I'm looking to use an API Gateway Custom Authorizer for authorization. If a user with an unauthorized token makes thousands/millions of tries, will I get charged?

A: For a custom authorizer, you will get charged whether the request passes or fails, as it needs to be validated with API Gateway

A: According to this thread on the AWS Forums: https://forums.aws.amazon.com/thread.jspa?threadID=274894&tstart=0

Unauthorized calls are not charged to your account. You will be charged for any invocations of your custom authorizer, but these results are cached for a TTL that you can configure.

A: I wanted to leave a comment here as this is one of the top results when you search for this topic and that forum post is dead. I spoke to AWS Support and they confirmed that you do not pay for unauthed requests at the APIGW level, however you do pay for the authorizer lambda running the verification code:

To clarify:

Request comes in to API Gateway
Request is forwarded to Lambda custom authorizer
Request fails custom authorizer <-- You are charged for this lambda invocation
API Gateway rejects this request <-- You are not charged for this at the APIGW level
Do you get charged for requests that fail an APIG custom authorizer (unauthorized, rate limited)?
I'm looking to use an API Gateway Custom Authorizer for authorization. If a user with an unauthorized token makes thousands/millions of tries, will I get charged?
[ "For Custom Authorizer to authorization, you will get charged whether it pass or fail as it needs to get validated with API Gateway\n", "According to this thread on the AWS Forums: https://forums.aws.amazon.com/thread.jspa?threadID=274894&tstart=0\n\nUnauthorized calls are not charged to your account. You will be charged for any invocations of your custom authorizer, but these results are cached for a TTL that you can configure.\n\n", "I wanted to leave a comment here as this is one of the top results when you search for this topic and that forum post is dead. I spoke to AWS Support and they confirmed that you do not pay for unauthed requests at the APIGW level, however you do pay for the authorizer lambda running the verification code:\n\nTo clarify:\n\nRequest comes in to API Gateway\nRequest is forwarded to Lambda custom authorizer\nRequest fails custom authorizer <-- You are charged for this lambda invocation\nAPI Gateway rejects this request <-- You are not charged for this at the APIGW level\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "aws_api_gateway" ]
stackoverflow_0049048666_aws_api_gateway.txt
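Since the authorizer Lambda is the billable part, caching its results shrinks the cost of repeated bad tokens; a CloudFormation sketch (the resource names and the 300-second TTL are assumptions):

MyTokenAuthorizer:
  Type: AWS::ApiGateway::Authorizer
  Properties:
    Name: token-authorizer
    Type: TOKEN
    RestApiId: !Ref MyRestApi
    IdentitySource: method.request.header.Authorization
    AuthorizerUri: !Sub arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${AuthorizerFunction.Arn}/invocations
    AuthorizerResultTtlInSeconds: 300

With the TTL set, repeated tries with the same bad token within the cache window are rejected from the cached policy instead of invoking the Lambda again.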
Q: How can I join sets in Zimpl? I have two parameters read from file, of m*n (Months * Nights) dimension: A[m,n] and B[m,n] How can I make a set / parameter from these two parameters so that the new set will have the same indices m*n, and elements combined as sets with an extra zero between? For example, let's say A["D", 5] = 35, B["D", 5] = 2 , I want to have C["D", 5] = {35, 0, 2} Is this possible? ps: I want to use it in the objective min/max. I tried to just use {A[a,b], 0, B[a,b]}[index] but failed, so I am trying to get pre-defined sets to use instead.

A: I managed to do it like this:
set C[<m,n> in Months*Nights] := {A[m,n], 0, B[m,n]} ;

And I can refer to them in this way:
ord(C[m,n],3,1)

where 3 refers to B[m,n] (the third element of the set), and 1 selects the first component, as required by the third parameter of the ord function.
How can I join sets in Zimpl?
I have two parameters read from file, of m*n (Months * Nights) dimension: A[m,n] and B[m,n] How can I make a set / parameter from these two parameters so that the new set will have the same indices m*n, and elements combined as sets with an extra zero between? For example, let's say A["D", 5] = 35, B["D", 5] = 2 , I want to have C["D", 5] = {35, 0, 2} Is this possible? ps: I want to use it in the objective min/max, I tried to just use {A[a,b], 0, B[a,b]}[index] but failed, so I am trying to get a pre-defined sets to use instead.
[ "I managed to do it like this:\nset C[<m,n> in Months*Nights] := {A[m,n], 0, B[m,n]} ;\n\nAnd I can refer them in this way:\nord(C[m,n],3,1)\n\nWhere 3 refer to B[m,n], and 1 is a placeholder for the third parameter of ord function.\n" ]
[ 0 ]
[]
[]
[ "scip", "zimpl" ]
stackoverflow_0074670628_scip_zimpl.txt
Q: Find the latest table daily on BigQuery and concatenate with a main table having the same schema Daily I’m receiving a new table with the same format name but a different date, and I want to concatenate the new table's data with my main table.
Note: the dataset is the same, and the schema is also the same.

A: Main Table:
CREATE TABLE `yourdataset.main_table`
(
  id INT,
  name STRING,
  inserted_date DATE
)
PARTITION BY inserted_date;

Main table is partitioned by DATE column. You can append data to the main table and data from each new table will be stored in different date partitions.
INSERT INTO `yourdataset.main_table` VALUES (1, "name1", "2022-12-01");
INSERT INTO `yourdataset.main_table` VALUES (2, "name2", "2022-12-02");
INSERT INTO `yourdataset.main_table` VALUES (3, "name3", "2022-12-03");
INSERT INTO `yourdataset.main_table` VALUES (4, "name4", "2022-12-04");

Refer BigQuery partitioned tables documentation.

A: You can execute a query like this:
INSERT INTO `mydataset.main_table`
SELECT 
  *
FROM
  `mydataset.daily_table_*`
where _TABLE_SUFFIX = FORMAT_DATE('%Y%m%d', CURRENT_DATE())

In my example, I assume that you receive your daily table with a name like daily_table_{currentstrdate}, where {currentstrdate} has the format YYYYMMDD.
The schema is the same, so you can apply an insert/select query.
In this example, I retrieved the current date dynamically and transformed it to a String; if that's not mandatory for you, you can directly pass the daily table name.
Find the latest table daily on BigQuery and concatenate with a main table having the same schema
Daily I’m receiving a new table with the same format name but a different date, and I want to concatenate the new table's data with my main table.
Note: the dataset is the same, and the schema is also the same.
[ "Main Table:\nCREATE TABLE `yourdataset.main_table`\n(\n id INT,\n name STRING,\n inserted_date DATE\n)\nPARTITION BY inserted_date;\n\nMain table is partitioned by DATE column. You can append data to main table and data from each new table will be stored in different date partitions.\nINSERT INTO `yourdataset.main_table` VALUES (1, \"name1\", \"2022-12-01\");\nINSERT INTO `yourdataset.main_table` VALUES (2, \"name2\", \"2022-12-02\");\nINSERT INTO `yourdataset.main_table` VALUES (3, \"name3\", \"2022-12-03\");\nINSERT INTO `yourdataset.main_table` VALUES (4, \"name4\", \"2022-12-04\");\n\nRefer BigQuery partitioned tables documentation.\n", "You can execute a query like this :\nINSERT INTO `mydataset.main_table`\nSELECT \n *\nFROM\n `mydataset.daily_table_*`\nwhere _TABLE_SUFFIX = FORMAT_DATE('%Y%m%d', CURRENT_DATE())\n\nIn my example, I assume that you receive your daily table with a name like daily_table_{currentstrdate} {currentstrdate} has this format in my example YYYYMMDD\nThe schema is the same and you can apply an insert/select query.\nIn this example, I retrieved the current date dynamically and transformed it to String, if it's not mandatory for you, you can directly pass the daily table name.\n" ]
[ 0, 0 ]
[]
[]
[ "google_bigquery" ]
stackoverflow_0074672427_google_bigquery.txt
Q: How to handle log events with the npmlog library? This might be an easy question for you, but I am struggling with it. I want to catch the events thrown by the npmlog library as described here: https://www.npmjs.com/package/npmlog How do I create an event listener on those? There is no .on() function, nor can I create an instance of log beforehand.
import * as log from 'npmlog';
log.error('problem', 'some message');

// What I would like to do:
log.on("error", ()=>{do something})

Thank you very much for your help!

A: The eventName is log, log.<level>, or <prefix>.
log.on('log', function (record) {
  // Use log record
});

log.error('error', 'error message');
How to handle log events with the npmlog library?
This might be an easy question for you, but I am struggling with it. I want to catch the events thrown by the npmlog library as described here: https://www.npmjs.com/package/npmlog How do I create an event listener on those? There is no .on() function, nor can I create an instance of log beforehand.
import * as log from 'npmlog';
log.error('problem', 'some message');

// What I would like to do:
log.on("error", ()=>{do something})

Thank you very much for your help!
[ "The eventName is log, log.<level>, or <prefix>.\nlog.on('log', function (record) {\n // Use log record\n});\n\nlog.error('error', 'error message');\n\n" ]
[ 0 ]
[]
[]
[ "event_handling", "events", "javascript", "logging", "node.js" ]
stackoverflow_0072506435_event_handling_events_javascript_logging_node.js.txt
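A sketch of both listener styles from the answer, keeping the question's import form (the record fields shown are what npmlog typically passes; treat the exact shape as an assumption):

import * as log from 'npmlog';

// fires for every log call
log.on('log', (record) => {
  console.log(record.level, record.prefix, record.message);
});

// fires only for error-level records
log.on('log.error', (record) => {
  // handle the error record here
});

log.error('problem', 'some message');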
Q: No module named 'graphql.type' in Django I am new to Django and GraphQL. Following the article, I am using Python 3.8 in a virtual env and 3.10 on Windows, but the same error occurs on both sides. I also tried this Question. I also heard that GraphQL generates queries, but I don't know how to generate them. This error occurs:
Traceback (most recent call last):
File "/usr/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/usr/lib/python3.8/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/home/talha/ve/lib/python3.8/site-packages/django/utils/autoreload.py", line 64, in wrapper
fn(*args, **kwargs)
File "/home/talha/ve/lib/python3.8/site-packages/django/core/management/commands/runserver.py", line 125, in inner_run
autoreload.raise_last_exception()
File "/home/talha/ve/lib/python3.8/site-packages/django/utils/autoreload.py", line 87, in raise_last_exception
raise _exception[1]
File "/home/talha/ve/lib/python3.8/site-packages/django/core/management/__init__.py", line 398, in execute
autoreload.check_errors(django.setup)()
File "/home/talha/ve/lib/python3.8/site-packages/django/utils/autoreload.py", line 64, in wrapper
fn(*args, **kwargs)
File "/home/talha/ve/lib/python3.8/site-packages/django/__init__.py", line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "/home/talha/ve/lib/python3.8/site-packages/django/apps/registry.py", line 91, in populate
app_config = AppConfig.create(entry)
File "/home/talha/ve/lib/python3.8/site-packages/django/apps/config.py", line 193, in create
import_module(entry)
File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 961, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 961, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 783, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/talha/ve/lib/python3.8/site-packages/ariadne/__init__.py", line 1, in <module>
from .enums import (
File "/home/talha/ve/lib/python3.8/site-packages/ariadne/enums.py", line 17, in <module>
from graphql.type import GraphQLEnumType, GraphQLNamedType, GraphQLSchema
ModuleNotFoundError: No module named 'graphql.type'

A: You can try the following ways.
First, look for a graphql directory in the project on the Python path; renaming it will fix the issue.
You can also try these commands,
pip install pip --upgrade
pip install setuptools --upgrade
pip install gql[all]

Hope this helps; if not, please let me know. Thanks
No module named 'graphql.type' in Django
I am new to Django and GraphQL. Following the article, I am using Python 3.8 in a virtual env and 3.10 on Windows, but the same error occurs on both sides. I also tried this Question. I also heard that GraphQL generates queries, but I don't know how to generate them. This error occurs:
Traceback (most recent call last):
File "/usr/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/usr/lib/python3.8/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/home/talha/ve/lib/python3.8/site-packages/django/utils/autoreload.py", line 64, in wrapper
fn(*args, **kwargs)
File "/home/talha/ve/lib/python3.8/site-packages/django/core/management/commands/runserver.py", line 125, in inner_run
autoreload.raise_last_exception()
File "/home/talha/ve/lib/python3.8/site-packages/django/utils/autoreload.py", line 87, in raise_last_exception
raise _exception[1]
File "/home/talha/ve/lib/python3.8/site-packages/django/core/management/__init__.py", line 398, in execute
autoreload.check_errors(django.setup)()
File "/home/talha/ve/lib/python3.8/site-packages/django/utils/autoreload.py", line 64, in wrapper
fn(*args, **kwargs)
File "/home/talha/ve/lib/python3.8/site-packages/django/__init__.py", line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "/home/talha/ve/lib/python3.8/site-packages/django/apps/registry.py", line 91, in populate
app_config = AppConfig.create(entry)
File "/home/talha/ve/lib/python3.8/site-packages/django/apps/config.py", line 193, in create
import_module(entry)
File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 961, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 961, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 783, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/talha/ve/lib/python3.8/site-packages/ariadne/__init__.py", line 1, in <module>
from .enums import (
File "/home/talha/ve/lib/python3.8/site-packages/ariadne/enums.py", line 17, in <module>
from graphql.type import GraphQLEnumType, GraphQLNamedType, GraphQLSchema
ModuleNotFoundError: No module named 'graphql.type'
[ "You can try these following ways,\nOne, you can find graphql directory in the project, on python path. renaming it will fix the issue.\nAnd also you can try these commands,\npip install pip --upgrade\npip install setuptools --upgrade\npip install gql[all]\n\nHope this helps, if not please let know. Thanks\n" ]
[ 0 ]
[]
[]
[ "ariadne_graphql", "django", "graphql", "python" ]
stackoverflow_0074674006_ariadne_graphql_django_graphql_python.txt
Q: How to convert a normal Java app to a Java app with a GUI or pop-up window? I am new to Java. I am trying to convert a simple Java app with terminal output to an app with a GUI or UI. I have seen small Java applications with a little window, like a calculator or currency converter. This is the program I am trying to convert to a GUI app: https://github.com/Abhinav-26/Employee-Management-System/blob/master/EmployManagementSystem.java I have tried to run this app and I am getting the output in a terminal (image 1: terminal output). I need a UI for this simple app (image 2: example of a UI). I am expecting to get a simple Java app with a UI, like in picture 2. Is there a way to edit the existing app? What should I Google / look up?
How to convert a normal Java app to a Java app with a GUI or pop-up window?
I am new to Java. I am trying to convert a simple Java app with terminal output to an app with a GUI or UI. I have seen small Java applications with a little window, like a calculator or currency converter. This is the program I am trying to convert to a GUI app: https://github.com/Abhinav-26/Employee-Management-System/blob/master/EmployManagementSystem.java I have tried to run this app and I am getting the output in a terminal (image 1: terminal output). I need a UI for this simple app (image 2: example of a UI). I am expecting to get a simple Java app with a UI, like in picture 2. Is there a way to edit the existing app? What should I Google / look up?
[]
[]
[ "Look into java swing. You can create a JWindow and JFrame with other elements in it like JButton and JTextInput.\nhttps://www.geeksforgeeks.org/java-swing-jwindow-examples/\n", "The topic is not as easy as you might think it is.\nAs you already found out there are many tutorials on how to build a calculator by using JavaFX. I'll list just a few:\n\nSimple JavaFX Calculator at CodeReview\nSimple basic Calculator Javafx at stackoverflow\nJavaFX Software Tutorial: Calculator (MVC) on YouTube\nMaking Calculator in JavaFX (with Source Code)\nJavaFX implementation calculator at Very Interesting Programming\n\nYou can easily find more. But you dont find the same for your application for a good reason: If you want to build a standalone application, building a calculator is a better starting point than building some employee management system.\nWhat do I mean by standalone? Unlike a standalone calculator the technical architecture of an employee management system requires some sort of storage. The typical storage-solution would be a database. Then you need some non-UI code to access this storage, I'll call this the server. And finally there will be some UI, I'll call this the client. So the complete application has three tiers: database, server and client. The calculator example has only one tier, everything is included in a single app.\nSo if you are looking for a tutorial on an employee managment system, you might want to search for tutorials that use a multi-tier-architecture. One common architecture will be a server build with Spring Boot. And there you are – you can easily find tutorials on building an employee managment system with Spring Boot.\nHere are two of them but there are many more:\n\nSpring Boot Tutorial - Build Employee Management Project from Scratch using Spring Boot + Spring Security + Thymeleaf and MySQL Database\nCreate and View Employee Using JPA, Springboot and Thymeleaf\n\nThe downside is: You will not be able to use existing code.\nSo now you have two options:\n\nLearn some Java UI-Framework such as JavaFX or the older Java Swing.\nLearn how to build a more complex Java application that has at least some sort of client and some sort of server. Spring Boot is a good starting point for this.\n\n" ]
[ -2, -2 ]
[ "java" ]
stackoverflow_0074674114_java.txt
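The Swing suggestion can be made concrete with a minimal window; a sketch (the frame title and the button's action are placeholders; wiring it to the employee-management logic is left out):

import javax.swing.*;
import java.awt.BorderLayout;

public class EmployeeUi {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Employee Management System");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);

            JTextArea output = new JTextArea(12, 40); // replaces the terminal output
            output.setEditable(false);
            JButton showAll = new JButton("Show All Employees");
            showAll.addActionListener(e -> output.append("...print the employee list here...\n"));

            frame.add(showAll, BorderLayout.NORTH);
            frame.add(new JScrollPane(output), BorderLayout.CENTER);
            frame.pack();
            frame.setVisible(true);
        });
    }
}

Each println in the original program then becomes an output.append(...) call, which is usually the smallest first step when moving a console app to a GUI.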
Q: How to create a Baccarat trend like graph in Android? I need to create a graph that is similar to a Baccarat trend like this What could be the best approach? I am thinking of using RecyclerView with GridLayout but I have no idea how to plot it this way. Any library that supports this kind of graph?

A: Try creating your own Android View that takes some data and draws it.
An example of your drawing method could be
@Override
protected void onDraw(Canvas canvas) {
    // draw grid
    int spacing = 10; // constant cell size
    // use (getHeight()/rows) and (getWidth()/cols) as spacing to get a fixed
    // number of rows and columns instead

    for (int y = 0; y < getHeight(); y += spacing) {
        canvas.drawLine(0, y, getWidth(), y, paint); // paint is a Paint field of the view
    }

    for (int x = 0; x < getWidth(); x += spacing) {
        canvas.drawLine(x, 0, x, getHeight(), paint);
    }
    // draw your data
}
How to create a Baccarat trend like graph in Android?
I need to create a graph that is similar to a Baccarat trend like this What could be the best approach? I am thinking of using RecyclerView with GridLayout but I have no idea how to plot it this way. Any library that supports this kind of graph?
[ "Try creating your own android View that can take some data and draw it\nAn example of your draw method could be\npublic void draw(Canvas canvas){\n //draw grid\n int spacing = 10; //costant cell size\n //use (height/rows) and (width/cols) as spacing to have costant cols and rows instead\n\n for(int y = 0; y<height; y+=spacing){\n canvas.drawLine(0,y,width,y,paint);\n }\n\n for(int x = 0; x<width; x+=spacing){\n canvas.drawLine(x, 0, x,height,paint);\n }\n //draw your data\n}\n\n" ]
[ 0 ]
[]
[]
[ "android", "android_gridlayout", "android_recyclerview", "gridlayoutmanager" ]
stackoverflow_0074674212_android_android_gridlayout_android_recyclerview_gridlayoutmanager.txt
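Fleshing that out, a sketch of a complete custom View for a big-road style grid (the cell size, colors, and the shape of the data are assumptions):

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.util.AttributeSet;
import android.view.View;

public class BigRoadView extends View {
    private final Paint gridPaint = new Paint();
    private final Paint markPaint = new Paint(Paint.ANTI_ALIAS_FLAG);
    private final int cell = 40; // px per cell

    public BigRoadView(Context context, AttributeSet attrs) {
        super(context, attrs);
        gridPaint.setColor(Color.LTGRAY);
        markPaint.setStyle(Paint.Style.STROKE);
        markPaint.setStrokeWidth(4f);
        markPaint.setColor(Color.RED);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        for (int y = 0; y <= getHeight(); y += cell)
            canvas.drawLine(0, y, getWidth(), y, gridPaint);
        for (int x = 0; x <= getWidth(); x += cell)
            canvas.drawLine(x, 0, x, getHeight(), gridPaint);
        // then draw one circle per result into its (col, row) cell, e.g.:
        // canvas.drawCircle(col * cell + cell / 2f, row * cell + cell / 2f,
        //                   cell / 2f - 4, markPaint);
    }
}

A custom View like this avoids the overhead of a RecyclerView full of tiny cells, since a Baccarat road is a static drawing rather than a scrollable list of interactive items.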
Q: not enough arguments in call to (_C2func_bcc_func_load) I am getting an error not enough arguments in call to (_C2func_bcc_func_load) when compiling Go. Go Version used: go version go1.19.1 How can I resolve this error? Would appreciate it if anyone could help.
Error Message:
github.com/iovisor/gobpf/bcc

/home/jeremy/go/pkg/mod/github.com/iovisor/[email protected]/bcc/module.go:230:132: not enough arguments in call to (_C2func_bcc_func_load)
have (unsafe.Pointer, _Ctype_int, *_Ctype_char, *_Ctype_struct_bpf_insn, _Ctype_int, *_Ctype_char, _Ctype_uint, _Ctype_int, *_Ctype_char, _Ctype_uint, nil)
want (unsafe.Pointer, _Ctype_int, *_Ctype_char, *_Ctype_struct_bpf_insn, _Ctype_int, *_Ctype_char, _Ctype_uint, _Ctype_int, *_Ctype_char, _Ctype_uint, *_Ctype_char, _Ctype_int)

A: It seems that your dependency library github.com/iovisor/gobpf is broken. Try checking out their GitHub to see if there are any issues, or just do something like go get -u to update your project dependencies to the latest versions (probably a new version has already been released and the problem is fixed). The version can also be restricted in your go.mod file, so you may want to change it there.

A: [email protected] is not compatible with bcc-0.25.0, but it works with bcc-0.24.0.
I checked out the code at the desired version:
git clone --branch v0.24.0 https://github.com/iovisor/bcc.git

Then I followed the instructions to build it from source:
mkdir bcc/build; cd bcc/build
cmake ..
make
sudo make install
cmake -DPYTHON_CMD=python3 .. # build python3 binding
pushd src/python/
make
sudo make install
popd

This issue has more information. There was a PR merged 12 days ago with a potential fix - it will be available in the next release of gobpf.

A: I used the latest commit, which is compatible with bcc-0.25.0:
$ go list -m github.com/iovisor/gobpf@master
github.com/iovisor/gobpf v0.2.1-0.20221005153822-16120a1bf4d4

Then in your go.mod, use:
require github.com/iovisor/gobpf v0.2.1-0.20221005153822-16120a1bf4d4
not enough arguments in call to (_C2func_bcc_func_load)
I am getting an error not enough arguments in call to (_C2func_bcc_func_load) when compiling Go. Go Version used: go version go1.19.1 How can I resolve this error? Would appreciate if anyone could help. Error Message: github.com/iovisor/gobpf/bcc /home/jeremy/go/pkg/mod/github.com/iovisor/[email protected]/bcc/module.go:230:132: not enough arguments in call to (_C2func_bcc_func_load) have (unsafe.Pointer, _Ctype_int, *_Ctype_char, *_Ctype_struct_bpf_insn, _Ctype_int, *_Ctype_char, _Ctype_uint, _Ctype_int, *_Ctype_char, _Ctype_uint, nil) want (unsafe.Pointer, _Ctype_int, *_Ctype_char, *_Ctype_struct_bpf_insn, _Ctype_int, *_Ctype_char, _Ctype_uint, _Ctype_int, *_Ctype_char, _Ctype_uint, *_Ctype_char, _Ctype_int)
[ "It seems that your dependency library github.com/iovisor was broken. Try to check out their github to see if there are any issues, or just do something like go get -u to update your project dependencies to the latest versions (probably some new version have been already released and the problem is fixed). The version can also be restricted in your go.mod file, so you may want to change it there.\n", "[email protected] is not compatible with bcc-0.25.0, but it works with bcc-0.24.0.\nI checked out the code at the desired version:\ngit clone --branch v0.24.0 https://github.com/iovisor/bcc.git\n\nThen I followed the instructions to build it from source:\nmkdir bcc/build; cd bcc/build\ncmake ..\nmake\nsudo make install\ncmake -DPYTHON_CMD=python3 .. # build python3 binding\npushd src/python/\nmake\nsudo make install\npopd\n\nThis issue has more information. There was a PR merged 12 days ago with a potential fix - it will be available in the next release of gobpf.\n", "I used latest recent commit which is compatible with bcc-0.25.0:\n$ go list -m github.com/iovisor/gobpf@master\ngithub.com/iovisor/gobpf v0.2.1-0.20221005153822-16120a1bf4d4\n\nThen in your go.mod, use:\nrequire github.com/iovisor/gobpf v0.2.1-0.20221005153822-16120a1bf4d4\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "go" ]
stackoverflow_0073714654_go.txt
Q: react-native-sqlite-storage - no such table Does someone know the best way to connect react-native-sqlite-storage? My debugger gets "no such table: nameOfTable (code 1)" after a query. I should get "id", "name" and "last name" from a table named "person". I have connected my prepopulated database ("mioDB") located in "/android/app/src/main/assets/mioDB.db". This is my little code:
import React,{Component} from 'react';
import {TextInput,View,Text,TouchableOpacity} from 'react-native';
import { openDatabase } from "react-native-sqlite-storage";

let mioDB = openDatabase('mioDB');

class App extends Component{
  constructor(props){
    super(props);
    this.state={
      name:'',
      lastName:''
    };
  }

  showData=()=>{
    alert('name: '+this.state.name+' lastName: '+this.state.lastName);
  };

  listOfNames=()=>{
    try{
      mioDB.transaction((statement)=>{
        console.log('****************');
        statement.executeSql('SELECT * FROM person',[],(statement,results)=>{
          console.log('****************');
          var len = results.rows.length;
          for (let i = 0; i < len; i++) {
            let row = results.rows.item(i);
            console.log(`Record: ${row.name}`);
          }
        });
      });
    }catch(error){
      alert(error);
    }
  };

  render(){
    return(
      <View style={{marginTop:100}}>
        <TextInput style={{fontSize:20}} placeholder='name' onChangeText={(text)=>this.setState({name:text})}/>
        <TextInput style={{fontSize:20}} placeholder='lastName' onChangeText={(text)=>this.setState({lastName:text})}/>
        <TouchableOpacity style={{width:150,height:50}} onPress={this.showData}><Text>Create db</Text></TouchableOpacity>
        <TouchableOpacity style={{width:150,height:50}} onPress={this.listOfNames}><Text>Show database</Text></TouchableOpacity>
      </View>
    );
  }
}
export default App;

From the debugger I can't tell where it goes wrong, because the way to connect seems easy.

A: As per the document (react-native-sqlite-storage), the pre-populated file should be under the www directory, but the path that you have shared is not under the www directory. Also, for a pre-populated DB, openDatabase takes an object of the form
{name : "testDB", createFromLocation : "~data/mioDB.db"}

Here data is a subdirectory of the www directory.
Please make these two changes and try.

A: SQLite silently creates the database file if it does not exist. So if you've got the path wrong, you are opening an empty database file, which of course does not contain any tables. Make sure the database file exists there and it is not empty.

You have these options:
1- Create a directory named www under the assets folder (android/app/src/main/assets) and place your DB there. (For example, I have a DB named common.db.)
NOTE
If the assets folder does not exist, create it.
2- Open your database like the following:
import SQLite from 'react-native-sqlite-storage'
....

const db = await SQLite.openDatabase({ name: 'common', createFromLocation: "~common.db", location: 'Library' })

3- Now you can execute your query
db.transaction((tx) => {
  tx.executeSql("SELECT * FROM person", [], (tx, results) => {
    results.rows.raw().forEach(item => {
      console.log(item)
    })
  })
})
react-native-sqlite-storage - no such table
Does someone know the best way to connect react-native-sqlite-storage? My debugger gets "no such table: nameOfTable (code 1)" after a query. I should get "id", "name" and "last name" from a table named "person". I have connected my prepopulated database ("mioDB") located in "/android/app/src/main/assets/mioDB.db". This is my little code:
import React,{Component} from 'react';
import {TextInput,View,Text,TouchableOpacity} from 'react-native';
import { openDatabase } from "react-native-sqlite-storage";

let mioDB = openDatabase('mioDB');

class App extends Component{
  constructor(props){
    super(props);
    this.state={
      name:'',
      lastName:''
    };
  }

  showData=()=>{
    alert('name: '+this.state.name+' lastName: '+this.state.lastName);
  };

  listOfNames=()=>{
    try{
      mioDB.transaction((statement)=>{
        console.log('****************');
        statement.executeSql('SELECT * FROM person',[],(statement,results)=>{
          console.log('****************');
          var len = results.rows.length;
          for (let i = 0; i < len; i++) {
            let row = results.rows.item(i);
            console.log(`Record: ${row.name}`);
          }
        });
      });
    }catch(error){
      alert(error);
    }
  };

  render(){
    return(
      <View style={{marginTop:100}}>
        <TextInput style={{fontSize:20}} placeholder='name' onChangeText={(text)=>this.setState({name:text})}/>
        <TextInput style={{fontSize:20}} placeholder='lastName' onChangeText={(text)=>this.setState({lastName:text})}/>
        <TouchableOpacity style={{width:150,height:50}} onPress={this.showData}><Text>Create db</Text></TouchableOpacity>
        <TouchableOpacity style={{width:150,height:50}} onPress={this.listOfNames}><Text>Show database</Text></TouchableOpacity>
      </View>
    );
  }
}
export default App;

From the debugger I can't tell where it goes wrong, because the way to connect seems easy.
[ "As per the document (react-native-sqlite-storage), the pre-populated file should be under www directory, but the path that you have shared is not under www directory. Also for pre-populated DB, openDatabase takes an object of the form\n{name : \"testDB\", createFromLocation : \"~data/mioDB.db\"}\n\nHere data is a subdirectory of www directory.\nPlease make these two changes and try.\n", "\nSQLite silently creates the database file if it does not exist. So if\nyou've got the path wrong, you are opening an empty database file,\nwhich of course does not contain any tables. Make sure the database\nfile exists there and it is not empty\n\nYou have to do some options\n1- Create a directory named www under the assets folder in android/app/src/main/assets and place your DB here.(for example I have a Db with the name of common.db)\nNOTE\nIf the assets folder dose does not exist create it.\n2- Open your database like the following:\nimport SQLite from 'react-native-sqlite-storage'\n....\n\nconst db= await SQLite.openDatabase({ name: 'common', createFromLocation: \"~common.db\", location: 'Library' })\n\n3- Now you can execute your query\ndb.transaction((tx) => {\n tx.executeSql(\"SELECT * FROM person\", [], (tx, results) => {\n results.rows.raw().forEach(item => {\n console.log(item)\n })\n })\n})\n\n" ]
[ 0, 0 ]
[]
[]
[ "react_native", "sqlite" ]
stackoverflow_0050440239_react_native_sqlite.txt
Q: typescript has a problem with react-hook-form fieldArray of numbers field name Very long title, but essentially there is a
Type 'string' is not assignable to type 'never'.ts(2322)
fieldArray.d.ts(7, 5): The expected type comes from property 'name' which is declared here on type 'UseFieldArrayProps<FormValues, never, "id">'

error on the fieldArray definition in react-hook-form. It sometimes disappears, but it is there most of the time, and I have no idea why, since all examples show it like that and it sometimes disappears without any changes.
Does anyone have a clue what the issue is? Why is TypeScript complaining?
I've tried changing versions and reordering the control and name values (it removed the error once, and when I swapped them again it came back, and no matter how many times I swapped them around again it stayed there). It's one of those errors where I have not the slightest clue where it's coming from.
Codesandbox link here: https://codesandbox.io/s/react-hook-form-list-of-numbers-s6zg2p?file=%2Fsrc%2FApp.tsx
Edit: the error is specifically on line 35.

A: It looks like the issue is with the type of the 'name' prop in the fieldArray definition. The 'name' prop is expected to be of type 'never', but in your code it is being passed as a string. You can fix this by either changing the type of the 'name' prop to be a string, or by passing a value of 'never' to the 'name' prop.
Here is an example of how you could fix this issue by changing the type of the 'name' prop:
// In the fieldArray definition:
const { fields, append, remove } = useFieldArray({
  name: 'numbers' // Change the type of the 'name' prop to be a string
});

And here is an example of how you could fix this issue by passing a value of 'never' to the 'name' prop:
// In the fieldArray definition:
const { fields, append, remove } = useFieldArray({
  name: 'numbers' as never // Pass a value of 'never' to the 'name' prop
});

A: You just need to update the version of react-hook-form. I checked the above code, and after updating the version it worked fine. It was also later fixed here:
https://github.com/react-hook-form/react-hook-form/blob/4f52102e52434ab81385856bbf81561b3253c3cb/src/types/fieldArray.ts

A: Welp, the answer is: you can't use anything else than objects for fieldArray values. I was trying to use numbers directly.
The issue is discussed here: https://github.com/react-hook-form/react-hook-form/discussions/7586
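For completeness, a minimal sketch of the object-wrapping workaround from the last answer, assuming react-hook-form v7; the FormValues shape and the numbers field name are illustrative, not taken from the sandbox:

import { useForm, useFieldArray } from 'react-hook-form';

// useFieldArray only supports arrays of objects, so each number is
// wrapped in an object instead of declaring numbers: number[] directly.
type FormValues = {
  numbers: { value: number }[];
};

export function NumbersForm() {
  const { control, register } = useForm<FormValues>({
    defaultValues: { numbers: [{ value: 0 }] },
  });
  const { fields, append } = useFieldArray({ control, name: 'numbers' });

  return (
    <form>
      {fields.map((field, index) => (
        <input key={field.id} type="number" {...register(`numbers.${index}.value` as const)} />
      ))}
      <button type="button" onClick={() => append({ value: 0 })}>Add</button>
    </form>
  );
}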
typescript has a problem with react-hook-form fieldArray of numbers field name
Very long title, but essentially there is a
Type 'string' is not assignable to type 'never'.ts(2322)
fieldArray.d.ts(7, 5): The expected type comes from property 'name' which is declared here on type 'UseFieldArrayProps<FormValues, never, "id">'

error on the fieldArray definition in react-hook-form. It sometimes disappears, but it is there most of the time, and I have no idea why, since all examples show it like that and it sometimes disappears without any changes.
Does anyone have a clue what the issue is? Why is TypeScript complaining?
I've tried changing versions and reordering the control and name values (it removed the error once, and when I swapped them again it came back, and no matter how many times I swapped them around again it stayed there). It's one of those errors where I have not the slightest clue where it's coming from.
Codesandbox link here: https://codesandbox.io/s/react-hook-form-list-of-numbers-s6zg2p?file=%2Fsrc%2FApp.tsx
Edit: the error is specifically on line 35.
[ "It looks like the issue is with the type of the 'name' prop in the fieldArray definition. The 'name' prop is expected to be of type 'never', but in your code it is being passed as a string. You can fix this by either changing the type of the 'name' prop to be a string, or by passing a value of 'never' to the 'name' prop.\nHere is an example of how you could fix this issue by changing the type of the 'name' prop:\n// In the fieldArray definition:\nconst { fields, append, remove } = useFieldArray({\n name: 'numbers' // Change the type of the 'name' prop to be a string\n});\n\nAnd here is an example of how you could fix this issue by passing a value of 'never' to the 'name' prop:\n// In the fieldArray definition:\nconst { fields, append, remove } = useFieldArray({\n name: 'numbers' as never // Pass a value of 'never' to the 'name' prop\n});\n\n", "You just need to update the version of react-hook-form. I checked above code and on updating version it worked fine. Also it was later fixed here\nhttps://github.com/react-hook-form/react-hook-form/blob/4f52102e52434ab81385856bbf81561b3253c3cb/src/types/fieldArray.ts\n", "Welp the answer is: you can't use anything else that objects for fieldArray values. I was trying to use direct numbers.\nThe issue is discussed here: https://github.com/react-hook-form/react-hook-form/discussions/7586\n" ]
[ 0, 0, 0 ]
[]
[]
[ "react_hook_form", "reactjs", "typescript" ]
stackoverflow_0074672395_react_hook_form_reactjs_typescript.txt
Q: Amazon API Gateway - Intentional attacks for costs raising I'm new to AWS and would like to deploy a microservice on Amazon Web Services. The function code shall be in AWS Lambda and these functions shall be triggered through AWS API Gateway. My Lambda functions themselves are protected via authorization. Furthermore, the number of authorised requests is within the free tier. Now my questions:

Can unauthorised attacks to Amazon API Gateway let the costs explode?
Can I prevent my Amazon API Gateway from such attacks?
Can I set a cost limit and shut the API off, in case of too high bills?
Are intentional API attacks common?

Thanks

A:
Can unauthorized attacks to Amazon API Gateway let the costs explode?

Yes. This can happen.

Can I prevent my Amazon API Gateway from such attacks?

You can use a web application firewall to reduce these malicious attacks using AWS WAF.

Set up AWS CloudFront integrated with AWS WAF in front of API Gateway.
Enable API Keys in API Gateway so that direct access to API Gateway without the API Key is not possible. You can set an API Key in the origin headers in CloudFront so that requests forwarded to API Gateway carry this API Key in their headers.

Can I set a cost limit and shut the API off, in case of too high bills?

You can enable throttling so that very high peaks of traffic will be throttled for API Gateway, reducing cost peaks (the negative side of this is that it affects the quality of service for real users). However, if you need to implement shutting down the API based on request rate, it's not directly supported with API Gateway. You need to do a custom implementation for this.

Are intentional API attacks common?

I haven't seen many attacks on the APIs I have deployed so far. Having said that, it can be very subjective based on the nature of your business, etc. However, I have seen bot-based invocations more often. When you are using AWS WAF you can implement a honeypot easily to prevent these. Example code for Bad Bot Blocking that connects with WAF is available from AWS Labs on GitHub.

A:
Can unauthorized attacks to Amazon API Gateway let the costs explode?

If you enable any type of authorization at the API Gateway layer (IAM, Custom, Cognito), API Gateway will NOT charge you for unauthorized requests. However, Lambda functions backing Custom Authorizers are still billed as normal Lambda invocations.
The same applies to throttled requests, if you have rate limiting enabled on your API.

A: I wanted to leave a comment here as this is one of the top results when you search for this topic and this has an official source from AWS. I spoke to AWS Support and they confirmed that you do not pay for unauthed requests at the APIGW level; however, you do pay for the authorizer lambda running the verification code:

To clarify:

Request comes in to API Gateway
Request is forwarded to Lambda custom authorizer
Request fails custom authorizer <-- You are charged for this lambda invocation
API Gateway rejects this request <-- You are not charged for this at the APIGW level
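As a partial answer to the cost-limit question, AWS has no hard spending cap, but a CloudWatch billing alarm can at least notify you. A rough boto3 sketch; the SNS topic ARN and the 50 USD threshold are placeholders, "Receive Billing Alerts" must be enabled in the account, and billing metrics only exist in us-east-1:

import boto3

cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')

cloudwatch.put_metric_alarm(
    AlarmName='monthly-charges-over-threshold',
    Namespace='AWS/Billing',
    MetricName='EstimatedCharges',
    Dimensions=[{'Name': 'Currency', 'Value': 'USD'}],
    Statistic='Maximum',
    Period=21600,            # billing metrics only update every few hours
    EvaluationPeriods=1,
    Threshold=50.0,          # placeholder: alert above 50 USD
    ComparisonOperator='GreaterThanThreshold',
    AlarmActions=['arn:aws:sns:us-east-1:123456789012:billing-alerts'],  # placeholder ARN
)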
Amazon API Gateway - Intentional attacks for costs raising
I'm new to AWS and would like to deploy a microservice on Amazon Web Services. The function code shall be in AWS Lambda and these functions shall be triggered through AWS API Gateway. My Lambda functions themselves are protected via authorization. Furthermore, the number of authorised requests is within the free tier. Now my questions:

Can unauthorised attacks to Amazon API Gateway let the costs explode?
Can I prevent my Amazon API Gateway from such attacks?
Can I set a cost limit and shut the API off, in case of too high bills?
Are intentional API attacks common?

Thanks
[ "\nCan unauthorized attacks to Amazon API Gateway let the costs explode?\n\nYes. This can happen.\n\nCan I prevent my Amazon API Gateway from such attacks?\n\nYou can use a web application firewall to reduce these malicious attacks using AWS WAF.\n\nSetup AWS CloudFront integrated with AWS WAF in front of API Gateway.\nEnabling API Keys in API Gateway so that direct access to API Gateway without the API Key is not possible. You can create use an API Key in Origin Headers in CloudFront so that for requests forwarded to API Gateway uses this API Key in headers.\n\n\nCan I set a costs limit and shut the API off, in case of too high\n bills?\n\nYou can enable throttling so that very high peaks of traffic will be throttled for API Gateway reducing Cost Peaks (The negative side of this is that it affects the quality of service for real users). However, if you need to implement shutting down the API based on request rate, it's not directly supported with API Gateway. You need to do a custom implementation for this. \n\nAre intentionally API attacks common?\n\nI haven't seen much attacks for the APIs I deployed so far. Having said that it can be very subjective based on the nature of your business & etc. However, I have seen Bot based invocations more often. When you are using AWS WAF you can implement a Honey Pot easily to prevent these. Example code is available in AWS Labs in Github for Bad Bot Blocking to connect with WAF.\n", "\nCan unauthorized attacks to Amazon API Gateway let the costs explode?\n\nIf you enable any type of authorization at the API Gateway layer (IAM, Custom, Cognito), API Gateway will NOT charge you for unauthorized requests. However, Lambda functions backing Custom Authorizers are still billed as normal Lambda invocations.\nThe same applies to throttled requests, if you have rate rate limiting enabled on your API.\n", "I wanted to leave a comment here as this is one of the top results when you search for this topic and this has an official source from AWS. I spoke to AWS Support and they confirmed that you do not pay for unauthed requests at the APIGW level, however you do pay for the authorizer lambda running the verification code:\n\nTo clarify:\n\nRequest comes in to API Gateway\nRequest is forwarded to Lambda custom authorizer\nRequest fails custom authorizer <-- You are charged for this lambda invocation\nAPI Gateway rejects this request <-- You are not charged for this at the APIGW level\n\n" ]
[ 5, 3, 1 ]
[]
[]
[ "amazon_web_services", "aws_api_gateway", "aws_lambda" ]
stackoverflow_0046502462_amazon_web_services_aws_api_gateway_aws_lambda.txt
Q: how to extract the last id from URL I have a column of urls where I want to extract the last element of each url, which represents an ID I am looking for. I managed to use 'basename' to extract all the text after the last slash. Here is an example of the url text that I extracted (screenshot not shown). I want to extract that last number. I used this script, but it seems that I extract just the first one and copy it into the other rows.
library(stringr)
library(dplyr)

df = read.csv('~/Downloads/urls.csv')

df = df %>% 
  mutate(temp = str_split(string = url,pattern = '-')) %>% 
  mutate(id = temp[[1]][length(temp[[1]])])

I used the code above and I am expecting to get an id variable with these values (screenshot not shown).

A: Assuming this column is in a dataframe
# A tibble: 2 x 1
  url                                   
  <chr>                                 
1 essential-back-pain-stretches-3120312
2 what-is-myotome-296992                

Extracting IDs with regex
df %>% 
  mutate(id = str_extract(url, pattern = "([^-]+)$"))

# A tibble: 2 x 2
  url                                    id     
  <chr>                                  <chr>  
1 essential-back-pain-stretches-3120312 3120312
2 what-is-myotome-296992                 296992 
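As an aside, a base-R equivalent of the answer's regex may be worth noting: sub() with a greedy pattern keeps everything after the last hyphen, so no stringr dependency is needed (the column name matches the question):

library(dplyr)

df <- data.frame(url = c('essential-back-pain-stretches-3120312',
                         'what-is-myotome-296992'))

# Greedy .* consumes up to the last hyphen, leaving only the trailing id
df <- df %>% mutate(id = sub('.*-', '', url))
df
#                                     url      id
# 1 essential-back-pain-stretches-3120312 3120312
# 2                what-is-myotome-296992  296992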
how to extract the last id from URL
I have a column of urls where I want to extract the last element of each url, which represents an ID I am looking for. I managed to use 'basename' to extract all the text after the last slash. Here is an example of the url text that I extracted (screenshot not shown). I want to extract that last number. I used this script, but it seems that I extract just the first one and copy it into the other rows.
library(stringr)
library(dplyr)

df = read.csv('~/Downloads/urls.csv')

df = df %>% 
  mutate(temp = str_split(string = url,pattern = '-')) %>% 
  mutate(id = temp[[1]][length(temp[[1]])])

I used the code above and I am expecting to get an id variable with these values (screenshot not shown).
[ "Assuming this column in a dataframe\n# A tibble: 2 x 1\n url \n <chr> \n1 essential-back-pain-stretches-3120312\n2 what-is-myotome-296992 \n\nExtracting IDs with regex\ndf %>% \n mutate(id = str_extract(url, pattern = \"([^-]+)$\"))\n\n# A tibble: 2 x 2\n url id \n <chr> <chr> \n1 essential-back-pain-stretches-3120312 3120312\n2 what-is-myotome-296992 296992 \n\n" ]
[ 0 ]
[]
[]
[ "dplyr", "r", "stringr" ]
stackoverflow_0074674274_dplyr_r_stringr.txt
Q: How can I create an outdoor positioning map? I saw an outdoor positioning map application that does not use Google Maps; in addition, you can also create routes for navigation. I want to know how this is possible, and how I can implement it with React. I tried three.js because I know the map is made with canvas, but I'm not sure that is the right way. As far as I have learned, the map area was mapped by drone, but what comes after that? How can the map look like that and still be based on GPS, so you can make routes and navigation? screenshot of the app

A: Your options are services like OpenStreetMap, Mapbox or using three.js. These services provide APIs that allow you to integrate their map data into your application, allowing you to create custom map views and add features such as routing and navigation.
Another approach is to use a 3D rendering engine, such as three.js, to create a custom map view using your own map data. This can be useful if you have access to detailed map data that is not available through a mapping service, or if you want to create a unique visual style for your map. In this case, you would need to create your own map tiles and terrain data, which can be generated from the map data using a tool like the Mapbox CLI. Then you can use React and three.js to create a custom map view that displays the data and allows users to interact with it. Use three.js to render the terrain and map features, and use React to handle user input and display additional information or controls.
To implement routing and navigation, you can use a routing service that provides access to routing data and algorithms. OpenRouteService API allows you to calculate routes between two or more points on a map, and provides various options for routing parameters such as the mode of transportation and the route type. You can then use this data to display the route on your map and provide navigation instructions to the user.
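To make the routing suggestion concrete, here is a rough JavaScript sketch of calling the OpenRouteService directions endpoint mentioned in the answer; the API key is a placeholder, and the endpoint and parameters follow the ORS quick-start as I recall it, so verify them against the current API reference:

const API_KEY = 'YOUR_ORS_API_KEY'; // placeholder

// start and end are [lon, lat] pairs; the response is GeoJSON
async function getRoute(start, end) {
  const url = 'https://api.openrouteservice.org/v2/directions/driving-car'
    + `?api_key=${API_KEY}&start=${start.join(',')}&end=${end.join(',')}`;
  const response = await fetch(url);
  if (!response.ok) throw new Error(`ORS request failed: ${response.status}`);
  const data = await response.json();
  return data.features[0].geometry.coordinates; // polyline to draw on the map
}

getRoute([8.681495, 49.41461], [8.687872, 49.420318]).then(console.log);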
How can I create an outdoor positioning map?
I saw an outdoor positioning map application that does not use Google Maps; in addition, you can also create routes for navigation. I want to know how this is possible, and how I can implement it with React. I tried three.js because I know the map is made with canvas, but I'm not sure that is the right way. As far as I have learned, the map area was mapped by drone, but what comes after that? How can the map look like that and still be based on GPS, so you can make routes and navigation? screenshot of the app
[ "Your options are services like OpenStreetMap, Mapbox or using three.js. These services provide APIs that allow you to integrate their map data into your application, allowing you to create custom map views and add features such as routing and navigation.\nAnother approach is to use a 3D rendering engine, such as three.js, to create a custom map view using your own map data. This can be useful if you have access to detailed map data that is not available through a mapping service, or if you want to create a unique visual style for your map. In this case, you would need to create your own map tiles and terrain data, which can be generated from the map data using a tool like the Mapbox CLI. Then you can use React and three.js to create a custom map view that displays the data and allows users to interact with it. Use three.js to render the terrain and map features, and use React to handle user input and display additional information or controls.\nTo implement routing and navigation, you can use a routing service that provides access to routing data and algorithms. OpenRouteService API allows you to calculate routes between two or more points on a map, and provides various options for routing parameters such as the mode of transportation and the route type. You can then use this data to display the route on your map and provide navigation instructions to the user.\n" ]
[ 0 ]
[]
[]
[ "google_maps", "maps", "mobile_application", "react_native", "reactjs" ]
stackoverflow_0074673378_google_maps_maps_mobile_application_react_native_reactjs.txt
Q: How to make sequelize stop creating automatic tables (SEQUELIZE) I found out that Sequelize is creating tables automatically according to the definition of my model names. I have the following code:
const DataTypes = require("sequelize");
const sequelize = require("../mysql.js");

const Approver = sequelize.define("approver", {
  subordinate_id: {
    type: DataTypes.INTEGER,
    allowNull: false,
    references: {
      model: "user",
      key: "id",
    },
  },
  leader_id: {
    type: DataTypes.INTEGER,
    allowNull: false,
    references: {
      model: "user",
      key: "id",
    },
  },
  main_leader_id: {
    type: DataTypes.INTEGER,
    allowNull: false,
    references: {
      model: "user",
      key: "id",
    },
  },
});

const connect = async () => {
  await Approver.sync();
};

connect();

module.exports = Approver;

Every time I run the local server, I get the following message in the terminal:
CREATE TABLE IF NOT EXISTS `approvers` (`id` INTEGER NOT NULL auto_increment , `subordinate_id` INTEGER NOT NULL, `leader_id` INTEGER NOT NULL, `main_leader_id` INTEGER NOT NULL, `createdAt` DATETIME NOT NULL, `updatedAt` DATETIME NOT NULL, PRIMARY KEY (`id`), FOREIGN KEY (`subordinate_id`) REFERENCES `user` (`id`), FOREIGN KEY (`leader_id`) REFERENCES `user` (`id`), FOREIGN KEY (`main_leader_id`) REFERENCES `user` (`id`)) ENGINE=InnoDB;

and I found out that the table creation is driven by the model's define, because when I put other names in the model, the created table matched the name I had put in the code.
I don't know why the created table is in the plural, "approvers", when in the model I put the name "approver"; apparently other names get pluralized in the same way.
The big problem is that I have migrations, and when I run them the table "approver" is created in my database, but when I run the command to start the local server, Sequelize creates one more table. So I end up with 2 tables in the database: "approver" from the migration and "approvers" from the model.
I already tried giving both the migration and the model the plural name, but this causes an error when I try to use the model: Sequelize shows a missing-field error when I try to create or update data, saying that the value "updatedAt" is missing. This only happens because the automatically generated table creates this field. The strangest thing is that the table does not show up in DBeaver, yet Sequelize still complains about the missing field, even with the model and the migration using the plural name...
I would like the table not to be created in the plural. Does anyone know how to solve this bug?

A: You have two problems here:

An auto-creation of a table according to a model definition
Pluralization of a table name while auto-creating it

Solutions:

Just remove the sync call or the whole piece of the following code:

const connect = async () => {
  await Approver.sync();
};

connect();

If you use migrations to create the whole structure and to make modifications to it, then you don't need to use the sync method of a model or a Sequelize instance.

Pluralization of table names can be turned off by indicating 'freezeTableName: true' in the model's options (see Enforcing table name to be equal to a model name in the official documentation).
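Putting the answer together, a sketch of the model with both fixes applied (no sync call, pluralization frozen); the field definitions are shortened for brevity, and the timestamps option is only needed if the migration does not create createdAt/updatedAt:

const { DataTypes } = require('sequelize');
const sequelize = require('../mysql.js');

const Approver = sequelize.define(
  'approver',
  {
    subordinate_id: { type: DataTypes.INTEGER, allowNull: false },
    leader_id: { type: DataTypes.INTEGER, allowNull: false },
    main_leader_id: { type: DataTypes.INTEGER, allowNull: false },
  },
  {
    freezeTableName: true, // table stays "approver", no pluralization
    // timestamps: false,  // uncomment if the migration has no createdAt/updatedAt
  }
);

// No Approver.sync() here: the migrations own the schema.
module.exports = Approver;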
How to make sequelize stop creating automatic tables (SEQUELIZE)
I found out that Sequelize is creating tables automatically according to the definition of my model names. I have the following code:
const DataTypes = require("sequelize");
const sequelize = require("../mysql.js");

const Approver = sequelize.define("approver", {
  subordinate_id: {
    type: DataTypes.INTEGER,
    allowNull: false,
    references: {
      model: "user",
      key: "id",
    },
  },
  leader_id: {
    type: DataTypes.INTEGER,
    allowNull: false,
    references: {
      model: "user",
      key: "id",
    },
  },
  main_leader_id: {
    type: DataTypes.INTEGER,
    allowNull: false,
    references: {
      model: "user",
      key: "id",
    },
  },
});

const connect = async () => {
  await Approver.sync();
};

connect();

module.exports = Approver;

Every time I run the local server, I get the following message in the terminal:
CREATE TABLE IF NOT EXISTS `approvers` (`id` INTEGER NOT NULL auto_increment , `subordinate_id` INTEGER NOT NULL, `leader_id` INTEGER NOT NULL, `main_leader_id` INTEGER NOT NULL, `createdAt` DATETIME NOT NULL, `updatedAt` DATETIME NOT NULL, PRIMARY KEY (`id`), FOREIGN KEY (`subordinate_id`) REFERENCES `user` (`id`), FOREIGN KEY (`leader_id`) REFERENCES `user` (`id`), FOREIGN KEY (`main_leader_id`) REFERENCES `user` (`id`)) ENGINE=InnoDB;

and I found out that the table creation is driven by the model's define, because when I put other names in the model, the created table matched the name I had put in the code.
I don't know why the created table is in the plural, "approvers", when in the model I put the name "approver"; apparently other names get pluralized in the same way.
The big problem is that I have migrations, and when I run them the table "approver" is created in my database, but when I run the command to start the local server, Sequelize creates one more table. So I end up with 2 tables in the database: "approver" from the migration and "approvers" from the model.
I already tried giving both the migration and the model the plural name, but this causes an error when I try to use the model: Sequelize shows a missing-field error when I try to create or update data, saying that the value "updatedAt" is missing. This only happens because the automatically generated table creates this field. The strangest thing is that the table does not show up in DBeaver, yet Sequelize still complains about the missing field, even with the model and the migration using the plural name...
I would like the table not to be created in the plural. Does anyone know how to solve this bug?
[ "You have two problems here:\n\nAn auto-creation of a table according to a model definition\nPluralization of a table name while auto-creating it\n\nSolutions:\n\nJust remove sync call or the whole piece of the following code:\n\nconst connect = async () => {\n await Approver.sync();\n};\n\nconnect();\n\nIf you use migrations to create the whole structure and to make modifications to it then you don't need to use sync method of a model or a Sequelize instance.\n\nPluralization of table names can be turned off by indicating 'freezeTableName: true' in the model's options (see Enforcing table name to be equal to a model name in the official documentation).\n\n" ]
[ 0 ]
[]
[]
[ "database", "mysql", "node.js", "orm", "sequelize.js" ]
stackoverflow_0074672880_database_mysql_node.js_orm_sequelize.js.txt
Q: Why does TypeScript complain about missing property in generic inheritance? Consider the following test case:
interface BaseFoo {}

interface FooAdapter {
    method<F extends BaseFoo>(foo:F):string;
}

interface ConcreteFoo extends BaseFoo {
    value:string;
}

class ConcreteFooAdapter implements FooAdapter {
    method(foo: ConcreteFoo): string {
        return foo.value;
    }
}

The only error is with the method signature, where TypeScript complains that:
Property 'value' is missing in type 'BaseFoo' but required in type 'ConcreteFoo'.
Why would value be present in BaseFoo since the generic F is supposed to extend it? But, more importantly, how to fix this so there is no error?
Edit
Here is an alternative solution I was trying to investigate, but with a similar failure:
interface BarAdapter {
    method<F>(bar:F):string;
}

type Bar = {
    value:string;
}

class ConcreteBarAdapter implements BarAdapter {
    method(bar:Bar):string {
        return bar.value;
    }
}

It complains that F is not assignable to type Bar, and I don't understand why.

A: If your only criterion is that the parameter should extend BaseFoo, and the return value should be a string, you may not need generics at all; this would be enough:
interface BaseFoo { }

interface FooAdapter {
    method(foo: BaseFoo): string;
}

interface ConcreteFoo extends BaseFoo {
    value: string;
}

class ConcreteFooAdapter implements FooAdapter {
    method(foo: ConcreteFoo): string {
        return foo.value;
    }
}

This provides typing as strong as in your attempt with generics. TypeScript constrains method in the implementor to extend method(foo: BaseFoo): string.
However, if you need to be able to use the adapters as implementors of a specific method signature, then you'd have to add the generic parameter on the interface, and then provide the type explicitly when implementing it:
interface BaseFoo { }

interface FooAdapter<F extends BaseFoo> {
    method(foo: F): string;
}

interface ConcreteFoo extends BaseFoo {
    value: string;
}

class ConcreteFooAdapter implements FooAdapter<ConcreteFoo> {
    method(foo: ConcreteFoo): string {
        return foo.value;
    }
}
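A short usage sketch of the interface-level generic from the answer may help show the difference: with method<F extends BaseFoo>(foo: F), the caller chooses F, so an implementation pinned to ConcreteFoo can never satisfy every possible F; moving the parameter to the interface lets the implementor choose it once:

interface BaseFoo {}
interface ConcreteFoo extends BaseFoo { value: string; }

interface FooAdapter<F extends BaseFoo> {
  method(foo: F): string;
}

class ConcreteFooAdapter implements FooAdapter<ConcreteFoo> {
  method(foo: ConcreteFoo): string {
    return foo.value;
  }
}

const adapter: FooAdapter<ConcreteFoo> = new ConcreteFooAdapter();
console.log(adapter.method({ value: 'hello' })); // "hello"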
Why does TypeScript complain about missing property in generic inheritance?
Consider the following test case:
interface BaseFoo {}

interface FooAdapter {
    method<F extends BaseFoo>(foo:F):string;
}

interface ConcreteFoo extends BaseFoo {
    value:string;
}

class ConcreteFooAdapter implements FooAdapter {
    method(foo: ConcreteFoo): string {
        return foo.value;
    }
}

The only error is with the method signature, where TypeScript complains that:
Property 'value' is missing in type 'BaseFoo' but required in type 'ConcreteFoo'.
Why would value be present in BaseFoo since the generic F is supposed to extend it? But, more importantly, how to fix this so there is no error?
Edit
Here is an alternative solution I was trying to investigate, but with a similar failure:
interface BarAdapter {
    method<F>(bar:F):string;
}

type Bar = {
    value:string;
}

class ConcreteBarAdapter implements BarAdapter {
    method(bar:Bar):string {
        return bar.value;
    }
}

It complains that F is not assignable to type Bar, and I don't understand why.
[ "If your only criteria is that the parameter should extend BaseFoo, and the return value should be a string, you may not need generics at all, this would be enough:\ninterface BaseFoo { }\n\ninterface FooAdapter {\n method(foo: BaseFoo): string;\n}\n\ninterface ConcreteFoo extends BaseFoo {\n value: string;\n}\n\nclass ConcreteFooAdapter implements FooAdapter {\n method(foo: ConcreteFoo): string {\n return foo.value;\n }\n}\n\nThis provide as strong typing as in your attempt with generics. TypeScript constraints method in the implementor to extend method(foo: BaseFoo): string.\nHowever, if you need to be able to use the adapters as implementors of a specific method signature, then you'd have to add the generic parameter on the interface, and then provide the type explicitly when implementing it:\ninterface BaseFoo { }\n\ninterface FooAdapter<F extends BaseFoo> {\n method(foo: F): string;\n}\n\n\ninterface ConcreteFoo extends BaseFoo {\n value: string;\n}\n\nclass ConcreteFooAdapter implements FooAdapter<ConcreteFoo> {\n method(foo: ConcreteFoo): string {\n return foo.value;\n }\n}\n\n" ]
[ 1 ]
[]
[]
[ "generics", "typescript" ]
stackoverflow_0074672954_generics_typescript.txt
Q: Spring Security in Spring Boot 3 I'm currently in the process of migrating our REST application from Spring Boot 2.7.5 to 3.0.0-RC2. I want everything to be secure apart from the Open API URL. In Spring Boot 2.7.5, we used to do this:
@Named
@EnableWebSecurity
public class WebSecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.authorizeRequests()
                .antMatchers("/openapi/openapi.yml").permitAll()
                .anyRequest().authenticated()
                .and()
                .httpBasic();
    }
}

and it worked fine. In Spring Boot 3, I had to change it to
@Configuration
@EnableWebSecurity
public class WebSecurityConfig {

    @Bean
    public SecurityFilterChain configure(HttpSecurity http) throws Exception {
        http.authorizeHttpRequests((requests) -> requests
                        .requestMatchers("/openapi/openapi.yml").permitAll()
                        .anyRequest()
                        .authenticated())
                .httpBasic();
        return http.build();
    }
}

since WebSecurityConfigurerAdapter has been removed. It's not working though. The Open API URL is also secured via basic authentication. Have I made a mistake when upgrading the code or is that possibly an issue in Spring Boot 3 RC 2?
Update
Since most of the new API was already available in 2.7.5, I've updated our code in our 2.7.5 code base to the following:
@Configuration
@EnableWebSecurity
public class WebSecurityConfig {

    @Bean
    public SecurityFilterChain configure(HttpSecurity http) throws Exception {
        http
                .csrf().disable()
                .authorizeHttpRequests((requests) -> requests
                        .antMatchers(OPTIONS).permitAll() // allow CORS option calls for Swagger UI
                        .antMatchers("/openapi/openapi.yml").permitAll()
                        .anyRequest().authenticated())
                .httpBasic();
        return http.build();
    }
}

In our branch for 3.0.0-RC2, the code is now as follows:
@Configuration
@EnableWebSecurity
public class WebSecurityConfig {

    @Bean
    public SecurityFilterChain configure(HttpSecurity http) throws Exception {
        http
                .csrf().disable()
                .authorizeHttpRequests((requests) -> requests
                        .requestMatchers(OPTIONS).permitAll() // allow CORS option calls for Swagger UI
                        .requestMatchers("/openapi/openapi.yml").permitAll()
                        .anyRequest().authenticated())
                .httpBasic();
        return http.build();
    }
}

As you can see, the only difference is that I call requestMatchers instead of antMatchers. This method seems to have been renamed. The method antMatchers is no longer available.
The end effect is still the same though. On our branch for 3.0.0-RC2, Spring Boot asks for basic authentication for the OpenAPI URL. Still works fine on 2.7.5.

A: The official documentation suggests an example which I have abridged here with your config:
http
    .authorizeExchange((exchanges) ->
        exchanges
            .pathMatchers("/openapi/openapi.yml").permitAll()
            .anyExchange().authenticated())
    .httpBasic();

return http.build();

You could try this, since it changes the "request" for the "exchange" wording, in line with the migration to declarative clients (@PostExchange vs. @PostMapping) I suppose. Hope it helps.

A: Use
http.securityMatcher("<patterns>")...

to specify authentication for endpoints.
authorizeHttpRequests((requests) -> requests
    .requestMatchers("<pattern>")

only works for authorization; if you don't set securityMatcher, SecurityFilterChain by default gets any request for authentication. And any request will be authenticated by an authentication provider.
In your case, you can define two security filter chains: one for public endpoints, another for secured ones. And give them proper order:
@Bean
@Order(1)
public SecurityFilterChain publicFilterChain(HttpSecurity http) throws Exception {
    http.securityMatcher(OPTIONS, "/openapi/openapi.yml").csrf().disable()
        .authorizeHttpRequests((requests) -> requests
            .anyRequest().permitAll() // allow CORS option calls for Swagger UI
        );
    return http.build();
}

@Bean
@Order(2)
public SecurityFilterChain securedFilterChain(HttpSecurity http) throws Exception {
    http.securityMatcher("/**")
        .csrf().disable()
        .authorizeHttpRequests((requests) -> requests.anyRequest().authenticated())
        .httpBasic();
    return http.build();
}

A: Author: https://github.com/wilkinsona
@Bean
public SecurityFilterChain configure(HttpSecurity http) throws Exception {
    http
        .authorizeHttpRequests((requests) -> requests
            .requestMatchers(new AntPathRequestMatcher("/openapi/openapi.yml")).permitAll()
            .anyRequest().authenticated())
        .httpBasic();
    return http.build();
}

Source: https://github.com/spring-projects/spring-boot/issues/33357#issuecomment-1327301183
I recommend you use Spring Boot 3.0.0 (GA) right now, not the RC version.

A: This seems to be a bug in Spring Boot 3. I've raised an issue.
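For completeness, a sketch that folds the AntPathRequestMatcher workaround into the Spring Security 6 lambda style (imports assumed: org.springframework.security.web.util.matcher.AntPathRequestMatcher and org.springframework.security.config.Customizer); treat it as a starting point, not the one definitive configuration:

@Configuration
@EnableWebSecurity
public class WebSecurityConfig {

    @Bean
    public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http
            .csrf(csrf -> csrf.disable())
            .authorizeHttpRequests(requests -> requests
                // the OpenAPI spec stays public, everything else needs auth
                .requestMatchers(new AntPathRequestMatcher("/openapi/openapi.yml")).permitAll()
                .anyRequest().authenticated())
            .httpBasic(Customizer.withDefaults());
        return http.build();
    }
}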
Spring Security in Spring Boot 3
I'm currently in the process of migrating our REST application from Spring Boot 2.7.5 to 3.0.0-RC2. I want everything to be secure apart from the Open API URL. In Spring Boot 2.7.5, we used to do this: @Named @EnableWebSecurity public class WebSecurityConfig extends WebSecurityConfigurerAdapter { @Override protected void configure(HttpSecurity http) throws Exception { http.authorizeRequests() .antMatchers("/openapi/openapi.yml").permitAll() .anyRequest().authenticated() .and() .httpBasic(); } } and it worked fine. In Spring Boot 3, I had to change it to @Configuration @EnableWebSecurity public class WebSecurityConfig { @Bean public SecurityFilterChain configure(HttpSecurity http) throws Exception { http.authorizeHttpRequests((requests) -> requests .requestMatchers("/openapi/openapi.yml").permitAll() .anyRequest() .authenticated()) .httpBasic(); return http.build(); } } since WebSecurityConfigurerAdapter has been removed. It's not working though. The Open API URL is also secured via basic authentication. Have I made a mistake when upgrading the code or is that possibly an issue in Spring Boot 3 RC 2? Update Since most of the new API was already available in 2.7.5, I've updated our code in our 2.7.5 code base to the following: @Configuration @EnableWebSecurity public class WebSecurityConfig { @Bean public SecurityFilterChain configure(HttpSecurity http) throws Exception { http .csrf().disable() .authorizeHttpRequests((requests) -> requests .antMatchers(OPTIONS).permitAll() // allow CORS option calls for Swagger UI .antMatchers("/openapi/openapi.yml").permitAll() .anyRequest().authenticated()) .httpBasic(); return http.build(); } } In our branch for 3.0.0-RC2, the code is now as follows: @Configuration @EnableWebSecurity public class WebSecurityConfig { @Bean public SecurityFilterChain configure(HttpSecurity http) throws Exception { http .csrf().disable() .authorizeHttpRequests((requests) -> requests .requestMatchers(OPTIONS).permitAll() // allow CORS option calls for Swagger UI .requestMatchers("/openapi/openapi.yml").permitAll() .anyRequest().authenticated()) .httpBasic(); return http.build(); } } As you can see, the only difference is that I call requestMatchers instead of antMatchers. This method seems to have been renamed. The method antMatchers is no longer available. The end effect is still the same though. On our branch for 3.0.0-RC2, Spring Boot asks for basic authentication for the OpenAPI URL. Still works fine on 2.7.5.
[ "The official documentation suggests an example which I have abridged here with your config:\nhttp\n .authorizeExchange((exchanges) ->\n exchanges\n .pathMatchers(\"/openapi/openapi.yml\").permitAll()\n .anyExchange().authenticated())\n .httpBasic();\n\nreturn http.build();\n\nYou could try this, since it changes the \"request\" for the \"exchange\" wording, in line with the migration to declarative clients (@PostExchange vs. @PostMapping) I suppose. Hope it helps.\n", "Use\n http.securityMatcher(\"<patterns>\")...\n\nto specify authentication for endpoints.\nauthorizeHttpRequests((requests) -> requests\n .requestMatchers(\"<pattern>\")\n\nonly works for authorization, if you don't set securityMatcher , SecurityFilterChain by default gets any request for authentication. And any request will be authenticated by an authentication provider.\nIn your case, you can define two security filter, chains: one for public endpoitns, another for secured. And give them proper order:\n@Bean\n@Order(1)\n public SecurityFilterChain configure(HttpSecurity http) throws Exception {\n http.securityMatcher(OPTIONS,\"/openapi/openapi.yml\").csrf().disable()\n .authorizeHttpRequests((requests) -> requests\n .anyRequest().permitAll() // allow CORS option calls for Swagger UI\n);\n return http.build();\n }\n\n@Bean\nOrder(2)\n public SecurityFilterChain configure(HttpSecurity http) throws Exception {\n http.securityMatcher(\"/**\")\n .csrf().disable()\n .authorizeHttpRequests((requests) -> requests.anyRequest().authenticated())\n .httpBasic();\n return http.build();\n }\n\n", "Author: https://github.com/wilkinsona\n @Bean\n public SecurityFilterChain configure(HttpSecurity http) throws Exception {\n http\n .authorizeHttpRequests((requests) -> requests\n .requestMatchers(new AntPathRequestMatcher(\"/openapi/openapi.yml\")).permitAll()\n .anyRequest().authenticated())\n .httpBasic();\n return http.build();\n }\n\nSource: https://github.com/spring-projects/spring-boot/issues/33357#issuecomment-1327301183\nI recommend you use Spring Boot 3.0.0 (GA) right now, not RC version.\n", "This seems to be a bug in Spring Boot 3. I've raised an issue.\n" ]
[ 0, 0, 0, -1 ]
[]
[]
[ "spring_boot", "spring_boot_3", "spring_security" ]
stackoverflow_0074447778_spring_boot_spring_boot_3_spring_security.txt
Q: Verify Key-Store Generated Signature at ethers I have created a key pair in the Android Keystore. Now I have a public key (in DER format) and a generated signature (in DER format). Now I am trying to verify the same with ethers, but I am unable to (the public key recovered from the signature does not match).
I have tried getting r and s from the DER signature like this:
DER Sign (0x30 size 20/21 r size 20/21 v) // strip zeros if 21

and the uncompressed public key from the DER-encoded public key like this:
30 59 # Sequence length 0x59 - 91 bytes long
 30 13 # Sequence length 0x13 - 21 bytes long
 06 07 2a8648ce3d0201 # Object ID - 7 bytes long - 1.2.840.10045.2.1 (ECC)
 06 08 2a8648ce3d030107 # Object ID - 8 bytes long - 1.2.840.10045.3.1.7 (ECDSA P256)
 03 42 # Bit stream - 0x42 (66 bytes long)
 0004 # Identifies public key
 2927b10512bae3eddcfe467828128bad2903269919f7086069c8c4df6c732838 # Identifies public key x co-ordinate
 c7787964eaac00e5921fb1498a60f4606766b3d9685001558d1a974e7341513e # Identifies public key y co-ordinate

Now at ethers, to verify:
My signature: r||s||00 or r||s||01
My public key: 0x04 || x coord || y coord

But in ethers the public key recovered from the given signature and data does not match the decoded public key. So where am I going wrong?

A: The curve used in Android was secp256r1, aka P-256, but ethers uses secp256k1.
In ethers, switching the curve to secp256r1 will do the job.
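Since ethers targets secp256k1, one way to apply the answer in Node is the elliptic package (which ethers v5 itself uses internally for secp256k1); a sketch, where the hex strings are placeholders and the digest must match what the Keystore signed with (typically SHA-256 for SHA256withECDSA):

const { createHash } = require('crypto');
const EC = require('elliptic').ec;

const ec = new EC('p256'); // secp256r1, the curve the Android Keystore used

// Placeholders: uncompressed key is 04 || X || Y; r and s come from the DER signature
const publicKeyHex = '04' + '<x-coordinate-hex>' + '<y-coordinate-hex>';
const signature = { r: '<r-hex>', s: '<s-hex>' };

const msgHash = createHash('sha256').update('signed message').digest();

const key = ec.keyFromPublic(publicKeyHex, 'hex');
console.log(key.verify(msgHash, signature)); // true if the signature matches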
Verify Key-Store Generated Signature at ethers
I have created a key pair in the Android Keystore. Now I have a public key (in DER format) and a generated signature (in DER format). Now I am trying to verify the same with ethers, but I am unable to (the public key recovered from the signature does not match).
I have tried getting r and s from the DER signature like this:
DER Sign (0x30 size 20/21 r size 20/21 v) // strip zeros if 21

and the uncompressed public key from the DER-encoded public key like this:
30 59 # Sequence length 0x59 - 91 bytes long
 30 13 # Sequence length 0x13 - 21 bytes long
 06 07 2a8648ce3d0201 # Object ID - 7 bytes long - 1.2.840.10045.2.1 (ECC)
 06 08 2a8648ce3d030107 # Object ID - 8 bytes long - 1.2.840.10045.3.1.7 (ECDSA P256)
 03 42 # Bit stream - 0x42 (66 bytes long)
 0004 # Identifies public key
 2927b10512bae3eddcfe467828128bad2903269919f7086069c8c4df6c732838 # Identifies public key x co-ordinate
 c7787964eaac00e5921fb1498a60f4606766b3d9685001558d1a974e7341513e # Identifies public key y co-ordinate

Now at ethers, to verify:
My signature: r||s||00 or r||s||01
My public key: 0x04 || x coord || y coord

But in ethers the public key recovered from the given signature and data does not match the decoded public key. So where am I going wrong?
[ "The curve used in android was secp256r1 aka p256 but the ethers uses secp256k1.\nIn ethers changing the curve to secp256r1 will do the work.\n" ]
[ 0 ]
[]
[]
[ "cryptography", "ethers.js", "keystore", "signature" ]
stackoverflow_0074624628_cryptography_ethers.js_keystore_signature.txt
Q: Convert bytes to a string I captured the standard output of an external program into a bytes object: >>> from subprocess import * >>> command_stdout = Popen(['ls', '-l'], stdout=PIPE).communicate()[0] >>> >>> command_stdout b'total 0\n-rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file1\n-rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file2\n' I want to convert that to a normal Python string, so that I can print it like this: >>> print(command_stdout) -rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file1 -rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file2 How do I convert the bytes object to a str with Python 3? A: Decode the bytes object to produce a string: >>> b"abcde".decode("utf-8") 'abcde' The above example assumes that the bytes object is in UTF-8, because it is a common encoding. However, you should use the encoding your data is actually in! A: Decode the byte string and turn it in to a character (Unicode) string. Python 3: encoding = 'utf-8' b'hello'.decode(encoding) or str(b'hello', encoding) Python 2: encoding = 'utf-8' 'hello'.decode(encoding) or unicode('hello', encoding) A: This joins together a list of bytes into a string: >>> bytes_data = [112, 52, 52] >>> "".join(map(chr, bytes_data)) 'p44' A: If you don't know the encoding, then to read binary input into string in Python 3 and Python 2 compatible way, use the ancient MS-DOS CP437 encoding: PY3K = sys.version_info >= (3, 0) lines = [] for line in stream: if not PY3K: lines.append(line) else: lines.append(line.decode('cp437')) Because encoding is unknown, expect non-English symbols to translate to characters of cp437 (English characters are not translated, because they match in most single byte encodings and UTF-8). Decoding arbitrary binary input to UTF-8 is unsafe, because you may get this: >>> b'\x00\x01\xffsd'.decode('utf-8') Traceback (most recent call last): File "<stdin>", line 1, in <module> UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 2: invalid start byte The same applies to latin-1, which was popular (the default?) for Python 2. See the missing points in Codepage Layout - it is where Python chokes with infamous ordinal not in range. UPDATE 20150604: There are rumors that Python 3 has the surrogateescape error strategy for encoding stuff into binary data without data loss and crashes, but it needs conversion tests, [binary] -> [str] -> [binary], to validate both performance and reliability. UPDATE 20170116: Thanks to comment by Nearoo - there is also a possibility to slash escape all unknown bytes with backslashreplace error handler. That works only for Python 3, so even with this workaround you will still get inconsistent output from different Python versions: PY3K = sys.version_info >= (3, 0) lines = [] for line in stream: if not PY3K: lines.append(line) else: lines.append(line.decode('utf-8', 'backslashreplace')) See Python’s Unicode Support for details. UPDATE 20170119: I decided to implement slash escaping decode that works for both Python 2 and Python 3. It should be slower than the cp437 solution, but it should produce identical results on every Python version. # --- preparation import codecs def slashescape(err): """ codecs error handler. err is UnicodeDecode instance. 
return a tuple with a replacement for the unencodable part of the input and a position where encoding should continue""" #print err, dir(err), err.start, err.end, err.object[:err.start] thebyte = err.object[err.start:err.end] repl = u'\\x'+hex(ord(thebyte))[2:] return (repl, err.end) codecs.register_error('slashescape', slashescape) # --- processing stream = [b'\x80abc'] lines = [] for line in stream: lines.append(line.decode('utf-8', 'slashescape')) A: In Python 3, the default encoding is "utf-8", so you can directly use: b'hello'.decode() which is equivalent to b'hello'.decode(encoding="utf-8") On the other hand, in Python 2, encoding defaults to the default string encoding. Thus, you should use: b'hello'.decode(encoding) where encoding is the encoding you want. Note: support for keyword arguments was added in Python 2.7. A: I think you actually want this: >>> from subprocess import * >>> command_stdout = Popen(['ls', '-l'], stdout=PIPE).communicate()[0] >>> command_text = command_stdout.decode(encoding='windows-1252') Aaron's answer was correct, except that you need to know which encoding to use. And I believe that Windows uses 'windows-1252'. It will only matter if you have some unusual (non-ASCII) characters in your content, but then it will make a difference. By the way, the fact that it does matter is the reason that Python moved to using two different types for binary and text data: it can't convert magically between them, because it doesn't know the encoding unless you tell it! The only way YOU would know is to read the Windows documentation (or read it here). A: Since this question is actually asking about subprocess output, you have more direct approaches available. The most modern would be using subprocess.check_output and passing text=True (Python 3.7+) to automatically decode stdout using the system default coding: text = subprocess.check_output(["ls", "-l"], text=True) For Python 3.6, Popen accepts an encoding keyword: >>> from subprocess import Popen, PIPE >>> text = Popen(['ls', '-l'], stdout=PIPE, encoding='utf-8').communicate()[0] >>> type(text) str >>> print(text) total 0 -rw-r--r-- 1 wim badger 0 May 31 12:45 some_file.txt The general answer to the question in the title, if you're not dealing with subprocess output, is to decode bytes to text: >>> b'abcde'.decode() 'abcde' With no argument, sys.getdefaultencoding() will be used. If your data is not sys.getdefaultencoding(), then you must specify the encoding explicitly in the decode call: >>> b'caf\xe9'.decode('cp1250') 'café' A: Set universal_newlines to True, i.e. command_stdout = Popen(['ls', '-l'], stdout=PIPE, universal_newlines=True).communicate()[0] A: To interpret a byte sequence as a text, you have to know the corresponding character encoding: unicode_text = bytestring.decode(character_encoding) Example: >>> b'\xc2\xb5'.decode('utf-8') 'µ' ls command may produce output that can't be interpreted as text. File names on Unix may be any sequence of bytes except slash b'/' and zero b'\0': >>> open(bytes(range(0x100)).translate(None, b'\0/'), 'w').close() Trying to decode such byte soup using utf-8 encoding raises UnicodeDecodeError. It can be worse. The decoding may fail silently and produce mojibake if you use a wrong incompatible encoding: >>> '—'.encode('utf-8').decode('cp1252') '—' The data is corrupted but your program remains unaware that a failure has occurred. In general, what character encoding to use is not embedded in the byte sequence itself. You have to communicate this info out-of-band. 
Some outcomes are more likely than others and therefore chardet module exists that can guess the character encoding. A single Python script may use multiple character encodings in different places. ls output can be converted to a Python string using os.fsdecode() function that succeeds even for undecodable filenames (it uses sys.getfilesystemencoding() and surrogateescape error handler on Unix): import os import subprocess output = os.fsdecode(subprocess.check_output('ls')) To get the original bytes, you could use os.fsencode(). If you pass universal_newlines=True parameter then subprocess uses locale.getpreferredencoding(False) to decode bytes e.g., it can be cp1252 on Windows. To decode the byte stream on-the-fly, io.TextIOWrapper() could be used: example. Different commands may use different character encodings for their output e.g., dir internal command (cmd) may use cp437. To decode its output, you could pass the encoding explicitly (Python 3.6+): output = subprocess.check_output('dir', shell=True, encoding='cp437') The filenames may differ from os.listdir() (which uses Windows Unicode API) e.g., '\xb6' can be substituted with '\x14'—Python's cp437 codec maps b'\x14' to control character U+0014 instead of U+00B6 (¶). To support filenames with arbitrary Unicode characters, see Decode PowerShell output possibly containing non-ASCII Unicode characters into a Python string A: While @Aaron Maenpaa's answer just works, a user recently asked: Is there any more simply way? 'fhand.read().decode("ASCII")' [...] It's so long! You can use: command_stdout.decode() decode() has a standard argument: codecs.decode(obj, encoding='utf-8', errors='strict') A: If you should get the following by trying decode(): AttributeError: 'str' object has no attribute 'decode' You can also specify the encoding type straight in a cast: >>> my_byte_str b'Hello World' >>> str(my_byte_str, 'utf-8') 'Hello World' A: If you have had this error: utf-8 codec can't decode byte 0x8a, then it is better to use the following code to convert bytes to a string: bytes = b"abcdefg" string = bytes.decode("utf-8", "ignore") A: Bytes m=b'This is bytes' Converting to string Method 1 m.decode("utf-8") or m.decode() Method 2 import codecs codecs.decode(m,encoding="utf-8") or import codecs codecs.decode(m) Method 3 str(m,encoding="utf-8") or str(m)[2:-1] Result 'This is bytes' A: For Python 3, this is a much safer and Pythonic approach to convert from byte to string: def byte_to_str(bytes_or_str): if isinstance(bytes_or_str, bytes): # Check if it's in bytes print(bytes_or_str.decode('utf-8')) else: print("Object not of byte type") byte_to_str(b'total 0\n-rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file1\n-rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file2\n') Output: total 0 -rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file1 -rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file2 A: When working with data from Windows systems (with \r\n line endings), my answer is String = Bytes.decode("utf-8").replace("\r\n", "\n") Why? Try this with a multiline Input.txt: Bytes = open("Input.txt", "rb").read() String = Bytes.decode("utf-8") open("Output.txt", "w").write(String) All your line endings will be doubled (to \r\r\n), leading to extra empty lines. Python's text-read functions usually normalize line endings so that strings use only \n. If you receive binary data from a Windows system, Python does not have a chance to do that. 
Thus,
Bytes = open("Input.txt", "rb").read()
String = Bytes.decode("utf-8").replace("\r\n", "\n")
open("Output.txt", "w").write(String)

will replicate your original file.

A: We can decode the bytes object to produce a string using bytes.decode(encoding='utf-8', errors='strict'). For documentation see bytes.decode.
Python 3 example:
byte_value = b"abcde"
print("Initial value = {}".format(byte_value))
print("Initial value type = {}".format(type(byte_value)))
string_value = byte_value.decode("utf-8")
# utf-8 is used here because it is a very common encoding, but you need to use the encoding your data is actually in.
print("------------")
print("Converted value = {}".format(string_value))
print("Converted value type = {}".format(type(string_value)))

Output:
Initial value = b'abcde'
Initial value type = <class 'bytes'>
------------
Converted value = abcde
Converted value type = <class 'str'>

Note: In Python 3, by default the encoding type is UTF-8. So, <byte_string>.decode("utf-8") can also be written as <byte_string>.decode()

A: For your specific case of "run a shell command and get its output as text instead of bytes", on Python 3.7, you should use subprocess.run and pass in text=True (as well as capture_output=True to capture the output)
command_result = subprocess.run(["ls", "-l"], capture_output=True, text=True)
command_result.stdout  # is a `str` containing your program's stdout

text used to be called universal_newlines, and was changed (well, aliased) in Python 3.7. If you want to support Python versions before 3.7, pass in universal_newlines=True instead of text=True

A: From sys — System-specific parameters and functions:
To write or read binary data from/to the standard streams, use the underlying binary buffer. For example, to write bytes to stdout, use sys.stdout.buffer.write(b'abc').

A: Try this:
bytes.fromhex('c3a9').decode('utf-8')

A: Decode it with .decode(), passing 'utf-8' as the encoding argument.

A: def toString(string):
    try:
        # decode only works on bytes; a str input raises AttributeError
        return string.decode("utf-8")
    except (UnicodeDecodeError, AttributeError):
        return string

b = b'97.080.500'
s = '97.080.500'
print(toString(b))
print(toString(s))

A: If you want to convert any bytes, not just a string converted to bytes:
import base64
import json

with open("bytesfile", "rb") as infile:
    encoded = base64.b85encode(infile.read())

with open("bytesfile", "rb") as infile:
    as_json = json.dumps(list(infile.read()))

This is not very efficient, however. It will turn a 2 MB picture into 9 MB.

A: Try using this one; this function will drop everything that is not valid in the given character set (such as UTF-8) and return a clean string. It is tested for Python 3.6 and above.
def bin2str(text, encoding = 'utf-8'):
    """Converts a binary string to a Unicode string by removing non-decodable bytes.
    text: binary string to work on
    encoding: output encoding, utf-8 by default"""
    return text.decode(encoding, 'ignore')

Here, the function will take the binary string and decode it: it converts the binary data to characters using the given character set, the ignore argument drops all bytes that are not valid in that set, and it finally returns your desired string value.
If you are not sure about the encoding, use sys.getdefaultencoding() to get the default encoding of your device.

A: You can use the decode() method on the bytes object to convert it to a string:
command_stdout = command_stdout.decode()

Then you can print the string as usual:
print(command_stdout)

This will produce the following output:
-rw-rw-r-- 1 thomas thomas 0 Mar  3 07:03 file1
-rw-rw-r-- 1 thomas thomas 0 Mar  3 07:03 file2
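To round off the errors-handling discussion above, a small sketch contrasting the strategies the answers mention (strict, ignore, replace, backslashreplace) on the same undecodable input:

data = b'\x00\x01\xffsd'  # not valid UTF-8

for errors in ('ignore', 'replace', 'backslashreplace'):
    print(errors, '->', repr(data.decode('utf-8', errors)))

# strict (the default) raises instead of guessing:
try:
    data.decode('utf-8')
except UnicodeDecodeError as exc:
    print('strict ->', exc)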
Convert bytes to a string
I captured the standard output of an external program into a bytes object: >>> from subprocess import * >>> command_stdout = Popen(['ls', '-l'], stdout=PIPE).communicate()[0] >>> >>> command_stdout b'total 0\n-rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file1\n-rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file2\n' I want to convert that to a normal Python string, so that I can print it like this: >>> print(command_stdout) -rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file1 -rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file2 How do I convert the bytes object to a str with Python 3?
[ "Decode the bytes object to produce a string:\n>>> b\"abcde\".decode(\"utf-8\") \n'abcde'\n\nThe above example assumes that the bytes object is in UTF-8, because it is a common encoding. However, you should use the encoding your data is actually in!\n", "Decode the byte string and turn it in to a character (Unicode) string.\n\nPython 3:\nencoding = 'utf-8'\nb'hello'.decode(encoding)\n\nor\nstr(b'hello', encoding)\n\n\nPython 2:\nencoding = 'utf-8'\n'hello'.decode(encoding)\n\nor\nunicode('hello', encoding)\n\n", "This joins together a list of bytes into a string:\n>>> bytes_data = [112, 52, 52]\n>>> \"\".join(map(chr, bytes_data))\n'p44'\n\n", "If you don't know the encoding, then to read binary input into string in Python 3 and Python 2 compatible way, use the ancient MS-DOS CP437 encoding:\nPY3K = sys.version_info >= (3, 0)\n\nlines = []\nfor line in stream:\n if not PY3K:\n lines.append(line)\n else:\n lines.append(line.decode('cp437'))\n\nBecause encoding is unknown, expect non-English symbols to translate to characters of cp437 (English characters are not translated, because they match in most single byte encodings and UTF-8).\nDecoding arbitrary binary input to UTF-8 is unsafe, because you may get this:\n>>> b'\\x00\\x01\\xffsd'.decode('utf-8')\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 2: invalid\nstart byte\n\nThe same applies to latin-1, which was popular (the default?) for Python 2. See the missing points in Codepage Layout - it is where Python chokes with infamous ordinal not in range.\nUPDATE 20150604: There are rumors that Python 3 has the surrogateescape error strategy for encoding stuff into binary data without data loss and crashes, but it needs conversion tests, [binary] -> [str] -> [binary], to validate both performance and reliability.\nUPDATE 20170116: Thanks to comment by Nearoo - there is also a possibility to slash escape all unknown bytes with backslashreplace error handler. That works only for Python 3, so even with this workaround you will still get inconsistent output from different Python versions:\nPY3K = sys.version_info >= (3, 0)\n\nlines = []\nfor line in stream:\n if not PY3K:\n lines.append(line)\n else:\n lines.append(line.decode('utf-8', 'backslashreplace'))\n\nSee Python’s Unicode Support for details.\nUPDATE 20170119: I decided to implement slash escaping decode that works for both Python 2 and Python 3. It should be slower than the cp437 solution, but it should produce identical results on every Python version.\n# --- preparation\n\nimport codecs\n\ndef slashescape(err):\n \"\"\" codecs error handler. err is UnicodeDecode instance. return\n a tuple with a replacement for the unencodable part of the input\n and a position where encoding should continue\"\"\"\n #print err, dir(err), err.start, err.end, err.object[:err.start]\n thebyte = err.object[err.start:err.end]\n repl = u'\\\\x'+hex(ord(thebyte))[2:]\n return (repl, err.end)\n\ncodecs.register_error('slashescape', slashescape)\n\n# --- processing\n\nstream = [b'\\x80abc']\n\nlines = []\nfor line in stream:\n lines.append(line.decode('utf-8', 'slashescape'))\n\n", "In Python 3, the default encoding is \"utf-8\", so you can directly use:\nb'hello'.decode()\n\nwhich is equivalent to\nb'hello'.decode(encoding=\"utf-8\")\n\nOn the other hand, in Python 2, encoding defaults to the default string encoding. 
Thus, you should use:\nb'hello'.decode(encoding)\n\nwhere encoding is the encoding you want.\nNote: support for keyword arguments was added in Python 2.7.\n", "I think you actually want this:\n>>> from subprocess import *\n>>> command_stdout = Popen(['ls', '-l'], stdout=PIPE).communicate()[0]\n>>> command_text = command_stdout.decode(encoding='windows-1252')\n\nAaron's answer was correct, except that you need to know which encoding to use. And I believe that Windows uses 'windows-1252'. It will only matter if you have some unusual (non-ASCII) characters in your content, but then it will make a difference.\nBy the way, the fact that it does matter is the reason that Python moved to using two different types for binary and text data: it can't convert magically between them, because it doesn't know the encoding unless you tell it! The only way YOU would know is to read the Windows documentation (or read it here).\n", "Since this question is actually asking about subprocess output, you have more direct approaches available. The most modern would be using subprocess.check_output and passing text=True (Python 3.7+) to automatically decode stdout using the system default coding:\ntext = subprocess.check_output([\"ls\", \"-l\"], text=True)\n\nFor Python 3.6, Popen accepts an encoding keyword:\n>>> from subprocess import Popen, PIPE\n>>> text = Popen(['ls', '-l'], stdout=PIPE, encoding='utf-8').communicate()[0]\n>>> type(text)\nstr\n>>> print(text)\ntotal 0\n-rw-r--r-- 1 wim badger 0 May 31 12:45 some_file.txt\n\nThe general answer to the question in the title, if you're not dealing with subprocess output, is to decode bytes to text:\n>>> b'abcde'.decode()\n'abcde'\n\nWith no argument, sys.getdefaultencoding() will be used. If your data is not sys.getdefaultencoding(), then you must specify the encoding explicitly in the decode call:\n>>> b'caf\\xe9'.decode('cp1250')\n'café'\n\n", "Set universal_newlines to True, i.e.\ncommand_stdout = Popen(['ls', '-l'], stdout=PIPE, universal_newlines=True).communicate()[0]\n\n", "To interpret a byte sequence as a text, you have to know the\ncorresponding character encoding:\nunicode_text = bytestring.decode(character_encoding)\n\nExample:\n>>> b'\\xc2\\xb5'.decode('utf-8')\n'µ'\n\nls command may produce output that can't be interpreted as text. File names\non Unix may be any sequence of bytes except slash b'/' and zero\nb'\\0':\n>>> open(bytes(range(0x100)).translate(None, b'\\0/'), 'w').close()\n\nTrying to decode such byte soup using utf-8 encoding raises UnicodeDecodeError.\nIt can be worse. The decoding may fail silently and produce mojibake\nif you use a wrong incompatible encoding:\n>>> '—'.encode('utf-8').decode('cp1252')\n'—'\n\nThe data is corrupted but your program remains unaware that a failure\nhas occurred.\nIn general, what character encoding to use is not embedded in the byte sequence itself. You have to communicate this info out-of-band. Some outcomes are more likely than others and therefore chardet module exists that can guess the character encoding. 
A single Python script may use multiple character encodings in different places.\n\nls output can be converted to a Python string using the os.fsdecode()\nfunction that succeeds even for undecodable\nfilenames (it uses\nsys.getfilesystemencoding() and the surrogateescape error handler on\nUnix):\nimport os\nimport subprocess\n\noutput = os.fsdecode(subprocess.check_output('ls'))\n\nTo get the original bytes, you could use os.fsencode().\nIf you pass the universal_newlines=True parameter then subprocess uses\nlocale.getpreferredencoding(False) to decode bytes e.g., it can be\ncp1252 on Windows.\nTo decode the byte stream on-the-fly,\nio.TextIOWrapper()\ncould be used: example.\nDifferent commands may use different character encodings for their\noutput e.g., the dir internal command (cmd) may use cp437. To decode its\noutput, you could pass the encoding explicitly (Python 3.6+):\noutput = subprocess.check_output('dir', shell=True, encoding='cp437')\n\nThe filenames may differ from os.listdir() (which uses the Windows\nUnicode API) e.g., '\\xb6' can be substituted with '\\x14'—Python's\ncp437 codec maps b'\\x14' to control character U+0014 instead of\nU+00B6 (¶). To support filenames with arbitrary Unicode characters, see Decode PowerShell output possibly containing non-ASCII Unicode characters into a Python string\n", "While @Aaron Maenpaa's answer just works, a user recently asked:\n\nIs there any more simply way? 'fhand.read().decode(\"ASCII\")' [...] It's so long!\n\nYou can use:\ncommand_stdout.decode()\n\ndecode() has a standard argument:\n\ncodecs.decode(obj, encoding='utf-8', errors='strict')\n\n", "If you get the following when trying decode():\n\nAttributeError: 'str' object has no attribute 'decode'\n\nyou can also specify the encoding type straight in a cast:\n>>> my_byte_str\nb'Hello World'\n\n>>> str(my_byte_str, 'utf-8')\n'Hello World'\n\n", "If you have had this error:\n\nutf-8 codec can't decode byte 0x8a,\n\nthen it is better to use the following code to convert bytes to a string:\nbytes = b\"abcdefg\"\nstring = bytes.decode(\"utf-8\", \"ignore\") \n\n", "Bytes\nm=b'This is bytes'\n\nConverting to string\nMethod 1\nm.decode(\"utf-8\")\n\nor\nm.decode()\n\nMethod 2\nimport codecs\ncodecs.decode(m,encoding=\"utf-8\")\n\nor\nimport codecs\ncodecs.decode(m)\n\nMethod 3\nstr(m,encoding=\"utf-8\")\n\nor\nstr(m)[2:-1] # note: this only strips the b'' from the repr and leaves escape sequences as literal text; prefer the decode variants\n\nResult\n'This is bytes'\n\n", "For Python 3, this is a much safer and Pythonic approach to convert from bytes to a string:\ndef byte_to_str(bytes_or_str):\n if isinstance(bytes_or_str, bytes): # Check if it's in bytes\n print(bytes_or_str.decode('utf-8'))\n else:\n print(\"Object not of byte type\")\n\nbyte_to_str(b'total 0\\n-rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file1\\n-rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file2\\n')\n\nOutput:\ntotal 0\n-rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file1\n-rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file2\n\n", "When working with data from Windows systems (with \\r\\n line endings), my answer is\nString = Bytes.decode(\"utf-8\").replace(\"\\r\\n\", \"\\n\")\n\nWhy? Try this with a multiline Input.txt:\nBytes = open(\"Input.txt\", \"rb\").read()\nString = Bytes.decode(\"utf-8\")\nopen(\"Output.txt\", \"w\").write(String)\n\nAll your line endings will be doubled (to \\r\\r\\n), leading to extra empty lines. Python's text-read functions usually normalize line endings so that strings use only \\n. If you receive binary data from a Windows system, Python does not have a chance to do that. 
Thus,\nBytes = open(\"Input.txt\", \"rb\").read()\nString = Bytes.decode(\"utf-8\").replace(\"\\r\\n\", \"\\n\")\nopen(\"Output.txt\", \"w\").write(String)\n\nwill replicate your original file.\n", "We can decode the bytes object to produce a string using bytes.decode(encoding='utf-8', errors='strict').\nFor documentation see bytes.decode.\nPython 3 example:\nbyte_value = b\"abcde\"\nprint(\"Initial value = {}\".format(byte_value))\nprint(\"Initial value type = {}\".format(type(byte_value)))\nstring_value = byte_value.decode(\"utf-8\")\n# utf-8 is used here because it is a very common encoding, but you need to use the encoding your data is actually in.\nprint(\"------------\")\nprint(\"Converted value = {}\".format(string_value))\nprint(\"Converted value type = {}\".format(type(string_value)))\n\nOutput:\nInitial value = b'abcde'\nInitial value type = <class 'bytes'>\n------------\nConverted value = abcde\nConverted value type = <class 'str'>\n\nNote: In Python 3, by default the encoding type is UTF-8. So, <byte_string>.decode(\"utf-8\") can also be written as <byte_string>.decode()\n", "For your specific case of \"run a shell command and get its output as text instead of bytes\", on Python 3.7, you should use subprocess.run and pass in text=True (as well as capture_output=True to capture the output)\ncommand_result = subprocess.run([\"ls\", \"-l\"], capture_output=True, text=True)\ncommand_result.stdout # is a `str` containing your program's stdout\n\ntext used to be called universal_newlines, and was changed (well, aliased) in Python 3.7. If you want to support Python versions before 3.7, pass in universal_newlines=True instead of text=True\n", "From sys — System-specific parameters and functions:\nTo write or read binary data from/to the standard streams, use the underlying binary buffer. For example, to write bytes to stdout, use sys.stdout.buffer.write(b'abc').\n", "Try this:\nbytes.fromhex('c3a9').decode('utf-8') \n\n", "Decode with .decode(). This will decode the byte string; pass 'utf-8' as the encoding argument.\n", "def toString(string): \n try:\n # bytes decode to str; a str has no .decode and raises AttributeError\n return string.decode(\"utf-8\")\n except AttributeError:\n return string\n\nb = b'97.080.500'\ns = '97.080.500'\nprint(toString(b))\nprint(toString(s))\n\n", "If you want to convert any bytes, not just string converted to bytes:\nimport base64\nimport json\n\nwith open(\"bytesfile\", \"rb\") as infile:\n encoded = base64.b85encode(infile.read()) # avoid shadowing the built-in str\n\nwith open(\"bytesfile\", \"rb\") as infile:\n encoded2 = json.dumps(list(infile.read()))\n\nThis is not very efficient, however. It will turn a 2 MB picture into 9 MB.\n", "Try using this one; this function will ignore all bytes that are not in the character set (like UTF-8) and return a clean string. 
It is tested for Python 3.6 and above.\ndef bin2str(text, encoding = 'utf-8'):\n \"\"\"Converts a binary string to a Unicode string by removing all non-Unicode chars\n text: binary string to work on\n encoding: output encoding *utf-8\"\"\"\n\n return text.decode(encoding, 'ignore')\n\nHere, the function takes the binary data and decodes it: it converts the binary data to characters using the given character set, the ignore argument drops all non-decodable data from your binary, and it finally returns your desired string value.\nIf you are not sure about the encoding, use sys.getdefaultencoding() to get the default encoding of your device.\n", "You can use the decode() method on the bytes object to convert it to a string:\ncommand_stdout = command_stdout.decode()\nThen you can print the string as usual:\n\n\nprint(command_stdout)\n\nThis will produce the following output:\ntotal 0\n-rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file1\n-rw-rw-r-- 1 thomas thomas 0 Mar 3 07:03 file2\n\n" ]
[ 5363, 393, 256, 127, 120, 48, 43, 38, 34, 28, 20, 19, 19, 9, 8, 8, 5, 4, 3, 3, 2, 2, 1, 0 ]
[]
[]
[ "python", "python_3.x", "string" ]
stackoverflow_0000606191_python_python_3.x_string.txt
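Pulling the answers above together: a minimal sketch (an illustration, not taken from any single answer) of decoding subprocess output. With no explicit encoding it falls back to the locale's preferred encoding, which is what text=True / universal_newlines=True would use; the errors='replace' policy is an assumption made here for robustness, not part of the original thread.

import locale
import subprocess

def run_text(cmd, encoding=None):
    # Run a command and return its stdout as str.
    raw = subprocess.check_output(cmd)  # bytes
    enc = encoding or locale.getpreferredencoding(False)
    # errors='replace' substitutes U+FFFD for undecodable bytes
    # instead of raising UnicodeDecodeError.
    return raw.decode(enc, errors='replace')

print(run_text(['ls', '-l']))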
Q: prettier configuration is conflicting with eslint Our project at the company uses .js files, and ESLint is used for formatting. Now we are slowly transforming our app to use .ts and .tsx files, so I enabled Prettier formatting in .ts and .tsx files only. But before we used Prettier, we had configured special rules for TypeScript files in ESLint, as follows: overrides: [ // Match TypeScript Files // ================================= { files: ['**/*.{ts,tsx}'], // Parser Settings // ================================= // allow ESLint to understand TypeScript syntax // https://github.com/iamturns/eslint-config-airbnb-typescript/blob/master/lib/shared.js#L10 parser: '@typescript-eslint/parser', parserOptions: { project: './tsconfig.json', }, // Extend Other Configs // ================================= extends: [ 'airbnb', 'plugin:@typescript-eslint/recommended', 'plugin:import/typescript', ], rules: { ...rules, 'react/jsx-uses-react': 'off', 'react/react-in-jsx-scope': 'off', '@typescript-eslint/space-before-blocks': 'error', '@typescript-eslint/no-unused-vars': 'error', '@typescript-eslint/explicit-function-return-type': 'error', '@typescript-eslint/no-unsafe-return': 'warn', '@typescript-eslint/padding-line-between-statements': [ 'error', { blankLine: 'always', prev: ['interface', 'type'], next: '*', }, ], '@typescript-eslint/member-delimiter-style': [ 'error', { multiline: { delimiter: 'none', requireLast: false, }, singleline: { delimiter: 'comma', requireLast: false, }, }, ], '@typescript-eslint/type-annotation-spacing': 'error', }, and in the .prettierrc file I did the following: { "semi": true, "singleQuote": true, "printWidth": 100, "arrowParens": "always", "tabWidth": 2, "trailingComma": "es5", "bracketSameLine": false, "bracketSpacing": true } So Prettier, for example, adds a semicolon at the end of each line, while this rule in my ESLint config '@typescript-eslint/member-delimiter-style': [ 'error', { multiline: { delimiter: 'none', requireLast: false, }, singleline: { delimiter: 'comma', requireLast: false, }, }, does not require the line to end with a semicolon when declaring types and interfaces, and many other rules conflict as well. Also, ESLint should show an error when multiple empty lines are added to a file, so how can I get that behavior with Prettier? A: You may need to change your default formatter in VS Code to ESLint. You can install the ESLint extension and Prettier ESLint. If you are on a Mac, this can be achieved by going to: Code -> Preferences -> Settings -> Search for Default formatter -> change it to ESLint (dbaeumer.vscode-eslint). You may also need to change the default formatter for the current document. You can achieve this by right-clicking in your editor -> Click on Format Document With... -> Configure Default Formatter
prettier configuration is conflicting with eslint
Our project at the company uses .js files, and ESLint is used for formatting. Now we are slowly transforming our app to use .ts and .tsx files, so I enabled Prettier formatting in .ts and .tsx files only. But before we used Prettier, we had configured special rules for TypeScript files in ESLint, as follows: overrides: [ // Match TypeScript Files // ================================= { files: ['**/*.{ts,tsx}'], // Parser Settings // ================================= // allow ESLint to understand TypeScript syntax // https://github.com/iamturns/eslint-config-airbnb-typescript/blob/master/lib/shared.js#L10 parser: '@typescript-eslint/parser', parserOptions: { project: './tsconfig.json', }, // Extend Other Configs // ================================= extends: [ 'airbnb', 'plugin:@typescript-eslint/recommended', 'plugin:import/typescript', ], rules: { ...rules, 'react/jsx-uses-react': 'off', 'react/react-in-jsx-scope': 'off', '@typescript-eslint/space-before-blocks': 'error', '@typescript-eslint/no-unused-vars': 'error', '@typescript-eslint/explicit-function-return-type': 'error', '@typescript-eslint/no-unsafe-return': 'warn', '@typescript-eslint/padding-line-between-statements': [ 'error', { blankLine: 'always', prev: ['interface', 'type'], next: '*', }, ], '@typescript-eslint/member-delimiter-style': [ 'error', { multiline: { delimiter: 'none', requireLast: false, }, singleline: { delimiter: 'comma', requireLast: false, }, }, ], '@typescript-eslint/type-annotation-spacing': 'error', }, and in the .prettierrc file I did the following: { "semi": true, "singleQuote": true, "printWidth": 100, "arrowParens": "always", "tabWidth": 2, "trailingComma": "es5", "bracketSameLine": false, "bracketSpacing": true } So Prettier, for example, adds a semicolon at the end of each line, while this rule in my ESLint config '@typescript-eslint/member-delimiter-style': [ 'error', { multiline: { delimiter: 'none', requireLast: false, }, singleline: { delimiter: 'comma', requireLast: false, }, }, does not require the line to end with a semicolon when declaring types and interfaces, and many other rules conflict as well. Also, ESLint should show an error when multiple empty lines are added to a file, so how can I get that behavior with Prettier?
[ "You may need to change your default formatter for vscode to eslint. You can install the eslint extension and prettier Eslint.\n\nIf you are on mac this can be achieved by going to:\nCode -> Preferences -> Settings -> Search for Default formatter -> change it to Eslint (dbaeumer.vscode-eslint).\nYou may also need to change your default formatter. You can achieve this by right-clicking in your screen -> Click on format document with.. -> configure default formatter\n\n" ]
[ 0 ]
[]
[]
[ "eslint", "javascript", "prettier", "prettier_eslint", "typescript_eslint" ]
stackoverflow_0074674248_eslint_javascript_prettier_prettier_eslint_typescript_eslint.txt
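Beyond the editor settings in the answer, a common way to stop Prettier and ESLint from fighting (an assumption here, not taken from the thread) is to run Prettier through ESLint and disable ESLint's conflicting stylistic rules with eslint-config-prettier. A sketch of the TypeScript override, assuming prettier, eslint-plugin-prettier and eslint-config-prettier are installed:

// .eslintrc.js -- TypeScript override only
overrides: [
  {
    files: ['**/*.{ts,tsx}'],
    extends: [
      'airbnb',
      'plugin:@typescript-eslint/recommended',
      'plugin:import/typescript',
      // Runs Prettier as an ESLint rule and switches off the stylistic
      // rules it would conflict with (semi, member-delimiter-style, ...).
      // Keep it last so it wins over the earlier configs.
      'plugin:prettier/recommended',
    ],
    rules: {
      // Prettier already collapses runs of blank lines when it formats;
      // keep this rule if you also want ESLint to report them as errors.
      'no-multiple-empty-lines': ['error', { max: 1 }],
    },
  },
],

With this setup, formatting disagreements surface as errors on the prettier/prettier rule, so there is a single source of truth for style.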
Q: How to define jenkins build trigger in jenkinsfile to start build after other job I would like to define a build trigger in my Jenkinsfile. I know how to do it for the BuildDiscarderProperty: properties([[$class: 'jenkins.model.BuildDiscarderProperty', strategy: [$class: 'LogRotator', numToKeepStr: '50', artifactNumToKeepStr: '20']]]) How can I set the build trigger that starts the job when another project has been built? I cannot find a suitable entry in the Java API docs. Edit: My solution is to use the following code: stage('Build Agent'){ if (env.BRANCH_NAME == 'develop') { try { // try to start subsequent job, but don't wait for it to finish build job: '../Agent/develop', wait: false } catch(Exception ex) { echo "An error occurred while building the agent." } } if (env.BRANCH_NAME == 'master') { // start subsequent job and wait for it to finish build '../Agent/master', wait: true } } A: I just looked for the same thing and found this Jenkinsfile in jenkins-infra/jenkins.io In short: properties([ pipelineTriggers([cron('H/30 * * * *')]) ]) This is an example: #Project test1 pipeline { agent any stages { stage('hello') { steps { container('dind') { sh """ echo "Hello world!" """ } } } } post { success{ build propagate: false, job: 'test2' } } } post {} will be executed when project test1 is built, and the code inside success {} will only be executed when project test1 is successful. build propagate: false, job: 'test2' will call project test2. propagate: false ensures that project test1 does not wait for project test2's completion and simply invokes it.
How to define jenkins build trigger in jenkinsfile to start build after other job
I would like to define a build trigger in my Jenkinsfile. I know how to do it for the BuildDiscarderProperty: properties([[$class: 'jenkins.model.BuildDiscarderProperty', strategy: [$class: 'LogRotator', numToKeepStr: '50', artifactNumToKeepStr: '20']]]) How can I set the build trigger that starts the job when another project has been built? I cannot find a suitable entry in the Java API docs. Edit: My solution is to use the following code: stage('Build Agent'){ if (env.BRANCH_NAME == 'develop') { try { // try to start subsequent job, but don't wait for it to finish build job: '../Agent/develop', wait: false } catch(Exception ex) { echo "An error occurred while building the agent." } } if (env.BRANCH_NAME == 'master') { // start subsequent job and wait for it to finish build '../Agent/master', wait: true } }
[ "I just looked for the same thing and found this Jenkinsfilein jenkins-infra/jenkins.io\nIn short:\nproperties([\n pipelineTriggers([cron('H/30 * * * *')])\n])\n\n", "This is an example:\n#Project test1\npipeline {\n agent {\n any\n }\n stages {\n stage('hello') {\n steps {\n container('dind') {\n sh \"\"\"\n echo \"Hello world!\"\n \"\"\"\n }\n }\n }\n }\n post {\n success{\n build propagate: false, job: 'test2'\n }\n }\n }\n\npost {} will be execute when project test1 is built and the code inside\nsuccess {} will only be executed when project test1 is successful.\nbuild propagate: false, job: 'test2' will call project test2.\npropogate: false ensures that project test1 does not wait for project test2's\ncompletion and simply invokes it.\n" ]
[ 6, 0 ]
[]
[]
[ "build_triggers", "jenkins_pipeline" ]
stackoverflow_0039944755_build_triggers_jenkins_pipeline.txt
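The question literally asks for a trigger that starts this job after another project has been built; that exists as the upstream trigger, which can be set from the Jenkinsfile in the same way as the cron example above. A sketch; the job name 'upstream-project' is a placeholder:

properties([
    pipelineTriggers([
        upstream(
            upstreamProjects: 'upstream-project',  // a comma-separated list is allowed
            threshold: hudson.model.Result.SUCCESS // fire only after successful builds
        )
    ])
])

In declarative pipelines the equivalent is a triggers { upstream(...) } block. As with any trigger defined inside the pipeline, it only takes effect once the job has run at least once so Jenkins can register the property.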
Q: RabbitMQ / amqplib -- Error: Frame size exceeds frame max Connection to RabbitMQ fails with Error: Frame size exceeds frame max. There are a few similar issues raised on Stack Overflow and GitHub, but they are still very vague. One assumption is that the version of AMQP used by RabbitMQ and amqplib differs, but how can I check that? If we are talking about the major differences between AMQP 1.0 and AMQP 0-9-1, then theoretically amqplib only supports 0-9-1, and RabbitMQ supports it by default. Any other ideas? Versions: RabbitMQ: 3.10.5 amqplib: 0.10.0 A: Try creating an instance on the CloudAMQP dashboard; a URL would be generated after a successful instance creation. Use that URL. This worked for me. https://api.cloudamqp.com/
RabbitMQ / amqplib -- Error: Frame size exceeds frame max
Connection to RabbitMQ fails with Error: Frame size exceeds frame max. There are a few similar issues raised on Stack Overflow and GitHub, but they are still very vague. One assumption is that the version of AMQP used by RabbitMQ and amqplib differs, but how can I check that? If we are talking about the major differences between AMQP 1.0 and AMQP 0-9-1, then theoretically amqplib only supports 0-9-1, and RabbitMQ supports it by default. Any other ideas? Versions: RabbitMQ: 3.10.5 amqplib: 0.10.0
[ "Try creating an instance on the amqps dashboard, a URL owuld be generated after a successful instance creation. Use the URL.\nThis worked for me.\nhttps://api.cloudamqp.com/\n" ]
[ 0 ]
[]
[]
[ "amqp", "node.js", "node_amqp", "node_amqplib", "rabbitmq" ]
stackoverflow_0072855031_amqp_node.js_node_amqp_node_amqplib_rabbitmq.txt
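Two things worth ruling out for this error (assumptions, not a confirmed diagnosis): the client may be pointed at a non-AMQP endpoint, for example the 15672 management port instead of 5672, which makes the handshake bytes parse as a bogus, oversized frame; and the frame size amqplib negotiates can be pinned explicitly through the frameMax query parameter of the connection URL:

const amqp = require('amqplib');

// Placeholder credentials/host; the key parts are the AMQP port (5672,
// not the 15672 management UI) and an explicit frameMax in bytes.
const url = 'amqp://guest:guest@localhost:5672?frameMax=0x1000';

amqp.connect(url)
  .then((conn) => conn.createChannel())
  .catch((err) => console.error(err));

If the error persists with a known-good URL, comparing the server's frame_max (visible via rabbitmqctl environment) with the client's setting is a reasonable next step.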
Q: How do I print input in array form out of the for loop? for (int i = 0; i < 2; i++) { System.out.println("Please enter your name: "); name = scan.nextLine(); peoples.setName(name); System.out.println("Please enter your IC: "); icNo = scan.nextLine(); peoples.setIcNo(icNo); System.out.println("Please enter your marital status: "); status = scan.nextLine(); taxes.setStatus(status); System.out.println("Please enter your taxable income: "); taxableIncome = scan.nextDouble(); taxes.setTaxableIncome(taxableIncome); peoples.addPeople(name, icNo, taxableIncome, taxAmount); } System.out.printf("NAME " + "IC NO " + "TAXABLE INCOME " + "TAX AMOUNT"); System.out.println(""); System.out.println(peoples.toString()); This is a section of the code for the problem. I'm supposed to ask a person's name, ID, marital status and taxable income. I have three classes, one for the person's details, one to calculate the tax imposed and the main class here. The information obtained from the people was supposed to be placed into an array but they're all in different data types. Well, name and ID are string types while both taxable income and tax amount are in double data types. I tried to make an array in the people class but it didn't work out. I tried casting the array into string but it didn't work either. I'm supposed to obtain data from two or more people and print them out below a header. I just can't think of how it's supposed to store the data from user inputs and print them outside of the for loop. The user input should be stored as an array but I'm open to any other solutions. A: You would need a Tax class that stores the tax info, and it should contain a method that calculates tax, like this public class Tax { private double taxableIncome_; private double taxAmount_; private String status_; Tax(double taxableIncome, double taxAmount, String status) { // initialize your members } public double calculateTax() { //calculate tax } // implement setters and getters if needed. } Then you should create a Person class that saves a person's info and an instance of Tax, like this: public class Person { private String name_; private String id_; private Tax tax_; Person(String name, String id, Tax tax) { // initialize members } @Override public String toString() { // return a string that has the person's information and tax information } } Then, in your main program, after creating an array of people you can use Arrays.toString(people), which will call toString for each object in your array. Example: import java.util.Arrays; import java.util.Scanner; public class Main { public static void main(String[] args) { Person[] people = new Person[2]; Scanner scan = new Scanner(System.in); for(int i=0;i<2;i++) { System.out.println("Please enter your name: "); String name = scan.nextLine(); System.out.println("Please enter your IC: "); String icNo = scan.nextLine(); System.out.println("Please enter your marital status: "); String status = scan.nextLine(); System.out.println("Please enter your taxable income: "); double taxableIncome = scan.nextDouble(); System.out.println("Please enter your tax amount: "); double taxAmount = scan.nextDouble(); scan.nextLine(); // consume the newline left by nextDouble() so the next nextLine() works people[i] = new Person(name, icNo, new Tax(taxableIncome, taxAmount, status)); } System.out.println("NAME " + "IC NO " + "TAXABLE INCOME " + "TAX AMOUNT"); System.out.print(Arrays.toString(people)); } }
How do I print input in array form out of the for loop?
for (int i = 0; i < 2; i++) { System.out.println("Please enter your name: "); name = scan.nextLine(); peoples.setName(name); System.out.println("Please enter your IC: "); icNo = scan.nextLine(); peoples.setIcNo(icNo); System.out.println("Please enter your marital status: "); status = scan.nextLine(); taxes.setStatus(status); System.out.println("Please enter your taxable income: "); taxableIncome = scan.nextDouble(); taxes.setTaxableIncome(taxableIncome); peoples.addPeople(name, icNo, taxableIncome, taxAmount); } System.out.printf("NAME " + "IC NO " + "TAXABLE INCOME " + "TAX AMOUNT"); System.out.println(""); System.out.println(peoples.toString()); This is a section of the code for the problem. I'm supposed to ask a person's name, ID, marital status and taxable income. I have three classes, one for the person's details, one to calculate the tax imposed and the main class here. The information obtained from the people was supposed to be placed into an array but they're all in different data types. Well, name and ID are string types while both taxable income and tax amount are in double data types. I tried to make an array in the people class but it didn't work out. I tried casting the array into string but it didn't work either. I'm supposed to obtain data from two or more people and print them out below a header. I just can't think of how it's supposed to store the data from user inputs and print them outside of the for loop. The user input should be stored as an array but I'm open to any other solutions.
[ "You would need to have a Tax class that would store the tax info and it should contain a method that calculate tax like this\npublic class Tax {\n private double taxableIncome_;\n private double taxAmount_;\n private String status_;\n\n Tax(double taxableIncome, double taxAmount, String status) {\n // initialize your members\n }\n\n public double calculateTax() {\n //calculate tax\n }\n // implement setters and getters if needed.\n}\n\nThen you should create Person class that saves a person info and an instance of Tax like this:\npublic class Person {\n private String name_;\n private String id_;\n private Tax tax_;\n\n Person(String name, String id, Tax tax) {\n // initialize members\n }\n\n @Override\n public String toString() {\n// return a string that has a person's information and tax's information\n }\n}\n\nThen in your main program after creating an array of people you can use\nArrays.toString(pepoles) which will call toString for each object in your array\nExample:\nimport java.util.Arrays;\nimport java.util.Scanner;\n\npublic class Main\n{\n public static void main(String[] args)\n {\n Person[] people = new Person[2];\n Scanner scan = new Scanner(System.in);\n for(int i=0;i<2;i++)\n {\n String name = scan.nextLine();\n System.out.println(\"Please enter your IC: \");\n String icNo = scan.nextLine();\n System.out.println(\"Please enter your marital status: \");\n String status = scan.nextLine();\n System.out.println(\"Please enter your taxable income: \");\n double taxableIncome = scan.nextDouble();\n System.out.println(\"Please enter your taxable Amount: \");\n double taxAmount = scan.nextDouble();\n people[i] = new Person(name, icNo, new Tax(taxableIncome, taxAmount,status));\n }\n System.out.println(\"NAME \" + \"IC NO \" + \"TAXABLE INCOME \" + \"TAX AMOUNT\");\n System.out.print(Arrays.toString(people));\n }\n}\n\n" ]
[ 0 ]
[]
[]
[ "arrays", "for_loop", "java" ]
stackoverflow_0074673971_arrays_for_loop_java.txt
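One piece the answer leaves as an exercise is toString; a sketch using String.format so the rows line up under the header printed in main. The getter names here are assumptions: the answer only says to implement getters if needed:

@Override
public String toString() {
    // Column widths are arbitrary; adjust them to match the header.
    // %n emits the platform line separator.
    return String.format("%-10s %-15s %15.2f %12.2f%n",
            name_, id_, tax_.getTaxableIncome(), tax_.calculateTax());
}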
Q: How To implement CarQuery API In React Native I am new to React Native and need to implement a simple form with car data using the CarQuery API. I have tried to find tutorials online but have not found any for React Native. If anyone can show how to implement a simple form with CarQuery data, I would appreciate it so much.
How To implement CarQuery API In React Native
I am new to React Native and need to implement a simple form with car data using the CarQuery API. I have tried to find tutorials online but have not found any for React Native. If anyone can show how to implement a simple form with CarQuery data, I would appreciate it so much.
[]
[]
[ "use the fetch method to send a request to the API and retrieve the data you need. The fetch method is a built-in JavaScript function that allows you to make network requests, and is supported by React Native out of the box.\nconst fetchData = async () => {\n const response = await fetch('https://www.carqueryapi.com/api/0.3/?cmd=getMakes');\n const data = await response.json();\n console.log(data);\n};\n\nfetch method is used to send a request to the getMakes endpoint of the CarQuery API. The API returns a list of car makes in JSON format, which is parsed and logged to the console using the response.json() method.\nyou can store it in the component's state and use it to populate the form fields to use this data in your form. You can create a Select component with options for each car make, and use the data from the API to populate the options.\nclass MyForm extends React.Component {\n constructor(props) {\n super(props);\n this.state = {\n makes: [],\n };\n }\n\n componentDidMount() {\n fetchData().then(data => {\n this.setState({\n makes: data.Makes,\n });\n });\n }\n\n render() {\n const { makes } = this.state;\n return (\n <View>\n <Select\n options={makes.map(make => ({\n value: make.make_id,\n label: make.make_display,\n }))}\n />\n </View>\n );\n }\n}\n\ncomponentDidMount lifecycle method is used to call the fetchData function when the component is mounted. The data returned by the API is stored in the component's state, and to populate the options of the Select component.\n" ]
[ -1 ]
[ "javascript", "react_native", "reactjs" ]
stackoverflow_0074674330_javascript_react_native_reactjs.txt
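The same idea as the class component above, written as a function component with hooks; a sketch, not a tested integration. The make_id/make_display field names come from the snippet above, and @react-native-picker/picker is assumed to be installed:

import React, { useEffect, useState } from 'react';
import { View, Text } from 'react-native';
import { Picker } from '@react-native-picker/picker';

export default function MakePicker() {
  const [makes, setMakes] = useState([]);
  const [selected, setSelected] = useState(null);

  useEffect(() => {
    // Load the list of makes once, when the component mounts.
    fetch('https://www.carqueryapi.com/api/0.3/?cmd=getMakes')
      .then((res) => res.json())
      .then((data) => setMakes(data.Makes || []))
      .catch(console.warn);
  }, []);

  return (
    <View>
      <Text>Make</Text>
      <Picker selectedValue={selected} onValueChange={setSelected}>
        {makes.map((m) => (
          <Picker.Item key={m.make_id} label={m.make_display} value={m.make_id} />
        ))}
      </Picker>
    </View>
  );
}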
Q: SQL row_number with a condition on one column I want to configure row_number with a case condition: look at the "time_diffs" column and check it. If several 1's go one after another, they form a single group, so the number should repeat for each of them. Each 0 is a group by itself, so the number grows by +1 for every 0 it passes. When the iterator meets new 1's after a run of 0's, the counting does not reset; it continues, +1 after the 0's, with the logic described above. The query and result examples are listed below. select session_id, player_id, country, start_time, end_time, case when timestampdiff(minute, lag(end_time, 1) over(partition by player_id order by end_time) , start_time) < 5 then 1 when timestampdiff(minute, end_time , lead(start_time, 1) over(partition by player_id order by start_time)) < 5 then 1 else 0 end as time_diffs /* , here is a new code with an expected result */ from game_sessions where 1=1 and player_id = 1 order by player_id, start_time The result of the current query: session_id player_id country start_time end_time time_diffs 1 1 UK 01.01.2021 00:01 01.01.2021 00:10 1 2 1 UK 01.01.2021 00:12 01.01.2021 01:24 1 13 1 UK 01.01.2021 01:27 01.01.2021 01:50 1 3 1 UK 01.01.2021 10:01 01.01.2021 15:10 0 16 1 UK 01.01.2021 17:10 01.01.2021 17:20 1 17 1 UK 01.01.2021 17:22 01.01.2021 17:55 1 54 1 UK 01.01.2021 18:15 01.01.2021 18:35 0 32 1 UK 01.01.2021 18:55 01.01.2021 19:35 0 What I expect to see with a new column added to the current query: session_id player_id country start_time end_time time_diffs expected_result 1 1 UK 01.01.2021 00:01 01.01.2021 00:10 1 1 2 1 UK 01.01.2021 00:12 01.01.2021 01:24 1 1 13 1 UK 01.01.2021 01:27 01.01.2021 01:50 1 1 3 1 UK 01.01.2021 10:01 01.01.2021 15:10 0 2 16 1 UK 01.01.2021 17:10 01.01.2021 17:20 1 3 17 1 UK 01.01.2021 17:22 01.01.2021 17:55 1 3 54 1 UK 01.01.2021 18:15 01.01.2021 18:35 0 4 32 1 UK 01.01.2021 18:55 01.01.2021 19:35 0 5 A: You can't tell ROW_NUMBER to not be ROW_NUMBER. But you can use SUM() OVER () to count up cumulatively. So, first make a flag for "this is the start of a new group" (more than 5 minutes since the last row) and sum the flags up in order. (I deleted the time_diffs column, as it is over-engineered and not required in its current form.) WITH diffs AS ( select session_id, player_id, country, start_time, end_time, /* here is a new code with an expected result */ LAG(end_time) OVER ( PARTITION BY player_id ORDER BY start_time ) AS prev_end_time from game_sessions ) SELECT *, SUM( IF(start_time < prev_end_time + INTERVAL '5' MINUTE, 0, 1) ) OVER ( PARTITION BY player_id ORDER BY start_time ) AS expected_result FROM diffs WHERE 1=1 AND player_id = 1 ORDER BY player_id, start_time Demo: https://dbfiddle.uk/sJcWInTN (Note: for future questions, please use ISO-8601 standard notation for dates and times; it makes inserting the data into a table much easier and more reliable.)
SQL row_number with a condition on one column
I want to configure row_number with a case condition: look at the "time_diffs" column and check it. If several 1's go one after another, they form a single group, so the number should repeat for each of them. Each 0 is a group by itself, so the number grows by +1 for every 0 it passes. When the iterator meets new 1's after a run of 0's, the counting does not reset; it continues, +1 after the 0's, with the logic described above. The query and result examples are listed below. select session_id, player_id, country, start_time, end_time, case when timestampdiff(minute, lag(end_time, 1) over(partition by player_id order by end_time) , start_time) < 5 then 1 when timestampdiff(minute, end_time , lead(start_time, 1) over(partition by player_id order by start_time)) < 5 then 1 else 0 end as time_diffs /* , here is a new code with an expected result */ from game_sessions where 1=1 and player_id = 1 order by player_id, start_time The result of the current query: session_id player_id country start_time end_time time_diffs 1 1 UK 01.01.2021 00:01 01.01.2021 00:10 1 2 1 UK 01.01.2021 00:12 01.01.2021 01:24 1 13 1 UK 01.01.2021 01:27 01.01.2021 01:50 1 3 1 UK 01.01.2021 10:01 01.01.2021 15:10 0 16 1 UK 01.01.2021 17:10 01.01.2021 17:20 1 17 1 UK 01.01.2021 17:22 01.01.2021 17:55 1 54 1 UK 01.01.2021 18:15 01.01.2021 18:35 0 32 1 UK 01.01.2021 18:55 01.01.2021 19:35 0 What I expect to see with a new column added to the current query: session_id player_id country start_time end_time time_diffs expected_result 1 1 UK 01.01.2021 00:01 01.01.2021 00:10 1 1 2 1 UK 01.01.2021 00:12 01.01.2021 01:24 1 1 13 1 UK 01.01.2021 01:27 01.01.2021 01:50 1 1 3 1 UK 01.01.2021 10:01 01.01.2021 15:10 0 2 16 1 UK 01.01.2021 17:10 01.01.2021 17:20 1 3 17 1 UK 01.01.2021 17:22 01.01.2021 17:55 1 3 54 1 UK 01.01.2021 18:15 01.01.2021 18:35 0 4 32 1 UK 01.01.2021 18:55 01.01.2021 19:35 0 5
[ "You can't tell ROW_NUMBER to not be ROW_NUMBER. But you can use SUM() OVER () to count up cumulatively.\nSo, first make a flag for \"this is the start of a new group\" (more than 5mins since last row) and sum them up in order.\n(I deleted the time diffs column, as it is over engineered and not required in its current form.)\nWITH\n diffs AS\n(\n select\n session_id, \n player_id, \n country, \n start_time, \n end_time,\n /* here is a new code with an expected result */\n LAG(end_time)\n OVER (\n PARTITION BY player_id\n ORDER BY start_time\n )\n AS prev_end_time\n from\n game_sessions\n)\nSELECT\n *,\n SUM(\n IF(start_time < prev_end_time + INTERVAL '5' MINUTE, 0, 1)\n )\n OVER (\n PARTITION BY player_id\n ORDER BY start_time\n )\n AS expected_result\nFROM\n diffs\nWHERE\n 1=1\n AND player_id = 1\nORDER BY\n player_id,\n start_time\n\nDemo: https://dbfiddle.uk/sJcWInTN\n(Note, for future questions, please use ISO-8601 standard notation for dates and times, it makes inserting the data into a table much easier, reliable, etc.)\n" ]
[ 0 ]
[]
[]
[ "mysql", "row_number", "sql" ]
stackoverflow_0074674170_mysql_row_number_sql.txt
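IF() in the answer is MySQL-specific; if portability matters, the same flag can be written with a standard CASE expression, and the logic is unchanged:

SUM(
  CASE WHEN start_time < prev_end_time + INTERVAL '5' MINUTE
       THEN 0 ELSE 1 END
)
  OVER (
    PARTITION BY player_id
    ORDER BY start_time
  )
  AS expected_result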