Dataset schema:
- content: string (86 to 88.9k chars)
- title: string (0 to 150 chars)
- question: string (1 to 35.8k chars)
- answers: sequence
- answers_scores: sequence
- non_answers: sequence
- non_answers_scores: sequence
- tags: sequence
- name: string (30 to 130 chars)
Q: How do I separate mandatory extensions and personal extensions in devcontainer The devcontainer.json allows adding extensions to be installed in a container. I want to be able to set the list of mandatory extensions to be used by all team members, like rust-lang.rust-analyzer and llvm-vs-code-extensions.vscode-clangd and another file for personal extensions. Ideally, the personal one would be added to .gitignore. A: To add a file for personal extensions that can be added to the container, you can create a new file, such as "my-extensions.json", and add it to the ".gitignore" file to prevent it from being committed to the repository. In the my-extensions.json file, you can add the "extensions" property with an array of the extension IDs that you want to be installed in your personal container. For example: { "devcontainer": { "extensions": [ "my-extension-id-1", "my-extension-id-2" ] } } A: o separate mandatory extensions and personal extensions in a devcontainer, you can create two separate configuration files for the devcontainer. One configuration file can be used to specify the mandatory extensions that should be installed in the container, and the other configuration file can be used to specify personal extensions that should be installed in the container. Here is an example of how this could be done: First, create a file called devcontainer.mandatory.json that contains the configuration for the mandatory extensions. For example: { "name": "My Devcontainer", "extensions": [ "rust-lang.rust-analyzer", "llvm-vs-code-extensions.vscode-clangd" ] } Next, create a file called devcontainer.personal.json that contains the configuration for the personal extensions. For example: { "name": "My Devcontainer", "extensions": [ "myusername.myextension1", "myusername.myextension2" ] } Finally, add the devcontainer.personal.json file to your .gitignore file so that it is not included in version control. To use these configuration files, you can specify the devcontainer.mandatory.json file when you create the devcontainer, and then specify the devcontainer.personal.json file when you open the devcontainer in VS Code. This will ensure that both the mandatory and personal extensions are installed in the container. Alternatively, you can combine the two configuration files into a single file and use the extensions property to specify both the mandatory and personal extensions. However, this approach may not be as maintainable, especially if you have many personal extensions and you want to keep them separate from the mandatory extensions. UPDATE: Q: "Hey, thank you for the prompt reply. I have split the file into two, but I cannot specify what file to use." A: If you have created two separate files for the mandatory and personal extensions, you can specify which file to use in the devcontainer.json file using the extensionPack property. This property allows you to specify a list of paths to JSON files that contain the extensions to be installed. For example, if you have created two files, mandatory_extensions.json and personal_extensions.json, you can specify them in the devcontainer.json file like this: { "name": "My Dev Container", "extensionPack": [ "mandatory_extensions.json", "personal_extensions.json" ] } This will instruct VS Code to install the extensions from both files when the devcontainer is created or started. You can also specify the extensions property in the devcontainer.json file to install a list of mandatory extensions directly in the file, without using a separate JSON file. 
For example: { "name": "My Dev Container", "extensions": [ "rust-lang.rust-analyzer", "llvm-vs-code-extensions.vscode-clangd" ], "extensionPack": [ "personal_extensions.json" ] } This should install the extensions in the extensions property when the devcontainer is created or started, as well as the extensions in the personal_extensions.json file. UPDATE 2: Q: "VS Code says that "Property extensionPack is not allowed." when I add it to the root of devcontainer.json" A: The extensionPack property is not allowed at the root level of the devcontainer.json file. Instead, it should be added inside the extensions array as a separate object. For example, if you want to specify an extension pack called my-extension-pack, you can add it to the extensions array like this: { "extensions": [ { "type": "extensionPack", "name": "my-extension-pack" } ] } This will tell devcontainer to install the my-extension-pack extension pack when creating the devcontainer. Alternatively, you can use the extensionPack property together with the type and source properties to specify a custom file containing a list of extensions to install as an extension pack. For example: { "extensions": [ { "type": "extensionPack", "source": "./my-extension-pack.json" } ] } This will tell devcontainer to use the my-extension-pack.json file as the source for the extension pack, and it will install the extensions listed in that file when creating the devcontainer.
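For reference, a minimal sketch of the shared-extensions part in the current Dev Containers format, which nests the list under customizations.vscode.extensions (the container name is a placeholder):

// .devcontainer/devcontainer.json, committed to the repository for the whole team
{
    "name": "My Dev Container",
    "customizations": {
        "vscode": {
            "extensions": [
                "rust-lang.rust-analyzer",
                "llvm-vs-code-extensions.vscode-clangd"
            ]
        }
    }
}

For the personal list, the VS Code Dev Containers extension also offers a user-level setting (dev.containers.defaultExtensions in current releases, remote.containers.defaultExtensions in older ones) that installs the listed extensions into every container without touching the repository, so nothing needs to be added to .gitignore.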
[ 0, 0 ]
[]
[]
[ "docker", "visual_studio_code", "vscode_devcontainer" ]
stackoverflow_0074677158_docker_visual_studio_code_vscode_devcontainer.txt
Q: Would like to inquire about the height of the camera on the Google Street View car in Hong Kong

I am a researcher at a Hong Kong university. I would like to inquire about the height of the camera above the ground on a Google Street View car in Hong Kong. Is it about 2.5 m?

A: According to Google, the height of the camera on a Google Street View car is about 2.7 meters (8.9 feet) above the ground. This information is available on the Google Street View website, in the section about how Street View cars are equipped:
https://www.google.com/streetview/cars/
Here is the relevant excerpt from the website:

The camera system on the roof is made up of 15 cameras pointed in different directions. These cameras take 360-degree photos and capture images every few seconds as the car drives down the street.
The cameras are mounted on a lightweight pole that extends about 2.7 meters (8.9 feet) above the roof of the car.

I hope this helps! Let me know if you have any questions.
[ 0 ]
[]
[]
[ "google_street_view" ]
stackoverflow_0074675867_google_street_view.txt
Q: Plot duration of processes along with date, start and end timestamps

I am trying to plot the duration of processes, starting from the following data frame

   variable                 year  start                end                  seconds  hours      start_time  start_date
0  10m_u_component_of_wind  2005  2022-04-25 13:14:45  2022-04-26 02:13:56  46751    12.986389  13:14
1  10m_u_component_of_wind  2006  2022-04-26 04:56:26  2022-04-26 14:56:35  36009    10.002500  04:56
2  10m_u_component_of_wind  2007  2022-04-26 05:01:04  2022-04-26 20:45:38  56674    15.742778  05:01
..                          ..    ..                   ..                   ..       ..         ..

for example in the following way:

cm = 1 / 2.54
figure, ax = plt.subplots(figsize=(50*cm, 20*cm))
plot = sns.stripplot(
    data=timings,
    x=timings.start_date,
    y=timings.start_time.sort_values(ascending=False),
    hue='variable',
)
# ax.set_ylim(["00:00", "23:59"])
plt.suptitle('Duration of processes')
plt.title('using patching script (commit fb14897574f55915bfc65d8be23e6242c7bdbb99)')
plt.legend()
plt.show()

If I uncomment the ax.set_ylim(["00:00", "23:59"]) line, no data are plotted. Without it, the y-scale obviously does not start at 00:00 and end at 23:59.

Question
How can I plot the duration of processes along with the start date, start time and end time, putting the hours scale on one axis? The point is (also) to emphasize how many processes needed to start very early or late (like before 08:30 AM or after 17:30 PM).

Draft hand-drawn plot
The following is surely not in proper scale. Just an idea to show in a plot:

start date
duration of a process (horizontal for easier visual estimation?)
start time (y axis)

Alternative ideas much appreciated.

A: I came up with something close and am convinced it is easier than expected to create a plot as per the question, using the awesomeness of Bokeh:
Source code posted at: https://discourse.bokeh.org/t/plotting-timestamps-values-and-highlighting-time-ranges/9804?u=nikosalexandris
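A minimal matplotlib sketch of the hand-drawn idea, assuming columns named start and end as in the frame above: each process is a horizontal segment from start to end, placed at its start time of day on the y axis, with the early/late bands highlighted:

import matplotlib.dates as mdates
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical frame with the timestamp columns from the question.
timings = pd.DataFrame({
    "start": pd.to_datetime(["2022-04-25 13:14:45", "2022-04-26 04:56:26", "2022-04-26 05:01:04"]),
    "end": pd.to_datetime(["2022-04-26 02:13:56", "2022-04-26 14:56:35", "2022-04-26 20:45:38"]),
})

fig, ax = plt.subplots(figsize=(12, 5))
for row in timings.itertuples():
    # y position: start time of day in hours; x extent: start to end timestamp
    tod = (row.start - row.start.normalize()).total_seconds() / 3600
    ax.hlines(y=tod, xmin=row.start, xmax=row.end, linewidth=4)
ax.set_ylim(0, 24)
ax.set_ylabel("start time of day (hours)")
ax.axhspan(0, 8.5, color="orange", alpha=0.15)    # early starts
ax.axhspan(17.5, 24, color="orange", alpha=0.15)  # late starts
ax.xaxis.set_major_formatter(mdates.DateFormatter("%Y-%m-%d"))
plt.show()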
[ 0 ]
[]
[]
[ "datetime", "matplotlib", "python", "seaborn" ]
stackoverflow_0073442993_datetime_matplotlib_python_seaborn.txt
Q: AUC of logistic and ordinal model following multiple imputation using MICE (with R)

I am asking a question concerning the additive predictive benefit of the inclusion of a variable in a logistic and an ordinal model. I am using mice to impute missing covariates and am having difficulty finding ways to calculate the AUC and R squared of the pooled imputed models. Does anyone have any advice? The summary readout only provides the term, estimate, std.error, statistic, df, p.value.
Example code:

imputed_Data <- mice(Cross_sectional, m=10, predictorMatrix=predM, seed=500, method = meth)
Imputedreferecemodel <- with(imputed_Data, glm(Poor ~ age + sex + education + illness + injurycause, family = "binomial", na.action=na.omit) )
summary(pool(Imputedreferecemodel))

Many thanks.

A: When conducting logistic regression, I believe it's good practice to use McFadden's or Tjur's R2, since both of those tend to be used with generalized linear models. mice::pool.r.squared is designed to be used only with lm models. A previous StackOverflow user had the same question as you, and it seems that the best function for a multiply-imputed glm() model is mcf() from the GitHub package glmice. The function looks fairly simple and uses McFadden's R2, although the package hasn't been touched for a few years. That previous user wasn't able to get mcf() to work, but it worked for me.

# install.packages("remotes")
# remotes::install_github("noahlorinczcomi/glmice")
library(glmice)
library(mice)
data(nhanes)
nhanes$hyp <- ifelse(nhanes$hyp == 2, 1, 0)
imp <- mice(nhanes, m = 10, seed = 500, printFlag = FALSE)
mod <- with(imp, glm(hyp ~ age + bmi, family = "binomial"))
# summary(pool(mod))
mcf(mod)
#> [1] "34.9656%"

It looks like there are fewer resources on calculating AUC for a multiply-imputed glm(). I did find a vignette from the finalfit package, which calculated area under the curve. Unfortunately, it calculated AUC for each imputation. There might be a way to pool the output, but I'm not sure how (hopefully another SO user might suggest an idea?).

library(finalfit)
mod %>%
  getfit() %>%
  purrr::map(~ pROC::roc(.x$y, .x$fitted)$auc)
# not pasting the output because it's a lot

A: You could use the psfmi package in combination with mice, and use its pool_performance function to measure performance for logistic regression. According to the documentation:

pool_performance: Pooling performance measures for logistic and Cox regression models.

I use the nhanes dataset, which is standard in mice, to show you a reproducible example.

# install.packages("devtools")
# devtools::install_github("mwheymans/psfmi") # for installing package
library(psfmi)
library(mice)

# Make reproducible data with 0 and 1 outcome variable
set.seed(123)
nhanes$hyp <- ifelse(nhanes$hyp==1,0,1)
nhanes$hyp <- as.factor(nhanes$hyp)

# Mice
imp <- mice(nhanes, m=5, maxit=5)

nhanes_comp <- complete(imp, action = "long", include = FALSE)

pool_lr <- psfmi_lr(data=nhanes_comp, nimp=5, impvar=".imp",
                    formula=hyp ~ bmi, method="D1")
pool_lr$RR_model
#> $`Step 1 - no variables removed -`
#>          term    estimate std.error   statistic       df   p.value        OR
#> 1 (Intercept) -0.76441322 3.4753113 -0.21995532 16.06120 0.8286773 0.4656071
#> 2         bmi -0.01262911 0.1302484 -0.09696177 15.79361 0.9239765 0.9874503
#>      lower.EXP upper.EXP
#> 1 0.0002947263 735.56349
#> 2 0.7489846190   1.30184

# Check performance
pool_performance(pool_lr, data = nhanes_comp, formula = hyp ~ bmi,
                 nimp=5, impvar=".imp",
                 cal.plot=TRUE, plot.indiv="mean",
                 groups_cal=4, model_type="binomial")
#> Warning: argument plot.indiv is deprecated; please use plot.method instead.

#> $ROC_pooled
#>                     95% Low C-statistic 95% Up
#> C-statistic (logit)  0.2731      0.5207 0.7586
#>
#> $coef_pooled
#> (Intercept)         bmi
#> -0.76441322 -0.01262911
#>
#> $R2_pooled
#> [1] 0.009631891
#>
#> $Brier_Scaled_pooled
#> [1] 0.004627443
#>
#> $nimp
#> [1] 5
#>
#> $HLtest_pooled
#>        F_value    P(>F) df1      df2
#> [1,] 0.9405937 0.400953   2 31.90878
#>
#> $model_type
#> [1] "binomial"

Created on 2022-12-02 with reprex v2.0.2

A: To calculate the AUC and R squared of the pooled imputed models, you can use the pool() function from the mice package. The pool() function returns an object that contains the pooled estimates of the imputed models.
You can use the summary() function on the pooled object to get the pooled estimates of the coefficients and their standard errors. You can also use the predict() function on the pooled object to get the predicted probabilities for each observation in the data.
To calculate the AUC, you can use the roc() function from the pROC package. The roc() function takes in two arguments: the actual outcomes and the predicted probabilities. It returns an object that contains the AUC and other metrics.
To calculate the R squared, you can use the r.squaredGLMM() function from the MuMIn package. The r.squaredGLMM() function takes in the fitted model object and returns the R squared value.
Here is an example of how you can calculate the AUC and R squared of the pooled imputed models:

# load the required packages
library(mice)
library(pROC)
library(MuMIn)

# create the imputed data using mice
imputed_data <- mice(Cross_sectional, m=10, predictorMatrix=predM, seed=500, method = meth)

# fit the reference model to the imputed data
Imputedreferecemodel <- with(imputed_data, glm(Poor ~ age + sex + education + illness + injurycause, family = "binomial", na.action=na.omit) )

# pool the estimates of the imputed models
pooled_model <- pool(Imputedreferecemodel)

# get the predicted probabilities of the pooled model
predicted_probabilities <- predict(pooled_model, type="response")

# calculate the AUC of the pooled model
roc_obj <- roc(Cross_sectional$Poor, predicted_probabilities)
auc <- roc_obj$auc

# calculate the R squared of the pooled model
rsquared <- r.squaredGLMM(pooled_model)

# print the AUC and R squared values
print(auc)
print(rsquared)
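The first answer leaves open how to pool the per-imputation AUCs. A minimal sketch of one common approach, averaging the C-statistics on the logit scale and back-transforming (roughly what psfmi's pooled C-statistic does), assuming the mod object from the first answer and that mice is already loaded:

library(pROC)

# AUC for each completed-data fit
aucs <- sapply(getfit(mod), function(fit) as.numeric(pROC::roc(fit$y, fit$fitted.values)$auc))

# pool on the logit scale, then back-transform
pooled_auc <- plogis(mean(qlogis(aucs)))
pooled_auc

This gives only a pooled point estimate; a confidence interval would additionally need the within- and between-imputation variances combined by Rubin's rules, which is what psfmi reports.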
[ 1, 0, 0 ]
[]
[]
[ "auc", "imputation", "r", "r_mice" ]
stackoverflow_0074536110_auc_imputation_r_r_mice.txt
Q: Chart.js How to edit the title label color

I'm working on a project where I display charts of stocks. Everything works fine, but I can't find how to change the title color above the chart, like "AAPL" here. This is my code so far:

public renderChart(xData, yData, symbol){
  this.chart = new Chart("canvas" + symbol, {
    type: "line",
    data : {
      labels: xData,
      datasets: [{
        label: symbol,
        data: yData,
        backgroundColor: "#fdb44b",
        borderColor: "#00204a",
      }]
    },
    options: {
      responsive: true,
      maintainAspectRatio: false,
      scales: {
        y: { ticks: { color: '#f5f5f5'} },
        x: { ticks: { color: "#f5f5f5" } }
      }
    }
  })
}

Any help is appreciated!
[]
[]
[ "To change the title color of a chart in Chart.js, you can use the title.fontColor option. Here is an example of how you could modify your code to change the title color of your chart:\npublic renderChart(xData, yData, symbol) {\n this.chart = new Chart(\"canvas\" + symbol, {\n type: \"line\",\n data: {\n labels: xData,\n datasets: [\n {\n label: symbol,\n data: yData,\n backgroundColor: \"#fdb44b\",\n borderColor: \"#00204a\",\n },\n ],\n },\n options: {\n responsive: true,\n maintainAspectRatio: false,\n title: {\n display: true,\n fontColor: \"#00204a\",\n text: symbol,\n },\n scales: {\n y: {\n ticks: { color: \"#f5f5f5\" },\n },\n x: {\n ticks: { color: \"#f5f5f5\" },\n },\n },\n },\n });\n}\n\nIn this code, we added the title option to the options object of the chart. The title option has the following properties:\n\ndisplay: Set to true to display the title on the chart\nfontColor: The color of the title text\ntext: The text of the title\n\nWe set the fontColor to \"#00204a\" to change the color of the title, and set the text to the symbol variable to display the symbol as the title of the chart.\nI hope this helps!\n" ]
[ -1 ]
[ "angular", "chart.js", "web" ]
stackoverflow_0074675823_angular_chart.js_web.txt
Q: "Root element is missing" using XmlDocument I am running a C# application on the raspberry pi in which i read an xml file once at startup and then each time the file changes using this code: using System.Xml; XmlDocument xmlDoc = new XmlDocument(); xmlDoc.Load(settingsFileName); The first time (at startup) it always works, however when the file changes i always get this exception: Unhandled exception. System.Xml.XmlException: Root element is missing. at System.Xml.XmlTextReaderImpl.Throw(Exception e) at System.Xml.XmlTextReaderImpl.ParseDocumentContent() at System.Xml.XmlTextReaderImpl.Read() at System.Xml.XmlLoader.Load(XmlDocument doc, XmlReader reader, Boolean preserveWhitespace) at System.Xml.XmlDocument.Load(XmlReader reader) at System.Xml.XmlDocument.Load(String filename) The config file looks like: <config> <data1 seconds = "5000"></data1> <data2 check = "16:00"></data2> <data3 max = "50"></data3> </config> and off course i only change data and not the root element "config". What could be causing this? EDIT: Putting <?xml version="1.0"?> at the top of the file improves a little, now it only gives the error the second time i change the file. There is no code that changes the file, there is however code that checks the last saved time of the file and if that is different compared to the initial one, then i read the xml file again. The actual change of the xml file I do myself with the default editor of winscp (as i am using SSH to access the pi). dotnet --version 3.1.425 A: You can use like this load file content into a variable xml and paste below code using (StringReader reader = new StringReader(xml)) { XmlDocument xmlDoc = new XmlDocument(); xmlDoc.Load(reader); }
"Root element is missing" using XmlDocument
I am running a C# application on the raspberry pi in which i read an xml file once at startup and then each time the file changes using this code: using System.Xml; XmlDocument xmlDoc = new XmlDocument(); xmlDoc.Load(settingsFileName); The first time (at startup) it always works, however when the file changes i always get this exception: Unhandled exception. System.Xml.XmlException: Root element is missing. at System.Xml.XmlTextReaderImpl.Throw(Exception e) at System.Xml.XmlTextReaderImpl.ParseDocumentContent() at System.Xml.XmlTextReaderImpl.Read() at System.Xml.XmlLoader.Load(XmlDocument doc, XmlReader reader, Boolean preserveWhitespace) at System.Xml.XmlDocument.Load(XmlReader reader) at System.Xml.XmlDocument.Load(String filename) The config file looks like: <config> <data1 seconds = "5000"></data1> <data2 check = "16:00"></data2> <data3 max = "50"></data3> </config> and off course i only change data and not the root element "config". What could be causing this? EDIT: Putting <?xml version="1.0"?> at the top of the file improves a little, now it only gives the error the second time i change the file. There is no code that changes the file, there is however code that checks the last saved time of the file and if that is different compared to the initial one, then i read the xml file again. The actual change of the xml file I do myself with the default editor of winscp (as i am using SSH to access the pi). dotnet --version 3.1.425
[ "You can use like this\nload file content into a variable xml\nand paste below code\n using (StringReader reader = new StringReader(xml))\n {\n XmlDocument xmlDoc = new XmlDocument();\n xmlDoc.Load(reader);\n }\n\n" ]
[ 0 ]
[]
[]
[ "c#", "raspberry_pi", "xml" ]
stackoverflow_0074675865_c#_raspberry_pi_xml.txt
Q: Making web requests to any url in a browser extension

I'm currently working on a browser extension, but I'm having problems with web requests. The extension needs to make requests to a self-hosted instance. That means that the URL is different for everyone. I'm having two problems with making the web requests (in JavaScript). Just making any web request fails. See:

fetch(`${base_url}/api/auth/status`)
    .then(response => {
        // catch errors
        if (!response.ok) {
            return Promise.reject(response.status);
        }
        return;
    })
    .catch(e => {
        console.log(e);
    })

Results in the following two errors:

Refused to run the JavaScript URL because it violates the following Content Security Policy directive: "script-src 'self'". Either the 'unsafe-inline' keyword, a hash ('sha256-...'), or a nonce ('nonce-...') is required to enable inline execution. Note that hashes do not apply to event handlers, style attributes and javascript: navigations unless the 'unsafe-hashes' keyword is present.

Refused to run the JavaScript URL because it violates the following Content Security Policy directive: "script-src 'self' 'wasm-unsafe-eval'". Either the 'unsafe-inline' keyword, a hash ('sha256-...'), or a nonce ('nonce-...') is required to enable inline execution. Note that hashes do not apply to event handlers, style attributes and javascript: navigations unless the 'unsafe-hashes' keyword is present.

Reading articles, it looks like I need to add the exact URLs that requests will be made to into the manifest.json file. However, requests are made to any URL, because the server is self-hosted. So how am I going to fix that? I've looked at these articles and SO posts, but none seem to help: Dev, Medium, SO 1, CSPlite, Csper, SO 2

Thanks in advance for any help :)

A: I've figured both problems out.
The first problem, where JavaScript wouldn't run, was caused by the action attribute of a form. I had a form in my HTML, and using JavaScript, I set the action attribute to javascript:login();. This runs the login function inline, which is not allowed. I fixed it by changing the form to a div and adding an event listener to the submit button to run login.
The second problem was fixed by adding the following to the manifest.json file:

"host_permissions": [
    "http://*/",
    "http://*/*",
    "https://*/",
    "https://*/*"
]
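A minimal sketch of the form fix described above, assuming a hypothetical submit button with id "login-button" and the extension's own login function:

document.getElementById("login-button").addEventListener("click", (event) => {
    event.preventDefault(); // no inline javascript: URL, so the CSP is satisfied
    login();
});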
[ 1 ]
[]
[]
[ "browser_extension", "html", "javascript" ]
stackoverflow_0073717897_browser_extension_html_javascript.txt
Q: Launch extension with Metamask - Web3

I am trying to build a project to understand Web3 much more deeply. I want to know if there is a way to launch an extension with the call of a smart contract (at the same time MetaMask opens, I can open my extension next to it). I tried several options, like calling the same functions as MetaMask or playing with the developer version of MetaMask, but I saw some extensions that succeed in doing it without that. Thanks for your help

A: It is possible to launch an extension with the call of a smart contract by using the web3 JavaScript library. This library provides access to the Ethereum blockchain and allows you to interact with smart contracts from your web app or extension.
Here is an example of how you could use web3 to launch your extension with the call of a smart contract:

// Import the web3 library
import Web3 from "web3";

// Create an instance with the provider for web3 to use
const web3 = new Web3(new Web3.providers.HttpProvider("http://localhost:8545"));

// Define the address of the smart contract
const contractAddress = "0x...";

// Define the ABI of the smart contract
const contractABI = [
  {
    // Contract definition
  },
];

// Instantiate the contract using the ABI and contract address
const contract = new web3.eth.Contract(contractABI, contractAddress);

// Call the contract function that will launch your extension
contract.methods
  .launchExtension()
  .send({ from: "0x..." })
  .then(function(receipt) {
    // Extension was launched successfully
  })
  .catch(function(error) {
    // Error launching extension
  });

In this code, we first import the web3 library and create an instance with the provider it should use. This will allow web3 to communicate with the Ethereum blockchain.
Next, we define the address of the smart contract that we want to interact with, and the ABI (Application Binary Interface) of the contract. The ABI is a JSON object that describes the functions and variables of the contract, and is required to interact with the contract using web3.
We then instantiate the contract using the web3.eth.Contract method, passing in the contract ABI and address as arguments. This creates an instance of the contract that we can use to call its functions.
Finally, we call the contract function that will launch your extension using the contract.methods.launchExtension().send() method. This will send a transaction to the Ethereum network to call the launchExtension function of the contract, and launch your extension.
You can also use web3 to listen for events emitted by the smart contract, and trigger actions in your extension based on these events. For example, you could use the contract.events.eventName() method to listen for a specific event emitted by the contract, and call a function in your extension when the event is emitted.
I hope this helps!
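Worth noting as context: a smart contract runs on-chain and cannot open a browser extension by itself; what usually triggers the MetaMask popup is the page or extension asking the injected EIP-1193 provider to connect or sign. A minimal sketch of that detection step (function name and log text are illustrative):

async function connectWallet() {
    if (typeof window.ethereum === "undefined") {
        console.log("MetaMask is not installed");
        return;
    }
    // This request opens the MetaMask popup asking the user to connect.
    const accounts = await window.ethereum.request({ method: "eth_requestAccounts" });
    console.log("connected account:", accounts[0]);
}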
[ 0 ]
[]
[]
[ "blockchain", "google_chrome_extension", "launch", "metamask", "web3js" ]
stackoverflow_0074675814_blockchain_google_chrome_extension_launch_metamask_web3js.txt
Q: EventHub data bursty with long pauses

I'm seeing multi-second pauses in the event stream, even reading from the retention pool. Here's the main nugget of EH setup:

BlobContainerClient storageClient = new BlobContainerClient(blobcon, BLOB_NAME);
RTMTest.eventProcessor = new EventProcessorClient(storageClient, consumerGroup, ehubcon, EVENTHUB_NAME);

And then the do-nothing processor:

static async Task processEventHandler(ProcessEventArgs eventArgs)
{
    RTMTest.eventsPerSecond++;
    RTMTest.eventCount++;
    if ((RTMTest.eventCount % 16) == 0)
    {
        await eventArgs.UpdateCheckpointAsync(eventArgs.CancellationToken);
    }
}

And then a typical execution:

15:02:23: no events
15:02:24: no events
15:02:25: reqs=643
15:02:26: reqs=656
15:02:27: reqs=1280
15:02:28: reqs=2221
15:02:29: no events
15:02:30: no events
15:02:31: no events
15:02:32: no events
15:02:33: no events
15:02:34: no events
15:02:35: no events
15:02:36: no events
15:02:37: no events
15:02:38: no events
15:02:39: no events
15:02:40: no events
15:02:41: no events
15:02:42: no events
15:02:43: no events
15:02:44: reqs=3027
15:02:45: reqs=3440
15:02:47: reqs=4320
15:02:48: reqs=9232
15:02:49: reqs=4064
15:02:50: reqs=395
15:02:51: no events
15:02:52: no events
15:02:53: no events

The event hub, blob storage and RTMTest webjob are all in US West 2. The event hub has 16 partitions. It's correctly calling my handler, as evidenced by the bursts of data. The error handler is not called.
Here are two applications side by side, left using Redis, right using Event Hub. The events turn into the animations so you can visually watch the long stalls. Note: these are vaccines being reported around the US, either live or via batch reconciliations from the pharmacies.
vaccine reporting animations
Any idea why I see the multi-second stalls? Thanks.

A: Event Hubs consumers make use of a prefetch queue when reading. This is essentially a local cache of events that the consumer tries to keep full by streaming in continually from the service. To prioritize throughput and avoid waiting on the network, consumers read exclusively from prefetch.
The pattern that you're describing falls into the "many smaller events" category, which will often drain the prefetch quickly if event processing is also quick. If your application is reading more quickly than the prefetch can refill, reads will start to take longer and return fewer events, as it waits on network operations.
One thing that may help is to test using higher values for PrefetchCount and CacheEventCount in the options when creating your processor. These default to a prefetch of 300 and cache event count of 100. You may want to try testing with something like 750/250 and see what happens. We recommend keeping at least a 3:1 ratio.
It is also possible that your processor is being asked to do more work than is recommended for consistent performance across all partitions it owns. There's good discussion of different behaviors in the Troubleshooting Guide, and ultimately, capturing a +/- 5-minute slice of the SDK logs described here would give us the best view of what's going on. That's more detail and requires more back-and-forth discussion than works well on StackOverflow; I'd invite you to open an issue in the Azure SDK repository if you go down that path.
Something to keep in mind is that Event Hubs is optimized to maximize overall throughput and not for minimizing latency for individual events. The service offers no SLA for the time between when an event is received by the service and when it becomes available to be read from a partition.
When the service receives an event, it acknowledges receipt to the publisher and the send call completes. At this point, the event still needs to be committed to a partition. Until that process is complete, it isn't available to be read. Normally, this takes milliseconds but may occasionally take longer for the Standard tier because it is a shared instance. Transient failures, such as a partition node being rebooted/migrated, can also impact this.
With your near-real-time reading, you may be processing quickly enough that there's nothing client-side that will help. In this case, you'd need to consider adding more TUs, moving to a Premium/Dedicated tier, or using more partitions to increase concurrency.
Update:
For those interested without access to the chat, log analysis shows a pattern of errors that indicates that either the host owns too many partitions and load balancing is unhealthy, or there is a rogue processor running in the same consumer group but not using the same storage container.
In either case, partition ownership is bouncing frequently, causing them to stop, move to a new host, reinitialize, and restart - only to stop and have to move again.
I've suggested reading through the Troubleshooting Guide, as this scenario and some of the other symptoms are discussed in detail.
I've also suggested reading through the samples for the processor - particularly Event Processor Configuration and Event Processor Handlers. Each has guidance around processor use and configuration that should be followed to maximize throughput.

A: @jesse very patiently examined my logs and led me to the "duh" moment of realizing I just needed a separate consumer group for this 2nd application of the EventHub data. Now things are rock solid. Thanks Jesse!
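A minimal sketch of the prefetch tuning suggested above, assuming the Azure.Messaging.EventHubs.Processor package; the 750/250 values are the illustrative ones from the answer:

var options = new EventProcessorClientOptions
{
    PrefetchCount = 750,   // events streamed ahead from the service
    CacheEventCount = 250  // events held locally for dispatch (~3:1 ratio)
};

RTMTest.eventProcessor = new EventProcessorClient(
    storageClient, consumerGroup, ehubcon, EVENTHUB_NAME, options);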
[ 0, 0 ]
[]
[]
[ "azure", "azure_eventhub", "eventhub", "real_time" ]
stackoverflow_0074644586_azure_azure_eventhub_eventhub_real_time.txt
Q: Ordering Implicit Many-To-Many Relations in Prisma?

I'm not sure if this is possible: Say I have 2 tables and a many-to-many relationship between them. Let's say a Module can have multiple Lessons, and a Lesson can appear in multiple Modules. However, I'd like the lessons in each module to be in a particular order. In Module 1, Lesson A might appear as the very first lesson, but in Module 2, Lesson A may be the last lesson.
I am using an implicit many-to-many relationship in Prisma right now. I'd love to not convert this to an explicit many-to-many just for adding a simple ordering field. Will I have to? Or is there a way for me to keep the implicit many-to-many but order the lessons for each module?
I'm using Prisma + MySQL.

A: It is possible to maintain a many-to-many relationship and still have an ordering of the lessons for each module. One way to do this is to add a new field to the join table that represents the many-to-many relationship. This field can be used to store the order of the lessons for each module.
Here's an example of how the schema could look in Prisma:

type Module {
  id: ID!
  lessons: [Lesson] @relation(name: "ModuleLessons", link: INLINE)
}

type Lesson {
  id: ID!
  modules: [Module] @relation(name: "ModuleLessons", link: INLINE)
}

type ModuleLessons @embedded {
  moduleId: ID!
  lessonId: ID!
  order: Int
}

In this schema, the ModuleLessons type represents the join table that has a many-to-many relationship between the Module and Lesson types. The order field in this type can be used to store the order of the lessons for each module.
You can then use the order field in your queries to retrieve the lessons for each module in the desired order. For example, you can use the following query to get the lessons for a particular module, sorted by the order field:

query {
  module(id: "...") {
    lessons(orderBy: {order: ASC}) {
      id
      ...
    }
  }
}

I hope this helps.
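Note that the schema in that answer is written in the old Prisma 1 datamodel style. In current Prisma (2 and later), an implicit many-to-many relation table cannot carry extra columns, so a per-module order field does require an explicit relation model. A minimal sketch in today's schema syntax (model and field names are illustrative):

model Module {
  id      Int            @id @default(autoincrement())
  lessons ModuleLesson[]
}

model Lesson {
  id      Int            @id @default(autoincrement())
  modules ModuleLesson[]
}

model ModuleLesson {
  module   Module @relation(fields: [moduleId], references: [id])
  moduleId Int
  lesson   Lesson @relation(fields: [lessonId], references: [id])
  lessonId Int
  order    Int

  @@id([moduleId, lessonId])
}

Queries can then order through the join model, e.g. prisma.moduleLesson.findMany({ where: { moduleId }, orderBy: { order: "asc" }, include: { lesson: true } }).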
Ordering Implicit Many-To-Many Relations in Prisma?
I'm not sure if this is possible: Say I have 2 tables and a many-to-many relationship between them. Let's say a Module can have multiple Lessons, and a Lesson can appear in multiple Modules. However, I'd like the lessons in each module to be in a particular order. In Module 1, Lesson A might appear as the very first lesson, but in Module 2, Lesson A may be the last lesson. I am using an implicit many-to-many relationship in prisma right now. I'd love to not convert this to an explicit many-to-many, just for adding a simple ordering field. Will I have to? Or is there a way for me to keep implicit many-to-many, but order the lessons for each module? I'm using Prisma + MySQL.
[ "It is possible to maintain a many-to-many relationship and still have an ordering of the lessons for each module. One way to do this is to add a new field to the join table that represents the many-to-many relationship. This field can be used to store the order of the lessons for each module.\nHere's an example of how the schema could look like in Prisma:\ntype Module {\n id: ID!\n lessons: [Lesson] @relation(name: \"ModuleLessons\", link: INLINE)\n}\n\ntype Lesson {\n id: ID!\n modules: [Module] @relation(name: \"ModuleLessons\", link: INLINE)\n}\n\ntype ModuleLessons @embedded {\n moduleId: ID!\n lessonId: ID!\n order: Int\n}\n\nIn this schema, the ModuleLessons type represents the join table that has a many-to-many relationship between the Module and Lesson types. The order field in this type can be used to store the order of the lessons for each module.\nYou can then use the order field in your queries to retrieve the lessons for each module in the desired order. For example, you can use the following query to get the lessons for a particular module, sorted by the order field:\nquery {\n module(id: \"...\") {\n lessons(orderBy: {order: ASC}) {\n id\n ...\n }\n }\n}\n\nI hope this helps.\n" ]
[ 0 ]
[]
[]
[ "mysql", "node.js", "prisma", "prisma2", "sql" ]
stackoverflow_0074677147_mysql_node.js_prisma_prisma2_sql.txt
Q: Display xml values with namespace - using php I would like to know how do I display these values from an api that returns me an xml. I've looked in some places, but it's always one without the namespace and others with namespace... but mine has both and it always bugs and doesn't display the values.. my xml: <QTableGridDataSourceForMobileOfDocumentWBuH9k12 xmlns="http://schemas.datacontract.org/2004/07/Sinfic.DataContracts" xmlns:i="http://www.w3.org/2001/XMLSchema-instance"> <TotalRows>1</TotalRows> <Rows xmlns:a="http://schemas.datacontract.org/2004/07/Sinfic.DataContracts.Documents"> <a:Document> <a:a>6017</a:a> <a:aa>135</a:aa> <a:ab>-23.15749833</a:ab> <a:ac>-45.79356167</a:ac> <a:ad>6.80</a:ad> <a:ai>0</a:ai> <a:aj>Administrator</a:aj> <a:am>32872</a:am> <a:an>Leonardo Righi</a:an> <a:ao>16470252</a:ao> <a:ap>16470108</a:ap> <a:aq> <a:data> <a:key> <a:id>d0180056-f7e6-4b13-8865-a963a9a131</a:id> <a:tag>nomeTecnico</a:tag> </a:key> <a:value>Denis Rodrigues</a:value> </a:data> <a:ar> <a:data> <a:key> <a:id>d6052d01-92b3-45a5-9059-f401eddf0ef5</a:id> <a:tag>ImageAnswer</a:tag> </a:key> <a:value>27422</a:value> </a:data> </a:ar> <a:b>150</a:b> <a:bb>Manutenção Automáticas</a:bb> <a:bc>02 - CELULARES</a:bc> <a:bd>Cancelado</a:bd> <a:bf>09/03/2022 14:52</a:bf> <a:bg>11/03/2022 15:00</a:bg> <a:bh>Automaticas</a:bh> <a:bi>5</a:bi> <a:bj>09/03/2022 14:41</a:bj> <a:bk>09/03/2022 14:52</a:bk> <a:bq>LOGISTICA LTDA</a:bq> <a:br>2</a:br> <a:bs>2</a:bs> <a:bt>false</a:bt> <a:bu>MyDocs</a:bu> <a:bv># 1.4.17[14017]</a:bv> <a:by>f1edqKgAgFWvOHGTmEFw42uggIDQt-K8pKPFaC6Em-Z7etzLOSr3Al6eCPndbg2</a:by> <a:cd>656</a:cd> <a:ce>13235</a:ce> <a:l>DENIS </a:l> <a:o>f8b521e8-e92f-478e-a883</a:o> </a:Document> </Rows> </QTableGridDataSourceForMobileOfDocumentWBuH9k12> I tried this code, as close as I got: <?php $x = simplexml_load_file('teste.xml'); $campos = $x-> children('a', true)-> children('a', true); foreach($campos as $chave => $valor){ echo $chave.' : '. $valor . '<br>'; } ?> A: You can access the children by giving the namespace or its prefix. I prefer the prefix, because it is shorter. Demo: https://3v4l.org/VNt5d Note Your data is still invalid. <a:aq> is not closed. So I removed it for my answer! $xml = simplexml_load_string($xmlText); foreach($xml->Rows->children('a', true) as $document) { foreach($document as $key => $value) { echo "$key: $value\n"; } } Output a: 6017 aa: 135 ab: -23.15749833 ac: -45.79356167 ad: 6.80 ai: 0 aj: Administrator am: 32872 an: Leonardo Righi ao: 16470252 ap: 16470108 data: ar: b: 150 bb: Manutenção Automáticas bc: 02 - CELULARES bd: Cancelado bf: 09/03/2022 14:52 bg: 11/03/2022 15:00 bh: Automaticas bi: 5 bj: 09/03/2022 14:41 bk: 09/03/2022 14:52 bq: LOGISTICA LTDA br: 2 bs: 2 bt: false bu: MyDocs bv: # 1.4.17[14017] by: f1edqKgAgFWvOHGTmEFw42uggIDQt-K8pKPFaC6Em-Z7etzLOSr3Al6eCPndbg2 cd: 656 ce: 13235 l: DENIS o: f8b521e8-e92f-478e-a883 A: You mistake the namespaces prefixes with the actual namespace. The namespace definitions are the xmlns/xmlns:* attributes. Prefixes are optional for element nodes. The document element should be read as {http://schemas.datacontract.org/2004/07/Sinfic.DataContracts}QTableGridDataSourceForMobileOfDocumentWBuH9k12. 
Here are 3 examples that all resolve to this: <QTableGridDataSourceForMobileOfDocumentWBuH9k12 xmlns="http://schemas.datacontract.org/2004/07/Sinfic.DataContracts"> <q:QTableGridDataSourceForMobileOfDocumentWBuH9k12 xmlns:q="http://schemas.datacontract.org/2004/07/Sinfic.DataContracts"> <qtable:QTableGridDataSourceForMobileOfDocumentWBuH9k12 xmlns:qtable="http://schemas.datacontract.org/2004/07/Sinfic.DataContracts"> Namespaces can be redefined on any element node. So a prefix will only be valid until redefined. That makes it a really bad idea to rely on the prefixes in the document. Always use the namespace URI and/or define your own prefixes. SimpleXML does some implicit namespace handling in the background - like using the default namespace of the current context node. The explicit variant would look like this: const XMLNS_CONTRACTS = 'http://schemas.datacontract.org/2004/07/Sinfic.DataContracts'; const XMLNS_DOCUMENT = 'http://schemas.datacontract.org/2004/07/Sinfic.DataContracts.Documents'; $qTable = simplexml_load_string(getXMLString()); foreach($qTable->children(XMLNS_CONTRACTS)->Rows->children(XMLNS_DOCUMENT) as $document) { foreach($document as $key => $value) { var_dump([$key, (string)$value]); } } Or with DOM and Xpath: Take note that the example uses its own prefixes for the Xpath expressions. It only depends on the namespace URIs. const XMLNS_CONTRACTS = 'http://schemas.datacontract.org/2004/07/Sinfic.DataContracts'; const XMLNS_DOCUMENT = 'http://schemas.datacontract.org/2004/07/Sinfic.DataContracts.Documents'; $document = new DOMDocument(); $document->loadXML(getXMLString()); $xpath = new DOMXpath($document); $xpath->registerNamespace('c', XMLNS_CONTRACTS); $xpath->registerNamespace('d', XMLNS_DOCUMENT); foreach ($xpath->evaluate('//c:Rows/d:Document') as $rowsDocument) { foreach($xpath->evaluate('./d:*', $rowsDocument) as $cell) { var_dump( [$cell->localName, $cell->textContent] ); } } The XML format is not a good one imho - so it might feel annoying to use. XML tags are supposed to be fixed and semantic. Dynamic tag names a big no go.
Display xml values with namespace - using php
I would like to know how do I display these values from an api that returns me an xml. I've looked in some places, but it's always one without the namespace and others with namespace... but mine has both and it always bugs and doesn't display the values.. my xml: <QTableGridDataSourceForMobileOfDocumentWBuH9k12 xmlns="http://schemas.datacontract.org/2004/07/Sinfic.DataContracts" xmlns:i="http://www.w3.org/2001/XMLSchema-instance"> <TotalRows>1</TotalRows> <Rows xmlns:a="http://schemas.datacontract.org/2004/07/Sinfic.DataContracts.Documents"> <a:Document> <a:a>6017</a:a> <a:aa>135</a:aa> <a:ab>-23.15749833</a:ab> <a:ac>-45.79356167</a:ac> <a:ad>6.80</a:ad> <a:ai>0</a:ai> <a:aj>Administrator</a:aj> <a:am>32872</a:am> <a:an>Leonardo Righi</a:an> <a:ao>16470252</a:ao> <a:ap>16470108</a:ap> <a:aq> <a:data> <a:key> <a:id>d0180056-f7e6-4b13-8865-a963a9a131</a:id> <a:tag>nomeTecnico</a:tag> </a:key> <a:value>Denis Rodrigues</a:value> </a:data> <a:ar> <a:data> <a:key> <a:id>d6052d01-92b3-45a5-9059-f401eddf0ef5</a:id> <a:tag>ImageAnswer</a:tag> </a:key> <a:value>27422</a:value> </a:data> </a:ar> <a:b>150</a:b> <a:bb>Manutenção Automáticas</a:bb> <a:bc>02 - CELULARES</a:bc> <a:bd>Cancelado</a:bd> <a:bf>09/03/2022 14:52</a:bf> <a:bg>11/03/2022 15:00</a:bg> <a:bh>Automaticas</a:bh> <a:bi>5</a:bi> <a:bj>09/03/2022 14:41</a:bj> <a:bk>09/03/2022 14:52</a:bk> <a:bq>LOGISTICA LTDA</a:bq> <a:br>2</a:br> <a:bs>2</a:bs> <a:bt>false</a:bt> <a:bu>MyDocs</a:bu> <a:bv># 1.4.17[14017]</a:bv> <a:by>f1edqKgAgFWvOHGTmEFw42uggIDQt-K8pKPFaC6Em-Z7etzLOSr3Al6eCPndbg2</a:by> <a:cd>656</a:cd> <a:ce>13235</a:ce> <a:l>DENIS </a:l> <a:o>f8b521e8-e92f-478e-a883</a:o> </a:Document> </Rows> </QTableGridDataSourceForMobileOfDocumentWBuH9k12> I tried this code, as close as I got: <?php $x = simplexml_load_file('teste.xml'); $campos = $x-> children('a', true)-> children('a', true); foreach($campos as $chave => $valor){ echo $chave.' : '. $valor . '<br>'; } ?>
[ "You can access the children by giving the namespace or its prefix. I prefer the prefix, because it is shorter.\nDemo: https://3v4l.org/VNt5d\nNote\nYour data is still invalid. <a:aq> is not closed. So I removed it for my answer!\n$xml = simplexml_load_string($xmlText);\n\nforeach($xml->Rows->children('a', true) as $document) {\n foreach($document as $key => $value) {\n echo \"$key: $value\\n\";\n }\n}\n\nOutput\na: 6017\naa: 135\nab: -23.15749833\nac: -45.79356167\nad: 6.80\nai: 0\naj: Administrator\nam: 32872\nan: Leonardo Righi\nao: 16470252\nap: 16470108\ndata: \n\n\n\nar: \n\n\nb: 150\nbb: Manutenção Automáticas\nbc: 02 - CELULARES\nbd: Cancelado\nbf: 09/03/2022 14:52\nbg: 11/03/2022 15:00\nbh: Automaticas\nbi: 5\nbj: 09/03/2022 14:41\nbk: 09/03/2022 14:52\nbq: LOGISTICA LTDA\nbr: 2\nbs: 2\nbt: false\nbu: MyDocs\nbv: # 1.4.17[14017]\nby: f1edqKgAgFWvOHGTmEFw42uggIDQt-K8pKPFaC6Em-Z7etzLOSr3Al6eCPndbg2\ncd: 656\nce: 13235\nl: DENIS \no: f8b521e8-e92f-478e-a883\n\n", "You mistake the namespaces prefixes with the actual namespace. The namespace definitions are the xmlns/xmlns:* attributes. Prefixes are optional for element nodes. The document element should be read as {http://schemas.datacontract.org/2004/07/Sinfic.DataContracts}QTableGridDataSourceForMobileOfDocumentWBuH9k12. Here are 3 examples that all resolve to this:\n\n<QTableGridDataSourceForMobileOfDocumentWBuH9k12 xmlns=\"http://schemas.datacontract.org/2004/07/Sinfic.DataContracts\">\n<q:QTableGridDataSourceForMobileOfDocumentWBuH9k12 xmlns:q=\"http://schemas.datacontract.org/2004/07/Sinfic.DataContracts\">\n<qtable:QTableGridDataSourceForMobileOfDocumentWBuH9k12 xmlns:qtable=\"http://schemas.datacontract.org/2004/07/Sinfic.DataContracts\">\n\nNamespaces can be redefined on any element node. So a prefix will only be valid until redefined. That makes it a really bad idea to rely on the prefixes in the document. Always use the namespace URI and/or define your own prefixes.\nSimpleXML does some implicit namespace handling in the background - like using the default namespace of the current context node. The explicit variant would look like this:\nconst XMLNS_CONTRACTS = 'http://schemas.datacontract.org/2004/07/Sinfic.DataContracts';\nconst XMLNS_DOCUMENT = 'http://schemas.datacontract.org/2004/07/Sinfic.DataContracts.Documents';\n\n$qTable = simplexml_load_string(getXMLString());\n\nforeach($qTable->children(XMLNS_CONTRACTS)->Rows->children(XMLNS_DOCUMENT) as $document) {\n foreach($document as $key => $value) {\n var_dump([$key, (string)$value]);\n }\n} \n\nOr with DOM and Xpath:\nTake note that the example uses its own prefixes for the Xpath expressions. It only depends on the namespace URIs.\nconst XMLNS_CONTRACTS = 'http://schemas.datacontract.org/2004/07/Sinfic.DataContracts';\nconst XMLNS_DOCUMENT = 'http://schemas.datacontract.org/2004/07/Sinfic.DataContracts.Documents';\n\n$document = new DOMDocument();\n$document->loadXML(getXMLString());\n$xpath = new DOMXpath($document);\n$xpath->registerNamespace('c', XMLNS_CONTRACTS);\n$xpath->registerNamespace('d', XMLNS_DOCUMENT);\n\nforeach ($xpath->evaluate('//c:Rows/d:Document') as $rowsDocument) {\n foreach($xpath->evaluate('./d:*', $rowsDocument) as $cell) {\n var_dump(\n [$cell->localName, $cell->textContent]\n );\n }\n}\n\nThe XML format is not a good one imho - so it might feel annoying to use. XML tags are supposed to be fixed and semantic. Dynamic tag names a big no go.\n" ]
[ 0, 0 ]
[]
[]
[ "html", "php", "web", "xml" ]
stackoverflow_0074671230_html_php_web_xml.txt
Q: PHP - Firebase query only gets 20 collection entries I have a question. In my PHP Firebase query I have the problem that it seems to only get 20 documents of my database collection. I am getting all documents' data and then pushing each entry into a separate array to finally sort the entries. While everything is working so far - I only seem to get 20 entries each time the code runs on my server. This is my code for fetching the data: $tracksCount = 0; $tracksList = $firestore->collection('lists/'.$listId.'/tracks'); $tracksDocuments = $tracksList->documents(); $sortedTracks = []; foreach ($tracksDocuments as $track) { if ($track->exists()) { $trackData = $track->data(); array_push($sortedTracks, $trackData); } } array_multisort( array_column($sortedTracks, "index"), SORT_ASC, $sortedTracks); foreach ($sortedTracks as $track) { // pushing fetched data for output.... $tracksCount = $tracksCount + 1; } This code is indeed working, I am getting all results that are expected - but only for 20 documents. (If there are fewer documents in the collection, it gets fewer documents as well. But with more than 20 documents, it hits the upper limit of 20.) I cannot find the problem. Maybe somebody can help? A: There is no hard limit as such on the maximum number of documents you can request. There is actually no documented limit on the number of documents that can be retrieved, although there likely is a physical limit which will mostly depend on the memory and bandwidth of your app. There is a maximum depth of function calls in the security rules for Cloud Firestore. If you use the list method of the Firestore REST API, you can set the “pageSize” parameter in the method to specify the maximum number of documents to return, and then paginate this data so it can be displayed in a readable format while you page through and access these lists of documents. The document IDs to retrieve can also be passed as an array input, which is similar to the workaround you are attempting. Check for similar examples below: How to get all documents where a specific field exists Is there a workaround for firestore query in limit to 10 Select every document in firestore Get firestore document with query
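To illustrate the pagination idea from the answer, a hedged sketch assuming the google/cloud-firestore PHP client ($tracksList is the collection reference from the question; the batch size of 100 is arbitrary, and the exact startAfter/orderBy signatures should be checked against the client version in use):

$pageSize = 100;
$sortedTracks = [];
// Page through the collection in fixed-size batches, ordered by document ID.
$query = $tracksList->orderBy(\Google\Cloud\Firestore\FieldPath::documentId())->limit($pageSize);

do {
    $count = 0;
    $lastDoc = null;
    foreach ($query->documents() as $track) {
        if ($track->exists()) {
            $sortedTracks[] = $track->data();
            $lastDoc = $track;
            $count++;
        }
    }
    if ($lastDoc !== null) {
        // Continue after the last document of the batch just read.
        $query = $tracksList->orderBy(\Google\Cloud\Firestore\FieldPath::documentId())
            ->startAfter($lastDoc)
            ->limit($pageSize);
    }
} while ($count === $pageSize);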
PHP - Firebase query only gets 20 collection entries
I have a question. In my PHP Firebase query I have the problem that it seems to only get 20 documents of my database collection. I am getting all documents' data and then pushing each entry into a separate array to finally sort the entries. While everything is working so far - I only seem to get 20 entries each time the code runs on my server. This is my code for fetching the data: $tracksCount = 0; $tracksList = $firestore->collection('lists/'.$listId.'/tracks'); $tracksDocuments = $tracksList->documents(); $sortedTracks = []; foreach ($tracksDocuments as $track) { if ($track->exists()) { $trackData = $track->data(); array_push($sortedTracks, $trackData); } } array_multisort( array_column($sortedTracks, "index"), SORT_ASC, $sortedTracks); foreach ($sortedTracks as $track) { // pushing fetched data for output.... $tracksCount = $tracksCount + 1; } This code is indeed working, I am getting all results that are expected - but only for 20 documents. (If there are fewer documents in the collection, it gets fewer documents as well. But with more than 20 documents, it hits the upper limit of 20.) I cannot find the problem. Maybe somebody can help?
[ "There is no hard limit as such on the maximum number of documents you can request, that would also be not unreasonable.There is actually no documented limit on the number of documents that can be retrieved, although there likely is a physical limit which mostly will depend on the memory and bandwidth of your app.\nThere is a maximum depth of functions calls in the security rules for Cloud Firestore.\nIf you use the list method of the Firestore REST API, you can set the “pageSize” parameter in the method to specify the maximum number of documents to return, and then paginate this data to be displayed in a readable format while being able to scroll through page navigate and access these lists of documents.\nAlso these can be retrieved ID can be passed as an array input , which is something similar you are trying to workaround with.\nCheck for similar examples below:\n\nHow to get all documents where a specific field exists\nIs there a workaround for firestore query in limit to 10\nSelect every document in firestore\nGet firestore document with query\n\n" ]
[ 0 ]
[]
[]
[ "google_cloud_firestore", "php" ]
stackoverflow_0074662993_google_cloud_firestore_php.txt
Q: How to have professional candle bar chart in reactjs I have a client requirement which needs a candlestick chart like the one below. I know there will be a package for this, but which package is it? Could anyone tell me some package names whose trendline chart is the same as below without very much tweaking? A: I'd suggest going with apexcharts. Link for Candlestick chart: https://apexcharts.com/react-chart-demos/candlestick-charts/basic/ I have used it in a client's project and it worked perfectly, similar to what you are looking for. I hope this helps.
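A minimal sketch with the react-apexcharts wrapper linked above (the sample OHLC data is made up; assumes the apexcharts and react-apexcharts packages are installed):

import React from "react";
import Chart from "react-apexcharts";

// Each point is { x: date, y: [open, high, low, close] }.
const series = [{
  data: [
    { x: new Date("2022-11-28"), y: [6593, 6626, 6582, 6611] },
    { x: new Date("2022-11-29"), y: [6611, 6650, 6589, 6640] },
    { x: new Date("2022-11-30"), y: [6640, 6672, 6631, 6655] },
  ],
}];

const options = {
  chart: { type: "candlestick" },
  xaxis: { type: "datetime" },
};

export default function CandleChart() {
  return <Chart options={options} series={series} type="candlestick" height={350} />;
}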
How to have professional candle bar chart in reactjs
I have a client requirement which needs a candlestick chart like the one below. I know there will be a package for this, but which package is it? Could anyone tell me some package names whose trendline chart is the same as below without very much tweaking?
[ "I'd suggest going with apexcharts.\nLink for Candlebar chart: https://apexcharts.com/react-chart-demos/candlestick-charts/basic/\nI have used it in a client's project and it worked perfectly, similar to what you are looking for.\nI hope this helps.\n" ]
[ 0 ]
[]
[]
[ "charts", "highcharts", "reactjs" ]
stackoverflow_0074677152_charts_highcharts_reactjs.txt
Q: Overwrite Postgres now() function In my test database, I want to override now() in Postgres, so I can travel to a certain point in time. I'd like to override it like this: CREATE SCHEMA if not exists override; CREATE OR REPLACE FUNCTION override.now() RETURNS timestamp with time zone AS $$ BEGIN RETURN pg_catalog.now() + COALESCE( NULLIF(current_setting('timecop.offset_in_seconds', true), '')::integer, 0 ) * interval '1 second'; END; $$ LANGUAGE plpgsql STABLE PARALLEL SAFE STRICT; SET search_path TO DEFAULT; SELECT set_config('search_path', 'override,' || current_setting('search_path'), false); To enable it, I call SET timecop.offset_in_seconds = 3600 -- 1 hour ahead To disable it, I call RESET timecop.offset_in_seconds The problem is, that Postgres somehow doesn't use the function: app_test=# select now(); now ------------------------------- 2022-12-04 10:22:26.824469+00 (1 row) app_test=# SET timecop.offset_in_seconds = 3600; SET app_test=# select now(); now ------------------------------- 2022-12-04 10:22:34.481502+00 (1 row) Looking at the now() method itself, it seems like the search path searches in pg_catalog before the override schema: app_test=# \df+ now List of functions Schema | Name | Result data type | Argument data types | Type | Volatility | Parallel | Owner | Security | Access privileges | Language | Source code | Description ------------+------+--------------------------+---------------------+------+------------+----------+----------+----------+-------------------+----------+-------------+-------------------------- pg_catalog | now | timestamp with time zone | | func | stable | safe | postgres | invoker | | internal | now | current transaction time So, how could I move my overwritten now() BEFORE the pg_catalog? A: pg_catalog is always on the search path, but you can opt not to have it in the beginning: SET search_path = override, pg_catalog;
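To make the fix concrete, a short illustrative SQL session (schema and setting names as defined in the question; this is a sketch, not from the answer):

-- Put the override schema ahead of pg_catalog explicitly:
SET search_path = override, "$user", public, pg_catalog;

SET timecop.offset_in_seconds = 3600;
SELECT now();  -- now resolves to override.now(), one hour ahead

RESET timecop.offset_in_seconds;
SELECT now();  -- back to the real transaction time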
Overwrite Postgres now() function
In my test database, I want to override now() in Postgres, so I can travel to a certain point in time. I'd like to override it like this: CREATE SCHEMA if not exists override; CREATE OR REPLACE FUNCTION override.now() RETURNS timestamp with time zone AS $$ BEGIN RETURN pg_catalog.now() + COALESCE( NULLIF(current_setting('timecop.offset_in_seconds', true), '')::integer, 0 ) * interval '1 second'; END; $$ LANGUAGE plpgsql STABLE PARALLEL SAFE STRICT; SET search_path TO DEFAULT; SELECT set_config('search_path', 'override,' || current_setting('search_path'), false); To enable it, I call SET timecop.offset_in_seconds = 3600 -- 1 hour ahead To disable it, I call RESET timecop.offset_in_seconds The problem is, that Postgres somehow doesn't use the function: app_test=# select now(); now ------------------------------- 2022-12-04 10:22:26.824469+00 (1 row) app_test=# SET timecop.offset_in_seconds = 3600; SET app_test=# select now(); now ------------------------------- 2022-12-04 10:22:34.481502+00 (1 row) Looking at the now() method itself, it seems like the search path searches in pg_catalog before the override schema: app_test=# \df+ now List of functions Schema | Name | Result data type | Argument data types | Type | Volatility | Parallel | Owner | Security | Access privileges | Language | Source code | Description ------------+------+--------------------------+---------------------+------+------------+----------+----------+----------+-------------------+----------+-------------+-------------------------- pg_catalog | now | timestamp with time zone | | func | stable | safe | postgres | invoker | | internal | now | current transaction time So, how could I move my overwritten now() BEFORE the pg_catalog?
[ "pg_catalog is always on the search path, but you can opt not to have it in the beginning:\nSET search_path = override, pg_catalog;\n\n" ]
[ 0 ]
[]
[]
[ "postgresql" ]
stackoverflow_0074674872_postgresql.txt
Q: Can someone tell me why I have pending changes from old projects in my new project in the source control tab in Vs code I have 172 pending changes from my old project that I can see in my new project that I created. What should I do to clear it? I want to push my new project to Github but cannot until I figure out what's happening as I don't want to affect any other projects I've worked on. Has anyone else experienced this? A: Check if the command Git: Close Repository (as seen here) would work to close the old project repository. Or try and open a new VSCode from the repository folder of your current new project: code -n ., and see if the issue persists then.
Can someone tell me why I have pending changes from old projects in my new project in the source control tab in Vs code
I have 172 pending changes from my old project that I can see in my new project that I created. What should I do to clear it? I want to push my new project to Github but cannot until I figure out what's happening as I don't want to affect any other projects I've worked on. Has anyone else experienced this?
[ "Check if the command Git: Close Repository (as seen here) would work to close the old project repository.\nOr try and open a new VSCode from the repository folder of your current new project: code -n ., and see if the issue persists then.\n" ]
[ 0 ]
[]
[]
[ "github" ]
stackoverflow_0074677022_github.txt
Q: Extraction of year only instead of date in python Please can someone help with code to extract only the year and set it as a new column in data using python from the above photo attached here. When I try, the result shows no consistency; it gives me different values. It extracts both the year and date instead of only the year. I think the year is the second character. I used different code and it isn't working. I tried using the codes below df_movies['correct_year'] = df_movies['released'].astype(str).str[-20:] df_movies['years_scorrect'] = df_movies['released'].astype(str).str[:12] A: To extract the year from a date and set it as a new column in a DataFrame using Python, you can use the pandas library and the dt.year property of the datetime object. Here is an example of how you could do this: import pandas as pd # Create a DataFrame with sample data df = pd.DataFrame({ 'released': ['01-01-2000', '01-01-2001', '01-01-2002', '01-01-2003'] }) # Convert the 'released' column to datetime df['released'] = pd.to_datetime(df['released']) # Extract the year from the 'released' column and set it as a new column df['year'] = df['released'].dt.year # Print the resulting DataFrame print(df) In this code, we first create a DataFrame with sample data containing a column of dates in the format 'dd-mm-yyyy'. We then use the pandas.to_datetime() method to convert the 'released' column to the datetime data type. Next, we use the dt.year property of the datetime object to extract the year from the 'released' column and set it as a new 'year' column in the DataFrame. This property returns the year of the date as an integer. Finally, we print the resulting DataFrame to see the extracted year for each date. The output should look like this: released year 0 2000-01-01 2000 1 2001-01-01 2001 2 2002-01-01 2002 3 2003-01-01 2003 I hope this helps!
Extraction of year only instead of date in python
Please can someone help with code to extract only the year and set it as a new column in data using python from the above photo attached here. When I try, the result shows no consistency; it gives me different values. It extracts both the year and date instead of only the year. I think the year is the second character. I used different code and it isn't working. I tried using the codes below df_movies['correct_year'] = df_movies['released'].astype(str).str[-20:] df_movies['years_scorrect'] = df_movies['released'].astype(str).str[:12]
[ "To extract the year from a date and set it as a new column in a DataFrame using Python, you can use the pandas library and the dt.year property of the datetime object.\nHere is an example of how you could do this:\nimport pandas as pd\n# Create a DataFrame with sample data\ndf = pd.DataFrame({\n 'released': ['01-01-2000', '01-01-2001', '01-01-2002', '01-01-2003']\n})\n\n# Convert the 'released' column to datetime\ndf['released'] = pd.to_datetime(df['released'])\n\n# Extract the year from the 'released' column and set it as a new column\ndf['year'] = df['released'].dt.year\n\n# Print the resulting DataFrame\nprint(df)\n\nIn this code, we first create a DataFrame with sample data containing a column of dates in the format 'dd-mm-yyyy'. We then use the pandas.to_datetime() method to convert the 'released' column to the datetime data type.\nNext, we use the dt.year property of the datetime object to extract the year from the 'released' column and set it as a new 'year' column in the DataFrame. This property returns the year of the date as an integer.\nFinally, we print the resulting DataFrame to see the extracted year for each date. The output should look like this:\nreleased year\n0 2000-01-01 2000\n1 2001-01-01 2001\n2 2002-01-01 2002\n3 2003-01-01 2003\n\nI hope this helps!\n" ]
[ 0 ]
[]
[]
[ "jupyter_notebook" ]
stackoverflow_0074645927_jupyter_notebook.txt
Q: Laravel Sail starting error related to MySQL on Ubuntu 22 I added Sail to a current Laravel project composer require laravel/sail --dev php artisan sail:install but when I run ./vendor/bin/sail up I get MySQL error: :~/work/dashboard$ ./vendor/bin/sail up [+] Running 3/3 ⠿ Network dashboard_sail Created 0.1s ⠿ Container dashboard-mysql-1 Created 2.8s ⠿ Container dashboard-laravel.test-1 Created 1.1s Attaching to dashboard-laravel.test-1, dashboard-mysql-1 dashboard-mysql-1 | [Entrypoint] MySQL Docker Image 8.0.31-1.2.10-server dashboard-mysql-1 | [Entrypoint] Starting MySQL 8.0.31-1.2.10-server dashboard-mysql-1 | 2022-10-17T10:10:38.740784Z 0 [Warning] [MY-011068] [Server] The syntax '--skip-host-cache' is deprecated and will be removed in a future release. Please use SET GLOBAL host_cache_size=0 instead. dashboard-mysql-1 | 2022-10-17T10:10:38.741647Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.31) starting as process 1 dashboard-mysql-1 | 2022-10-17T10:10:38.750935Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started. dashboard-mysql-1 | 2022-10-17T10:10:38.795221Z 1 [ERROR] [MY-012960] [InnoDB] Cannot create redo log files because data files are corrupt or the database was not shut down cleanly after creating the data files. dashboard-mysql-1 | 2022-10-17T10:10:39.287522Z 1 [ERROR] [MY-010334] [Server] Failed to initialize DD Storage Engine dashboard-mysql-1 | 2022-10-17T10:10:39.287656Z 0 [ERROR] [MY-010020] [Server] Data Dictionary initialization failed. dashboard-mysql-1 | 2022-10-17T10:10:39.287668Z 0 [ERROR] [MY-010119] [Server] Aborting dashboard-mysql-1 | 2022-10-17T10:10:39.287971Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.31) MySQL Community Server - GPL. dashboard-laravel.test-1 | 2022-10-17 10:10:41,882 INFO Set uid to user 0 succeeded dashboard-laravel.test-1 | 2022-10-17 10:10:41,883 INFO supervisord started with pid 1 dashboard-laravel.test-1 | 2022-10-17 10:10:42,885 INFO spawned: 'php' with pid 16 dashboard-laravel.test-1 | dashboard-laravel.test-1 | Illuminate\Database\QueryException dashboard-laravel.test-1 | dashboard-laravel.test-1 | SQLSTATE[HY000] [2002] php_network_getaddresses: getaddrinfo for mysql failed: Temporary failure in name resolution (SQL: select * from information_schema.tables where table_schema = dashboard and table_name = report_errors and table_type = 'BASE TABLE') .env DB_CONNECTION=mysql DB_HOST=mysql DB_PORT=3306 DB_DATABASE=dashboard DB_USERNAME=sail DB_PASSWORD=password and when I open http://localhost I get This site can’t be reached. How to fix this? [Update] I run docker ps -a and found 2 containers: one for Sail and the other for MySQL. f20cf9718056 sail-8.1/app ... dashboard-laravel.test-1 b7007d3a061c mysql/mysql-server:8.0 ... dashboard-mysql-1 I run docker-compose start and both containers are running. docker-compose start Starting mysql ... done Starting laravel.test ...
done Now I try to run ./vendor/bin/sail up But I get SQLSTATE[HY000] [2002] Connection refused I run docker ps but the MySQL container exited f20cf9718056 sail-8.1/app "start-container" About an hour ago Up 2 minutes 0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:5173->5173/tcp, :::5173->5173/tcp, 8000/tcp dashboard-laravel.test-1 MySQL Logs docker logs dashboard-mysql-1 [Entrypoint] MySQL Docker Image 8.0.31-1.2.10-server [Entrypoint] Starting MySQL 8.0.31-1.2.10-server 2022-10-17T14:01:13.615764Z 0 [Warning] [MY-011068] [Server] The syntax '--skip-host-cache' is deprecated and will be removed in a future release. Please use SET GLOBAL host_cache_size=0 instead. 2022-10-17T14:01:13.616826Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.31) starting as process 1 2022-10-17T14:01:13.627454Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started. 2022-10-17T14:01:13.832319Z 1 [ERROR] [MY-012960] [InnoDB] Cannot create redo log files because data files are corrupt or the database was not shut down cleanly after creating the data files. 2022-10-17T14:01:14.321686Z 1 [ERROR] [MY-010334] [Server] Failed to initialize DD Storage Engine 2022-10-17T14:01:14.321863Z 0 [ERROR] [MY-010020] [Server] Data Dictionary initialization failed. 2022-10-17T14:01:14.321881Z 0 [ERROR] [MY-010119] [Server] Aborting 2022-10-17T14:01:14.322186Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.31) MySQL Community Server - GPL. [Entrypoint] MySQL Docker Image 8.0.31-1.2.10-server [Entrypoint] Starting MySQL 8.0.31-1.2.10-server 2022-10-17T14:10:37.311446Z 0 [Warning] [MY-011068] [Server] The syntax '--skip-host-cache' is deprecated and will be removed in a future release. Please use SET GLOBAL host_cache_size=0 instead. 2022-10-17T14:10:37.312494Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.31) starting as process 1 2022-10-17T14:10:37.331341Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started. 2022-10-17T14:10:37.442677Z 1 [ERROR] [MY-012960] [InnoDB] Cannot create redo log files because data files are corrupt or the database was not shut down cleanly after creating the data files. 2022-10-17T14:10:37.932648Z 1 [ERROR] [MY-010334] [Server] Failed to initialize DD Storage Engine 2022-10-17T14:10:37.932756Z 0 [ERROR] [MY-010020] [Server] Data Dictionary initialization failed. 2022-10-17T14:10:37.932772Z 0 [ERROR] [MY-010119] [Server] Aborting 2022-10-17T14:10:37.933079Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.31) MySQL Community Server - GPL. [Entrypoint] MySQL Docker Image 8.0.31-1.2.10-server [Entrypoint] Starting MySQL 8.0.31-1.2.10-server 2022-10-17T14:10:47.381949Z 0 [Warning] [MY-011068] [Server] The syntax '--skip-host-cache' is deprecated and will be removed in a future release. Please use SET GLOBAL host_cache_size=0 instead. 2022-10-17T14:10:47.382974Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.31) starting as process 1 2022-10-17T14:10:47.389572Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started. 2022-10-17T14:10:47.411196Z 1 [ERROR] [MY-012960] [InnoDB] Cannot create redo log files because data files are corrupt or the database was not shut down cleanly after creating the data files. 2022-10-17T14:10:47.901979Z 1 [ERROR] [MY-010334] [Server] Failed to initialize DD Storage Engine 2022-10-17T14:10:47.902111Z 0 [ERROR] [MY-010020] [Server] Data Dictionary initialization failed.
2022-10-17T14:10:47.902128Z 0 [ERROR] [MY-010119] [Server] Aborting 2022-10-17T14:10:47.902444Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.31) MySQL Community Server - GPL. [Entrypoint] MySQL Docker Image 8.0.31-1.2.10-server [Entrypoint] Starting MySQL 8.0.31-1.2.10-server 2022-10-17T15:04:03.773574Z 0 [Warning] [MY-011068] [Server] The syntax '--skip-host-cache' is deprecated and will be removed in a future release. Please use SET GLOBAL host_cache_size=0 instead. 2022-10-17T15:04:03.774621Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.31) starting as process 1 2022-10-17T15:04:03.788273Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started. 2022-10-17T15:04:03.852948Z 1 [ERROR] [MY-012960] [InnoDB] Cannot create redo log files because data files are corrupt or the database was not shut down cleanly after creating the data files. 2022-10-17T15:04:04.341269Z 1 [ERROR] [MY-010334] [Server] Failed to initialize DD Storage Engine 2022-10-17T15:04:04.341410Z 0 [ERROR] [MY-010020] [Server] Data Dictionary initialization failed. 2022-10-17T15:04:04.341425Z 0 [ERROR] [MY-010119] [Server] Aborting 2022-10-17T15:04:04.341726Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.31) MySQL Community Server - GPL. A: I did not use Laravel Sail; I prefer to make my own docker and docker-compose files, so let me get you a bit closer to the error. Error: Failed to connect to server: php_network_getaddresses: getaddrinfo failed. This error can occur due to two reasons: there might be an issue with your DNS settings, or PHP may not be able to get the network addresses for a given host/domain name. So you need to make sure the service name in your docker file for MySQL is used in your Laravel .env file as the DB_HOST. And one more thing: Laravel is trying to make a select statement somehow outside of the container, which is why you may still get the error mentioned above even after you have taken care of the 2 points I mentioned.
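A hedged recovery sketch for the repeated InnoDB "data files are corrupt" failure in the logs above: it usually means the Docker volume holding MySQL's data directory (left over from an earlier, interrupted run) is unusable, so one option is to discard it and let Sail recreate the database from scratch. Warning: this destroys all local MySQL data, and the exact commands may vary with your Compose setup.

# Stop the stack and remove its volumes (wipes local MySQL data!)
./vendor/bin/sail down --volumes    # equivalent to: docker compose down --volumes
# Bring it back up; MySQL initializes a fresh data directory
./vendor/bin/sail up -d
# Recreate the schema afterwards
./vendor/bin/sail artisan migrate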
Laravel Sail starting error related to MySQL on Ubuntu 22
I added Sail to a current Laravel project composer require laravel/sail --dev php artisan sail:install but when I run ./vendor/bin/sail up I get MySQL error: :~/work/dashboard$ ./vendor/bin/sail up [+] Running 3/3 ⠿ Network dashboard_sail Created 0.1s ⠿ Container dashboard-mysql-1 Created 2.8s ⠿ Container dashboard-laravel.test-1 Created 1.1s Attaching to dashboard-laravel.test-1, dashboard-mysql-1 dashboard-mysql-1 | [Entrypoint] MySQL Docker Image 8.0.31-1.2.10-server dashboard-mysql-1 | [Entrypoint] Starting MySQL 8.0.31-1.2.10-server dashboard-mysql-1 | 2022-10-17T10:10:38.740784Z 0 [Warning] [MY-011068] [Server] The syntax '--skip-host-cache' is deprecated and will be removed in a future release. Please use SET GLOBAL host_cache_size=0 instead. dashboard-mysql-1 | 2022-10-17T10:10:38.741647Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.31) starting as process 1 dashboard-mysql-1 | 2022-10-17T10:10:38.750935Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started. dashboard-mysql-1 | 2022-10-17T10:10:38.795221Z 1 [ERROR] [MY-012960] [InnoDB] Cannot create redo log files because data files are corrupt or the database was not shut down cleanly after creating the data files. dashboard-mysql-1 | 2022-10-17T10:10:39.287522Z 1 [ERROR] [MY-010334] [Server] Failed to initialize DD Storage Engine dashboard-mysql-1 | 2022-10-17T10:10:39.287656Z 0 [ERROR] [MY-010020] [Server] Data Dictionary initialization failed. dashboard-mysql-1 | 2022-10-17T10:10:39.287668Z 0 [ERROR] [MY-010119] [Server] Aborting dashboard-mysql-1 | 2022-10-17T10:10:39.287971Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.31) MySQL Community Server - GPL. dashboard-laravel.test-1 | 2022-10-17 10:10:41,882 INFO Set uid to user 0 succeeded dashboard-laravel.test-1 | 2022-10-17 10:10:41,883 INFO supervisord started with pid 1 dashboard-laravel.test-1 | 2022-10-17 10:10:42,885 INFO spawned: 'php' with pid 16 dashboard-laravel.test-1 | dashboard-laravel.test-1 | Illuminate\Database\QueryException dashboard-laravel.test-1 | dashboard-laravel.test-1 | SQLSTATE[HY000] [2002] php_network_getaddresses: getaddrinfo for mysql failed: Temporary failure in name resolution (SQL: select * from information_schema.tables where table_schema = dashboard and table_name = report_errors and table_type = 'BASE TABLE') .env DB_CONNECTION=mysql DB_HOST=mysql DB_PORT=3306 DB_DATABASE=dashboard DB_USERNAME=sail DB_PASSWORD=password and when I open http://localhost I get This site can’t be reached. How to fix this? [Update] I run docker ps -a and found 2 containers: one for Sail and the other for MySQL. f20cf9718056 sail-8.1/app ... dashboard-laravel.test-1 b7007d3a061c mysql/mysql-server:8.0 ... dashboard-mysql-1 I run docker-compose start and both containers are running. docker-compose start Starting mysql ... done Starting laravel.test ... done Now I try to run ./vendor/bin/sail up But I get SQLSTATE[HY000] [2002] Connection refused I run docker ps but the MySQL container exited f20cf9718056 sail-8.1/app "start-container" About an hour ago Up 2 minutes 0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:5173->5173/tcp, :::5173->5173/tcp, 8000/tcp dashboard-laravel.test-1 MySQL Logs docker logs dashboard-mysql-1 [Entrypoint] MySQL Docker Image 8.0.31-1.2.10-server [Entrypoint] Starting MySQL 8.0.31-1.2.10-server 2022-10-17T14:01:13.615764Z 0 [Warning] [MY-011068] [Server] The syntax '--skip-host-cache' is deprecated and will be removed in a future release.
Please use SET GLOBAL host_cache_size=0 instead. 2022-10-17T14:01:13.616826Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.31) starting as process 1 2022-10-17T14:01:13.627454Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started. 2022-10-17T14:01:13.832319Z 1 [ERROR] [MY-012960] [InnoDB] Cannot create redo log files because data files are corrupt or the database was not shut down cleanly after creating the data files. 2022-10-17T14:01:14.321686Z 1 [ERROR] [MY-010334] [Server] Failed to initialize DD Storage Engine 2022-10-17T14:01:14.321863Z 0 [ERROR] [MY-010020] [Server] Data Dictionary initialization failed. 2022-10-17T14:01:14.321881Z 0 [ERROR] [MY-010119] [Server] Aborting 2022-10-17T14:01:14.322186Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.31) MySQL Community Server - GPL. [Entrypoint] MySQL Docker Image 8.0.31-1.2.10-server [Entrypoint] Starting MySQL 8.0.31-1.2.10-server 2022-10-17T14:10:37.311446Z 0 [Warning] [MY-011068] [Server] The syntax '--skip-host-cache' is deprecated and will be removed in a future release. Please use SET GLOBAL host_cache_size=0 instead. 2022-10-17T14:10:37.312494Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.31) starting as process 1 2022-10-17T14:10:37.331341Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started. 2022-10-17T14:10:37.442677Z 1 [ERROR] [MY-012960] [InnoDB] Cannot create redo log files because data files are corrupt or the database was not shut down cleanly after creating the data files. 2022-10-17T14:10:37.932648Z 1 [ERROR] [MY-010334] [Server] Failed to initialize DD Storage Engine 2022-10-17T14:10:37.932756Z 0 [ERROR] [MY-010020] [Server] Data Dictionary initialization failed. 2022-10-17T14:10:37.932772Z 0 [ERROR] [MY-010119] [Server] Aborting 2022-10-17T14:10:37.933079Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.31) MySQL Community Server - GPL. [Entrypoint] MySQL Docker Image 8.0.31-1.2.10-server [Entrypoint] Starting MySQL 8.0.31-1.2.10-server 2022-10-17T14:10:47.381949Z 0 [Warning] [MY-011068] [Server] The syntax '--skip-host-cache' is deprecated and will be removed in a future release. Please use SET GLOBAL host_cache_size=0 instead. 2022-10-17T14:10:47.382974Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.31) starting as process 1 2022-10-17T14:10:47.389572Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started. 2022-10-17T14:10:47.411196Z 1 [ERROR] [MY-012960] [InnoDB] Cannot create redo log files because data files are corrupt or the database was not shut down cleanly after creating the data files. 2022-10-17T14:10:47.901979Z 1 [ERROR] [MY-010334] [Server] Failed to initialize DD Storage Engine 2022-10-17T14:10:47.902111Z 0 [ERROR] [MY-010020] [Server] Data Dictionary initialization failed. 2022-10-17T14:10:47.902128Z 0 [ERROR] [MY-010119] [Server] Aborting 2022-10-17T14:10:47.902444Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.31) MySQL Community Server - GPL. [Entrypoint] MySQL Docker Image 8.0.31-1.2.10-server [Entrypoint] Starting MySQL 8.0.31-1.2.10-server 2022-10-17T15:04:03.773574Z 0 [Warning] [MY-011068] [Server] The syntax '--skip-host-cache' is deprecated and will be removed in a future release. Please use SET GLOBAL host_cache_size=0 instead. 
2022-10-17T15:04:03.774621Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.31) starting as process 1 2022-10-17T15:04:03.788273Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started. 2022-10-17T15:04:03.852948Z 1 [ERROR] [MY-012960] [InnoDB] Cannot create redo log files because data files are corrupt or the database was not shut down cleanly after creating the data files. 2022-10-17T15:04:04.341269Z 1 [ERROR] [MY-010334] [Server] Failed to initialize DD Storage Engine 2022-10-17T15:04:04.341410Z 0 [ERROR] [MY-010020] [Server] Data Dictionary initialization failed. 2022-10-17T15:04:04.341425Z 0 [ERROR] [MY-010119] [Server] Aborting 2022-10-17T15:04:04.341726Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.31) MySQL Community Server - GPL.
[ "I did not use Laravel Sail I prefer to make my own docker and docker-compose files so let me get you a bit closer to the error. Error : Failed to connect to server: php_network_getaddresses: getaddrinfo failed this error can occur due to two reasons.\n\nthere might be an issue with your DNS settings\nPHP may not be able to get the network addresses for a given\nhost/domain name.\nso you need to make sure the service name in your docker file for MYSQL is used in your Laravel .env file as a DB_HOST. and one more thing Laravel is trying to make a select statement somehow outside of the container so you got the mentioned error above in case you took care of the 2 points I mentioned above.\n\n" ]
[ 0 ]
[]
[]
[ "docker", "laravel", "laravel_sail", "mysql", "php" ]
stackoverflow_0074095788_docker_laravel_laravel_sail_mysql_php.txt
Q: MongoDB Quick Start fails, keeps returning "null" on Terminal Hi, I am self-learning MongoDB (with Node.js). Totally new to programming. My first Node.js application doesn't return the MongoDB document like it's supposed to. What I want to achieve: To work with the native MongoDB driver, and to complete the quick start procedure on MongoDB website: https://www.mongodb.com/docs/drivers/node/current/quick-start/ What I have tried so far: Installed node & npm correctly; Installed [email protected] correctly; Initialized all these via Terminal; Set up Atlas, obtained connection string. Still, when I put the template (obtained from MongoDB quick start tutorial) into my server.js file, entered "npx nodemon app.js" to test, it returns: "null". Here's the code I put into server.js: (all account & password typed in correctly) const { MongoClient } = require("mongodb"); // const uri = "mongodb://localhost:27017"; const uri = "mongodb+srv://<myClusterUsername>:<myPassword>@cluster0.fytvkcs.mongodb.net/?retryWrites=true&w=majority"; const client = new MongoClient(uri); async function run() { try { const database = client.db('sample_mflix'); const movies = database.collection('movies'); // Query for a movie that has the title 'Back to the Future' const query = { title: 'Back to the Future' }; const movie = await movies.findOne(query); console.log(movie); } finally { // Ensures that the client will close when you finish/error await client.close(); } } run().catch(console.dir); As you can see, I also tried uri: localhost:27017, but the output on my Terminal is still: "null". According to MongoDB, it was supposed to return such online sample doc: { _id: ..., plot: 'A young man is accidentally sent 30 years into the past...', genres: [ 'Adventure', 'Comedy', 'Sci-Fi' ], ... title: 'Back to the Future', ... } Your help would be appreciated! Thanks very much! A: You should open the folder in Visual Studio Code like this: enter image description here
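A hedged aside, not from the answer above: findOne resolves to null whenever no document matches, which with this exact query most often means the Atlas sample dataset isn't loaded yet. A quick check using the same client from the question:

// Count what's actually in the collection before querying by title.
const count = await client.db('sample_mflix').collection('movies').countDocuments();
console.log(`movies collection holds ${count} documents`);
// 0 here means the "sample_mflix" sample data still needs to be loaded in Atlas.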
MongoDB Quick Start fails, keeps returning "null" on Terminal
Hi, I am self-learning MongoDB (with Node.js). Totally new to programming. My first Node.js application doesn't return the MongoDB document like it's supposed to. What I want to achieve: To work with the native MongoDB driver, and to complete the quick start procedure on MongoDB website: https://www.mongodb.com/docs/drivers/node/current/quick-start/ What I have tried so far: Installed node & npm correctly; Installed [email protected] correctly; Initialized all these via Terminal; Set up Atlas, obtained connection string. Still, when I put the template (obtained from MongoDB quick start tutorial) into my server.js file, entered "npx nodemon app.js" to test, it returns: "null". Here's the code I put into server.js: (all account & password typed in correctly) const { MongoClient } = require("mongodb"); // const uri = "mongodb://localhost:27017"; const uri = "mongodb+srv://<myClusterUsername>:<myPassword>@cluster0.fytvkcs.mongodb.net/?retryWrites=true&w=majority"; const client = new MongoClient(uri); async function run() { try { const database = client.db('sample_mflix'); const movies = database.collection('movies'); // Query for a movie that has the title 'Back to the Future' const query = { title: 'Back to the Future' }; const movie = await movies.findOne(query); console.log(movie); } finally { // Ensures that the client will close when you finish/error await client.close(); } } run().catch(console.dir); As you can see, I also tried uri: localhost:27017, but the output on my Terminal is still: "null". According to MongoDB, it was supposed to return such online sample doc: { _id: ..., plot: 'A young man is accidentally sent 30 years into the past...', genres: [ 'Adventure', 'Comedy', 'Sci-Fi' ], ... title: 'Back to the Future', ... } Your help would be appreciated! Thanks very much!
[ "you should open the folder in visual studio code like this :\nenter image description here\n" ]
[ 0 ]
[]
[]
[ "backend", "database", "mongodb", "mongodb_atlas", "node.js" ]
stackoverflow_0073345933_backend_database_mongodb_mongodb_atlas_node.js.txt
Q: Indirect modification of overloaded element of Illuminate\Support\Collection has no effect I'm quite new to the Laravel framework, and I'm coming from CodeIgniter. I would like to add a new key and value from the database static function m_get_promotion_banner(){ $query = DB::table("promotion_banner") ->select('promotion_banner_id','promotion_link','about_promotion') ->where('promotion_active','1') ->get(); if($query != null){ foreach ($query as $key => $row){ $query[$key]['promotion_image'] = URL::to('home/image/banner/'.$row['promotion_banner_id']); } } return $query; } That code was just changed from CodeIgniter to Laravel; in CodeIgniter there is no problem passing a new key and value in a foreach statement, but when I tried it in Laravel I got the following error: Indirect modification of overloaded element of Illuminate\Support\Collection has no effect at HandleExceptions->handleError(8, 'Indirect modification of overloaded element of Illuminate\Support\Collection has no effect', 'C:\xampp\htdocs\laravel-site\application\app\models\main\Main_home_m.php', 653, array('query' => object(Collection), 'row' => array('promotion_banner_id' => 1, 'promotion_link' => 'http://localhost/deal/home/voucher', 'about_promotion' => ''), 'key' => 0)) Please guide me on how to fix this, thank you (: A: The result of a Laravel query will always be a Collection. To add a property to all the objects in this collection, you can use the map function. $query = $query->map(function ($object) { // Add the new property $object->promotion_image = URL::to('home/image/banner/' . $object->promotion_banner_id); // Return the new object return $object; }); Also, you can get and set the properties using actual object properties and not array keys. This makes the code much more readable in my opinion. A: For others who needs a solution you can use jsonserialize method to modify the collection. Such as: $data = $data->jsonserialize(); //do your changes here now. A: The problem is the get is returning a collection of stdObject Instead of adding the new field to the result of your query, modify the model of what you are returning. So, assuming you have a PromotionBanner.php model file in your app directory, edit it and then add these 2 blocks of code: protected $appends = array('promotionImage'); here you just added the custom field. Now you tell the model how to fill it: public function getPromotionImageAttribute() { return (url('home/image/banner/'.$this->promotion_banner_id)); } Now, you get your banners through your model: static function m_get_promotion_banner(){ return \App\PromotionBanner::where('promotion_active','1')->get(); } Now you can access your promotionImage propierty in your result P.D: In the case you are NOT using a model... Well, just create the file app\PromotionImage.php: <?php namespace App; use Illuminate\Database\Eloquent\Model; class PromotionImage extends Model { protected $appends = array('imageAttribute'); protected $table = 'promotion_banner'; public function getPromotionImageAttribute() { return (url('home/image/banner/'.$this->promotion_banner_id)); } /** * The attributes that are mass assignable. * * @var array */ protected $fillable = [ 'promotion_banner_id','promotion_link','about_promotion','promotion_active' ]; A: just improving, in case you need to pass data inside the query $url = 'home/image/banner/'; $query = $query->map(function ($object) use ($url) { // Add the new property $object->promotion_image = URL::to( $url .
$object->promotion_banner_id); // Return the new object return $object; }); A: I've been struggling with this all evening, and I'm still not sure what my problem is. I've used ->get() to actually execute the query, and I've tried by ->toArray() and ->jsonserialize() on the data and it didn't fix the problem. In the end, the work-around I found was this: $task = Tasks::where("user_id", $userId)->first()->toArray(); $task = json_decode(json_encode($task), true); $task["foo"] = "bar"; Using json_encode and then json_decode on it again freed it up from whatever was keeping me from editing it. That's a hacky work-around at best, but if anyone else just needs to push past this problem and get on with their work, this might solve the problem for you.
Indirect modification of overloaded element of Illuminate\Support\Collection has no effect
I'm quite new to the Laravel framework, and I'm coming from CodeIgniter. I would like to add a new key and value from the database static function m_get_promotion_banner(){ $query = DB::table("promotion_banner") ->select('promotion_banner_id','promotion_link','about_promotion') ->where('promotion_active','1') ->get(); if($query != null){ foreach ($query as $key => $row){ $query[$key]['promotion_image'] = URL::to('home/image/banner/'.$row['promotion_banner_id']); } } return $query; } That code was just changed from CodeIgniter to Laravel; in CodeIgniter there is no problem passing a new key and value in a foreach statement, but when I tried it in Laravel I got the following error: Indirect modification of overloaded element of Illuminate\Support\Collection has no effect at HandleExceptions->handleError(8, 'Indirect modification of overloaded element of Illuminate\Support\Collection has no effect', 'C:\xampp\htdocs\laravel-site\application\app\models\main\Main_home_m.php', 653, array('query' => object(Collection), 'row' => array('promotion_banner_id' => 1, 'promotion_link' => 'http://localhost/deal/home/voucher', 'about_promotion' => ''), 'key' => 0)) Please guide me on how to fix this, thank you (:
[ "The result of a Laravel query will always be a Collection. To add a property to all the objects in this collection, you can use the map function.\n$query = $query->map(function ($object) {\n\n // Add the new property\n $object->promotion_image = URL::to('home/image/banner/' . $object->promotion_banner_id);\n\n // Return the new object\n return $object;\n\n});\n\nAlso, you can get and set the properties using actual object properties and not array keys. This makes the code much more readable in my opinion.\n", "For others who needs a solution you can use jsonserialize method to modify the collection.\nSuch as:\n$data = $data->jsonserialize();\n//do your changes here now.\n\n", "The problem is the get is returning a collection of stdObject\nInstead of adding the new field to the result of your query, modify the model of what you are returning.\nSo, assuming you have a PromotionBanner.php model file in your app directory, edit it and then add these 2 blocks of code:\nprotected $appends = array('promotionImage');\n\nhere you just added the custom field. Now you tell the model how to fill it:\npublic function getPromotionImageAttribute() {\n return (url('home/image/banner/'.$this->promotion_banner_id)); \n}\n\nNow, you get your banners through your model:\nstatic function m_get_promotion_banner(){\n return \\App\\PromotionBanner::where('promotion_active','1')->get();\n}\n\nNow you can access your promotionImage propierty in your result\nP.D:\nIn the case you are NOT using a model... Well, just create the file app\\PromotionImage.php:\n<?php\n\nnamespace App;\n\nuse Illuminate\\Database\\Eloquent\\Model;\n\n\nclass PromotionImage extends Model\n{\n protected $appends = array('imageAttribute');\n protected $table = 'promotion_banner'; \n\n public function getPromotionImageAttribute() {\n return (url('home/image/banner/'.$this->promotion_banner_id)); \n }\n\n /**\n * The attributes that are mass assignable.\n *\n * @var array\n */\n protected $fillable = [\n 'promotion_banner_id','promotion_link','about_promotion','promotion_active'\n ];\n\n", "just improving, in case you need to pass data inside the query\n$url = 'home/image/banner/';\n$query = $query->map(function ($object) use ($url) {\n\n // Add the new property\n $object->promotion_image = URL::to( $url . $object->promotion_banner_id);\n\n // Return the new object\n return $object;\n});\n\n", "I've been struggling with this all evening, and I'm still not sure what my problem is.\nI've used ->get() to actually execute the query, and I've tried by ->toArray() and ->jsonserialize() on the data and it didn't fix the problem.\nIn the end, the work-around I found was this:\n$task = Tasks::where(\"user_id\", $userId)->first()->toArray();\n$task = json_decode(json_encode($task), true);\n$task[\"foo\"] = \"bar\";\n\nUsing json_encode and then json_decode on it again freed it up from whatever was keeping me from editing it.\nThat's a hacky work-around at best, but if anyone else just needs to push past this problem and get on with their work, this might solve the problem for you.\n" ]
[ 10, 4, 0, 0, 0 ]
[]
[]
[ "database", "laravel" ]
stackoverflow_0044539395_database_laravel.txt
Q: How to do delete confirmation for table data with a Bootstrap modal in Django? I'm having a table to show a list of actions in my app. I can delete any action in that table. So, I have added a delete button in every row. This delete button will trigger a 'delete confirmation' bootstrap modal. <table class="table table-hover"> <thead> <tr> <th scope="col">#</th> <th scope="col" class="th-lg">Name</th> </tr> </thead> {% for action in actions_list %} <tbody> <tr class="test"> <th scope="row" class="align-middle">{{ forloop.counter }}</th> <td class="align-middle"> {{action.action_name}} </td> <td class="align-middle"> {{action.id}} </td> <td> <div class="row justify-content-end"> <button id="edit" type="button" class="btn btn-sm btn-dark col col-lg-2" style="color: rgb(255,0,0,0)" > <i class="lni-pencil"></i> </button> <button id="trash" type="button" class="btn btn-sm btn-dark col col-lg-2" style="color: rgb(255,0,0,0)" data-toggle="modal" data-target="#modalConfirmDelete" > <i class="lni-trash"></i> </button> </div> </td> </tr> </tbody> {% endfor %} </table> Below is the code for the 'Delete Confirmation' bootstrap modal. It will have 'Yes' and 'No' buttons. If I click 'Yes', then that particular action id will be passed to the URL and that particular action will be deleted. {% block modalcontent %} <!--Modal: modalConfirmDelete--> <div class="modal fade" id="modalConfirmDelete" tabindex="-1" role="dialog" aria-labelledby="exampleModalLabel" aria-hidden="true" > <div class="modal-dialog modal-sm modal-notify modal-danger" role="document"> <!--Content--> <div class="modal-content text-center"> <!--Header--> <div class="modal-header d-flex justify-content-center"> <p class="heading">Are you sure?</p> </div> <!--Body--> <div class="modal-body"> <i class="fas fa-times fa-4x animated rotateIn"></i> </div> <!--Footer--> <div class="modal-footer flex-center"> <form action="{% url 'delete_action' aid=action.id %}"> {% csrf_token %} <button class="btn btn-outline-danger">Yes</button> </form> <a type="button" class="btn btn-danger waves-effect" data-dismiss="modal" >No</a > </div> </div> <!--/.Content--> </div> </div> {% endblock %} In the above code, I'm using a form tag for the delete action, so that the action-id URL is triggered. Below is the URL to delete an action: re_path(r'^delete_action/(?P<aid>\d+)/', views.delete_action, name='delete_action') Problem I'm facing: I need the action.id value in the modal, which I'm not getting! Please help me to solve this. Thanks in advance :) A: If any of you are going through this scenario, I have a quick fix.
The main idea is to change the form's action URL using JavaScript views.py class DeleteAddressView(DeleteView): success_url = reverse_lazy("home") I will try to provide the minimum solution here: my link in the list for the delete item will be: <a href="{% url 'item-delete' item.id %}" class="dropdown-item text-danger" data-toggle="modal" data-target="#delete-item-modal" id="delete-item" > Remove </a> the modal that pops up will be: <div class="modal fade" id="delete-item-modal"> <div class="modal-dialog"> <div class="modal-content"> <div class="modal-body"> <p>Are you sure you want to remove this item?</p> </div> <div class="justify-content-between mb-2 mr-2 text-right"> <form method="post" id="item-delete-form" > <button type="button" class="btn btn-secondary mr-1" data-dismiss="modal">Cancel</button> {% csrf_token %} <button type="submit" class="btn btn-danger" id="confirm-delete-item-button">Delete</button> </form> </div> </div> </div> </div> Now we have to change the form's action URL to the item's a href value <script> $(document).on('click', '#delete-item', () => { document.getElementById("item-delete-form").action = document.querySelector('#delete-item').href }); </script> I know this is too late for your question but it can be helpful for others. This is the easiest way to remove an item from the list without redirecting the page to a confirmation page. NOTE: the frontend framework Bootstrap is used to display the modal, so you must check whether Bootstrap is working before continuing with this solution. A: For more explanation on Gorkali's answer, you can check here: https://elennion.wordpress.com/2018/10/08/bootstrap-4-delete-confirmation-modal-for-list-of-items/ This is how I solved it, based on the above answer, using plain JavaScript, and adding some more functionality: in my_template.html: <a href="{% url 'report_generate' %}" class="btn btn-primary" id="generate_{{report.id}}" data-toggle="modal" data-target="#confirmModal" data-message="If you proceed, the existing report will be overwritten." data-buttontext="Proceed"> Regenerate </a> <a href="{% url 'report_generate' %}" class="btn btn-primary" id="finalize_{{report.id}}" data-toggle="modal" data-target="#confirmModal" data-message="If you proceed, the existing report will be finalized. After that, it can no longer be edited." data-buttontext="Finalize Report"> Finalize </a> {% include "includes/confirm_modal.html" %} with the include file confirm_modal.html: <div class="modal fade" id="confirmModal" tabindex="-1" caller-id="" role="dialog" aria-labelledby="confirmModalLabel" aria-hidden="true"> <div class="modal-dialog modal-dialog-centered" role="document"> <div class="modal-content"> <div class="modal-body" id="modal-message"> Do you wish to proceed?
</div> <div class="modal-footer"> <button type="button" class="btn btn-secondary" data-dismiss="modal">Cancel</button> <button type="button" class="btn btn-primary" data-dismiss="modal" id="confirmButtonModal">Confirm</button> </div> </div> </div> </div> <script type="text/javascript"> document.addEventListener('DOMContentLoaded', () => { var buttons = document.querySelectorAll("[data-target='#confirmModal']"); for (const button of buttons) { button.addEventListener("click", function(event) { // find the modal and add the caller-id as an attribute var modal = document.getElementById("confirmModal"); modal.setAttribute("caller-id", this.getAttribute("id")); // extract texts from calling element and replace the modals texts with it if ("message" in this.dataset) { document.getElementById("modal-message").innerHTML = this.dataset.message; }; if ("buttontext" in this.dataset) { document.getElementById("confirmButtonModal").innerHTML = this.dataset.buttontext; }; }) } document.getElementById("confirmButtonModal").onclick = () => { // when the Confirm Button in the modal is clicked var button_clicked = event.target var caller_id = button_clicked.closest("#confirmModal").getAttribute("caller-id"); var caller = document.getElementById(caller_id); // open the url that was specified for the caller window.location = caller.getAttribute("href"); }; }); </script> A: Delete link: <a href="javascript:void(0)" data-toggle="modal" class="confirm-delete" data-url="{% url 'account:delete_address' pk=address.id %}" data-target="#deleteItemModal" data-message="Êtes-vous sûr de supprimer l'article ?" > <i class="far fa-trash-alt"></i> <span>Supprimer</span> </a> Modal: <!-- Modal --> <div id="container_delete"> <div class="modal fade" id="deleteItemModal" tabindex="-1" role="dialog" aria-labelledby="deleteItemModalLabel" aria-hidden="true"> <div class="modal-dialog" role="document"> </div> <div class="modal-content"> <div class="modal-header"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"> <span aria-hidden="true">&times;</span> </button> </div> <div class="modal-body confirm-delete text-center" > <div class="alert" id="delete_item_alert"></div> <div id="modal-message"></div> <hr> <form action="" method="post" id="form_confirm_modal"> {% csrf_token %} <button type="button" class="btn btn-danger" data-dismiss="modal" id="confirmButtonModal">Oui</button> <button type="button" class="btn btn-primary" data-dismiss="modal">Non</button> </form> <input type="hidden" id="address_suppress"/> </div> </div> </div> </div> <script type="text/javascript"> document.addEventListener('DOMContentLoaded', () => { let form_confirm = document.querySelector('#form_confirm_modal') let buttons = document.querySelectorAll("[data-target='#deleteItemModal']"); buttons.forEach(button => { button.addEventListener("click", () => { // extract texts from calling element and replace the modals texts with it if (button.dataset.message) { document.getElementById("modal-message").innerHTML = button.dataset.message; } // extract url from calling element and replace the modals texts with it if (button.dataset.url) { form_confirm.action = button.dataset.url; } }) }); let confirmModal = document.getElementById("confirmButtonModal") confirmModal.addEventListener('click', () => { form_confirm.submit(); }); }); </script> Views: class DeleteAddressView(DeleteView, SuccessMessageMixin): template_name = 'account/address.html' success_message = 'Adresse supprimée' # model = Address def get_object(self, queryset=None): _id = 
int(self.kwargs.get('pk')) address = get_object_or_404(Address, pk=_id) return address def get_success_url(self): pk = self.request.user.id return reverse_lazy('account:address', args=(pk,)) A: Try this. In your delete link: <a href="{% url 'your-delete-url' pk=your.id %}" class="confirm-delete" title="Delete" data-toggle="modal" data-target="#confirmDeleteModal" id="deleteButton{{your.id}}"> Your modal: <div class="modal fade" id="confirmDeleteModal" tabindex="-1" caller-id="" role="dialog" aria-labelledby="confirmDeleteModalLabel" aria-hidden="true"> <div class="modal-dialog" role="document"> <div class="modal-content"> <div class="modal-body confirm-delete"> This action is permanent! </div> <div class="modal-footer"> <button type="button" class="btn btn-secondary" data-dismiss="modal">Cancel</button> <button type="button" class="btn btn-danger" data-dismiss="modal" id="confirmDeleteButtonModal">Delete</button> </div> </div> </div> </div> <script type="text/javascript"> $(document).on('click', '.confirm-delete', function () { $("#confirmDeleteModal").attr("caller-id", $(this).attr("id")); }); $(document).on('click', '#confirmDeleteButtonModal', function () { var caller = $("#confirmDeleteButtonModal").closest(".modal").attr("caller-id"); window.location = $("#".concat(caller)).attr("href"); }); </script> A: I got the example from @dipesh, but to make it work for me I needed to change some things (the 'a' tag and the JavaScript) to get the correct element. my script function delete_user(selected_user){ document.getElementById("item-delete-form").action = selected_user.href } my link in the list for the delete item will be: <a href="{% url 'item-delete' item.id %}" class="dropdown-item text-danger" data-toggle="modal" data-target="#delete-item-modal" onclick="delete_user(this)" > Remove </a> my modal <div class="modal fade" id="delete-item-modal"> <div class="modal-dialog"> <div class="modal-content"> <div class="modal-body"> <p>Are you sure you want to remove this item?</p> </div> <div class="justify-content-between mb-2 mr-2 text-right"> <form method="post" id="item-delete-form" > <button type="button" class="btn btn-secondary mr-1" data-dismiss="modal">Cancel</button> {% csrf_token %} <button type="submit" class="btn btn-danger" id="confirm-delete-item-button">Delete</button> </form> </div> </div> </div> </div> A: You can also do it without JavaScript (see also option 1 in the answer by @lemayzeur in this question). Give the modal a variable name and call it using item.id: # in your link to the modal data-target="#delete-item-modal-{{item.id}}" # in your modal id="delete-item-modal-{{item.id}}"
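All of the answers above share one pattern: a single shared modal whose form action (or redirect target) is rewritten from the clicked link's href. A minimal framework-free sketch of that pattern, with illustrative class and id names that are not taken from the question:

<a href="{% url 'delete_action' aid=action.id %}"
   class="js-confirm-delete"
   data-toggle="modal" data-target="#modalConfirmDelete">Delete</a>

<script>
  // One delegated listener covers every delete link in the table
  document.addEventListener('click', function (event) {
    var link = event.target.closest('.js-confirm-delete');
    if (!link) return;
    event.preventDefault();
    // Point the shared modal's form at the clicked row's delete URL
    document.querySelector('#modalConfirmDelete form').action = link.getAttribute('href');
  });
</script>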
How to do delete confirmation for table data with a Bootstrap modal in Django?
I'm having a table to show a list of actions in my app. I can delete any action in that table. So, I have added a delete button in every row. This delete button will trigger a 'delete confirmation' bootstrap modal. <table class="table table-hover"> <thead> <tr> <th scope="col">#</th> <th scope="col" class="th-lg">Name</th> </tr> </thead> {% for action in actions_list %} <tbody> <tr class="test"> <th scope="row" class="align-middle">{{ forloop.counter }}</th> <td class="align-middle"> {{action.action_name}} </td> <td class="align-middle"> {{action.id}} </td> <td> <div class="row justify-content-end"> <button id="edit" type="button" class="btn btn-sm btn-dark col col-lg-2" style="color: rgb(255,0,0,0)" > <i class="lni-pencil"></i> </button> <button id="trash" type="button" class="btn btn-sm btn-dark col col-lg-2" style="color: rgb(255,0,0,0)" data-toggle="modal" data-target="#modalConfirmDelete" > <i class="lni-trash"></i> </button> </div> </td> </tr> </tbody> {% endfor %} </table> Below is the code for the 'Delete Confirmation' bootstrap modal. It will have 'Yes' and 'No' buttons. If I click 'Yes', then that particular action id will be passed to the URL and that particular action will be deleted. {% block modalcontent %} <!--Modal: modalConfirmDelete--> <div class="modal fade" id="modalConfirmDelete" tabindex="-1" role="dialog" aria-labelledby="exampleModalLabel" aria-hidden="true" > <div class="modal-dialog modal-sm modal-notify modal-danger" role="document"> <!--Content--> <div class="modal-content text-center"> <!--Header--> <div class="modal-header d-flex justify-content-center"> <p class="heading">Are you sure?</p> </div> <!--Body--> <div class="modal-body"> <i class="fas fa-times fa-4x animated rotateIn"></i> </div> <!--Footer--> <div class="modal-footer flex-center"> <form action="{% url 'delete_action' aid=action.id %}"> {% csrf_token %} <button class="btn btn-outline-danger">Yes</button> </form> <a type="button" class="btn btn-danger waves-effect" data-dismiss="modal" >No</a > </div> </div> <!--/.Content--> </div> </div> {% endblock %} In the above code, I'm using a form tag for the delete action, so that the action-id URL is triggered. Below is the URL to delete an action: re_path(r'^delete_action/(?P<aid>\d+)/', views.delete_action, name='delete_action') Problem I'm facing: I need the action.id value in the modal, which I'm not getting! Please help me to solve this. Thanks in advance :)
[ "If any of you are going through this scenario, I have a quick fix.\n\nThe main idea is to change the form's action URL using Javascript\n\nviews.py\nclass DeleteAddressView(DeleteView):\n success_url = reverse_lazy(\"home\")\n\nI will try to provide the minimum solution here:\nmy link in the list for delete item will be:\n<a\n href=\"{% url 'item-delete' item.id %}\"\n class=\"dropdown-item text-danger\"\n data-toggle=\"modal\"\n data-target=\"#delete-item-modal\"\n id=\"delete-item\"\n>\n Remove\n</a>\n\nmodal that popup will be:\n<div class=\"modal fade\" id=\"delete-item-modal\">\n <div class=\"modal-dialog\">\n <div class=\"modal-content\">\n <div class=\"modal-body\">\n <p>Are you sure, You want to remove this item?</p>\n </div>\n <div class=\"justify-content-between mb-2 mr-2 text-right\">\n <form method=\"post\"\n id=\"item-delete-form\"\n >\n <button type=\"button\" class=\"btn btn-secondary mr-1\" data-dismiss=\"modal\">Cancel</button>\n {% csrf_token %}\n <button type=\"submit\" class=\"btn btn-danger\" id=\"confirm-delete-item-button\">Delete</button>\n </form>\n </div>\n </div>\n </div>\n</div>\n\n\nNow we have to change the form action URL with the item's a href value\n\n<script>\n $(document).on('click', '#delete-item', () => {\n document.getElementById(\"item-delete-form\").action = document.querySelector('#delete-item').href\n });\n</script>\n\nI know this is too late for your question but can be helpful for others. This is the easiest way to remove an item from the list without redirecting the page to the confirmation page.\n\nNOTE: frontend framework bootstrap is used to display the modal, so you must check if bootstrap is working or not before continuing with this solution.\n\n", "For more explanation on Gorkali's answer, you can check here: https://elennion.wordpress.com/2018/10/08/bootstrap-4-delete-confirmation-modal-for-list-of-items/\nThis is how is how I solved it, based on the above answer, using plain JavaScript, and adding some more functionality:\nin my_template.html:\n<a href=\"{% url 'report_generate' %}\" \n class=\"btn btn-primary\" id=\"generate_{{report.id}}\"\n data-toggle=\"modal\" data-target=\"#confirmModal\" \n data-message=\"If you proceed, the existing report will be overwritten.\"\n data-buttontext=\"Proceed\">\n Regenerate\n</a>\n<a href=\"{% url 'report_generate'\" \n class=\"btn btn-primary\" id=\"finalize_{{report.id}}\"\n data-toggle=\"modal\" data-target=\"#confirmModal\" \n data-message=\"If you proceed, the existing report will be finalized. 
After that, it can no longer be edited.\"\n data-buttontext=\"Finalize Report\">\n Finalize\n</a>\n\n{% include \"includes/confirm_modal.html\" %}\n\nwith the include file confirm_modal.html:\n<div class=\"modal fade\" id=\"confirmModal\" tabindex=\"-1\" caller-id=\"\" role=\"dialog\" aria-labelledby=\"confirmModalLabel\" aria-hidden=\"true\">\n <div class=\"modal-dialog modal-dialog-centered\" role=\"document\">\n <div class=\"modal-content\">\n <div class=\"modal-body\" id=\"modal-message\">\n Do you wish to proceed?\n </div>\n <div class=\"modal-footer\">\n <button type=\"button\" class=\"btn btn-secondary\" data-dismiss=\"modal\">Cancel</button>\n <button type=\"button\" class=\"btn btn-primary\" data-dismiss=\"modal\" id=\"confirmButtonModal\">Confirm</button>\n </div>\n </div>\n </div>\n</div>\n\n<script type=\"text/javascript\">\n document.addEventListener('DOMContentLoaded', () => {\n var buttons = document.querySelectorAll(\"[data-target='#confirmModal']\");\n for (const button of buttons) {\n button.addEventListener(\"click\", function(event) {\n // find the modal and add the caller-id as an attribute\n var modal = document.getElementById(\"confirmModal\");\n modal.setAttribute(\"caller-id\", this.getAttribute(\"id\"));\n\n // extract texts from calling element and replace the modals texts with it\n if (\"message\" in this.dataset) {\n document.getElementById(\"modal-message\").innerHTML = this.dataset.message;\n };\n if (\"buttontext\" in this.dataset) {\n document.getElementById(\"confirmButtonModal\").innerHTML = this.dataset.buttontext;\n };\n })\n }\n\n document.getElementById(\"confirmButtonModal\").onclick = () => {\n // when the Confirm Button in the modal is clicked\n var button_clicked = event.target\n var caller_id = button_clicked.closest(\"#confirmModal\").getAttribute(\"caller-id\");\n var caller = document.getElementById(caller_id);\n // open the url that was specified for the caller\n window.location = caller.getAttribute(\"href\");\n };\n });\n</script>\n\n", "Delete link:\n<a href=\"javascript:void(0)\" data-toggle=\"modal\"\n class=\"confirm-delete\"\n data-url=\"{% url 'account:delete_address' pk=address.id %}\"\n data-target=\"#deleteItemModal\"\n data-message=\"Êtes-vous sûr de supprimer l'article ?\"\n >\n <i class=\"far fa-trash-alt\"></i>\n <span>Supprimer</span>\n </a>\n\nModal:\n <!-- Modal -->\n<div id=\"container_delete\">\n<div class=\"modal fade\" id=\"deleteItemModal\" tabindex=\"-1\" role=\"dialog\"\n aria-labelledby=\"deleteItemModalLabel\" aria-hidden=\"true\">\n <div class=\"modal-dialog\" role=\"document\"> </div>\n <div class=\"modal-content\">\n\n <div class=\"modal-header\">\n <button type=\"button\" class=\"close\" data-dismiss=\"modal\" aria-label=\"Close\">\n <span aria-hidden=\"true\">&times;</span>\n </button>\n </div>\n\n <div class=\"modal-body confirm-delete text-center\" >\n <div class=\"alert\" id=\"delete_item_alert\"></div>\n <div id=\"modal-message\"></div>\n <hr>\n <form action=\"\" method=\"post\" id=\"form_confirm_modal\">\n {% csrf_token %}\n <button type=\"button\" class=\"btn btn-danger\" data-dismiss=\"modal\" id=\"confirmButtonModal\">Oui</button>\n <button type=\"button\" class=\"btn btn-primary\" data-dismiss=\"modal\">Non</button>\n </form>\n <input type=\"hidden\" id=\"address_suppress\"/>\n </div>\n\n </div>\n\n </div>\n</div>\n\n<script type=\"text/javascript\">\n\n document.addEventListener('DOMContentLoaded', () => {\n let form_confirm = document.querySelector('#form_confirm_modal')\n let buttons = 
document.querySelectorAll(\"[data-target='#deleteItemModal']\");\n buttons.forEach(button => {\n button.addEventListener(\"click\", () => {\n\n // extract texts from calling element and replace the modals texts with it\n if (button.dataset.message) {\n document.getElementById(\"modal-message\").innerHTML = button.dataset.message;\n }\n // extract url from calling element and replace the modals texts with it\n if (button.dataset.url) {\n form_confirm.action = button.dataset.url;\n }\n\n })\n });\n let confirmModal = document.getElementById(\"confirmButtonModal\")\n confirmModal.addEventListener('click', () => {\n form_confirm.submit();\n\n });\n });\n</script>\n\nViews:\nclass DeleteAddressView(DeleteView, SuccessMessageMixin):\n template_name = 'account/address.html'\n success_message = 'Adresse supprimée'\n # model = Address\n\n def get_object(self, queryset=None):\n _id = int(self.kwargs.get('pk'))\n address = get_object_or_404(Address, pk=_id)\n return address\n\n def get_success_url(self):\n pk = self.request.user.id\n return reverse_lazy('account:address', args=(pk,))\n\n", "Try this \n In your delete link\n\n <a href=\"{% url 'your-delete-url' pk=your.id %}\" class=\"confirm-delete\" title=\"Delete\" data-toggle=\"modal\" data-target=\"#confirmDeleteModal\" id=\"deleteButton{{your.id}}\">\n\nYour modal\n<div class=\"modal fade\" id=\"confirmDeleteModal\" tabindex=\"-1\" caller-id=\"\" role=\"dialog\" aria-labelledby=\"confirmDeleteModalLabel\" aria-hidden=\"true\">\n <div class=\"modal-dialog\" role=\"document\">\n <div class=\"modal-content\">\n <div class=\"modal-body confirm-delete\">\n This action is permanent!\n </div>\n <div class=\"modal-footer\">\n <button type=\"button\" class=\"btn btn-secondary\" data-dismiss=\"modal\">Cancel</button>\n <button type=\"button\" class=\"btn btn-danger\" data-dismiss=\"modal\" id=\"confirmDeleteButtonModal\">Delete</button>\n </div>\n </div>\n </div>\n</div>\n\n\n<script type=\"text/javascript\">\n $(document).on('click', '.confirm-delete', function () {\n $(\"#confirmDeleteModal\").attr(\"caller-id\", $(this).attr(\"id\"));\n });\n\n $(document).on('click', '#confirmDeleteButtonModal', function () {\n var caller = $(\"#confirmDeleteButtonModal\").closest(\".modal\").attr(\"caller-id\");\n window.location = $(\"#\".concat(caller)).attr(\"href\");\n });\n</script>\n\n", "I got the example from @dipesh, but to works for me I needed to change somethings(tag 'a' and javascript) to get the correct element.\nmy script\nfunction delete_user(selected_user){\n document.getElementById(\"item-delete-form\").action = selected_user.href\n}\n\nmy link in the list for delete item will be:\n<a\n href=\"{% url 'item-delete' item.id %}\"\n class=\"dropdown-item text-danger\"\n data-toggle=\"modal\"\n data-target=\"#delete-item-modal\"\n onclick=\"delete_user(this)\"\"\n>\n Remove\n</a>\n\nmy modal\n<div class=\"modal fade\" id=\"delete-item-modal\">\n <div class=\"modal-dialog\">\n <div class=\"modal-content\">\n <div class=\"modal-body\">\n <p>Are you sure, You want to remove this item?</p>\n </div>\n <div class=\"justify-content-between mb-2 mr-2 text-right\">\n <form method=\"post\"\n id=\"item-delete-form\"\n >\n <button type=\"button\" class=\"btn btn-secondary mr-1\" data-dismiss=\"modal\">Cancel</button>\n {% csrf_token %}\n <button type=\"submit\" class=\"btn btn-danger\" id=\"confirm-delete-item-button\">Delete</button>\n </form>\n </div>\n </div>\n </div>\n</div>\n\n", "You can also do it without JavaScript (see also option 1 in the answer by 
@lemayzeur in this question). Give the modal a variable name and call it using item.id:\n# in your link to the modal\ndata-target=\"#delete-item-modal-{{item.id}}\"\n\n# in your modal\nid=\"delete-item-modal-{{item.id}}\"\n\n" ]
[ 5, 1, 1, 0, 0, 0 ]
[]
[]
[ "bootstrap_modal", "django", "django_templates", "jinja2", "python" ]
stackoverflow_0059566549_bootstrap_modal_django_django_templates_jinja2_python.txt
Q: Searching for one even and one odd number in string So I'm going through old Advent of Code puzzles and came across this one, and it asks me to search each string to make sure it has at least one even and one odd number in it. However, my function doesn't correctly filter the list. It runs without errors, but it never filters anything and just prints out everything. I don't really know where I'm going wrong, so if there are any pointers to fix it, I would greatly appreciate them. def one_even_one_odd(pass_str: str) -> bool: for i in range(5): if pass_str[i] == pass_str % 2 == 0 and pass_str[i] == pass_str % 2 == 1: return True return False def result(range_from: int, range_to: int) -> int: amount_passwords = 0 each_password = [] for password in range(range_from, range_to + 1): pass_str = str(password) if not pass_str == ''.join(sorted(pass_str)): continue if not one_even_one_odd(pass_str): continue each_password.append(pass_str) amount_passwords += 1 return amount_passwords, each_password def main(): range_from = 138345 range_to = 836215 print(f'Amount of passwords followed by list of passwords: {result(range_from, range_to)}') In this case, the list would print every number in the range, for example "111, 112, 222", but I want it to print only 112, as that is the only number that contains at least one even and one odd number in it.
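Beyond the set-based answer included above, a direct repair of the question's helper; this is a sketch of the intended logic (at least one even digit and at least one odd digit), not code taken from the thread:

def one_even_one_odd(pass_str: str) -> bool:
    # The original compared a character against the whole string;
    # convert each digit to an int before testing its parity.
    digits = [int(ch) for ch in pass_str if ch.isdigit()]
    has_even = any(d % 2 == 0 for d in digits)
    has_odd = any(d % 2 == 1 for d in digits)
    return has_even and has_odd

assert one_even_one_odd("112")
assert not one_even_one_odd("111")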
Searching for one even and one odd number in string
So I'm going through old Advent of Code puzzles and came across this one, and it asks me to search each string to make sure it has at least one even and one odd number in it. However, my function doesn't correctly filter the list. It runs without errors, but it never filters anything and just prints out everything. I don't really know where I'm going wrong, so if there are any pointers to fix it, I would greatly appreciate them. def one_even_one_odd(pass_str: str) -> bool: for i in range(5): if pass_str[i] == pass_str % 2 == 0 and pass_str[i] == pass_str % 2 == 1: return True return False def result(range_from: int, range_to: int) -> int: amount_passwords = 0 each_password = [] for password in range(range_from, range_to + 1): pass_str = str(password) if not pass_str == ''.join(sorted(pass_str)): continue if not one_even_one_odd(pass_str): continue each_password.append(pass_str) amount_passwords += 1 return amount_passwords, each_password def main(): range_from = 138345 range_to = 836215 print(f'Amount of passwords followed by list of passwords: {result(range_from, range_to)}') In this case, the list would print every number in the range, for example "111, 112, 222", but I want it to print only 112, as that is the only number that contains at least one even and one odd number in it.
[ "I would use set operations:\nodds = set('13579')\nevens = set('02468')\n\ndef one_even_one_odd(string):\n S = set(string)\n return bool(odds & S) and bool(evens & S)\n \n \none_even_one_odd('ABCD125')\n# True\n\none_even_one_odd('ABCD135')\n# False\n\n" ]
[ 2 ]
[]
[]
[ "python" ]
stackoverflow_0074677212_python.txt
Q: Angular-universal MutationObserver problem I need to add Angular Universal with SSR to an existing project. I followed this tutorial; everything seemed fine until I ran the project. After I execute npm run dev:ssr I see Compiled successfully and this message: ReferenceError: window is not defined at Module.FARa (/home/project-path/dist/project/server/main.js:69075:26) at __webpack_require__ (/home/project-path/dist/project/server/main.js:20:30) at Module.JMXn (/home/project-path/dist/project/server/main.js:75830:74) at __webpack_require__ (/home/project-path/dist/project/server/main.js:20:30) at Module.PCNd (/home/project-path/dist/project/server/main.js:82492:109) at __webpack_require__ (/home/project-path/dist/project/server/main.js:20:30) at Module.ZAI4 (/home/project-path/dist/project/server/main.js:99022:79) at __webpack_require__ (/home/project-path/dist/project/server/main.js:20:30) at Module.24aS (/home/project-path/dist/project/server/main.js:41256:69) at __webpack_require__ (/home/project-path/dist/project/server/main.js:20:30)
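The mock shown in the answer above papers over MutationObserver; the ReferenceError itself comes from code touching window during server rendering. The usual complement is to guard browser-only APIs behind a platform check, sketched here (the service name is illustrative, not from the thread):

import { Inject, Injectable, PLATFORM_ID } from '@angular/core';
import { isPlatformBrowser } from '@angular/common';

@Injectable({ providedIn: 'root' })
export class BrowserOnlyService {
  constructor(@Inject(PLATFORM_ID) private platformId: Object) {}

  run(work: () => void): void {
    // Only touch window/document/MutationObserver when rendering
    // in a real browser, never in the Node server process
    if (isPlatformBrowser(this.platformId)) {
      work();
    }
  }
}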
Angular-universal MutationObserver problem
I need to add Angular Universal with SSR to an existing project. I followed this tutorial; everything seemed fine until I ran the project. After I execute npm run dev:ssr I see Compiled successfully and this message: ReferenceError: window is not defined at Module.FARa (/home/project-path/dist/project/server/main.js:69075:26) at __webpack_require__ (/home/project-path/dist/project/server/main.js:20:30) at Module.JMXn (/home/project-path/dist/project/server/main.js:75830:74) at __webpack_require__ (/home/project-path/dist/project/server/main.js:20:30) at Module.PCNd (/home/project-path/dist/project/server/main.js:82492:109) at __webpack_require__ (/home/project-path/dist/project/server/main.js:20:30) at Module.ZAI4 (/home/project-path/dist/project/server/main.js:99022:79) at __webpack_require__ (/home/project-path/dist/project/server/main.js:20:30) at Module.24aS (/home/project-path/dist/project/server/main.js:41256:69) at __webpack_require__ (/home/project-path/dist/project/server/main.js:20:30)
[ "You can try this solution:\nglobal['MutationObserver'] = getMockMutationObserver();\n\nfunction getMockMutationObserver() {\n return class {\n observe(node, options) {}\n disconnect() {}\n takeRecords() {\n return [];\n }\n };\n}\n\ncredit: https://github.com/cloudinary/cloudinary_angular/issues/298\n" ]
[ 0 ]
[]
[]
[ "angular", "angular_universal", "express", "mutation_observers", "server_side_rendering" ]
stackoverflow_0065018826_angular_angular_universal_express_mutation_observers_server_side_rendering.txt
Q: Weblogic: How to prevent "A mismatch exists between the bean code and generated code" when deploying I use WLST (scripted) automatic deployment on WebLogic 12c (12.1.3 - latest). This automatically deploys my enterprise application on a managed server (not the admin server). Note: The error also occurs if I execute the deployment manually. Sometimes I get this exception: A mismatch exists between the bean code and generated code. ... My application does not get deployed then. This cannot be fixed by deploying again; only deleting the deployment with the help of the AdminServer console works reliably. Any ideas how this is triggered and/or how I can "fix" (heal) it reliably? I have seen this error being logged and reported numerous times, even with older versions of WebLogic, but with no workable solution in sight. A: Sometimes weblogic has caching issues when you try to redeploy over an existing app. Trying an undeploy and redeploy normally corrects it: undeploy(appName=application_name); save() activate(300000, "block='true'") deploy(appName=application_name, path=deployment_artifact, targets=target_names, planPath=deployment_plan); save() activate(300000, "block='true'") A: If nothing works, try to clear the cache as described here: https://www.funoracleapps.com/2022/06/how-to-clear-weblogic-cache.html
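The first answer's undeploy-then-redeploy sequence above, assembled into one WLST (Jython) session for reference; credentials, application name, archive path, and target are placeholders, and depending on your domain setup you may need edit()/startEdit() around the save()/activate() calls:

connect('weblogic', 'password', 't3://adminhost:7001')

appName = 'myApp'

# Remove the stale deployment so no previously generated bean code is reused
undeploy(appName)
save()
activate(300000, block='true')

# Deploy the freshly built archive to the managed server
deploy(appName, '/path/to/myApp.ear', targets='managed1')
save()
activate(300000, block='true')

disconnect()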
Weblogic: How to prevent "A mismatch exists between the bean code and generated code" when deploying
I use WLST (scripted) automatic deployment on WebLogic 12c (12.1.3 - latest). This automatically deploys my enterprise application on a managed server (not the admin server). Note: The error also occurs if I execute the deployment manually. Sometimes I get this exception: A mismatch exists between the bean code and generated code. ... My application does not get deployed then. This cannot be fixed by deploying again; only deleting the deployment with the help of the AdminServer console works reliably. Any ideas how this is triggered and/or how I can "fix" (heal) it reliably? I have seen this error being logged and reported numerous times, even with older versions of WebLogic, but with no workable solution in sight.
[ "Sometimes weblogic has caching issues when you try to redeploy over an existing app. Trying an undeploy and redeploy normally corrects it:\nundeploy(appName=application_name);\nsave()\nactivate(300000, \"block='true'\")\n\ndeploy(appName=application_name, path=deployment_artifact, targets=target_names, planPath=deployment_plan);\nsave()\nactivate(300000, \"block='true'\")\n\n", "If nothing works, try to clear the cache as described here: https://www.funoracleapps.com/2022/06/how-to-clear-weblogic-cache.html\n" ]
[ 1, 0 ]
[]
[]
[ "jakarta_ee", "java", "weblogic", "weblogic12c" ]
stackoverflow_0028174118_jakarta_ee_java_weblogic_weblogic12c.txt
Q: Detect if Multiple HDDs are RAID and its RAID mode (Windows) In order to monitor system storage, I need to programmatically find: if multiple HDDs are used (done) if multiple HDDs are RAID (not done) if they are RAID, what is the RAID mode (RAID0, RAID1, ...) (not even close) What I know/can: There are 2 types of RAID: hardware and software. If it's software, then it can be RAID0 (striped), RAID1 (mirror) and RAID5. I can find if it's software RAID by checking the volume name that is similar in every disk. What I need to find: If it's software RAID, how can I determine the RAID type? (in storage management I can see if the volume is striped or mirror, but I cannot get this information programmatically) If it's hardware RAID, how could I know that? How could I find its RAID type? The programming language is not important; it could be a CMD command, a PowerShell command, WMI or .....
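Since the thread has no real answer, here is a hedged starting point that covers Storage Spaces (Windows' built-in software RAID); it does not cover dynamic-disk volumes or vendor hardware RAID, which generally require the controller vendor's CLI or WMI provider:

# Storage Spaces virtual disks expose their layout directly:
# Simple ~ RAID0 (striped), Mirror ~ RAID1, Parity ~ RAID5
Get-VirtualDisk |
    Select-Object FriendlyName, ResiliencySettingName, NumberOfDataCopies

# Physical disks as Windows sees them; a hardware RAID set often
# surfaces as one logical drive whose Model names the controller
Get-WmiObject Win32_DiskDrive |
    Select-Object Model, InterfaceType, Size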
Detect if Multiple HDDs are RAID and its RAID mode (Windows)
In order to monitor system storage, I need to programmatically find: if multiple HDDs are used (done) if multiple HDDs are RAID (not done) if they are RAID, what is the RAID mode (RAID0, RAID1, ...) (not even close) What I know/can: There are 2 types of RAID: hardware and software. If it's software, then it can be RAID0 (striped), RAID1 (mirror) and RAID5. I can find if it's software RAID by checking the volume name that is similar in every disk. What I need to find: If it's software RAID, how can I determine the RAID type? (in storage management I can see if the volume is striped or mirror, but I cannot get this information programmatically) If it's hardware RAID, how could I know that? How could I find its RAID type? The programming language is not important; it could be a CMD command, a PowerShell command, WMI or .....
[]
[]
[ "idk maybe you should try to search online for a reg key that contains this info\nbut i guess that the os doesn't know this info, but anyway you should try maybe you'll find something about it\n" ]
[ -1 ]
[ "c#", "c++", "cmd", "powershell", "python" ]
stackoverflow_0049852709_c#_c++_cmd_powershell_python.txt
Q: Get multiple level of one to many relationship results I have a graph that looks like this A1 A2 A3 A4 A5 \ / \ | / S1 S2 \ / E1 There can be many E nodes. But the essence from the above is: It is a one to many mapping between E and S nodes It is a one to many mapping between S and A nodes The same S1 can also point to another E node, but I want to extract the following relationship: For each E node, get all the S nodes and for each S node we get, get all the A nodes. I know for just E and S, I can do: match (e:E)<--(s:S) return e, collect(distinct s) But I am not sure how to do this with two level of such mapping A: Given the following stub data to represent your graph CREATE (e1:E {id: 'e1'}) CREATE (e2:E {id: 'e2'}) CREATE (s1:S {id: 's1'}) CREATE (s2:S {id: 's2'}) CREATE (a1:A {id: 'a1'}) CREATE (a2:A {id: 'a2'}) CREATE (a3:A {id: 'a3'}) CREATE (a4:A {id: 'a4'}) CREATE (a5:A {id: 'a5'}) CREATE (e1)-[:TO]->(s1) CREATE (e1)-[:TO]->(s2) CREATE (s1)-[:TO]->(a1) CREATE (s1)-[:TO]->(a2) CREATE (s2)-[:TO]->(a3) CREATE (s2)-[:TO]->(a4) CREATE (s2)-[:TO]->(a5) CREATE (e2)-[:TO]->(s2) You can retrieve paths from E to A simply by aliasing the full pattern MATCH path=(e:E)-->(:S)-->(:A) RETURN path This will give you a full path, note that a path is a sequenced list of relationships having a start and end node Graph result Tabular result ╒═══════════════════════════════════════════════════════╕ │"path" │ ╞═══════════════════════════════════════════════════════╡ │[{"id":"e1"},{},{"id":"s2"},{"id":"s2"},{},{"id":"a4"}]│ ├───────────────────────────────────────────────────────┤ │[{"id":"e1"},{},{"id":"s2"},{"id":"s2"},{},{"id":"a5"}]│ ├───────────────────────────────────────────────────────┤ │[{"id":"e1"},{},{"id":"s2"},{"id":"s2"},{},{"id":"a3"}]│ ├───────────────────────────────────────────────────────┤ │[{"id":"e1"},{},{"id":"s1"},{"id":"s1"},{},{"id":"a1"}]│ ├───────────────────────────────────────────────────────┤ │[{"id":"e1"},{},{"id":"s1"},{"id":"s1"},{},{"id":"a2"}]│ ├───────────────────────────────────────────────────────┤ │[{"id":"e2"},{},{"id":"s2"},{"id":"s2"},{},{"id":"a4"}]│ ├───────────────────────────────────────────────────────┤ │[{"id":"e2"},{},{"id":"s2"},{"id":"s2"},{},{"id":"a5"}]│ ├───────────────────────────────────────────────────────┤ │[{"id":"e2"},{},{"id":"s2"},{"id":"s2"},{},{"id":"a3"}]│ └───────────────────────────────────────────────────────┘ To maybe make it more clear, let's limit the result to only one path MATCH path=(e:E)-->(:S)-->(:A) RETURN path LIMIT 1 Tabular result ╒═══════════════════════════════════════════════════════╕ │"path" │ ╞═══════════════════════════════════════════════════════╡ │[{"id":"e1"},{},{"id":"s2"},{"id":"s2"},{},{"id":"a4"}]│ └───────────────────────────────────────────────────────┘ You can now collect paths per E node MATCH path=(e:E)-->(:S)-->(:A) RETURN e, collect(path) AS paths The graph result would be similar since it returns all nodes and rels, but the tabular result shows now the aggregation ╒═══════════╤══════════════════════════════════════════════════════════════════════╕ │"e" │"paths" │ ╞═══════════╪══════════════════════════════════════════════════════════════════════╡ │{"id":"e1"}│[[{"id":"e1"},{},{"id":"s2"},{"id":"s2"},{},{"id":"a4"}],[{"id":"e1"},│ │ │{},{"id":"s2"},{"id":"s2"},{},{"id":"a5"}],[{"id":"e1"},{},{"id":"s2"}│ │ │,{"id":"s2"},{},{"id":"a3"}],[{"id":"e1"},{},{"id":"s1"},{"id":"s1"},{│ │ │},{"id":"a1"}],[{"id":"e1"},{},{"id":"s1"},{"id":"s1"},{},{"id":"a2"}]│ │ │] │ 
├───────────┼──────────────────────────────────────────────────────────────────────┤ │{"id":"e2"}│[[{"id":"e2"},{},{"id":"s2"},{"id":"s2"},{},{"id":"a4"}],[{"id":"e2"},│ │ │{},{"id":"s2"},{"id":"s2"},{},{"id":"a5"}],[{"id":"e2"},{},{"id":"s2"}│ │ │,{"id":"s2"},{},{"id":"a3"}]] │ └───────────┴──────────────────────────────────────────────────────────────────────┘ So far we returned full paths. You can extract nodes only from them using the nodes() function MATCH path=(e:E)-->(:S)-->(:A) RETURN nodes(path) ╒═════════════════════════════════════╕ │"nodes(path)" │ ╞═════════════════════════════════════╡ │[{"id":"e1"},{"id":"s2"},{"id":"a4"}]│ ├─────────────────────────────────────┤ │[{"id":"e1"},{"id":"s2"},{"id":"a5"}]│ ├─────────────────────────────────────┤ │[{"id":"e1"},{"id":"s2"},{"id":"a3"}]│ ├─────────────────────────────────────┤ │[{"id":"e1"},{"id":"s1"},{"id":"a1"}]│ ├─────────────────────────────────────┤ │[{"id":"e1"},{"id":"s1"},{"id":"a2"}]│ ├─────────────────────────────────────┤ │[{"id":"e2"},{"id":"s2"},{"id":"a4"}]│ ├─────────────────────────────────────┤ │[{"id":"e2"},{"id":"s2"},{"id":"a5"}]│ ├─────────────────────────────────────┤ │[{"id":"e2"},{"id":"s2"},{"id":"a3"}]│ └─────────────────────────────────────┘ And now, if you want to return a json tree like structure, you can use map projections MATCH (e:E) RETURN e {.*, s: [(e)-->(s:S) | s{.*, a: [(s)-->(a:A) | a{.*}]}]} ╒══════════════════════════════════════════════════════════════════════╕ │"e" │ ╞══════════════════════════════════════════════════════════════════════╡ │{"s":[{"a":[{"id":"a4"},{"id":"a5"},{"id":"a3"}],"id":"s2"},{"a":[{"id│ │":"a1"},{"id":"a2"}],"id":"s1"}],"id":"e1"} │ ├──────────────────────────────────────────────────────────────────────┤ │{"s":[{"a":[{"id":"a4"},{"id":"a5"},{"id":"a3"}],"id":"s2"}],"id":"e2"│ │} │ └──────────────────────────────────────────────────────────────────────┘ Let's format the first result a bit { "s": [ { "a": [ { "id": "a4" }, { "id": "a5" }, { "id": "a3" } ], "id": "s2" }, { "a": [ { "id": "a1" }, { "id": "a2" } ], "id": "s1" } ], "id": "e1" } This sounds a bit cryptic but as soon as you understand how it works it is quite powerful, I suggest reading a bit more about it here : https://neo4j.com/docs/cypher-manual/current/syntax/maps/#cypher-map-projection https://neo4j.com/blog/cypher-graphql-neo4j-3-1-preview/ https://neo4j.com/developer-blog/a-comprehensive-guide-to-cypher-map-projection/ A: I've encountered similar issues in endogamous genealogy family trees. My experience might help you and others? Endogamy is typically describes as a person appearing more than once in the family tree (multiple Ahnentafels). But is it easier to analyze with graphs ... there are multiple paths to that common ancestor(s). The graph has several enhancements. Paths are aliased and uniquely identifiable as you've described. The sequenced numbers in the path are used to create their ORDPATH which is a concatenated bitstring that sorts hierarchically. You can calculate the coefficient of relationship (COR) for between individuals and the coefficient of inbreeding if there is endogamy. Creating a knowledge graph involves memorializing analytics in the graph as nodes, relationships or properties. This one-time activity makes downstream analytics faster and more intuitive (e.g., easier). I'm now enhancing the graph further by creating path nodes with properties of the aliased path, start and end node identifiers. These nodes can be related to primary data making them supernodes. 
Using apoc.coll.intersection you can add an "intersect" relationship between paths. The initial work on creating this KNOWLEDGE GRAPH is posted here: https://www.wai.md/post/endogamy-i-the-knowledge-graph The second effort should appear soon at the same blog.
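A shorter alternative to the map projection shown above, building the same grouping with two collect passes; the arrow direction follows the question's MATCH (e:E)<--(s:S) pattern, and relationship types are deliberately left open:

MATCH (e:E)<--(s:S)
OPTIONAL MATCH (s)<--(a:A)
WITH e, s, collect(DISTINCT a) AS aNodes
RETURN e, collect({s: s, aNodes: aNodes}) AS sGroups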
Get multiple level of one to many relationship results
I have a graph that looks like this A1 A2 A3 A4 A5 \ / \ | / S1 S2 \ / E1 There can be many E nodes. But the essence from the above is: It is a one to many mapping between E and S nodes It is a one to many mapping between S and A nodes The same S1 can also point to another E node, but I want to extract the following relationship: For each E node, get all the S nodes and for each S node we get, get all the A nodes. I know for just E and S, I can do: match (e:E)<--(s:S) return e, collect(distinct s) But I am not sure how to do this with two level of such mapping
[ "Given the following stub data to represent your graph\nCREATE (e1:E {id: 'e1'})\nCREATE (e2:E {id: 'e2'})\nCREATE (s1:S {id: 's1'})\nCREATE (s2:S {id: 's2'})\nCREATE (a1:A {id: 'a1'})\nCREATE (a2:A {id: 'a2'})\nCREATE (a3:A {id: 'a3'})\nCREATE (a4:A {id: 'a4'})\nCREATE (a5:A {id: 'a5'})\nCREATE (e1)-[:TO]->(s1)\nCREATE (e1)-[:TO]->(s2)\nCREATE (s1)-[:TO]->(a1)\nCREATE (s1)-[:TO]->(a2)\nCREATE (s2)-[:TO]->(a3)\nCREATE (s2)-[:TO]->(a4)\nCREATE (s2)-[:TO]->(a5)\nCREATE (e2)-[:TO]->(s2)\n\n\nYou can retrieve paths from E to A simply by aliasing the full pattern\nMATCH path=(e:E)-->(:S)-->(:A)\nRETURN path\n\nThis will give you a full path, note that a path is a sequenced list of relationships having a start and end node\nGraph result\n\nTabular result\n╒═══════════════════════════════════════════════════════╕\n│\"path\" │\n╞═══════════════════════════════════════════════════════╡\n│[{\"id\":\"e1\"},{},{\"id\":\"s2\"},{\"id\":\"s2\"},{},{\"id\":\"a4\"}]│\n├───────────────────────────────────────────────────────┤\n│[{\"id\":\"e1\"},{},{\"id\":\"s2\"},{\"id\":\"s2\"},{},{\"id\":\"a5\"}]│\n├───────────────────────────────────────────────────────┤\n│[{\"id\":\"e1\"},{},{\"id\":\"s2\"},{\"id\":\"s2\"},{},{\"id\":\"a3\"}]│\n├───────────────────────────────────────────────────────┤\n│[{\"id\":\"e1\"},{},{\"id\":\"s1\"},{\"id\":\"s1\"},{},{\"id\":\"a1\"}]│\n├───────────────────────────────────────────────────────┤\n│[{\"id\":\"e1\"},{},{\"id\":\"s1\"},{\"id\":\"s1\"},{},{\"id\":\"a2\"}]│\n├───────────────────────────────────────────────────────┤\n│[{\"id\":\"e2\"},{},{\"id\":\"s2\"},{\"id\":\"s2\"},{},{\"id\":\"a4\"}]│\n├───────────────────────────────────────────────────────┤\n│[{\"id\":\"e2\"},{},{\"id\":\"s2\"},{\"id\":\"s2\"},{},{\"id\":\"a5\"}]│\n├───────────────────────────────────────────────────────┤\n│[{\"id\":\"e2\"},{},{\"id\":\"s2\"},{\"id\":\"s2\"},{},{\"id\":\"a3\"}]│\n└───────────────────────────────────────────────────────┘\n\nTo maybe make it more clear, let's limit the result to only one path\nMATCH path=(e:E)-->(:S)-->(:A)\nRETURN path\nLIMIT 1\n\n\nTabular result\n╒═══════════════════════════════════════════════════════╕\n│\"path\" │\n╞═══════════════════════════════════════════════════════╡\n│[{\"id\":\"e1\"},{},{\"id\":\"s2\"},{\"id\":\"s2\"},{},{\"id\":\"a4\"}]│\n└───────────────────────────────────────────────────────┘\n\nYou can now collect paths per E node\nMATCH path=(e:E)-->(:S)-->(:A)\nRETURN e, collect(path) AS paths\n\nThe graph result would be similar since it returns all nodes and rels, but the tabular result shows now the aggregation\n╒═══════════╤══════════════════════════════════════════════════════════════════════╕\n│\"e\" │\"paths\" │\n╞═══════════╪══════════════════════════════════════════════════════════════════════╡\n│{\"id\":\"e1\"}│[[{\"id\":\"e1\"},{},{\"id\":\"s2\"},{\"id\":\"s2\"},{},{\"id\":\"a4\"}],[{\"id\":\"e1\"},│\n│ │{},{\"id\":\"s2\"},{\"id\":\"s2\"},{},{\"id\":\"a5\"}],[{\"id\":\"e1\"},{},{\"id\":\"s2\"}│\n│ │,{\"id\":\"s2\"},{},{\"id\":\"a3\"}],[{\"id\":\"e1\"},{},{\"id\":\"s1\"},{\"id\":\"s1\"},{│\n│ │},{\"id\":\"a1\"}],[{\"id\":\"e1\"},{},{\"id\":\"s1\"},{\"id\":\"s1\"},{},{\"id\":\"a2\"}]│\n│ │] │\n├───────────┼──────────────────────────────────────────────────────────────────────┤\n│{\"id\":\"e2\"}│[[{\"id\":\"e2\"},{},{\"id\":\"s2\"},{\"id\":\"s2\"},{},{\"id\":\"a4\"}],[{\"id\":\"e2\"},│\n│ │{},{\"id\":\"s2\"},{\"id\":\"s2\"},{},{\"id\":\"a5\"}],[{\"id\":\"e2\"},{},{\"id\":\"s2\"}│\n│ │,{\"id\":\"s2\"},{},{\"id\":\"a3\"}]] 
│\n└───────────┴──────────────────────────────────────────────────────────────────────┘\n\nSo far we returned full paths. You can extract nodes only from them using the nodes() function\nMATCH path=(e:E)-->(:S)-->(:A)\nRETURN nodes(path)\n\n╒═════════════════════════════════════╕\n│\"nodes(path)\" │\n╞═════════════════════════════════════╡\n│[{\"id\":\"e1\"},{\"id\":\"s2\"},{\"id\":\"a4\"}]│\n├─────────────────────────────────────┤\n│[{\"id\":\"e1\"},{\"id\":\"s2\"},{\"id\":\"a5\"}]│\n├─────────────────────────────────────┤\n│[{\"id\":\"e1\"},{\"id\":\"s2\"},{\"id\":\"a3\"}]│\n├─────────────────────────────────────┤\n│[{\"id\":\"e1\"},{\"id\":\"s1\"},{\"id\":\"a1\"}]│\n├─────────────────────────────────────┤\n│[{\"id\":\"e1\"},{\"id\":\"s1\"},{\"id\":\"a2\"}]│\n├─────────────────────────────────────┤\n│[{\"id\":\"e2\"},{\"id\":\"s2\"},{\"id\":\"a4\"}]│\n├─────────────────────────────────────┤\n│[{\"id\":\"e2\"},{\"id\":\"s2\"},{\"id\":\"a5\"}]│\n├─────────────────────────────────────┤\n│[{\"id\":\"e2\"},{\"id\":\"s2\"},{\"id\":\"a3\"}]│\n└─────────────────────────────────────┘\n\nAnd now, if you want to return a json tree like structure, you can use map projections\nMATCH (e:E)\nRETURN \ne {.*, s: [(e)-->(s:S) | s{.*, a: [(s)-->(a:A) | a{.*}]}]}\n\n╒══════════════════════════════════════════════════════════════════════╕\n│\"e\" │\n╞══════════════════════════════════════════════════════════════════════╡\n│{\"s\":[{\"a\":[{\"id\":\"a4\"},{\"id\":\"a5\"},{\"id\":\"a3\"}],\"id\":\"s2\"},{\"a\":[{\"id│\n│\":\"a1\"},{\"id\":\"a2\"}],\"id\":\"s1\"}],\"id\":\"e1\"} │\n├──────────────────────────────────────────────────────────────────────┤\n│{\"s\":[{\"a\":[{\"id\":\"a4\"},{\"id\":\"a5\"},{\"id\":\"a3\"}],\"id\":\"s2\"}],\"id\":\"e2\"│\n│} │\n└──────────────────────────────────────────────────────────────────────┘\n\nLet's format the first result a bit\n{\n \"s\": [\n {\n \"a\": [\n {\n \"id\": \"a4\"\n },\n {\n \"id\": \"a5\"\n },\n {\n \"id\": \"a3\"\n }\n ],\n \"id\": \"s2\"\n },\n {\n \"a\": [\n {\n \"id\": \"a1\"\n },\n {\n \"id\": \"a2\"\n }\n ],\n \"id\": \"s1\"\n }\n ],\n \"id\": \"e1\"\n}\n\nThis sounds a bit cryptic but as soon as you understand how it works it is quite powerful, I suggest reading a bit more about it here :\n\nhttps://neo4j.com/docs/cypher-manual/current/syntax/maps/#cypher-map-projection\nhttps://neo4j.com/blog/cypher-graphql-neo4j-3-1-preview/\nhttps://neo4j.com/developer-blog/a-comprehensive-guide-to-cypher-map-projection/\n\n\n", "I've encountered similar issues in endogamous genealogy family trees. My experience might help you and others? Endogamy is typically describes as a person appearing more than once in the family tree (multiple Ahnentafels). But is it easier to analyze with graphs ... there are multiple paths to that common ancestor(s).\nThe graph has several enhancements. Paths are aliased and uniquely identifiable as you've described. The sequenced numbers in the path are used to create their ORDPATH which is a concatenated bitstring that sorts hierarchically. You can calculate the coefficient of relationship (COR) for between individuals and the coefficient of inbreeding if there is endogamy.\nCreating a knowledge graph involves memorializing analytics in the graph as nodes, relationships or properties. This one-time activity makes downstream analytics faster and more intuitive (e.g., easier).\nI'm now enhancing the graph further by creating path nodes with properties of the aliased path, start and end node identifiers. 
These nodes can be related to primary data making them supernodes. Using apoc.coll.intersection you can add an \"intersect\" relationship between paths.\nThe initial work on creating this KNOWLEDGE GRAPH is posted here:\nhttps://www.wai.md/post/endogamy-i-the-knowledge-graph\nThe second effort should appear soon at the same blog.\n" ]
[ 4, 0 ]
[]
[]
[ "cypher", "neo4j" ]
stackoverflow_0074672877_cypher_neo4j.txt
Q: ODataController, if the model is incorrect, it gets null on post If there is any wrong property (for example, if I send the payload with Person_ instead of Person), the model fully comes through as null (Post([FromBody] Request data)) public class Person { public Guid Id { get; set; } public string? Firstname { get; set; } public string? Lastname { get; set; } } public class Request { public Guid Id { get; set; } public Guid? Personid { get; set; } public virtual Person? Person { get; set; } } public IActionResult Post([FromBody] Request data) { ... } curl --location --request POST 'https://localhost:7124/v2/request?$expand=Person($select=Id,Firstname,Lastname)/Request&@odata.context=%27https://localhost:7124/v2/$metadata' \ --header 'Content-Type: application/json' \ --data-raw '{ "Id": "a436677a-fa4b-465e-8e70-211a1a3de8e9", "Personid": "be9b53ad-4dfb-4db5-b269-32669f7c4e2d", "Person_" : { "Firstname": "JOHN", "Lastname": "SMITH", } }' I need to get the model even though some properties are not correct according to the model schema. What could be the reason for it being null? A: The problem is the declaration of Person in the class Request. It should be public Person Person_ { get; set; }. You can declare it as public virtual Person? Person_ { get; set; } also if you don't want to change the declaration. The only catch here is the underscore suffix on Person. If you don't want to change the declaration then you can use JsonProperty [JsonProperty("Person_")] public virtual Person? Person { get; set; } A: One of the main issues is that the type argument forms a strong contract that the OData subsystem tries to enforce. If the Deserializer cannot match the expected type fully, then it returns null, not a partially constructed object, or an empty object if none of the properties matched. What you are expecting was a lazy implementation that we often took for granted in previous versions of OData and JSON.Net, but the OData Entity Serializer doesn't work this way any more. When the argument is null, the ModelState should provide detailed information on the reason for the failure. OData has support for allowing additional members; it is called Open-Type Support. Similar to the catch-all solutions in other deserialization methods, we designate a dictionary to route all un-mapped properties so that you can inspect them after deserialization. This was a good walkthrough in .Net FX but basically we add the property: public class Request { public Guid Id { get; set; } public Guid? Personid { get; set; } public virtual Person? Person { get; set; } public IDictionary<string, object> DynamicProperties { get; set; } } Then in your model builder you need to declare the type as open: builder.Entity<Request>("request").EntityType.IsOpen(); This alone is still going to be hard to use though because your additional member is a complex type, so the type cannot be easily resolved automatically. You could implement your own deserializer, but that is a lot more universal to all of your controllers and endpoints, you should take a little bit more care because it really opens a back door and cancels out a lot of functionality if you don't do it right. In your example the _person is omitted entirely, which might not be your intention. Other solutions are a bit more permanent and messy, like adding additional properties to your model to capture the input and re-assign it internally. The best advice however is to respond to the client with an adequate error message so that they update the call.
There is another way that we can also cheat by using the JToken type, instead of the expected concrete type. This will universally ingest the payload from the request, then we can use good old JSON.Net to resolve the object: /// <summary> /// Inserts a new item into this collection /// </summary> /// <param name="item">The item to insert</param> /// <returns>CreatedODataResult</returns> [EnableQuery(AllowedQueryOptions = AllowedQueryOptions.Format | AllowedQueryOptions.Select)] public virtual async Task<IActionResult> Post([FromBody] Newtonsoft.Json.Linq.JToken x) //public virtual async Task<IActionResult> Post(TEntity item) { TEntity item = x.ToObject<TEntity>(); ... insert custom logic to resolve the badly formed properties // Tell the client that the request is invalid, if it is still invalid. if (!ModelState.IsValid) return BadRequest(ModelState); //EvalSecurityPolicy(item, Operation.Insert); await ApplyPost(item); //UpdateRelationships(item, Operation.Insert); await _db.SaveChangesAsync(); return Created(item); } /// <summary> /// Inheriting classes can override this method to apply custom fields or other properties from dynamic members in the posted item, base class will apply TenantId only if it has not already been applied /// </summary> /// <remarks>This process is called after the usual validation; overriding just this process means you do not have to replicate the existing internal logic for the aforementioned tasks.</remarks> /// <param name="item">The new item that has been uploaded</param> /// <returns>Promise to add the item to the underlying table store</returns> public virtual Task ApplyPost(TEntity item) { GetEntitySet().Add(item); return Task.CompletedTask; } This is a base class implementation of ODataController; inheriting controller classes only override ApplyPost if needed. I've commented out some more advanced logic routines to give you other hints on how you might use this pattern. Is it a good practice? I'm undecided, but it works and will allow your API to be resilient to schema changes that the client hasn't yet been updated to support; you can also inspect and handle the invalid ModelState in your controller before you return to the caller, or easily add your own custom mapping logic if needed.
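Pulling both answers above together, the defensive controller shape they imply is small; a sketch using the question's own Request type (the persistence step is a placeholder):

public IActionResult Post([FromBody] Request data)
{
    if (data is null || !ModelState.IsValid)
    {
        // ModelState carries the OData deserializer's reason for
        // rejecting the body, e.g. the unexpected "Person_" member
        return BadRequest(ModelState);
    }

    // ... persist the entity here ...
    return Created(data);
}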
ODataController, if the model is incorrect, it gets null on post
If there is any wrong property (for example, if I send the payload with Person_ instead of Person), the model fully comes through as null (Post([FromBody] Request data)) public class Person { public Guid Id { get; set; } public string? Firstname { get; set; } public string? Lastname { get; set; } } public class Request { public Guid Id { get; set; } public Guid? Personid { get; set; } public virtual Person? Person { get; set; } } public IActionResult Post([FromBody] Request data) { ... } curl --location --request POST 'https://localhost:7124/v2/request?$expand=Person($select=Id,Firstname,Lastname)/Request&@odata.context=%27https://localhost:7124/v2/$metadata' \ --header 'Content-Type: application/json' \ --data-raw '{ "Id": "a436677a-fa4b-465e-8e70-211a1a3de8e9", "Personid": "be9b53ad-4dfb-4db5-b269-32669f7c4e2d", "Person_" : { "Firstname": "JOHN", "Lastname": "SMITH", } }' I need to get the model even though some properties are not correct according to the model schema. What could be the reason for it being null?
[ "The problem is the declaration of Person in the class Request, It should be public Person Person_ { get; set; }.\nYou can declare it as public virtual Person? Person_ { get; set; } also if you don't want to change the declaration.\nThe only catch here is the suffix underscore before Person.\nIf you don't want to change the declaration then you can use JsonProperty\n[JsonProperty(\"Person_\")]\npublic virtual Person? Person { get; set; }\n\n", "One of the main issues is that the type argument forms a strong contract that the OData subsystem tries to enforce. If the Deserializer cannot match the expected type fully, then it returns null, not a partially constructed object, or an empty object if none of the properties matched.\n\nWhat you are expecting was a lazy implementation that we often took for granted in previous versions of OData and JSON.Net, but the OData Entity Serializer doesn't work this way any more.\n\nWhen the argument is null, the ModelState should provide detailed information on the reason for the failure.\nOData has support for allowing additional members, it is called Open-Type Support. Similar to the catch all solutions in other deserialization methods, we designate a dictionary to route all un-mapped properties so that you can inspect them after deserialization. This was a good walkthrough in .Net FX but basically we add the property:\npublic class Request\n{\n public Guid Id { get; set; }\n public Guid? Personid { get; set; }\n public virtual Person? Person { get; set; }\n public IDictionary<string, object> DynamicProperties { get; set; }\n}\n\nThen in your model builder you need to declare the type as open:\nbuilder.Entity<Request>(\"request\").EntityType.IsOpen();\n\nThis is alone is still going to be hard to use though because your additional member is a complex type, so the type cannot be easily resolved automatically.\nYou could implement your own deserializer, but that is a lot more universal to all of your controllers and endpoints, you should take a little bit more care because it really opens a back door and cancels out a lot of functionality if you don't do it right. In your example the _person is omitted entirely, which might not be your intention.\n\nOther solutions are a bit more permanent and messy, like adding additional properties to your model to capture the input and re-assign it internally. The best advice however is to respond to the client with an adequate error message so that they update the call.\n\nThere is another way that we can also cheat by using the JToken type, instead of the expected concrete type. This will universally ingest the payload from the request, then we can use good old JSON.Net to resolve the object:\n/// <summary>\n/// Inserts a new item into this collection\n/// </summary>\n/// <param name=\"item\">The item to insert</param>\n/// <returns>CreatedODataResult</returns>\n[EnableQuery(AllowedQueryOptions = AllowedQueryOptions.Format | AllowedQueryOptions.Select)]\npublic virtual async Task<IActionResult> Post([FromBody] Newtonsoft.Json.Linq.JToken x)\n//public virtual async Task<IActionResult> Post(TEntity item)\n{\n TEntity item = x.ToObject<TEntity>();\n\n ... 
insert custom logic to resolve the badly formed properties\n\n // Tell the client that the request is invalid, if it is still invalid.\n if (!ModelState.IsValid)\n return BadRequest(ModelState);\n\n //EvalSecurityPolicy(item, Operation.Insert);\n await ApplyPost(item);\n //UpdateRelationships(item, Operation.Insert);\n\n await _db.SaveChangesAsync();\n\n return Created(item);\n}\n\n/// <summary>\n/// Inheriting classes can override this method to apply custom fields or other properties from dynamic members in the posted item, base class will apply TenantId only if it has not already been applied\n/// </summary>\n/// <remarks>This process is called after the usual validation overriding just this process means you do not have to replicate the existing internal logic for the afore mentioned tasks.</remarks>\n/// <param name=\"item\">The new item that has been uploaded</param>\n/// <returns>Promise to add the item to the underlying table store</returns>\npublic virtual Task ApplyPost(TEntity item)\n{\n GetEntitySet().Add(item);\n return Task.CompletedTask;\n}\n\nThis is a base class implementation of ODataController Inheriting controller classes only override ApplyPost if needed. I've commented out some more advanced logic routines to give you other hints on how you might use this pattern.\nIs a good practice? I'm undecided but it works and will allow your API to be resilient to schema changes that the client hasn't yet been updated to support, you can also inspect and handle the invalid ModelState in your controller before you return to the caller, or can easily add your own custom mapping logic if needed.\n" ]
[ 0, 0 ]
[ "I have found a solution. I have used a custom ODataResourceDeserializer to handle the exception of doesn't exist properties and, included a try catch block in the ApplyNestedProperty method's content. so web service cannot throw an exception for not exists properties while deserialization process.\npublic class CustomResourceDeserializer : ODataResourceDeserializer\n{\n public CustomResourceDeserializer(IODataDeserializerProvider deserializerProvider) : base(deserializerProvider)\n {\n }\n\n public override void ApplyNestedProperty(object resource, ODataNestedResourceInfoWrapper resourceInfoWrapper, IEdmStructuredTypeReference structuredType, ODataDeserializerContext readContext)\n {\n try\n {\n base.ApplyNestedProperty(resource, resourceInfoWrapper, structuredType, readContext);\n }\n catch (System.Exception)\n {\n \n }\n \n }\n}\n\n" ]
[ -1 ]
[ ".net_core", "c#", "http_post", "odata" ]
stackoverflow_0074674094_.net_core_c#_http_post_odata.txt
Q: Remove blank line at end of SSRS CSV report when excelmode=false is already set I am trying to export a report from SSRS into CSV and have it end with no empty data rows or blank lines. I have already added the following to the config file: <Extension Name="CSV (No Header)" Type="Microsoft.ReportingServices.Rendering.DataRenderer.CsvReport,Microsoft.ReportingServices.DataRendering"> <OverrideNames> <Name Language="en-us"> CSV No Header</Name> </OverrideNames> <Configuration> <DeviceInfo> <NoHeader>true</NoHeader> <ExcelMode>False</ExcelMode> </DeviceInfo> </Configuration> </Extension> The header section works, but only one of the two blank lines is removed. How do I remove the other blank line? Thanks A: It looks like you've already added the correct configuration to your CSV rendering extension in the SSRS config file. This should prevent the header row from being included in the CSV file, and set ExcelMode to False, which should prevent any additional blank rows from being added. If you're still seeing a blank line in the generated CSV file, you may want to try specifying the AddLineBreaks property as well. This property determines whether line breaks are added after each row of data in the CSV file. Try adding the following to your device info section: <AddLineBreaks>False</AddLineBreaks> This should prevent any additional line breaks from being added to the CSV file, and should remove any remaining blank lines. A: To remove the second blank line at the end of a CSV report exported from SSRS, you can add the <RemoveTrailingSpaces>true</RemoveTrailingSpaces> element to the <DeviceInfo> section of the configuration file. This will instruct SSRS to remove any trailing spaces from the CSV report, including the second blank line at the end. Here is an example of what your configuration file might look like with this change: <Extension Name="CSV (No Header)" Type="Microsoft.ReportingServices.Rendering.DataRenderer.CsvReport,Microsoft.ReportingServices.DataRendering"> <OverrideNames> <Name Language="en-us"> CSV No Header</Name> </OverrideNames> <Configuration> <DeviceInfo> <NoHeader>true</NoHeader> <ExcelMode>False</ExcelMode> <RemoveTrailingSpaces>true</RemoveTrailingSpaces> </DeviceInfo> </Configuration> </Extension>
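For orientation, and as an assumption on my part since the question doesn't show the surrounding file: a custom CSV rendering extension like this normally sits inside the <Render> element of rsreportserver.config, roughly like so:

<Configuration>
  <Extensions>
    <Render>
      <!-- the <Extension Name="CSV (No Header)" ...> block from the question goes here -->
    </Render>
  </Extensions>
</Configuration>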
Remove blank line at end of SSRS CSV report when excelmode=false is already set
I am trying to export a report from SSRS into CSV and just have no data rows or blank lines at the end. I have already added the following to the config file: <Extension Name="CSV (No Header)" Type="Microsoft.ReportingServices.Rendering.DataRenderer.CsvReport,Microsoft.ReportingServices.DataRendering"> <OverrideNames> <Name Language="en-us"> CSV No Header</Name> </OverrideNames> <Configuration> <DeviceInfo> <NoHeader>true</NoHeader> <ExcelMode>False</ExcelMode> </DeviceInfo> </Configuration> </Extension> The header section works, but only one of the two blank lines is removed. how do i remove the other blank line? Thanks
[ "It looks like you've already added the correct configuration to your CSV rendering extension in the SSRS config file. This should prevent the header row from being included in the CSV file, and set the ExcelMode to False, which should prevent any additional blank rows from being added.\nIf you're still seeing a blank line in the generated CSV file, you may want to try specifying the AddLineBreaks property as well. This property determines whether line breaks are added after each row of data in the CSV file. Try adding the following to your device info section:\n<AddLineBreaks>False</AddLineBreaks>\n\nThis should prevent any additional line breaks from being added to the CSV file, and should remove any remaining blank lines.\n", "To remove the second blank line at the end of a CSV report exported from SSRS, you can add the true element to the section of the configuration file. This will instruct SSRS to remove any trailing spaces from the CSV report, including the second blank line at the end.\nHere is an example of what your configuration file might look like with this change:\n<Extension Name=\"CSV (No Header)\" Type=\"Microsoft.ReportingServices.Rendering.DataRenderer.CsvReport,Microsoft.ReportingServices.DataRendering\">\n <OverrideNames>\n <Name Language=\"en-us\"> CSV No Header</Name>\n </OverrideNames>\n <Configuration>\n <DeviceInfo>\n <NoHeader>true</NoHeader>\n <ExcelMode>False</ExcelMode>\n <RemoveTrailingSpaces>true</RemoveTrailingSpaces>\n </DeviceInfo>\n </Configuration>\n</Extension>\n\n" ]
[ 0, 0 ]
[]
[]
[ "export_to_csv", "reporting_services" ]
stackoverflow_0074570912_export_to_csv_reporting_services.txt
Q: Does not contain a static 'main' method suitable for an entry point I began organizing my code today into separate .cs files, and in order to allow the methods that work with the UI to continue to do so I would create the .cs code under the same namespace and public partial class name so the methods could be inter-operable. My headers look like this in four files, including my main core file that calls: public shell() { InitializeComponent(); } Header area of .cs files that work with the UI (and seem to be causing this new conflict): using System; using System.Windows.Forms; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Text.RegularExpressions; using System.IO; using System.Data.SqlServerCe; using System.Diagnostics; using System.Threading; using System.Collections.Specialized; using System.Net; using System.Runtime.InteropServices; using watin = WatiN.Core; using WatiN.Core.Native.InternetExplorer; using System.Web; namespace WindowsFormsApplication1 { public partial class shell : Form { Now when I try to debug/preview my application (BTW this is a Windows Application within Visual Studio 2010 Express) I get this error message: Does not contain a static 'main' method suitable for an entry point I looked in the application properties under Application->Startup object, but it offers me no options. How can I inform the application to begin at the .cs file that has my InitializeComponent(); command? I've looked around so far without a solution. The properties on each .cs file are set to 'Compile'. I do not see an App.xaml file in my Solution Explorer, but I do see an app.config file. I'm still very new and this is my first attempt at an organizing method with C# code. A: I was looking at this issue as well, and in my case the solution was almost too easy. I added a new empty project to the solution. The newly added project is automatically set as a console application. But since the project added was an 'empty' project, no Program.cs existed in that new project. (As expected.) All I needed to do was change the output type of the project properties to Class Library. A: Change the Output Type under Project > Properties to that of a “Class Library”. By default, this setting may have been set to a “Console Application”. A: I had this error and solved it using this solution. Right click on the project Select "Properties" Set "Output Type" to "Class Library". A: Try adding this method to a class and see if you still get the error: [STAThread] static void Main() { } A: If you don't have a file named Program.cs, just add a new class and name it Program.cs. Then paste this code: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Windows.Forms; namespace Sales { static class Program { /// <summary> /// The main entry point for the application. /// </summary> [STAThread] static void Main() { Application.EnableVisualStyles(); Application.SetCompatibleTextRenderingDefault(false); Application.Run(new Form1()); } } } A: Select App.xaml and display its properties. Set Build Action to ApplicationDefinition. App.xaml and its corresponding *.cs file must be placed into the root directory of the *.csproj file, i.e. not into a "Source" folder. A: Had this problem in VS 2017 caused by: static async Task Main(string[] args) (Feature 'async main' is not available in C# 7.0.
Please use language version 7.1 or greater) Adding <LangVersion>latest</LangVersion> to app.csproj helped. A: Edit the .csproj file: <OutputType>Library</OutputType> Cheers! A: If you do have a Main method but still get this error, make sure that the file containing the Main method has "Build action" set to "Compile" and "Copy to output directory" set to "Do not copy". A: For me, the error was actually produced by "Feature 'async main' is not available in C# 7.0. Please use language version 7.1 or greater". This issue was resulting in the "Does not contain a static 'main' method suitable for an entry point" message in the Error List, but the Output window showed the "not available" error. To correct this, I changed the language version from 'C# latest minor version (default)' to 'C# latest minor version (latest)' under Advanced Build Settings. A: I got the same error, and the solution is simply to write a capital M instead of a lowercase m, e.g. static void Main(). I hope it helps. A: Looks like a Windows Forms project that is trying to use a startup form, but for some reason the project properties are set to start up with Main. If you have enabled the application framework you may not be able to see that Main is active (this is an invalid configuration). A: Salaam, I have both Visual Studio 2017 and Visual Studio 2019. Visual Studio 2019 does not show this error, but 2017 does. Try installing Visual Studio 2019. (Screenshots of Visual Studio 2017 and Visual Studio 2019 omitted.) A: After placing the above code in Program.cs, follow the steps below: Right click on the project Select Properties Set Output Type to Windows Application Startup object: namespace.Program A: Just right click on the project, select Properties, and then set Output type to Class Library A: When you want to allow parameters to be specified from the command line, the entry point must look like this: [STAThread] static void Main(params string[] parameters) { You cannot specify more than one parameter, otherwise this will also cause the error reported above. A: For some others coming here: In my case I had copied a .csproj from a sample project which included <EnableDefaultCompileItems>false</EnableDefaultCompileItems> without including the Program.cs file. The fix was to either remove EnableDefaultCompileItems or include Program.cs in the compile explicitly. A: Hello, your Main class was deleted, so add a new class named Main.cs and paste this code (the same problem can occur on Windows): using System; using System.Collections.Generic; using System.Linq; using Foundation; using UIKit; namespace your_PKG_name.iOS { public class Application { // This is the main entry point of the application. static void Main(string[] args) { // if you want to use a different Application Delegate class from "AppDelegate" // you can specify it here. UIApplication.Main(args, null, "AppDelegate"); } } } A: A valid entry looks like: public static class ConsoleProgram { [STAThread] static void Main() { Console.WriteLine("Got here"); Console.ReadLine(); } } I had issues as I'm writing a web application, but given the dreadful loading time, I wanted to quickly convert the same project to a console application and perform quick method tests without loading the entire solution.
My entry point was placed in /App_Code/Main.cs, and I had to do the following: Set Project -> Properties -> Application -> Output type = Console Application Create the /App_Code/Main.cs Add the code above in it (and reference the methods in my project) Right click on the Main.cs file -> Properties -> Build Action = Compile After this, I can set the output (as mentioned in Step 1) to Class Library to start the web site, or Console Application to enter the console mode. Why did I do this instead of 2 separate projects? Simply because I had references to Entity Framework and other specific references that created problems running 2 separate projects. For easier solutions, I would still recommend 2 separate projects, as the console output is mainly test code and you probably don't want to risk that going out in production code. A: If you are using a class library project, then set Class Library as the output type in Properties, under the Application section of the project. A: Another situation where this occurs is when someone (unintentionally) changes the Build Action for Program.cs. The value for Build Action should be C# compiler. I accidentally changed Build Action to None, which removed Program.cs from the project, so it wasn't included when compilation started. A: Did you accidentally remove the entire Program.cs file? If you have removed it, using System; using System.Collections.Generic; using System.Linq; using System.Threading.Tasks; using System.Windows.Forms; namespace ListWievKullanımı { static class Program { /// <summary> /// The main entry point for the application. /// </summary> [STAThread] static void Main() { Application.EnableVisualStyles(); Application.SetCompatibleTextRenderingDefault(false); Application.Run(new Form1()); } } } This might work for you. A: If you do indeed have a public static main method, it could be your build settings, as explained in this question: Troubleshooting "program does not contain a static 'Main' method" when it clearly does...? A: I too have faced this problem. Then I realized that I was choosing Console Application (Package) rather than Console Application. A: I am using Visual Studio and also had this problem. It took me some time, but in my program it was caused by my accidentally deleting a class named "Program" that is generated automatically. A: For future readers who face the same issue with a Windows Forms application, one solution is to add these lines to your main/start-up form class: [STAThread] static void Main() { Application.EnableVisualStyles(); Application.SetCompatibleTextRenderingDefault(false); Application.Run(new MyMainForm()); } Then go to project properties > Application > Startup Object dropdown; you should see namespace.MyMainForm, select it, then clean and build the solution. And it should work. A: Check to see if the project is set as the "Startup Project". Right click on the project and choose "Set as Startup Project" from the menu. A: If you are like me, then you might have started with a Class Library and then switched it to a Console Application. If so, change this... namespace ClassLibrary1 { public class Class1 { } } To this... namespace ConsoleApp1 { class Program { static void Main(string[] args) { } } } A: If you use Visual Studio Code, change Project Sdk="Microsoft.NET.Sdk.Web" to Project Sdk="Microsoft.NET.Sdk" in the csproj file. A: Use the following: async static Task Main(string[] args) A: I got this error when using the command Build Docker Image in Visual Studio 2022.
error CS5001: Program does not contain a static 'Main' method suitable for an entry point The project built perfectly well in Windows, but I tried to build a Linux container. Switching to Output Type Class Library solved the error, but Docker Compose gave me this error instead: CTC1031 Linux containers are not supported for https://stackoverflow.com/a/74044317/3850405 I tried explicitly using a Main method like this, but it did not work: namespace WebApplication { public class Program { public static void Main(string[] args) { I have no idea why, but this solved it for me: Gives error: FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base WORKDIR /app EXPOSE 80 EXPOSE 443 FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build WORKDIR /src COPY ["src/Services/Classification/ClassificationService.Api/ClassificationService.Api.csproj", "Services/Classification/ClassificationService.Api/"] RUN dotnet restore "Services/Classification/ClassificationService.Api/ClassificationService.Api.csproj" COPY . . WORKDIR "/src/Services/Classification/ClassificationService.Api" RUN dotnet build "ClassificationService.Api.csproj" -c Release -o /app/build Works: FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base WORKDIR /app EXPOSE 80 EXPOSE 443 FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build WORKDIR /src COPY ["src/Services/Classification/ClassificationService.Api/ClassificationService.Api.csproj", "src/Services/Classification/ClassificationService.Api/"] RUN dotnet restore "src/Services/Classification/ClassificationService.Api/ClassificationService.Api.csproj" COPY . . WORKDIR "/src/src/Services/Classification/ClassificationService.Api" RUN dotnet build "ClassificationService.Api.csproj" -c Release -o /app/build Notice the double /src in the working example. I read that you had to place the Dockerfile at the same level as the .sln file, but in my case the files are separated by four levels. https://stackoverflow.com/a/63257667/3850405 A: Add static async Task Main(string[] args) { } instead of static async void Main(string[] args) { } It works for me.
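Pulling the recurring project-file fixes from these answers together, here is a sketch of the relevant properties in a modern SDK-style .csproj; the target framework is just an example value, and which OutputType you want depends on your project, as the answers above describe:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- Exe for a console app, WinExe for WinForms, Library if there is no Main -->
    <OutputType>Exe</OutputType>
    <!-- example value; use your actual target -->
    <TargetFramework>net6.0</TargetFramework>
    <!-- needed for 'static async Task Main' on compilers limited to C# 7.0 -->
    <LangVersion>latest</LangVersion>
  </PropertyGroup>
</Project>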
Does not contain a static 'main' method suitable for an entry point
I began organizing my code to day into seperarate .cs files, and in order to allow the methods that work with the UI to continue to do so I would create the .cs code under the same namespace and public partial class name so the methods could be inter-operable. My header look like this in four files, including my main core file that calls: public shell() { InitializeComponent(); } Header area of .cs files that work with the UI (and seem to be causing this new conflict): using System; using System.Windows.Forms; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Text.RegularExpressions; using System.IO; using System.Data.SqlServerCe; using System.Diagnostics; using System.Threading; using System.Collections.Specialized; using System.Net; using System.Runtime.InteropServices; using watin = WatiN.Core; using WatiN.Core.Native.InternetExplorer; using System.Web; namespace WindowsFormsApplication1 { public partial class shell : Form { Now when I try to debug/preview my application (BTW this is a Windows Application within Visual Studio 2010 Express) I get this error message: Does not contain a static 'main' method suitable for an entry point I looked in the application properties in Application->Startup object, but it offers me no options. How can I inform the application to begin at the .cs file that has my InitializeComponent(); command? I've looked around so far without a solution. The properties on each .cs file are set to 'Compile'. I do not see an App.xaml file in my Solutions explorer but I do see a app.config file. I'm still very new and this is my first attempt at an organizing method with c# code.
[ "I was looking at this issue as well, and in my case the solution was too easy. I added a new empty project to the solution. The newly added project is automatically set as a console application. But since the project added was a 'empty' project, no Program.cs existed in that new project. (As expected)\nAll I needed to do was change the output type of the project properties to Class library\n", "Change the Output Type under the Project > Properties to that of a “Class Library”. By default, this setting may have been set to a “Console Application”.\n", "I had this error and solved it using this solution.\n\nRight click on the project\nSelect \"Properties\"\nSet \"Output Type\" to \"Class Library\".\n\n", "Try adding this method to a class and see if you still get the error:\n[STAThread]\nstatic void Main()\n{\n}\n\n", "If you don't have a file named Program.cs, just add a new Class and name it Program.cs. \nThen paste this code:\n using System;\n using System.Collections.Generic;\n using System.Linq;\n using System.Text;\n using System.Windows.Forms;\n\n namespace Sales {\n static class Program {\n\n /// <summary>\n /// The main entry point for the application.\n /// </summary>\n [STAThread]\n static void Main() {\n Application.EnableVisualStyles();\n Application.SetCompatibleTextRenderingDefault(false);\n Application.Run(new Form1());\n }\n }\n\n }\n\n", "\nSelect App.xaml and display its properties. Set Build Action to ApplicationDefinition.\nApp.xaml and its corresponding *.cs file must be placed into the root directory of the *.csproj file, i. e. not into a \"Source\" folder.\n\n", "Had this problem in VS 2017 caused by:\nstatic async Task Main(string[] args)\n(Feature 'async main' is not available in C# 7.0. Please use language version 7.1 or greater)\nAdding \n<LangVersion>latest</LangVersion>\nto app.csproj helped.\n", "Edit .csproj file\n<OutputType>Library</OutputType>\ncheers ! \n", "If you do have a Main method but still get this error, make sure that the file containing the Main method has \"Build action\" set to \"Compile\" and \"Copy to ouput directory\" set to \"Do not copy\".\n", "For me, the error was actually produced by \"Feature 'async main' is not available in C# 7.0. Please use language version 7.1 or greater\". This issue was resulting in the \"Does not contain a static 'main' method suitable for an entry point\" message in the Error List, but the Output window showed the \"not available\" error.\nTo correct this, I changed the language version from 'C# latest minor version (default)' to 'C# latest minor version (latest)' under Advanced Build Settings. \n", "hey i got same error and the solution to this error is just write Capital M instead of small m.. eg:- static void Main()\nI hope it helps..\n", "Looks like a Windows Forms project that is trying to use a startup form but for some reason the project properties is set to startup being Main.\nIf you have enabled application framework you may not be able to see that Main is active (this is an invalid configuration).\n", "Salaam, \nI have both Visual Studio 2017 and Visual Studio 2019\nVisual Studio 2019 does not show this error but 2017 does. 
Try Installing Visual Studio 2019.\n\nVisual Studio 2017\n\n \n\nVisual Studio 2019\n\n\n", "After placing the above code in Program.cs, follow below steps\n\n\nRight click on the project\n\nSelect Properties\n\nSet Output Type to Windows Application\n\nStartup object : namepace.Program\n\n\n\n", "\nJust right click on project and select properties and then set Output type on Class Library\n", "When you want to allow paramaters to be specified from the command, they must look like this:\n [STAThread]\n static void Main(params string[] paramaters)\n {\n\nyou cannot specify more than one paramater, otherwise this will also cause the error reported above.\n", "For some others coming here:\nIn my case I had copied a .csproj from a sample project which included <EnableDefaultCompileItems>false</EnableDefaultCompileItems> without including the Program.cs file. Fix was to either remove EnableDefaultCompileItems or include Program.cs in the compile explicitly\n", "hellow your main class was deleted so add new class that name set as Main.cs and pest that code or if porblem in window so same problem on that\nusing System;\nusing System.Collections.Generic;\nusing System.Linq;\nusing Foundation;\nusing UIKit;\n\nnamespace your_PKG_name.iOS\n{\n public class Application\n {\n // This is the main entry point of the application.\n static void Main(string[] args)\n {\n // if you want to use a different Application Delegate class from \"AppDelegate\"\n // you can specify it here.\n UIApplication.Main(args, null, \"AppDelegate\");\n\n }\n }\n}\n\n", "A valid entry looks like:\npublic static class ConsoleProgram\n {\n [STAThread]\n static void Main()\n {\n Console.WriteLine(\"Got here\");\n Console.ReadLine();\n }\n }\n\nI had issues as I'm writing a web application, but for the dreadly loading time, I wanted to quickly convert the same project to a console application and perform quick method tests without loading the entire solution.\nMy entry point was placed in /App_Code/Main.cs, and I had to do the following:\n\nSet Project -> Properties -> Application -> Output type = Console Application\nCreate the /App_Code/Main.cs\nAdd the code above in it (and reference the methods in my project)\nRight click on the Main.cs file -> Properties -> Build Action = Compile\n\nAfter this, I can set the output (as mentioned in Step 1) to Class Library to start the web site, or Console Application to enter the console mode.\nWhy I did this instead of 2 separate projects?\nSimply because I had references to Entity Framework and other specific references that created problems running 2 separate projects.\nFor easier solutions, I would still recommend 2 separate projects as the console output is mainly test code and you probably don't want to risk that going out in production code.\n", "If you are using a class library project then set Class Library as output type in properties under application section of project.\n", "Another situation where this occur is when someone (unintentionally) changes Build Action for Program.cs. 
The value for Build Action should be C# compiler.\nI accidentally changed Build Action to None, which removed program.cs from the project and therefore wasn't included when compile started.\n", "Did you accidentally remove the entire Program.cs file?\nIf you have removed,\nusing System;\nusing System.Collections.Generic;\nusing System.Linq;\nusing System.Threading.Tasks;\nusing System.Windows.Forms;\nnamespace ListWievKullanımı\n{\n static class Program\n {\n /// <summary>\n /// The main entry point for the application.\n /// </summary>\n [STAThread]\n static void Main()\n {\n Application.EnableVisualStyles();\n Application.SetCompatibleTextRenderingDefault(false);\n Application.Run(new Form1());\n }\n }\n}\n\nThis might work for you.\n\n", "If you do indeed have a public static main method it could be your build settings as explained in this question: Troubleshooting \"program does not contain a static 'Main' method\" when it clearly does...?\n", "I too have faced this problem. Then I realized that I was choosing Console Application(Package) rather than Console Application.\n", "I am using Visual Studio and also had this problem. It took me some time, but in my program it was caused because I accidentally deleted a Class named \"Program\" that is generated automatically.\n", "For future readers who faced same issue with Windows Forms Application, one solution is to add these lines to your main/start up form class:\n [STAThread]\n static void Main()\n {\n Application.EnableVisualStyles();\n Application.SetCompatibleTextRenderingDefault(false);\n Application.Run(new MyMainForm());\n }\n\nThen go to project properties > Application > Startup Object dropdown, should see the namespace.MyMainForm, select it, clean and build the solution. And it should work.\n", "Check to see if the project is set as the \"Startup Project\"\nRight click on the project and choose \"Set as Startup Project\" from the menu.\n", "If you are like me, then you might have started with a Class Library, and then switched this to a Console Application. If so, change this...\nnamespace ClassLibrary1\n{\n public class Class1\n {\n }\n}\n\nTo this...\nnamespace ConsoleApp1\n{\n class Program\n {\n static void Main(string[] args)\n {\n }\n }\n}\n\n", "If you use Visual Studio Code change Project Sdk=\"Microsoft.NET.Sdk.Web\" to Project Sdk=\"Microsoft.NET.Sdk\" on csproj file.\n", "Use the following -\nasync static **Task** Main(string[] args)\n\n", "I got this error when using the command Build Docker Image in Visual Studio 2022.\nerror CS5001: Program does not contain a static 'Main' method suitable for an entry point\n\nThe project built perfectly well in Windows but I tried to build a Linuxcontainer. 
Switching to Output Type Class Library solved the error but Docker Compose gave me this error instead:\n\nCTC1031 Linux containers are not supported for\n\nhttps://stackoverflow.com/a/74044317/3850405\nI tried explicitly using a Main method like this but it did not work:\nnamespace WebApplication\n{\n public class Program\n {\n public static void Main(string[] args)\n {\n\nI have no idea why but this solved it for me:\nGives error:\nFROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base\nWORKDIR /app\nEXPOSE 80\nEXPOSE 443\n\nFROM mcr.microsoft.com/dotnet/sdk:6.0 AS build\nWORKDIR /src\nCOPY [\"src/Services/Classification/ClassificationService.Api/ClassificationService.Api.csproj\", \"Services/Classification/ClassificationService.Api/\"]\n\nRUN dotnet restore \"Services/Classification/ClassificationService.Api/ClassificationService.Api.csproj\"\nCOPY . .\nWORKDIR \"/src/Services/Classification/ClassificationService.Api\"\nRUN dotnet build \"ClassificationService.Api.csproj\" -c Release -o /app/build\n\nWorks:\nFROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base\nWORKDIR /app\nEXPOSE 80\nEXPOSE 443\n\nFROM mcr.microsoft.com/dotnet/sdk:6.0 AS build\nWORKDIR /src\nCOPY [\"src/Services/Classification/ClassificationService.Api/ClassificationService.Api.csproj\", \"src/Services/Classification/ClassificationService.Api/\"]\n\nRUN dotnet restore \"src/Services/Classification/ClassificationService.Api/ClassificationService.Api.csproj\"\nCOPY . .\nWORKDIR \"/src/src/Services/Classification/ClassificationService.Api\"\nRUN dotnet build \"ClassificationService.Api.csproj\" -c Release -o /app/build\n\nNotice the double /src in the working example.\nI read that you had to place the Dockerfile at the same level as .sln file but in my case the files are separated by four levels.\nhttps://stackoverflow.com/a/63257667/3850405\n", "Add\nstatic async Task Main(string[] args)\n{\n}\n\ninstead of\nstatic async void Main(string[] args)\n{\n}\n\nits work for me.\n" ]
[ 161, 103, 35, 27, 14, 13, 11, 9, 8, 6, 4, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Perhaps unintentional, but moving my docker file to the solution folder instead of the project eliminated the error. This was helpful when I still wanted to run the solution independently of docker\n" ]
[ -1 ]
[ "c#", "visual_studio_2010" ]
stackoverflow_0009607702_c#_visual_studio_2010.txt
Q: Python - compound interest calculation issue - cs1301 edx extra practice 5 I have the following problem I can't manage to solve: Find "How much do I need to invest to have a certain amount by a certain year?" For example, "How much do I need to invest to have $50,000 in 5 years at 5% (0.05) interest?" Mathematically, the formula for this is: goal / e ^ (rate * number of years) = principal Add some code below that will print the amount of principal needed to reach the given savings goal within the number of years and interest rate specified. My solution is: import math goal = float(goal) years = float(rate) rate = rate principal = goal / (math.e ** (rate * years)) rounded_principal = round(principal, 2) print(rounded_principal) It should print 38940.04, but instead it prints 49875.16. If I use goal = 200, rate 0.1 and years 1, it returns 198.01 when it should return 180.97. I tried turning the rate into a percentage again by multiplying by 100, adding and deleting parentheses, using a formula found online, not rounding the result, and writing e out as a literal (to like 15 decimals). A: You are using rate instead of years for the year. goal = float(goal) years = float(rate) <-- Here rate = rate
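For illustration, a corrected version of the snippet; it assumes goal, rate, and years arrive as strings from the exercise's earlier input step:

import math

goal = float(goal)    # e.g. "50000"
rate = float(rate)    # e.g. "0.05"
years = float(years)  # the original bug: this line read float(rate)

principal = goal / (math.e ** (rate * years))
print(round(principal, 2))  # 38940.04 for goal=50000, rate=0.05, years=5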
Python - compound interest calculation issue - cs1301 edx extra practice 5
I have the following problem I can't manage to solve: Find "How much do I need to invest to have a certain amount by a certain year?" For example, "How much do I need to invest to have $50,000 in 5 years at 5% (0.05) interest?" Mathematically, the formula for this is: goal / e ^ (rate * number of years) = principal Add some code below that will print the amount of principal needed to reach the given savings goal within the number of years and interest rate specified. my solution is: import math goal = float(goal) years = float(rate) rate = rate principal = goal / (math.e ** (rate * years)) rounded_principal = round(principal, 2) print(rounded_principal) it should print 38940.04 but instead it prints 49875.16. If i use goal = 200, rate 0.1 and years 1, it returns 198.01 when it should return 180.97 I tried turning the rate into a percentage again by multiplying by 100, adding and deleting parenthesis, tried using a formula found online, not rounding the result, and making e be its pure number (to like 15 decimals).
[ "You are using rate instead of years for the year.\ngoal = float(goal)\nyears = float(rate) <-- Here\nrate = rate\n\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074677246_python_python_3.x.txt
Q: Any way to append two monadic lists in Haskell? I am learning Haskell at Uni this semester. I encountered a problem where I have a list of lists as IO [[String]] and I want to append an IO [String] to the first one. Let's denote them as x and y. So I tried doing y >>= return . (++) [x] or y <> [x]. Both of them gave the error: Could not match IO [[String]] with [IO [String]]. Any suggestions? Thank you. A: In my opinion, the simplest general technique to learn is how to use do blocks. test :: IO [[String]] test = do xss <- generateListOfLists -- IO [[String]] xs <- generateList -- IO [String] return (xss ++ [xs]) The idea is that <- temporarily unwraps the monad, removing the IO monad from the types, as long as at the very end we return a value in the same monad (the return at the end). After one understands that general technique, one can then learn alternatives like applicative notation, which is not as general, but still nice. test :: IO [[String]] test = (\xss xs -> xss ++ [xs]) <$> generateListOfLists <*> generateList Using >>= is less common, and at least in this case, less convenient than a do block. test :: IO [[String]] test = generateListOfLists >>= \xss -> generateList >>= \xs -> return (xss ++ [xs])
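For completeness, the same append can be written as a hedged one-liner with liftA2; the pure stand-ins below replace whatever actions produce the question's y and x:

import Control.Applicative (liftA2)

-- stand-ins for the question's actions
y :: IO [[String]]
y = pure [["a", "b"], ["c"]]

x :: IO [String]
x = pure ["d", "e"]

appended :: IO [[String]]
appended = liftA2 (\yss xs -> yss ++ [xs]) y x

main :: IO ()
main = appended >>= print  -- [["a","b"],["c"],["d","e"]]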
Any way to append two monadic lists in Haskell?
I am learning Haskell at Uni this semester. I encountered a problem where I have a list of lists as IO [[String]] and I want to append an IO [String] to the first one. Lets denote them as x and y. So I tried doing y >>= return . (++) [x] or y <> [x]. All of them gave the error: Could not match IO [[String]] with [IO [String]]. Any suggestions? Thank you.
[ "In my opinion, the simplest general technique to learn is about how to use do blocks.\ntest :: IO [[String]]\ntest = do\n xss <- generateListOfLists -- IO [[String]]\n xs <- generateList -- IO [String]\n return (xss ++ [xs])\n\nThe idea is that <- temporarily unwraps the monad, removing the IO monad from types, as long as at the very end we return a value in the same monad (the return at the end).\nAfter one understands that general technique, one can then learn alternatives like applicative notation, which is not as general, but still nice.\ntest :: IO [[String]]\ntest =\n (\\xss xs -> xss ++ [xs])\n <$> generateListOfLists\n <*> generateList\n\nUsing >>= is less common, and at least in this case, less convenient than a do block.\ntest :: IO [[String]]\ntest = \n generateListOfLists >>= \\xss ->\n generateList >>= \\xs ->\n return (xss ++ [xs])\n\n" ]
[ 1 ]
[]
[]
[ "haskell", "io", "list", "monads", "string" ]
stackoverflow_0074677195_haskell_io_list_monads_string.txt
Q: How to Display data of single json file in multiple tables through repeating a single react component I am a beginner in React.js. What I want is: I have a component Datatable.js and I want to create three tables in that component by configuring data from a single JSON file. There should be only one component for the three tables, but the columns must differ in each table: in the first table, Name, email, number; in the second table, email, city, number; and in the third table, Name, Profession, number, city. I want to do all of that by repeating the Datatable.js component three times in App.js so that three tables render, not by writing the table element three times in Datatable.js. So please tell me how to do that. I have got the JSON values in the data state and I know they can be displayed through the map() method, but the problem is how to send these JSON file values to each repeated component, and how Datatable.js would receive them so that the values appear differently in each table, as mentioned above. data.json [ { "person": { "name": "Viswas Jha", "avatar": "images/profile.jpg" }, "city": "Mumbai", "email": "[email protected]", "number": 123456, "profession": "UI Designer" }, { "person": { "name": "Damini Pandit", "avatar": "images/profile.jpg" }, "city": "Delhi", "email": "[email protected]", "number": 1345645, "profession": "Front-end Developer" }, { "person": { "name": "Nihal Lingesh", "avatar": "images/profile.jpg" }, "city": "Delhi", "email": "[email protected]", "number": 12345689, "profession": "UX Designer" }, { "person": { "name": "Akash Singh", "avatar": "images/profile.jpg" }, "city": "Kolkata", "email": "[email protected]", "number": 1234566, "profession": "Backend Developer" } ] App.js import Datatable from './Datatable'; import '../node_modules/bootstrap/dist/css/bootstrap.min.css'; import {useState, useEffect} from 'react'; import './App.css'; const fetchData = new Promise((myResolve, myReject) => { let req = new XMLHttpRequest(); req.open('GET', "./data.json"); req.onload = function() { if (req.status == 200) { return myResolve(req.response); } else { return myReject("File not Found"); } }; req.send(); }); function App() { const [data, setData] = useState([]); useEffect(() => { fetchData.then((jsonData) => setData(JSON.parse(jsonData))); }, []); return ( <> <Datatable Data = {data} /> <Datatable Data= {data}/> <Datatable Data= {data}/> </> ); } export default App; Datatable.js import React from 'react'; import Grid from '@material-ui/core/Grid'; export default function Datatable({Data}) { return ( <div className='main text-center '> <h1 className='head py-3'>Datatable</h1> <Grid container spacing={1} className='contain m-auto mt-5 ps-5 pb-4'> <table className="table table-striped"> <thead> <tr> <th scope="col">Name</th> <th scope="col">Email</th> <th scope="col">Number</th> </tr> </thead> <tbody> { Data.map((elem, ind)=>{ return ( <tr key={ind}> <td className='d-flex justify-content-between align-items-center'> <img src={elem.person.avatar} alt="avatar"/> {elem.person.name}</td> <td>{elem.email}</td> <td>{elem.number}</td> </tr> ) }) } </tbody> </table> </Grid> </div> ); } A: Update your code with this.
App.js import Datatable from './Datatable'; import '../node_modules/bootstrap/dist/css/bootstrap.min.css'; import {useState, useEffect} from 'react'; import './App.css'; const fetchData = new Promise((myResolve, myReject) => { let req = new XMLHttpRequest(); req.open('GET', "./data.json"); req.onload = function() { if (req.status == 200) { return myResolve(req.response); } else { return myReject("File not Found"); } }; req.send(); }); function App() { const [data, setData] = useState([]); useEffect(() => { fetchData.then((jsonData) => setData(JSON.parse(jsonData))); }, []); return ( <> <Datatable data = {data} /> </> ); } export default App; DataTable.js import React from 'react'; import Grid from '@material-ui/core/Grid'; export default function Datatable(props) { return ( <div className='main text-center '> <h1 className='head py-3'>Datatable</h1> <Grid container spacing={1} className='contain m-auto mt-5 ps-5 pb-4'> <table className="table table-striped "> <thead> <tr> <th scope="col">Name</th> <th scope="col">Email</th> <th scope="col">Number</th> </tr> </thead> <tbody> { props.data.map((elem, ind)=>{ return ( <tr key={ind}> <td className='d-flex justify-content-between align-items-center'> <img src={elem.person.avatar} alt="avatar"/> {elem.person.name}</td> <td>{elem.email}</td> <td>{elem.number}</td> </tr> ) }) } </tbody> </table> <table className="table table-striped "> <thead> <tr> <th scope="col">Email</th> <th scope="col">City</th> <th scope="col">Number</th> </tr> </thead> <tbody> { props.data.map((elem, ind)=>{ return ( <tr key={ind}> <td>{elem.email}</td> <td>{elem.city}</td> <td>{elem.number}</td> </tr> ) }) } </tbody> </table> <table className="table table-striped "> <thead> <tr> <th scope="col">Name</th> <th scope="col">Profession</th> <th scope="col">Number</th> <th scope="col">City</th> </tr> </thead> <tbody> { props.data.map((elem, ind)=>{ return ( <tr key={ind}> <td>{elem.person.name}</td> <td>{elem.profession}</td> <td>{elem.number}</td> <td>{elem.city}</td> </tr> ) }) } </tbody> </table> </Grid> </div> ); }
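Note that the answer above renders all three tables inside one component instance. Since the question specifically asked to repeat one <Datatable /> three times, here is an alternative sketch; the columns prop and its header/value accessors are my own invention, not part of the original post:

// Datatable.js, one table driven by a `columns` prop
export default function Datatable({ data, columns }) {
  return (
    <table className="table table-striped">
      <thead>
        <tr>{columns.map((c) => <th key={c.header} scope="col">{c.header}</th>)}</tr>
      </thead>
      <tbody>
        {data.map((row, i) => (
          <tr key={i}>{columns.map((c) => <td key={c.header}>{c.value(row)}</td>)}</tr>
        ))}
      </tbody>
    </table>
  );
}

// App.js, the same component repeated with different column sets
<Datatable data={data} columns={[
  { header: 'Name', value: (r) => r.person.name },
  { header: 'Email', value: (r) => r.email },
  { header: 'Number', value: (r) => r.number },
]} />
<Datatable data={data} columns={[
  { header: 'Email', value: (r) => r.email },
  { header: 'City', value: (r) => r.city },
  { header: 'Number', value: (r) => r.number },
]} />
<Datatable data={data} columns={[
  { header: 'Name', value: (r) => r.person.name },
  { header: 'Profession', value: (r) => r.profession },
  { header: 'Number', value: (r) => r.number },
  { header: 'City', value: (r) => r.city },
]} />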
How to Display data of single json file in multiple tables through repeating a single react component
I am a beginner in Reactjs. What I want is, I have a Component Datatable.js and I want to create three tables in that component by configuring data of a single JSON File and there should be only one component for three tables but the condition is the values in tables must come different-different in each table like- in first tables- Name, email, number; in the second table- email, city, number and in the third table- Name, Profession, number, city. I want to perform all that operation by repeating Datatable.js component three times in App.js so that three tables render, not by writing table element three times in Datatable.js. So please tell me how to do that. I have got the JSON values in the data state and I know it can be displayed through the map() method but the problem is how to send these JSON file values in each repeating component and how Datatable.js would get it so that values would appear differently in each table as I mentioned above? data.json [ { "person": { "name": "Viswas Jha", "avatar": "images/profile.jpg" }, "city": "Mumbai", "email": "[email protected]", "number": 123456, "profession": "UI Designer" }, { "person": { "name": "Damini Pandit", "avatar": "images/profile.jpg" }, "city": "Delhi", "email": "[email protected]", "number": 1345645, "profession": "Front-end Developer" }, { "person": { "name": "Nihal Lingesh", "avatar": "images/profile.jpg" }, "city": "Delhi", "email": "[email protected]", "number": 12345689, "profession": "UX Designer" }, { "person": { "name": "Akash Singh", "avatar": "images/profile.jpg" }, "city": "Kolkata", "email": "[email protected]", "number": 1234566, "profession": "Backend Developer" } ] App.js import Datatable from './Datatable'; import '../node_modules/bootstrap/dist/css/bootstrap.min.css'; import {useState, useEffect} from 'react'; import './App.css'; const fetchData = new Promise((myResolve, myReject) => { let req = new XMLHttpRequest(); req.open('GET', "./data.json"); req.onload = function() { if (req.status == 200) { return myResolve(req.response); } else { return myReject("File not Found"); } }; req.send(); }); function App() { const [data, setData] = useState([]); useEffect(() => { fetchData.then((jsonData) => setData(JSON.parse(jsonData))); }, []); return ( <> <Datatable Data = {data} />; <Datatable Data= {data}/>; <Datatable Data= {data}/>; </> ); } export default App; Datatable.js import React from 'react'; import Grid from '@material-ui/core/Grid'; export default function Datatable({Data}) { return ( <div className='main text-center '> <h1 className='head py-3'>Datatable</h1> <Grid container spacing={1} className='contain m-auto mt-5 ps-5 pb-4'> <table className="table table-striped"> <thead> <tr> <th scope="col">Name</th> <th scope="col">Email</th> <th scope="col">Number</th> </tr> </thead> <tbody> { Data.map((elem, ind)=>{ return ( <tr key={ind}> <td className='d-flex justify-content-between align-items-center'> <img src={elem.person.avatar} alt="avatar"/> {elem.person.name}</td> <td>{elem.email}</td> <td>{elem.number}</td> </tr> ) }) } </tbody> </table> </Grid> </div> ); }
[ "Update your code with this.\nApp.js\nimport Datatable from './Datatable';\nimport '../node_modules/bootstrap/dist/css/bootstrap.min.css';\nimport {useState, useEffect} from 'react';\nimport './App.css';\n\nconst fetchData = new Promise((myResolve, myReject) => {\n let req = new XMLHttpRequest();\n req.open('GET', \"./data.json\");\n req.onload = function() {\n if (req.status == 200) {\n return myResolve(req.response);\n } else {\n return myReject(\"File not Found\");\n }\n };\n req.send();\n});\n\n\nfunction App() {\n \n const [data, setData] = useState([]);\n\n useEffect(() => {\n fetchData.then((jsonData) => setData(JSON.parse(jsonData)));\n }, []);\n\n return (\n <>\n <Datatable data = {data} />\n </>\n );\n}\n\nexport default App;\n\nDataTable.js\nimport React from 'react';\nimport Grid from '@material-ui/core/Grid';\n\n\nexport default function Datatable({props}) {\n\n \n return (\n <div className='main text-center '>\n <h1 className='head py-3'>Datatable</h1>\n <Grid container spacing={1} className='contain m-auto mt-5 ps-5 pb-4'>\n <table className=\"table table-striped \">\n <thead>\n <tr>\n <th scope=\"col\">Name</th>\n <th scope=\"col\">Email</th>\n <th scope=\"col\">Number</th>\n </tr>\n </thead>\n <tbody> \n {\n props.data.map((elem, ind)=>{\n return (\n <tr key={ind}>\n \n <td className='d-flex justify-content-between align-items-center'>\n <img src={elem.person.avatar} alt=\"avatar\"/>\n {elem.person.name}</td>\n <td>{elem.email}</td>\n <td>{elem.number}</td>\n \n </tr>\n )\n })\n }\n \n </tbody>\n </table>\n <table className=\"table table-striped \">\n <thead>\n <tr>\n <th scope=\"col\">Email</th>\n <th scope=\"col\">City</th>\n <th scope=\"col\">Number</th>\n </tr>\n </thead>\n <tbody> \n {\n props.data.map((elem, ind)=>{\n return (\n <tr key={ind}>\n <td>{elem.email}</td>\n <td>{elem.city}</td>\n <td>{elem.number}</td>\n </tr>\n )\n })\n }\n \n </tbody>\n </table>\n <table className=\"table table-striped \">\n <thead>\n <tr>\n <th scope=\"col\">Name</th>\n <th scope=\"col\">Profession</th>\n <th scope=\"col\">Number</th>\n <th scope=\"col\">City</th>\n </tr>\n </thead>\n <tbody> \n {\n props.data.map((elem, ind)=>{\n return (\n <tr key={ind}>\n <td>{elem.person.name}</td>\n <td>{elem.profession}</td>\n <td>{elem.number}</td>\n <td>{elem.city}</td>\n \n </tr>\n )\n })\n }\n \n </tbody>\n </table>\n </Grid>\n </div>\n );\n\n }\n\n" ]
[ 0 ]
[]
[]
[ "json", "react_component", "react_data_table_component", "react_table", "reactjs" ]
stackoverflow_0074675024_json_react_component_react_data_table_component_react_table_reactjs.txt
Q: Multiple results in object I am currently struggling with a bracket position problem. I currently have a program that lists my employees and I need to add more data. At the moment this is my output: [ { gender: 'female', birthDate: '1999-01-09T01:03:14.158Z', name: 'Steven', surname: 'Johnson', workload: 30 }, { gender: 'male', birthDate: '1989-04-09T09:40:26.496Z', name: 'Jon', surname: 'Doe', workload: 20 } ] and I want it like this: { total: 50, workload10: 13, workload20: 12, workload30: 10, workload40: 15 averageAge: 33.6, minAge: 19, maxAge: 55, medianAge: 38, medianWorkload: 28, averageWomenWorkload: 26, sortedByWorkload:[ { gender: 'female', birthDate: '1999-01-09T01:03:14.158Z', name: 'Stephanie', surname: 'Johnson', workload: 30 }, { gender: 'male', birthDate: '1989-04-09T09:40:26.496Z', name: 'Jon', surname: 'Doe', workload: 20 } ] } The problem is that I don't know how to define the brackets to get this shape; can someone please advise me? A: That would be a starting point: const initialData = [ { gender: "female", birthDate: "1999-01-09T01:03:14.158Z", name: "Steven", surname: "Johnson", workload: 30, }, { gender: "male", birthDate: "1989-04-09T09:40:26.496Z", name: "Jon", surname: "Doe", workload: 20, }, ]; const sortedArray = initialData.sort((a, b) => a.workload > b.workload ? 1 : -1 ); const output = { total: initialData.reduce((acc, e) => acc + e.workload, 0), sortedByWorkload: sortedArray, }; console.log(output); You would still have to add all the other attributes, workload10 through averageWomenWorkload. A: Hi @David Ryšánek, you can try to import your file like this: import data from "Your-file-path". First, you need to make your file a .json file (your-file-name.json). Now import your file: import data from "your-path". Here is an example: codesandbox. The second way: you can define your array like this: const data = [ { gender: 'female', birthDate: '1999-01-09T01:03:14.158Z', name: 'Steven', surname: 'Johnson', workload: 30 }, { gender: 'male', birthDate: '1989-04-09T09:40:26.496Z', name: 'Jon', surname: 'Doe', workload: 20 } ] export default data; Now you can use that data the same way.
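The first answer stops at total and sortedByWorkload. As a hedged sketch of a few of the remaining aggregate fields, building on the initialData from that answer, and assuming birthDate parses with new Date() and "age" means fractional years relative to now:

const ages = initialData.map(
  (e) => (Date.now() - new Date(e.birthDate).getTime()) / (365.25 * 24 * 3600 * 1000)
);
const sortedAges = [...ages].sort((a, b) => a - b);
const median = (xs) =>
  xs.length % 2 ? xs[(xs.length - 1) / 2] : (xs[xs.length / 2 - 1] + xs[xs.length / 2]) / 2;
const women = initialData.filter((e) => e.gender === 'female');

const stats = {
  averageAge: ages.reduce((a, b) => a + b, 0) / ages.length,
  minAge: Math.floor(Math.min(...ages)),
  maxAge: Math.floor(Math.max(...ages)),
  medianAge: median(sortedAges),
  averageWomenWorkload: women.reduce((a, e) => a + e.workload, 0) / women.length,
};
console.log(stats);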
Multiple results in object
I am currently struggling with a bracket position problem, I currently have a program that lists my employees and I need to add more data at the moment this is my output : [ { gender: 'female', birthDate: '1999-01-09T01:03:14.158Z', name: 'Steven', surname: 'Johnson', workload: 30 }, { gender: 'male', birthDate: '1989-04-09T09:40:26.496Z', name: 'Jon', surname: 'Doe', workload: 20 } ] and I want it like this: { total: 50, workload10: 13, workload20: 12, workload30: 10, workload40: 15 averageAge: 33.6, minAge: 19, maxAge: 55, medianAge: 38, medianWorkload: 28, averageWomenWorkload: 26, sortedByWorkload:[ { gender: 'female', birthDate: '1999-01-09T01:03:14.158Z', name: 'Stephanie', surname: 'Johnson', workload: 30 }, { gender: 'male', birthDate: '1989-04-09T09:40:26.496Z', name: 'Jon', surname: 'Doe', workload: 20 } ] } The problem is that I don't know how to define the brackets to get this shape, can someone please advise me ?
[ "That would be a starting point:\n\n\nconst initialData = [\n {\n gender: \"female\",\n birthDate: \"1999-01-09T01:03:14.158Z\",\n name: \"Steven\",\n surname: \"Johnson\",\n workload: 30,\n },\n {\n gender: \"male\",\n birthDate: \"1989-04-09T09:40:26.496Z\",\n name: \"Jon\",\n surname: \"Doe\",\n workload: 20,\n },\n];\n\nconst sortedArray = initialData.sort((a, b) =>\n a.workload > b.workload ? 1 : -1\n);\n\nconst output = {\n total: initialData.reduce((acc, e) => acc + e.workload, 0),\n sortedByWorkload: sortedArray,\n};\n\nconsole.log(output);\n\n\n\nYou would still have to add all the other attributes workload10 to averageWomenWorkload.\n", "Hi @David Ryšánek,\nYou can try to import your file like this:-\nimport data from \"Your-file-path\".\nFirst, you need to make your file your-file-name.json file.\nNow import your file import data from \"your-path\"\nHere is an exmple:- codesandbox\nThe second way, you can defined you array like this:-\nconst data = [\n {\n gender: 'female',\n birthDate: '1999-01-09T01:03:14.158Z',\n name: 'Steven',\n surname: 'Johnson',\n workload: 30\n },\n {\n gender: 'male',\n birthDate: '1989-04-09T09:40:26.496Z',\n name: 'Jon',\n surname: 'Doe',\n workload: 20\n }\n]\n\nexport default data;\n\nNow you can use same way. that data.\n" ]
[ 0, 0 ]
[]
[]
[ "brackets", "javascript", "node.js" ]
stackoverflow_0074677176_brackets_javascript_node.js.txt
Q: Offsetting a timestamp on a Cassandra query Probably a dumb question, but I'm using toTimestamp(now()) to retrieve the timestamp. Is there any way to offset the now() by a specified timeframe? What I have now: > print(session.execute('SELECT toTimestamp(now()) FROM system.local').one()) 2022-12-04 12:12:47.011000 My goal: > print(session.execute('SELECT toTimestamp(now() - 1h) FROM system.local').one()) 2022-12-04 11:12:47.011000 A: To offset the timestamp returned by the toTimestamp(now()) function in Apache Cassandra, you can use the dateOf function to subtract a specified amount of time from the current timestamp. Here is an example of how you can use this query in your code: result = session.execute('SELECT toTimestamp(dateOf(now()) - 1h) FROM system.local').one() print(result) You can use the same syntax to offset the timestamp by any amount of time (like 1d for 1 day). A: You're on the right track. But with the current example, it looks like you're trying to subtract an hour from now(). now() is a type-1 UUID (timeUUID in Cassandra). The date arithmetic operators will only work with dates and timestamps, so just pull that - 1h out one level of parens: > SELECT toTimestamp(now()) - 1h FROM system.local; system.totimestamp(system.now()) - 1h --------------------------------------- 2022-12-04 12:38:35.747000+0000 (1 rows) And then this works: row = session.execute("SELECT toTimestamp(now()) - 1h FROM system.local;").one() if row: print(row[0]) 2022-12-04 12:52:19.187000 NOTE: The parser is a little strict on this one. Make sure that the operator and duration are appropriately spaced. This works: SELECT toTimestamp(now()) - 1h This fails: SELECT toTimestamp(now())-1h
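An alternative worth noting, as a hedged sketch: compute the offset client-side with the Python driver and bind it as a parameter, which sidesteps CQL date arithmetic entirely. The events table and its columns here are hypothetical:

from datetime import datetime, timedelta

# compute the cutoff in the application instead of in CQL
cutoff = datetime.utcnow() - timedelta(hours=1)

# hypothetical table and columns; the driver binds the datetime as a timestamp
rows = session.execute(
    "SELECT * FROM my_keyspace.events WHERE sensor_id = %s AND created_at >= %s",
    (sensor_id, cutoff),
)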
Offsetting a timestamp on a Cassandra query
Probably a dumb question but I'm using toTimestamp(now()) to retrieve the timestamp. Is there any way to offset the now() by my specified timeframe? What I have now: > print(session.execute('SELECT toTimestamp(now()) FROM system.local').one()) 2022-12-04 12:12:47.011000 My goal: > print(session.execute('SELECT toTimestamp(now() - 1h) FROM system.local').one()) 2022-12-04 11:12:47.011000
[ "To offset the timestamp returned by the toTimestamp(now()) function in Apache Cassandra, you can use the dateOf function to subtract a specified amount of time from the current timestamp.\nHere is an example of how you can use this query in your code:\nresult = session.execute('SELECT toTimestamp(dateOf(now()) - 1h) FROM system.local').one()\n\nprint(result)\n\nYou can use the same syntax to offset the timestamp by any amount of time (like 1d for 1 day).\n", "You're on the right track.\nBut with the current example, it looks like you're trying to subtract an hour from now(). Now is a type-1 UUID (timeUUID in Cassandra). The date arithmetic operators will only work with dates and timestamps, so just pull that - 1h out one level of parens:\n> SELECT toTimestamp(now()) - 1h FROM system.local;\n\n system.totimestamp(system.now()) - 1h\n---------------------------------------\n 2022-12-04 12:38:35.747000+0000\n\n(1 rows)\n\nAnd then this works:\nrow = session.execute(\"SELECT toTimestamp(now()) - 1h FROM system.local;\").one()\nif row:\n print(row[0])\n\n2022-12-04 12:52:19.187000\n\nNOTE: The parser is a little strict on this one. Make sure that the operator and duration are appropriately spaced.\nThis works:\nSELECT toTimestamp(now()) - 1h\n\nThis fails:\nSELECT toTimestamp(now())-1h\n\n" ]
[ 0, 0 ]
[]
[]
[ "cassandra", "cql", "python", "python_3.x" ]
stackoverflow_0074675507_cassandra_cql_python_python_3.x.txt
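A client-side alternative to the CQL duration arithmetic shown above: compute the offset timestamp in Python and pass it as a bind value (the table and column in the commented query are illustrative, not from the question):

from datetime import datetime, timedelta, timezone

# the equivalent of toTimestamp(now()) - 1h, computed in the client
one_hour_ago = datetime.now(timezone.utc) - timedelta(hours=1)
print(one_hour_ago)

# with the DataStax Python driver the value would be bound, e.g.:
# session.execute("SELECT * FROM my_table WHERE ts > %s", (one_hour_ago,))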
Q: Uri.Path is not parsing / in scala I have a filePath which I am trying to extract from a String. The string looks like: val myPath: String = InitialIndexValue/this/is/some/kind/of/path val firstIndex = myPath.indexOf('/') val extractedPath = myPath.substring(firstIndex) //I am creating a Uri path val uriPath = Uri.Path./(extractedPath) This is returning %2F in place of all '/'. The uri path is returning %2Fthis%2Fis%2Fsome%2Fkind%2Fof%2Fpath. The imports are: import akka.http.scaladsl.model.Uri import java.nio.file.Path My questions are: Why is '/' replaced by "%2F" by Uri.Path? Is there any other way to handle this? TIA A: Method / on Uri.Path is intended to concatenate path segments together into one path. The argument to / is a segment (e.g. "this" or "is" or "some"), not a whole path. When the segment contains special characters, they are URL encoded, e.g. character '/' becomes "%2F" and space would be "%20". What you probably need to use is the apply method of Uri.Path which parses the given string as a whole path: val uriPath = Uri.Path(extractedPath)
Uri.Path is not parsing / in scala
I have a filePath which I am trying to extract from a String. The string looks like: val myPath: String = InitialIndexValue/this/is/some/kind/of/path val firstIndex = myPath.indexOf('/') val extractedPath = myPath.substring(firstIndex) //I am creating a Uri path val uriPath = Uri.Path./(extractedPath) This is returning %2F in place of all '/'. The uri path is returning %2Fthis%2Fis%2Fsome%2Fkind%2Fof%2Fpath. The imports are: import akka.http.scaladsl.model.Uri import java.nio.file.Path My questions are: Why is '/' replaced by "%2F" by Uri.Path? Is there any other way to handle this? TIA
[ "Method / on Uri.Path is intended to concatenate path segments together into one path. The argument to / is a segment (e.g. \"this\" or \"is\" or \"some\"), not a whole path. When the segment contains special characters, they are URL encoded, e.g. character '/' becomes \"%2F\" and space would be \"%20\".\nWhat you probably need to use is the apply method of Uri.Path which parses the given string as a whole path:\nval uriPath = Uri.Path(extractedPath)\n\n" ]
[ 1 ]
[]
[]
[ "akka", "path", "scala", "uri" ]
stackoverflow_0074671878_akka_path_scala_uri.txt
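The %2F behavior above is ordinary percent-encoding of a path segment: a single segment must not contain a raw '/', so it gets escaped. Python's standard library shows the same distinction, included here only to illustrate the encoding rule:

from urllib.parse import quote

path = "/this/is/some/kind/of/path"
print(quote(path, safe=""))   # %2Fthis%2Fis%2Fsome... (treated as one segment)
print(quote(path, safe="/"))  # /this/is/some/kind/of/path (treated as a path)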
Q: I'm trying to build a Quiz App where the user defines the amount of QA but the loop fails and doesn't reach the specified amount given by the user Fair Warning: I'm a student trying to build an app for our research. So I'm trying to build an app where the user defines the amount of Q/A he/she wants, then the app will ask for the user to put different Q/A until it reaches an equal amount, where it will then open up a new page to make the user answer those Q/A. The problem is that the loop fails to repeat at the amount stated by the user, making the app unable to store, let's say, 10 Q/A to the JShared / Shared Preference. Here's the loop code: package com.prgr.quizards.canary; import android.app.Activity; import android.content.Intent; import android.content.SharedPreferences; import android.os.Bundle; import android.widget.EditText; import android.widget.TextView; import android.widget.Toast; import androidx.appcompat.widget.AppCompatButton; import com.google.android.material.textfield.TextInputEditText; import com.google.gson.Gson; import java.util.HashMap; import java.util.Objects; public class question extends Activity { private HashMap<String, Object> map = new HashMap<>(); private TextView text; private EditText inop; private TextInputEditText ans; private AppCompatButton btn; private SharedPreferences jshared2; private SharedPreferences jshared; @Override protected void onCreate(Bundle _savedInstanceState) { super.onCreate(_savedInstanceState); setContentView(R.layout.activity_question); initializedata(); AppCompatButton btn2 = findViewById(R.id.button3); btn = findViewById(R.id.button); btn2.setOnClickListener(v -> gotoback()); btn.setOnClickListener(view -> logicg()); } public void gotoback(){ Intent intent = new Intent(question.this, activity_home_screen.class); startActivity(intent); } private void initializedata(){ jshared = getSharedPreferences("j", Activity.MODE_PRIVATE); jshared2 = getSharedPreferences("j2", Activity.MODE_PRIVATE); inop = findViewById(R.id.inop); ans = findViewById(R.id.ans); text = findViewById(R.id.text1); } private void logicg() { String mount = jshared.getString("amount", ""); int amounts = Integer.parseInt(mount); for (int i = 1; i < amounts; i++) { //String shite = Integer.toString(qloop); //Toast.makeText(getApplicationContext(), shite, Toast.LENGTH_SHORT).show(); if (i == amounts) { Intent intent = new Intent(question.this, answerscrn.class); startActivity(intent); } else{ map = new HashMap<>(); map.put("answer", Objects.requireNonNull(ans.getText()).toString()); map.put("question", inop.getText().toString()); jshared2.edit().putString("data", new Gson().toJson(map)).commit(); } } } } I tried to do a for loop + if else inside the for loop and that only returns the error of Condition 'i == amounts' is always 'false'. What I expect is for it to loop till it reaches the number stated by amounts (which is the user-defined value) and then open up a new page using an intent. A: From what I understand, your problem is in the loop, though I don't understand how you ask/put different questions.
i == amounts is never true because the condition in the FOR loop is: i < amounts You can simplify your code like this: private void logicg() { String mount = jshared.getString("amount", ""); int amounts = Integer.parseInt(mount); for (int i = 0; i < amounts; i++) { //String shite = Integer.toString(qloop); //Toast.makeText(getApplicationContext(), shite, Toast.LENGTH_SHORT).show(); map = new HashMap<>(); map.put("answer", Objects.requireNonNull(ans.getText()).toString()); map.put("question", inop.getText().toString()); jshared2.edit().putString("data", new Gson().toJson(map)).commit(); } Intent intent = new Intent(question.this, answerscrn.class); startActivity(intent); } A: If for (int i = 1; i < amounts; i++) is your loop, and the user selects the number of questions, then the loop stops one step early. For example, if your user selects 10, the loop will end at 9, because it starts at i = 1 and runs only while i < amounts (the value entered by the user). To meet the goal, you need to change the loop to for (int i = 1; i <= amounts; i++)
I'm trying to build a Quiz App where the user defines the amount of QA but the loop fails and doesn't reach the specified amount given by the user
Fair Warning: I'm a student trying to build an app for our research. So I'm Trying to build an app where the user Defines the amount of Q/A he/she wants, then the app will ask for the user to put different Q/A until it reaches the an equal amount where it will then open up a new page to make the user answer those Q/A. The Problem is that the Loop fails to repeat at the stated amount by user making the app not being able to store let's say 10 Q/A to the JShared / Shared Preference. Here's the loop code: package com.prgr.quizards.canary; import android.app.Activity; import android.content.Intent; import android.content.SharedPreferences; import android.os.Bundle; import android.widget.EditText; import android.widget.TextView; import android.widget.Toast; import androidx.appcompat.widget.AppCompatButton; import com.google.android.material.textfield.TextInputEditText; import com.google.gson.Gson; import java.util.HashMap; import java.util.Objects; public class question extends Activity { private HashMap<String, Object> map = new HashMap<>(); private TextView text; private EditText inop; private TextInputEditText ans; private AppCompatButton btn; private SharedPreferences jshared2; private SharedPreferences jshared; @Override protected void onCreate(Bundle _savedInstanceState) { super.onCreate(_savedInstanceState); setContentView(R.layout.activity_question); initializedata(); AppCompatButton btn2 = findViewById(R.id.button3); btn = findViewById(R.id.button); btn2.setOnClickListener(v -> gotoback()); btn.setOnClickListener(view -> logicg()); } public void gotoback(){ Intent intent = new Intent(question.this, activity_home_screen.class); startActivity(intent); } private void initializedata(){ jshared = getSharedPreferences("j", Activity.MODE_PRIVATE); jshared2 = getSharedPreferences("j2", Activity.MODE_PRIVATE); inop = findViewById(R.id.inop); ans = findViewById(R.id.ans); text = findViewById(R.id.text1); } private void logicg() { String mount = jshared.getString("amount", ""); int amounts = Integer.parseInt(mount); for (int i = 1; i < amounts; i++) { //String shite = Integer.toString(qloop); //Toast.makeText(getApplicationContext(), shite, Toast.LENGTH_SHORT).show(); if (i == amounts) { Intent intent = new Intent(question.this, answerscrn.class); startActivity(intent); } else{ map = new HashMap<>(); map.put("answer", Objects.requireNonNull(ans.getText()).toString()); map.put("question", inop.getText().toString()); jshared2.edit().putString("data", new Gson().toJson(map)).commit(); } } } } I tried to do a for loop + if else inside the for loop and that only returns the error of Condition 'i == amounts' is always 'false' what i expect is for it to loop till it reaches the same number stated by the amounts (which is the user defined value) to open up a new page using intent.
[ "From what I understand, your problem it in the loop, tho I don't understand how do you ask/put diffrent questions.\ni == amounts is never true because the condition in the FOR loop is: i < amounts\nYou can simplify your code like this:\nprivate void logicg() {\n String mount = jshared.getString(\"amount\", \"\");\n int amounts = Integer.parseInt(mount);\n\n for (int i = 0; i < amounts; i++) {\n //String shite = Integer.toString(qloop);\n //Toast.makeText(getApplicationContext(), shite, Toast.LENGTH_SHORT).show();\n \n map = new HashMap<>();\n map.put(\"answer\", Objects.requireNonNull(ans.getText()).toString());\n map.put(\"question\", inop.getText().toString());\n \n jshared2.edit().putString(\"data\", new Gson().toJson(map)).commit();\n\n \n }\n\n Intent intent = new Intent(question.this, answerscrn.class);\n startActivity(intent);\n}\n\n", "for (int i = 1; i < amounts; i++) IF THIS WAS YOUR LOOP AND YOU TOLD THAT USE WILL SELECT THE NO OF QUESTION SO\nIF YOU ADD THIS LOOP THEN YOUR APP WILL STOP AT -1 STEP FOR EXAMPLE\nIF YOUR USER SELECT 10 AND THIS LOOP WILL END AT IT WILL END AT 9\nBECAUSE YOUR APP IS STARTING AT I=1 AND I<AMOUNT (THE VALUE ENTER BY THE USE SO YOU CAN MEET THE GOAL\nSO YOU NEED TO CHANGE THE LOOP TO for (int i = 1; i < =amounts; i++)\n" ]
[ 0, 0 ]
[]
[]
[ "android", "java", "loops" ]
stackoverflow_0074674998_android_java_loops.txt
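Both answers point at the same off-by-one: with i < amounts the loop body can never observe i == amounts. A minimal Python sketch of just that condition, independent of the Android code:

amounts = 10

for i in range(1, amounts):        # 1..9, like for (int i = 1; i < amounts; i++)
    assert i != amounts            # never fires: i == amounts is unreachable

for i in range(1, amounts + 1):    # 1..10, like i <= amounts
    if i == amounts:
        print("reached the user-defined amount")  # fires exactly once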
Q: Secant method in MATLAB stops before it reaches the epsilon So I'm pretty new to MATLAB and numerical analysis and I'm trying to write the code for the secant method, but for some reason it just stops after the first iteration. Here is my code for the secant method: function [root] = secant(func, x0, x1, N, eps_step, max_eps) last_x = x0; curr_x = x1; new_x = x1; diff_x = curr_x - last_x; diff_f = func(curr_x) - func(last_x); k = 0; if func(x0) == 0 root = x0; fprintf('results for secant iterations \n'); fprintf(' x0,x1 k x_k f(x_k) |x_k-x_(k+1)|\n') fprintf(' --------------- --- ----------- --------- ---------------\n') fprintf('N = %12.8f {x0=%12.8f,x1=%12.8f} %3i %12.8f %12.8f %12.8f \neps = %3i \ndelta = %3i \n\n',N, last_x, curr_x, k ,last_x,func(new_x), diff_x, eps_step, max_eps) return; elseif func(x1) == 0 root = x1; fprintf('results for secant iterations \n'); fprintf(' x0,x1 k x_k f(x_k) |x_k-x_(k+1)|\n') fprintf(' --------------- --- ----------- --------- ---------------\n') fprintf('N = %12.8f {x0=%12.8f,x1=%12.8f} %3i %12.8f %12.8f %12.8f \neps = %3i \ndelta = %3i \n\n',N, last_x, curr_x, k ,curr_x,func(new_x), diff_x, eps_step, max_eps) return; end while (k < N) k = k+1; diff_x = curr_x - last_x; diff_f = func(curr_x) - func(last_x); new_x = curr_x - func(curr_x)*(diff_x/diff_f); % compute the new value of x % if abs((new_x - curr_x)*(diff_f/diff_x)+func(curr_x)) < max_eps fprintf('eps_step = %12.8f\n', eps_step); %Doesnt print this line for some reason if (abs(diff_x) < eps_step) if (abs(diff_f) < max_eps && new_x > 0) root = new_x; fprintf('results for secant iterations \n'); fprintf(' x0,x1 k x_k f(x_k) |x_k-x_(k+1)|\n') fprintf(' --------------- --- ----------- --------- ---------------\n') fprintf('N = %12.8f {x0=%12.8f,x1=%12.8f} %3i %12.8f %12.8f %12.8f \neps = %3i \ndelta = %3i \n\n',N, x0, x1, k ,new_x,func(new_x), diff_x, eps_step, max_eps) break end end last_x = curr_x; curr_x = new_x; end end the function and call function are: func = @(x) x.^2-0.2-4*x.*sin(x)+(2*sin(x)).^2; secant(func, 0, 1, 50, 10^-6, 10^-6); But the result I get is: results for secant iterations x0,x1 k x_k f(x_k) |x_k-x_(k+1)| --------------- --- ----------- --------- --------------- N = 50.00000000 {x0= 0.00000000,x1= 1.00000000} 1 0.42880752 -0.03777984 1.00000000 eps = 1.000000e-06 delta = 1.000000e-06 the root is supposed to be 0.484736. Where is the mistake? A: It seems like the if statement inside your while loop is not being entered because abs(diff_x) is not less than eps_step. Since diff_x is initialized to curr_x - last_x, which is equal to x1 - x0, it is unlikely that it will be less than eps_step on the first iteration. One possible solution is to combine the nested if statements into a single condition inside the while loop, like this: while (k < N) k = k+1; diff_x = curr_x - last_x; diff_f = func(curr_x) - func(last_x); new_x = curr_x - func(curr_x)*(diff_x/diff_f); % compute the new value of x if (abs(diff_x) < eps_step && abs(diff_f) < max_eps && new_x > 0) root = new_x; fprintf('results for secant iterations \n'); fprintf(' x0,x1 k x_k f(x_k) |x_k-x_(k+1)|\n') fprintf(' --------------- --- ----------- --------- ---------------\n') fprintf('N = %12.8f {x0=%12.8f,x1=%12.8f} %3i %12.8f %12.8f %12.8f \neps = %3i \ndelta = %3i \n\n',N, x0, x1, k ,new_x,func(new_x), diff_x, eps_step, max_eps) break end last_x = curr_x; curr_x = new_x; end This way, the if statement will be executed on each iteration, and the loop will break if any of the conditions are met.
You may also want to adjust the initial values of x0 and x1 to make sure that they are not too far apart, as this can affect the convergence of the secant method.
Secant method in MATLAB stops before it reaches the epsilon
So I'm pretty new to MATLAB and numerical analysis and I'm trying to write the code for the secant method, but for some reason it just stops after the first iteration. Here is my code for the secant method: function [root] = secant(func, x0, x1, N, eps_step, max_eps) last_x = x0; curr_x = x1; new_x = x1; diff_x = curr_x - last_x; diff_f = func(curr_x) - func(last_x); k = 0; if func(x0) == 0 root = x0; fprintf('results for secant iterations \n'); fprintf(' x0,x1 k x_k f(x_k) |x_k-x_(k+1)|\n') fprintf(' --------------- --- ----------- --------- ---------------\n') fprintf('N = %12.8f {x0=%12.8f,x1=%12.8f} %3i %12.8f %12.8f %12.8f \neps = %3i \ndelta = %3i \n\n',N, last_x, curr_x, k ,last_x,func(new_x), diff_x, eps_step, max_eps) return; elseif func(x1) == 0 root = x1; fprintf('results for secant iterations \n'); fprintf(' x0,x1 k x_k f(x_k) |x_k-x_(k+1)|\n') fprintf(' --------------- --- ----------- --------- ---------------\n') fprintf('N = %12.8f {x0=%12.8f,x1=%12.8f} %3i %12.8f %12.8f %12.8f \neps = %3i \ndelta = %3i \n\n',N, last_x, curr_x, k ,curr_x,func(new_x), diff_x, eps_step, max_eps) return; end while (k < N) k = k+1; diff_x = curr_x - last_x; diff_f = func(curr_x) - func(last_x); new_x = curr_x - func(curr_x)*(diff_x/diff_f); % compute the new value of x % if abs((new_x - curr_x)*(diff_f/diff_x)+func(curr_x)) < max_eps fprintf('eps_step = %12.8f\n', eps_step); %Doesnt print this line for some reason if (abs(diff_x) < eps_step) if (abs(diff_f) < max_eps && new_x > 0) root = new_x; fprintf('results for secant iterations \n'); fprintf(' x0,x1 k x_k f(x_k) |x_k-x_(k+1)|\n') fprintf(' --------------- --- ----------- --------- ---------------\n') fprintf('N = %12.8f {x0=%12.8f,x1=%12.8f} %3i %12.8f %12.8f %12.8f \neps = %3i \ndelta = %3i \n\n',N, x0, x1, k ,new_x,func(new_x), diff_x, eps_step, max_eps) break end end last_x = curr_x; curr_x = new_x; end end the function and call function are: func = @(x) x.^2-0.2-4*x.*sin(x)+(2*sin(x)).^2; secant(func, 0, 1, 50, 10^-6, 10^-6); But the result I get is: results for secant iterations x0,x1 k x_k f(x_k) |x_k-x_(k+1)| --------------- --- ----------- --------- --------------- N = 50.00000000 {x0= 0.00000000,x1= 1.00000000} 1 0.42880752 -0.03777984 1.00000000 eps = 1.000000e-06 delta = 1.000000e-06 the root is supposed to be 0.484736. Where is the mistake?
[ "It seems like the if statement inside your while loop is not being entered because abs(diff_x) is not less than eps_step. Since diff_x is initialized to curr_x - last_x, which is equal to x1 - x0, it is unlikely that it will be less than eps_step on the first iteration.\nOne possible solution is to move the if statement outside of the while loop, like this:\nwhile (k < N)\nk = k+1;\ndiff_x = curr_x - last_x;\ndiff_f = func(curr_x) - func(last_x);\nnew_x = curr_x - func(curr_x)*(diff_x/diff_f); % compute the new value of x\nif (abs(diff_x) < eps_step && abs(diff_f) < max_eps && new_x > 0)\nroot = new_x;\nfprintf('results for secant iterations \\n');\nfprintf(' x0,x1 k x_k f(x_k) |x_k-x_(k+1)|\\n')\nfprintf(' --------------- --- ----------- --------- ---------------\\n')\nfprintf('N = %12.8f {x0=%12.8f,x1=%12.8f} %3i %12.8f %12.8f %12.8f \\neps = %3i \\ndelta = %3i \\n\\n',N, x0, x1, k ,new_x,func(new_x), diff_x, eps_step, max_eps)\nbreak\nend\nlast_x = curr_x;\ncurr_x = new_x;\nend\n\nThis way, the if statement will be executed on each iteration, and the loop will break if any of the conditions are met. You may also want to adjust the initial values of x0 and x1 to make sure that they are not too far apart, as this can affect the convergence of the secant method.\n" ]
[ 0 ]
[]
[]
[ "matlab", "numerical_methods" ]
stackoverflow_0074677272_matlab_numerical_methods.txt
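For reference, the secant iteration itself is short once the printing is stripped away. A Python sketch using the question's function: starting from x0 = 0 and x1 = 1, the first step gives 0.42880752 (matching the question's output) and the iteration converges to roughly 0.484736:

from math import sin

f = lambda x: x**2 - 0.2 - 4*x*sin(x) + (2*sin(x))**2

def secant(f, x0, x1, tol=1e-6, max_iter=50):
    for _ in range(max_iter):
        fx0, fx1 = f(x0), f(x1)
        # secant update: x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
        x2 = x1 - fx1 * (x1 - x0) / (fx1 - fx0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    raise RuntimeError("no convergence")

print(secant(f, 0.0, 1.0))  # ~0.484736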
Q: truncate portions of a string found between a regex pattern i need to truncate portions of a string that are found between a regex pattern. if a portion length is <= to the given padding, leave the portion as it is. the starting portion should be truncated from the left. the ending portion should be truncated from the right. the portions in between should be truncated in the middle. if no pattern found, leave the string untouched code: // note that the 'x' characters below could be any characters, even spaces or line breaks. const str = 'xxxxx<mark>foo</mark>xx<mark>bar</mark>xxxxxxxxx<mark>baz</mark>xxxxxxxx' const truncateBetweenPattern = (str, pattern, padding=0, sep='...') => { // code } const pattern = '(<mark>.+</mark>)' // (not sure if this is valid) const result = truncateBetweenPattern(str, pattern, 3) output: result === '...xxx<mark>foo</mark>xx<mark>bar</mark>xxx...xxx<mark>baz</mark>xxx...' A: You could split the string by the pattern, also producing the matched pattern itself (by making the pattern a capture group). Then map each part. When it is a separating part (your mark tag), it will have an odd index, and in that case just echo that part without change. If it is not a separating part, then map it using another regex that will match when a separator needs to be injected. Design three regexes for this purpose: one for the prefix, one for the postfix, and one for all other parts. The case where the separator is not found at all, the original string is returned (boundary case). Here is how that could be coded: const truncateBetweenPattern = (str, pattern, padding=0, sep='...') => { const re = [ RegExp(`^().+?(.{${padding}})$`, "s"), RegExp(`^(.{${padding}}).+?(.{${padding}})$`, "s"), RegExp(`^(.{${padding}}).+?()$`, "s") ]; const parts = str.split(RegExp(`(${pattern})`, "s")); return parts.length < 2 ? str : parts.map((part, i, {length}) => i % 2 ? part : part.replace(re[(i > 0) + (i == length - 1)], `$1${sep}$2`) ).join(""); } const pattern = '<mark>.+?</mark>'; // Make "+" lazy const str = 'xxxxx<mark>foo</mark>xx<mark>bar</mark>xxxxxxxxx<mark>baz</mark>xxxxxxxx' const result = truncateBetweenPattern(str, pattern, 3); console.log(result); A: If your environment supports a lookbehind assertion, you can account for the 4 different scenario's with capture groups and lookarounds. In the code check if the group value exists, and based upon the group number return the right replacement. You capture either <mark>....</mark> in which case you just return the unmodified match. For </mark>....</mark> you do the replacement with the separator if the string length is greater than 2 times the padding. For the part till the first occurrence of <mark> or the part after the last occurrence, you do the replacement if the string length is greater than the padding. See the regex capture group values. 
const regex = /(<mark>[^]*?<\/mark>)|(?<=<\/mark>)([^]*?)(?=<mark>)|([^]*?)(?=<mark>)|(?<=<\/mark>)([^]*)/g; const truncateBetweenPattern = (str, pattern, padding = 0, sep = '...') => { if (padding <= 0) return str; return str.replace(regex, (m, g1, g2, g3, g4) => { if (g1) return g1; else if (g2 && g2.length > padding * 2) { return g2.slice(0, padding) + sep + g2.slice(-padding); } else if (g3 && g3.length > padding) { return sep + g3.slice(-padding); } else if (g4 && g4.length > padding) { return g4.slice(0, padding) + sep; } else return m; }) } const strings = [ 'xxxxx<mark>foo</mark>xx<mark>bar</mark>xxxxxxxxx<mark>baz</mark>xxxxxxxx', 'xxxxx<mark>foo</mark>xx<mark>bar</mark>xxxxxxx<mark>baz</mark>xxxxxxxx', 'xxxxx<mark>foo</mark>xx<mark>bar</mark>xxxxxx<mark>baz</mark>xxxxxxxx', 'xx<mark>foo</mark>xx<mark>bar</mark>xxxxxxxx<mark>baz</mark>xxxxxxxx', 'xx<mark>foo</mark>xx<mark>bar</mark>xxxxxxxxxxxxxxxxxxxx<mark>baz</mark>xx<mark>foo</mark>xx<mark>bar</mark>xxxxxxxxxxxxxxxxxxxx<mark>baz</mark>' ]; strings.forEach(s => console.log(truncateBetweenPattern(s, regex, 3))); A: The following code should do what you want: const truncateBetweenPattern = (str, pattern, padding=0, sep='...') => { const regex = new RegExp(pattern, 'g'); let result = ''; let match; while ((match = regex.exec(str)) !== null) { // Truncate the left portion of the match if needed. const left = match.index; if (left > padding) { result += str.slice(0, left - padding) + sep; } else { result += str.slice(0, left); } // Truncate the middle portion of the match if needed. const middle = match[0].length; if (middle > padding * 2) { result += str.slice(left, left + padding) + sep + str.slice(left + middle - padding); } else { result += match[0]; } // Truncate the right portion of the match if needed. const right = str.length - regex.lastIndex; if (right > padding) { result += sep + str.slice(regex.lastIndex, regex.lastIndex + padding); } else { result += str.slice(regex.lastIndex); } // Update the string for the next iteration. str = str.slice(regex.lastIndex); } return result; } Here's an example of how to use the function: const str = 'xxxxx<mark>foo</mark>xx<mark>bar</mark>xxxxxxxxx<mark>baz</mark>xxxxxxxx'; const pattern = '(<mark>.+</mark>)'; const result = truncateBetweenPattern(str, pattern, 3); console.log(result); This should output: xxx<mark>foo</mark>x...<mark>ba...z</mark>...
truncate portions of a string found between a regex pattern
I need to truncate portions of a string that are found between a regex pattern. If a portion length is <= the given padding, leave the portion as it is. The starting portion should be truncated from the left. The ending portion should be truncated from the right. The portions in between should be truncated in the middle. If no pattern is found, leave the string untouched. code: // note that the 'x' characters below could be any characters, even spaces or line breaks. const str = 'xxxxx<mark>foo</mark>xx<mark>bar</mark>xxxxxxxxx<mark>baz</mark>xxxxxxxx' const truncateBetweenPattern = (str, pattern, padding=0, sep='...') => { // code } const pattern = '(<mark>.+</mark>)' // (not sure if this is valid) const result = truncateBetweenPattern(str, pattern, 3) output: result === '...xxx<mark>foo</mark>xx<mark>bar</mark>xxx...xxx<mark>baz</mark>xxx...'
[ "You could split the string by the pattern, also producing the matched pattern itself (by making the pattern a capture group). Then map each part. When it is a separating part (your mark tag), it will have an odd index, and in that case just echo that part without change. If it is not a separating part, then map it using another regex that will match when a separator needs to be injected. Design three regexes for this purpose: one for the prefix, one for the postfix, and one for all other parts.\nThe case where the separator is not found at all, the original string is returned (boundary case).\nHere is how that could be coded:\n\n\nconst truncateBetweenPattern = (str, pattern, padding=0, sep='...') => {\n const re = [\n RegExp(`^().+?(.{${padding}})$`, \"s\"),\n RegExp(`^(.{${padding}}).+?(.{${padding}})$`, \"s\"),\n RegExp(`^(.{${padding}}).+?()$`, \"s\")\n ];\n const parts = str.split(RegExp(`(${pattern})`, \"s\"));\n return parts.length < 2 ? str\n : parts.map((part, i, {length}) =>\n i % 2 ? part : part.replace(re[(i > 0) + (i == length - 1)], `$1${sep}$2`)\n ).join(\"\");\n}\n\nconst pattern = '<mark>.+?</mark>'; // Make \"+\" lazy\nconst str = 'xxxxx<mark>foo</mark>xx<mark>bar</mark>xxxxxxxxx<mark>baz</mark>xxxxxxxx'\nconst result = truncateBetweenPattern(str, pattern, 3);\nconsole.log(result);\n\n\n\n", "If your environment supports a lookbehind assertion, you can account for the 4 different scenario's with capture groups and lookarounds.\nIn the code check if the group value exists, and based upon the group number return the right replacement.\nYou capture either <mark>....</mark> in which case you just return the unmodified match.\nFor </mark>....</mark> you do the replacement with the separator if the string length is greater than 2 times the padding.\nFor the part till the first occurrence of <mark> or the part after the last occurrence, you do the replacement if the string length is greater than the padding.\nSee the regex capture group values.\n\n\nconst regex = /(<mark>[^]*?<\\/mark>)|(?<=<\\/mark>)([^]*?)(?=<mark>)|([^]*?)(?=<mark>)|(?<=<\\/mark>)([^]*)/g;\n\nconst truncateBetweenPattern = (str, pattern, padding = 0, sep = '...') => {\n if (padding <= 0) return str;\n\n return str.replace(regex, (m, g1, g2, g3, g4) => {\n if (g1) return g1;\n else if (g2 && g2.length > padding * 2) {\n return g2.slice(0, padding) + sep + g2.slice(-padding);\n } else if (g3 && g3.length > padding) {\n return sep + g3.slice(-padding);\n } else if (g4 && g4.length > padding) {\n return g4.slice(0, padding) + sep;\n } else return m;\n })\n}\n\nconst strings = [\n 'xxxxx<mark>foo</mark>xx<mark>bar</mark>xxxxxxxxx<mark>baz</mark>xxxxxxxx',\n 'xxxxx<mark>foo</mark>xx<mark>bar</mark>xxxxxxx<mark>baz</mark>xxxxxxxx',\n 'xxxxx<mark>foo</mark>xx<mark>bar</mark>xxxxxx<mark>baz</mark>xxxxxxxx',\n 'xx<mark>foo</mark>xx<mark>bar</mark>xxxxxxxx<mark>baz</mark>xxxxxxxx',\n 'xx<mark>foo</mark>xx<mark>bar</mark>xxxxxxxxxxxxxxxxxxxx<mark>baz</mark>xx<mark>foo</mark>xx<mark>bar</mark>xxxxxxxxxxxxxxxxxxxx<mark>baz</mark>'\n];\n\nstrings.forEach(s => console.log(truncateBetweenPattern(s, regex, 3)));\n\n\n\n", "The following code should do what you want:\nconst truncateBetweenPattern = (str, pattern, padding=0, sep='...') => {\n const regex = new RegExp(pattern, 'g');\n let result = '';\n let match;\n\n while ((match = regex.exec(str)) !== null) {\n // Truncate the left portion of the match if needed.\n const left = match.index;\n if (left > padding) {\n result += str.slice(0, left - padding) + sep;\n } else {\n 
result += str.slice(0, left);\n }\n\n // Truncate the middle portion of the match if needed.\n const middle = match[0].length;\n if (middle > padding * 2) {\n result += str.slice(left, left + padding) + sep + str.slice(left + middle - padding);\n } else {\n result += match[0];\n }\n\n // Truncate the right portion of the match if needed.\n const right = str.length - regex.lastIndex;\n if (right > padding) {\n result += sep + str.slice(regex.lastIndex, regex.lastIndex + padding);\n } else {\n result += str.slice(regex.lastIndex);\n }\n\n // Update the string for the next iteration.\n str = str.slice(regex.lastIndex);\n }\n\n return result;\n}\n\nHere's an example of how to use the function:\nconst str = 'xxxxx<mark>foo</mark>xx<mark>bar</mark>xxxxxxxxx<mark>baz</mark>xxxxxxxx';\nconst pattern = '(<mark>.+</mark>)';\nconst result = truncateBetweenPattern(str, pattern, 3);\n\nconsole.log(result);\n\nThis should output:\nxxx<mark>foo</mark>x...<mark>ba...z</mark>...\n\n" ]
[ 2, 1, 0 ]
[]
[]
[ "javascript", "regex", "string", "truncate" ]
stackoverflow_0074674559_javascript_regex_string_truncate.txt
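The split-then-map idea from the first answer translates directly: splitting on a capturing group keeps the matched <mark> chunks at the odd indices, so only the filler between them gets truncated. A Python sketch of that logic (the pattern is assumed to be non-greedy and to contain no capturing groups of its own):

import re

def truncate_between(s, pattern, pad=0, sep="..."):
    parts = re.split(f"({pattern})", s, flags=re.S)
    if len(parts) < 2 or pad <= 0:
        return s
    out, last = [], len(parts) - 1
    for i, p in enumerate(parts):
        if i % 2:                                  # a matched chunk: keep as-is
            out.append(p)
        elif i == 0:                               # prefix: truncate from the left
            out.append(p if len(p) <= pad else sep + p[-pad:])
        elif i == last:                            # suffix: truncate from the right
            out.append(p if len(p) <= pad else p[:pad] + sep)
        else:                                      # middle filler: truncate the middle
            out.append(p if len(p) <= 2 * pad else p[:pad] + sep + p[-pad:])
    return "".join(out)

s = 'xxxxx<mark>foo</mark>xx<mark>bar</mark>xxxxxxxxx<mark>baz</mark>xxxxxxxx'
print(truncate_between(s, r"<mark>.+?</mark>", 3))
# ...xxx<mark>foo</mark>xx<mark>bar</mark>xxx...xxx<mark>baz</mark>xxx...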
Q: Strip html from string Ruby on Rails I'm working with Ruby on Rails. Is there a way to strip HTML from a string using sanitize or an equivalent method and keep only the text inside the value attribute on an input tag? A: If we want to use this in a model ActionView::Base.full_sanitizer.sanitize(html_string) which is the code in the "strip_tags" method A: There's a strip_tags method in ActionView::Helpers::SanitizeHelper: http://api.rubyonrails.org/classes/ActionView/Helpers/SanitizeHelper.html#method-i-strip_tags Edit: for getting the text inside the value attribute, you could use something like Nokogiri with an Xpath expression to get that out of the string. A: Yes, call this: sanitize(html_string, tags:[]) A: ActionView::Base.full_sanitizer.sanitize(html_string) White list of tags and attributes can be specified as below ActionView::Base.full_sanitizer.sanitize(html_string, :tags => %w(img br p), :attributes => %w(src style)) Above statement allows tags img, br and p and attributes src and style. A: I've used the Loofah library, as it is suitable for both HTML and XML (both documents and string fragments). It is the engine behind the html sanitizer gem. I'm simply pasting the code example to show how simple it is to use. Loofah Gem unsafe_html = "ohai! <div>div is safe</div> <script>but script is not</script>" doc = Loofah.fragment(unsafe_html).scrub!(:strip) doc.to_s # => "ohai! <div>div is safe</div> " doc.text # => "ohai! div is safe " A: How about this? white_list_sanitizer = Rails::Html::WhiteListSanitizer.new WHITELIST = ['p','b','h1','h2','h3','h4','h5','h6','li','ul','ol','small','i','u'] [Your, Models, Here].each do |klass| klass.all.each do |ob| klass.attribute_names.each do |attrs| if ob.send(attrs).is_a? String ob.send("#{attrs}=", white_list_sanitizer.sanitize(ob.send(attrs), tags: WHITELIST, attributes: %w(id style)).gsub(/<p>\s*<\/p>\r\n/im, '')) ob.save end end end end A: If you want to remove all html tags you can use htm.gsub(/<[^>]*>/,'') A: This is working for me in rails 6.1.3: .errors-description = sanitize(message, tags: %w[div span strong], attributes: %w[class]) A: You can do .to_plain_text: @my_string = "<p>My HTML String</p>" @my_string.to_plain_text => My HTML String
Strip html from string Ruby on Rails
I'm working with Ruby on Rails. Is there a way to strip HTML from a string using sanitize or an equivalent method and keep only the text inside the value attribute on an input tag?
[ "If we want to use this in model\nActionView::Base.full_sanitizer.sanitize(html_string)\n\nwhich is the code in \"strip_tags\" method\n", "There's a strip_tags method in ActionView::Helpers::SanitizeHelper:\nhttp://api.rubyonrails.org/classes/ActionView/Helpers/SanitizeHelper.html#method-i-strip_tags\nEdit: for getting the text inside the value attribute, you could use something like Nokogiri with an Xpath expression to get that out of the string.\n", "Yes, call this: sanitize(html_string, tags:[])\n", "ActionView::Base.full_sanitizer.sanitize(html_string)\n\nWhite list of tags and attributes can be specified as bellow\nActionView::Base.full_sanitizer.sanitize(html_string, :tags => %w(img br p), :attributes => %w(src style))\n\nAbove statement allows tags img, br and p and attributes src and style.\n", "I've used the Loofah library, as it is suitable for both HTML and XML (both documents and string fragments). It is the engine behind the html sanitizer gem. I'm simply pasting the code example to show how simple it is to use.\nLoofah Gem\nunsafe_html = \"ohai! <div>div is safe</div> <script>but script is not</script>\"\n\ndoc = Loofah.fragment(unsafe_html).scrub!(:strip)\ndoc.to_s # => \"ohai! <div>div is safe</div> \"\ndoc.text # => \"ohai! div is safe \"\n\n", "How about this?\nwhite_list_sanitizer = Rails::Html::WhiteListSanitizer.new\nWHITELIST = ['p','b','h1','h2','h3','h4','h5','h6','li','ul','ol','small','i','u']\n\n\n[Your, Models, Here].each do |klass| \n klass.all.each do |ob| \n klass.attribute_names.each do |attrs|\n if ob.send(attrs).is_a? String\n ob.send(\"#{attrs}=\", white_list_sanitizer.sanitize(ob.send(attrs), tags: WHITELIST, attributes: %w(id style)).gsub(/<p>\\s*<\\/p>\\r\\n/im, ''))\n ob.save\n end\n end\n end\nend\n\n", "If you want to remove all html tags you can use\n htm.gsub(/<[^>]*>/,'')\n\n", "This is working for me in rails 6.1.3:\n.errors-description\n = sanitize(message, tags: %w[div span strong], attributes: %w[class])\n\n", "You can do .to_plain_text:\n@my_string = <p>My HTML String</p>\n@my_string.to_plain_text\n=> My HTML String\n\n" ]
[ 204, 146, 33, 33, 10, 2, 1, 0, 0 ]
[]
[]
[ "html", "ruby", "ruby_on_rails_3", "string" ]
stackoverflow_0007414267_html_ruby_ruby_on_rails_3_string.txt
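For comparison only, the gsub one-liner from the answers has a direct Python equivalent, with the same caveat that a bare regex is not a real sanitizer; the value attribute the question asks about needs its own pattern (or, more robustly, an HTML parser):

import re

html = '<input type="text" value="hello"><p>world</p>'
print(re.sub(r"<[^>]*>", "", html))          # -> world (all tags stripped)
print(re.findall(r'value="([^"]*)"', html))  # -> ['hello']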
Q: Get ASCII grid from compressed .gz file from URL in R I am trying to download and gunzip grid files in ascii format, compressed to .gz files, from a URL like this. I tried to get to the files via y <- gzcon(url("name-of-url")) and then gunzip(y), but for gunzip that is an invalid file. If I can decompress the file, I would like to read the .asc file with raster(). Any ideas how to solve this? A: I don't know why unzip does not work on these files, but you can get at the contents as follows: URL = "https://opendata.dwd.de/climate_environment/CDC/grids_germany/annual/summer_days/grids_germany_annual_summer_days_1951_17.asc.gz" download.file(URL, "grids_germany_annual_summer_days_1951_17.asc.gz") GZ = gzfile("grids_germany_annual_summer_days_1951_17.asc.gz") Lines = readLines(GZ, 10) writeLines(Lines, "grids_germany_annual_summer_days_1951_17.asc") Now you have an ascii text file.
Get ASCII grid from compressed .gz file from URL in R
I am trying to download and gunzip grid files in ascii format, compressed to .gz files, from a URL like this. I tried to get to the files via y <- gzcon(url("name-of-url")) and then gunzip(y), but for gunzip that is an invalid file. If I can decompress the file, I would like to read the .asc file with raster(). Any ideas how to solve this?
[ "I don't know why unzip does not work on these files, but you can get at the contents as follows:\nURL = \"https://opendata.dwd.de/climate_environment/CDC/grids_germany/annual/summer_days/grids_germany_annual_summer_days_1951_17.asc.gz\"\ndownload.file(URL, \"grids_germany_annual_summer_days_1951_17.asc.gz\")\n\nGZ = gzfile(\"grids_germany_annual_summer_days_1951_17.asc.gz\")\nLines = readLines(GZ, 10)\nwriteLines(Lines, \"grids_germany_annual_summer_days_1951_17.asc\")\n\nNow you have an ascii text file.\n" ]
[ 2 ]
[]
[]
[ "gunzip", "r", "rcurl" ]
stackoverflow_0074677106_gunzip_r_rcurl.txt
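The same download-then-decompress flow in Python's standard library, using the URL from the answer; shown purely as a cross-language illustration:

import gzip, urllib.request

url = ("https://opendata.dwd.de/climate_environment/CDC/grids_germany/"
       "annual/summer_days/grids_germany_annual_summer_days_1951_17.asc.gz")
fname, _ = urllib.request.urlretrieve(url, "grid.asc.gz")

# gzip.open in text mode decompresses on the fly
with gzip.open(fname, "rt") as gz, open("grid.asc", "w") as out:
    out.write(gz.read())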
Q: DAO based on ERC1155 norm I am building a project in which i will have a list of fractional NFT's. NFT1 --> 10 Parts. NFT2 --> 20 Parts. ... NFTN --> X Parts. i want people who ones Parts of the NFT1 to be able to vote to decide what are we going to do with it. a small DAO for every single one of those NFT's. I am wondering if i should implement this from scratch ( not 100 secure ), or use existing code from openZeppelin although i did't find DAO's based on the 1155 in openZeppelin web site ? A: If you want to implement a small DAO for each of your fractional NFTs, using existing code from OpenZeppelin is a good idea. OpenZeppelin provides a range of secure, tested, and audited smart contract libraries that you can use to build your project, including a DAO library. The OpenZeppelin DAO library is a set of smart contracts that you can use to create and manage a DAO on the Ethereum blockchain. It includes the following contracts: DAO: The main contract that represents the DAO. It allows members to join, leave, and vote on proposals. DAOFactory: A contract that allows you to create new instances of the DAO contract. PLCRVoting: A contract that provides a voting mechanism for the DAO contract. PLCRFactory: A contract that allows you to create new instances of the PLCRVoting contract. To use the OpenZeppelin DAO library in your project, you can follow these steps: Install the @openzeppelin/contracts package from NPM using the following command: npm install @openzeppelin/contracts Import the DAO and DAOFactory contracts from the @openzeppelin/contracts package into your project: import { DAO, DAOFactory } from "@openzeppelin/contracts"; 3. Create a new instance of the DAOFactory contract and deploy it to the Ethereum blockchain using the deploy() method. This method returns an instance of the DAO contract that you can use to manage the DAO: // Create a new instance of the DAOFactory contract const daoFactory = new DAOFactory(web3.currentProvider); // Deploy the DAOFactory contract to the Ethereum blockchain const dao = await daoFactory.deploy(); 4. Use the DAO instance to manage your DAO, such as adding members, creating proposals, and voting on proposals. For more details on how to use the DAO contract, you can check the OpenZeppelin documentation. // Add a new member to the DAO await dao.addMember(memberAddress); // Create a new proposal await dao.newProposal( memberAddress, // The address of the member who is creating the proposal "My proposal", // The name of the proposal "Proposal description", // The description of the proposal "0x1234...", // The data or code of the proposal "0x5678..." // The destination address of the proposal ); // Vote on a proposal await dao.vote( memberAddress, // The address of the member who is voting proposalId, // The ID of the proposal that is being voted on true // The vote (true = yes, false = no) ); I hope this helps! Marcell
DAO based on ERC1155 norm
I am building a project in which I will have a list of fractional NFT's. NFT1 --> 10 Parts. NFT2 --> 20 Parts. ... NFTN --> X Parts. I want people who own Parts of the NFT1 to be able to vote to decide what we are going to do with it: a small DAO for every single one of those NFT's. I am wondering if I should implement this from scratch (not 100% secure), or use existing code from OpenZeppelin, although I didn't find DAO's based on the 1155 on the OpenZeppelin web site.
[ "If you want to implement a small DAO for each of your fractional NFTs, using existing code from OpenZeppelin is a good idea. OpenZeppelin provides a range of secure, tested, and audited smart contract libraries that you can use to build your project, including a DAO library.\nThe OpenZeppelin DAO library is a set of smart contracts that you can use to create and manage a DAO on the Ethereum blockchain. It includes the following contracts:\n\nDAO: The main contract that represents the DAO. It allows members to\njoin, leave, and vote on proposals.\nDAOFactory: A contract that allows you to create new instances of the\nDAO contract.\nPLCRVoting: A contract that provides a voting mechanism for the DAO\ncontract.\nPLCRFactory: A contract that allows you to create new instances of\nthe PLCRVoting contract.\n\nTo use the OpenZeppelin DAO library in your project, you can follow these steps:\n\nInstall the @openzeppelin/contracts package from NPM using the\nfollowing command:\nnpm install @openzeppelin/contracts\n\n\nImport the DAO and DAOFactory contracts from the @openzeppelin/contracts package into your project:\nimport { DAO, DAOFactory } from \"@openzeppelin/contracts\";\n\n\n\n3. Create a new instance of the DAOFactory contract and deploy it to the Ethereum blockchain using the deploy() method. This method returns an instance of the DAO contract that you can use to manage the DAO:\n// Create a new instance of the DAOFactory contract\nconst daoFactory = new DAOFactory(web3.currentProvider);\n\n// Deploy the DAOFactory contract to the Ethereum blockchain\nconst dao = await daoFactory.deploy();\n\n4. Use the DAO instance to manage your DAO, such as adding members, creating proposals, and voting on proposals. For more details on how to use the DAO contract, you can check the OpenZeppelin documentation.\n// Add a new member to the DAO\nawait dao.addMember(memberAddress);\n\n// Create a new proposal\nawait dao.newProposal(\n memberAddress, // The address of the member who is creating the proposal\n \"My proposal\", // The name of the proposal\n \"Proposal description\", // The description of the proposal\n \"0x1234...\", // The data or code of the proposal\n \"0x5678...\" // The destination address of the proposal\n);\n\n// Vote on a proposal\nawait dao.vote(\n memberAddress, // The address of the member who is voting\n proposalId, // The ID of the proposal that is being voted on\n true // The vote (true = yes, false = no)\n);\n\nI hope this helps!\nMarcell\n" ]
[ 1 ]
[]
[]
[ "erc1155", "ethereum", "ethers.js", "solidity" ]
stackoverflow_0074675411_erc1155_ethereum_ethers.js_solidity.txt
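Setting the contract tooling aside, the accounting the asker describes is share-weighted voting per NFT: one small "DAO" per token, with votes weighted by how many parts a holder owns. A Python sketch of only that bookkeeping; every name is illustrative and nothing here is Solidity or OpenZeppelin code:

holdings = {  # nft_id -> {holder: parts owned}
    "NFT1": {"alice": 6, "bob": 4},   # 10 parts in total
}

def tally(nft_id, votes):  # votes: holder -> True (yes) / False (no)
    shares = holdings[nft_id]
    yes = sum(n for h, n in shares.items() if votes.get(h) is True)
    no = sum(n for h, n in shares.items() if votes.get(h) is False)
    return yes > no

print(tally("NFT1", {"alice": True, "bob": False}))  # True: 6 parts vs 4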
Q: Assigning functions to dynamically created buttons in kivy? I am working on this program in which a list buttons gets created dynamically, based on the items in a list. The code I am using for this is: self.list_of_btns = [] def create(self, list=items): #Creates Categorie Buttons self.h = 1 for i in list: self.h = self.h - 0.2 self.btn = Button(text= f"{i}", size_hint=(.2,.22), pos_hint={"center_y":self.h, "center_x":.5}) self.list_of_btns.append(self.btn) self.add_widget(self.btn) ``` Now I want to add a function to each button. The function is suppose to write the button name (i) to a .txt file. This part of the code gets handled in an external module. The problem I am having rn is i cant bind the buttons to their individual functions. I alwasy get the error File "kivy/_event.pyx", line 238, in kivy._event.EventDispatcher.__init__ TypeError: Properties ['command'] passed to __init__ may not be existing property names. Valid properties are ['always_release', 'anchors', 'background_color', 'background_disabled_down', 'background_disabled_normal', 'background_down', 'background_normal', 'base_direction', 'bold', 'border', 'center', 'center_x', 'center_y', 'children', 'cls', 'color', 'disabled', 'disabled_color', 'disabled_image', 'disabled_outline_color', 'ellipsis_options', 'font_blended', 'font_context', 'font_family', 'font_features', 'font_hinting', 'font_kerning', 'font_name', 'font_size', 'halign', 'height', 'ids', 'is_shortened', 'italic', 'last_touch', 'line_height', 'markup', 'max_lines', 'min_state_time', 'mipmap', 'motion_filter', 'opacity', 'outline_color', 'outline_width', 'padding', 'padding_x', 'padding_y', 'parent', 'pos', 'pos_hint', 'refs', 'right', 'shorten', 'shorten_from', 'size', 'size_hint', 'size_hint_max', 'size_hint_max_x', 'size_hint_max_y', 'size_hint_min', 'size_hint_min_x', 'size_hint_min_y', 'size_hint_x', 'size_hint_y', 'split_str', 'state', 'state_image', 'strikethrough', 'strip', 'text', 'text_language', 'text_size', 'texture', 'texture_size', 'top', 'underline', 'unicode_errors', 'valign', 'width', 'x', 'y'] A: .bind() method can be used. In this example, partial is used in order to preset an argument so that each button does something unique. self.btn was changed to _btn because the references are being added to a list and self.btn was repeatedly assigned a new object. I didn't think this was intended, but it is not a critical part of this answer. The self.h was changed to self.height in the dictionary describing the button because I think that may be the source of the error printed in the question. from functools import partial self.list_of_btns = [] def your_function(self, your_argument): print(your_argument) def create(self, list=items): #Creates Categorie Buttons self.h = 1 for i in list: self.h = self.h - 0.2 _btn = Button(text= f"{i}", size_hint=(.2,.22), pos_hint={"center_y":self.height, "center_x":.5}) myfun = partial(self.your_function, your_argument=i) # this should be a function, not function() _btn.bind(on_press=myfun) self.list_of_btns.append(_btn) self.add_widget(_btn)
Assigning functions to dynamically created buttons in kivy?
I am working on this program in which a list of buttons gets created dynamically, based on the items in a list. The code I am using for this is: self.list_of_btns = [] def create(self, list=items): #Creates Categorie Buttons self.h = 1 for i in list: self.h = self.h - 0.2 self.btn = Button(text= f"{i}", size_hint=(.2,.22), pos_hint={"center_y":self.h, "center_x":.5}) self.list_of_btns.append(self.btn) self.add_widget(self.btn) Now I want to add a function to each button. The function is supposed to write the button name (i) to a .txt file. This part of the code gets handled in an external module. The problem I am having right now is I can't bind the buttons to their individual functions. I always get the error File "kivy/_event.pyx", line 238, in kivy._event.EventDispatcher.__init__ TypeError: Properties ['command'] passed to __init__ may not be existing property names. Valid properties are ['always_release', 'anchors', 'background_color', 'background_disabled_down', 'background_disabled_normal', 'background_down', 'background_normal', 'base_direction', 'bold', 'border', 'center', 'center_x', 'center_y', 'children', 'cls', 'color', 'disabled', 'disabled_color', 'disabled_image', 'disabled_outline_color', 'ellipsis_options', 'font_blended', 'font_context', 'font_family', 'font_features', 'font_hinting', 'font_kerning', 'font_name', 'font_size', 'halign', 'height', 'ids', 'is_shortened', 'italic', 'last_touch', 'line_height', 'markup', 'max_lines', 'min_state_time', 'mipmap', 'motion_filter', 'opacity', 'outline_color', 'outline_width', 'padding', 'padding_x', 'padding_y', 'parent', 'pos', 'pos_hint', 'refs', 'right', 'shorten', 'shorten_from', 'size', 'size_hint', 'size_hint_max', 'size_hint_max_x', 'size_hint_max_y', 'size_hint_min', 'size_hint_min_x', 'size_hint_min_y', 'size_hint_x', 'size_hint_y', 'split_str', 'state', 'state_image', 'strikethrough', 'strip', 'text', 'text_language', 'text_size', 'texture', 'texture_size', 'top', 'underline', 'unicode_errors', 'valign', 'width', 'x', 'y']
[ ".bind() method can be used. In this example, partial is used in order to preset an argument so that each button does something unique. self.btn was changed to _btn because the references are being added to a list and self.btn was repeatedly assigned a new object. I didn't think this was intended, but it is not a critical part of this answer.\nThe self.h was changed to self.height in the dictionary describing the button because I think that may be the source of the error printed in the question.\nfrom functools import partial\n\n\nself.list_of_btns = []\n\ndef your_function(self, your_argument):\n print(your_argument)\n\ndef create(self, list=items): #Creates Categorie Buttons\n self.h = 1\n for i in list:\n self.h = self.h - 0.2\n _btn = Button(text= f\"{i}\", size_hint=(.2,.22), pos_hint={\"center_y\":self.height, \"center_x\":.5})\n myfun = partial(self.your_function, your_argument=i)\n # this should be a function, not function()\n _btn.bind(on_press=myfun)\n self.list_of_btns.append(_btn)\n self.add_widget(_btn)\n\n" ]
[ 0 ]
[]
[]
[ "dynamic", "kivy", "python" ]
stackoverflow_0074677171_dynamic_kivy_python.txt
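The reason the answer reaches for functools.partial is Python's late binding in closures: a plain lambda created in the loop would see the final value of i for every button, while partial freezes the argument at creation time. A minimal standalone demonstration:

from functools import partial

late = [lambda: i for i in range(3)]
print([f() for f in late])           # [2, 2, 2] - every closure shares the last i

frozen = [partial(print, i) for i in range(3)]
for f in frozen:
    f()                              # 0, 1, 2 - one value per callback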
Q: Error '-2147217913' performing sql query in access VBA I have a form in Excel and I need to return data from a table in access. When executing an instruction like the image it returns the error "Data Type Mismatch in Criterion Expression". I already reviewed the data types in the table and still could not resolve. What could be happening? Sub pesquisar() Set rs = New ADODB.Recordset conectdb rs.Open "SELECT * FROM TbApolice WHERE Contrato='" & UserForm.txt_certificado.Value & "'", db, adOpenKeyset, adLockReadOnly If UserForm.txt_certificado.Value <> "" Then UserForm.txt_nome = rs!Nome UserForm.txt_cpf = rs!CPF UserForm.txt_iniciovigencia = rs!Inicio_vigencia UserForm.txt_fimvigencia = rs!Fim_de_vigencia UserForm.txt_premio = rs!Premio Else MsgBox "Segurado não localizado", vbInformation, "LOCALIZAR" End If If Not rs Is Nothing Then rs.Close Set rs = Nothing End If fechadb End Sub I've already made some attempts to point break and debug the code, in addition to validating all fields and data types, but I didn't get any results. A: Try using a parameterized query. Option Explicit Sub pesquisar() Const SQL = "SELECT * FROM TbApolice WHERE Contrato = ?" Dim Db As ADODB.Connection, cmd As ADODB.Command Dim rs As ADODB.Recordset, sContrato As String, n As Long With UserForm sContrato = Trim(.txt_certificado.Value) If Len(sContrato) > 0 Then Set Db = conectdb("Database11.accdb") Set cmd = New ADODB.Command With cmd .ActiveConnection = Db .CommandText = SQL .Parameters.Append .CreateParameter("p1", adVarWChar, adParamInput, 255) Set rs = .Execute(n, sContrato) End With If rs.EOF Then MsgBox "Segurado não localizado", vbInformation, "LOCALIZAR" Else .txt_nome = rs!Nome .txt_cpf = rs!CPF .txt_iniciovigencia = rs!Inicio_vigencia .txt_fimvigencia = rs!Fim_de_vigencia .txt_premio = rs!Premio rs.Close Set rs = Nothing End If End If End With 'fechadb End Sub Function conectdb(s As String) As ADODB.Connection Set conectdb = New ADODB.Connection conectdb.Open "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" & s End Function
Error '-2147217913' performing sql query in access VBA
I have a form in Excel and I need to return data from a table in Access. When executing an instruction like the one below, it returns the error "Data Type Mismatch in Criterion Expression". I already reviewed the data types in the table and still could not resolve it. What could be happening? Sub pesquisar() Set rs = New ADODB.Recordset conectdb rs.Open "SELECT * FROM TbApolice WHERE Contrato='" & UserForm.txt_certificado.Value & "'", db, adOpenKeyset, adLockReadOnly If UserForm.txt_certificado.Value <> "" Then UserForm.txt_nome = rs!Nome UserForm.txt_cpf = rs!CPF UserForm.txt_iniciovigencia = rs!Inicio_vigencia UserForm.txt_fimvigencia = rs!Fim_de_vigencia UserForm.txt_premio = rs!Premio Else MsgBox "Segurado não localizado", vbInformation, "LOCALIZAR" End If If Not rs Is Nothing Then rs.Close Set rs = Nothing End If fechadb End Sub I've already made some attempts to set breakpoints and debug the code, in addition to validating all fields and data types, but I didn't get any results.
[ "Try using a parameterized query.\nOption Explicit\n\nSub pesquisar()\n\n Const SQL = \"SELECT * FROM TbApolice WHERE Contrato = ?\"\n\n Dim Db As ADODB.Connection, cmd As ADODB.Command\n Dim rs As ADODB.Recordset, sContrato As String, n As Long\n \n With UserForm\n sContrato = Trim(.txt_certificado.Value)\n If Len(sContrato) > 0 Then\n \n Set Db = conectdb(\"Database11.accdb\")\n Set cmd = New ADODB.Command\n With cmd\n .ActiveConnection = Db\n .CommandText = SQL\n .Parameters.Append .CreateParameter(\"p1\", adVarWChar, adParamInput, 255)\n Set rs = .Execute(n, sContrato)\n End With\n \n If rs.EOF Then\n MsgBox \"Segurado não localizado\", vbInformation, \"LOCALIZAR\"\n Else\n .txt_nome = rs!Nome\n .txt_cpf = rs!CPF\n .txt_iniciovigencia = rs!Inicio_vigencia\n .txt_fimvigencia = rs!Fim_de_vigencia\n .txt_premio = rs!Premio\n rs.Close\n Set rs = Nothing\n End If\n \n End If\n End With\n 'fechadb\n\nEnd Sub\n\nFunction conectdb(s As String) As ADODB.Connection\n Set conectdb = New ADODB.Connection\n conectdb.Open \"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=\" & s\nEnd Function\n\n" ]
[ 0 ]
[ "As stated in these question How to deal with single quote in Word VBA SQL query? your SQL query is missing a single quote:\nrs.Open \"SELECT * FROM TbApolice WHERE Contrato='\" & UserForm.txt_certificado.Value & \"''\", db, adOpenKeyset, adLockReadOnly\n\n\nAnd as it says, your code is vulnerable to a SQL injection attack.\n" ]
[ -1 ]
[ "excel", "ms_access", "vba" ]
stackoverflow_0074669406_excel_ms_access_vba.txt
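The fix above works because the driver binds the value with the correct type instead of splicing it into the SQL text. The same pattern in Python's sqlite3, a different stack but the identical idea, purely as an illustration:

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE TbApolice (Contrato TEXT, Nome TEXT)")
con.execute("INSERT INTO TbApolice VALUES (?, ?)", ("C-123", "Maria"))

# the ? placeholder lets the driver handle quoting and types
row = con.execute("SELECT Nome FROM TbApolice WHERE Contrato = ?",
                  ("C-123",)).fetchone()
print(row)  # ('Maria',)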
Q: CoreTextServicesManager not working as expected I want to get the text input of the CoreTextServicesManager, but the TextUpdating event is not even triggered. In my UWP project, it is working fine. This is how I create the Service: CoreTextServicesManager manager = CoreTextServicesManager.GetForCurrentView(); EditContext = manager.CreateEditContext(); EditContext.InputPaneDisplayPolicy = CoreTextInputPaneDisplayPolicy.Manual; EditContext.InputScope = CoreTextInputScope.Text; EditContext.TextRequested += delegate { }; EditContext.SelectionRequested += delegate { }; EditContext.TextUpdating += EditContext_TextUpdating; EditContext.FocusRemoved += EditContext_FocusRemoved; EditContext.NotifyFocusEnter(); Here are my events: private void EditContext_TextUpdating(CoreTextEditContext sender, CoreTextTextUpdatingEventArgs args) { Debug.WriteLine(args.Text); } private void EditContext_FocusRemoved(CoreTextEditContext sender, object args) { Debug.WriteLine("Lost focus"); } Why does the TextUpdating event not trigger? what am I doing wrong? A: It looks like you are not adding any text input to the CoreTextServicesManager instance. The TextUpdating event is only triggered when text is being added or removed from the input field, so if you are not modifying the text in any way, the event will not be triggered. One way to fix this issue is to add a UI element that allows the user to enter text, such as a TextBox or TextBlock. Then, when the user adds or removes text from this element, the TextUpdating event should be triggered. Here is an example of how you could do this: // Create a TextBox to allow the user to enter text var textBox = new TextBox(); // Set the TextBox as the input element for the CoreTextServicesManager CoreTextServicesManager manager = CoreTextServicesManager.GetForCurrentView(); manager.InputPaneDisplayPolicy = CoreTextInputPaneDisplayPolicy.Manual; manager.InputScope = CoreTextInputScope.Text; manager.SetInputScope(textBox, CoreTextInputScope.Text); // Subscribe to the TextUpdating event of the CoreTextServicesManager manager.TextUpdating += (sender, args) => { Debug.WriteLine(args.Text); }; Once you have added a UI element for the user to enter text, the TextUpdating event should be triggered when the user modifies the text in this element.
CoreTextServicesManager not working as expected
I want to get the text input of the CoreTextServicesManager, but the TextUpdating event is not even triggered. In my UWP project, it is working fine. This is how I create the Service: CoreTextServicesManager manager = CoreTextServicesManager.GetForCurrentView(); EditContext = manager.CreateEditContext(); EditContext.InputPaneDisplayPolicy = CoreTextInputPaneDisplayPolicy.Manual; EditContext.InputScope = CoreTextInputScope.Text; EditContext.TextRequested += delegate { }; EditContext.SelectionRequested += delegate { }; EditContext.TextUpdating += EditContext_TextUpdating; EditContext.FocusRemoved += EditContext_FocusRemoved; EditContext.NotifyFocusEnter(); Here are my events: private void EditContext_TextUpdating(CoreTextEditContext sender, CoreTextTextUpdatingEventArgs args) { Debug.WriteLine(args.Text); } private void EditContext_FocusRemoved(CoreTextEditContext sender, object args) { Debug.WriteLine("Lost focus"); } Why does the TextUpdating event not trigger? what am I doing wrong?
[ "It looks like you are not adding any text input to the CoreTextServicesManager instance. The TextUpdating event is only triggered when text is being added or removed from the input field, so if you are not modifying the text in any way, the event will not be triggered.\nOne way to fix this issue is to add a UI element that allows the user to enter text, such as a TextBox or TextBlock. Then, when the user adds or removes text from this element, the TextUpdating event should be triggered.\nHere is an example of how you could do this:\n// Create a TextBox to allow the user to enter text\nvar textBox = new TextBox();\n\n// Set the TextBox as the input element for the CoreTextServicesManager\nCoreTextServicesManager manager = CoreTextServicesManager.GetForCurrentView();\nmanager.InputPaneDisplayPolicy = CoreTextInputPaneDisplayPolicy.Manual;\nmanager.InputScope = CoreTextInputScope.Text;\nmanager.SetInputScope(textBox, CoreTextInputScope.Text);\n\n// Subscribe to the TextUpdating event of the CoreTextServicesManager\nmanager.TextUpdating += (sender, args) =>\n{\n Debug.WriteLine(args.Text);\n};\n\nOnce you have added a UI element for the user to enter text, the TextUpdating event should be triggered when the user modifies the text in this element.\n" ]
[ 0 ]
[]
[]
[ "c#", "winui_3" ]
stackoverflow_0074677268_c#_winui_3.txt
Q: How to get zio.Runtime.default.unsafeRun available? When I try to call zio.Runtime.default.unsafeRun(someStuff()), unsafeRun turns red so I can't call it. I need to take off all the wrappers and get a clean val from ZIO[R,E,A]. What should I import and/or use as a dependency to fix it? I'm already using these imports import zio.http.{Client, *} import zio.json.* import zio.http.model.Method import zio.{Scope, Task, ZIO, ZIOAppDefault} import zio.http.Client import zhttp.http.Status.NotFound import zhttp.http.Status import scala.language.postfixOps import zio.* import scala.collection.immutable.List import zio.{ExitCode, URIO, ZIO} import Endpoint11._ import zio.Runtime.unsafe import zio.Runtime.default.unsafe import zio.Runtime._ import zio.Runtime.* import zio.Scope import zio.ZIO import zio.ZIOAppDefault import zio.ZLayer import zio.Schedule import zio.durationInt import scala.concurrent.ExecutionContext import scala.concurrent.Future and these deps scalaVersion := "3.2.1" organization := "dev.zio" name := "zio-quickstart-restful-webservice" val zioV = "2.0.4" val zioNioV = "2.0.0" val zioHttpV = "0.0.3" val zioJsonV = "0.3.0" libraryDependencies ++= Seq( "dev.zio" %% "zio-http" % "0.0.3", "dev.zio" %% "zio" % "2.0.1", "dev.zio" %% "zio-json" % "0.3.0-RC11", "io.d11" %% "zhttp" % "2.0.0-RC10", "io.getquill" %% "quill-zio" % "4.3.0", "io.getquill" %% "quill-jdbc-zio" % "4.3.0", "com.h2database" % "h2" % "2.1.214", "dev.zio" %% "zio" % zioV, "dev.zio" %% "zio-streams" % zioV, "dev.zio" %% "zio-nio" % zioNioV exclude("org.scala-lang.modules", "scala-collection-compat_2.13"), "dev.zio" %% "zio-http" % zioHttpV, "dev.zio" %% "zio-json" % zioJsonV, "org.slf4j" % "slf4j-simple" % "2.0.5" % Test, //new deps "com.softwaremill.sttp.client3" %% "http4s-backend" % "3.8.3", "io.7mind.izumi" %% "distage-core" % "1.1.0-M10" ) A: You need to bring an unsafe instance into scope: Unsafe.unsafe { implicit unsafe => zio.Runtime.default.unsafeRun(someStuff()) } In Scala 3 you can do: Unsafe.unsafely { zio.Runtime.default.unsafeRun(someStuff()) } For more information see https://zio.dev/guides/migrate/zio-2.x-migration-guide/#unsafe-marker.
How to get zio.Runtime.default.unsafeRun available?
When I try to call zio.Runtime.default.unsafeRun(someStuff()), unsafeRun turns red so I can't call it. I need to take off all the wrappers and get a clean val from ZIO[R,E,A]. What should I import and/or use as a dependency to fix it? I'm already using these imports import zio.http.{Client, *} import zio.json.* import zio.http.model.Method import zio.{Scope, Task, ZIO, ZIOAppDefault} import zio.http.Client import zhttp.http.Status.NotFound import zhttp.http.Status import scala.language.postfixOps import zio.* import scala.collection.immutable.List import zio.{ExitCode, URIO, ZIO} import Endpoint11._ import zio.Runtime.unsafe import zio.Runtime.default.unsafe import zio.Runtime._ import zio.Runtime.* import zio.Scope import zio.ZIO import zio.ZIOAppDefault import zio.ZLayer import zio.Schedule import zio.durationInt import scala.concurrent.ExecutionContext import scala.concurrent.Future and these deps scalaVersion := "3.2.1" organization := "dev.zio" name := "zio-quickstart-restful-webservice" val zioV = "2.0.4" val zioNioV = "2.0.0" val zioHttpV = "0.0.3" val zioJsonV = "0.3.0" libraryDependencies ++= Seq( "dev.zio" %% "zio-http" % "0.0.3", "dev.zio" %% "zio" % "2.0.1", "dev.zio" %% "zio-json" % "0.3.0-RC11", "io.d11" %% "zhttp" % "2.0.0-RC10", "io.getquill" %% "quill-zio" % "4.3.0", "io.getquill" %% "quill-jdbc-zio" % "4.3.0", "com.h2database" % "h2" % "2.1.214", "dev.zio" %% "zio" % zioV, "dev.zio" %% "zio-streams" % zioV, "dev.zio" %% "zio-nio" % zioNioV exclude("org.scala-lang.modules", "scala-collection-compat_2.13"), "dev.zio" %% "zio-http" % zioHttpV, "dev.zio" %% "zio-json" % zioJsonV, "org.slf4j" % "slf4j-simple" % "2.0.5" % Test, //new deps "com.softwaremill.sttp.client3" %% "http4s-backend" % "3.8.3", "io.7mind.izumi" %% "distage-core" % "1.1.0-M10" )
[ "You need to bring an unsafe instance into scope:\nUnsafe.unsafe { implicit unsafe =>\n zio.Runtime.default.unsafeRun(someStuff())\n}\n\nIn scala 3 you can do:\nUnsafe.unsafely {\n zio.Runtime.default.unsafeRun(someStuff())\n}\n\nFor more information see https://zio.dev/guides/migrate/zio-2.x-migration-guide/#unsafe-marker.\n" ]
[ 1 ]
[]
[]
[ "scala", "scala_3", "zio", "zio_http" ]
stackoverflow_0074677162_scala_scala_3_zio_zio_http.txt
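A self-contained sketch of the accepted approach, since the question was largely about imports. One caveat: per the linked ZIO 2.x migration guide, the current form is runtime.unsafe.run(...) inside the Unsafe block (Runtime no longer exposes a bare unsafeRun), and the result comes back as an Exit to unwrap. someStuff below is a stand-in effect.

```scala
import zio.{Runtime, Unsafe, ZIO}

object Demo {
  def someStuff(): ZIO[Any, Throwable, Int] = ZIO.succeed(42)

  def main(args: Array[String]): Unit = {
    // Scala 3 alternative: Unsafe.unsafely { ... }
    val result: Int = Unsafe.unsafe { implicit unsafe =>
      Runtime.default.unsafe.run(someStuff()).getOrThrowFiberFailure()
    }
    println(result) // prints 42
  }
}
```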
Q: Create regularly-spaced vector points from irregular XY geographic data in Python I have point (vector) coordinates in meters (x and y in 1-D arrays) which are irregularly spaced. I would like to re-sample the points so that they are regularly spaced by 10 m between each set of XY points. I have managed to regularly re-sample the points in the X direction (see code below), however when trying to use the scipy.interpolate.interp1d function on the Y variable, the points are obviously no longer spaced by 10 m. The code I use is as follows: f_bot = interpolate.interp1d(x, y) xnew_bot = np.arange(np.min(x),np.max(x),10) # 10 m spaced ynew_bot = f_bot(xnew_bot) plt.plot(x, y, 'o', xnew_bot, ynew_bot,'-') plt.show() If one measures the space between each orange point (in say, QGIS) or by doing: dist_total = np.hstack([0,np.cumsum(np.hypot(np.diff(xnew_bot),np.diff(ynew_bot)))]) diff_dist = np.diff(dist_total) # calculate distance difference between each points diff_dist will show that points are still irregularly spaced due to the interpolation of the Y points. Another way to see is to compute the distance between each X points, which is exactly 10 m, and compute the same for the Y points which show they are irregular. Is there a function or approach I could use to make sure both X and Y are spaced regularly? All I need is that each set of X Y points is spaced apart by 10 m, which should be simple to do but I can't find a better way so far! Any help would be appreciated. Data for X and Y are as follows: | x | y | | -------- | -------------- | | -1091590.00|158697 | -1091580.00|158702 | -1091580.00|158708 | -1091570.00|158713 | -1091560.00|158719 | -1091550.00|158724 |...|... | -1079450.00|164674 | -1079440.00|164677 | -1079430.00|164680 | -1079420.00|164683 A: Would you consider finding the interpolation line ST_LineInterpolatePoints measure the length of the line [in meters], ST_Length divide it by 10 [m] to find number of slots, divide the line by number of slots to find the coordinates for each group of points, ST_LineSubstring divide number of points by number of slots and assign them to coordinates from p. 4 (you can find the distance between points and division coordinates with ST_Distance Regards, Grzegorz A: What you want is to create evenly spaced points from interpolation using your original data points. There's plenty of examples here: https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interp1d.html This is how you can use it, you have to specify max_len from scipy import interpolate f1 = interpolate.interp1d(x, y) x_new = np.linspace(x[0], x[-1], max_len) y_new = f1(x_new)
Create regularly-spaced vector points from irregular XY geographic data in Python
I have point (vector) coordinates in meters (x and y in 1-D arrays) which are irregularly spaced. I would like to re-sample the points so that they are regularly spaced by 10 m between each set of XY points. I have managed to regularly re-sample the points in the X direction (see code below), however when trying to use the scipy.interpolate.interp1d function on the Y variable, the points are obviously no longer spaced by 10 m. The code I use is as follows: f_bot = interpolate.interp1d(x, y) xnew_bot = np.arange(np.min(x),np.max(x),10) # 10 m spaced ynew_bot = f_bot(xnew_bot) plt.plot(x, y, 'o', xnew_bot, ynew_bot,'-') plt.show() If one measures the space between each orange point (in say, QGIS) or by doing: dist_total = np.hstack([0,np.cumsum(np.hypot(np.diff(xnew_bot),np.diff(ynew_bot)))]) diff_dist = np.diff(dist_total) # calculate distance difference between each points diff_dist will show that points are still irregularly spaced due to the interpolation of the Y points. Another way to see is to compute the distance between each X points, which is exactly 10 m, and compute the same for the Y points which show they are irregular. Is there a function or approach I could use to make sure both X and Y are spaced regularly? All I need is that each set of X Y points is spaced apart by 10 m, which should be simple to do but I can't find a better way so far! Any help would be appreciated. Data for X and Y are as follows: | x | y | | -------- | -------------- | | -1091590.00|158697 | -1091580.00|158702 | -1091580.00|158708 | -1091570.00|158713 | -1091560.00|158719 | -1091550.00|158724 |...|... | -1079450.00|164674 | -1079440.00|164677 | -1079430.00|164680 | -1079420.00|164683
[ "Would you consider\n\nfinding the interpolation line ST_LineInterpolatePoints\nmeasure the length of the line [in meters], ST_Length\ndivide it by 10 [m] to find number of slots,\ndivide the line by number of slots to find the coordinates for each group of points, ST_LineSubstring\ndivide number of points by number of slots and assign them to coordinates from p. 4 (you can find the distance between points and division coordinates with ST_Distance\n\nRegards,\nGrzegorz\n", "What you want is to create evenly spaced points from interpolation using your original data points.\nThere's plenty of examples here:\nhttps://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interp1d.html\nThis is how you can use it, you have to specify max_len\nfrom scipy import interpolate\nf1 = interpolate.interp1d(x, y)\nx_new = np.linspace(x[0], x[-1], max_len)\ny_new = f1(x_new)\n\n" ]
[ 0, 0 ]
[]
[]
[ "coordinates", "gis", "python", "qgis", "vector" ]
stackoverflow_0070771182_coordinates_gis_python_qgis_vector.txt
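Neither answer above yields even spacing along the line itself: the first sketches a PostGIS route, and the second still parametrizes by x. A common fix, sketched with the question's own variables (x and y as 1-D coordinate arrays in metres): interpolate both coordinates against cumulative arc length.

```python
import numpy as np
from scipy import interpolate

# Cumulative distance along the polyline (the same formula the question
# already uses to check spacing)
dist = np.hstack([0, np.cumsum(np.hypot(np.diff(x), np.diff(y)))])

fx = interpolate.interp1d(dist, x)
fy = interpolate.interp1d(dist, y)

d_new = np.arange(0, dist[-1], 10)  # one sample every 10 m of path length
x_new, y_new = fx(d_new), fy(d_new)
```

Spacing here is measured along the original polyline, so on curved stretches the straight-line distance between consecutive points can come out slightly under 10 m; for track-like data that is usually acceptable.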
Q: How to cut out part of an image based on coordinates of a given circle I'm trying to cut out part of an image based on some circles coordinates, my initial attempts have been to try doing this startx = circle[0] starty = circle[1] radius = circle[2] recImage = cv2.rectangle(image,(startx-radius,starty-radius), (startx+radius,starty+radius), (0,0,255),2) miniImage = recImage[startx-radius:startx+radius,starty-radius:starty+radius] circle[0] and circle[1] being the x and y coords of the circle centre and circle[2] being the radius. The recImage should draw the rectangle and then the miniImage should be a smaller image of that rectangle. However they don't line up. the image of squares to cut out one of the actual cut outs I was expecting them to line up as their starting and ending values are identical but they don't. Thanks A: There is a mistake in your code. You are using the coordinates of the center of the circle to draw the rectangle and cut out the mini image. However, the coordinates of the top left corner of the rectangle should be used to draw the rectangle and cut out the mini image. Here is the updated code: startx = circle[0] starty = circle[1] radius = circle[2] # Calculate the coordinates of the top left corner of the rectangle x1 = startx - radius y1 = starty - radius x2 = startx + radius y2 = starty + radius # Draw the rectangle on the image recImage = cv2.rectangle(image, (x1, y1), (x2, y2), (0, 0, 255), 2) # Cut out the mini image from the rectangle miniImage = recImage[y1:y2, x1:x2] # Show the mini image cv2.imshow('Mini Image', miniImage) cv2.waitKey(0) cv2.destroyAllWindows()
How to cut out part of an image based on coordinates of a given circle
I'm trying to cut out part of an image based on some circles coordinates, my initial attempts have been to try doing this startx = circle[0] starty = circle[1] radius = circle[2] recImage = cv2.rectangle(image,(startx-radius,starty-radius), (startx+radius,starty+radius), (0,0,255),2) miniImage = recImage[startx-radius:startx+radius,starty-radius:starty+radius] circle[0] and circle[1] being the x and y coords of the circle centre and circle[2] being the radius. The recImage should draw the rectangle and then the miniImage should be a smaller image of that rectangle. However they don't line up. the image of squares to cut out one of the actual cut outs I was expecting them to line up as their starting and ending values are identical but they don't. Thanks
[ "There is a mistake in your code. You are using the coordinates of the center of the circle to draw the rectangle and cut out the mini image. However, the coordinates of the top left corner of the rectangle should be used to draw the rectangle and cut out the mini image.\nHere is the updated code:\nstartx = circle[0]\nstarty = circle[1]\nradius = circle[2]\n\n# Calculate the coordinates of the top left corner of the rectangle\nx1 = startx - radius\ny1 = starty - radius\nx2 = startx + radius\ny2 = starty + radius\n\n# Draw the rectangle on the image\nrecImage = cv2.rectangle(image, (x1, y1), (x2, y2), (0, 0, 255), 2)\n\n# Cut out the mini image from the rectangle\nminiImage = recImage[y1:y2, x1:x2]\n\n# Show the mini image\ncv2.imshow('Mini Image', miniImage)\ncv2.waitKey(0)\ncv2.destroyAllWindows()\n\n" ]
[ 0 ]
[]
[]
[ "image", "opencv", "python" ]
stackoverflow_0074677301_image_opencv_python.txt
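One point the answer glosses over: the original code's rectangle and crop disagree because NumPy/OpenCV arrays are indexed [row, col], i.e. [y, x], while the question slices with x first; cv2.rectangle, by contrast, takes (x, y) points. A sketch using the question's circle and image, with bounds clamping added as a defensive assumption for circles near an edge (cast the circle values to int first if they come from HoughCircles as floats):

```python
import cv2

x, y, r = circle  # centre (x, y) and radius, as defined in the question

h, w = image.shape[:2]
y1, y2 = max(y - r, 0), min(y + r, h)  # rows (y) first ...
x1, x2 = max(x - r, 0), min(x + r, w)  # ... columns (x) second

miniImage = image[y1:y2, x1:x2].copy()  # copy() keeps the crop free of later drawing
cv2.rectangle(image, (x1, y1), (x2, y2), (0, 0, 255), 2)  # drawing uses (x, y) points
```

Cropping before drawing (or calling .copy()) also avoids the red border showing up inside the cut-out, which happens in the original code because recImage and image are the same underlying array.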
Q: TypeORM Sort by subset of one to many relation I have the following structure representing rows of a user created table with dynamic columns: @Entity() class Row { @PrimaryGeneratedColumn() id: number; @OneToMany(() => RowValue, (item) => item.row) rowValues: RowValue[]; } And the row values: @Entity() export class RowValue { @PrimaryGeneratedColumn() id: number; @Column() key: string; @Column('simple-json') value: string; @ManyToOne(() => Row, (row) => row.rowValues) row: Row; } Now I would like to allow my users to sort the Row list by a specific row value. I know that the order option supports nesting. But since in my case it is a subset of the relation I am not sure how to implement it, even with the query builder. A: I implemented this using the query builder with a subquery statement: const sortColumn = 'SomeColumn'; const sortDirection = 'DESC'; const limit = 10; const offset = 0; const repository = this.connection.getRepository(Row); const sortByQuery = repository .createQueryBuilder('sorted_results') .innerJoinAndSelect('sorted_results.rowValues', 'rowValues') .where('rowValues.key = :columnName', { columnName: sortColumn }) .orderBy('rowValues.value', sortDirection) .limit(limit) .offset(offset); const query = repository .createQueryBuilder('row') .innerJoinAndSelect('row.rowValues', 'rowValues') .innerJoin( `(${sortByQuery.getQuery()})`, 'sorted_results', 'sorted_results.sorted_results_id = row.id' ) .orderBy('sorted_results.rowValues_value', sortDirection) .setParameters(sortByQuery.getParameters()); const results = await query.getMany();
TypeORM Sort by subset of one to many relation
I have the following structure representing rows of a user created table with dynamic columns: @Entity() class Row { @PrimaryGeneratedColumn() id: number; @OneToMany(() => RowValue, (item) => item.row) rowValues: RowValue[]; } And the row values: @Entity() export class RowValue { @PrimaryGeneratedColumn() id: number; @Column() key: string; @Column('simple-json') value: string; @ManyToOne(() => Row, (row) => row.rowValues) row: Row; } Now I would like to allow my users to sort the Row list by a specific row value. I know that the order option supports nesting. But since in my case it is a subset of the relation I am not sure how to implement it, even with the query builder.
[ "I implemented this using the query builder with a subquery statement:\nconst sortBy = 'SomeColumn';\nconst sortDirection = 'DESC';\nconst limit = 10;\nconst offset = 0;\n\n\nconst repository = this.connection.getRepository(Row);\n\nconst sortByQuery = repository\n .createQueryBuilder('sorted_results')\n .innerJoinAndSelect('sorted_results.rowValues', 'rowValues')\n .where('rowValues.key = :columnName', { columnName: sortColumn })\n .orderBy('rowValues.value', sortDirection)\n .limit(limit)\n .offset(offset);\n\nconst query = repository\n .createQueryBuilder('row')\n .innerJoinAndSelect('row.rowValues', 'rowValues')\n .innerJoin(\n `(${sortByColumn.getQuery()})`,\n 'sorted_results',\n 'sorted_results.sorted_results_id = row.id'\n )\n .orderBy('sorted_results.rowValues_value', sortDirection)\n .setParameters(sortByQuery.getParameters());\n\nconst results = await query.getMany();\n\n" ]
[ 0 ]
[]
[]
[ "sql", "typeorm", "typescript" ]
stackoverflow_0074430390_sql_typeorm_typescript.txt
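For readers wondering why the inner query exists at all: a LIMIT applied directly to a query that joins the one-to-many rowValues would limit joined rows, not distinct Rows. The builder therefore paginates row ids in a subquery first, then re-joins all values. Roughly, in SQL terms (aliases simplified; a sketch rather than TypeORM's exact output):

```sql
SELECT row.*, rv.*
FROM row
INNER JOIN row_value rv ON rv."rowId" = row.id
INNER JOIN (
    SELECT r.id AS sorted_results_id, v.value AS "rowValues_value"
    FROM row r
    INNER JOIN row_value v ON v."rowId" = r.id
    WHERE v.key = 'SomeColumn'
    ORDER BY v.value DESC
    LIMIT 10 OFFSET 0
) sorted_results ON sorted_results.sorted_results_id = row.id
ORDER BY sorted_results."rowValues_value" DESC;
```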
Q: GET http://localhost:3001/delete 404 (Not Found) My index.js (Server file) const express = require("express"); const bodyparser = require("body-parser"); const app = express(); const mysql = require("mysql"); const cors = require("cors"); app.use(cors()); app.use(express.json()); app.use(bodyparser.urlencoded({ extended: true })); const db = mysql.createConnection({ user: "root", host: "localhost", password: "", database: "ecom_store", }); app.post("/create", (req, res) => { const brand_name = req.body.brand_name; const brand_id = req.body.brand_id; db.query( "INSERT INTO brands (brand_name,brand_id) VALUES (?,?)", [brand_name, brand_id], (err, result) => { if (err) { console.log(err); } else { res.send(result); } } ); }); app.get("/brands", (req, res) => { db.query("SELECT * FROM brands", (err, result) => { if (err) { console.log(err); } else { res.send(result); } }); }); app.put("/update", (req, res) => { const brand_id = req.body.id; const brand_name = req.body.name; console.log(brand_id); console.log(brand_name); db.query( "UPDATE brands SET brand_name = ? WHERE brand_id = ?", [brand_name, brand_id], (err, result) => { if (err) { console.log(err); } else { console.log("successfully updated"); res.send(result); } } ); }); app.delete(`/delete`, async (req, res) => { const brand_id = req.body.id; console.log(brand_id); db.query( "DELETE FROM brands WHERE brand_id = ?", [brand_id], (err, result) => { if (err) { console.log(err); } else { res.send(result); console.log(result); } } ); }); app.listen(3001, () => { console.log("Yey, your server is running on port 3001"); }); I am on http://localhost:3001/delete but getting this error. I am using XAMPP for my database. I have attached a screenshot of my server page [![ServerPage][1]][1] [1]: https://i.stack.imgur.com/x2VNT.png It works perfectly fine when I use the app.get() method rather than app.delete(). Please help me out with this error. Client code const deleteDATA = async () => { const id = prompt("Enter the Id of the brand"); axios.delete(`http://localhost:3001/delete`, [id]).then((response) => { alert("Successfully deleted"); console.log(response.data); }); }; A: Add the id to the request body, something like this, to get the id on the server side. axios.delete(url, { data: { id: id }, });
GET http://localhost:3001/delete 404 (Not Found)
My index.js (Server file) const express = require("express"); const bodyparser = require("body-parser"); const app = express(); const mysql = require("mysql"); const cors = require("cors"); app.use(cors()); app.use(express.json()); app.use(bodyparser.urlencoded({ extended: true })); const db = mysql.createConnection({ user: "root", host: "localhost", password: "", database: "ecom_store", }); app.post("/create", (req, res) => { const brand_name = req.body.brand_name; const brand_id = req.body.brand_id; db.query( "INSERT INTO brands (brand_name,brand_id) VALUES (?,?)", [brand_name, brand_id], (err, result) => { if (err) { console.log(err); } else { res.send(result); } } ); }); app.get("/brands", (req, res) => { db.query("SELECT * FROM brands", (err, result) => { if (err) { console.log(err); } else { res.send(result); } }); }); app.put("/update", (req, res) => { const brand_id = req.body.id; const brand_name = req.body.name; console.log(brand_id); console.log(brand_name); db.query( "UPDATE brands SET brand_name = ? WHERE brand_id = ?", [brand_name, brand_id], (err, result) => { if (err) { console.log(err); } else { console.log("successfully updated"); res.send(result); } } ); }); app.delete(`/delete`, async (req, res) => { const brand_id = req.body.id; console.log(brand_id); db.query( "DELETE FROM brands WHERE brand_id = ?", [brand_id], (err, result) => { if (err) { console.log(err); } else { res.send(result); console.log(result); } } ); }); app.listen(3001, () => { console.log("Yey, your server is running on port 3001"); }); I am on http://localhost:3001/delete but getting this error. I am using XAMPP for my database. I have attached a screenshot of my server page [![ServerPage][1]][1] [1]: https://i.stack.imgur.com/x2VNT.png It works perfectly fine when I use the app.get() method rather than app.delete(). Please help me out with this error. Client code const deleteDATA = async () => { const id = prompt("Enter the Id of the brand"); axios.delete(`http://localhost:3001/delete`, [id]).then((response) => { alert("Successfully deleted"); console.log(response.data); }); };
[ "Add the id to request body something like that for getting the id in server side.\naxios.delete(url, { data: { id: id }, }); \n\n" ]
[ 0 ]
[]
[]
[ "express", "mysql", "node.js" ]
stackoverflow_0074675571_express_mysql_node.js.txt
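Two things worth spelling out here. First, the 404 in the title is expected behaviour: opening http://localhost:3001/delete in a browser sends a GET, and the server registers only app.delete for that path, so Express answers 404; that says nothing about whether the DELETE route itself works. Second, the accepted fix applied to the question's own client function, since axios.delete takes a config object (not a data array) as its second argument:

```javascript
const deleteDATA = async () => {
  const id = prompt("Enter the Id of the brand");
  // The body must go under the `data` key for req.body.id to be set server-side
  const response = await axios.delete("http://localhost:3001/delete", {
    data: { id: id },
  });
  alert("Successfully deleted");
  console.log(response.data);
};
```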
Q: How to hash CSS module class names in Nextjs 13? How can I edit/minify/hash/hide/obfuscate css class names in Next JS? I tried many ways including this thread Getting the following errors when trying this solution. yarn build yarn run v1.22.19 $ next build warn - You have enabled experimental feature (appDir) in next.config.js. warn - Experimental features are not covered by semver, and may cause unexpected or broken application behavior. Use at your own risk. info - Thank you for testing `appDir` please leave your feedback at https://nextjs.link/app-feedback warn - The @next/font/google font Inter has no selected subsets. Please specify subsets in the function call or in your next.config.js, otherwise no fonts will be preloaded. Read more: https://nextjs.org/docs/messages/google-fonts-missing-subsets info - Creating an optimized production build Failed to compile. HookWebpackError: Unexpected '/'. Escaping special characters with \ may help. at makeWebpackError (C:\k\vercel\node_modules\.pnpm\[email protected]_m5sxuueb27gk6ddc5gums6vtgq\node_modules\next\dist\compiled\webpack\bundle5.js:28:308185) at C:\k\vercel\node_modules\.pnpm\[email protected]_m5sxuueb27gk6ddc5gums6vtgq\node_modules\next\dist\compiled\webpack\bundle5.js:28:105236 at eval (eval at create (C:\k\vercel\node_modules\.pnpm\[email protected]_m5sxuueb27gk6ddc5gums6vtgq\node_modules\next\dist\compiled\webpack\bundle5.js:13:28771), <anonymous>:44:1) -- inner error -- Error: Unexpected '/'. Escaping special characters with \ may help. at C:\k\vercel\static\css\9db6a345a2f242fe.css:1:817 at Root._error (C:\k\vercel\node_modules\.pnpm\[email protected]_m5sxuueb27gk6ddc5gums6vtgq\node_modules\next\dist\compiled\cssnano-simple\index.js:190:78465) at Root.error (C:\k\vercel\node_modules\.pnpm\[email protected]_m5sxuueb27gk6ddc5gums6vtgq\node_modules\next\dist\compiled\cssnano-simple\index.js:190:124360) at Parser.error (C:\k\vercel\node_modules\.pnpm\[email protected]_m5sxuueb27gk6ddc5gums6vtgq\node_modules\next\dist\compiled\cssnano-simple\index.js:190:86811) at Parser.unexpected (C:\k\vercel\node_modules\.pnpm\[email protected]_m5sxuueb27gk6ddc5gums6vtgq\node_modules\next\dist\compiled\cssnano-simple\index.js:190:87297) at Parser.combinator (C:\k\vercel\node_modules\.pnpm\[email protected]_m5sxuueb27gk6ddc5gums6vtgq\node_modules\next\dist\compiled\cssnano-simple\index.js:190:85544) at new Parser (C:\k\vercel\node_modules\.pnpm\[email protected]_m5sxuueb27gk6ddc5gums6vtgq\node_modules\next\dist\compiled\cssnano-simple\index.js:190:78322) at Processor._root (C:\k\vercel\node_modules\.pnpm\[email protected]_m5sxuueb27gk6ddc5gums6vtgq\node_modules\next\dist\compiled\cssnano-simple\index.js:190:95242) at Processor._runSync (C:\k\vercel\node_modules\.pnpm\[email protected]_m5sxuueb27gk6ddc5gums6vtgq\node_modules\next\dist\compiled\cssnano-simple\index.js:190:95749) > Build failed because of webpack errors error Command failed with exit code 1. info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
How to hash CSS module class names in Nextjs 13?
How can I edit/minify/hash/hide/obfuscate css class names in Next JS? I tried many ways including this thread Getting the following errors when trying this solution. yarn build yarn run v1.22.19 $ next build warn - You have enabled experimental feature (appDir) in next.config.js. warn - Experimental features are not covered by semver, and may cause unexpected or broken application behavior. Use at your own risk. info - Thank you for testing `appDir` please leave your feedback at https://nextjs.link/app-feedback warn - The @next/font/google font Inter has no selected subsets. Please specify subsets in the function call or in your next.config.js, otherwise no fonts will be preloaded. Read more: https://nextjs.org/docs/messages/google-fonts-missing-subsets info - Creating an optimized production build Failed to compile. HookWebpackError: Unexpected '/'. Escaping special characters with \ may help. at makeWebpackError (C:\k\vercel\node_modules\.pnpm\[email protected]_m5sxuueb27gk6ddc5gums6vtgq\node_modules\next\dist\compiled\webpack\bundle5.js:28:308185) at C:\k\vercel\node_modules\.pnpm\[email protected]_m5sxuueb27gk6ddc5gums6vtgq\node_modules\next\dist\compiled\webpack\bundle5.js:28:105236 at eval (eval at create (C:\k\vercel\node_modules\.pnpm\[email protected]_m5sxuueb27gk6ddc5gums6vtgq\node_modules\next\dist\compiled\webpack\bundle5.js:13:28771), <anonymous>:44:1) -- inner error -- Error: Unexpected '/'. Escaping special characters with \ may help. at C:\k\vercel\static\css\9db6a345a2f242fe.css:1:817 at Root._error (C:\k\vercel\node_modules\.pnpm\[email protected]_m5sxuueb27gk6ddc5gums6vtgq\node_modules\next\dist\compiled\cssnano-simple\index.js:190:78465) at Root.error (C:\k\vercel\node_modules\.pnpm\[email protected]_m5sxuueb27gk6ddc5gums6vtgq\node_modules\next\dist\compiled\cssnano-simple\index.js:190:124360) at Parser.error (C:\k\vercel\node_modules\.pnpm\[email protected]_m5sxuueb27gk6ddc5gums6vtgq\node_modules\next\dist\compiled\cssnano-simple\index.js:190:86811) at Parser.unexpected (C:\k\vercel\node_modules\.pnpm\[email protected]_m5sxuueb27gk6ddc5gums6vtgq\node_modules\next\dist\compiled\cssnano-simple\index.js:190:87297) at Parser.combinator (C:\k\vercel\node_modules\.pnpm\[email protected]_m5sxuueb27gk6ddc5gums6vtgq\node_modules\next\dist\compiled\cssnano-simple\index.js:190:85544) at new Parser (C:\k\vercel\node_modules\.pnpm\[email protected]_m5sxuueb27gk6ddc5gums6vtgq\node_modules\next\dist\compiled\cssnano-simple\index.js:190:78322) at Processor._root (C:\k\vercel\node_modules\.pnpm\[email protected]_m5sxuueb27gk6ddc5gums6vtgq\node_modules\next\dist\compiled\cssnano-simple\index.js:190:95242) at Processor._runSync (C:\k\vercel\node_modules\.pnpm\[email protected]_m5sxuueb27gk6ddc5gums6vtgq\node_modules\next\dist\compiled\cssnano-simple\index.js:190:95749) > Build failed because of webpack errors error Command failed with exit code 1. info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
[]
[]
[ "To hash CSS module class names in Nextjs 13, you can use the getLocalIdent function from the css-loader package. To configure the css-loader to hash CSS module class names, follow these steps:\nInstall the css-loader package by running npm install css-loader or yarn add css-loader in your project's root directory.\nIn your next.config.js file, add the following code to configure the css-loader to hash CSS module class names:\nconst { getLocalIdent } = require('css-loader');\n\nmodule.exports = {\n webpack: (config) => {\n config.module.rules.push({\n test: /\\.css$/,\n use: [\n {\n loader: 'css-loader',\n options: {\n modules: {\n getLocalIdent: (context, localIdentName, localName, options) => {\n // Generate a hashed class name using the `getLocalIdent` function\n return getLocalIdent(context, localIdentName, localName, {\n ...options,\n // Enable hashing of class names\n hashPrefix: 'hash',\n });\n },\n },\n },\n },\n ],\n });\n\n return config;\n },\n};\n\nIn your CSS module files, import the css-loader package and use the locals property to access the hashed class names:\n@import 'css-loader/locals';\n\n.class-name {\n /* CSS styles */\n}\n\nIn your React components, use the className prop to apply the hashed class names to your elements:\nimport styles from './styles.css';\n\nconst MyComponent = () => (\n <div className={styles.className}>\n {/* Content */}\n </div>\n);\n\nWith this setup, the css-loader will automatically hash the class names in your CSS modules and make them unique for each component. This can help prevent conflicts and ensure that your styles are applied correctly.\nthis an Ai generated answer\n" ]
[ -2 ]
[ "javascript", "next.js", "nextjs13", "typescript" ]
stackoverflow_0074677233_javascript_next.js_nextjs13_typescript.txt
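This record has no accepted answer, but the build error is itself a clue: raw base64 hashes contain + and /, which are invalid in CSS class selectors, so a hashed localIdentName can produce exactly the Unexpected '/' failure shown. One possible direction, strictly a sketch: it relies on css-loader's documented getLocalIdent option but reaches into Next.js webpack internals whose exact shape is an assumption and may change between versions:

```javascript
// next.config.js (sketch)
const crypto = require('crypto');

const hashOnlyIdent = (context, _, exportName) =>
  '_' + // leading underscore keeps the selector valid if the hash starts with a digit
  crypto
    .createHash('md5')
    .update(context.resourcePath + exportName)
    .digest('base64')
    .replace(/[^a-zA-Z0-9]/g, '') // drop '+' and '/', the characters that break the CSS parser
    .slice(0, 8);

module.exports = {
  webpack(config, { dev }) {
    if (!dev) {
      for (const rule of config.module.rules) {
        for (const inner of rule.oneOf ?? []) {
          for (const use of [].concat(inner.use ?? [])) {
            if (use?.loader?.includes('css-loader') && use.options?.modules) {
              use.options.modules.getLocalIdent = hashOnlyIdent;
            }
          }
        }
      }
    }
    return config;
  },
};
```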
Q: Return "True" if all the characters in a string are "x" or "X" else return false I am looking at this code challenge: Complete the function isAllX to determine if the entire string is made of lower-case x or upper-case X. Return true if they are, false if not. Examples: isAllX("Xx"); // true isAllX("xAbX"); // false Below is my answer, but it is wrong. I want "false" for the complete string if any of the character is not "x" or "X": function isAllX(string) { for (let i = 0; i < string.length; i++) { if (string[i] === "x" || string[i] === "X") { console.log(true); } else if (string[i] !== "x" || string[i] !== "X") { console.log(false); } } } isAllX("xAbX"); A: Your loop is outputting a result in every iteration. There are two issues with that: You should only give one result for an input, so not in every iteration; currently you are reporting on every single character in the input string. You are asked to return a boolean result (false/true), not to have the function print something. That should be left to the caller You could take a simpler approach though, and first turn the input string to all lower case. Now you only have to look for "x". Then take out all "x" and see if something is left over. You can check the length property of the resulting string to decide whether the return value should be false or true: function isAllX(string) { return string.toLowerCase().replaceAll("x", "").length == 0; } console.log(isAllX("xxXXxxAxx")); // false console.log(isAllX("xxXXxxXxx")); // true If you are confortable with regular expressions, you could also use the test method: function isAllX(string) { return /^x*$/i.test(string); } console.log(isAllX("xxXXxxAxx")); // false console.log(isAllX("xxXXxxXxx")); // true A: You can use regex to find the same. function allX(testString) { return /^x+$/i.test(testString); } console.log(allX("xxXX")); console.log(allX("xxAAAXX")); A: You can try this way. function isAllX(str) { let isX = true; let newString = str.toLowerCase(); for (let i = 0; i < newString.length; i++) { if (newString[i] !== "x") { isX = false; } } return isX; } console.log(isAllX("xAbX")); console.log(isAllX("XXXxxxXXXxxx"));
Return "True" if all the characters in a string are "x" or "X" else return false
I am looking at this code challenge: Complete the function isAllX to determine if the entire string is made of lower-case x or upper-case X. Return true if they are, false if not. Examples: isAllX("Xx"); // true isAllX("xAbX"); // false Below is my answer, but it is wrong. I want "false" for the complete string if any of the characters is not "x" or "X": function isAllX(string) { for (let i = 0; i < string.length; i++) { if (string[i] === "x" || string[i] === "X") { console.log(true); } else if (string[i] !== "x" || string[i] !== "X") { console.log(false); } } } isAllX("xAbX");
[ "Your loop is outputting a result in every iteration. There are two issues with that:\n\nYou should only give one result for an input, so not in every iteration; currently you are reporting on every single character in the input string.\nYou are asked to return a boolean result (false/true), not to have the function print something. That should be left to the caller\n\nYou could take a simpler approach though, and first turn the input string to all lower case. Now you only have to look for \"x\". Then take out all \"x\" and see if something is left over. You can check the length property of the resulting string to decide whether the return value should be false or true:\n\n\nfunction isAllX(string) {\n return string.toLowerCase().replaceAll(\"x\", \"\").length == 0;\n}\n\n\nconsole.log(isAllX(\"xxXXxxAxx\")); // false\nconsole.log(isAllX(\"xxXXxxXxx\")); // true\n\n\n\nIf you are confortable with regular expressions, you could also use the test method:\n\n\nfunction isAllX(string) {\n return /^x*$/i.test(string);\n}\n\n\nconsole.log(isAllX(\"xxXXxxAxx\")); // false\nconsole.log(isAllX(\"xxXXxxXxx\")); // true\n\n\n\n", "You can use regex to find the same.\n\n\nfunction allX(testString) {\n return /^x+$/i.test(testString);\n}\n\nconsole.log(allX(\"xxXX\"));\nconsole.log(allX(\"xxAAAXX\"));\n\n\n\n", "You can try this way.\n\nfunction isAllX(str) {\n let isX = true;\n let newString = str.toLowerCase();\n\n for (let i = 0; i < newString.length; i++) {\n if (newString[i] !== \"x\") {\n isX = false;\n }\n }\n return isX;\n}\nconsole.log(isAllX(\"xAbX\"));\nconsole.log(isAllX(\"XXXxxxXXXxxx\"));\n\n\n" ]
[ 3, 0, 0 ]
[]
[]
[ "javascript" ]
stackoverflow_0074675876_javascript.txt
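One more idiomatic option the answers skip: Array.prototype.every over a spread string reads closest to the problem statement. Note a design choice the regex answers silently disagree on: ^x*$ accepts the empty string while ^x+$ rejects it; the sketch below rejects it, matching the ^x+$ answer.

```javascript
function isAllX(str) {
  return str.length > 0 && [...str].every((c) => c === "x" || c === "X");
}

console.log(isAllX("Xx"));   // true
console.log(isAllX("xAbX")); // false
console.log(isAllX(""));     // false
```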
Q: npm check and update package if needed We need to integrate Karma test runner into TeamCity and for that I'd like to give sys-engineers small script (powershell or whatever) that would: pick up desired version number from some config file (I guess I can put it as a comment right in the karma.conf.js) check if the defined version of karma runner installed in npm's global repo if it's not, or the installed version is older than desired: pick up and install right version run it: karma start .\Scripts-Tests\karma.conf.js --reporters teamcity --single-run So my real question is: "how can one check in a script, if desired version of package installed?". Should you do the check, or it's safe to just call npm -g install everytime? I don't want to always check and install the latest available version, because other config values may become incompatible A: To check if any module in a project is 'old': npm outdated 'outdated' will check every module defined in package.json and see if there is a newer version in the NPM registry. For example, say xml2js 0.2.6 (located in node_modules in the current project) is outdated because a newer version exists (0.2.7). You would see: [email protected] node_modules/xml2js current=0.2.6 To update all dependencies, if you are confident this is desirable: npm update Or, to update a single dependency such as xml2js: npm update xml2js To update package.json version numbers, append the --save flag: npm update --save A: npm outdated will identify packages that should be updated, and npm update <package name> can be used to update each package. But prior to [email protected], npm update <package name> will not update the versions in your package.json which is an issue. The best workflow is to: Identify out of date packages with npm outdated Update the versions in your package.json Run npm update to install the latest versions of each package Check out npm-check-updates to help with this workflow. Install npm-check-updates Run npm-check-updates to list what packages are out of date (basically the same thing as running npm outdated) Run npm-check-updates -u to update all the versions in your package.json (this is the magic sauce) Run npm update as usual to install the new versions of your packages based on the updated package.json A: There is also a "fresh" module called npm-check: npm-check Check for outdated, incorrect, and unused dependencies. It also provides a convenient interactive way to update the dependencies with npm-check -u. A: One easy step: $ npm i -g npm-check-updates && ncu -u && npm i That is all. All of the package versions in package.json will be the latest major versions. Edit: What is happening here? Installing a package that checks updates for you. Use this package to update all package versions in your package.json (-u is short for --updateAll). Install all of the new versions of the packages. A: To update a single local package: First find out your outdated packages: npm outdated Then update the package or packages that you want manually as: npm update --save package_name This way it is not necessary to update your local package.json file. Note that this will update your package to the latest version. If you write some version in your package.json file and do: npm update package_name In this case you will get just the next stable version (wanted) regarding the version that you wrote in your package.json file. And with npm list (package_name) you can find out the current version of your local packages. 
A: You can try either of these options: Check outdated packages npm outdated Check and pick packages to update npx npm-check -u A: No additional packages, to just check outdated and update those which are, this command will do: npm install $(npm outdated | cut -d' ' -f 1 | sed '1d' | xargs -I '$' echo '$@latest' | xargs echo) A: NPM commands to update or fix vulnerabilities in some dependency manifest files Use below command to check outdated or vulnerabilities in your node modules. npm audit If any vulnerabilities found, use below command to fix all issues. npm audit fix If it doesn't work for you then try npm audit fix -f, this command will almost fix all vulnerabilities. Some dependencies or devDependencies are locked in package-lock.json file, so we use -f flag to force update them. If you don't want to use force audit fix then you can manually fix your dependencies versions by changing them in package-lock.json and package.json file. Then run npm update && npm upgrade A: When installing npm packages (both globally or locally) you can define a specific version by using the @version syntax to define a version to be installed. In other words, doing: npm install -g [email protected] will ensure that only 0.9.2 is installed and won't reinstall if it already exists. As a word of a advice, I would suggest avoiding global npm installs wherever you can. Many people don't realize that if a dependency defines a bin file, it gets installed to ./node_modules/.bin/. Often, its very easy to use that local version of an installed module that is defined in your package.json. In fact, npm scripts will add the ./node_modules/.bin onto your path. As an example, here is a package.json that, when I run npm install && npm test will install the version of karma defined in my package.json, and use that version of karma (installed at node_modules/.bin/karma) when running the test script: { "name": "myApp", "main": "app.js", "scripts": { "test": "karma test/*", }, "dependencies": {...}, "devDependencies": { "karma": "0.9.2" } } This gives you the benefit of your package.json defining the version of karma to use and not having to keep that config globally on your CI box. A: As of [email protected]+ you can simply do: npm update <package name> This will automatically update the package.json file. We don't have to update the latest version manually and then use npm update <package name> You can still get the old behavior using npm update --no-save (Reference) A: A different approach would be to first uprade the package.json file using, ncu -u and then simply run, npm install to update all the packages to the latest version. ps: It will update all the packages to the latest version however if the package is already up to date that package will not be affected at all. A: 3 simple steps you can use for update all outdated packages First, check the packages which are outdated sudo npm i -g npm-check-updates Second, put all of them in ready ncu -u Results in Terminal will be like this: Third, just update all of them. npm install That's it. A: Just do this to update everything to the latest version - npx npm-check-updates -u Note - You'll be prompted to install npm-check-updates. Press y and enter. Now run npm i. You're good to go. A: To really update just one package install NCU and then run it just for that package. This will bump to the real latest. 
npm install -g npm-check-updates ncu -f your-intended-package-name -u A: You can do this completely automatically in 2022 Install npm-check-updates Run the command ncu --doctor -u It will first try every dependency you have and run tests, if the tests fail it will update each dependency one by one and run tests after each update A: One more for bash: npm outdated -parseable|cut -d: -f5|xargs -L1 npm i A: I'm just interested in updating the outdated packages using the semantic versioning rules in my package.json. Here's a one-liner that takes care of that npm update `npm outdated | awk '{print $1}' | tr '\n' ' '` What it does: takes the output from npm outdated and pipes that into awk where we're grabbing just the name of the package (in column 1) then we're using tr to convert newline characters into spaces finally -- using backticks -- we're using the output of the preceding steps as arguments to npm update so we get all our needed updates in one shot. One would think that there's a way to do this using npm alone, but it wasn't here when I looked, so I'm just dropping this here in case it's helpful to anyone . ** I believe there's an answer that MikeMajara provides here that does something similar, but it's appending @latest to the updated package name, which I'm not really interested in as a part of my regularly scheduled updates. A: If you want to upgrade a package to the latest release, (major, minor and patch), append the @latest keyword to the end of the package name, ex: npm i express-mongo-sanitize@latest this will update express-mongo-sanitize from version 1.2.1 for example to version 2.2.0. If you want to know which packages are outdated and which can be updated, use the npm outdated command ex: $ npm outdated Package Current Wanted Latest Location Depended by express-rate-limit 3.5.3 3.5.3 6.4.0 node_modules/express-rate-limit apiv2 helmet 3.23.3 3.23.3 5.1.0 node_modules/helmet apiv2 request-ip 2.2.0 2.2.0 3.3.0 node_modules/request-ip apiv2 validator 10.11.0 10.11.0 13.7.0 node_modules/validator apiv2 A: If you have multiple projects with the same node-modules content, pnpm is recommended. This will prevent the modules from being downloaded in each project. After the installation the answer to your question is: pnpm up
npm check and update package if needed
We need to integrate Karma test runner into TeamCity and for that I'd like to give sys-engineers small script (powershell or whatever) that would: pick up desired version number from some config file (I guess I can put it as a comment right in the karma.conf.js) check if the defined version of karma runner installed in npm's global repo if it's not, or the installed version is older than desired: pick up and install right version run it: karma start .\Scripts-Tests\karma.conf.js --reporters teamcity --single-run So my real question is: "how can one check in a script, if desired version of package installed?". Should you do the check, or it's safe to just call npm -g install everytime? I don't want to always check and install the latest available version, because other config values may become incompatible
[ "To check if any module in a project is 'old':\nnpm outdated\n\n'outdated' will check every module defined in package.json and see if there is a newer version in the NPM registry.\nFor example, say xml2js 0.2.6 (located in node_modules in the current project) is outdated because a newer version exists (0.2.7). You would see:\[email protected] node_modules/xml2js current=0.2.6\n\nTo update all dependencies, if you are confident this is desirable:\nnpm update\n\nOr, to update a single dependency such as xml2js:\nnpm update xml2js\n\nTo update package.json version numbers, append the --save flag:\nnpm update --save\n\n", "npm outdated will identify packages that should be updated, and npm update <package name> can be used to update each package. But prior to [email protected], npm update <package name> will not update the versions in your package.json which is an issue.\nThe best workflow is to:\n\nIdentify out of date packages with npm outdated\nUpdate the versions in your package.json\nRun npm update to install the latest versions of each package\n\nCheck out npm-check-updates to help with this workflow.\n\nInstall npm-check-updates\nRun npm-check-updates to list what packages are out of date (basically the same thing as running npm outdated)\nRun npm-check-updates -u to update all the versions in your package.json (this is the magic sauce)\nRun npm update as usual to install the new versions of your packages based on the updated package.json\n\n", "There is also a \"fresh\" module called npm-check:\n\nnpm-check\nCheck for outdated, incorrect, and unused dependencies.\n\n\nIt also provides a convenient interactive way to update the dependencies with npm-check -u.\n", "One easy step:\n$ npm i -g npm-check-updates && ncu -u && npm i\nThat is all. All of the package versions in package.json will be the latest major versions.\nEdit:\nWhat is happening here?\n\n\nInstalling a package that checks updates for you.\n\nUse this package to update all package versions in your package.json (-u is short for --updateAll).\n\nInstall all of the new versions of the packages.\n\n\n\n", "\nTo update a single local package:\n\nFirst find out your outdated packages:\nnpm outdated\nThen update the package or packages that you want manually as:\nnpm update --save package_name\n\n\nThis way it is not necessary to update your local package.json\n file.\nNote that this will update your package to the latest version.\n\nIf you write some version in your package.json file and do:\nnpm update package_name\nIn this case you will get just the next stable version (wanted) regarding the version that you wrote in your package.json file.\n\nAnd with npm list (package_name) you can find out the current version of your local packages.\n", "You can try either of these options:\n\nCheck outdated packages\nnpm outdated\n\n\n\nCheck and pick packages to update\nnpx npm-check -u\n\n\n\n\n", "No additional packages, to just check outdated and update those which are, this command will do:\nnpm install $(npm outdated | cut -d' ' -f 1 | sed '1d' | xargs -I '$' echo '$@latest' | xargs echo)\n", "NPM commands to update or fix vulnerabilities in some dependency manifest files\n\nUse below command to check outdated or vulnerabilities in your node modules.\n\nnpm audit\n\nIf any vulnerabilities found, use below command to fix all issues.\n\nnpm audit fix\n\nIf it doesn't work for you then try \n\nnpm audit fix -f, this command will almost fix all vulnerabilities. 
Some dependencies or devDependencies are locked in package-lock.json file, so we use -f flag to force update them.\n\nIf you don't want to use force audit fix then you can manually fix your dependencies versions by changing them in package-lock.json and package.json file. Then run \n\nnpm update && npm upgrade\n", "When installing npm packages (both globally or locally) you can define a specific version by using the @version syntax to define a version to be installed.\nIn other words, doing:\nnpm install -g [email protected] \nwill ensure that only 0.9.2 is installed and won't reinstall if it already exists.\nAs a word of a advice, I would suggest avoiding global npm installs wherever you can. Many people don't realize that if a dependency defines a bin file, it gets installed to ./node_modules/.bin/. Often, its very easy to use that local version of an installed module that is defined in your package.json. In fact, npm scripts will add the ./node_modules/.bin onto your path.\nAs an example, here is a package.json that, when I run npm install && npm test will install the version of karma defined in my package.json, and use that version of karma (installed at node_modules/.bin/karma) when running the test script:\n{\n \"name\": \"myApp\",\n \"main\": \"app.js\",\n \"scripts\": {\n \"test\": \"karma test/*\",\n },\n \"dependencies\": {...},\n \"devDependencies\": {\n \"karma\": \"0.9.2\"\n }\n}\n\nThis gives you the benefit of your package.json defining the version of karma to use and not having to keep that config globally on your CI box.\n", "As of [email protected]+ you can simply do:\nnpm update <package name>\n\nThis will automatically update the package.json file. We don't have to update the latest version manually and then use npm update <package name>\nYou can still get the old behavior using \nnpm update --no-save\n\n(Reference)\n", "A different approach would be to first uprade the package.json file using,\nncu -u\n\n\nand then simply run,\nnpm install\n\nto update all the packages to the latest version.\nps: It will update all the packages to the latest version however if the package is already up to date that package will not be affected at all.\n", "3 simple steps you can use for update all outdated packages\nFirst, check the packages which are outdated\nsudo npm i -g npm-check-updates\nSecond, put all of them in ready\nncu -u\nResults in Terminal will be like this:\n\nThird, just update all of them.\nnpm install\nThat's it.\n", "Just do this to update everything to the latest version -\nnpx npm-check-updates -u\nNote - You'll be prompted to install npm-check-updates. Press y and enter.\nNow run npm i. You're good to go.\n", "To really update just one package install NCU and then run it just for that package. 
This will bump to the real latest.\nnpm install -g npm-check-updates\n\nncu -f your-intended-package-name -u\n\n", "You can do this completely automatically in 2022\n\nInstall npm-check-updates\n\nRun the command\nncu --doctor -u\n\nIt will first try every dependency you have and run tests, if the tests fail it will update each dependency one by one and run tests after each update\n\n\n", "One more for bash:\nnpm outdated -parseable|cut -d: -f5|xargs -L1 npm i\n\n", "I'm just interested in updating the outdated packages using the semantic versioning rules in my package.json.\nHere's a one-liner that takes care of that\nnpm update `npm outdated | awk '{print $1}' | tr '\\n' ' '`\n\nWhat it does:\n\ntakes the output from npm outdated and\npipes that into awk where we're grabbing just the name of the package (in column 1)\nthen we're using tr to convert newline characters into spaces\nfinally -- using backticks -- we're using the output of the preceding steps as arguments to npm update so we get all our needed updates in one shot.\n\nOne would think that there's a way to do this using npm alone, but it wasn't here when I looked, so I'm just dropping this here in case it's helpful to anyone .\n** I believe there's an answer that MikeMajara provides here that does something similar, but it's appending @latest to the updated package name, which I'm not really interested in as a part of my regularly scheduled updates.\n", "If you want to upgrade a package to the latest release, (major, minor and patch), append the @latest keyword to the end of the package name, ex:\nnpm i express-mongo-sanitize@latest\n\nthis will update express-mongo-sanitize from version 1.2.1 for example to version 2.2.0.\nIf you want to know which packages are outdated and which can be updated, use the npm outdated command\nex:\n$ npm outdated\nPackage Current Wanted Latest Location Depended by\nexpress-rate-limit 3.5.3 3.5.3 6.4.0 node_modules/express-rate-limit apiv2\nhelmet 3.23.3 3.23.3 5.1.0 node_modules/helmet apiv2\nrequest-ip 2.2.0 2.2.0 3.3.0 node_modules/request-ip apiv2\nvalidator 10.11.0 10.11.0 13.7.0 node_modules/validator apiv2\n\n\n", "If you have multiple projects with the same node-modules content, pnpm is recommended. This will prevent the modules from being downloaded in each project. After the installation the answer to your question is:\npnpm up\n\n" ]
[ 997, 474, 188, 137, 76, 44, 28, 25, 7, 7, 4, 4, 4, 3, 2, 1, 1, 1, 0 ]
[]
[]
[ "karma_runner", "node.js", "npm", "teamcity" ]
stackoverflow_0016525430_karma_runner_node.js_npm_teamcity.txt
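Circling back to what the question actually asked (a script that checks whether a pinned version of a global package is installed and installs it only when needed): none of the answers shows that end to end. A POSIX-shell sketch; WANT is a hypothetical pinned version that a real script would parse from the comment in karma.conf.js the question mentions.

```sh
#!/usr/bin/env sh
WANT="0.12.16"  # hypothetical pinned version
HAVE="$(npm ls -g karma --depth=0 2>/dev/null | sed -n 's/.*karma@\([0-9.][0-9.]*\).*/\1/p')"

if [ "$HAVE" != "$WANT" ]; then
  npm install -g "karma@$WANT"
fi

karma start ./Scripts-Tests/karma.conf.js --reporters teamcity --single-run
```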
Q: Reversing a string in Rust What is wrong with this: fn main() { let word: &str = "lowks"; assert_eq!(word.chars().rev(), "skwol"); } I get an error like this: error[E0369]: binary operation `==` cannot be applied to type `std::iter::Rev<std::str::Chars<'_>>` --> src/main.rs:4:5 | 4 | assert_eq!(word.chars().rev(), "skwol"); | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | = note: an implementation of `std::cmp::PartialEq` might be missing for `std::iter::Rev<std::str::Chars<'_>>` = note: this error originates in a macro outside of the current crate What is the correct way to do this? A: Since, as @DK. suggested, .graphemes() isn't available on &str in stable, you might as well just do what @huon suggested in the comments: fn main() { let foo = "palimpsest"; println!("{}", foo.chars().rev().collect::<String>()); } A: The first, and most fundamental, problem is that this isn't how you reverse a Unicode string. You are reversing the order of the code points, where you want to reverse the order of graphemes. There may be other issues with this that I'm not aware of. Text is hard. The second issue is pointed out by the compiler: you are trying to compare a string literal to a char iterator. chars and rev don't produce new strings, they produce lazy sequences, as with iterators in general. The following works: /*! Add the following to your `Cargo.toml`: ```cargo [dependencies] unicode-segmentation = "0.1.2" ``` */ extern crate unicode_segmentation; use unicode_segmentation::UnicodeSegmentation; fn main() { let word: &str = "loẅks"; let drow: String = word // Split the string into an Iterator of &strs, where each element is an // extended grapheme cluster. .graphemes(true) // Reverse the order of the grapheme iterator. .rev() // Collect all the chars into a new owned String. .collect(); assert_eq!(drow, "skẅol"); // Print it out to be sure. println!("drow = `{}`", drow); } Note that graphemes used to be in the standard library as an unstable method, so the above will break with sufficiently old versions of Rust. In that case, you need to use UnicodeSegmentation::graphemes(s, true) instead. A: If you are just dealing with ASCII characters, you can make the reversal in place with the unstable reverse function for slices. It is doing something like that: fn main() { let mut slice = *b"lowks"; let end = slice.len() - 1; for i in 0..end / 2 { slice.swap(i, end - i); } assert_eq!(std::str::from_utf8(&slice).unwrap(), "skwol"); } Playground
Reversing a string in Rust
What is wrong with this: fn main() { let word: &str = "lowks"; assert_eq!(word.chars().rev(), "skwol"); } I get an error like this: error[E0369]: binary operation `==` cannot be applied to type `std::iter::Rev<std::str::Chars<'_>>` --> src/main.rs:4:5 | 4 | assert_eq!(word.chars().rev(), "skwol"); | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | = note: an implementation of `std::cmp::PartialEq` might be missing for `std::iter::Rev<std::str::Chars<'_>>` = note: this error originates in a macro outside of the current crate What is the correct way to do this?
[ "Since, as @DK. suggested, .graphemes() isn't available on &str in stable, you might as well just do what @huon suggested in the comments:\nfn main() {\n let foo = \"palimpsest\";\n println!(\"{}\", foo.chars().rev().collect::<String>());\n}\n\n", "The first, and most fundamental, problem is that this isn't how you reverse a Unicode string. You are reversing the order of the code points, where you want to reverse the order of graphemes. There may be other issues with this that I'm not aware of. Text is hard.\nThe second issue is pointed out by the compiler: you are trying to compare a string literal to a char iterator. chars and rev don't produce new strings, they produce lazy sequences, as with iterators in general. The following works:\n/*!\nAdd the following to your `Cargo.toml`:\n\n```cargo\n[dependencies]\nunicode-segmentation = \"0.1.2\"\n```\n*/\nextern crate unicode_segmentation;\nuse unicode_segmentation::UnicodeSegmentation;\n\nfn main() {\n let word: &str = \"loẅks\";\n let drow: String = word\n // Split the string into an Iterator of &strs, where each element is an\n // extended grapheme cluster.\n .graphemes(true)\n // Reverse the order of the grapheme iterator.\n .rev()\n // Collect all the chars into a new owned String.\n .collect();\n\n assert_eq!(drow, \"skẅol\");\n\n // Print it out to be sure.\n println!(\"drow = `{}`\", drow);\n}\n\nNote that graphemes used to be in the standard library as an unstable method, so the above will break with sufficiently old versions of Rust. In that case, you need to use UnicodeSegmentation::graphemes(s, true) instead.\n", "If you are just dealing with ASCII characters, you can make the reversal in place with the unstable reverse function for slices.\nIt is doing something like that:\nfn main() {\n let mut slice = *b\"lowks\";\n let end = slice.len() - 1;\n for i in 0..end / 2 {\n slice.swap(i, end - i);\n }\n assert_eq!(std::str::from_utf8(&slice).unwrap(), \"skwol\");\n}\n\nPlayground\n" ]
[ 88, 59, 0 ]
[]
[]
[ "rust" ]
stackoverflow_0027996430_rust.txt
Q: Can't build docker compose This is my code for Dockerfile: FROM python:3.8 WORKDIR /py-api-yahoo-finance COPY requirements.txt requirements.txt RUN pip3 install -r requirements.txt COPY api_yahoo . And this is my docker-compose.yml file: version: "3.8" services: py-api-yahoo-finance: build: . ports: - "5000:5000" container_name: api_yahoo command: python manage.py runserver 0.0.0.0:5000 I try to build the image by the command: docker-compose build and then I encounter the following error: failed to solve with frontend dockerfile.v0: failed to read dockerfile: open /var/lib/docker/tmp/buildkit-mount974641243/Dockerfile: no such file or directory ERROR: Service 'py-api-yahoo-finance' failed to build : Build failed I tried to follow all the tutorials I found online but none of the answers helped. I would appreciate your help, thanks. A: It looks like the issue might be with the path specified in the Dockerfile. In the WORKDIR command, you're specifying /py-api-yahoo-finance as the working directory, but in the COPY command you're copying files from the current directory (.) to the working directory. This means that the requirements.txt file and the api_yahoo directory won't be found in the working directory, and the build will fail. To fix this, you can either specify the full path to the requirements.txt file and api_yahoo directory in the COPY command, or you can move the Dockerfile to the directory containing the requirements.txt file and api_yahoo directory and run the docker-compose command from there. This way, the COPY command will be able to find the files in the current directory. Here's an example of how your Dockerfile and docker-compose.yml files could look after making these changes: Dockerfile: FROM python:3.8 WORKDIR /py-api-yahoo-finance COPY requirements.txt /py-api-yahoo-finance/requirements.txt RUN pip3 install -r requirements.txt COPY api_yahoo /py-api-yahoo-finance/api_yahoo docker-compose.yml: version: "3.8" services: py-api-yahoo-finance: build: . ports: - "5000:5000" container_name: api_yahoo command: python manage.py runserver 0.0.0.0:5000 Make sure to run the docker-compose build command from the directory containing the Dockerfile, requirements.txt file, and api_yahoo directory. This should fix the issue and allow the image to build successfully. EDIT It's possible that there is an issue with the path specified in the docker-compose.yml file. In the build section, you're specifying the current directory (.) as the build context, but it's possible that the Dockerfile isn't in that directory when you run the docker-compose command. To fix this, you can specify the full path to the directory containing the Dockerfile (the build context) in the build section of the docker-compose.yml file. This way, docker-compose will be able to find the Dockerfile and build the image successfully. Here's an example of how your docker-compose.yml file could look after making this change: version: "3.8" services: py-api-yahoo-finance: build: /path/to/project ports: - "5000:5000" container_name: api_yahoo command: python manage.py runserver 0.0.0.0:5000 Make sure to specify the correct path to that directory in the build section. This should fix the issue and allow you to build the image successfully. If you continue to have issues, it may be helpful to check the permissions on the Dockerfile, requirements.txt file, and api_yahoo directory to make sure they are readable by the user running the docker-compose command.
You can use the ls -l command to check the permissions on these files and directories, and use the chmod command to change the permissions if necessary. For example, if the Dockerfile has permissions set to -rw-------, you can use the following command to make it readable by everyone: chmod a+r Dockerfile This will add read permission for all users on the Dockerfile, and you should be able to build the image successfully.
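If the Dockerfile has a non-default name or lives somewhere other than the context root, Compose's long-form build syntax separates the two; a minimal sketch (the docker/Dockerfile.dev path is only a hypothetical example):

```yaml
services:
  py-api-yahoo-finance:
    build:
      context: .                        # directory sent to the Docker daemon
      dockerfile: docker/Dockerfile.dev # resolved relative to the context
    ports:
      - "5000:5000"
```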
Can't build docker compose
This is my code for Dockerfile: FROM python:3.8 WORKDIR /py-api-yahoo-finance COPY requirements.txt requirements.txt RUN pip3 install -r requirements.txt COPY api_yahoo . And this is my docker-compose.yml file: version: "3.8" services: py-api-yahoo-finance: build: . ports: - "5000:5000" container_name: api_yahoo command: python manage.py runserver 0.0.0.0:5000 I try to build the image by the command: docker-compose build and then I encounter the following error: failed to solve with frontend dockerfile.v0: failed to read dockerfile: open /var/lib/docker/tmp/buildkit-mount974641243/Dockerfile: no such file or directory ERROR: Service 'py-api-yahoo-finance' failed to build : Build failed I tried to follow all the tutorials I found online but none of the answers helped. I would appreciate your help, thanks.
[ "It looks like the issue might be with the path specified in the Dockerfile. In the WORKDIR command, you're specifying /py-api-yahoo-finance as the working directory, but in the COPY command you're copying files from the current directory (.) to the working directory.\nThis means that the requirements.txt file and the api_yahoo directory won't be found in the working directory, and the build will fail.\nTo fix this, you can either specify the full path to the requirements.txt file and api_yahoo directory in the COPY command, or you can move the Dockerfile to the directory containing the requirements.txt file and api_yahoo directory and run the docker-compose command from there. This way, the COPY command will be able to find the files in the current directory.\nHere's an example of how your Dockerfile and docker-compose.yml files could look after making these changes:\nDockerfile:\nFROM python:3.8\nWORKDIR /py-api-yahoo-finance\nCOPY requirements.txt /py-api-yahoo-finance/requirements.txt\nRUN pip3 install -r requirements.txt\nCOPY api_yahoo /py-api-yahoo-finance/api_yahoo\n\ndocker-compose.yml:\nversion: \"3.8\"\nservices:\n py-api-yahoo-finance:\n build: .\n ports:\n - \"5000:5000\"\n container_name: api_yahoo\n command: python manage.py runserver 0.0.0.0:5000\n\nMake sure to run the docker-compose build command from the directory containing the Dockerfile, requirements.txt file, and api_yahoo directory. This should fix the issue and allow the image to build successfully.\nEDIT\nIt's possible that there is an issue with the path specified in the docker-compose.yml file. In the build section, you're specifying the current directory (.) as the path to the Dockerfile, but it's possible that the Dockerfile isn't in the current directory when you run the docker-compose command.\nTo fix this, you can specify the full path to the Dockerfile in the build section of the docker-compose.yml file. This way, docker-compose will be able to find the Dockerfile and build the image successfully.\nHere's an example of how your docker-compose.yml file could look after making this change:\nversion: \"3.8\"\nservices:\npy-api-yahoo-finance:\nbuild: /path/to/Dockerfile\nports:\n- \"5000:5000\"\ncontainer_name: api_yahoo\ncommand: python manage.py runserver 0.0.0.0:5000\n\nMake sure to specify the correct path to the Dockerfile in the build section. This should fix the issue and allow you to build the image successfully.\nIf you continue to have issues, it may be helpful to check the permissions on the Dockerfile, requirements.txt file, and api_yahoo directory to make sure they are readable by the user running the docker-compose command. You can use the ls -l command to check the permissions on these files and directories, and use the chmod command to change the permissions if necessary.\nFor example, if the Dockerfile has permissions set to -rw-rw-rw-, you can use the following command to make it readable by everyone:\nchmod a+r Dockerfile\n\nThis will add read permission for all users on the Dockerfile, and you should be able to build the image successfully.\n" ]
[ 0 ]
[]
[]
[ "docker", "docker_compose", "docker_machine", "dockerfile" ]
stackoverflow_0074677300_docker_docker_compose_docker_machine_dockerfile.txt
Q: terminal user interface with python What is the best TUI module to use with Python? I used prompt-toolkit, but many things in it, such as Layout, didn't work for me, and I couldn't use a view in Textual without errors. I want to build a TUI for myself, and I want a well-documented, working TUI or CUI Python module. A: There are many TUI libraries available for Python, and the best one for you will depend on your specific needs and preferences. Some popular options include curses, npyscreen, urwid, and blessings. All of these libraries provide basic TUI functionality, such as creating and positioning text and interactive elements on the screen, and handling user input. If you need more advanced features, such as multiple windows and panels, or rich text formatting, you may want to consider using a library that provides a higher-level interface, such as PyQt or PyGTK. These libraries can be more complex to use, but they offer a wider range of features and more flexibility in terms of design and layout. Ultimately, the best TUI library for you will depend on the requirements of your specific project, so it's worth experimenting with different options to see which one works best for you. A: An AppSession is an interactive session, usually connected to one terminal. Within one such session, interaction with many applications can happen, one after the other. The input/output device is not supposed to change during one session. Warning: Always use the create_app_session function to create an instance, so that it gets activated correctly. Parameters input – Use this as a default input for all applications running in this session, unless an input is passed to the Application explicitly. output – Use this as a default output.
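To get a feel for the lowest-level option, here is a minimal curses sketch (curses ships with Python's standard library on Unix-like systems; it is not available in the stock Windows build):

```python
import curses

def main(stdscr):
    # curses.wrapper handles terminal setup and teardown for us
    stdscr.clear()
    stdscr.addstr(0, 0, "Hello from curses! Press any key to exit.")
    stdscr.refresh()
    stdscr.getch()  # block until a key is pressed

curses.wrapper(main)
```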
terminal user interface with python
What is the best TUI module to use with Python? I used prompt-toolkit, but many things in it, such as Layout, didn't work for me, and I couldn't use a view in Textual without errors. I want to build a TUI for myself, and I want a well-documented, working TUI or CUI Python module.
[ "There are many TUI libraries available for Python, and the best one for you will depend on your specific needs and preferences. Some popular options include curses, npyscreen, urwid, and blessings. All of these libraries provide basic TUI functionality, such as creating and positioning text and interactive elements on the screen, and handling user input.\nIf you need more advanced features, such as multiple windows and panels, or rich text formatting, you may want to consider using a library that provides a higher-level interface, such as PyQt or PyGTK. These libraries can be more complex to use, but they offer a wider range of features and more flexibility in terms of design and layout.\nUltimately, the best TUI library for you will depend on the requirements of your specific project, so it's worth experimenting with different options to see which one works best for you.\n", "An AppSession is an interactive session, usually connected to one terminal. Within one such session, interaction with many applications can happen, one after the other.\nThe input/output device is not supposed to change during one session.\nWarning: Always use the create_app_session function to create an instance, so that it gets activated correctly.\nParameters\ninput – Use this as a default input for all applications running in this session, unless an input is passed to the Application explicitely.\noutput – Use this as a default output.\n" ]
[ 0, 0 ]
[]
[]
[ "command_line_interface", "python", "tui" ]
stackoverflow_0074677311_command_line_interface_python_tui.txt
Q: Find pattern in file powershell I have this pattern that's composed of a quantity, a description and a price. Please note that the numbers and text can change every time. I need to use powershell to find and save the results. So far I got this... the pattern format is what is driving me crazy. $example_line = '3.00 CEBICHE CORVINA PI 26,805.00' $pattern= '\d.\d\d \D' $results = $example_line | Select-String $pattern -AllMatches $results.Matches.Value Any help is appreciated. Thanks in advance ! EDIT. After seeing the answers I'm trying regex to build the array. I don't know why it's not working for me... I'm reading the txt file and I get results, but when I want to see the array I don't get any data in it. So for example I'm using this to save all matching lines into the array function pasar_a_word($archivo) { $content = Get-Content $root\UNB\FACTURA_FINAL\$archivo $pattern = '(\d\.\d\d) (\D+?) (\d+,\d\d\d\.\d\d)' Write-Host "FUNCION PASAR A WORD" for($i = 0; $i -lt $content.Count; $i++){ $line = $content[$i] $results = ([regex]::Matches($line, $pattern)).Value } Write-Host $results[1] } A: Code: $example_line = '3.00 CEBICHE CORVINA PI 26,805.00' # Expression Pattern $pattern = '(\d\.\d\d) (\D+?) (\d+,\d\d\d\.\d\d)' # Use the -match operator to match the string against the pattern $match = $example_line -match $pattern # If the match is successful, extract the three captured groups if ($match) { $quantity = $matches[1] $description = $matches[2] $price = $matches[3] # Print the extracted values Write-Output "Quantity: $quantity" Write-Output "Description: $description" Write-Output "Price: $price" } A: You can use the regex static .Matches() method for this: $string = '2.00 CAUSA PERUANA GRAN 32,504.00 1.00 ENSALADA QUINUA 7,309.00' $results = ([regex]::Matches($string, '(\d\.\d{2}\s[\w\s]+\d+,[\d.]+)')).Value # $results[0] --> 2.00 CAUSA PERUANA GRAN 32,504.00 # $results[1] --> 1.00 ENSALADA QUINUA 7,309.00 A: The proper pattern is $pattern= '\d.\d\d \D* \d*,\d*.\d*'
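One likely reason the EDIT's function shows no data: $results is reassigned on every pass of the for loop, so only the matches from the last line survive. A minimal sketch that accumulates the matches from all lines instead (assuming the same $content and $pattern as above):

```powershell
# Collect the matches from every line into a single array;
# assigning a foreach statement to a variable gathers all its output
$allResults = foreach ($line in $content) {
    ([regex]::Matches($line, $pattern)).Value
}
Write-Host $allResults[1]   # second match overall, if any
```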
Find pattern in file powershell
I have this pattern that's composed of a quantity, a description and a price. Please note that the numbers and text can change every time. I need to use powershell to find and save the results. So far I got this... the pattern format is what is driving me crazy. $example_line = '3.00 CEBICHE CORVINA PI 26,805.00' $pattern= '\d.\d\d \D' $results = $example_line | Select-String $pattern -AllMatches $results.Matches.Value Any help is appreciated. Thanks in advance ! EDIT. After seeing the answers I'm trying regex to build the array. I don't know why it's not working for me... I'm reading the txt file and I get results, but when I want to see the array I don't get any data in it. So for example I'm using this to save all matching lines into the array function pasar_a_word($archivo) { $content = Get-Content $root\UNB\FACTURA_FINAL\$archivo $pattern = '(\d\.\d\d) (\D+?) (\d+,\d\d\d\.\d\d)' Write-Host "FUNCION PASAR A WORD" for($i = 0; $i -lt $content.Count; $i++){ $line = $content[$i] $results = ([regex]::Matches($line, $pattern)).Value } Write-Host $results[1] }
[ "Code:\n$example_line = '3.00 CEBICHE CORVINA PI 26,805.00'\n\n# Expression Pattern\n$pattern = '(\\d\\.\\d\\d) (\\D+?) (\\d+,\\d\\d\\d\\.\\d\\d)'\n\n# Use the -match operator to match the string against the pattern\n$match = $example_line -match $pattern\n\n# If the match is successful, extract the three captured groups\nif ($match) {\n $quantity = $matches[1]\n $description = $matches[2]\n $price = $matches[3]\n\n # Print the extracted values\n Write-Output \"Quantity: $quantity\"\n Write-Output \"Description: $description\"\n Write-Output \"Price: $price\"\n}\n\n", "You can use the regex static .Matches() method for this:\n$string = '2.00 CAUSA PERUANA GRAN 32,504.00 1.00 ENSALADA QUINUA 7,309.00'\n$results = ([regex]::Matches($string, '(\\d\\.\\d{2}\\s[\\w\\s]+\\d+,[\\d.]+)')).Value\n\n# $results[0] --> 2.00 CAUSA PERUANA GRAN 32,504.00\n# $results[1] --> 1.00 ENSALADA QUINUA 7,309.00\n\n", "The proper pattern is\n$pattern= '\\d.\\d\\d \\D* \\d*,\\d*.\\d*'\n\n" ]
[ 1, 1, 0 ]
[]
[]
[ "powershell", "regex" ]
stackoverflow_0074662659_powershell_regex.txt
Q: PHP MariaDB Pivot Table with dynamic Header and Sub Header I want to create a report from the database; here are my tables tbl_alokasi id_alokasi nama_alokasi 1 Pemerataan 2 Khusus n etc... tbl_akun id_akun nama_akun 1 Pajak 2 Retribusi 3 Lainnya n etc... tbl_penerima id_penerima nama_penerima 1 Asep 2 Ujang n etc... tbl_rincian id_rincian id_penerima id_alokasi id_akun nominal 1 1 1 1 1000 2 1 1 2 2000 4 2 1 1 1000 5 2 1 2 2000 6 1 2 1 500 7 1 2 2 100 8 2 2 1 500 all tables contain dynamic values. here is what I want with the report: the table's main header is dynamic, and the sub header is dynamic too. Report here database query, save in stored procedure CREATE DEFINER=`root`@`localhost` PROCEDURE `cetak_view`() BEGIN SET @SQL = NULL; SELECT GROUP_CONCAT( DISTINCT CONCAT( "SUM(CASE WHEN id_alokasi = ", ar.id_alokasi, " AND id_akun = ", ar.id_akun ," THEN nominal ELSE 0 END) AS 'alokasi_",ar.id_alokasi,"_akun_", ar.id_akun, "'" ) ) INTO @SQL FROM tbl_rincian ar; SET @SQL3 = CONCAT( "SELECT ar.id_penerima, d.nama_penerima,", @SQL, " FROM tbl_rincian ar LEFT JOIN tbl_penerima d ON d.id_penerima = ar.id_penerima GROUP BY ar.id_penerima" ); PREPARE stmt FROM @SQL3; EXECUTE stmt; DEALLOCATE PREPARE stmt; END result: [result] https://i.stack.imgur.com/XtW5V.png PHP Code $db_host = 'localhost'; $db_port = '3306'; $db_name = 'db_appsimple'; $db_user = 'root'; $db_pass = ''; $conn = mysqli_connect($db_host, $db_user, $db_pass, $db_name); if (!$conn) { die('Error. MySQL: ' . mysqli_connect_error()); } $query = \mysqli_query($conn, 'call cetak_view()'); $result = mysqli_fetch_all($query, MYSQLI_ASSOC); echo "<table border='1'> <tr> "; foreach ($result[0] as $key => $value) { echo "<th>" . $key . "</th>"; } echo " </tr> </table>"; I don't know how to create the dynamic header A: Finally I found a solution.
Here is the code I use in a real project. Query database in stored procedure: CREATE DEFINER=`root`@`localhost` PROCEDURE `cetak_dbh_rencana_rinci`( IN jenis_dbh INT(2), IN alokasi_dbh INT (16), IN skpd INT(8), IN tahun_r INT(4) ) BEGIN SET @SQL = NULL; SELECT GROUP_CONCAT( DISTINCT CONCAT( "SUM(CASE WHEN id_alokasi_dbh = ", alokasi_dbh ," AND id_akun = ", ar.id_akun, " THEN nominal_rencana ELSE 0 END) AS '" ,alokasi_dbh, "_" , ar.kode_akun,"'" ) ) INTO @SQL FROM dbh_alokasi_rencana ar; SET @SQL2 = NULL; SELECT GROUP_CONCAT( DISTINCT CONCAT( "SUM(CASE WHEN id_alokasi_dbh = ", alokasi_dbh, " THEN nominal_rencana ELSE 0 END) AS jmlalokasi") ) INTO @SQL2 FROM dbh_alokasi_rencana; -- SET @SQL3 = CONCAT( "SELECT ar.id_desa, d.nama_desa, ar.id_alokasi_dbh, ", @SQL,"," , @SQL2, " FROM dbh_alokasi_rencana ar LEFT JOIN tbl_desa d ON d.id_desa = ar.id_desa WHERE ar.id_jenis_dbh =",jenis_dbh ," AND id_skpd = ", skpd ," AND tahun_realisasi=",tahun_r, " AND id_alokasi_dbh =", alokasi_dbh , " GROUP BY ar.id_desa ORDER BY ar.id_alokasi_dbh, ar.id_desa " ); -- SELECT @SQL3; PREPARE stmt FROM @SQL3; EXECUTE stmt; DEALLOCATE PREPARE stmt; END Result of query: result_db Controller public function cetakRencanaRinci($idJenis, $idSkpd, $tahun) { $desa = $this->desa->getDesa(); return \view('dbh.cetak_rencana_rinciv3', \compact('desa', 'idJenis', 'idSkpd', 'tahun')); } the view <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <meta http-equiv="X-UA-Compatible" content="ie=edge"> <title>Laporan Rencana DBH</title> <style> body { font-family: 'Open Sans', -apple-system, BlinkMacSystemFont, 'Segoe UI', sans-serif; padding: 1px; margin: 0; font-size: 11px; } table { /* border: 1px solid black; */ border-collapse: collapse; } td { padding: 5px; } .bawah { border-bottom: 1px solid #000; } .kiri { border-left: 1px solid #000; } .kanan { border-right: 1px solid #000; } .atas { border-top: 1px solid #000; } .text-kanan { text-align: right; } .text_blok { font-weight: bold; } </style> </head> <?php $alokasi = DB::table('dbh_alokasi') ->where('id_jenis_dbh', $idJenis) ->where('id_skpd', $idSkpd) ->where('tahun_realisasi', $tahun) ->get(); $header1 = ''; $header2 = ''; $idAlokasi = ''; // $desa = DB::table('tbl_desa')->get(); ?> <body> <table width="100%" cellpadding="0" cellspacing="0"> <tbody> <tr> <?php $i_al = 0; foreach ($alokasi as $al) { $kodeAkun = ''; $akun = DB::table('dbh_alokasi_akun') ->where('id_alokasi_dbh', $al->id_alokasi_dbh) ->where('id_jenis_dbh', $idJenis) ->where('id_skpd', $idSkpd) ->where('tahun_realisasi', $tahun) ->get(); echo '<td style="padding-left:0; padding-right:0 !important;">'; echo '<table style="width:100%">'; echo '<tr>'; if ($i_al == 0) { echo '<th rowspan="2" class="kiri atas bawah kanan">No.</th>'; echo '<th rowspan="2" class="atas bawah kanan">Kecamatan</th>'; echo '<th rowspan="2" class="atas bawah kanan">Desa</th>'; } if (count($akun) != 0) { echo '<th colspan="' . count($akun) . '" class="atas bawah kanan">' . $al->nama_alokasi . '</th>'; echo '<th rowspan="2" class="atas bawah kanan">JUMLAH</th>'; echo '<tr>'; } foreach ($akun as $ak) { $kodeAkun .= $al->id_alokasi_dbh . '_' . $ak->kode_akun . '-'; echo '<th class="bawah kanan">' . $ak->nama_akun . 
'</th>'; } echo '</tr>'; $explodeKodeAkun = explode("-", substr($kodeAkun, 0, -1)); $rencana = DB::select('call cetak_dbh_rencana_rinci(?,?,?,?)', [$idJenis, $al->id_alokasi_dbh, $idSkpd, $tahun]); $no = 1; foreach ($desa as $d) { echo '<tr>'; if ($i_al == 0) { echo '<td class="kiri bawah kanan">' . $no++ . '</td>'; echo '<td class="kiri bawah kanan">' . $d->nama_kecamatan . '</td>'; echo '<td class="kiri bawah kanan">' . $d->nama_desa . '</td>'; } $sum = 0; foreach ($rencana as $rp) { for ($i = 0; $i < count($explodeKodeAkun); $i++) { if ($rp->id_alokasi_dbh == $al->id_alokasi_dbh && $rp->id_desa == $d->id_desa && property_exists($rp, $explodeKodeAkun[$i])) { echo '<td class="kiri bawah text-kanan">'; echo number_format($rp->{$explodeKodeAkun[$i]}, 0, ',', '.'); echo '</td>'; $sum += $rp->{$explodeKodeAkun[$i]}; } } } if (count($akun) != 0) { echo '<td class="kiri kanan bawah text-kanan text_blok">' . number_format($sum, 0, ',', '.') . '</td>'; } echo '</tr>'; } echo '</table>'; echo '</td>'; $i_al++; } ?> </tr> </tbody> <tbody> </tbody> </table> </body> </html> and here is the final result: final result html. For now, this is the best solution for me.
PHP MariaDB Pivot Table with dynamic Header and Sub Header
I want to create a report from the database; here are my tables tbl_alokasi id_alokasi nama_alokasi 1 Pemerataan 2 Khusus n etc... tbl_akun id_akun nama_akun 1 Pajak 2 Retribusi 3 Lainnya n etc... tbl_penerima id_penerima nama_penerima 1 Asep 2 Ujang n etc... tbl_rincian id_rincian id_penerima id_alokasi id_akun nominal 1 1 1 1 1000 2 1 1 2 2000 4 2 1 1 1000 5 2 1 2 2000 6 1 2 1 500 7 1 2 2 100 8 2 2 1 500 all tables contain dynamic values. here is what I want with the report: the table's main header is dynamic, and the sub header is dynamic too. Report here database query, save in stored procedure CREATE DEFINER=`root`@`localhost` PROCEDURE `cetak_view`() BEGIN SET @SQL = NULL; SELECT GROUP_CONCAT( DISTINCT CONCAT( "SUM(CASE WHEN id_alokasi = ", ar.id_alokasi, " AND id_akun = ", ar.id_akun ," THEN nominal ELSE 0 END) AS 'alokasi_",ar.id_alokasi,"_akun_", ar.id_akun, "'" ) ) INTO @SQL FROM tbl_rincian ar; SET @SQL3 = CONCAT( "SELECT ar.id_penerima, d.nama_penerima,", @SQL, " FROM tbl_rincian ar LEFT JOIN tbl_penerima d ON d.id_penerima = ar.id_penerima GROUP BY ar.id_penerima" ); PREPARE stmt FROM @SQL3; EXECUTE stmt; DEALLOCATE PREPARE stmt; END result: [result] https://i.stack.imgur.com/XtW5V.png PHP Code $db_host = 'localhost'; $db_port = '3306'; $db_name = 'db_appsimple'; $db_user = 'root'; $db_pass = ''; $conn = mysqli_connect($db_host, $db_user, $db_pass, $db_name); if (!$conn) { die('Error. MySQL: ' . mysqli_connect_error()); } $query = \mysqli_query($conn, 'call cetak_view()'); $result = mysqli_fetch_all($query, MYSQLI_ASSOC); echo "<table border='1'> <tr> "; foreach ($result[0] as $key => $value) { echo "<th>" . $key . "</th>"; } echo " </tr> </table>"; I don't know how to create the dynamic header
[ "Finally i found a solution.\nhere is code what i use in real project\nquery database in stored procedure:\nCREATE DEFINER=`root`@`localhost` PROCEDURE `cetak_dbh_rencana_rinci`(\nIN jenis_dbh INT(2),\nIN alokasi_dbh INT (16),\nIN skpd INT(8),\nIN tahun_r INT(4)\n)\nBEGIN\n SET @SQL = NULL;\nSELECT\n GROUP_CONCAT( DISTINCT CONCAT( \"SUM(CASE WHEN id_alokasi_dbh = \", alokasi_dbh ,\" AND id_akun = \", ar.id_akun, \" THEN nominal_rencana ELSE 0 END) AS '\" ,alokasi_dbh, \"_\" , ar.kode_akun,\"'\" ) )\n INTO @SQL \nFROM\n dbh_alokasi_rencana ar;\n \nSET @SQL2 = NULL;\nSELECT\n GROUP_CONCAT( DISTINCT CONCAT( \"SUM(CASE WHEN id_alokasi_dbh = \", alokasi_dbh, \" THEN nominal_rencana ELSE 0 END) AS jmlalokasi\") ) \n INTO @SQL2 \nFROM\n dbh_alokasi_rencana;\n-- \nSET @SQL3 = CONCAT( \"SELECT ar.id_desa, d.nama_desa, ar.id_alokasi_dbh, \", @SQL,\",\" , @SQL2, \" FROM dbh_alokasi_rencana ar LEFT JOIN tbl_desa d ON d.id_desa = ar.id_desa WHERE ar.id_jenis_dbh =\",jenis_dbh ,\" AND id_skpd = \", skpd ,\" AND tahun_realisasi=\",tahun_r, \" AND id_alokasi_dbh =\", alokasi_dbh , \" GROUP BY ar.id_desa\nORDER BY ar.id_alokasi_dbh, ar.id_desa\n\" );\n-- SELECT @SQL3;\nPREPARE stmt FROM @SQL3;\nEXECUTE stmt;\nDEALLOCATE PREPARE stmt;\n\nEND\n\nresult of query :\nresult_db\nController\n public function cetakRencanaRinci($idJenis, $idSkpd, $tahun)\n {\n $desa = $this->desa->getDesa();\n return \\view('dbh.cetak_rencana_rinciv3', \\compact('desa', 'idJenis', 'idSkpd', 'tahun'));\n }\n\nthe view\n<!DOCTYPE html>\n<html lang=\"en\">\n\n<head>\n <meta charset=\"UTF-8\">\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">\n <meta http-equiv=\"X-UA-Compatible\" content=\"ie=edge\">\n <title>Laporan Rencana DBH</title>\n <style>\n body {\n font-family: 'Open Sans', -apple-system, BlinkMacSystemFont, 'Segoe UI', sans-serif;\n padding: 1px;\n margin: 0;\n font-size: 11px;\n }\n\n table {\n /* border: 1px solid black; */\n border-collapse: collapse;\n }\n\n td {\n padding: 5px;\n }\n\n .bawah {\n border-bottom: 1px solid #000;\n }\n\n .kiri {\n border-left: 1px solid #000;\n }\n\n .kanan {\n border-right: 1px solid #000;\n }\n\n .atas {\n border-top: 1px solid #000;\n }\n\n .text-kanan {\n text-align: right;\n }\n\n .text_blok {\n font-weight: bold;\n }\n </style>\n</head>\n<?php\n$alokasi = DB::table('dbh_alokasi')\n ->where('id_jenis_dbh', $idJenis)\n ->where('id_skpd', $idSkpd)\n ->where('tahun_realisasi', $tahun)\n ->get();\n$header1 = '';\n$header2 = '';\n$idAlokasi = '';\n\n\n// $desa = DB::table('tbl_desa')->get();\n?>\n\n<body>\n <table width=\"100%\" cellpadding=\"0\" cellspacing=\"0\">\n <tbody>\n <tr>\n <?php\n $i_al = 0;\n foreach ($alokasi as $al) {\n $kodeAkun = '';\n $akun = DB::table('dbh_alokasi_akun')\n ->where('id_alokasi_dbh', $al->id_alokasi_dbh)\n ->where('id_jenis_dbh', $idJenis)\n ->where('id_skpd', $idSkpd)\n ->where('tahun_realisasi', $tahun)\n ->get();\n echo '<td style=\"padding-left:0; padding-right:0 !important;\">';\n echo '<table style=\"width:100%\">';\n\n echo '<tr>';\n if ($i_al == 0) {\n echo '<th rowspan=\"2\" class=\"kiri atas bawah kanan\">No.</th>';\n echo '<th rowspan=\"2\" class=\"atas bawah kanan\">Kecamatan</th>';\n echo '<th rowspan=\"2\" class=\"atas bawah kanan\">Desa</th>';\n }\n if (count($akun) != 0) {\n echo '<th colspan=\"' . count($akun) . '\" class=\"atas bawah kanan\">' . $al->nama_alokasi . 
'</th>';\n echo '<th rowspan=\"2\" class=\"atas bawah kanan\">JUMLAH</th>';\n echo '<tr>';\n }\n foreach ($akun as $ak) {\n $kodeAkun .= $al->id_alokasi_dbh . '_' . $ak->kode_akun . '-';\n echo '<th class=\"bawah kanan\">' . $ak->nama_akun . '</th>';\n }\n echo '</tr>';\n $explodeKodeAkun = explode(\"-\", substr($kodeAkun, 0, -1));\n $rencana = DB::select('call cetak_dbh_rencana_rinci(?,?,?,?)', [$idJenis, $al->id_alokasi_dbh, $idSkpd, $tahun]);\n $no = 1;\n foreach ($desa as $d) {\n echo '<tr>';\n if ($i_al == 0) {\n echo '<td class=\"kiri bawah kanan\">' . $no++ . '</td>';\n echo '<td class=\"kiri bawah kanan\">' . $d->nama_kecamatan . '</td>';\n echo '<td class=\"kiri bawah kanan\">' . $d->nama_desa . '</td>';\n }\n $sum = 0;\n foreach ($rencana as $rp) {\n for ($i = 0; $i < count($explodeKodeAkun); $i++) {\n if ($rp->id_alokasi_dbh == $al->id_alokasi_dbh && $rp->id_desa == $d->id_desa && property_exists($rp, $explodeKodeAkun[$i])) {\n echo '<td class=\"kiri bawah text-kanan\">';\n echo number_format($rp->{$explodeKodeAkun[$i]}, 0, ',', '.');\n echo '</td>';\n $sum += $rp->{$explodeKodeAkun[$i]};\n }\n }\n }\n if (count($akun) != 0) {\n echo '<td class=\"kiri kanan bawah text-kanan text_blok\">' . number_format($sum, 0, ',', '.') . '</td>';\n }\n echo '</tr>';\n }\n echo '</table>';\n echo '</td>';\n $i_al++;\n }\n ?>\n </tr>\n </tbody>\n <tbody>\n\n </tbody>\n </table>\n\n</body>\n\n</html>\n\n\n\nand here the final result\nfinal result html\nfor now this is the best solution for me.\n" ]
[ 0 ]
[]
[]
[ "html", "mariadb", "mysql", "php" ]
stackoverflow_0074638411_html_mariadb_mysql_php.txt
Q: Connect React Client running on local PC to Kubernetes Cluster on Azure I have 2 deployments + services running on Azure: react client and nodejs auth. I have registered a public IP on Azure which I added to my windows host file (= myexample.com). Typing the URL in the browser, the client opens and requests go to auth service as expected. Now I want to run the client locally (with npm start) but connect to auth service still running on Azure. I removed the client from the cloud deployment (= the deployment+the service) and use the domain (=myexample.cloud) as the base URL in my axios client in my React client. To confirm, on Azure my ingress-nginx-controller of type Load_Balancer shows the aforementioned public IP as its external IP plus ports 80:30819/TCP,443:31077/TCP. When I run the client locally, it shows the correct request URL (http://myexample.cloud/api/users/signin) but I get a 403 Forbidden answer. What am I missing? Shouldn't I be able to connect to my cloud service by using the public IP? Or is the error caused by my client, since Azure is not putting roadblocks in place? I mean, it is a public IP, correct? Update 1 Just to clarify, the 403 Forbidden is not caused by me trying to sign in with incorrect credentials. I have another api/users/health-ckeck route that is giving me the same error My cloud ingress deployment. I have also tried to remove the client part (last 7 lines) to no effect. apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-service annotations: nginx.ingress.kubernetes.io/use-regex: "true" kubernetes.io/ingress.class: nginx spec: rules: - host: myexample.cloud http: paths: - path: /api/users/?(.*) pathType: Prefix backend: service: name: auth-srv port: number: 3000 - path: / pathType: Prefix backend: service: name: client-srv port: number: 3000 my client cloud deployment+service that worked when client was running in cloud apiVersion: apps/v1 kind: Deployment metadata: name: client spec: replicas: 1 selector: matchLabels: app: client template: metadata: labels: app: client spec: containers: - name: client image: client --- apiVersion: v1 kind: Service metadata: name: client spec: selector: app: client ports: - name: client protocol: TCP port: 3000 targetPort: 3000 my auth deployment + service apiVersion: apps/v1 kind: Deployment metadata: name: auth spec: replicas: 1 selector: matchLabels: app: auth template: metadata: labels: app: auth spec: containers: - name: auth image: auth apiVersion: v1 kind: Service metadata: name: auth spec: selector: app: auth ports: - name: auth protocol: TCP port: 3000 targetPort: 3000 A: The problem was actually the CORS policy ("Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin'"), but my browser did not tell me. After switching from Chrome to Firefox, the problem became apparent. I had to add annotations to my ingress controller as described here: express + socket.io + kubernetes Access-Control-Allow-Origin' header
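For reference, ingress-nginx can handle the CORS headers itself via annotations; a minimal sketch (the localhost origin is just an example for a client started with npm start):

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: "http://localhost:3000"
```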
Connect React Client running on local PC to Kubernetes Cluster on Azure
I have 2 deployments + services running on Azure: react client and nodejs auth. I have registered a public IP on Azure which I added to my windows host file (= myexample.com). Typing the URL in the browser, the client opens and requests go to auth service as expected. Now I want to run the client locally (with npm start) but connect to auth service still running on Azure. I removed the client from the cloud deployment (= the deployment+the service) and use the domain (=myexample.cloud) as the base URL in my axios client in my React client. To confirm, on Azure my ingress-nginx-controller of type Load_Balancer shows the aforementioned public IP as its external IP plus ports 80:30819/TCP,443:31077/TCP. When I run the client locally, it shows the correct request URL (http://myexample.cloud/api/users/signin) but I get a 403 Forbidden answer. What am I missing? Shouldn't I be able to connect to my cloud service by using the public IP? Or is the error caused by my client, since Azure is not putting roadblocks in place? I mean, it is a public IP, correct? Update 1 Just to clarify, the 403 Forbidden is not caused by me trying to sign in with incorrect credentials. I have another api/users/health-ckeck route that is giving me the same error My cloud ingress deployment. I have also tried to remove the client part (last 7 lines) to no effect. apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-service annotations: nginx.ingress.kubernetes.io/use-regex: "true" kubernetes.io/ingress.class: nginx spec: rules: - host: myexample.cloud http: paths: - path: /api/users/?(.*) pathType: Prefix backend: service: name: auth-srv port: number: 3000 - path: / pathType: Prefix backend: service: name: client-srv port: number: 3000 my client cloud deployment+service that worked when client was running in cloud apiVersion: apps/v1 kind: Deployment metadata: name: client spec: replicas: 1 selector: matchLabels: app: client template: metadata: labels: app: client spec: containers: - name: client image: client --- apiVersion: v1 kind: Service metadata: name: client spec: selector: app: client ports: - name: client protocol: TCP port: 3000 targetPort: 3000 my auth deployment + service apiVersion: apps/v1 kind: Deployment metadata: name: auth spec: replicas: 1 selector: matchLabels: app: auth template: metadata: labels: app: auth spec: containers: - name: auth image: auth apiVersion: v1 kind: Service metadata: name: auth spec: selector: app: auth ports: - name: auth protocol: TCP port: 3000 targetPort: 3000
[ "The problem was actually CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin'\nbut my browser did not tell me.\nAfter switching from Chrome to Firefox, the problem became apperant.\nI had to add annotations to my ingress controller as described here: express + socket.io + kubernetes Access-Control-Allow-Origin' header\n" ]
[ 0 ]
[]
[]
[ "azure", "kubernetes", "reactjs" ]
stackoverflow_0074671467_azure_kubernetes_reactjs.txt
Q: com.jscape.inet.sftp.SftpException: cause: java.util.NoSuchElementException: no common elements found- JSCAPE Java library I tried to connect and download a file from a Linux server but faced the following exception when connecting to the server using the jscape java library. Code package com.example.util; import com.jscape.inet.sftp.Sftp; import com.jscape.inet.ssh.util.SshParameters; public class TestFTPManager { private static final String hostname = "mycompany.example.com"; private static final String username = "exampleuser"; private static final String password = "examplepassword"; private static final int port = 22; private Sftp sftpClient; public TestFTPManager() { this.sftpClient = new Sftp( new SshParameters(hostname, port, username, password )); } public void connect() throws Exception { this.sftpClient.connect(); } public void setAscii() throws Exception { this.sftpClient.setAscii(); } public void setBinary() throws Exception { this.sftpClient.setBinary(); } public Sftp getSftpClient() { return sftpClient; } public void setSftpClient( Sftp sftpClient ) { this.sftpClient = sftpClient; } public static void main(String[] args) { try { TestFTPManager sftpManager = new TestFTPManager(); sftpManager.getSftpClient().connect(); // Error System.out.println( "Connection successful!" ); // download operation is done here. sftpManager.getSftpClient().disconnect(); System.out.println( "Disconnection successful!" ); } catch (Exception e) { e.printStackTrace(); } } } Error com.jscape.inet.sftp.SftpException: cause: java.util.NoSuchElementException: no common elements found at com.jscape.inet.sftp.SftpConfiguration.createClient(Unknown Source) at com.jscape.inet.sftp.Sftp.connect(Unknown Source) at com.jscape.inet.sftp.Sftp.connect(Unknown Source) at com.example.util.TestFTPManager.main(TestFTPManager.java:54) Caused by: com.jscape.inet.ssh.transport.TransportException: cause: java.util.NoSuchElementException: no common elements found at com.jscape.inet.ssh.transport.AlgorithmSuite.<init>(Unknown Source) at com.jscape.inet.ssh.transport.TransportClient.getSuite(Unknown Source) at com.jscape.inet.ssh.transport.Transport.exchangeKeys(Unknown Source) at com.jscape.inet.ssh.transport.Transport.exchangeKeys(Unknown Source) at com.jscape.inet.ssh.transport.TransportClient.<init>(Unknown Source) at com.jscape.inet.ssh.transport.TransportClient.<init>(Unknown Source) at com.jscape.inet.ssh.SshConfiguration.createConnectionClient(Unknown Source) at com.jscape.inet.ssh.SshStandaloneConnector.openConnection(Unknown Source) ... 4 more Caused by: java.util.NoSuchElementException: no common elements found at com.jscape.inet.ssh.types.SshNameList.getFirstCommonNameFrom(Unknown Source) at com.jscape.inet.ssh.transport.AlgorithmSuite.a(Unknown Source) at com.jscape.inet.ssh.transport.AlgorithmSuite.h(Unknown Source) ... 12 more However, when I commented out the following lines (lines no. 23, 23, 25) in the /etc/ssh/sshd_config file on the server, I could successfully connect and download the file from the server without any exceptions. Question: How can I get rid of this exception without commenting out those lines (lines no. 23, 23, 25) in the /etc/ssh/sshd_config file on the server? I would also appreciate an explanation of why I get this exception. A: If you are facing this issue please check the following findings. I used JSCAPE (Java) library version 8.8.0. According to my understanding, this version does not support some of the Ciphers and KexAlgorithms specified in the sshd_config file.
When you refer to the JSCAPE documentation, the com.jscape.inet.sftp class contains what you need to set key exchanges, ciphers, MACs and compressions if needed. Please click here to see the official documentation; there you can see how to put these things into code. However, the JSCAPE (Java) library that I use (version 8.8.0) does not contain these classes and methods to set key exchanges, ciphers, MACs and compressions. One thing you can try is to use the latest version of the JSCAPE library, but I doubt it is available for free.
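As a general OpenSSH note (not JSCAPE-specific), the "no common elements found" failure means the client and server negotiated no shared algorithm, so the server-side alternative to commenting lines out is to list algorithms both sides support explicitly. A hypothetical sshd_config excerpt (the exact names must match what your OpenSSH version offers; see man sshd_config, and restart sshd after editing):

```
# Modern defaults plus an older key exchange for legacy clients
KexAlgorithms curve25519-sha256,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1
Ciphers aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-256,hmac-sha2-512
```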
com.jscape.inet.sftp.SftpException: cause: java.util.NoSuchElementException: no common elements found- JSCAPE Java library
I tried to connect and download a file from a Linux server but faced the following exception when connecting to the server using the jscape java library. Code package com.example.util; import com.jscape.inet.sftp.Sftp; import com.jscape.inet.ssh.util.SshParameters; public class TestFTPManager { private static final String hostname = "mycompany.example.com"; private static final String username = "exampleuser"; private static final String password = "examplepassword"; private static final int port = 22; private Sftp sftpClient; public TestFTPManager() { this.sftpClient = new Sftp( new SshParameters(hostname, port, username, password )); } public void connect() throws Exception { this.sftpClient.connect(); } public void setAscii() throws Exception { this.sftpClient.setAscii(); } public void setBinary() throws Exception { this.sftpClient.setBinary(); } public Sftp getSftpClient() { return sftpClient; } public void setSftpClient( Sftp sftpClient ) { this.sftpClient = sftpClient; } public static void main(String[] args) { try { TestFTPManager sftpManager = new TestFTPManager(); sftpManager.getSftpClient().connect(); // Error System.out.println( "Connection successful!" ); // download operation is done here. sftpManager.getSftpClient().disconnect(); System.out.println( "Disconnection successful!" ); } catch (Exception e) { e.printStackTrace(); } } } Error com.jscape.inet.sftp.SftpException: cause: java.util.NoSuchElementException: no common elements found at com.jscape.inet.sftp.SftpConfiguration.createClient(Unknown Source) at com.jscape.inet.sftp.Sftp.connect(Unknown Source) at com.jscape.inet.sftp.Sftp.connect(Unknown Source) at com.example.util.TestFTPManager.main(TestFTPManager.java:54) Caused by: com.jscape.inet.ssh.transport.TransportException: cause: java.util.NoSuchElementException: no common elements found at com.jscape.inet.ssh.transport.AlgorithmSuite.<init>(Unknown Source) at com.jscape.inet.ssh.transport.TransportClient.getSuite(Unknown Source) at com.jscape.inet.ssh.transport.Transport.exchangeKeys(Unknown Source) at com.jscape.inet.ssh.transport.Transport.exchangeKeys(Unknown Source) at com.jscape.inet.ssh.transport.TransportClient.<init>(Unknown Source) at com.jscape.inet.ssh.transport.TransportClient.<init>(Unknown Source) at com.jscape.inet.ssh.SshConfiguration.createConnectionClient(Unknown Source) at com.jscape.inet.ssh.SshStandaloneConnector.openConnection(Unknown Source) ... 4 more Caused by: java.util.NoSuchElementException: no common elements found at com.jscape.inet.ssh.types.SshNameList.getFirstCommonNameFrom(Unknown Source) at com.jscape.inet.ssh.transport.AlgorithmSuite.a(Unknown Source) at com.jscape.inet.ssh.transport.AlgorithmSuite.h(Unknown Source) ... 12 more However, when I commented out the following lines (lines no. 23, 23, 25) in the /etc/ssh/sshd_config file on the server, I could successfully connect and download the file from the server without any exceptions. Question: How can I get rid of this exception without commenting out those lines (lines no. 23, 23, 25) in the /etc/ssh/sshd_config file on the server? I would also appreciate an explanation of why I get this exception.
[ "If you are facing this issue please check the following findings.\nI used JSCAPE (Java) library version 8.8.0. According to my understanding, this version is not supported some of the Ciphers and KexAlgorithms specified in the sshd_config file.\nWhen you refer to the JSCAPE documentation, com.jscape.inet.sftp class contains what you need to set key exchanges, ciphers, macs and compressions if needed. Please click here to see the official documentation, there you can see how you can put these things into code.\nHowever, the JSCAPE (Java) library that I use (version 8.8.0) does not contain these classes and methods to set key exchanges, ciphers, macs and compressions if needed.\nOne of the things you can try is to use the latest version of the JSCAPE library, but I doubt it is available for free.\n" ]
[ 0 ]
[]
[]
[ "java_8", "jscape", "linux", "openssh", "ssh" ]
stackoverflow_0074413750_java_8_jscape_linux_openssh_ssh.txt
Q: How do I print 3 lists side by side in python? a = ['a', 'b', 'c,'] b = [1, 2, 3] c = [1, 2, 3] #this is my effort on printing 3 lists side by side; however, I noticed it is completely wrong res = "\n".join("{} {}".format(x, y, z) for x, y, z in zip(a, b, c)) print(res) #how i want my results to look like a 1 1 b 2 2 c 3 3 I was expecting to print 3 lists side by side A: a = ['a', 'b', 'c'] b = [1, 2, 3] c = [1, 2, 3] i = 0 while i < len(a) and i < len(b) and i < len(c): print(a[i], end=' ') print(b[i], end=' ') print(c[i]) i += 1 A: You can use the str.format() method to print multiple lists side by side in Python. The str.format() method allows you to specify placeholders in a string, and then fill those placeholders with values from variables. To print the lists a, b, and c side by side, you can use the following code: a = ['a', 'b', 'c'] b = [1, 2, 3] c = [1, 2, 3] # Create a format string with three placeholders for the values from the lists format_string = "{:<4}{:<4}{:<4}" # Print the lists using the format string print(format_string.format("a", "b", "c")) for x, y, z in zip(a, b, c): print(format_string.format(x, y, z)) This code will produce the following output: a b c a 1 1 b 2 2 c 3 3 The format_string variable contains a format string with three placeholders for the values from the lists. The {:<4} placeholder specifies that the value should be left-aligned in a field of width 4. The zip() function is used to combine the values from the lists into tuples, and then the print() function is used to print each tuple using the format string. Hope it helps. Marcell
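The original attempt was close; the format string just needs one placeholder per value. A minimal sketch keeping the zip/join approach:

```python
a = ['a', 'b', 'c']
b = [1, 2, 3]
c = [1, 2, 3]

# Three placeholders for the three zipped values, one row per line
res = "\n".join("{} {} {}".format(x, y, z) for x, y, z in zip(a, b, c))
print(res)
```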
How do I print 3 lists side by side in python?
a = ['a', 'b', 'c,'] b = [1, 2, 3] c = [1, 2, 3] #this is my effort on printing 3 lists side by side; however, I noticed it is completely wrong res = "\n".join("{} {}".format(x, y, z) for x, y, z in zip(a, b, c)) print(res) #how i want my results to look like a 1 1 b 2 2 c 3 3 I was expecting to print 3 lists side by side
[ "a = ['a', 'b', 'c']\nb = [1, 2, 3]\nc = [1, 2, 3]\n\ni = 0\nwhile i < len(a) and i < len(b) and i < len(c):\n print(a[i], end=' ')\n print(b[i], end=' ')\n print(c[i], end=' ')\n i += 1\n\n", "You can use the str.format() method to print multiple lists side by side in Python. The str.format() method allows you to specify placeholders in a string, and then fill those placeholders with values from variables.\nTo print the lists a, b, and c side by side, you can use the following code:\n a = ['a', 'b', 'c']\n b = [1, 2, 3]\n c = [1, 2, 3]\n\n # Create a format string with three placeholders for the values from the lists\n format_string = \"{:<4}{:<4}{:<4}\"\n\n # Print the lists using the format string\n print(format_string.format(\"a\", \"b\", \"c\"))\n for x, y, z in zip(a, b, c):\n print(format_string.format(x, y, z))\n\nThis code will produce the following output:\na b c \na 1 1 \nb 2 2 \nc 3 3 \n\nThe format_string variable contains a format string with three placeholders for the values from the lists. The {:<4} placeholder specifies that the value should be left-aligned in a field of width 4. The zip() function is used to combine the values from the lists into tuples, and then the print() function is used to print each tuple using the format string.\nHope it helps.\nMarcell\n" ]
[ 0, 0 ]
[]
[]
[ "list", "tuples" ]
stackoverflow_0074677196_list_tuples.txt
Q: {culture} tag in route is not working for c# dotnet 7 minimal api For an API project with controllers, the {culture} tag can be used to set the culture for the called API. When used like this, the Swagger GUI asks for the culture separately. But when I use the same approach with a minimal API, the {culture} tag is not replaced with the culture (en-US / tr-TR / ...) but needs to be typed exactly as {culture}. Expected behaviour In other words, I am forced to call the API as /{culture}/login instead of /en-US/login. The problematic swagger design This is the minimal API code which is not working. var builder = WebApplication.CreateBuilder(args); // Add services to the container. // Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle builder.Services.AddEndpointsApiExplorer(); builder.Services.AddSwaggerGen(); var app = builder.Build(); // Configure the HTTP request pipeline. if (app.Environment.IsDevelopment()) { app.UseSwagger(); app.UseSwaggerUI(); } app.UseHttpsRedirection(); #region WeatherInfo var summaries = new[] { "Freezing", "Bracing", "Chilly", "Cool", "Mild", "Warm", "Balmy", "Hot", "Sweltering", "Scorching" }; app.MapGet("/{culture}/weatherforecast", () => { var forecast = Enumerable.Range(1, 5).Select(index => new WeatherForecast ( DateOnly.FromDateTime(DateTime.Now.AddDays(index)), Random.Shared.Next(-20, 55), summaries[Random.Shared.Next(summaries.Length)] )) .ToArray(); return forecast; }) .WithName("GetWeatherForecast") .WithOpenApi(); #endregion app.Run(); record WeatherForecast(DateOnly Date, int TemperatureC, string? Summary) { public int TemperatureF => 32 + (int)(TemperatureC / 0.5556); } A: var builder = WebApplication.CreateBuilder(args); // Add services to the container. // Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle builder.Services.AddEndpointsApiExplorer(); builder.Services.AddSwaggerGen(); var app = builder.Build(); // Configure the HTTP request pipeline. if (app.Environment.IsDevelopment()) { app.UseSwagger(); app.UseSwaggerUI(); } app.UseHttpsRedirection(); #region WeatherInfo var summaries = new[] { "Freezing", "Bracing", "Chilly", "Cool", "Mild", "Warm", "Balmy", "Hot", "Sweltering", "Scorching" }; // you need to add a culture parameter to the route handler app.MapGet("/{culture}/weatherforecast", (string culture) => { var forecast = Enumerable.Range(1, 5).Select(index => new WeatherForecast ( DateOnly.FromDateTime(DateTime.Now.AddDays(index)), Random.Shared.Next(-20, 55), summaries[Random.Shared.Next(summaries.Length)] )) .ToArray(); return forecast; }) .WithName("GetWeatherForecast") .WithOpenApi(); #endregion app.Run(); record WeatherForecast(DateOnly Date, int TemperatureC, string? Summary) { public int TemperatureF => 32 + (int)(TemperatureC / 0.5556); }
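If you also want the route to reject values that are not culture-shaped, a route constraint can be added to the template; a minimal sketch (the doubled braces escape the literal { } inside a route template's regex constraint):

```csharp
// Only match two lowercase letters, optionally followed by -XX (e.g. en, en-US, tr-TR)
app.MapGet("/{culture:regex(^[a-z]{{2}}(-[A-Z]{{2}})?$)}/weatherforecast",
    (string culture) => Results.Ok(culture));
```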
{culture} tag in route is not working for c# dotnet 7 minimal api
For an API project with controllers, the {culture} tag can be used to set the culture for the called API. When used like this, the Swagger GUI asks for the culture separately. But when I use the same approach with a minimal API, the {culture} tag is not replaced with the culture (en-US / tr-TR / ...) but needs to be typed exactly as {culture}. Expected behaviour In other words, I am forced to call the API as /{culture}/login instead of /en-US/login. The problematic swagger design This is the minimal API code which is not working. var builder = WebApplication.CreateBuilder(args); // Add services to the container. // Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle builder.Services.AddEndpointsApiExplorer(); builder.Services.AddSwaggerGen(); var app = builder.Build(); // Configure the HTTP request pipeline. if (app.Environment.IsDevelopment()) { app.UseSwagger(); app.UseSwaggerUI(); } app.UseHttpsRedirection(); #region WeatherInfo var summaries = new[] { "Freezing", "Bracing", "Chilly", "Cool", "Mild", "Warm", "Balmy", "Hot", "Sweltering", "Scorching" }; app.MapGet("/{culture}/weatherforecast", () => { var forecast = Enumerable.Range(1, 5).Select(index => new WeatherForecast ( DateOnly.FromDateTime(DateTime.Now.AddDays(index)), Random.Shared.Next(-20, 55), summaries[Random.Shared.Next(summaries.Length)] )) .ToArray(); return forecast; }) .WithName("GetWeatherForecast") .WithOpenApi(); #endregion app.Run(); record WeatherForecast(DateOnly Date, int TemperatureC, string? Summary) { public int TemperatureF => 32 + (int)(TemperatureC / 0.5556); }
[ "var builder = WebApplication.CreateBuilder(args);\n\n// Add services to the container.\n// Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle\nbuilder.Services.AddEndpointsApiExplorer();\nbuilder.Services.AddSwaggerGen();\n\nvar app = builder.Build();\n\n// Configure the HTTP request pipeline.\nif (app.Environment.IsDevelopment())\n{\n app.UseSwagger();\n app.UseSwaggerUI();\n}\n\napp.UseHttpsRedirection();\n#region WeatherInfo\nvar summaries = new[]\n{\n \"Freezing\", \"Bracing\", \"Chilly\", \"Cool\", \"Mild\", \"Warm\", \"Balmy\", \"Hot\", \"Sweltering\", \"Scorching\"\n};\n//you should need to add a culture parameter\napp.MapGet(\"/{culture}/weatherforecast\", (string culture) =>\n{\n var forecast = Enumerable.Range(1, 5).Select(index =>\n new WeatherForecast\n (\n DateOnly.FromDateTime(DateTime.Now.AddDays(index)),\n Random.Shared.Next(-20, 55),\n summaries[Random.Shared.Next(summaries.Length)]\n ))\n .ToArray();\n return forecast;\n})\n.WithName(\"GetWeatherForecast\")\n.WithOpenApi();\n#endregion\n\napp.Run();\n\nrecord WeatherForecast(DateOnly Date, int TemperatureC, string? Summary)\n{\n public int TemperatureF => 32 + (int)(TemperatureC / 0.5556);\n}\n\n" ]
[ 0 ]
[]
[]
[ "c#", "culture", "minimal_apis" ]
stackoverflow_0074670877_c#_culture_minimal_apis.txt
Q: How do I loop through file names returned by find? x=$(find . -name "*.txt") echo $x if I run the above piece of code in Bash shell, what I get is a string containing several file names separated by blank, not a list. Of course, I can further separate them by blank to get a list, but I'm sure there is a better way to do it. So what is the best way to loop through the results of a find command? A: TL;DR: If you're just here for the most correct answer, you probably want my personal preference (see the bottom of this post): # execute `process` once for each file find . -name '*.txt' -exec process {} \; If you have time, read through the rest to see several different ways and the problems with most of them. The full answer: The best way depends on what you want to do, but here are a few options. As long as no file or folder in the subtree has whitespace in its name, you can just loop over the files: for i in $x; do # Not recommended, will break on whitespace process "$i" done Marginally better, cut out the temporary variable x: for i in $(find -name \*.txt); do # Not recommended, will break on whitespace process "$i" done It is much better to glob when you can. White-space safe, for files in the current directory: for i in *.txt; do # Whitespace-safe but not recursive. process "$i" done By enabling the globstar option, you can glob all matching files in this directory and all subdirectories: # Make sure globstar is enabled shopt -s globstar for i in **/*.txt; do # Whitespace-safe and recursive process "$i" done In some cases, e.g. if the file names are already in a file, you may need to use read: # IFS= makes sure it doesn't trim leading and trailing whitespace # -r prevents interpretation of \ escapes. while IFS= read -r line; do # Whitespace-safe EXCEPT newlines process "$line" done < filename read can be used safely in combination with find by setting the delimiter appropriately: find . -name '*.txt' -print0 | while IFS= read -r -d '' line; do process "$line" done For more complex searches, you will probably want to use find, either with its -exec option or with -print0 | xargs -0: # execute `process` once for each file find . -name \*.txt -exec process {} \; # execute `process` once with all the files as arguments*: find . -name \*.txt -exec process {} + # using xargs* find . -name \*.txt -print0 | xargs -0 process # using xargs with arguments after each filename (implies one run per filename) find . -name \*.txt -print0 | xargs -0 -I{} process {} argument find can also cd into each file's directory before running a command by using -execdir instead of -exec, and can be made interactive (prompt before running the command for each file) using -ok instead of -exec (or -okdir instead of -execdir). *: Technically, both find and xargs (by default) will run the command with as many arguments as they can fit on the command line, as many times as it takes to get through all the files. In practice, unless you have a very large number of files it won't matter, and if you exceed the length but need them all on the same command line, you're out of luck and will need to find a different way. A: Whatever you do, don't use a for loop: # Don't do this for file in $(find . -name "*.txt") do …code using "$file" done Three reasons: For the for loop to even start, the find must run to completion. If a file name has any whitespace (including space, tab or newline) in it, it will be treated as two separate names. Although now unlikely, you can overrun your command line buffer.
Imagine if your command line buffer holds 32KB, and your for loop returns 40KB of text. That last 8KB will be dropped right off your for loop and you'll never know it. Always use a while read construct: find . -name "*.txt" -print0 | while read -d $'\0' file do …code using "$file" done The loop will execute while the find command is executing. Plus, this command will work even if a file name is returned with whitespace in it. And, you won't overflow your command line buffer. The -print0 will use NULL as a file separator instead of a newline and the -d $'\0' will use NULL as the separator while reading. A: find . -name "*.txt"|while read fname; do echo "$fname" done Note: this method and the (second) method shown by bmargulies are safe to use with white space in the file/folder names. In order to also have the - somewhat exotic - case of newlines in the file/folder names covered, you will have to resort to the -exec predicate of find like this: find . -name '*.txt' -exec echo "{}" \; The {} is the placeholder for the found item and the \; is used to terminate the -exec predicate. And for the sake of completeness let me add another variant - you gotta love the *nix ways for their versatility: find . -name '*.txt' -print0|xargs -0 -n 1 echo This would separate the printed items with a \0 character that isn't allowed in any of the file systems in file or folder names, to my knowledge, and therefore should cover all bases. xargs picks them up one by one then ... Filenames can include spaces and even control characters. Spaces are (default) delimiters for shell expansion in bash and as a result of that x=$(find . -name "*.txt") from the question is not recommended at all. If find gets a filename with spaces e.g. "the file.txt" you will get 2 separate strings for processing if you process x in a loop. You can improve this by changing the delimiter (the bash IFS variable), e.g. to \r\n, but filenames can include control characters - so this is not a (completely) safe method. From my point of view, there are 2 recommended (and safe) patterns for processing files: 1. Use for loop & filename expansion: for file in ./*.txt; do [[ ! -e $file ]] && continue # continue, if file does not exist # single filename is in $file echo "$file" # your code here done 2. Use find-read-while & process substitution while IFS= read -r -d '' file; do # single filename is in $file echo "$file" # your code here done < <(find . -name "*.txt" -print0) Remarks on Pattern 1: bash returns the search pattern ("*.txt") if no matching file is found - so the extra line "continue, if file does not exist" is needed. see Bash Manual, Filename Expansion shell option nullglob can be used to avoid this extra line. "If the failglob shell option is set, and no matches are found, an error message is printed and the command is not executed." (from Bash Manual above) shell option globstar: "If set, the pattern ‘**’ used in a filename expansion context will match all files and zero or more directories and subdirectories. If the pattern is followed by a ‘/’, only directories and subdirectories match." see Bash Manual, Shopt Builtin other options for filename expansion: extglob, nocaseglob, dotglob & shell variable GLOBIGNORE on Pattern 2: filenames can contain blanks, tabs, spaces, newlines, ... to process filenames in a safe way, find with -print0 is used: the filename is printed with all control characters & terminated with NUL. see also Gnu Findutils Manpage, Unsafe File Name Handling, Safe File Name Handling, unusual characters in filenames.
See David A. Wheeler below for a detailed discussion of this topic. There are some possible patterns to process find results in a while loop. Others (kevin, David W.) have shown how to do this using pipes: files_found=1 find . -name "*.txt" -print0 | while IFS= read -r -d '' file; do # single filename in $file echo "$file" files_found=0 # not working example # your code here done [[ $files_found -eq 0 ]] && echo "files found" || echo "no files found" When you try this piece of code, you will see that it does not work: files_found is always "true" & the code will always echo "no files found". The reason is that each command of a pipeline is executed in a separate subshell, so the changed variable inside the loop (separate subshell) does not change the variable in the main shell script. This is why I recommend using process substitution as the "better", more useful, more general pattern. See I set variables in a loop that's in a pipeline. Why do they disappear... (from Greg's Bash FAQ) for a detailed discussion on this topic. Additional References & Sources: Gnu Bash Manual, Pattern Matching Filenames and Pathnames in Shell: How to do it Correctly, David A. Wheeler Why you don't read lines with "for", Greg's Wiki Why you shouldn't parse the output of ls(1), Greg's Wiki Gnu Bash Manual, Process Substitution A: (Updated to include @Socowi's excellent speed improvement) With any $SHELL that supports it (dash/zsh/bash...): find . -name "*.txt" -exec $SHELL -c ' for i in "$@" ; do echo "$i" done ' {} + Done. Original answer (shorter, but slower): find . -name "*.txt" -exec $SHELL -c ' echo "$0" ' {} \; A: If you can assume the file names don't contain newlines, you can read the output of find into a Bash array using the following command: readarray -t x < <(find . -name '*.txt') Note: -t causes readarray to strip newlines. It won't work if readarray is in a pipe, hence the process substitution. readarray is available since Bash 4. Bash 4.4 and up also supports the -d parameter for specifying the delimiter. Using the null character, instead of newline, to delimit the file names also works in the rare case that the file names contain newlines: readarray -d '' x < <(find . -name '*.txt' -print0) readarray can also be invoked as mapfile with the same options. Reference: https://mywiki.wooledge.org/BashFAQ/005#Loading_lines_from_a_file_or_stream A: # Doesn't handle whitespace for x in `find . -name "*.txt" -print`; do process_one $x done or # Handles whitespace and newlines find . -name "*.txt" -print0 | xargs -0 -n 1 process_one A: I like to first assign the find output to a variable and switch IFS to newline, as follows: FilesFound=$(find . -name "*.txt") IFSbkp="$IFS" IFS=$'\n' counter=1; for file in $FilesFound; do echo "${counter}: ${file}" let counter++; done IFS="$IFSbkp" As @Konrad Rudolph commented, this will not work with newlines in file names. I still think it is handy, as it covers most of the cases where you need to loop over command output. A: You can put the filenames returned by find into an array like this: array=() while IFS= read -r -d ''; do array+=("$REPLY") done < <(find . -name '*.txt' -print0) Now you can just loop through the array to access individual items and do whatever you want with them. Note: It's white-space safe. A: Based on other answers and a comment by @phk, using fd #3 (which still allows using stdin inside the loop): while IFS= read -r f <&3; do echo "$f" done 3< <(find .
-iname "*filename*") A: As already posted on the top answer by Kevin, the best solution is to use a for loop with bash glob, but as bash glob is not recursive by default, this can be fixed by a bash recursive function: #!/bin/bash set -x set -eu -o pipefail all_files=(); function get_all_the_files() { directory="$1"; for item in "$directory"/* "$directory"/.[^.]*; do if [[ -d "$item" ]]; then get_all_the_files "$item"; else all_files+=("$item"); fi; done; } get_all_the_files "/tmp"; for file_path in "${all_files[@]}" do printf 'My file is "%s"\n' "$file_path"; done; Related questions: Bash loop through directory including hidden file Recursively list files from a given directory in Bash ls command: how can I get a recursive full-path listing, one line per file? List files recursively in Linux CLI with path relative to the current directory Recursively List all directories and files bash script, create array of all files in a directory How can I creates array that contains the names of all the files in a folder? How can I creates array that contains the names of all the files in a folder? How to get the list of files in a directory in a shell script? A: You can store your find output in array if you wish to use the output later as: array=($(find . -name "*.txt")) Now to print the each element in new line, you can either use for loop iterating to all the elements of array, or you can use printf statement. for i in ${array[@]};do echo $i; done or printf '%s\n' "${array[@]}" You can also use: for file in "`find . -name "*.txt"`"; do echo "$file"; done This will print each filename in newline To only print the find output in list form, you can use either of the following: find . -name "*.txt" -print 2>/dev/null or find . -name "*.txt" -print | grep -v 'Permission denied' This will remove error messages and only give the filename as output in new line. If you wish to do something with the filenames, storing it in array is good, else there is no need to consume that space and you can directly print the output from find. A: function loop_through(){ length_="$(find . -name '*.txt' | wc -l)" length_="${length_#"${length_%%[![:space:]]*}"}" length_="${length_%"${length_##*[![:space:]]}"}" for i in {1..$length_} do x=$(find . -name '*.txt' | sort | head -$i | tail -1) echo $x done } To grab the length of the list of files for loop, I used the first command "wc -l". That command is set to a variable. Then, I need to remove the trailing white spaces from the variable so the for loop can read it. A: I think using this piece of code (piping the command after while done): while read fname; do echo "$fname" done <<< "$(find . -name "*.txt")" is better than this answer because while loop is executed in a subshell according to here, if you use this answer and variable changes cannot be seen after while loop if you want to modify variables inside the loop.
How to loop through file names returned by find?
x=$(find . -name "*.txt") echo $x if I run the above piece of code in Bash shell, what I get is a string containing several file names separated by blank, not a list. Of course, I can further separate them by blank to get a list, but I'm sure there is a better way to do it. So what is the best way to loop through the results of a find command?
[ "TL;DR: If you're just here for the most correct answer, you probably want my personal preference (see the bottom of this post):\n# execute `process` once for each file\nfind . -name '*.txt' -exec process {} \\;\n\nIf you have time, read through the rest to see several different ways and the problems with most of them.\n\nThe full answer:\nThe best way depends on what you want to do, but here are a few options. As long as no file or folder in the subtree has whitespace in its name, you can just loop over the files:\nfor i in $x; do # Not recommended, will break on whitespace\n process \"$i\"\ndone\n\nMarginally better, cut out the temporary variable x:\nfor i in $(find -name \\*.txt); do # Not recommended, will break on whitespace\n process \"$i\"\ndone\n\nIt is much better to glob when you can. White-space safe, for files in the current directory:\nfor i in *.txt; do # Whitespace-safe but not recursive.\n process \"$i\"\ndone\n\nBy enabling the globstar option, you can glob all matching files in this directory and all subdirectories:\n# Make sure globstar is enabled\nshopt -s globstar\nfor i in **/*.txt; do # Whitespace-safe and recursive\n process \"$i\"\ndone\n\nIn some cases, e.g. if the file names are already in a file, you may need to use read:\n# IFS= makes sure it doesn't trim leading and trailing whitespace\n# -r prevents interpretation of \\ escapes.\nwhile IFS= read -r line; do # Whitespace-safe EXCEPT newlines\n process \"$line\"\ndone < filename\n\nread can be used safely in combination with find by setting the delimiter appropriately:\nfind . -name '*.txt' -print0 | \n while IFS= read -r -d '' line; do \n process \"$line\"\n done\n\nFor more complex searches, you will probably want to use find, either with its -exec option or with -print0 | xargs -0:\n# execute `process` once for each file\nfind . -name \\*.txt -exec process {} \\;\n\n# execute `process` once with all the files as arguments*:\nfind . -name \\*.txt -exec process {} +\n\n# using xargs*\nfind . -name \\*.txt -print0 | xargs -0 process\n\n# using xargs with arguments after each filename (implies one run per filename)\nfind . -name \\*.txt -print0 | xargs -0 -I{} process {} argument\n\nfind can also cd into each file's directory before running a command by using -execdir instead of -exec, and can be made interactive (prompt before running the command for each file) using -ok instead of -exec (or -okdir instead of -execdir).\n*: Technically, both find and xargs (by default) will run the command with as many arguments as they can fit on the command line, as many times as it takes to get through all the files. In practice, unless you have a very large number of files it won't matter, and if you exceed the length but need them all on the same command line, you're SOL find a different way.\n", "What ever you do, don't use a for loop:\n# Don't do this\nfor file in $(find . -name \"*.txt\")\ndo\n …code using \"$file\"\ndone\n\nThree reasons:\n\nFor the for loop to even start, the find must run to completion.\nIf a file name has any whitespace (including space, tab or newline) in it, it will be treated as two separate names.\nAlthough now unlikely, you can overrun your command line buffer. Imagine if your command line buffer holds 32KB, and your for loop returns 40KB of text. That last 8KB will be dropped right off your for loop and you'll never know it.\n\n\nAlways use a while read construct:\nfind . 
-name \"*.txt\" -print0 | while read -d $'\\0' file\ndo\n …code using \"$file\"\ndone\n\nThe loop will execute while the find command is executing. Plus, this command will work even if a file name is returned with whitespace in it. And, you won't overflow your command line buffer.\nThe -print0 will use the NULL as a file separator instead of a newline and the -d $'\\0' will use NULL as the separator while reading.\n", "find . -name \"*.txt\"|while read fname; do\n echo \"$fname\"\ndone\n\nNote: this method and the (second) method shown by bmargulies are safe to use with white space in the file/folder names.\nIn order to also have the - somewhat exotic - case of newlines in the file/folder names covered, you will have to resort to the -exec predicate of find like this:\nfind . -name '*.txt' -exec echo \"{}\" \\;\n\nThe {} is the placeholder for the found item and the \\; is used to terminate the -exec predicate.\nAnd for the sake of completeness let me add another variant - you gotta love the *nix ways for their versatility:\nfind . -name '*.txt' -print0|xargs -0 -n 1 echo\n\nThis would separate the printed items with a \\0 character that isn't allowed in any of the file systems in file or folder names, to my knowledge, and therefore should cover all bases. xargs picks them up one by one then ...\n", "Filenames can include spaces and even control characters. Spaces are (default) delimiters for shell expansion in bash and as a result of that x=$(find . -name \"*.txt\") from the question is not recommended at all. If find gets a filename with spaces e.g. \"the file.txt\" you will get 2 separated strings for processing, if you process x in a loop. You can improve this by changing delimiter (bash IFS Variable) e.g. to \\r\\n, but filenames can include control characters - so this is not a (completely) safe method.\nFrom my point of view, there are 2 recommended (and safe) patterns for processing files:\n1. Use for loop & filename expansion:\nfor file in ./*.txt; do\n [[ ! -e $file ]] && continue # continue, if file does not exist\n # single filename is in $file\n echo \"$file\"\n # your code here\ndone\n\n2. Use find-read-while & process substitution\nwhile IFS= read -r -d '' file; do\n # single filename is in $file\n echo \"$file\"\n # your code here\ndone < <(find . -name \"*.txt\" -print0)\n\nRemarks\non Pattern 1:\n\nbash returns the search pattern (\"*.txt\") if no matching file is found - so the extra line \"continue, if file does not exist\" is needed. see Bash Manual, Filename Expansion\nshell option nullglob can be used to avoid this extra line.\n\"If the failglob shell option is set, and no matches are found, an error message is printed and the command is not executed.\" (from Bash Manual above)\nshell option globstar: \"If set, the pattern ‘**’ used in a filename expansion context will match all files and zero or more directories and subdirectories. If the pattern is followed by a ‘/’, only directories and subdirectories match.\" see Bash Manual, Shopt Builtin\nother options for filename expansion: extglob, nocaseglob, dotglob & shell variable GLOBIGNORE\n\non Pattern 2:\n\nfilenames can contain blanks, tabs, spaces, newlines, ... to process filenames in a safe way, find with -print0 is used: filename is printed with all control characters & terminated with NUL. see also Gnu Findutils Manpage, Unsafe File Name Handling, safe File Name Handling, unusual characters in filenames. See David A. 
Wheeler below for detailed discussion of this topic.\n\nThere are some possible patterns to process find results in a while loop. Others (kevin, David W.) have shown how to do this using pipes:\n\nfiles_found=1\nfind . -name \"*.txt\" -print0 | \n while IFS= read -r -d '' file; do\n # single filename in $file\n echo \"$file\"\n files_found=0 # not working example\n # your code here\n done\n[[ $files_found -eq 0 ]] && echo \"files found\" || echo \"no files found\"\n\n\nWhen you try this piece of code, you will see, that it does not work: files_found is always \"true\" & the code will always echo \"no files found\". Reason is: each command of a pipeline is executed in a separate subshell, so the changed variable inside the loop (separate subshell) does not change the variable in the main shell script. This is why I recommend using process substitution as the \"better\", more useful, more general pattern.See I set variables in a loop that's in a pipeline. Why do they disappear... (from Greg's Bash FAQ) for a detailed discussion on this topic.\n\n\nAdditional References & Sources:\n\nGnu Bash Manual, Pattern Matching\n\nFilenames and Pathnames in Shell: How to do it Correctly, David A. Wheeler\n\nWhy you don't read lines with \"for\", Greg's Wiki\n\nWhy you shouldn't parse the output of ls(1), Greg's Wiki\n\nGnu Bash Manual, Process Substitution\n\n\n", "(Updated to include @Socowi's execellent speed improvement)\nWith any $SHELL that supports it (dash/zsh/bash...):\nfind . -name \"*.txt\" -exec $SHELL -c '\n for i in \"$@\" ; do\n echo \"$i\"\n done\n' {} +\n\nDone.\n\nOriginal answer (shorter, but slower):\nfind . -name \"*.txt\" -exec $SHELL -c '\n echo \"$0\"\n' {} \\;\n\n", "If you can assume the file names don't contain newlines, you can read the output of find into a Bash array using the following command:\nreadarray -t x < <(find . -name '*.txt')\n\nNote:\n\n-t causes readarray to strip newlines.\nIt won't work if readarray is in a pipe, hence the process substitution.\nreadarray is available since Bash 4.\n\nBash 4.4 and up also supports the -d parameter for specifying the delimiter. Using the null character, instead of newline, to delimit the file names works also in the rare case that the file names contain newlines:\nreadarray -d '' x < <(find . -name '*.txt' -print0)\n\nreadarray can also be invoked as mapfile with the same options.\nReference: https://mywiki.wooledge.org/BashFAQ/005#Loading_lines_from_a_file_or_stream\n", "# Doesn't handle whitespace\nfor x in `find . -name \"*.txt\" -print`; do\n process_one $x\ndone\n\nor\n\n# Handles whitespace and newlines\nfind . -name \"*.txt\" -print0 | xargs -0 -n 1 process_one\n\n", "I like to use find which is first assigned to variable and IFS switched to new line as follow:\nFilesFound=$(find . -name \"*.txt\")\n\nIFSbkp=\"$IFS\"\nIFS=$'\\n'\ncounter=1;\nfor file in $FilesFound; do\n echo \"${counter}: ${file}\"\n let counter++;\ndone\nIFS=\"$IFSbkp\"\n\nAs commented by @Konrad Rudolph this will not work with \"new lines\" in file name. I still think it is handy as it covers most of the cases when you need to loop over command output.\n", "You can put the filenames returned by find into an array like this:\narray=()\nwhile IFS= read -r -d ''; do\n array+=(\"$REPLY\")\ndone < <(find . 
-name '*.txt' -print0)\n\nNow you can just loop through the array to access individual items and do whatever you want with them.\nNote: It's white space safe.\n", "based on other answers and comment of @phk, using fd #3:\n(which still allows to use stdin inside the loop)\nwhile IFS= read -r f <&3; do\n echo \"$f\"\n\ndone 3< <(find . -iname \"*filename*\")\n\n", "As already posted on the top answer by Kevin, the best solution is to use a for loop with bash glob, but as bash glob is not recursive by default, this can be fixed by a bash recursive function:\n#!/bin/bash\nset -x\nset -eu -o pipefail\n\nall_files=();\n\nfunction get_all_the_files()\n{\n directory=\"$1\";\n for item in \"$directory\"/* \"$directory\"/.[^.]*;\n do\n if [[ -d \"$item\" ]];\n then\n get_all_the_files \"$item\";\n else\n all_files+=(\"$item\");\n fi;\n done;\n}\n\nget_all_the_files \"/tmp\";\n\nfor file_path in \"${all_files[@]}\"\ndo\n printf 'My file is \"%s\"\\n' \"$file_path\";\ndone;\n\nRelated questions:\n\nBash loop through directory including hidden file\nRecursively list files from a given directory in Bash\nls command: how can I get a recursive full-path listing, one line per file?\nList files recursively in Linux CLI with path relative to the current directory\nRecursively List all directories and files\nbash script, create array of all files in a directory\nHow can I creates array that contains the names of all the files in a folder?\nHow can I creates array that contains the names of all the files in a folder?\nHow to get the list of files in a directory in a shell script?\n\n", "You can store your find output in array if you wish to use the output later as:\narray=($(find . -name \"*.txt\"))\n\nNow to print the each element in new line, you can either use for loop iterating to all the elements of array, or you can use printf statement.\nfor i in ${array[@]};do echo $i; done\n\nor\nprintf '%s\\n' \"${array[@]}\"\n\n\nYou can also use:\nfor file in \"`find . -name \"*.txt\"`\"; do echo \"$file\"; done\n\nThis will print each filename in newline\nTo only print the find output in list form, you can use either of the following:\nfind . -name \"*.txt\" -print 2>/dev/null\n\nor\nfind . -name \"*.txt\" -print | grep -v 'Permission denied'\n\nThis will remove error messages and only give the filename as output in new line.\nIf you wish to do something with the filenames, storing it in array is good, else there is no need to consume that space and you can directly print the output from find.\n", "function loop_through(){\n length_=\"$(find . -name '*.txt' | wc -l)\"\n length_=\"${length_#\"${length_%%[![:space:]]*}\"}\"\n length_=\"${length_%\"${length_##*[![:space:]]}\"}\" \n for i in {1..$length_}\n do\n x=$(find . -name '*.txt' | sort | head -$i | tail -1)\n echo $x\n done\n\n}\n\nTo grab the length of the list of files for loop, I used the first command \"wc -l\". \nThat command is set to a variable. \nThen, I need to remove the trailing white spaces from the variable so the for loop can read it. \n", "I think using this piece of code (piping the command after while done):\nwhile read fname; do\n echo \"$fname\"\ndone <<< \"$(find . -name \"*.txt\")\"\n\nis better than this answer because while loop is executed in a subshell according to here, if you use this answer and variable changes cannot be seen after while loop if you want to modify variables inside the loop.\n" ]
[ 627, 178, 138, 31, 12, 7, 6, 4, 3, 3, 3, 2, 0, 0 ]
[ "find <path> -xdev -type f -name *.txt -exec ls -l {} \\;\nThis will list the files and give details about attributes.\n", "Another alternative is to not use bash, but call Python to do the heavy lifting. I recurred to this because bash solutions as my other answer were too slow.\nWith this solution, we build a bash array of files from inline Python script:\n#!/bin/bash\nset -eu -o pipefail\n\ndsep=\":\" # directory_separator\nbase_directory=/tmp\n\nall_files=()\nall_files_string=\"$(python3 -c '#!/usr/bin/env python3\nimport os\nimport sys\n\ndsep=\"'\"$dsep\"'\"\nbase_directory=\"'\"$base_directory\"'\"\n\ndef log(*args, **kwargs):\n print(*args, file=sys.stderr, **kwargs)\n\ndef check_invalid_characther(file_path):\n for thing in (\"\\\\\", \"\\n\"):\n if thing in file_path:\n raise RuntimeError(f\"It is not allowed {thing} on \\\"{file_path}\\\"!\")\n\ndef absolute_path_to_relative(base_directory, file_path):\n relative_path = os.path.commonprefix( [ base_directory, file_path ] )\n relative_path = os.path.normpath( file_path.replace( relative_path, \"\" ) )\n\n # if you use Windows Python, it accepts / instead of \\\\\n # if you have \\ on your files names, rename them or comment this\n relative_path = relative_path.replace(\"\\\\\", \"/\")\n if relative_path.startswith( \"/\" ):\n relative_path = relative_path[1:]\n return relative_path\n\nfor directory, directories, files in os.walk(base_directory):\n for file in files:\n local_file_path = os.path.join(directory, file)\n local_file_name = absolute_path_to_relative(base_directory, local_file_path)\n\n log(f\"local_file_name {local_file_name}.\")\n check_invalid_characther(local_file_name)\n print(f\"{base_directory}{dsep}{local_file_name}\")\n' | dos2unix)\";\n\nif [[ -n \"$all_files_string\" ]];\nthen\n readarray -t temp <<< \"$all_files_string\";\n all_files+=(\"${temp[@]}\");\nfi;\n\nfor item in \"${all_files[@]}\";\ndo\n OLD_IFS=\"$IFS\"; IFS=\"$dsep\";\n read -r base_directory local_file_name <<< \"$item\"; IFS=\"$OLD_IFS\";\n\n printf 'item \"%s\", base_directory \"%s\", local_file_name \"%s\".\\n' \\\n \"$item\" \\\n \"$base_directory\" \\\n \"$local_file_name\";\ndone;\n\nRelated:\n\nos.walk without hidden folders\nHow to do a recursive sub-folder search and return files in a list?\nHow to split a string into an array in Bash?\n\n", "How about if you use grep instead of find?\nls | grep .txt$ > out.txt\n\nNow you can read this file and the filenames are in the form of a list.\n" ]
[ -1, -4, -5 ]
[ "bash", "find" ]
stackoverflow_0009612090_bash_find.txt
Q: How to use for loop or any other loop to loop a list while taking specified number of items I have a list of vehicle makes, but the list is very large, and I want to store it in a Room database on Android. The app may crash while performing this operation. I want to loop through the list, take chunks of items, and store them to the database. For example, on each loop I take 20 items, until the list is exhausted. How do I achieve this in Kotlin, or is there any other approach that works efficiently? A: You don't say if you care in what order you want to remove the list items. If you don't care, then the following code shows how you can achieve it. Notice that the original list must be mutable for the remove() operation to work. val CHUNK_SIZE = 20 val vehicles = (0..177).map { "Car ${it}" }.toMutableList() val carListIterator = vehicles.iterator() val removalChunk = mutableListOf<String>() while(carListIterator.hasNext()) { removalChunk.add(carListIterator.next()) carListIterator.remove() // this removes the element that was just returned if(removalChunk.size >= CHUNK_SIZE || !carListIterator.hasNext()) { //store chunk elsewhere println("Storing: ${removalChunk.joinToString()}") removalChunk.clear() } } A: To loop over a list in Kotlin and take a specified number of items at a time, you can use the chunked() function on the list. This function returns a list of lists, where each sublist contains a specified number of elements from the original list. Here is an example of how you can use the chunked() function to take a specified number of items from a list at a time: val numbers = listOf(1, 2, 3, 4, 5, 6, 7, 8, 9, 10) // Use the chunked() function to take 3 items at a time from the list val chunks = numbers.chunked(3) // Loop through the chunks and print each one for (chunk in chunks) { println(chunk) } In this example, the chunked() function is used to take 3 items at a time from the numbers list. This results in a chunks list with four sublists: the first three contain 3 items each, and the last contains the remaining item. The code then loops through the chunks list and prints each sublist. The output of this code would be: [1, 2, 3] [4, 5, 6] [7, 8, 9] [10]
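As a rough sketch of how the chunked() approach could feed batched Room inserts for the original vehicle-makes use case: the entity and DAO names below (VehicleMake, VehicleMakeDao, insertAll) are hypothetical placeholders, not part of the question, so adjust them to your actual schema.

import androidx.room.Dao
import androidx.room.Insert

@Dao
interface VehicleMakeDao {
    @Insert
    suspend fun insertAll(makes: List<VehicleMake>)  // hypothetical entity type
}

suspend fun storeVehicleMakes(makes: List<VehicleMake>, dao: VehicleMakeDao) {
    // Insert in bounded batches of 20 so no single transaction grows too large.
    makes.chunked(20).forEach { chunk ->
        dao.insertAll(chunk)
    }
}

Calling this from a coroutine off the main thread keeps each insert small, which addresses the memory pressure described in the question.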
How to use for loop or any other loop to loop a list while taking specified number of items
I have a list of vehicle makes, while the list is so large and I want to store it in room database in android. The app may crash while performing this operation. I want to loop through the list and take some chunks of items and to store to database. For example for each loop I take 20 items until the list is empty. How do I achieve this in Kotlin or any other suggestion that can work efficiently.
[ "You don't say if you care in what order you want to remove the list items. If you don't care, then the following code shows how you can achieve it. Notice that the original list must be mutable for the remove() operation to work\nval CHUNK_SIZE = 20\nval vehicles = (0..177).map { \"Car ${it}\" }.toMutableList()\n\nval carListIterator = vehicles.iterator()\nval removalChunk = mutableListOf<String>()\nwhile(carListIterator.hasNext()) {\n removalChunk.add(carListIterator.next())\n carListIterator.remove() // this removes the element that was just returned\n if(removalChunk.size >= CHUNK_SIZE || !carListIterator.hasNext()) {\n //store chunk elsewhere\n println(\"Storing: ${removalChunk.joinToString()}\")\n removalChunk.clear()\n }\n}\n\n", "To loop over a list in Kotlin and take a specified number of items at a time, you can use the chunked() function on the list. This function returns a list of lists, where each sublist contains a specified number of elements from the original list.\nHere is an example of how you can use the chunked() function to take a specified number of items from a list at a time:\nval numbers = listOf(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)\n\n// Use the chunked() function to take 3 items at a time from the list\nval chunks = numbers.chunked(3)\n\n// Loop through the chunks and print each one\nfor (chunk in chunks) {\n println(chunk)\n}\n\n\nIn this example, the chunked() function is used to take 3 items at a time from the numbers list. This results in a chunks list with four sublists, each containing 3 items from the original list. The code then loops through the chunks list and prints each sublist.\nThe output of this code would be:\n[1, 2, 3]\n[4, 5, 6]\n[7, 8, 9]\n[10]\n\n\n" ]
[ 0, 0 ]
[]
[]
[ "android", "for_loop", "kotlin" ]
stackoverflow_0073819076_android_for_loop_kotlin.txt
Q: Web Scraping using Puppeteer returns undefined during atcoder contest I made a web scraper for parsing the test cases of AtCoder contests. It works well if the contest is already finished but gives an error for an ongoing contest. The error arises when accessing the rows of the table HTML element. I am positive that the table exists, but for some reason the script returns undefined for an ongoing contest. Error: Error: Evaluation failed: TypeError: Cannot read properties of undefined (reading 'rows') at pptr://__puppeteer_evaluation_script__:3:32 at ExecutionContext._ExecutionContext_evaluate (/mnt/d/c++/node_modules/puppeteer-core/lib/cjs/puppeteer/common/ExecutionContext.js:229:15) at process.processTicksAndRejections (node:internal/process/task_queues:95:5) at async ExecutionContext.evaluate (/mnt/d/c++/node_modules/puppeteer-core/lib/cjs/puppeteer/common/ExecutionContext.js:107:16) at async scrapeSite (/mnt/d/c++/codeforces/atcoder.js:57:33) Here is my scraper, atcoder.js: const puppeteer = require("puppeteer"); const fs = require("fs"); const contest_id = process.argv[2]; async function scrapeProblem(problem_letter) { const url = `https://atcoder.jp/contests/${contest_id}/tasks/${contest_id}_${problem_letter.toLowerCase()}`; console.log(url); try { const browser = await puppeteer.launch(); const page = await browser.newPage(); await page.goto(url, { waitUntil: "networkidle0" }); const samples_scraped = await page.evaluate(() => { const samples = document.querySelectorAll("pre"); const scraped = Array.from(samples).filter((child) => { return child.id !== ""; }); let num_scraped = scraped.length; // The elements were repeated twice, so remove the extra elements for (let i = 0; i < num_scraped / 2; i++) scraped.pop(); return scraped.map((ele) => ele.innerText); // return Array.from(samples).map((child) => child.innerText); }); let id = 1; // Now we need to store the samples in text format samples_scraped.map((ele, idx) => { if (idx % 2 == 0) { // Input fs.writeFile(`${problem_letter}-${id}.in`, ele, (err) => { if (err) throw err; }); } else { // Output fs.writeFile(`${problem_letter}-${id}.out`, ele, (err) => { if (err) throw err; }); id++; } return ele; }); await browser.close(); } catch (e) { console.log(e); } } async function scrapeSite() { const url = `https://atcoder.jp/contests/${contest_id}/tasks`; console.log(url); try { const browser = await puppeteer.launch(); const page = await browser.newPage(); await page.goto(url, { waitUntil: "networkidle0" }); // Returns all the problem letters const problem_letters = await page.evaluate(() => { const table = document.querySelectorAll("table")[0]; const rows = table.rows.length; const letters = []; for (let i = 1; i < rows; i++) { letters.push(table.rows[i].cells[0].innerText); } return letters; }); console.log(problem_letters); for (problem_letter of problem_letters) { scrapeProblem(problem_letter); } await browser.close(); } catch (e) { console.log(e); } } scrapeSite(); The scrapeProblem(problem_letter) is a helper function to scrape the test cases for the given problem letter. It then stores the test cases to the user's file system using the fs module. The scrapeSite() function first parses the homepage for the number of problems and the problem letter associated with each problem. It then calls the scrapeProblem(problem_letter) helper function to parse the required website for test cases. To run the script: node scrapper.js abc280 Update: I tried it in a new contest and again got the same error.
This time I took a screenshot using Puppeteer and found out the problem. I am getting permission denied if I try to access the site without logging in during an ongoing contest. A: The problem was that the site requires us to log in, and only then can we see the problem statements of an ongoing contest. So I added a function which first logs in to the site and then proceeds to parse the test cases. Updated code: const puppeteer = require("puppeteer"); const fs = require("fs"); require('dotenv').config(); const contest_id = process.argv[2]; async function login(browser, page) { const url = `https://atcoder.jp/login?continue=https%3A%2F%2Fatcoder.jp%2F`; console.log("Logging in..", url); try { await page.goto(url, { waitUntil: "networkidle0" }); await page.type('#username', process.env.USERNAME); await page.type("#password", process.env.PASSWORD); await page.click("#submit"); } catch (e) { console.log("Login failed..."); console.log(e); } } async function scrapeProblem(browser, Problem) { const url = Problem.Url; console.log(url); try { // const browser = await puppeteer.launch(); const page = await browser.newPage(); // await login(browser, page); await page.goto(url, { waitUntil: "networkidle0" }); const samples_scraped = await page.evaluate(() => { const samples = document.querySelectorAll("pre"); const scraped = Array.from(samples).filter((child) => { return child.id !== ""; }); let num_scraped = scraped.length; // The elements were repeated twice, so remove the extra elements for (let i = 0; i < num_scraped / 2; i++) scraped.pop(); return scraped.map((ele) => ele.innerText); // return Array.from(samples).map((child) => child.innerText); }); let id = 1; // Now we need to store the samples in text format samples_scraped.map((ele, idx) => { if (idx % 2 == 0) { // Input fs.writeFile(`${Problem.Problem_letter}-${id}.in`, ele, (err) => { if (err) throw err; }); } else { // Output fs.writeFile(`${Problem.Problem_letter}-${id}.out`, ele, (err) => { if (err) throw err; }); id++; } return ele; }); // await browser.close(); } catch (e) { console.log(e); } } async function scrapeSite() { const url = `https://atcoder.jp/contests/${contest_id}/tasks`; console.log(url); try { const browser = await puppeteer.launch(); const page = await browser.newPage(); await login(browser, page); await page.goto(url, { waitUntil: "networkidle0" }); // await page.screenshot({ path: "./screenshot.png", fullPage: true}); // Returns all the problem letters const problems = await page.evaluate(() => { const table = document.querySelectorAll("table")[0]; const rows = table.rows.length; const letters = []; for (let i = 1; i < rows; i++) { letters.push({Problem_letter: table.rows[i].cells[0].innerText, Url: table.rows[i].cells[0].firstChild.href }); } return letters; }); console.log(problems); const promises = [] for (problem of problems) { promises.push(scrapeProblem(browser, problem)); } await Promise.all(promises); // All the promises must be resolved before closing the browser await browser.close(); } catch (e) { console.log(e); } } scrapeSite();
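One detail worth adding to the fix, as a hedged sketch: the login function above clicks #submit without waiting for the resulting navigation, which can race with the subsequent page.goto. A common Puppeteer pattern is to start waiting for the navigation before clicking; the selector ids (#username, #password, #submit) follow the answer above and are assumptions about AtCoder's login form.

async function loginAndWait(page) {
  await page.goto('https://atcoder.jp/login', { waitUntil: 'networkidle0' });
  await page.type('#username', process.env.USERNAME);
  await page.type('#password', process.env.PASSWORD);
  // Begin waiting for the navigation *before* triggering it to avoid a race.
  await Promise.all([
    page.waitForNavigation({ waitUntil: 'networkidle0' }),
    page.click('#submit'),
  ]);
}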
Web Scraping using Puppeteer returns undefined during atcoder contest
I made a web scrapper for parsing test cases of Atcoder contest. It works well if the contest is already finished but gives an error for an ongoing contest. The error arises when accessing the rows of the table HTML element. I am positive that the table exists but for some reason, the script returns undefined for an ongoing contest. Error: Error: Evaluation failed: TypeError: Cannot read properties of undefined (reading 'rows') at pptr://__puppeteer_evaluation_script__:3:32 at ExecutionContext._ExecutionContext_evaluate (/mnt/d/c++/node_modules/puppeteer-core/lib/cjs/puppeteer/common/ExecutionContext.js:229:15) at process.processTicksAndRejections (node:internal/process/task_queues:95:5) at async ExecutionContext.evaluate (/mnt/d/c++/node_modules/puppeteer-core/lib/cjs/puppeteer/common/ExecutionContext.js:107:16) at async scrapeSite (/mnt/d/c++/codeforces/atcoder.js:57:33) Here is my Scrapper: atcoder.js: const puppeteer = require("puppeteer"); const fs = require("fs"); const contest_id = process.argv[2]; async function scrapeProblem(problem_letter) { const url = `https://atcoder.jp/contests/${contest_id}/tasks/${contest_id}_${problem_letter.toLowerCase()}`; console.log(url); try { const browser = await puppeteer.launch(); const page = await browser.newPage(); await page.goto(url, { waitUntil: "networkidle0" }); const samples_scraped = await page.evaluate(() => { const samples = document.querySelectorAll("pre"); const scraped = Array.from(samples).filter((child) => { return child.id !== ""; }); let num_scraped = scraped.length; // The elements were repeated twice, so remove the extra elements for (let i = 0; i < num_scraped / 2; i++) scraped.pop(); return scraped.map((ele) => ele.innerText); // return Array.from(samples).map((child) => child.innerText); }); let id = 1; // Now we need to store the samples in text format samples_scraped.map((ele, idx) => { if (idx % 2 == 0) { // Input fs.writeFile(`${problem_letter}-${id}.in`, ele, (err) => { if (err) throw err; }); } else { // Output fs.writeFile(`${problem_letter}-${id}.out`, ele, (err) => { if (err) throw err; }); id++; } return ele; }); await browser.close(); } catch (e) { console.log(e); } } async function scrapeSite() { const url = `https://atcoder.jp/contests/${contest_id}/tasks`; console.log(url); try { const browser = await puppeteer.launch(); const page = await browser.newPage(); await page.goto(url, { waitUntil: "networkidle0" }); // Returns all the problem letters const problem_letters = await page.evaluate(() => { const table = document.querySelectorAll("table")[0]; const rows = table.rows.length; const letters = []; for (let i = 1; i < rows; i++) { letters.push(table.rows[i].cells[0].innerText); } return letters; }); console.log(problem_letters); for (problem_letter of problem_letters) { scrapeProblem(problem_letter); } await browser.close(); } catch (e) { console.log(e); } } scrapeSite(); The scrapeProblem(problem_letter) is a helper function to scrape the test cases for the given problem letter. It then stores the test cases to the user's file system using fs module. The scrapeSite() function first parses the homepage for the number of problems and the problem letter associated with each problem. It then calls the scrapeProblem(problem_letter) helper function to parse the required web site for test cases. To run the script: node scrapper.js abc280 Update: I tried it in a new contest and again got the same error. This time I took a screenshot using Puppeteer and found out the problem. 
I am getting permission denied if I try to accesss the site without logging in for an ongoing contest.
[ "The problem was the site requires us to login and only then we can see the problem statements of an ongoing contest. So I added a function which will first login to the site and then it will proceed to parse the test cases.\nUpdated code:\nconst puppeteer = require(\"puppeteer\");\nconst fs = require(\"fs\");\nrequire('dotenv').config();\n\nconst contest_id = process.argv[2];\n\nasync function login(browser, page) {\n const url = `https://atcoder.jp/login?continue=https%3A%2F%2Fatcoder.jp%2F`;\n console.log(\"Logging in..\", url);\n try {\n await page.goto(url, { waitUntil: \"networkidle0\" });\n\n await page.type('#username', process.env.USERNAME);\n await page.type(\"#password\", process.env.PASSWORD);\n await page.click(\"#submit\");\n\n } catch (e) {\n console.log(\"Login failed...\");\n console.log(e);\n }\n}\n\nasync function scrapeProblem(browser, Problem) {\n const url = Problem.Url;\n console.log(url);\n try {\n // const browser = await puppeteer.launch();\n const page = await browser.newPage();\n // await login(browser, page);\n await page.goto(url, { waitUntil: \"networkidle0\" });\n\n const samples_scraped = await page.evaluate(() => {\n const samples = document.querySelectorAll(\"pre\");\n const scraped = Array.from(samples).filter((child) => {\n return child.id !== \"\";\n });\n let num_scraped = scraped.length;\n // The elements were repeated twice, so remove the extra elements\n for (let i = 0; i < num_scraped / 2; i++) scraped.pop();\n return scraped.map((ele) => ele.innerText);\n // return Array.from(samples).map((child) => child.innerText);\n });\n\n let id = 1;\n // Now we need to store the samples in text format\n samples_scraped.map((ele, idx) => {\n if (idx % 2 == 0) {\n // Input\n fs.writeFile(`${Problem.Problem_letter}-${id}.in`, ele, (err) => {\n if (err) throw err;\n });\n } else {\n // Output\n fs.writeFile(`${Problem.Problem_letter}-${id}.out`, ele, (err) => {\n if (err) throw err;\n });\n id++;\n }\n return ele;\n });\n\n // await browser.close();\n\n } catch (e) {\n console.log(e);\n }\n}\n\nasync function scrapeSite() {\n const url = `https://atcoder.jp/contests/${contest_id}/tasks`;\n console.log(url);\n try {\n const browser = await puppeteer.launch();\n const page = await browser.newPage();\n await login(browser, page);\n await page.goto(url, { waitUntil: \"networkidle0\" });\n\n // await page.screenshot({ path: \"./screenshot.png\", fullPage: true});\n\n // Returns all the problem letters\n const problems = await page.evaluate(() => {\n const table = document.querySelectorAll(\"table\")[0];\n const rows = table.rows.length;\n const letters = [];\n\n for (let i = 1; i < rows; i++) {\n letters.push({Problem_letter: table.rows[i].cells[0].innerText, Url: table.rows[i].cells[0].firstChild.href });\n }\n\n return letters;\n });\n\n console.log(problems);\n const promises = []\n\n for (problem of problems) {\n promises.push(scrapeProblem(browser, problem));\n }\n\n await Promise.all(promises); // All the promises must be resolved before closing the browser\n\n await browser.close();\n\n } catch (e) {\n console.log(e);\n }\n}\n\nscrapeSite();\n\n" ]
[ 0 ]
[]
[]
[ "javascript", "puppeteer", "web_scraping" ]
stackoverflow_0074668350_javascript_puppeteer_web_scraping.txt
Q: I can't populate a select in jQuery; my code is in Razor C#. I tried this. The content of the table is shown in a screenshot (not reproduced here). @section scripts{ <script> function adddata() { $.ajax({ type: "POST", url: '?handler=GetItems', headers: { "RequestVerificationToken": $('input[name="__RequestVerificationToken"]').val() }, data: { Id: $("#IdSelectIdPuesto").val() }, success: function (data) { for (var i = 0; i < data.length; i++) { $("#IdSelectlocalidad").append("<option value='" + data[i].value + "' selected>" + data[i].text + "</option>"); } }, error: function (result) { alert("fail"); } }) } </script> } public JsonResult OnPostGetItems(int Id) { //displaydata1 = rTDBContext.Turnos.ToList(); var displayda = (from c in displaydata1 select c.LocNombre).Distinct().ToList(); return new JsonResult(new List<SelectListItem> { new SelectListItem { Value = "1", Text = "LOcalidad" + 1 }, new SelectListItem { Value = "2", Text = "LOcalidad" + 2 } }); } I'm trying a select distinct and want to add the id as the value and the LocNombre as the text of the specified select. A: If you want to change the options of $("#IdSelectlocalidad") with the id passed from AJAX, here is a demo: public JsonResult OnPostGetItems(int Id) { //displaydata1 = rTDBContext.Turnos.ToList(); var displayda = (from c in l where c.Id==Id select new SelectListItem { Text=c.LocNombre, Value=c.Id+""}).Distinct().ToList(); return new JsonResult(displayda); } JS (remove selected from the option; with this code it will select the first option by default): @section scripts{ <script> function adddata() { $.ajax({ type: "POST", url: '?handler=GetItems', headers: { "RequestVerificationToken": $('input[name="__RequestVerificationToken"]').val() }, data: { Id: $("#IdSelectIdPuesto").val() }, success: function (data) { for (var i = 0; i < data.length; i++) { $("#IdSelectlocalidad").append("<option value='" + data[i].value + "' >" + data[i].text + "</option>"); } }, error: function (result) { alert("fail"); } }) } </script> } A: Thanks Yiyi You, I found the correct answer; I did it this way (screenshot: https://i.stack.imgur.com/KWsIi.png). Do as follows: displaydata1 = rTDBContext.Turnos.ToList(); var display = displaydata1.DistinctBy(x => x.LocNombre ).ToList(); var displayda = (from c in display select new SelectListItem { Text = c.LocNombre, Value = c.IdLocalidad + "" }).Distinct().ToList(); Now it works as desired.
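A small, hedged refinement to the success handler above: if adddata() can run more than once (for example, whenever #IdSelectIdPuesto changes), clearing the select first prevents duplicate options from accumulating. The element ids are the ones from the question.

success: function (data) {
    var $select = $("#IdSelectlocalidad");
    $select.empty(); // drop stale options before appending fresh ones
    $.each(data, function (_, item) {
        $select.append($("<option>").val(item.value).text(item.text));
    });
}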
i cant populate select in jquery,my code is in razor c#, i try this
the content of the table is this enter image description here @section scripts{ <script> function adddata() { $.ajax({ type: "POST", url: '?handler=GetItems', headers: { "RequestVerificationToken": $('input[name="__RequestVerificationToken"]').val() }, data: { Id: $("#IdSelectIdPuesto").val() }, success: function (data) { for (var i = 0; i < data.length; i++) { $("#IdSelectlocalidad").append("<option value='" + data[i].value + "' selected>" + data[i].text + "</option>"); } }, error: function (result) { alert("fail"); } }) } </script> } public JsonResult OnPostGetItems(int Id) { //displaydata1 = rTDBContext.Turnos.ToList(); var displayda = (from c in displaydata1 select c.LocNombre).Distinct().ToList(); return new JsonResult(new List<SelectListItem> = { new SelectListItem { Value = "1", Text = "LOcalidad" + 1 }, new SelectListItem { Value = "2", Text = "LOcalidad" + 2 } }); } i'm trying a select distinct and add to the specified select the id and the locNombre to the text
[ "If you want to change the options of $(\"#IdSelectlocalidad\") with the id passed from ajax,here is a demo:\npublic JsonResult OnPostGetItems(int Id)\n {\n //displaydata1 = rTDBContext.Turnos.ToList();\n var displayda = (from c in l\n where c.Id==Id select new SelectListItem { Text=c.LocNombre, Value=c.Id+\"\"}).Distinct().ToList();\n \n return new JsonResult(displayda);\n\n }\n\njs(remove selected in the option,with the code it will select the first option by default):\n@section scripts{\n <script>\n function adddata() {\n\n $.ajax({\n type: \"POST\",\n url: '?handler=GetItems',\n headers: { \"RequestVerificationToken\": $('input[name=\"__RequestVerificationToken\"]').val() },\n data: { Id: $(\"#IdSelectIdPuesto\").val() },\n success: function (data) {\n for (var i = 0; i < data.length; i++) {\n $(\"#IdSelectlocalidad\").append(\"<option value='\" + data[i].value +\n \"' >\" + data[i].text + \"</option>\");\n \n\n }\n },\n error: function (result) {\n alert(\"fail\");\n }\n })\n }\n </script>\n }\n\n", "[thanks Yiyi You, ifound the correct answer i doit this way]\n[enter image description here][1]\n[1]: https://i.stack.imgur.com/KWsIi.png do as folow\ndisplaydata1 = rTDBContext.Turnos.ToList();\n var display = displaydata1.DistinctBy(x => x.LocNombre ).ToList();\n var displayda = (from c in display\n select new SelectListItem { Text = c.LocNombre, Value = c.IdLocalidad + \"\" }).Distinct().ToList();\n\nnow is working as desired.\n" ]
[ 0, 0 ]
[]
[]
[ "ajax", "asp.net_core", "razor" ]
stackoverflow_0074618862_ajax_asp.net_core_razor.txt
Q: Jetpack compose : Problem while swiping pages in pager composable I'm building a registration page with Jetpack Compose, and I have implemented it in a horizontal pager linked with a TabRow to display the signup and login pages respectively. But when I swipe to the other page (login or signup), the next page is built only after the swipe offset exceeds approximately 50% of the screen width, and the previous page is rendered in the other page, as shown in the video below. @OptIn(ExperimentalPagerApi::class) @Composable fun RegistrationHorizontalPageView() { val tabRowState = viewModel<TabRowViewModel>() HorizontalPager( count = 2, state = tabRowState.pagerState, modifier = Modifier.fillMaxSize(), verticalAlignment = Alignment.Top ) { if(currentPage == 0){ RegistrationPage() }else{ LoginPage() } } } How do I fix this? A: I suggest using a list of tab items, each containing its own view content. Make sure not to branch with if/else on the page state, because it only changes once the swipe passes about 50%; instead, keep each page's view always available. For example, you can do something like this: sealed class TabItem(var title: String, var color: Color, content: @Composable () -> Unit) { object Registration : TabItem("REGISTRATION", Color.Red, { /* you can add here your composable*/} ) object Login : TabItem("LOGIN", Color.Blue, { /* you can add here your composable*/} ) } Usage: @OptIn(ExperimentalPagerApi::class) @Composable fun MyScreen() { val pagerState = rememberPagerState() val tabs = listOf(TabItem.Registration, TabItem.Login) Column( modifier = Modifier.fillMaxSize(), verticalArrangement = Arrangement.Top ) { TabRow( selectedTabIndex = pagerState.currentPage, indicator = { positions -> TabRowDefaults.Indicator( Modifier.pagerTabIndicatorOffset(pagerState, positions) ) } ) { tabs.forEachIndexed { index, tab -> val isSelected = pagerState.currentPage == index Tab( selected = isSelected, text = { Text(text = tab.title) }, onClick = {} ) } } HorizontalPager( count = tabs.size, state = pagerState ) { page -> Box(modifier = Modifier .fillMaxSize() .background(tabs[page].color) // for the example, showing that both pages are rendered ){ Text( modifier = Modifier.fillMaxWidth(), text = tabs[page].title) } } } } Result: (screenshot not reproduced here)
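One piece the answer leaves empty is the Tab onClick. As a sketch, assuming the same rememberPagerState() and tabs list as above (Accompanist pager), the click can drive the pager; animateScrollToPage is a suspend function, so it needs a coroutine scope (kotlinx.coroutines.launch).

val scope = rememberCoroutineScope()
// inside the TabRow's tabs.forEachIndexed { index, tab -> ... }:
Tab(
    selected = pagerState.currentPage == index,
    text = { Text(text = tab.title) },
    onClick = { scope.launch { pagerState.animateScrollToPage(index) } } // animate to the tapped tab
)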
Jetpack compose : Problem while swiping pages in pager compasable
I'm building a registration page with jetpack compose and I have implemented it in a horizontal pager linked with TabRow to display signup and login page respectively but when I swipe to the other page (login or signup), the next page will build only after the swipe offset exceeds apporximately 50% of the screen width, and the pervious page will be rendered in the other page as shown in the video below @OptIn(ExperimentalPagerApi::class) @Composable fun RegistrationHorizontalPageView() { val tabRowState = viewModel<TabRowViewModel>() HorizontalPager( count = 2, state = tabRowState.pagerState, modifier = Modifier.fillMaxSize(), verticalAlignment = Alignment.Top ) { if(currentPage == 0){ RegistrationPage() }else{ LoginPage() } } } how to fix this ?
[ "I suggest to use a list of tab Items that contain different elements of view.\nMake sure to not use if else statements based on the page state because it is changing only after 50%, rather try to create your view always available.\nFor example you can do something like this:\nsealed class TabItem(var title: String, var color: Color, content: @Composable () -> Unit) {\n object Registration : TabItem(\"REGISTRATION\", Color.Red, { /* you can add here your composable*/} )\n object Login : TabItem(\"LOGIN\", Color.Blue, { /* you can add here your composable*/} )\n}\n\nUsage:\n\n@OptIn(ExperimentalPagerApi::class)\n@Composable\nfun MyScreen() {\n val pagerState = rememberPagerState()\n val tabs = listOf(TabItem.Registration, TabItem.Login)\n\n Column(\n modifier = Modifier.fillMaxSize(),\n verticalArrangement = Arrangement.Top\n ) {\n TabRow(\n selectedTabIndex = pagerState.currentPage,\n indicator = { positions ->\n TabRowDefaults.Indicator(\n Modifier.pagerTabIndicatorOffset(pagerState, positions)\n )\n }\n ) {\n tabs.forEachIndexed { index, tab ->\n val isSelected = pagerState.currentPage == index\n Tab(\n selected = isSelected,\n text = {\n Text(text = tab.title)\n },\n onClick = {}\n )\n }\n }\n HorizontalPager(\n count = tabs.size, \n state = pagerState\n ) { page ->\n Box(modifier = Modifier\n .fillMaxSize()\n .background(tabs[page].color) // for the example, showing that both pages are rendered\n ){\n Text( modifier = Modifier.fillMaxWidth(), text = tabs[page].title)\n }\n }\n }\n}\n\nResult:\n\n" ]
[ 0 ]
[]
[]
[ "android", "android_jetpack_compose", "android_viewpager", "pager" ]
stackoverflow_0074640739_android_android_jetpack_compose_android_viewpager_pager.txt
Q: styles to TouchableNativeFeedback I am using a translate transform on a View inside a TouchableNativeFeedback. But after applying the translate style, the upper part of the view is not clickable. Can I apply the translate style to the TouchableNativeFeedback as well? function App() { return ( <View> <TouchableNativeFeedback onPress={this._onButtonPress}> <View style={styles.app}></View> </TouchableNativeFeedback> </View> ); } const styles = StyleSheet.create({ app: { height: 50, width: 50, transform: [{ translateY: 25 }] } }); I tried giving the style to it, but I don't think it accepts that. React Native version: "0.61.2" A: You need to think about some margins or padding. From the official documentation: Transforms are style properties that will help you modify the appearance and position of your components using 2D or 3D transformations. However, once you apply transforms, the layouts remain the same around the transformed component hence it might overlap with the nearby components. You can apply margin to the transformed component, the nearby components or padding to the container to prevent such overlaps. https://reactnative.dev/docs/transforms
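Following the documentation quoted above, a minimal sketch of the margin-based alternative: shifting the view with marginTop instead of a transform moves the touch target together with the pixels, so the whole view stays clickable. The 25-unit shift is taken from the question's style.

const styles = StyleSheet.create({
  app: {
    height: 50,
    width: 50,
    marginTop: 25, // replaces transform: [{ translateY: 25 }]
  },
});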
styles to TouchableNativeFeedback
I am using translate on View inside the TouchableNativeFeedback. But after using translate css the upper part of view is not clickable. Can I give translate css to TouchableNativeFeedback also. function App() { return ( <View> <TouchableNativeFeedback onPress={this._onButtonPress}> <View style={styles.app}></View> </TouchableNativeFeedback> </View> ); } const styles = StyleSheet.create({ app: { height: 50, width: 50, transform: [{ translateY: 25 }] } }); I tried giving style to that. But I don't think it accepts that. React-native version: "0.61.2"
[ "You need to think about some margins or padding.\nFrom the official documentation:\nTransforms are style properties that will help you modify the appearance and position of your components using 2D or 3D transformations. However, once you apply transforms, the layouts remain the same around the transformed component hence it might overlap with the nearby components. You can apply margin to the transformed component, the nearby components or padding to the container to prevent such overlaps.\nhttps://reactnative.dev/docs/transforms\n" ]
[ 0 ]
[]
[]
[ "css", "react_native" ]
stackoverflow_0074675931_css_react_native.txt
Q: Create possible combinations in sql alchemy I have these two tables (table 1 and table 2, shown in screenshots that are not reproduced here). I want to do the following steps: for all rows with type g in table 1, obtain from table 2 the mappings 1 -> [a,b], 2 -> [c,d], 3 -> [e,f], create the possible combinations, and store them in table 1. I'm finding it hard to start on this. Is there something in SQL I can explore? A: Join table1 to 3 copies of table2: INSERT INTO table1 (cat1, cat2, cat3, type) SELECT t21.val, t22.val, t23.val, 'm' FROM table1 t1 INNER JOIN table2 t21 ON t21.cat = t1.cat1 INNER JOIN table2 t22 ON t22.cat = t1.cat2 INNER JOIN table2 t23 ON t23.cat = t1.cat3 WHERE t1.type = 'g'; See the demo.
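Since the question is tagged sqlalchemy, here is a hedged sketch of the same triple self-join expressed with SQLAlchemy Core. The screenshots with the real schemas aren't reproduced here, so the table and column names (table1 with cat1/cat2/cat3/type, table2 with cat/val) follow the SQL answer above and are assumptions; the connection URL is a placeholder.

from sqlalchemy import MetaData, Table, create_engine, insert, literal, select

engine = create_engine("mysql+pymysql://user:password@localhost/mydb")  # placeholder URL
meta = MetaData()
table1 = Table("table1", meta, autoload_with=engine)
table2 = Table("table2", meta, autoload_with=engine)

# Three aliases of table2, one per category column, mirroring the SQL above.
t21, t22, t23 = table2.alias(), table2.alias(), table2.alias()

stmt = insert(table1).from_select(
    ["cat1", "cat2", "cat3", "type"],
    select(t21.c.val, t22.c.val, t23.c.val, literal("m"))
    .select_from(table1)
    .join(t21, t21.c.cat == table1.c.cat1)
    .join(t22, t22.c.cat == table1.c.cat2)
    .join(t23, t23.c.cat == table1.c.cat3)
    .where(table1.c.type == "g"),
)

with engine.begin() as conn:
    conn.execute(stmt)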
Create possible combinations in sql alchemy
table 1 table 2 I have these tables. I wan' to do the following steps. For all rows with type g in table 1-> obtain from table 2 - 1 [a,b] , 2[c,d] , 3 [e,f] create possible combinations and store in table 1 -> I'm finding it hard to start on this. Is there something in sql I can explore?
[ "Join table1 to 3 copies of table2:\nINSERT INTO table1 (cat1, cat2, cat3, type)\nSELECT t21.val, t22.val, t23.val, 'm'\nFROM table1 t1 \nINNER JOIN table2 t21 ON t21.cat = t1.cat1\nINNER JOIN table2 t22 ON t22.cat = t1.cat2\nINNER JOIN table2 t23 ON t23.cat = t1.cat3\nWHERE t1.type = 'g';\n\nSee the demo.\n" ]
[ 0 ]
[]
[]
[ "mysql", "sqlalchemy" ]
stackoverflow_0074677156_mysql_sqlalchemy.txt
Q: Creating new polars dataframe column based on other column I have a hard time with what should be a very common use case on polars dataframes. I simply want to create a new column on an existing dataframe based on some other column. Here is the code I tried, but it doesn't work: import polars as pl df.with_columns([ pl.col('old_col').apply(lambda x: func(x)).alias("new_col"), ]) A: Here is a revised version of your code that should create a new column on an existing polars dataframe based on some other column: import polars as pl # Define the function to apply to the old column def func(x): # Perform some operation on x and return the result return x + 1 # Create the new column by applying the function to the old column new_col = pl.col('old_col').apply(lambda x: func(x)).alias("new_col") # Create a new dataframe with the new column df = df.with_columns([new_col]) In this code, the func function is applied to the values in the old_col column using the apply method. The resulting values are then used to create a new column called new_col using the alias method. Finally, the with_columns method is used to create a new dataframe that includes the new_col column. UPDATE: Q: "so I just need to do df = df.with_columns(...), the result has to be assigned to a variable. The method creates a new dataframe." A: Yes, that is correct. In the code you provided, the with_columns method is used to create a new dataframe that includes a new column based on some other column. Since the with_columns method returns a new dataframe, you need to assign the result to a new variable in order to access and use the new dataframe. Here is an example of how this could be done: # Import the polars library import polars as pl # Define the function to apply to the old column def func(x): # Perform some operation on x and return the result return x + 1 # Create the new column by applying the function to the old column new_col = pl.col('old_col').apply(lambda x: func(x)).alias("new_col") # Create a new dataframe with the new column and assign it to a new variable new_df = df.with_columns([new_col]) # Use the new dataframe print(new_df) In this code, the with_columns method is used to create a new dataframe that includes the new_col column. The result of this operation is then assigned to the new_df variable, which can be used to access and manipulate the new dataframe. A: Polars uses pure functions, meaning that they don't have any side effects and don't mutably change self. This property ensures that you can safely pass a DataFrame to another function without being afraid that that function changes the state of the DataFrame underneath you. If you want to change the assigned variable, simply reassign the new state: df = df.with_column(...)
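To make the advice above concrete, a minimal runnable sketch: the result of with_columns must be reassigned, and when the function is simple arithmetic, a native expression is usually preferable to apply(), since it avoids calling a Python lambda per row.

import polars as pl

df = pl.DataFrame({"old_col": [1, 2, 3]})

# Reassign: with_columns returns a new DataFrame, it does not mutate df.
df = df.with_columns([
    (pl.col("old_col") + 1).alias("new_col"),  # native expression, no Python lambda
])
print(df)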
Creating new polars dataframe column based on other column
I have a hard time with what should be a very common use case on polars dataframes. I simply want to create a new column on an existing dataframe based on some other column. Here is the code I tried, but it doesn't work: import polars as pl df.with_columns([ (pl.col('old_col').apply(lambda x: func(x)).alias("new_col"), ])
[ "Here is a revised version of your code that should create a new column on an existing polars dataframe based on some other column:\nimport polars as pl\n\n# Define the function to apply to the old column\ndef func(x):\n # Perform some operation on x and return the result\n return x + 1\n\n# Create the new column by applying the function to the old column\nnew_col = pl.col('old_col').apply(lambda x: func(x)).alias(\"new_col\")\n\n# Create a new dataframe with the new column\ndf = df.with_columns([new_col])\n\nIn this code, the func function is applied to the values in the old_col column using the apply method. The resulting values are then used to create a new column called new_col using the alias method. Finally, the with_columns method is used to create a new dataframe that includes the new_col column.\nUPDATE:\nQ: \"so I just need to do df = df.with_columns(...), the result has to be assigned to a variable. The method creates a new dataframe.\"\nA: Yes, that is correct. In the code you provided, the with_columns method is used to create a new dataframe that includes a new column based on some other column. Since the with_columns method returns a new dataframe, you need to assign the result to a new variable in order to access and use the new dataframe.\nHere is an example of how this could be done:\n# Import the polars library\nimport polars as pl\n\n# Define the function to apply to the old column\ndef func(x):\n # Perform some operation on x and return the result\n return x + 1\n\n# Create the new column by applying the function to the old column\nnew_col = pl.col('old_col').apply(lambda x: func(x)).alias(\"new_col\")\n\n# Create a new dataframe with the new column and assign it to a new variable\nnew_df = df.with_columns([new_col])\n\n# Use the new dataframe\nprint(new_df)\n\nIn this code, the with_columns method is used to create a new dataframe that includes the new_col column. The result of this operation is then assigned to the new_df variable, which can be used to access and manipulate the new dataframe.\n", "Polars uses pure functions meaning that they don't have any side effects nor mutably change self.\nThis property will ensure that you can safely pass a DataFrame to another function without being afraid that that function changes the state of the DataFrame underneath of you.\nIf you want to change the assigned variable, simply reassign the new state:\ndf = df.with_column(...)\n\n" ]
[ 1, 1 ]
[]
[]
[ "python_polars" ]
stackoverflow_0074677123_python_polars.txt
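A note on the entry above: the apply-based fix works, but for simple arithmetic polars can stay in its native expression engine instead of calling a Python function per value. A minimal sketch on made-up data, where old_col and the +1 stand in for the asker's column and func:
import polars as pl

# Hypothetical stand-in for the asker's dataframe
df = pl.DataFrame({"old_col": [1, 2, 3]})

# Pure expression form: vectorized, no per-row Python call
df = df.with_columns(
    (pl.col("old_col") + 1).alias("new_col")
)
print(df)  # old_col: 1, 2, 3; new_col: 2, 3, 4

As the second answer stresses, with_columns returns a new dataframe, so reassigning to df is what makes the result visible.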
Q: Can't open CSV file even with full file path I'm using Python 3.5 and I'm having some problems opening a CSV file. I've tried entering the entire path but it still doesn't work, but the file is clearly in the folder. (My code is called 'simplecsvtest.py') Here's the code snippet: import csv import sys file = open(r"C:\python35\files\results.csv", 'rt') try: reader = csv.reader(file, delimiter='\t') ... some code here ... finally: file.close() And here's what PowerShell says: PS C:\python35\files> python simplecsvtest.py Traceback (most recent call last): File "simplecsvtest.py", line 20, in <module> file = open(r"C:\python35\files\results.csv", 'rt') FileNotFoundError: [Errno 2] No such file or directory: 'C:\\python35\\files\\results.csv' Well, I'm very certain that 'results.csv' is in that folder: here's the filepath in Windows Explorer: C:\Python35\files (Note: The folder has a capital 'P' for Python35, and I've tried having both capitalized and uncapitalized 'P' in the code; neither works) The CSV file is a "Microsoft Excel Comma Separated Values File", if that matters, but the extension is still csv. Can anyone tell me what's wrong? A: I would suggest creating a folder inside your project folder and then using a relative path: file = open(r".\files\results.csv", 'rt') The . implies that the path is relative to your current directory A: I figured out a work-around solution myself: Somehow, if I copy all the data from the csv and paste it in a new excel spreadsheet and save it as a csv, it works. I don't know why. A: Although the file path is absolute, it wouldn't work this way. You have to use "forward" slashes in the path name instead of "backward" slashes, as in file = open(r"C:/python35/files/results.csv", 'rt')
Can't open CSV file even with full file path
I'm using Python 3.5 and I'm having some problems opening a CSV file. I've tried entering the entire path but it still doesn't work, but the file is clearly in the folder. (My code is called 'simplecsvtest.py') Here's the code snippet: import csv import sys file = open(r"C:\python35\files\results.csv", 'rt') try: reader = csv.reader(file, delimiter='\t') ... some code here ... finally: file.close() And here's what PowerShell says: PS C:\python35\files> python simplecsvtest.py Traceback (most recent call last): File "simplecsvtest.py", line 20, in <module> file = open(r"C:\python35\files\results.csv", 'rt') FileNotFoundError: [Errno 2] No such file or directory: 'C:\\python35\\files\\results.csv' Well, I'm very certain that 'results.csv' is in that folder: here's the filepath in Windows Explorer: C:\Python35\files (Note: The folder has a capital 'P' for Python35, and I've tried having both capitalized and uncapitalized 'P' in the code; neither works) The CSV file is a "Microsoft Excel Comma Separated Values File", if that matters, but the extension is still csv. Can anyone tell me what's wrong?
[ "I would suggest creating a folder inside your project folder and then use a relative path : \nfile = open(r\".\\files\\results.csv\", 'rt') \n. implies that the path is relative to your current directory\n", "I figured out a work-around solution myself:\nSomehow, if I copy all the data from the csv and paste it in a new excel spreadsheet and save it as a csv, it works. I don't know why.\n", "Although the file path is absolute it wouldn't work this way.\nYou have to use \"forward\" slashes in the path name instead of \"backward\" slashes. As\nfile = open(r\"C:/python35/files/results.csv\", 'rt')\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "csv", "python" ]
stackoverflow_0044631654_csv_python.txt
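For errors like the one above, a small diagnostic sketch (generic, not specific to the asker's machine) shows what Python actually sees before open() is called. A hidden double extension such as results.csv.csv — Explorer hides known extensions by default — would explain both the error and why re-saving from Excel fixed it:
import os
from pathlib import Path

path = Path(r"C:\python35\files\results.csv")

print("cwd:", os.getcwd())       # where relative paths resolve from
print("exists:", path.exists())  # does Python see the file at all?

# Print the real names in the folder; repr() exposes trailing spaces
# or a hidden ".csv.csv" double extension
for p in path.parent.iterdir():
    print(repr(p.name))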
Q: How to create a functioning multipage streamlit app? I am creating a web app using Streamlit. I have created a multipage app where the sidebar has a drop-down menu to go to a particular page. I have created a page that allows the user to input a sequence and count the number of characters (for example a DNA sequence and count the number of nucleotide bases). The architecture of my web app is as follows: Web_App | |__app.py (main file) |__multipage.py |__home.py | |__apps (another folder) |__nucleotide_eda.py The problem that I am facing right now is that when I run the Streamlit app keeping the sequence page (nucleotide.py) as the home page (meaning just running the sequence page), the inbuilt calculations in nucleotide.py run fine (the user submits the query sequence and automatically the A/T/G/C counts are given along with a bar plot). But when I add nucleotide.py as a multipage and run my main file (app.py), the inbuilt functions are not working. I only see the text box area where the user can give the input sequence. I have given it multiple tries, but the problem still persists. My code looks like this: app.py (main file): import streamlit as st from multiapp import MultiApp from apps import home,nucleotide_eda app = MultiApp() # Add all your applications here app.add_app("Home", home.app) app.add_app("Nucleotide EDA", nucleotide_eda.app) app.run() Nucleotide.py (sequence file): import streamlit as st def app(): st.title('Exploratory Data Analysis of Genomic Sequence') st.write('Nucleotide EDA allows you to perform Exploratory Data Analysis on any submitted genomic sequence') st.header("Enter Your DNA Sequence in FASTA Format") sequence_input = ">DNA\nATGCGCTAGGATACA" sequence = st.text_area("Sequence Input", sequence_input, height=250) sequence = sequence.splitlines() sequence = sequence[1:] sequence = ''.join(sequence) st.write('''***''') st.header("Input Query Sequence") sequence st.write('''***''') st.header("Nucleotide Count") def nucleotide_count(seq): d = dict([('A', seq.count("A")),('T',seq.count("T")),('G',seq.count("G")),('C',seq.count("C"))]) return d X = nucleotide_count(sequence) X_label = list(X) X_values = list(X.values()) X st.write('''***''') All I see is the text box prompting the user to give input. But once a sequence is submitted, nothing happens. I expect that once a user submits the sequence, the web app should first print the input query using: st.header("Input Query Sequence") sequence and then run the "nucleotide_count" function and give out the corresponding dictionary. Any help would be much appreciated. A: Use st.write(). st.header("Input Query Sequence") st.write(sequence) # <----------------------------------- this st.write('''***''') st.header("Nucleotide Count") def nucleotide_count(seq): d = dict([('A', seq.count("A")),('T',seq.count("T")),('G',seq.count("G")),('C',seq.count("C"))]) return d X = nucleotide_count(sequence) X_label = list(X) X_values = list(X.values()) st.write(X) # <----------------------------------- this st.write('''***''') Sample output
How to create a functioning multipage streamlit app?
I am creating a web app using Streamlit. I have created a multipage app where the sidebar has a drop-down menu to go to a particular page. I have created a page that allows the user to input a sequence and count the number of characters (for example a DNA sequence and count the number of nucleotide bases). The architecture of my web app is as follows: Web_App | |__app.py (main file) |__multipage.py |__home.py | |__apps (another folder) |__nucleotide_eda.py The problem that I am facing right now is that when I run the Streamlit app keeping the sequence page (nucleotide.py) as the home page (meaning just running the sequence page), the inbuilt calculations in nucleotide.py run fine (the user submits the query sequence and automatically the A/T/G/C counts are given along with a bar plot). But when I add nucleotide.py as a multipage and run my main file (app.py), the inbuilt functions are not working. I only see the text box area where the user can give the input sequence. I have given it multiple tries, but the problem still persists. My code looks like this: app.py (main file): import streamlit as st from multiapp import MultiApp from apps import home,nucleotide_eda app = MultiApp() # Add all your applications here app.add_app("Home", home.app) app.add_app("Nucleotide EDA", nucleotide_eda.app) app.run() Nucleotide.py (sequence file): import streamlit as st def app(): st.title('Exploratory Data Analysis of Genomic Sequence') st.write('Nucleotide EDA allows you to perform Exploratory Data Analysis on any submitted genomic sequence') st.header("Enter Your DNA Sequence in FASTA Format") sequence_input = ">DNA\nATGCGCTAGGATACA" sequence = st.text_area("Sequence Input", sequence_input, height=250) sequence = sequence.splitlines() sequence = sequence[1:] sequence = ''.join(sequence) st.write('''***''') st.header("Input Query Sequence") sequence st.write('''***''') st.header("Nucleotide Count") def nucleotide_count(seq): d = dict([('A', seq.count("A")),('T',seq.count("T")),('G',seq.count("G")),('C',seq.count("C"))]) return d X = nucleotide_count(sequence) X_label = list(X) X_values = list(X.values()) X st.write('''***''') All I see is the text box prompting the user to give input. But once a sequence is submitted, nothing happens. I expect that once a user submits the sequence, the web app should first print the input query using: st.header("Input Query Sequence") sequence and then run the "nucleotide_count" function and give out the corresponding dictionary. Any help would be much appreciated.
[ "Use st.write().\n st.header(\"Input Query Sequence\")\n st.write(sequence) # <----------------------------------- this\n \n st.write('''***''')\n \n st.header(\"Nucleotide Count\") \n def nucleotide_count(seq):\n d = dict([('A', seq.count(\"A\")),('T',seq.count(\"T\")),('G',seq.count(\"G\")),('C',seq.count(\"C\"))])\n return d\n \n X = nucleotide_count(sequence)\n X_label = list(X)\n X_values = list(X.values())\n st.write(X) # <----------------------------------- this\n st.write('''***''')\n\nSample output\n\n" ]
[ 0 ]
[]
[]
[ "python", "streamlit", "web" ]
stackoverflow_0074673627_python_streamlit_web.txt
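One likely cause in the entry above: Streamlit's "magic" display of bare expressions (the lone sequence and X lines) only works in the main script, not in modules imported by it, such as apps/nucleotide_eda.py — which is consistent with the page working standalone but going silent inside the multipage app. A minimal sketch of the page body with explicit st.write calls:
import streamlit as st

def app():
    st.header("Enter Your DNA Sequence in FASTA Format")
    raw = st.text_area("Sequence Input", ">DNA\nATGCGCTAGGATACA", height=250)
    sequence = "".join(raw.splitlines()[1:])  # drop the FASTA header line

    st.header("Input Query Sequence")
    st.write(sequence)  # explicit call instead of a bare expression

    st.header("Nucleotide Count")
    counts = {base: sequence.count(base) for base in "ATGC"}
    st.write(counts)  # a bare `counts` would not render from an imported module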
Q: Replace matched regex group by occurrence I'm working on a drag and drop function for an SVG path, which lets a user move the co-ordinates of the path. Please consider the string below: M162.323 150.513L232.645 8L303.504 149.837L461.168 173.5L347.156 284.5L373.605 440.728L233.5 367.854L91.7415 442L118.424 284.883L5.151 173.549Z Would it be possible to replace a specific (let's say the 4th) occurrence of a matched regex group using the .replace method? Regex: [A-Z](-?\d*\.?\d*\s-?\d*\.?\d*) A: regex.exec() is a method that is used to find the next match in a string based on a regular expression. It returns an array containing the matched string and any capturing groups, or null if no match is found. This method can be used in a loop to iterate over all matches in a string and adjust the match accordingly. let string = "M162.323 150.513L232.645 8L303.504 149.837L461.168 173.5L347.156 284.5L373.605 440.728L233.5 367.854L91.7415 442L118.424 284.883L5.151 173.549Z"; let regex = /[A-Z](-?\d*\.?\d*\s-?\d*\.?\d*)/g; // Replace the 4th match let newString = ""; let index = 0; let match; while (match = regex.exec(string)) { if (index === 3) { // Do something to modify the 4th match newString += match[0].replace(/-?\d*\.?\d*\s-?\d*\.?\d*/, "REPLACED"); } else { // Leave other matches unchanged newString += match[0]; } index++; } console.log(newString); A: const s = 'M162.323 150.513L232.645 8L303.504 149.837L461.168 173.5L347.156 284.5L373.605 440.728L233.5 367.854L91.7415 442L118.424 284.883L5.151 173.549Z' let n = 4, regex = /[A-Z](-?\d*\.?\d*\s-?\d*\.?\d*)/gm console.log(s.replace(regex, m => --n ? m : 'hello'))
Replace matched regex group by occurrence
I'm working on a drag and drop function for an SVG path, which lets a user move the co-ordinates of the path. Please consider the string below: M162.323 150.513L232.645 8L303.504 149.837L461.168 173.5L347.156 284.5L373.605 440.728L233.5 367.854L91.7415 442L118.424 284.883L5.151 173.549Z Would it be possible to replace a specific (let's say the 4th) occurrence of a matched regex group using the .replace method? Regex: [A-Z](-?\d*\.?\d*\s-?\d*\.?\d*)
[ "regex.exec() is a method that is used to find the next match in a string based on a regular expression. It returns an array containing the matched string and any capturing groups, or null if no match is found. This method can be used in a loop to iterate over all matches in a string and adjust the match accordingly.\n\n\nlet string = \"M162.323 150.513L232.645 8L303.504 149.837L461.168 173.5L347.156 284.5L373.605 440.728L233.5 367.854L91.7415 442L118.424 284.883L5.151 173.549Z\";\nlet regex = /[A-Z](-?\\d*\\.?\\d*\\s-?\\d*\\.?\\d*)/g;\n\n// Replace the 4th match\nlet newString = \"\";\nlet index = 0;\nlet match;\n\nwhile (match = regex.exec(string)) {\n if (index === 3) {\n // Do something to modify the 4th match\n newString += match[0].replace(/-?\\d*\\.?\\d*\\s-?\\d*\\.?\\d*/, \"REPLACED\");\n } else {\n // Leave other matches unchanged\n newString += match[0];\n }\n index++;\n}\n\nconsole.log(newString);\n\n\n\n", "\n\nconst s = 'M162.323 150.513L232.645 8L303.504 149.837L461.168 173.5L347.156 284.5L373.605 440.728L233.5 367.854L91.7415 442L118.424 284.883L5.151 173.549Z'\n\nlet n = 4, regex = /[A-Z](-?\\d*\\.?\\d*\\s-?\\d*\\.?\\d*)/gm\nconsole.log(s.replace(regex, m => --n ? m : 'hello'))\n\n\n\n" ]
[ 2, 2 ]
[]
[]
[ "javascript", "regex", "replace" ]
stackoverflow_0074677310_javascript_regex_replace.txt
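The counter trick in the second answer above is not specific to JavaScript. For comparison, the same idea in Python with re.sub and a replacement function; the pattern and sample string are copied from the question:
import re

s = ("M162.323 150.513L232.645 8L303.504 149.837L461.168 173.5"
     "L347.156 284.5L373.605 440.728L233.5 367.854L91.7415 442"
     "L118.424 284.883L5.151 173.549Z")

pattern = re.compile(r"[A-Z](-?\d*\.?\d*\s-?\d*\.?\d*)")

def replace_nth(text, regex, n, replacement):
    count = 0
    def repl(match):
        nonlocal count
        count += 1
        # keep every match except the n-th one
        return replacement if count == n else match.group(0)
    return regex.sub(repl, text)

print(replace_nth(s, pattern, 4, "hello"))  # replaces the 4th match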
Q: How to Give BorderRadius to SliverList I am using SliverAppBar and SliverListView in my project. I need a BorderRadius on my SliverList, which comes at the bottom of my SliverAppBar. Here is a screenshot of what I need: And here is my code: Scaffold( body: CustomScrollView( slivers: <Widget>[ SliverAppBar( backgroundColor: Colors.transparent, brightness: Brightness.dark, actions: <Widget>[ IconButton(icon: Icon(Icons.favorite), onPressed: () {}), IconButton(icon: Icon(Icons.share), onPressed: () {}) ], floating: false, pinned: false, //title: Text("Flexible space title"), expandedHeight: getHeight(context) - MediaQuery.of(context).padding.top, flexibleSpace: Container( height: double.infinity, width: double.infinity, decoration: BoxDecoration( image: DecorationImage( fit: BoxFit.cover, image: AssetImage("assets/images/Rectangle-image.png") ) ), ), bottom: _bottomWidget(context), ), SliverList( delegate: SliverChildListDelegate(listview), ), ], ), ) So, with this code the UI comes out like this... Can you suggest any other approach that I can take to achieve this kind of design? A: I achieved this design using SliverToBoxAdapter; my code is like this. final sliver = CustomScrollView( slivers: <Widget>[ SliverAppBar(), SliverToBoxAdapter( child: Container( color: Color(0xff5c63f1), height: 20, child: Column( mainAxisAlignment: MainAxisAlignment.end, children: <Widget>[ Container( height: 20, decoration: BoxDecoration( color: Colors.white, borderRadius: BorderRadius.only( topLeft: const Radius.circular(20.0), topRight: const Radius.circular(20.0), ), ), ), ], ), ), ), SliverList(), ], ); I used 2 containers inside SliverToBoxAdapter. SliverToBoxAdapter is between the Sliver Appbar and the Sliver List. First I create a blue (should be Appbar color) container for the corner edge. Then I create the same height white container with border-radius inside the blue container for the list view. Preview on dartpad A: Solution At the time of writing, there is no widget that would support this functionality. The way to do it is with the Stack widget and with your own SliverWidget Before: Here is your default code: import 'package:flutter/material.dart'; void main() { runApp(MyApp()); } class MyApp extends StatelessWidget { // This widget is the root of your application.
@override Widget build(BuildContext context) { return MaterialApp( title: 'Flexible space title', home: MyHomePage(), ); } } class MyHomePage extends StatelessWidget { @override Widget build(BuildContext context) { return DefaultTabController( length: 2, child: Scaffold( body: CustomScrollView( slivers: <Widget>[ SliverAppBar( backgroundColor: Colors.transparent, brightness: Brightness.dark, actions: <Widget>[IconButton(icon: Icon(Icons.favorite), onPressed: () {}), IconButton(icon: Icon(Icons.share), onPressed: () {})], floating: false, pinned: false, expandedHeight: 250 - MediaQuery.of(context).padding.top, flexibleSpace: Container( height: 550, width: double.infinity, decoration: BoxDecoration( image: DecorationImage( fit: BoxFit.cover, image: NetworkImage( 'https://images.unsplash.com/photo-1561752888-21eb3b67eb4e?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=967&q=80'))), ), //bottom: _bottomWidget(context), ), SliverList( delegate: SliverChildListDelegate(_listview(50)), ), ], ), ), ); } } List _listview(int count) { List<Widget> listItems = List(); listItems.add(Container( color: Colors.black, height: 50, child: TabBar( tabs: [FlutterLogo(), FlutterLogo()], ), )); for (int i = 0; i < count; i++) { listItems.add(new Padding(padding: new EdgeInsets.all(20.0), child: new Text('Item ${i.toString()}', style: new TextStyle(fontSize: 25.0)))); } return listItems; } After And here is your code done with Stack and SliveWidget widgets: import 'package:flutter/material.dart'; import 'package:flutter/rendering.dart'; void main() { runApp(MyApp()); } class MyApp extends StatelessWidget { // This widget is the root of your application. @override Widget build(BuildContext context) { return MaterialApp( title: 'Flexible space title', home: MyHomePage(), ); } } class MyHomePage extends StatelessWidget { @override Widget build(BuildContext context) { return DefaultTabController( length: 2, child: Scaffold( body: Stack( children: [ Container( height: 550, width: double.infinity, decoration: BoxDecoration( image: DecorationImage( fit: BoxFit.cover, image: NetworkImage( 'https://images.unsplash.com/photo-1561752888-21eb3b67eb4e?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=967&q=80'))), ), CustomScrollView( anchor: 0.4, slivers: <Widget>[ SliverWidget( child: Container( width: double.infinity, height: 100, decoration: BoxDecoration( color: Colors.yellow, borderRadius: BorderRadius.only(topLeft: Radius.circular(30), topRight: Radius.circular(30))), child: TabBar( tabs: [FlutterLogo(), FlutterLogo()], ), ), ), SliverList( delegate: SliverChildListDelegate(_listview(50)), ), ], ), ], ), ), ); } } List _listview(int count) { List<Widget> listItems = List(); for (int i = 0; i < count; i++) { listItems.add( Container( //NOTE: workaround to prevent antialiasing according to: https://github.com/flutter/flutter/issues/25009 decoration: BoxDecoration( color: Colors.white, //the color of the main container border: Border.all( //apply border to only that side where the line is appearing i.e. top | bottom | right | left. 
width: 2.0, //depends on the width of the unintended line color: Colors.white)), child: Container( padding: EdgeInsets.all(20), color: Colors.white, child: new Text( 'Item ${i.toString()}', style: new TextStyle(fontSize: 25.0), ), ), ), ); } return listItems; } class SliverWidget extends SingleChildRenderObjectWidget { SliverWidget({Widget child, Key key}) : super(child: child, key: key); @override RenderObject createRenderObject(BuildContext context) { // TODO: implement createRenderObject return RenderSliverWidget(); } } class RenderSliverWidget extends RenderSliverToBoxAdapter { RenderSliverWidget({ RenderBox child, }) : super(child: child); @override void performResize() {} @override void performLayout() { if (child == null) { geometry = SliverGeometry.zero; return; } final SliverConstraints constraints = this.constraints; child.layout(constraints.asBoxConstraints(/* crossAxisExtent: double.infinity */), parentUsesSize: true); double childExtent; switch (constraints.axis) { case Axis.horizontal: childExtent = child.size.width; break; case Axis.vertical: childExtent = child.size.height; break; } assert(childExtent != null); final double paintedChildSize = calculatePaintOffset(constraints, from: 0.0, to: childExtent); final double cacheExtent = calculateCacheOffset(constraints, from: 0.0, to: childExtent); assert(paintedChildSize.isFinite); assert(paintedChildSize >= 0.0); geometry = SliverGeometry( scrollExtent: childExtent, paintExtent: 100, paintOrigin: constraints.scrollOffset, cacheExtent: cacheExtent, maxPaintExtent: childExtent, hitTestExtent: paintedChildSize, ); setChildParentData(child, constraints, geometry); } } A: Use Stack. It's the best and smooth way I found and used. Preview import 'dart:math'; import 'package:agro_prep/views/structure/constant.dart'; import 'package:flutter/material.dart'; import 'package:flutter_screenutil/flutter_screenutil.dart'; class CropDetailsPage extends StatelessWidget { const CropDetailsPage({Key? key}) : super(key: key); @override Widget build(BuildContext context) { return Scaffold( body: CustomScrollView( slivers: [ SliverAppBar( backgroundColor: Colors.white, actions: <Widget>[ IconButton(icon: Icon(Icons.share), onPressed: () {}) ], floating: false, pinned: false, //title: Text("Flexible space title"), expandedHeight: 281.h, flexibleSpace: Stack( children: [ const Positioned.fill( child: FadeInImage( image: NetworkImage(tempImage), placeholder: const NetworkImage(tempImage), // imageErrorBuilder: (context, error, stackTrace) { // return Image.asset('assets/images/background.jpg', // fit: BoxFit.cover); // }, fit: BoxFit.cover, ), ), Positioned( child: Container( height: 33.h, decoration: const BoxDecoration( color: Colors.white, borderRadius: BorderRadius.vertical( top: Radius.circular(40), ), ), ), bottom: -7, left: 0, right: 0, ) ], ), ), SliverList( delegate: SliverChildBuilderDelegate((context, index) { return ListTile( tileColor: whiteColor, title: Text(Random().nextInt(100).toString()), ); }, childCount: 15)) ], ), ); } } A: Worked for me! 
SliverAppBar( pinned: true, floating: false, centerTitle: true, title: TextWidget(detail.title, weight: FontWeight.bold ), expandedHeight: MediaQuery.of(context).size.height/2.5, flexibleSpace: FlexibleSpaceBar( centerTitle: true, collapseMode: CollapseMode.parallax, background: Stack( children: [ // Carousel images Swiper( itemWidth: MediaQuery.of(context).size.width, itemHeight: MediaQuery.of(context).size.height /3.5, itemCount: 2, pagination: SwiperPagination.dots, loop: detail.banners.length > 1, itemBuilder: (BuildContext context, int index) { return Image.network( 'https://image.com?image=123.png', fit: BoxFit.cover ); } ), //Border radius Align( alignment: Alignment.bottomCenter, child: Container( color: Colors.transparent, height: 20, child: Column( mainAxisAlignment: MainAxisAlignment.end, children: <Widget>[ Container( height: 10, decoration: BoxDecoration( color: Colors.white, borderRadius: BorderRadius.only( topLeft: const Radius.circular(10), topRight: const Radius.circular(10), ), ), ), ], ), ), ) ], ), ), ) A: The idea is good but it looks odd in some cases. You could give a borderRadius to your first element in your list Container( decoration: BoxDecoration( borderRadius: BorderRadius.only( topRight: Radius.circular(index == 0 ? 15 : 0), topLeft: Radius.circular(index == 0 ? 15 : 0), ), ), ) Hope this helps someone A: Try This, It's a Simple Solution import 'package:flutter/material.dart'; class SliveR extends StatelessWidget { const SliveR({Key? key}) : super(key: key); @override Widget build(BuildContext context) { return Scaffold( body: Stack( children: [ SizedBox( width: double.infinity, child: Image.network( 'https://images.unsplash.com/photo-1517248135467-4c7edcad34c4?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=2700&q=80', fit: BoxFit.cover, height: MediaQuery.of(context).size.height * 0.35, ), ), Align( alignment: Alignment.topCenter, child: Container( decoration: const BoxDecoration( borderRadius: BorderRadius.all(Radius.circular(30)), ), child: ClipRRect( borderRadius: const BorderRadius.all(Radius.circular(30)), child: CustomScrollView( anchor: 0.3, slivers: [ SliverToBoxAdapter( child: Container( height: 900, decoration: const BoxDecoration( color: Colors.white, borderRadius: BorderRadius.only( topLeft: Radius.circular(40.0), topRight: Radius.circular(40.0), ), boxShadow: [ BoxShadow( color: Colors.grey, offset: Offset(0.0, 1.0), //(x,y) blurRadius: 16.0, ), ], ), child: const Center( child: Text( 'Hello', style: TextStyle(color: Colors.grey), ), ), ), ) ], ), ), ), ), ], ), ); } } A: FlexibleSpaceBar( title: CustomText( text: "Renaissance Concourse Hotel", textSize: kSubtitle3FontSize, fontWeight: kBold), centerTitle: true, collapseMode: CollapseMode.pin, background: Stack( children: [ CachedNetworkImage( imageUrl: "url", width: DeviceUtils.getWidth(context), fit: BoxFit.cover, placeholder: (context, url) => const Center( child: CircularProgressIndicator(), ), errorWidget: (context, url, error) => const Icon(Icons.error_rounded), ), Positioned( bottom: 50, right: 0, left: 0, child: ContainerPlus( color: kWhiteColor, child: const SizedBox( height: 20, ), radius: RadiusPlus.only( topLeft: kBorderRadiusValue10, topRight: kBorderRadiusValue10, ), ), ) ], )) A: So the best way to achieve your result is to use "bottom" poperty inside SliverAppBar. 
This will add your rounded container to the bottom of the app bar / the start of the sliver list bottom: PreferredSize( preferredSize: const Size.fromHeight(24), child: Container( width: double.infinity, decoration: const BoxDecoration( borderRadius: BorderRadius.vertical( top: Radius.circular(12), ), color: Colors.white, ), child: Column( children: [ Padding( padding: const EdgeInsets.symmetric(vertical: 10), child: Container( width: 40, height: 4, decoration: BoxDecoration( color: Colors.black, borderRadius: BorderRadius.circular(2), ), ), ), ], ), ), ),
How to Give BorderRadius to SliverList
I am using SliverAppBar and SliverListView in my project. I need a BorderRadius on my SliverList, which comes at the bottom of my SliverAppBar. Here is a screenshot of what I need: And here is my code: Scaffold( body: CustomScrollView( slivers: <Widget>[ SliverAppBar( backgroundColor: Colors.transparent, brightness: Brightness.dark, actions: <Widget>[ IconButton(icon: Icon(Icons.favorite), onPressed: () {}), IconButton(icon: Icon(Icons.share), onPressed: () {}) ], floating: false, pinned: false, //title: Text("Flexible space title"), expandedHeight: getHeight(context) - MediaQuery.of(context).padding.top, flexibleSpace: Container( height: double.infinity, width: double.infinity, decoration: BoxDecoration( image: DecorationImage( fit: BoxFit.cover, image: AssetImage("assets/images/Rectangle-image.png") ) ), ), bottom: _bottomWidget(context), ), SliverList( delegate: SliverChildListDelegate(listview), ), ], ), ) So, with this code the UI comes out like this... Can you suggest any other approach that I can take to achieve this kind of design?
[ "I achieved this design using SliverToBoxAdapter my code like this. \n\nfinal sliver = CustomScrollView(\n slivers: <Widget>[\n SliverAppBar(),\n SliverToBoxAdapter(\n child: Container(\n color: Color(0xff5c63f1),\n height: 20,\n child: Column(\n mainAxisAlignment: MainAxisAlignment.end,\n children: <Widget>[\n Container(\n height: 20,\n decoration: BoxDecoration(\n color: Colors.white,\n borderRadius: BorderRadius.only(\n topLeft: const Radius.circular(20.0),\n topRight: const Radius.circular(20.0),\n ),\n ),\n ),\n ],\n ),\n ),\n ),\n SliverList(),\n ],\n);\n\nI used 2 containers inside SliverToBoxAdapter.\nSliverToBoxAdapter is between the Sliver Appbar and the Sliver List.\n\nfirst I create a blue (should be Appbar color) container for the corner edge.\nthen I create the same height white container with border-radius inside the blue container for list view.\n\nPreview on dartpad \n", "Solution\nAt the time of writing, there is no widget that would support this functionality. The way to do it is with Stack widget and with your own SliveWidget\nBefore:\n\nHere is your default code:\n\n import 'package:flutter/material.dart';\n\nvoid main() {\n runApp(MyApp());\n}\n\nclass MyApp extends StatelessWidget {\n // This widget is the root of your application.\n @override\n Widget build(BuildContext context) {\n return MaterialApp(\n title: 'Flexible space title',\n home: MyHomePage(),\n );\n }\n}\n\nclass MyHomePage extends StatelessWidget {\n @override\n Widget build(BuildContext context) {\n return DefaultTabController(\n length: 2,\n child: Scaffold(\n body: CustomScrollView(\n slivers: <Widget>[\n SliverAppBar(\n backgroundColor: Colors.transparent,\n brightness: Brightness.dark,\n actions: <Widget>[IconButton(icon: Icon(Icons.favorite), onPressed: () {}), IconButton(icon: Icon(Icons.share), onPressed: () {})],\n floating: false,\n pinned: false,\n expandedHeight: 250 - MediaQuery.of(context).padding.top,\n flexibleSpace: Container(\n height: 550,\n width: double.infinity,\n decoration: BoxDecoration(\n image: DecorationImage(\n fit: BoxFit.cover,\n image: NetworkImage(\n 'https://images.unsplash.com/photo-1561752888-21eb3b67eb4e?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=967&q=80'))),\n ),\n //bottom: _bottomWidget(context),\n ),\n SliverList(\n delegate: SliverChildListDelegate(_listview(50)),\n ),\n ],\n ),\n ),\n );\n }\n}\n\nList _listview(int count) {\n List<Widget> listItems = List();\n\n listItems.add(Container(\n color: Colors.black,\n height: 50,\n child: TabBar(\n tabs: [FlutterLogo(), FlutterLogo()],\n ),\n ));\n\n for (int i = 0; i < count; i++) {\n listItems.add(new Padding(padding: new EdgeInsets.all(20.0), child: new Text('Item ${i.toString()}', style: new TextStyle(fontSize: 25.0))));\n }\n\n return listItems;\n}\n\n\nAfter\n\nAnd here is your code done with Stack and SliveWidget widgets:\nimport 'package:flutter/material.dart';\nimport 'package:flutter/rendering.dart';\n\nvoid main() {\n runApp(MyApp());\n}\n\nclass MyApp extends StatelessWidget {\n // This widget is the root of your application.\n @override\n Widget build(BuildContext context) {\n return MaterialApp(\n title: 'Flexible space title',\n home: MyHomePage(),\n );\n }\n}\n\nclass MyHomePage extends StatelessWidget {\n @override\n Widget build(BuildContext context) {\n return DefaultTabController(\n length: 2,\n child: Scaffold(\n body: Stack(\n children: [\n Container(\n height: 550,\n width: double.infinity,\n decoration: BoxDecoration(\n image: DecorationImage(\n fit: BoxFit.cover,\n 
image: NetworkImage(\n 'https://images.unsplash.com/photo-1561752888-21eb3b67eb4e?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=967&q=80'))),\n ),\n CustomScrollView(\n anchor: 0.4,\n slivers: <Widget>[\n SliverWidget(\n child: Container(\n width: double.infinity,\n height: 100,\n decoration: BoxDecoration(\n color: Colors.yellow, borderRadius: BorderRadius.only(topLeft: Radius.circular(30), topRight: Radius.circular(30))),\n child: TabBar(\n tabs: [FlutterLogo(), FlutterLogo()],\n ),\n ),\n ),\n SliverList(\n delegate: SliverChildListDelegate(_listview(50)),\n ),\n ],\n ),\n ],\n ),\n ),\n );\n }\n}\n\nList _listview(int count) {\n List<Widget> listItems = List();\n\n for (int i = 0; i < count; i++) {\n listItems.add(\n Container( //NOTE: workaround to prevent antialiasing according to: https://github.com/flutter/flutter/issues/25009\n decoration: BoxDecoration(\n color: Colors.white, //the color of the main container\n border: Border.all(\n //apply border to only that side where the line is appearing i.e. top | bottom | right | left.\n width: 2.0, //depends on the width of the unintended line\n color: Colors.white)),\n child: Container(\n padding: EdgeInsets.all(20),\n color: Colors.white,\n child: new Text(\n 'Item ${i.toString()}',\n style: new TextStyle(fontSize: 25.0),\n ),\n ),\n ),\n );\n }\n\n return listItems;\n}\n\nclass SliverWidget extends SingleChildRenderObjectWidget {\n SliverWidget({Widget child, Key key}) : super(child: child, key: key);\n @override\n RenderObject createRenderObject(BuildContext context) {\n // TODO: implement createRenderObject\n return RenderSliverWidget();\n }\n}\n\nclass RenderSliverWidget extends RenderSliverToBoxAdapter {\n RenderSliverWidget({\n RenderBox child,\n }) : super(child: child);\n\n @override\n void performResize() {}\n\n @override\n void performLayout() {\n if (child == null) {\n geometry = SliverGeometry.zero;\n return;\n }\n final SliverConstraints constraints = this.constraints;\n child.layout(constraints.asBoxConstraints(/* crossAxisExtent: double.infinity */), parentUsesSize: true);\n double childExtent;\n switch (constraints.axis) {\n case Axis.horizontal:\n childExtent = child.size.width;\n break;\n case Axis.vertical:\n childExtent = child.size.height;\n break;\n }\n assert(childExtent != null);\n final double paintedChildSize = calculatePaintOffset(constraints, from: 0.0, to: childExtent);\n final double cacheExtent = calculateCacheOffset(constraints, from: 0.0, to: childExtent);\n\n assert(paintedChildSize.isFinite);\n assert(paintedChildSize >= 0.0);\n geometry = SliverGeometry(\n scrollExtent: childExtent,\n paintExtent: 100,\n paintOrigin: constraints.scrollOffset,\n cacheExtent: cacheExtent,\n maxPaintExtent: childExtent,\n hitTestExtent: paintedChildSize,\n );\n setChildParentData(child, constraints, geometry);\n }\n}\n\n\n", "Use Stack. It's the best and smooth way I found and used.\nPreview\nimport 'dart:math';\nimport 'package:agro_prep/views/structure/constant.dart';\nimport 'package:flutter/material.dart';\nimport 'package:flutter_screenutil/flutter_screenutil.dart';\n\nclass CropDetailsPage extends StatelessWidget {\n const CropDetailsPage({Key? 
key}) : super(key: key);\n\n @override\n Widget build(BuildContext context) {\n return Scaffold(\n body: CustomScrollView(\n slivers: [\n SliverAppBar(\n backgroundColor: Colors.white,\n\n actions: <Widget>[\n IconButton(icon: Icon(Icons.share), onPressed: () {})\n ],\n floating: false,\n pinned: false,\n //title: Text(\"Flexible space title\"),\n expandedHeight: 281.h,\n flexibleSpace: Stack(\n children: [\n const Positioned.fill(\n child: FadeInImage(\n image: NetworkImage(tempImage),\n placeholder: const NetworkImage(tempImage),\n // imageErrorBuilder: (context, error, stackTrace) {\n // return Image.asset('assets/images/background.jpg',\n // fit: BoxFit.cover);\n // },\n fit: BoxFit.cover,\n ),\n ),\n Positioned(\n child: Container(\n height: 33.h,\n decoration: const BoxDecoration(\n color: Colors.white,\n borderRadius: BorderRadius.vertical(\n top: Radius.circular(40),\n ),\n ),\n ),\n bottom: -7,\n left: 0,\n right: 0,\n )\n ],\n ),\n ),\n SliverList(\n delegate: SliverChildBuilderDelegate((context, index) {\n return ListTile(\n tileColor: whiteColor,\n title: Text(Random().nextInt(100).toString()),\n );\n }, childCount: 15))\n ],\n ),\n );\n }\n}\n\n", "Worked for me!\n SliverAppBar(\n pinned: true,\n floating: false,\n centerTitle: true,\n title: TextWidget(detail.title,\n weight: FontWeight.bold\n ),\n expandedHeight: MediaQuery.of(context).size.height/2.5,\n flexibleSpace: FlexibleSpaceBar(\n centerTitle: true,\n collapseMode: CollapseMode.parallax,\n background: Stack(\n children: [\n // Carousel images\n Swiper(\n itemWidth: MediaQuery.of(context).size.width,\n itemHeight: MediaQuery.of(context).size.height /3.5,\n itemCount: 2,\n pagination: SwiperPagination.dots,\n loop: detail.banners.length > 1,\n itemBuilder: (BuildContext context, int index) {\n return Image.network(\n 'https://image.com?image=123.png',\n fit: BoxFit.cover\n );\n }\n ),\n //Border radius \n Align(\n alignment: Alignment.bottomCenter,\n child: Container(\n color: Colors.transparent,\n height: 20,\n child: Column(\n mainAxisAlignment: MainAxisAlignment.end,\n children: <Widget>[\n Container(\n height: 10,\n decoration: BoxDecoration(\n color: Colors.white,\n borderRadius: BorderRadius.only(\n topLeft: const Radius.circular(10),\n topRight: const Radius.circular(10),\n ),\n ),\n ),\n ],\n ),\n ),\n )\n ],\n ),\n ),\n )\n\n", "The idea is good but it looks odd in some cases.\nYou could give a borderRadius to your first element in your list \nContainer(\n decoration: BoxDecoration(\n borderRadius: BorderRadius.only(\n topRight: Radius.circular(index == 0 ? 15 : 0),\n topLeft: Radius.circular(index == 0 ? 15 : 0),\n ),\n ),\n)\n\nHope this helps someone\n", "Try This, It's a Simple Solution\nimport 'package:flutter/material.dart';\n\nclass SliveR extends StatelessWidget {\n const SliveR({Key? 
key}) : super(key: key);\n\n @override\n Widget build(BuildContext context) {\n return Scaffold(\n body: Stack(\n children: [\n SizedBox(\n width: double.infinity,\n child: Image.network(\n 'https://images.unsplash.com/photo-1517248135467-4c7edcad34c4?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=2700&q=80',\n fit: BoxFit.cover,\n height: MediaQuery.of(context).size.height * 0.35,\n ),\n ),\n Align(\n alignment: Alignment.topCenter,\n child: Container(\n decoration: const BoxDecoration(\n borderRadius: BorderRadius.all(Radius.circular(30)),\n ),\n child: ClipRRect(\n borderRadius: const BorderRadius.all(Radius.circular(30)),\n child: CustomScrollView(\n anchor: 0.3,\n slivers: [\n SliverToBoxAdapter(\n child: Container(\n height: 900,\n decoration: const BoxDecoration(\n color: Colors.white,\n borderRadius: BorderRadius.only(\n topLeft: Radius.circular(40.0),\n topRight: Radius.circular(40.0),\n ),\n boxShadow: [\n BoxShadow(\n color: Colors.grey,\n offset: Offset(0.0, 1.0), //(x,y)\n blurRadius: 16.0,\n ),\n ],\n ),\n child: const Center(\n child: Text(\n 'Hello',\n style: TextStyle(color: Colors.grey),\n ),\n ),\n ),\n )\n ],\n ),\n ),\n ),\n ),\n ],\n ),\n );\n }\n}\n\n", "FlexibleSpaceBar(\n title: CustomText(\n text: \"Renaissance Concourse Hotel\",\n textSize: kSubtitle3FontSize,\n fontWeight: kBold),\n centerTitle: true,\n collapseMode: CollapseMode.pin,\n background: Stack(\n children: [\n CachedNetworkImage(\n imageUrl:\n \"url\",\n width: DeviceUtils.getWidth(context),\n fit: BoxFit.cover,\n placeholder: (context, url) => const Center(\n child: CircularProgressIndicator(),\n ),\n errorWidget: (context, url, error) =>\n const Icon(Icons.error_rounded),\n ),\n Positioned(\n bottom: 50,\n right: 0,\n left: 0,\n child: ContainerPlus(\n color: kWhiteColor,\n child: const SizedBox(\n height: 20,\n ),\n radius: RadiusPlus.only(\n topLeft: kBorderRadiusValue10,\n topRight: kBorderRadiusValue10,\n ),\n ),\n )\n ],\n ))\n\n\n", "So the best way to achieve your result is to use \"bottom\" poperty inside SliverAppBar. This will add your rounded container to bottom of appbar / start of sliverlist\nbottom: PreferredSize(\n preferredSize: const Size.fromHeight(24),\n child: Container(\n width: double.infinity,\n decoration: const BoxDecoration(\n borderRadius: BorderRadius.vertical(\n top: Radius.circular(12),\n ),\n color: Colors.white,\n ),\n child: Column(\n children: [\n Padding(\n padding: const EdgeInsets.symmetric(vertical: 10),\n child: Container(\n width: 40,\n height: 4,\n decoration: BoxDecoration(\n color: Colors.black,\n borderRadius: BorderRadius.circular(2),\n ),\n ),\n ),\n ],\n ),\n ),\n ),\n\n" ]
[ 10, 7, 2, 1, 0, 0, 0, 0 ]
[]
[]
[ "dart", "flutter", "flutter_layout", "flutter_sliver" ]
stackoverflow_0058570208_dart_flutter_flutter_layout_flutter_sliver.txt
Q: How to reserve ZIO response inside a custom method in I have this method import ClientServer.* import zio.http.{Client, *} import zio.json.* import zio.http.model.Method import zio.{ExitCode, URIO, ZIO} import sttp.capabilities.* import sttp.client3.Request import zio.* import zio.http.model.Headers.Header import zio.http.model.Version.Http_1_0 import zio.stream.* import java.net.InetAddress import sttp.model.sse.ServerSentEvent import sttp.client3._ object fillFileWithLeagues: def fill = for { openDotaResponse <- Client.request("https://api.opendota.com/api/leagues") bodyOfResponse <- openDotaResponse.body.asString listOfLeagues <- ZIO.fromEither(bodyOfResponse.fromJson[List[League]].left.map(error => new Exception(error))) save = FileStorage.saveToFile(listOfLeagues.toJson) //Ok }yield () println("Im here fillFileWithLeagues.fill ") and when I try to use fillFileWithLeagues.fill, nothing happens. I'm trying to fill the file with data from the target API using fillFileWithLeagues.fill def readFromFileV8(path: Path = Path("src", "main", "resources", "data.json")): ZIO[Any, Throwable, String] = val zioStr = (for bool <- Files.isReadable(path) yield bool).flatMap(bool => if (bool) Files.readAllLines(path, Charset.Standard.utf8).map(_.head) else { fillFileWithLeagues.fill wait(10000) println("im here readFromFileV8") readFromFileV8()}) zioStr I'm expecting the data.json file to be created from Client.request("https://api.opendota.com/api/leagues"), but nothing happens. Maybe I should use some sttp, or some other tools? A: If we fix indentation of the code we'll find this: object fillFileWithLeagues { def fill = { for { openDotaResponse <- Client.request("https://api.opendota.com/api/leagues") bodyOfResponse <- openDotaResponse.body.asString listOfLeagues <- ZIO.fromEither(bodyOfResponse.fromJson[List[League]].left.map(error => new Exception(error))) save = FileStorage.saveToFile(listOfLeagues.toJson) //Ok } yield () } println("Im here fillFileWithLeagues.fill ") } As you see, the println is part of fillFileWithLeagues, not of fill. Another potential problem is that an expression like fillFileWithLeagues.fill only returns a ZIO instance, it is not yet evaluated. To evaluate it, it needs to be run. For example as follows: import zio._ object MainApp extends ZIOAppDefault { def run = fillFileWithLeagues.fill }
How to reserve ZIO response inside a custom method in
I have this method import ClientServer.* import zio.http.{Client, *} import zio.json.* import zio.http.model.Method import zio.{ExitCode, URIO, ZIO} import sttp.capabilities.* import sttp.client3.Request import zio.* import zio.http.model.Headers.Header import zio.http.model.Version.Http_1_0 import zio.stream.* import java.net.InetAddress import sttp.model.sse.ServerSentEvent import sttp.client3._ object fillFileWithLeagues: def fill = for { openDotaResponse <- Client.request("https://api.opendota.com/api/leagues") bodyOfResponse <- openDotaResponse.body.asString listOfLeagues <- ZIO.fromEither(bodyOfResponse.fromJson[List[League]].left.map(error => new Exception(error))) save = FileStorage.saveToFile(listOfLeagues.toJson) //Ok }yield () println("Im here fillFileWithLeagues.fill ") and when I try to use fillFileWithLeagues.fill, nothing happens. I'm trying to fill the file with data from the target API using fillFileWithLeagues.fill def readFromFileV8(path: Path = Path("src", "main", "resources", "data.json")): ZIO[Any, Throwable, String] = val zioStr = (for bool <- Files.isReadable(path) yield bool).flatMap(bool => if (bool) Files.readAllLines(path, Charset.Standard.utf8).map(_.head) else { fillFileWithLeagues.fill wait(10000) println("im here readFromFileV8") readFromFileV8()}) zioStr I'm expecting the data.json file to be created from Client.request("https://api.opendota.com/api/leagues"), but nothing happens. Maybe I should use some sttp, or some other tools?
[ "If we fix indentation of the code we'll find this:\nobject fillFileWithLeagues {\n\n def fill = {\n for {\n openDotaResponse <- Client.request(\"https://api.opendota.com/api/leagues\")\n bodyOfResponse <- openDotaResponse.body.asString\n listOfLeagues <- ZIO.fromEither(bodyOfResponse.fromJson[List[League]].left.map(error => new Exception(error)))\n save = FileStorage.saveToFile(listOfLeagues.toJson) //Ok\n } yield ()\n }\n\n println(\"Im here fillFileWithLeagues.fill \")\n}\n\nAs you see the println is part of fillFileWithLeagues, not of fill.\nAnother potential problem is that an expression like fillFileWithLeagues.fill only returns a ZIO instance, it is not yet evaluated. To evaluate it, it needs to be run. For example as follows:\nimport zio._\n\nobject MainApp extends ZIOAppDefault {\n def run = fillFileWithLeagues.fill\n}\n\n" ]
[ 0 ]
[]
[]
[ "scala", "zio_http" ]
stackoverflow_0074669729_scala_zio_http.txt
Q: How to update the state of a useState component outside of the useEffect that transforms the State variable? I manage to read a selection variable declared as const [selection, setSelection] = useState([]) and I need to update it with setSelection to send it to another file with useContext. But my other file (the provider) does not receive any selection, even in a useEffect(). I would like to keep useContext and not use Redux or RTK. In my configuration, it is not advisable to reverse the parent/children. import '../styles/Filters.css' import React, { useState, useContext } from 'react' import { SelectionContext } from '../App' import Budget from '../components/Budget' import { useEffect } from 'react' export default function Filters() { const [filters, setFilters] = useState({}) const [minValue2, setMinValue2] = useState(0) const [maxValue2, setMaxValue2] = useState(0) let [selection, setSelection] = useContext(SelectionContext) useEffect( (send) => { if ( Object.keys(filters)[0] !== undefined && Object.keys(Object.values(filters)[0])[0] === '0' ) { if (!selection.some((el) => Object.keys(filters)[0] in el)) { selection.push({ [Object.keys(filters)[0]]: [Object.values(filters)[0]], }) } else if ( selection.some((el) => Object.keys(filters)[0] in el) && !selection.some((el) => Object.values(el)[0].includes(Object.values(filters)[0]) ) ) { let indexOfTitle = 0 indexOfTitle = selection.findIndex( (obj) => Object.keys(filters)[0] === Object.keys(obj)[0] ) selection[indexOfTitle][Object.keys(filters)[0]].push( Object.values(filters)[0] ) } else if ( selection.some((el) => Object.keys(filters)[0] in el) && selection.some((el) => Object.values(el)[0].includes(Object.values(filters)[0]) ) ) { let indexOfTitle = 0 indexOfTitle = selection.findIndex( (obj) => Object.keys(filters)[0] === Object.keys(obj)[0] ) let indexOfId = selection[indexOfTitle][ Object.keys(filters)[0] ].indexOf(Object.values(filters)[0]) selection[indexOfTitle][Object.keys(filters)[0]].splice( indexOfId, 1 ) if ( selection[indexOfTitle][Object.keys(filters)[0]] .length === 0 ) { selection.splice(indexOfTitle, 1) } } } }, [filters, selection] ) console.log(selection) useEffect(() => { setSelection(selection) }, [selection, setSelection]) return ( <> <Budget setMinValue2={setMinValue2} setMaxValue2={setMaxValue2} setFilters={setFilters} /> </> ) } A: Have you passed [selection, setSelection] to your SelectionContext.Provider? Make sure you have passed them like this: <SelectionContext.Provider value={[selection, setSelection]}> <Filters /> </SelectionContext.Provider>
How to update the state of a useState component outside of the useEffect that transforms the State variable?
I manage to read a selection variable declared as const [selection, setSelection] = useState([]) and I need to update it with setSelection to send it to another file with useContext. But my other file (the provider) does not receive any selection, even in a useEffect(). I would like to keep useContext and not use Redux or RTK. In my configuration, it is not advisable to reverse the parent/children. import '../styles/Filters.css' import React, { useState, useContext } from 'react' import { SelectionContext } from '../App' import Budget from '../components/Budget' import { useEffect } from 'react' export default function Filters() { const [filters, setFilters] = useState({}) const [minValue2, setMinValue2] = useState(0) const [maxValue2, setMaxValue2] = useState(0) let [selection, setSelection] = useContext(SelectionContext) useEffect( (send) => { if ( Object.keys(filters)[0] !== undefined && Object.keys(Object.values(filters)[0])[0] === '0' ) { if (!selection.some((el) => Object.keys(filters)[0] in el)) { selection.push({ [Object.keys(filters)[0]]: [Object.values(filters)[0]], }) } else if ( selection.some((el) => Object.keys(filters)[0] in el) && !selection.some((el) => Object.values(el)[0].includes(Object.values(filters)[0]) ) ) { let indexOfTitle = 0 indexOfTitle = selection.findIndex( (obj) => Object.keys(filters)[0] === Object.keys(obj)[0] ) selection[indexOfTitle][Object.keys(filters)[0]].push( Object.values(filters)[0] ) } else if ( selection.some((el) => Object.keys(filters)[0] in el) && selection.some((el) => Object.values(el)[0].includes(Object.values(filters)[0]) ) ) { let indexOfTitle = 0 indexOfTitle = selection.findIndex( (obj) => Object.keys(filters)[0] === Object.keys(obj)[0] ) let indexOfId = selection[indexOfTitle][ Object.keys(filters)[0] ].indexOf(Object.values(filters)[0]) selection[indexOfTitle][Object.keys(filters)[0]].splice( indexOfId, 1 ) if ( selection[indexOfTitle][Object.keys(filters)[0]] .length === 0 ) { selection.splice(indexOfTitle, 1) } } } }, [filters, selection] ) console.log(selection) useEffect(() => { setSelection(selection) }, [selection, setSelection]) return ( <> <Budget setMinValue2={setMinValue2} setMaxValue2={setMaxValue2} setFilters={setFilters} /> </> ) }
[ "Have you passed [selection, setSelection] to your SelectionContext.Provider?\nMake sure you have passed them like this:\n<SelectionContext.Provider value={[selection, setSelection]}>\n <Filters />\n</SelectionContext.Provider>\n\n" ]
[ 0 ]
[]
[]
[ "react_context", "react_hooks", "reactjs" ]
stackoverflow_0074658821_react_context_react_hooks_reactjs.txt
Q: Calling a member-function by std::function with reference to the object type as first parameter When using std::function to call a non-static member-function, we can pass either the object pointer or the object reference as the first parameter: struct Foo { void bar() const { std::cout << "Foo::bar called "<< std::endl; } }; int main() { Foo foo; // version1: pass the object pointer std::function<void(Foo*)> call_bar_by_pointer = &Foo::bar; call_bar_by_pointer(&foo); // or, version2: pass the object reference std::function<void(Foo&)> call_bar_by_reference = &Foo::bar; call_bar_by_reference(foo); return 0; } In my previous understanding, the non-static member-function essentially has the object pointer as the first argument implicitly. So when it comes to std::function for calling a non-static member-function, I also expect the first parameter in the parameter list to be the pointer to the object type (i.e., version1). Can someone kindly explain why the reference type (i.e., version2) is also supported here, and how it works? A: std::function can be used for any callable. It uses the INVOKE definition, specifically: INVOKE(f, t1, t2, ..., tN) is defined as follows: if f is a pointer to member function of class T: (1) if std::is_base_of<T, std::remove_reference_t<decltype(t1)>>::value is true, then INVOKE(f, t1, t2, ..., tN) is equivalent to (t1.*f)(t2, ..., tN) (2) otherwise, [...] (std::reference_wrapper specialization) (3) otherwise, if t1 does not satisfy the previous items, then INVOKE(f, t1, t2, ..., tN) is equivalent to ((*t1).*f)(t2, ..., tN). Point (1) works when t1 is a simple value or a reference, and point (3) works when t1 is a pointer (could be a smart pointer too). This is just how std::function (and anything using INVOKE) is implemented. Without it, you cannot use the same syntax, you must use the pointer-to-member operator: using BarType = void (Foo::*)() const; BarType normal_function_pointer = &Foo::bar; (foo.*normal_function_pointer)(); Note that std::function even works with std::reference_wrapper (point (2)), and it could work with other things, like double pointers, but that's just not implemented (and probably not useful).
Calling a member-function by std::function with reference to the object type as first parameter
When using std::function to call a non-static member-function, we can pass either the object pointer or the object reference as the first parameter: struct Foo { void bar() const { std::cout << "Foo::bar called "<< std::endl; } }; int main() { Foo foo; // version1: pass the object pointer std::function<void(Foo*)> call_bar_by_pointer = &Foo::bar; call_bar_by_pointer(&foo); // or, version2: pass the object reference std::function<void(Foo&)> call_bar_by_reference = &Foo::bar; call_bar_by_reference(foo); return 0; } In my previous understanding, the non-static member-function essentially has the object pointer as the first argument implicitly. So when it comes to std::function for calling a non-static member-function, I also expect the first parameter in the parameter list to be the pointer to the object type (i.e., version1). Can someone kindly explain why the reference type (i.e., version2) is also supported here, and how it works?
[ "std::function can be used for any callable.\nIt uses the INVOKE definition, specifically:\n\nINVOKE(f, t1, t2, ..., tN) is defined as follows:\n\nif f is a pointer to member function of class T:\n\n(1) if std::is_base_of<T, std::remove_reference_t<decltype(t1)>>::value is true, then INVOKE(f, t1, t2, ..., tN) is equivalent to (t1.*f)(t2, ..., tN)\n(2) otherwise, [...] (std::reference_wrapper specialization)\n(3) otherwise, if t1 does not satisfy the previous items, then INVOKE(f, t1, t2, ..., tN) is equivalent to ((*t1).*f)(t2, ..., tN).\n\n\n\n\nPoint (1) works when t1 is a simple value or a reference, and point (3) works when t1 is a pointer (could be a smart pointer too).\nThis is just how std::function (and anything using INVOKE) is implemented. Without it, you cannot use the same syntax, you must use the pointer-to-member operator:\nusing BarType = void (Foo::*)() const;\nBarType normal_function_pointer = &Foo::bar;\n(foo.*normal_function_pointer)();\n\nNote that std::function even works with std::reference_wrapper (point (2)), and it could work with other things, like double pointers, but that's just not implemented (and probably not useful).\n" ]
[ 1 ]
[]
[]
[ "c++", "pointer_to_member", "std_function" ]
stackoverflow_0074677063_c++_pointer_to_member_std_function.txt
Q: GCC- what does "-undefined dynamic_lookup" do? I'm reading the open-source code for RBENV, and I came across this line of code. When I generated the Makefile using the configure script and ran make, I saw the following: $ make gcc -fno-common -c -o realpath.o realpath.c gcc -dynamiclib -dynamic -undefined dynamic_lookup -o ../libexec/rbenv-realpath.dylib realpath.o I wanted to know what each of these commands is doing, so I started Googling each of the flags. I found many of them, but -undefined dynamic_lookup initially stumped me. Eventually I found this Github issue, which contained the following sentence: ...the option -undefined dynamic_lookup must be passed to the linker to indicate that unresolved symbols will be resolved at runtime. This makes sense to me, given what I know about the purpose of this particular Makefile (it allows RBENV to use a faster, more performant version of the realpath program as a dynamic library). However, I was unable to confirm this independently, since I don't see any official GCC docs which describe the -undefined flag or the dynamic_lookup option. I Googled gcc dynamic_lookup site:gnu.org, but the only results which turned up were Bugzilla tickets, email threads about patches, etc. I also searched man gcc and help gcc in my terminal, but got No manual entry for gcc as a response. Questions: Is the Github issue's description of -undefined dynamic_lookup correct? Am I going crazy, or is -undefined dynamic_lookup actually missing from the gcc docs? A: Turns out I was getting the No manual entry for gcc error because "gcc isn't installed anymore by Xcode, it really installs clang and calls it gcc". See this StackOverflow post and this answer for more info. Once I realized that, I Googled "man gcc" and found the docs from man7.org. As user @n.m. mentions in the comment above, the -undefined option (along with many others) is forwarded to the Darwin linker, therefore it is those docs that I should have been looking at. Once I found the Darwin linker docs, I found the info I was looking for: undefined < treatment > Specifies how undefined symbols are to be treated. Options are: error, warning, suppress, or dynamic_lookup. The default is error.
GCC- what does "-undefined dynamic_lookup" do?
I'm reading the open-source code for RBENV, and I came across this line of code. When I generated the Makefile using the configure script and ran make, I saw the following: $ make gcc -fno-common -c -o realpath.o realpath.c gcc -dynamiclib -dynamic -undefined dynamic_lookup -o ../libexec/rbenv-realpath.dylib realpath.o I wanted to know what each of these commands is doing, so I started Googling each of the flags. I found many of them, but -undefined dynamic_lookup initially stumped me. Eventually I found this Github issue, which contained the following sentence: ...the option -undefined dynamic_lookup must be passed to the linker to indicate that unresolved symbols will be resolved at runtime. This makes sense to me, given what I know about the purpose of this particular Makefile (it allows RBENV to use a faster, more performant version of the realpath program as a dynamic library). However, I was unable to confirm this independently, since I don't see any official GCC docs which describe the -undefined flag or the dynamic_lookup option. I Googled gcc dynamic_lookup site:gnu.org, but the only results which turned up were Bugzilla tickets, email threads about patches, etc. I also searched man gcc and help gcc in my terminal, but got No manual entry for gcc as a response. Questions: Is the Github issue's description of -undefined dynamic_lookup correct? Am I going crazy, or is -undefined dynamic_lookup actually missing from the gcc docs?
[ "Turns out I was getting the No manual entry for gcc error because \"gcc isn't installed anymore by Xcode, it really installs clang and calls it gcc\". See this StackOverflow post and this answer for more info. Once I realized that, I Googled \"man gcc\" and found the docs from man7.org.\nAs user @n.m. mentions in the comment above, the -undefined option (along with many others) is forwarded to the Darwin linker, therefore it is those docs that I should have been looking at. Once I found the Darwin linker docs, I found the info I was looking for:\n\nundefined < treatment >\nSpecifies how undefined symbols are to be treated. Options are: error, warning, suppress, or dynamic_lookup. The default is error.\n\n" ]
[ 0 ]
[]
[]
[ "gcc" ]
stackoverflow_0074667414_gcc.txt
Q: react native - problem with useState, fetch value? I want to update the 'people' field. The problem is that the previous value goes to the state, not the current one. I don't know how to fix it; I am getting the value from the previous render. const [totalPeoplem, setTotalPeople] = useState(0); const handleUpdate = async (initialItem) => { firestore() .collection('users') .doc(user.uid) .collection('city') .doc(itemId) .update({ people: totalPeople }).then(() => { console.log('Update'); }); } A: You can solve it with useEffect For example: const [totalPeoplem, setTotalPeople] = useState(0); useEffect(() => { const handleUpdate = async (initialItem) => { await firestore() .collection('users') .doc(user.uid) .collection('city') .doc(itemId) .update({ people: totalPeoplem, }).then(() => { console.log('Update'); }); }; handleUpdate() }, [totalPeoplem]);
react native - problem with useState, fetch value?
I want to update the 'people' field. The problem is that the previous value goes to the state, not the current one. I don't know how to fix it; I am getting the value from the previous render. const [totalPeoplem, setTotalPeople] = useState(0); const handleUpdate = async (initialItem) => { firestore() .collection('users') .doc(user.uid) .collection('city') .doc(itemId) .update({ people: totalPeople }).then(() => { console.log('Update'); }); }
[ "You can solve it with useEffect\nFor example:\nconst [totalPeoplem, setTotalPeople] = useState(0);\n\nuseEffect(() => {\n const handleUpdate = async (initialItem) => {\n await firestore()\n .collection('users')\n .doc(user.uid)\n .collection('city')\n .doc(itemId)\n .update({\n people: totalPeoplem,\n }).then(() => {\n console.log('Update');\n });\n };\n handleUpdate()\n}, [totalPeoplem]);\n\n" ]
[ 0 ]
[]
[]
[ "react_native" ]
stackoverflow_0074675810_react_native.txt
Q: how to call a property from values in django I need the value of the property, which is called through the values call, so that later I can use it in the union method. The model used is class Bills(models.Model): salesPerson = models.ForeignKey(User, on_delete = models.SET_NULL, null=True) purchasedPerson = models.ForeignKey(Members, on_delete = models.PROTECT, null=True) cash = models.BooleanField(default=True) totalAmount = models.IntegerField() advance = models.IntegerField(null=True, blank=True) remarks = models.CharField(max_length = 200, null=True, blank=True) created = models.DateTimeField(auto_now_add=True) update = models.DateTimeField(auto_now=True) class Meta: ordering = ['-update', '-created'] def __str__(self): return str(self.purchasedPerson) @property def balance(self): return 0 if self.cash == True else self.totalAmount - self.advance When I call the model as bills = Bills.objects.all() I can call the balance property as for bill in bills: bill.balance There is no issue with the above method, but I need to use the bills in a union with another model, so I needed fixed values to call. I am calling the method as bill_trans = Bills.objects.filter(purchasedPerson__id__contains = pk, cash = False).values('purchasedPerson', 'purchasedPerson__name', 'cash', 'totalAmount', 'id', 'created') In place of 'totalAmount' I need balance. How can I approach this step? A: you can annotate the balance and include it in your values (remember to import F with from django.db.models import F) bill_trans = Bills.objects.filter(purchasedPerson__id__contains = pk, cash = False).annotate(balance=F('totalAmount') - F('advance')).values('purchasedPerson', 'purchasedPerson__name', 'cash', 'balance', 'id', 'created') Also, I was suggesting that the advance field should not be allowed to be null and should instead default to zero
how to call a property from values in django
I need the value of the property, which is called through the values call, so that later I can use it in the union method. The model used is class Bills(models.Model): salesPerson = models.ForeignKey(User, on_delete = models.SET_NULL, null=True) purchasedPerson = models.ForeignKey(Members, on_delete = models.PROTECT, null=True) cash = models.BooleanField(default=True) totalAmount = models.IntegerField() advance = models.IntegerField(null=True, blank=True) remarks = models.CharField(max_length = 200, null=True, blank=True) created = models.DateTimeField(auto_now_add=True) update = models.DateTimeField(auto_now=True) class Meta: ordering = ['-update', '-created'] def __str__(self): return str(self.purchasedPerson) @property def balance(self): return 0 if self.cash == True else self.totalAmount - self.advance When I call the model as bills = Bills.objects.all() I can call the balance property as for bill in bills: bill.balance There is no issue with the above method, but I need to use the bills in a union with another model, so I needed fixed values to call. I am calling the method as bill_trans = Bills.objects.filter(purchasedPerson__id__contains = pk, cash = False).values('purchasedPerson', 'purchasedPerson__name', 'cash', 'totalAmount', 'id', 'created') In place of 'totalAmount' I need balance. How can I approach this step?
[ "you can annotate the balance and include it in your values\nbill_trans = Bills.objects.filter(purchasedPerson__id__contains = pk,\n cash = False).annotate(balance=(F('totalAmount') - F('advance')).values('purchasedPerson',\n 'purchasedPerson__name', 'cash',\n 'balance', 'id', 'created')\n\nAlso, I was suggesting that the advance field should not be allowed to be null and instead give a default of zero\n" ]
[ 0 ]
[]
[]
[ "django", "django_models" ]
stackoverflow_0074677149_django_django_models.txt
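A minimal, runnable sketch of the annotate() approach from the answer above, assuming the Bills model from the question. The Coalesce guard is an extra safeguard (not in the original answer) for rows where advance is NULL; since the queryset already filters cash=False, the property's cash branch needs no replicating here:

from django.db.models import F, Value
from django.db.models.functions import Coalesce

# balance = totalAmount - advance, treating a NULL advance as 0
bill_trans = (
    Bills.objects
    .filter(purchasedPerson__id__contains=pk, cash=False)
    .annotate(balance=F('totalAmount') - Coalesce(F('advance'), Value(0)))
    .values('purchasedPerson', 'purchasedPerson__name', 'cash',
            'balance', 'id', 'created')
)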
Q: How to make smaller bubbles above bigger bubbles in ggplot2 R? In the plot, smaller bubbles will be hidden by bigger bubbles. If I use alpha, they will appear. I would like the small bubbles to sit on top of the bigger ones without using alpha library(ggplot2) library(dplyr) The dataset is provided in the gapminder library library(gapminder) data <- gapminder %>% filter(year=="2007") %>% dplyr::select(-year) # Most basic bubble plot data %>% arrange(desc(pop)) %>% mutate(country = factor(country, country)) %>% ggplot(aes(x=gdpPercap, y=lifeExp, size=pop, color=continent)) + geom_point(alpha=0.5) + scale_size(range = c(.1, 24), name="Population (M)") A: The small points are already plotted over the large points. What you need is an outline on the points. You can do this by selecting shape = 21 and using the fill aesthetic for their overall color. Their outline can be whatever color you like, though here I have made them a darker version of the fill color, which gives a more subtle outline: library(ggplot2) library(dplyr) library(gapminder) library(colorspace) data <- gapminder %>% filter(year=="2007") %>% dplyr::select(-year) data %>% arrange(desc(pop)) %>% mutate(country = factor(country, country)) %>% ggplot(aes(gdpPercap, lifeExp, size = pop, fill = continent, color = after_scale(darken(fill, 0.3)))) + geom_point(shape = 21) + scale_size(range = c(.1, 24), name = "Population (M)") + scale_x_continuous("GDP per Capita", labels = scales::dollar) + ylab("Life Expectancy") + theme_minimal(base_size = 20) + scale_fill_brewer(palette = "Pastel1") + ggtitle("Average life expectancy 2007") + guides(size = "none", fill = guide_legend(override.aes = list(size = 6))) + theme(panel.background = element_rect(fill = "gray95", color = NA), legend.title = element_text(size = 25, face = 2), legend.text = element_text(size = 25, face = 2)) A: Here is another version of @Allan Cameron's solution: # Libraries library(ggplot2) library(dplyr) library(viridis) library(gapminder) gapminder %>% filter(year=="2007") %>% select(-year) %>% arrange(desc(pop)) %>% mutate(country = factor(country, country)) %>% ggplot(aes(x=gdpPercap, y=lifeExp, size=pop, fill=continent)) + geom_point(alpha=0.5, shape=21, color="black") + scale_size(range = c(.1, 24), name="Population (M)", guide="none")+ scale_fill_viridis(discrete=TRUE, option="A") + theme_bw() + theme(legend.position="bottom") + ylab("Life Expectancy") + xlab("Gdp per Capita")
How to make smaller bubbles above bigger bubbles in ggplot2 R?
In the plot, smaller bubbles will be hidden by bigger bubbles. If I use alpha, they will appear. I would like the small bubbles to sit on top of the bigger ones without using alpha library(ggplot2) library(dplyr) The dataset is provided in the gapminder library library(gapminder) data <- gapminder %>% filter(year=="2007") %>% dplyr::select(-year) # Most basic bubble plot data %>% arrange(desc(pop)) %>% mutate(country = factor(country, country)) %>% ggplot(aes(x=gdpPercap, y=lifeExp, size=pop, color=continent)) + geom_point(alpha=0.5) + scale_size(range = c(.1, 24), name="Population (M)")
[ "The small points are already plotted over the large points. What you need is an outline on the points. You can do this by selecting shape = 21 and using the fill aesthetic for their overall color. Their outline can be whatever color you like, though here I have made them a darker version of the fill color, which gives a more subtle outline:\nlibrary(ggplot2)\nlibrary(dplyr)\nlibrary(gapminder)\nlibrary(colorspace)\n\ndata <- gapminder %>% filter(year==\"2007\") %>% dplyr::select(-year)\n\ndata %>%\n arrange(desc(pop)) %>%\n mutate(country = factor(country, country)) %>%\n ggplot(aes(gdpPercap, lifeExp, size = pop, fill = continent,\n color = after_scale(darken(fill, 0.3)))) +\n geom_point(shape = 21) +\n scale_size(range = c(.1, 24), name = \"Population (M)\") +\n scale_x_continuous(\"GDP per Capita\", labels = scales::dollar) +\n ylab(\"Life Expectancy\") +\n theme_minimal(base_size = 20) +\n scale_fill_brewer(palette = \"Pastel1\") +\n ggtitle(\"Average life expectancy 2007\") +\n guides(size = \"none\",\n fill = guide_legend(override.aes = list(size = 6))) +\n theme(panel.background = element_rect(fill = \"gray95\", color = NA),\n legend.title = element_text(size = 25, face = 2),\n legend.text = element_text(size = 25, face = 2))\n\n\n", "Here is another version of @Allan Cameron's solution:\n# Libraries\nlibrary(ggplot2)\nlibrary(dplyr)\nlibrary(viridis)\nlibrary(gapminder)\n\ngapminder %>% \n filter(year==\"2007\") %>%\n select(-year) %>% \n arrange(desc(pop)) %>%\n mutate(country = factor(country, country)) %>%\n ggplot(aes(x=gdpPercap, y=lifeExp, size=pop, fill=continent)) +\n geom_point(alpha=0.5, shape=21, color=\"black\") +\n scale_size(range = c(.1, 24), name=\"Population (M)\", guide=\"none\")+\n scale_fill_viridis(discrete=TRUE, option=\"A\") +\n theme_bw() +\n theme(legend.position=\"bottom\") +\n ylab(\"Life Expectancy\") +\n xlab(\"Gdp per Capita\") \n\n\n" ]
[ 3, 1 ]
[]
[]
[ "ggplot2", "r" ]
stackoverflow_0074674746_ggplot2_r.txt
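The draw-order trick above is not ggplot2-specific. As an illustrative aside only (the thread itself is R-based), the same idea in Python's matplotlib: sort descending by size so the smallest markers are drawn last and land on top, with edgecolors supplying the outline:

import matplotlib.pyplot as plt
import pandas as pd

# Toy stand-in for the gapminder 2007 data; the numbers are made up.
df = pd.DataFrame({
    'gdpPercap': [500, 4000, 30000, 45000],
    'lifeExp':   [50, 65, 78, 81],
    'pop':       [1.2e9, 2.0e8, 8.0e7, 3.0e8],
})

df = df.sort_values('pop', ascending=False)   # big bubbles first, small last
plt.scatter(df['gdpPercap'], df['lifeExp'],
            s=df['pop'] / 1e6,                # scale population to marker size
            edgecolors='black', linewidths=0.5)
plt.xlabel('GDP per capita')
plt.ylabel('Life expectancy')
plt.show()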
Q: How to remove whitespaces in a string except from between certain elements I have a string similar to (the below one is simplified): " word= {his or her} whatever " I want to delete every whitespace except between {}, so that my modified string will be: "word={his or her}whatever" lstrip or rstrip doesn't work of course. If I delete all whitespaces the whitespaces between {} are deleted as well. I tried to look up solutions for limiting the replace function to certain areas, but even when I found them I haven't been able to implement them. There is some stuff with regex (I am not sure if it is relevant here) but I haven't been able to understand it. EDIT: If I wanted to except the area between, say {} and "", that is: if I wanted to turn this string: " word= {his or her} and "his or her" whatever " into this: "word={his or her}and"his or her"whatever" What would I change re.sub(r'\s+(?![^{]*})', '', list_name) into? A: To solve this problem, you can use regular expressions to find and replace the whitespace characters. In particular, you can use the re.sub function to search for whitespace characters outside of the curly braces and replace them with an empty string. Here is an example of how you can use re.sub to solve this problem: import re # Define the input string input_str = " word= {his or her} whatever " # Use a regular expression to search for whitespace characters outside of the curly braces output_str = re.sub(r'\s+(?![^{]*})', '', input_str) # Print the result print(output_str) This code will print the modified string as follows: word={his or her}whatever The regular expression r'\s+(?![^{]*})' matches the whitespace that you want to remove from the string. The negative lookahead assertion ensures that the match is not followed by a string of the form {...}, so that the whitespace between the curly brackets is not removed. The re.sub function replaces these matches with an empty string, effectively removing the whitespace characters from the input string. You can use this approach to modify your string and remove the whitespace characters outside of the curly braces. A: Instead of going through re, you can do the replacements with string.replace, which is much easier and less complex when you are playing around with strings. Especially when you have multiple substitutions, a regex quickly grows bigger. st =" word= {his or her} whatever " st2=""" word= {his or her} and "his or her" whatever """ new = " ".join(st2.split()) new = new.replace("= ", "=").replace("} ", "}").replace('" ' , '"').replace(' "' , '"') print(new) Some outputs Example 1 output word={his or her}whatever Example 2 output word={his or her}and"his or her"whatever
How to remove whitespaces in a string except from between certain elements
I have a string similar to (the below one is simplified): " word= {his or her} whatever " I want to delete every whitespace except between {}, so that my modified string will be: "word={his or her}whatever" lstrip or rstrip doesn't work of course. If I delete all whitespaces the whitespaces between {} are deleted as well. I tried to look up solutions for limiting the replace function to certain areas, but even when I found them I haven't been able to implement them. There is some stuff with regex (I am not sure if it is relevant here) but I haven't been able to understand it. EDIT: If I wanted to except the area between, say {} and "", that is: if I wanted to turn this string: " word= {his or her} and "his or her" whatever " into this: "word={his or her}and"his or her"whatever" What would I change re.sub(r'\s+(?![^{]*})', '', list_name) into?
[ "To solve this problem, you can use regular expressions to find and replace the whitespace characters. In particular, you can use the re.sub function to search for whitespace characters outside of the curly braces and replace them with an empty string.\nHere is an example of how you can use re.sub to solve this problem:\nimport re\n\n# Define the input string\ninput_str = \" word= {his or her} whatever \"\n\n# Use a regular expression to search for whitespace characters outside of the curly braces\noutput_str = re.sub(r'\\s+(?![^{]*})', '', input_str)\n\n# Print the result\nprint(output_str)\n\nThis code will print the modified string as follows:\nword={his or her}whatever\n\nThe regular expression r'\\s+(?![^{]*})' matches the whitespace that you want to remove from the string. The negative lookahead assertion ensures that the match is not followed by a string of the form {...}, so that the whitespace between the curly brackets is not removed. The `re.sub function replaces these matches with an empty string, effectively removing the whitespace characters from the input string.\nYou can use this approach to modify your string and remove the whitespace characters outside of the curly braces.\n", "See instead going arround re you can replace uisng string.replace. Which will be much more easier and less complex when you playing around strings. Espacillay when you have multiple substitutions you end up bigger regex.\nst =\" word= {his or her} whatever \"\nst2=\"\"\" word= {his or her} and \"his or her\" whatever \"\"\"\n\nnew = \" \".join(st2.split())\nnew = new.replace(\"= \", \"=\").replace(\"} \", \"}\").replace('\" ' , '\"').replace(' \"' , '\"')\nprint(new)\n\nSome outputs\nExample 1 output\nword={his or her}whatever\n\nExample 2 output\nword={his or her}and\"his or her\"whatever\n\n" ]
[ 1, 1 ]
[ "You can use by replace\ndef remove(string):\n return string.replace(\" \", \"\")\n\nstring = 'hell o whatever'\nprint(remove(string)) // Output: hellowhatever\n\n" ]
[ -2 ]
[ "python", "python_3.x", "removing_whitespace", "replace", "string" ]
stackoverflow_0074675792_python_python_3.x_removing_whitespace_replace_string.txt
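Since the EDIT in the question above asks how to protect both {...} and "..." spans, here is a minimal sketch of one way to extend the regex approach (illustrative only; the function and variable names are made up): the pattern captures the spans to keep in group 1, and the replacement callback drops only the bare whitespace runs:

import re

def squeeze_outside(text):
    # Group 1 captures {...} or "..." spans so they are kept verbatim;
    # any other whitespace run matches the \s+ alternative and is dropped.
    return re.sub(r'(\{[^}]*\}|"[^"]*")|\s+',
                  lambda m: m.group(1) or '', text)

print(squeeze_outside(' word= {his or her} and "his or her" whatever '))
# -> word={his or her}and"his or her"whatever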
Q: What is the ARIA role of a div? What is the default ARIA role of a div or span in HTML? Or, in other words, what is the ARIA role of non-interactive elements and the elements that don't have a specific ARIA role assigned to them? Is it none or presentation? Or is it not defined? And if it is, do they have any different meaning than none or presentation? A: The definitive answer to this and all similar questions can be found in the ARIA in HTML specification. [Note, this is different from the ARIA specification, because ARIA in principle at least can be used in other markup languages, and the ARIA spec itself makes no specific mention of HTML.] In particular, the table in this section gives the "implicit role" of each HTML element. It makes clear that divs (and spans too for that matter) have the implicit (= assumed/default) role of generic. Looking that up in the ARIA specification itself, the generic role is one that you can't (or at least shouldn't) use yourself as an HTML role attribute value - but is strongly related to the roles (that you can use) none and presentation. These 2 roles are synonyms of each other, so there is no difference between the two (but presentation is seen more often, at least in part because it's been around for longer). Both remove all semantic meaning from the element - which means screenreaders and other assistive technologies will read out the content of these elements, including any nested content with a semantic role, but that the div element itself has no semantic meaning. As far as assistive technology is concerned that <div> might as well not be there and just be replaced by whatever its children are. (This is different from marking something with aria-hidden="true" which means neither the element itself nor any of its children will be exposed to assistive technologies - with generic roles the content is still there, it just has no semantics attached.) I'm not 100% clear on what the difference is between the generic role and none/presentation, but the ARIA spec (same section as I linked to above) has this distinction: However, unlike elements with role presentation, generic elements are exposed in accessibility APIs so that assistive technologies can gather certain properties such as layout and bounds. The difference probably doesn't matter unless you're programming a browser or an assistive technology, as web authors as already mentioned should not use the generic role. A: The default ARIA role of a div or span in HTML is "presentation". This means that the element is not interactive and does not have any specific ARIA role assigned to it. The role of "none" is similar to "presentation" in that it indicates that the element is not interactive, but it explicitly states that the element should not be included in the accessibility tree. The meaning of "none" and "presentation" is the same in terms of the element's role in accessibility, but "none" may be used in situations where it is necessary to explicitly exclude the element from the accessibility tree.
What is the ARIA role of a div?
What is the default ARIA role of a div or span in HTML? Or, in other words, what is the ARIA role of non-interactive elements and the elements that don't have a specific ARIA role assigned to them? Is it none or presentation? Or is it not defined? And if it is, do they have any different meaning than none or presentation?
[ "The definitive answer to this and all similar questions can be found in the ARIA in HTML specification. [Note, this is different from the ARIA specification, because ARIA in principle at least can be used in other markup languages, and the ARIA spec itself makes no specific mention of HTML.]\nIn particular, the table in this section gives the \"implicit role\" of each HTML element. It makes clear that divs (and spans too for that matter) have the implicit (= assumed/default) role of generic.\nLooking that up in the ARIA specification itself, the generic role is one that you can't (or at least shouldn't) use yourself as an HTML role attribute value - but is strongly related to the roles (that you can use) none and presentation. These 2 roles are synonyms of each other, so there is no difference between the two (but presentation is seen more often, at least in part because it's been around for longer). Both remove all semantic meaning from the element - which means screenreaders and other assistive technologies will read out the content of these elements, including any nested content with a semantic role, but that the div element itself has no semantic meaning. As far as assistive technology is concerned that <div> might as well not be there and just be replaced by whatever its children are. (This is different from marking something with aria-hidden=\"true\" which means neither the element itself nor any of its children will be exposed to assistive technologies - with generic roles the content is still there, it just has no semantics attached.)\nI'm not 100% clear on what the difference is between the generic role and none/presentation, but the ARIA spec (same section as I linked to above) has this distinction:\n\nHowever, unlike elements with role presentation, generic elements are exposed in accessibility APIs so that assistive technologies can gather certain properties such as layout and bounds.\n\nThe difference probably doesn't matter unless you're programming a browser or an assistive technology, as web authors as already mentioned should not use the generic role.\n", "The default ARIA role of a div or span in HTML is \"presentation\". This means that the element is not interactive and does not have any specific ARIA role assigned to it. The role of \"none\" is similar to \"presentation\" in that it indicates that the element is not interactive, but it explicitly states that the element should not be included in the accessibility tree. The meaning of \"none\" and \"presentation\" is the same in terms of the element's role in accessibility, but \"none\" may be used in situations where it is necessary to explicitly exclude the element from the accessibility tree.\n" ]
[ 2, 0 ]
[]
[]
[ "accessibility", "aria_role", "html", "wai_aria" ]
stackoverflow_0074674764_accessibility_aria_role_html_wai_aria.txt
Q: How to use Django signals with role-based decorators? Update, I have created a signal file like this, so for example if an employer has created a shift, the admin will get an email notification like this: from django.db.models.signals import post_save from django.dispatch import receiver from .models import Shift from django.core.mail import send_mail @receiver(post_save, sender=Shift) def send_mail_to_user(sender, instance, created, **kwargs): if created: send_mail( 'A new shift has been created by employer xxx', 'Here is the message.', '[email protected]', ['[email protected]'], fail_silently=False, ) Original post: Hi, I'm trying to add signals when an employer or admin/staff has created a shift. Currently I have a view like this, and I'm wondering how I should modify it so I can have a post-save signal? @login_required @admin_staff_employer_required def createShift(request): user=request.user employer=Employer.objects.all() form = CreateShiftForm() if request.method == 'POST': form = CreateShiftForm(request.POST) if form.is_valid(): form.save() messages.success(request, "The shift has been created") return redirect('/shifts') else: messages.error(request,"Please correct your input field and try again") context = {'form':form} return render(request, 'create_shift.html', context) Thanks for your help! A: You have to create a new function that will be the receiver of the post-save signal. It is explained here in the docs: https://docs.djangoproject.com/en/4.1/topics/signals/#connecting-to-signals-sent-by-specific-senders Basically, you would need to set the sender to your model class, and on every post_save signal the handler function would get triggered. If you need a specific signal you can create your custom ones: https://thetldr.tech/how-to-add-custom-signals-dispatch-in-django/ , but for your use case you can trigger the function on every save of the shift model. from django.db.models.signals import post_save from django.dispatch import receiver #here you have to import your Shift model @receiver(post_save, sender=Shift) def my_handler(sender, **kwargs): print('Shift created') Also import the signals file in your apps.py file like so: from django.apps import AppConfig class ApplicationConfig(AppConfig): name = "<your app name>" def ready(self): from <app_name> import <signal_receivers_file_name> This should print "Shift created" to your console once you create a new shift. A: Try this (in this case I suppose your app name is 'almo' and model name is 'Shift'). signals.py from .models import Shift from django.dispatch import receiver from django.db.models.signals import post_save @receiver(post_save, sender=Shift) def shift_created_callback(sender, **kwargs): print('shift created!') # do something # arguments can be added to your taste # def shift_created_callback(sender, instance, **kwargs): # def shift_created_callback(sender, instance, created, **kwargs): apps.py from django.apps import AppConfig class AlmoConfig(AppConfig): default_auto_field = 'django.db.models.BigAutoField' name = 'almo' def ready(self): import almo.signals You can see 'shift created!' in the console when you create a shift
How to use Django signals with role-based decorators?
Update, I have created a signal file like this, so for example if an employer has created a shift, the admin will get an email notification like this: from django.db.models.signals import post_save from django.dispatch import receiver from .models import Shift from django.core.mail import send_mail @receiver(post_save, sender=Shift) def send_mail_to_user(sender, instance, created, **kwargs): if created: send_mail( 'A new shift has been created by employer xxx', 'Here is the message.', '[email protected]', ['[email protected]'], fail_silently=False, ) Original post: Hi, I'm trying to add signals when an employer or admin/staff has created a shift. Currently I have a view like this, and I'm wondering how I should modify it so I can have a post-save signal? @login_required @admin_staff_employer_required def createShift(request): user=request.user employer=Employer.objects.all() form = CreateShiftForm() if request.method == 'POST': form = CreateShiftForm(request.POST) if form.is_valid(): form.save() messages.success(request, "The shift has been created") return redirect('/shifts') else: messages.error(request,"Please correct your input field and try again") context = {'form':form} return render(request, 'create_shift.html', context) Thanks for your help!
[ "You have to create a new function that will be the receiver of the post-save signal. It is explained here in the docs: https://docs.djangoproject.com/en/4.1/topics/signals/#connecting-to-signals-sent-by-specific-senders\nBasically, you would need to set the sender to your model class, and on every post_save signal the handler function would get triggered. If you need a specific signal you can create your custom ones: https://thetldr.tech/how-to-add-custom-signals-dispatch-in-django/ , but for your use case you can trigger the function on every save of the shift model.\nfrom django.db.models.signals import post_save\nfrom django.dispatch import receiver\n#here you have to import your Shift model\n\n\n@receiver(post_save, sender=Shift)\ndef my_handler(sender, **kwargs):\n print('Shift created')\n\nAlso import the signals file in your apps.py file like so:\nfrom django.apps import AppConfig\n\n\nclass ApplicationConfig(AppConfig):\n name = \"<your app name>\"\n\n def ready(self):\n from <app_name> import <signal_recievers_file_name>\n\nThis should print \"Shift created\" to your console once you create a new shift.\n", "Try this(in this case I suppose your app name is 'almo' and model name is 'Shift').\nsignals.py\nfrom .models import Shift\nfrom django.dispatch import receiver\nfrom django.db.models.signals import post_save\n\n@receiver(post_save, sender=Shift)\ndef shift_created_callback(sender, **kwargs):\n print('shift created!')\n # do something\n\n# arguments can be added by your taste\n# def shift_created_callback(sender, instance, **kwargs):\n# def shift_created_callback(sender, instance, created, **kwargs):\n\napps.py\nfrom django.apps import AppConfig\n\nclass AlmoConfig(AppConfig):\n default_auto_field = 'django.db.models.BigAutoField'\n name = 'almo'\n\n def ready(self):\n import almo.signals\n\nYou can see 'shift created' in console when you create shift\n" ]
[ 1, 1 ]
[]
[]
[ "django", "django_models", "django_views" ]
stackoverflow_0074675512_django_django_models_django_views.txt
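A compact variant of the receiver above, sketched with a few assumptions: dispatch_uid (a real receiver option) guards against the handler being registered twice, instance.salesPerson is assumed to exist on the Shift model, and the addresses are placeholders:

from django.core.mail import send_mail
from django.db.models.signals import post_save
from django.dispatch import receiver

from .models import Shift  # the Shift model from the question

@receiver(post_save, sender=Shift, dispatch_uid='shift_created_mail')
def notify_admin(sender, instance, created, **kwargs):
    # Fire only for newly created shifts, not for later updates.
    if created:
        send_mail(
            subject=f'A new shift has been created by {instance.salesPerson}',
            message=f'Shift {instance.pk} was created.',
            from_email='noreply@example.com',
            recipient_list=['admin@example.com'],
            fail_silently=False,
        )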
Q: KSQLDB: Using CREATE STREAM AS SELECT with Differing KEY SCHEMAS Here is the description of the problem statement: STREAM_SUMMARY: A stream with one of the value columns as an ARRAY-of-STRUCTS. Name : STREAM_SUMMARY Field | Type ------------------------------------------------------------------------------------------------------------------------------------------------ ROWKEY | STRUCT<asessment_id VARCHAR(STRING), institution_id INTEGER> (key) assessment_id | VARCHAR(STRING) institution_id | INTEGER responses | ARRAY<STRUCT<student_id INTEGER, question_id INTEGER, response VARCHAR(STRING)>> ------------------------------------------------------------------------------------------------------------------------------------------------ STREAM_DETAIL: This is a stream to be created from STREAM1, by "exploding" the array-of-structs into separate rows. Note that the KEY schema is also different. Below is the Key and Value schema I want to achieve (end state)... Name : STREAM_DETAIL Field | Type ------------------------------------------------------------------------------------------------------- ROWKEY | **STRUCT<asessment_id VARCHAR(STRING), student_id INTEGER, question_id INTEGER> (key)** assessment_id | VARCHAR(STRING) institution_id | INTEGER student_id | INTEGER question_id | INTEGER response | VARCHAR(STRING) My objective is to create the STREAM_DETAIL from the STREAM_SUMMARY. I tried the below: CREATE STREAM STREAM_DETAIL WITH ( KAFKA_TOPIC = 'stream_detail' ) AS SELECT STRUCT ( `assessment_id` := "assessment_id", `student_id` := EXPLODE("responses")->"student_id", `question_id` := EXPLODE("responses")->"question_id" ) , "assessment_id" , "institution_id" , EXPLODE("responses")->"student_id" , EXPLODE("responses")->"question_id" , EXPLODE("responses")->"response" FROM STREAM_SUMMARY EMIT CHANGES; While the SELECT query works fine, the CREATE STREAM returned with the following error: "Key missing from projection." If I add the ROWKEY column in the SELECT clause in the above statement, things work, however, the KEY schema of the resultant STREAM is same as the original STREAM's key. The "Key" schema that I want in the new STREAM is : STRUCT<asessment_id VARCHAR(STRING), student_id INTEGER, question_id INTEGER> (key) Alternatively, I tried creating the STREAM_DETAIL by hand (using plain CREATE STREAM statement by providing key and value SCHEMA_IDs). Later I tried the INSERT INTO approach... INSERT INTO STREAM_DETAIL SELECT .... FROM STREAM_SUMMARY EMIT CHANGES; The errors were the same. Can you please guide me on how I can enrich a STREAM but with a different Key Schema? Note that a new/different Key schema is important for me since I use the underlying topic to be synced to a database via a Kafka sink connector. The sink connector requires the key schema in this way, for me to be able to do an UPSERT. I am not able to get past this. Appreciate your help. A: You can't change the key of a stream when it is created from another stream. But there is a different approach to the problem. What you want is re-key. And to do so you need to use a ksqlDB table. Can be solved like - CREATE STREAM IF NOT EXISTS INTERMEDIATE_STREAM_SUMMARY_FLATTNED AS SELECT ROWKEY, EXPLODE(responses) as response FROM STREAM_SUMMARY; CREATE TABLE IF NOT EXISTS STREAM_DETAIL AS -- This also creates an underlying topic SELECT ROWKEY -> `assessment_id` as `assessment_id`, response -> `student_id` as `student_id`, response -> `question_id` as `question_id`, ROWKEY -> `institution_id` as `institution_id`, response -> `response` as `response` FROM INTERMEDIATE_STREAM_SUMMARY_FLATTNED GROUP BY ROWKEY -> `assessment_id`, response -> `student_id`, response -> `question_id`; Key schema will be STRUCT<asessment_id VARCHAR(STRING), student_id INTEGER, question_id INTEGER>, you can check the schema registry or print the topic to validate that. In ksqlDB, DESCRIBE on the table will show you a flat key, but don't panic. I have used a similar setup and synced the final topic to a database.
KSQLDB: Using CREATE STREAM AS SELECT with Differing KEY SCHEMAS
Here is the description of the problem statement: STREAM_SUMMARY: A stream with one of the value columns as an ARRAY-of-STRUCTS. Name : STREAM_SUMMARY Field | Type ------------------------------------------------------------------------------------------------------------------------------------------------ ROWKEY | STRUCT<asessment_id VARCHAR(STRING), institution_id INTEGER> (key) assessment_id | VARCHAR(STRING) institution_id | INTEGER responses | ARRAY<STRUCT<student_id INTEGER, question_id INTEGER, response VARCHAR(STRING)>> ------------------------------------------------------------------------------------------------------------------------------------------------ STREAM_DETAIL: This is a stream to be created from STREAM1, by "exploding" the array-of-structs into separate rows. Note that the KEY schema is also different. Below is the Key and Value schema I want to achieve (end state)... Name : STREAM_DETAIL Field | Type ------------------------------------------------------------------------------------------------------- ROWKEY | **STRUCT<asessment_id VARCHAR(STRING), student_id INTEGER, question_id INTEGER> (key)** assessment_id | VARCHAR(STRING) institution_id | INTEGER student_id | INTEGER question_id | INTEGER response | VARCHAR(STRING) My objective is to create the STREAM_DETAIL from the STREAM_SUMMARY. I tried the below: CREATE STREAM STREAM_DETAIL WITH ( KAFKA_TOPIC = 'stream_detail' ) AS SELECT STRUCT ( `assessment_id` := "assessment_id", `student_id` := EXPLODE("responses")->"student_id", `question_id` := EXPLODE("responses")->"question_id" ) , "assessment_id" , "institution_id" , EXPLODE("responses")->"student_id" , EXPLODE("responses")->"question_id" , EXPLODE("responses")->"response" FROM STREAM_SUMMARY EMIT CHANGES; While the SELECT query works fine, the CREATE STREAM returned with the following error: "Key missing from projection." If I add the ROWKEY column in the SELECT clause in the above statement, things work, however, the KEY schema of the resultant STREAM is same as the original STREAM's key. The "Key" schema that I want in the new STREAM is : STRUCT<asessment_id VARCHAR(STRING), student_id INTEGER, question_id INTEGER> (key) Alternatively, I tried creating the STREAM_DETAIL by hand (using plain CREATE STREAM statement by providing key and value SCHEMA_IDs). Later I tried the INSERT INTO approach... INSERT INTO STREAM_DETAIL SELECT .... FROM STREAM_SUMMARY EMIT CHANGES; The errors were the same. Can you please guide me on how I can enrich a STREAM but with a different Key Schema? Note that a new/different Key schema is important for me since I use the underlying topic to be synced to a database via a Kafka sink connector. The sink connector requires the key schema in this way, for me to be able to do an UPSERT. I am not able to get past this. Appreciate your help.
[ "You can't change the key of a stream when it is created from another stream.\nBut there is a different approach to the problem.\nWhat you want is re-key. And to do so you need to use ksqlDB table. Can be solved like -\nCREATE STREAM IF NOT EXISTS INTERMEDIATE_STREAM_SUMMARY_FLATTNED AS\nSELECT\n ROWKEY,\n EXPLODE(responses) as response\nFROM STREAM_SUMMARY;\n\nCREATE TABLE IF NOT EXISTS STREAM_DETAIL AS -- This also creates a underlying topic\nSELECT\n ROWKEY -> `assessment_id` as `assessment_id`,\n response -> `student_id` as `student_id`,\n response -> `question_id` as `question_id`,\n ROWKEY -> `institution_id` as `institution_id`,\n response -> `response` as `response`\nFROM INTERMEDIATE_STREAM_SUMMARY_FLATTNED\nGROUP BY ROWKEY -> `assessment_id`, response -> `student_id`, response -> `question_id`;\n\nKey schema will be STRUCT<asessment_id VARCHAR(STRING), student_id INTEGER, question_id INTEGER>, you can check schema registry or print the topic to validate that. In ksqlDB describe table will show you flat key, but don't panic.\nI have used similar and sync the final topic to database.\n" ]
[ 0 ]
[]
[]
[ "apache_kafka_streams", "ksqldb" ]
stackoverflow_0074673252_apache_kafka_streams_ksqldb.txt
Q: How to change the color of a button? I'm new to android programming. How do I change the color of a button? <Button android:id="@+id/btn" android:layout_width="55dp" android:layout_height="50dp" android:layout_gravity="center" android:text="Button Text" android:paddingBottom="20dp"/> A: The RIGHT way... The following methods actually work. if you wish - using a theme By default a button's color is android:colorAccent. So, if you create a style like this... <style name="Button.White" parent="ThemeOverlay.AppCompat"> <item name="colorAccent">@android:color/white</item> </style> You can use it like this... <Button android:layout_width="wrap_content" android:layout_height="wrap_content" android:theme="@style/Button.White" /> alternatively - using a tint You can simply add android:backgroundTint for API Level 21 and higher, or app:backgroundTint for API Level 7 and higher. For more information, see this blog. The problem with the accepted answer... If you replace the background with a color you will lose the effect of the button, and the color will be applied to the entire area of the button. It will not respect the padding, shadow, and corner radius. A: You can change the colour two ways; through XML or through coding. I would recommend XML since it's easier to follow for beginners. XML: <Button android:background="@android:color/white" android:textColor="@android:color/black" /> You can also use hex values ex. android:background="#FFFFFF" Coding: //btn represents your button object btn.setBackgroundColor(Color.WHITE); btn.setTextColor(Color.BLACK); A: For the text color add: android:textColor="<hex color>" For the background color add: android:background="<hex color>" From API 21 you can use: android:backgroundTint="<hex color>" android:backgroundTintMode="<mode>" Note: If you're going to work with android/java you really should learn how to google ;) How to customize different buttons in Android A: If the first solution doesn't work try this: android:backgroundTint="@android:color/white" I hope this works. Happy coding. A: Many great methods presented above - One newer note It appears that there was a bug in earlier versions of Material that prevented certain types of overriding the button color. See: [Button] android:background not working #889 I am using today Material 1.3.0. I just followed the direction of KavinduDissanayake in the linked post and used this format: app:backgroundTint="@color/purple_700" (I changed the chosen color to my own theme of course.) This solution worked very simply for me. A: Here is my code, to make different colors on button, and Linear, Constraint and Scroll Layout First, you need to make a custom_button.xml on your drawable Go to res Expand it, right click on drawable New -> Drawable Resource File File Name : custom_button, Click OK Custom_Button.xml Code <?xml version="1.0" encoding="utf-8"?> <selector xmlns:android="http://schemas.android.com/apk/res/android"> <item android:state_pressed="true" android:drawable="@color/red"/> <!-- pressed --> <item android:state_focused="true" android:drawable="@color/blue"/> <!-- focused --> <item android:drawable="@color/black"/> <!-- default --> </selector> Second, go to res Expand values Double click on colors.xml Copy the code below Colors.xml Code <?xml version="1.0" encoding="utf-8"?> <resources> <color name="colorPrimary">#3F51B5</color> <color name="colorPrimaryDark">#303F9F</color> <color name="colorAccent">#FF4081</color> <color name="black">#000</color> <color name="violet">#9400D3</color> <color name="indigo">#4B0082</color> <color name="blue">#0000FF</color> <color name="green">#00FF00</color> <color name="yellow">#FFFF00</color> <color name="orange">#FF7F00</color> <color name="red">#FF0000</color> </resources> Screenshots below XML Coding Design Preview A: Starting with API 23, you can do: btn.setBackgroundColor(getResources().getColor(R.color.colorOffWhite)); and your colors.xml must contain: <?xml version="1.0" encoding="utf-8"?> <resources> <color name="colorOffWhite">#80ffffff</color> </resources> A: Best way to change button color without losing button ghosting and other features. Try it and you will see it is the best app:backgroundTint="@color/color_name" A: in theme change this: parent="Theme.MaterialComponents.DayNight.DarkActionBar" to that: parent="Theme.AppCompat.Light.NoActionBar" It worked for me after much searching A: usually with API 21 and above : just PUT this attribute : android:backgroundTint=" your color " A: You can change the value in the XML like this: <Button android:background="#FFFFFF" ../> Here, you can add any other color, from the resources or hex. Similarly, you can also change these values from the code like this: demoButton.setBackgroundColor(Color.WHITE); Another easy way is to make a drawable, customize the corners and shape according to your preference and set the background color and stroke of the drawable. For eg. button_background.xml <?xml version="1.0" encoding="UTF-8"?> <shape xmlns:android="http://schemas.android.com/apk/res/android"> <stroke android:width="2dp" android:color="#ff207d94" /> <corners android:radius="5dp" /> <solid android:color="#FFFFFF" /> </shape> And then set this shape as the background of your button. <Button android:background="@drawable/button_background.xml" ../> Hope this helps, good luck! A: see the image and easy understand A: in the new update of Android Studio you have to change the button -> androidx.appcompat.widget.AppCompatButton; only then will the button color change res/drawable/button_color_border.xml <?xml version="1.0" encoding="utf-8"?> <shape xmlns:android="http://schemas.android.com/apk/res/android" android:shape="rectangle" > <solid android:color="#FFDA8200" /> <stroke android:width="3dp" android:color="#FFFF4917" /> </shape> And add button to your XML activity layout and set background android:background="@drawable/button_color_border". <androidx.appcompat.widget.AppCompatButton android:layout_width="wrap_content" android:layout_height="wrap_content" android:background="@drawable/button_color_border" android:text="Button" /> A: Use below code for button background color btn.setBackgroundResource(R.drawable.btn_rounded); here is drawable xml <layer-list xmlns:android="http://schemas.android.com/apk/res/android"> <item> <shape android:shape="rectangle" > <solid android:color="@color/gray_scale_color"/> <corners android:radius="@dimen/_12sdp"/> <stroke android:width="0.5dp" android:color="?attr/appbackgroundColor"/> </shape> </item> </layer-list> A: I'm using API 32 and I had to do it this way in the XML code: android:backgroundTint="@android:color/white" A: If you are trying to set the background as some other resource file in your drawable folder, say, a custom-button.xml, then try this: button_name.setBackgroundResource(R.drawable.custom_button_file_name); eg. Say, you have a custom-button.xml file. Then, button_name.setBackgroundResource(R.drawable.custom_button); Will set the button background as the custom-button.xml file. A: I have the same problem the solution for me was the background color was colorPrimary of my theme you can use a custom theme as one answer says above and set the colorPrimary to what you want 1- add this to your "value/themes/themes.xml" inside resources <resources> <style name="Button.color" parent="ThemeOverlay.AppCompat"> <item name="colorPrimary">@color/purple_500</item> </style> </resources> 2- add this line to the button you want to have the color <Button android:theme="@style/Button.color"/> A: backgroundTint above API21 background has no effect it takes colorPrimary of the theme by default A: Button background color in xml <Button android:id="@+id/button" android:background="#0000FF" android:textColor="#FFFFFF"/> Change button background color programmatically Button button = findViewById(R.id.button); button.setBackgroundColor(Color.BLUE); Custom button background shape.xml [res --> drawable] <?xml version="1.0" encoding="utf-8"?> <shape xmlns:android="http://schemas.android.com/apk/res/android"> <corners android:radius="5dp" /> <gradient android:angle="0" android:endColor="#0000FF" android:startColor="#00FFFF" /> <stroke android:width="1dp" android:color="#000000" /> </shape> Add this line android:background="@drawable/shape" full code <Button android:id="@+id/button" android:background="@drawable/shape" android:textColor="#FFFFFF"/> Material Design Button Change button background color app:backgroundTint="#0000FF" Button Stroke color app:strokeColor="#000000" Button Stroke width app:strokeWidth="2dp" Full code <Button android:id="@+id/button" android:textColor="#FFFFFF" app:backgroundTint="#0000FF" app:strokeColor="#000000" app:strokeWidth="2dp"/> Hope you find this helpful! A: Go to themes.xml file under res -> values -> themes Go to the line - <style name="Theme.YourProjectName" parent="Theme.MaterialComponents.DayNight.DarkActionBar"> and change it to - <style name="Theme.YourProjectName" parent="Theme.AppCompat.DayNight.DarkActionBar"> Then proceed with changing android:backgroundTint to the desired color
How to change the color of a button?
I'm new to android programming. How do I change the color of a button? <Button android:id="@+id/btn" android:layout_width="55dp" android:layout_height="50dp" android:layout_gravity="center" android:text="Button Text" android:paddingBottom="20dp"/>
[ "The RIGHT way...\nThe following methods actually work.\nif you wish - using a theme\nBy default a buttons color is android:colorAccent. So, if you create a style like this...\n<style name=\"Button.White\" parent=\"ThemeOverlay.AppCompat\">\n <item name=\"colorAccent\">@android:color/white</item>\n</style>\n\nYou can use it like this...\n<Button\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:theme=\"@style/Button.White\"\n />\n\nalternatively - using a tint\nYou can simply add android:backgroundTint for API Level 21 and higher, or app:backgroundTint for API Level 7 and higher.\nFor more information, see this blog.\nThe problem with the accepted answer...\nIf you replace the background with a color you will loose the effect of the button, and the color will be applied to the entire area of the button. It will not respect the padding, shadow, and corner radius.\n", "You can change the colour two ways; through XML or through coding. I would recommend XML since it's easier to follow for beginners.\nXML:\n<Button\n android:background=\"@android:color/white\"\n android:textColor=\"@android:color/black\"\n/>\n\nYou can also use hex values ex.\nandroid:background=\"#FFFFFF\"\n\nCoding:\n//btn represents your button object\n\nbtn.setBackgroundColor(Color.WHITE);\nbtn.setTextColor(Color.BLACK);\n\n", "For the text color add:\nandroid:textColor=\"<hex color>\"\n\nFor the background color add:\nandroid:background=\"<hex color>\"\n\nFrom API 21 you can use:\nandroid:backgroundTint=\"<hex color>\"\nandroid:backgroundTintMode=\"<mode>\"\n\n\nNote: If you're going to work with android/java you really should learn how to google ;)How to customize different buttons in Android\n", "If the first solution doesn't work try this:\nandroid:backgroundTint=\"@android:color/white\"\n\nI hope this work.\nHappy coding.\n", "Many great methods presented above - One newer note\nIt appears that there was a bug in earlier versions of Material that prevented certain types of overriding the button color.\nSee: [Button] android:background not working #889\nI am using today Material 1.3.0. I just followed the direction of KavinduDissanayake in the linked post and used this format:\napp:backgroundTint=\"@color/purple_700\"\n\n(I changed the chosen color to my own theme of course.) 
This solution worked very simply for me.\n", "Here is my code, to make different colors on button, and Linear, Constraint and Scroll Layout\nFirst, you need to make a custom_button.xml on your drawable\n\nGo to res\nExpand it, right click on drawable\nNew -> Drawable Resource File\nFile Name : custom_button, Click OK\n\nCustom_Button.xml Code\n<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<selector xmlns:android=\"http://schemas.android.com/apk/res/android\">\n <item android:state_pressed=\"true\" android:drawable=\"@color/red\"/> <!-- pressed -->\n <item android:state_focused=\"true\" android:drawable=\"@color/blue\"/> <!-- focused -->\n <item android:drawable=\"@color/black\"/> <!-- default -->\n</selector>\n\nSecond, go to res\n\nExpand values\nDouble click on colors.xml\nCopy the code below\n\nColors.xml Code\n<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<resources>\n <color name=\"colorPrimary\">#3F51B5</color>\n <color name=\"colorPrimaryDark\">#303F9F</color>\n <color name=\"colorAccent\">#FF4081</color>\n\n <color name=\"black\">#000</color>\n <color name=\"violet\">#9400D3</color>\n <color name=\"indigo\">#4B0082</color>\n <color name=\"blue\">#0000FF</color>\n <color name=\"green\">#00FF00</color>\n <color name=\"yellow\">#FFFF00</color>\n <color name=\"orange\">#FF7F00</color>\n <color name=\"red\">#FF0000</color>\n</resources>\n\nScreenshots below\n\n XML Coding\n\n Design Preview\n", "Starting with API 23, you can do:\nbtn.setBackgroundColor(getResources().getColor(R.color.colorOffWhite));\n\nand your colors.xml must contain:\n<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<resources>\n <color name=\"colorOffWhite\">#80ffffff</color>\n</resources>\n\n", "Best way to change button color without losing button ghosting and other features.\nTry it and you will see it is the best\napp:backgroundTint=\"@color/color_name\"\n\n", "in theme change this:\nparent=\"Theme.MaterialComponents.DayNight.DarkActionBar\"\nto that:\nparent=\"Theme.AppCompat.Light.NoActionBar\"\nIt worked for me after many search\n", "usually with API 21 and above :\njust PUT this attribute : android:backgroundTint=\" your color \"\n", "You can change the value in the XML like this:\n<Button\n android:background=\"#FFFFFF\"\n ../>\n\nHere, you can add any other color, from the resources or hex.\nSimilarly, you can also change these values form the code like this:\ndemoButton.setBackgroundColor(Color.WHITE);\n\nAnother easy way is to make a drawable, customize the corners and shape according to your preference and set the background color and stroke of the drawable.\nFor eg.\nbutton_background.xml\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<shape xmlns:android=\"http://schemas.android.com/apk/res/android\">\n <stroke android:width=\"2dp\" android:color=\"#ff207d94\" />\n <corners android:radius=\"5dp\" />\n <solid android:color=\"#FFFFFF\" />\n</shape>\n\nAnd then set this shape as the background of your button.\n<Button\n android:background=\"@drawable/button_background.xml\"\n ../>\n\nHope this helps, good luck!\n", "see the image and easy understand\n\n", "\nin new update of android studio you have to change the\n\nbutton -> androidx.appcompat.widget.AppCompatButton\nthen only the button color will changed\nres/drawable/button_color_border.xml\n<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<shape xmlns:android=\"http://schemas.android.com/apk/res/android\"\n android:shape=\"rectangle\" >\n\n <solid android:color=\"#FFDA8200\" />\n\n <stroke\n android:width=\"3dp\"\n android:color=\"#FFFF4917\" 
/>\n\n</shape>\n\n\n\nAnd add button to your XML activity layout and set background\nandroid:background=\"@drawable/button_color_border\".\n<androidx.appcompat.widget.AppCompatButton\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:background=\"@drawable/button_border\"\n android:text=\"Button\" />\n\n\n", "Use below code for button background color\nbtn.setBackgroundResource(R.drawable.btn_rounded);\n\nhere is drawable xml\n<layer-list xmlns:android=\"http://schemas.android.com/apk/res/android\">\n\n <item>\n <shape android:shape=\"rectangle\" >\n <solid android:color=\"@color/gray_scale_color\"/>\n <corners android:radius=\"@dimen/_12sdp\"/>\n\n <stroke android:width=\"0.5dp\"\n android:color=\"?attr/appbackgroundColor\"/>\n </shape>\n </item>\n</layer-list>\n\n", "i'm using api 32 and I had to do it this way in the xml code:\nandroid:backgroundTint=\"@android:color/white\"\n\n", "If you are trying to set the background as some other resource file in your drawable folder, say, a custom-button.xml, then try this:\nbutton_name.setBackgroundResource(R.drawable.custom_button_file_name);\n\neg. Say, you have a custom-button.xml file. Then,\nbutton_name.setBackgroundResource(R.drawable.custom_button);\n\nWill set the button background as the custom-button.xml file.\n", "I have the same problem\nthe solution for me was the background color was colorPrimary of my theme\nyou can use custom theme as one answer say above and set the colorPrimary to what you want\n1- add this to your \"value/themes/themes.xml\" inside resources\n<resources>\n <style name=\"Button.color\" parent=\"ThemeOverlay.AppCompat\">\n <item name=\"colorPrimary\">@color/purple_500</item>\n </style>\n</resources>\n\n2- add this line to the button you want to have the color\n <Button\n \n android:theme=\"@style/Button.color\"/>\n\n", "backgroundTint above API21 background has no effect it takes colorPrimary of the theme by default\n", "\nButton background color in xml\n\n<Button\n android:id=\"@+id/button\"\n android:background=\"#0000FF\"\n android:textColor=\"#FFFFFF\"/>\n\n\nChange button background color programmatically\n\nButton button = findViewById(R.id.button);\nbutton.setBackgroundColor(Color.BLUE);\n\n\nCustom button background\n\nshape.xml [res --> drawble]\n<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<shape xmlns:android=\"http://schemas.android.com/apk/res/android\">\n <corners android:radius=\"5dp\" />\n <gradient\n android:angle=\"0\"\n android:endColor=\"#0000FF\"\n android:startColor=\"#00FFFF\" />\n <stroke\n android:width=\"1dp\"\n android:color=\"#000000\" />\n</shape>\n\nAdd this line\nandroid:background=\"@drawable/shape\"\n\nfull code\n<Button\n android:id=\"@+id/button\"\n android:background=\"@drawable/shape\"\n android:textColor=\"#FFFFFF\"/>\n\n\nMaterial Design Button\n\nChange button background color\napp:backgroundTint=\"#0000FF\"\n\nButton Stroke color\napp:strokeColor=\"#000000\"\n\nButton Stroke width\napp:strokeWidth=\"2dp\"\n\nFull code\n<Button\n android:id=\"@+id/button\"\n android:textColor=\"#FFFFFF\"\n app:backgroundTint=\"#0000FF\"\n app:strokeColor=\"#000000\"\n app:strokeWidth=\"2dp\"/>\n\nHope you helpful!\n", "Go to themes.xml file under res -> values -> themes\nGo to the line -\n<style name=\"Theme.YourProjectName\" parent=\"Theme.MaterialComponents.DayNight.DarkActionBar\">\n\nand change it to -\n<style name=\"Theme.YourProjectName\" parent=\"Theme.AppCompat.DayNight.DarkActionBar\">\n\nThen proceed with changing\nandroid:backgroundTint to 
desired color\n" ]
[ 93, 48, 27, 6, 6, 4, 4, 3, 2, 2, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0 ]
[ "Go to res > values > themes > theme.xml/themes.xml. Then change:\n<style name=\"Theme.BirthdayGreet\" parent=\"Theme.MaterialComponents.DayNight.DarkActionBar\">\n\nTo:\n<style name=\"Theme.MemeShare\" parent=\"Theme.AppCompat.Light.NoActionBar\">)>\n\nWatch this video for more information.\n", "To change the color of button programmatically \nHere it is :\nButton b1;\n//colorAccent is the resource made in the color.xml file , you can change it.\nb1.setBackgroundResource(R.color.colorAccent); \n\n" ]
[ -1, -2 ]
[ "android", "android_button", "android_layout" ]
stackoverflow_0032671004_android_android_button_android_layout.txt
Q: nginx unable to load media files - 404 (Not found) I have tried everything to serve my media file but yet getting same 404 error. Please guide. My docker-compose file: version: "3.9" services: nginx: container_name: realestate_preprod_nginx_con build: ./nginx volumes: - static_volume:/home/inara/RealEstatePreProd/static - media_volume:/home/inara/RealEstatePreProd/media networks: glory1network: ipv4_address: 10.1.1.8 expose: - 8000 depends_on: - realestate_frontend - realestate_backend real_estate_master_db: image: postgres:latest container_name: realestate_master_db_con env_file: - "./database/master_env" restart: "always" networks: glory1network: ipv4_address: 10.1.1.5 expose: - 5432 volumes: - real_estate_master_db_volume:/var/lib/postgresql/data real_estate_tenant1_db: image: postgres:latest container_name: realestate_tenant1_db_con env_file: - "./database/tenant1_env" restart: "always" networks: glory1network: ipv4_address: 10.1.1.9 expose: - 5432 volumes: - real_estate_tenant1_db_volume:/var/lib/postgresql/data realestate_frontend: image: realestate_web_frontend_service container_name: realestate_frontend_con restart: "always" build: ./frontend command: bash -c "./realestate_frontend_ctl.sh" expose: - 8092 networks: glory1network: ipv4_address: 10.1.1.6 depends_on: - real_estate_master_db - real_estate_tenant1_db realestate_backend: image: realestate_web_backend_service container_name: realestate_backend_con restart: "always" build: ./backend command: bash -c "./realestate_backend_ctl.sh" expose: - 8091 volumes: - static_volume:/home/inara/RealEstatePreProd/static - media_volume:/home/inara/RealEstatePreProd/media networks: glory1network: ipv4_address: 10.1.1.7 env_file: - "./database/env" depends_on: - realestate_frontend - real_estate_master_db - real_estate_tenant1_db networks: glory1network: external: true volumes: real_estate_master_db_volume: real_estate_tenant1_db_volume: static_volume: media_volume: My nginx configuration file: upstream realestate_frontend_site { server realestate_frontend:8092; } server { listen 8000; access_log /home/inara/RealEstatePreProd/realestate_frontend-access.log; error_log /home/inara/RealEstatePreProd/realestate_frontend-error.log; client_max_body_size 0; location / { proxy_pass http://realestate_frontend_site; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; proxy_redirect off; client_max_body_size 0; } } upstream realestate_backend_site { server realestate_backend:8091; } server { listen 8000; access_log /home/inara/RealEstatePreProd/realestate_backend-access.log; error_log /home/inara/RealEstatePreProd/realestate_backend-error.log; location / { proxy_pass http://realestate_backend_site; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; proxy_redirect off; } location /static { root /home/inara/RealEstatePreProd; } location /media/ { alias /home/inara/RealEstatePreProd/media/; } } All the APIs are working fine but any media file gives 404. I have checked the volume and validated that files being accessed are present there. I logged in my docker container and validated file presence in media folder there too. Please guide what did I miss ?? I expect to access my media files but getting 404 A: First step is to define /static location directive above / location directive. Because as per your code /static url is also routed to / , as / is above /static. 
Anything for /static should go to the /static block and not to /; try this and let us know whether it resolves the error. A: Make sure you specify MEDIA_URL and MEDIA_ROOT in RealEstatePreProd.settings: MEDIA_URL = '/media/' MEDIA_ROOT = BASE_DIR / 'media'
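For illustration, the two suggestions can be combined in the backend server block; a minimal sketch assuming the same container paths as the question (note that nginx matches prefix locations by longest prefix, so /static/ and /media/ win over / regardless of order):

server {
    listen 8000;

    # Serve collected static files and uploads directly from the shared volumes
    location /static/ {
        alias /home/inara/RealEstatePreProd/static/;
    }

    location /media/ {
        alias /home/inara/RealEstatePreProd/media/;
    }

    # Everything else goes to Django
    location / {
        proxy_pass http://realestate_backend_site;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
}

One more thing worth checking, offered as an assumption: the posted config has two server blocks that both listen on 8000 with no server_name, and in that case nginx sends every request to the first (frontend) block, so /media requests may never reach the block that serves the files.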
nginx unable to load media files - 404 (Not found)
I have tried everything to serve my media file but yet getting same 404 error. Please guide. My docker-compose file: version: "3.9" services: nginx: container_name: realestate_preprod_nginx_con build: ./nginx volumes: - static_volume:/home/inara/RealEstatePreProd/static - media_volume:/home/inara/RealEstatePreProd/media networks: glory1network: ipv4_address: 10.1.1.8 expose: - 8000 depends_on: - realestate_frontend - realestate_backend real_estate_master_db: image: postgres:latest container_name: realestate_master_db_con env_file: - "./database/master_env" restart: "always" networks: glory1network: ipv4_address: 10.1.1.5 expose: - 5432 volumes: - real_estate_master_db_volume:/var/lib/postgresql/data real_estate_tenant1_db: image: postgres:latest container_name: realestate_tenant1_db_con env_file: - "./database/tenant1_env" restart: "always" networks: glory1network: ipv4_address: 10.1.1.9 expose: - 5432 volumes: - real_estate_tenant1_db_volume:/var/lib/postgresql/data realestate_frontend: image: realestate_web_frontend_service container_name: realestate_frontend_con restart: "always" build: ./frontend command: bash -c "./realestate_frontend_ctl.sh" expose: - 8092 networks: glory1network: ipv4_address: 10.1.1.6 depends_on: - real_estate_master_db - real_estate_tenant1_db realestate_backend: image: realestate_web_backend_service container_name: realestate_backend_con restart: "always" build: ./backend command: bash -c "./realestate_backend_ctl.sh" expose: - 8091 volumes: - static_volume:/home/inara/RealEstatePreProd/static - media_volume:/home/inara/RealEstatePreProd/media networks: glory1network: ipv4_address: 10.1.1.7 env_file: - "./database/env" depends_on: - realestate_frontend - real_estate_master_db - real_estate_tenant1_db networks: glory1network: external: true volumes: real_estate_master_db_volume: real_estate_tenant1_db_volume: static_volume: media_volume: My nginx configuration file: upstream realestate_frontend_site { server realestate_frontend:8092; } server { listen 8000; access_log /home/inara/RealEstatePreProd/realestate_frontend-access.log; error_log /home/inara/RealEstatePreProd/realestate_frontend-error.log; client_max_body_size 0; location / { proxy_pass http://realestate_frontend_site; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; proxy_redirect off; client_max_body_size 0; } } upstream realestate_backend_site { server realestate_backend:8091; } server { listen 8000; access_log /home/inara/RealEstatePreProd/realestate_backend-access.log; error_log /home/inara/RealEstatePreProd/realestate_backend-error.log; location / { proxy_pass http://realestate_backend_site; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; proxy_redirect off; } location /static { root /home/inara/RealEstatePreProd; } location /media/ { alias /home/inara/RealEstatePreProd/media/; } } All the APIs are working fine but any media file gives 404. I have checked the volume and validated that files being accessed are present there. I logged in my docker container and validated file presence in media folder there too. Please guide what did I miss ?? I expect to access my media files but getting 404
[ "First step is to define /static location directive above / location directive.\nBecause as per your code /static url is also routed to / , as / is above /static. Anything for /static should go to /static and not to /, so you can try this and let us know if the error is resolved or not.\n", "Make sure you specify MEDIA_URL and MEDIA_ROOT in RealEstatePreProd.settings:\nMEDIA_URL = '/media/'\nMEDIA_ROOT = BASE_DIR / 'media'\n\n" ]
[ 0, 0 ]
[]
[]
[ "django", "docker", "http_status_code_404", "media", "nginx" ]
stackoverflow_0074675514_django_docker_http_status_code_404_media_nginx.txt
Q: print("Are you a human: "str(human)) what did i do wrong (https://i.stack.imgur.com/IFtgk.png) can someone help me plzzzzz A: Add '+' between two variables , like print("Are You human"+str(human)) A: if you want to output whit print always remember to use + between variables even there are string human=True print(" are ypu a human "+str(human))
print("Are you a human: "str(human)) what did i do wrong
(https://i.stack.imgur.com/IFtgk.png) Can someone help me, please?
[ "Add '+' between two variables , like print(\"Are You human\"+str(human))\n", "if you want to output whit print always remember to use + between variables even\nthere are string\nhuman=True\nprint(\" are ypu a human \"+str(human))\n" ]
[ 0, 0 ]
[]
[]
[ "boolean", "python" ]
stackoverflow_0074677299_boolean_python.txt
Q: Issue with custom Annotation created from javax.persistence.Entity annotation We have a requirement to customize all SQL and NoSQL database annotations in a Spring Boot application. We are able to do it for MongoDB but unable to do it for a SQL database, and we get a java.lang.IllegalArgumentException: Not a managed type: class com.entity.EmployeeEntity exception. SQL JPA is not working: import static java.lang.annotation.ElementType.TYPE; import static java.lang.annotation.RetentionPolicy.RUNTIME; import java.lang.annotation.Documented; import java.lang.annotation.Retention; import java.lang.annotation.Target; import javax.persistence.Entity; import org.springframework.core.annotation.AliasFor; @Target(value = { TYPE }) @Retention(value = RUNTIME) @Documented @Entity public @interface MyEntity { @AliasFor(annotation = Entity.class) String name() default ""; } import java.io.Serializable; import java.util.List; import java.util.Map; import com.mt.mtamp.core.annotations.MtAmpEntity; import com.mt.mtamp.core.annotations.MtAmpId; import lombok.Data; @Data @MyEntity(name = "employee") public class EmployeeEntity implements Serializable{ private static final long serialVersionUID = 1L; @MtAmpId private Long storeId; private String storeName; } - **Exception**: Caused by: java.lang.IllegalArgumentException: Not a managed type: class com.entity.EmployeeEntity at org.hibernate.metamodel.internal.MetamodelImpl.managedType(MetamodelImpl.java:583) ~[hibernate-core-5.6.14.Final.jar:5.6.14.Final] at org.hibernate.metamodel.internal.MetamodelImpl.managedType(MetamodelImpl.java:85) ~[hibernate-core-5.6.14.Final.jar:5.6.14.Final] A: @Inherited annotation is missing, please follow: @Target(value = { TYPE }) @Retention(value = RUNTIME) @Documented @Inherited // <-- add this line @Entity public @interface MyEntity { @AliasFor(annotation = Entity.class) String name() default ""; }
Issue with custom Annotation created from javax.persistence.Entity annotation
We have a requirement to customize all SQL and NoSQL database annotations in a Spring Boot application. We are able to do it for MongoDB but unable to do it for a SQL database, and we get a java.lang.IllegalArgumentException: Not a managed type: class com.entity.EmployeeEntity exception. SQL JPA is not working: import static java.lang.annotation.ElementType.TYPE; import static java.lang.annotation.RetentionPolicy.RUNTIME; import java.lang.annotation.Documented; import java.lang.annotation.Retention; import java.lang.annotation.Target; import javax.persistence.Entity; import org.springframework.core.annotation.AliasFor; @Target(value = { TYPE }) @Retention(value = RUNTIME) @Documented @Entity public @interface MyEntity { @AliasFor(annotation = Entity.class) String name() default ""; } import java.io.Serializable; import java.util.List; import java.util.Map; import com.mt.mtamp.core.annotations.MtAmpEntity; import com.mt.mtamp.core.annotations.MtAmpId; import lombok.Data; @Data @MyEntity(name = "employee") public class EmployeeEntity implements Serializable{ private static final long serialVersionUID = 1L; @MtAmpId private Long storeId; private String storeName; } - **Exception**: Caused by: java.lang.IllegalArgumentException: Not a managed type: class com.entity.EmployeeEntity at org.hibernate.metamodel.internal.MetamodelImpl.managedType(MetamodelImpl.java:583) ~[hibernate-core-5.6.14.Final.jar:5.6.14.Final] at org.hibernate.metamodel.internal.MetamodelImpl.managedType(MetamodelImpl.java:85) ~[hibernate-core-5.6.14.Final.jar:5.6.14.Final]
[ "@Inherited annotation is missing, please follow:\n@Target(value = { TYPE })\n@Retention(value = RUNTIME)\n@Documented\n@Inherited // <-- add this line\n@Entity\npublic @interface MyEntity {\n\n @AliasFor(annotation = Entity.class)\n String name() default \"\";\n}\n\n" ]
[ 2 ]
[]
[]
[ "customization", "spring_annotations", "spring_data_jpa" ]
stackoverflow_0074655069_customization_spring_annotations_spring_data_jpa.txt
Q: Please explain this trait with a self reference I've seen this trait as a way to pass references to async functions. I don't understand it trait AsyncSingleArgFnOnce<Arg>: FnOnce(Arg) -> <Self as AsyncSingleArgFnOnce<Arg>>::Fut { type Fut: Future<Output = <Self as AsyncSingleArgFnOnce<Arg>>::Output>; type Output; } I especially don't understand the super trait referencing the trait in the function return. It looks like a lot of the trait text is to use associated types in the correct places. A: First, we have a trait AsyncSingleArgFnOnce with a generic parameter Arg. It has an associated type Output specifying the declared return type of the async fn. It then has another associated type, Fut, that specifies the actual return type of the async fn. Remember that: async fn foo() -> Bar { ... } Is actually: fn foo() -> impl Future<Output = Bar> { async move { ... } } So Output is Bar, and Fut is the impl Future. This associated type is constrained to be a future (type Fut: Future<Output = ...>) whose output type is <Self as AsyncSingleArgFnOnce<Arg>>::Output. AsyncSingleArgFnOnce<Arg> is just the trait itself; <Self as AsyncSingleArgFnOnce<Arg>>::Output is the fully-qualified Output associated type of the trait AsyncSingleArgFnOnce of the implementing type. That is, just the value of the associated type Output. The trait has FnOnce(Arg) -> ... as a supertrait, that means it needs to be a function (FnOnce) that takes Arg (the generic parameter) and returns <Self as AsyncSingleArgFnOnce<Arg>>::Fut. Just as with <Self as AsyncSingleArgFnOnce<Arg>>::Output, that just means the value of the associated type Fut. So we have a trait, that requires a function with a single argument and a future return type. The returned future is in the associated type Fut; the declared return type is in the associated type Output.
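To make the trait usable, crates that define this pattern typically pair it with a blanket implementation, so that every ordinary single-argument async closure automatically satisfies it. The sketch below is illustrative only (the blanket impl and call_with are not from the original question, and it assumes the trait definition above is in scope):

use std::future::Future;

// Blanket impl: any FnOnce(Arg) returning a future qualifies.
impl<Arg, F, Fut> AsyncSingleArgFnOnce<Arg> for F
where
    F: FnOnce(Arg) -> Fut,
    Fut: Future,
{
    type Fut = Fut;
    type Output = Fut::Output;
}

// Generic code can now name the future's output type. The fully
// qualified form is needed because FnOnce also defines an Output type.
async fn call_with<F>(f: F, arg: u32) -> <F as AsyncSingleArgFnOnce<u32>>::Output
where
    F: AsyncSingleArgFnOnce<u32>,
{
    f(arg).await
}

// e.g. call_with(|x: u32| async move { x + 1 }, 41).await == 42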
Please explain this trait with a self reference
I've seen this trait as a way to pass references to async functions. I don't understand it trait AsyncSingleArgFnOnce<Arg>: FnOnce(Arg) -> <Self as AsyncSingleArgFnOnce<Arg>>::Fut { type Fut: Future<Output = <Self as AsyncSingleArgFnOnce<Arg>>::Output>; type Output; } I especially don't understand the super trait referencing the trait in the function return. It looks like a lot of the trait text is to use associated types in the correct places.
[ "First, we have a trait AsyncSingleArgFnOnce with a generic parameter Arg.\nIt has an associated type Output specifying the declared return type of the async fn.\nIt then has another associated type, Fut, that specifies the actual return type of the async fn. Remember that:\nasync fn foo() -> Bar { ... }\n\nIs actually:\nfn foo() -> impl Future<Output = Bar> { async move { ... } }\n\nSo Output is Bar, and Fut is the impl Future.\nThis associated type is constrained to be a future (type Fut: Future<Output = ...>) whose output type is <Self as AsyncSingleArgFnOnce<Arg>>::Output. AsyncSingleArgFnOnce<Arg> is just the trait itself; <Self as AsyncSingleArgFnOnce<Arg>>::Output is the fully-qualified Output associated type of the trait AsyncSingleArgFnOnce of the implementing type. That is, just the value of the associated type Output.\nThe trait has FnOnce(Arg) -> ... as a supertrait, that means it needs to be a function (FnOnce) that takes Arg (the generic parameter) and returns <Self as AsyncSingleArgFnOnce<Arg>>::Fut. Just as with <Self as AsyncSingleArgFnOnce<Arg>>::Output, that just means the value of the associated type Fut.\nSo we have a trait, that requires a function with a single argument and a future return type. The returned future is in the associated type Fut; the declared return type is in the associated type Output.\n" ]
[ 0 ]
[]
[]
[ "rust" ]
stackoverflow_0074677178_rust.txt
Q: Pass environment variables to Github action Github provides secrets, whose values can be used in workflows. Unfortunately, the values of secrets are protected and we can't easily see them in the repo or debug them in the workflow, as they are scrubbed. Is there a way to define an "environment variable" in the repository that can be easily seen and debugged? My use case is for configuration that can be easily modified if the repo is forked. A: You can store environment variables in an .env file like this: FOO=bar Then you can write code to append data from that file to $GITHUB_ENV: name: CI on: workflow_dispatch: jobs: foo: runs-on: ubuntu-latest steps: - uses: actions/checkout@v3 - run: cat .env >> $GITHUB_ENV - name: Use the value run: echo $FOO You'll need to do cat .env >> $GITHUB_ENV (and use actions/checkout) for each job where you need to access env vars from that file. DO NOT STORE SECRETS IN .env -- use it only for storing configurations, etc. Complete code: https://github.com/brc-dd/env-from-file You can also change .env to something like .env.github to keep things more organized. A: FYI: you can pass a secret as an env variable inside a job. Sample step where I have added foo as a secret in the actions workflow: - name: simple secret id: secret_env env: foo: ${{ secrets.foo }} run: echo $foo
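When the values are not sensitive at all, a workflow-level env block is an even simpler option: it sits in plain sight in the repository, is easy to debug, and forks can edit it freely. A minimal sketch (names are illustrative):

name: CI
on: [push]

env:
  FOO: bar   # visible in the repo, easy to change in a fork

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - run: echo "FOO is $FOO"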
Pass environment variables to Github action
Github provides secrets, whose values can be used in workflows. Unfortunately, the values of secrets are protected and we can't easily see them in the repo or debug them in the workflow, as they are scrubbed. Is there a way to define an "environment variable" in the repository that can be easily seen and debugged? My use case is for configuration that can be easily modified if the repo is forked.
[ "You can store environment variables in an .env file like this:\nFOO=bar\n\nThen you can write code to append data from that file to $GITHUB_ENV:\nname: CI\n\non:\n workflow_dispatch:\n\njobs:\n foo:\n runs-on: ubuntu-latest\n\n steps:\n - uses: actions/checkout@v3\n - run: cat .env >> $GITHUB_ENV\n\n - name: Use the value\n run: echo $FOO\n\nYou'll need to do cat .env >> $GITHUB_ENV (and use actions/checkout) for each job where you need to access env vars from that file.\nDO NOT STORE SECRETS IN .env -- use it only for storing configurations, etc.\nComplete code: https://github.com/brc-dd/env-from-file\nYou can also change .env to something like .env.github to keep things more organized.\n", "FYI: you can pass secret as env variable inside job.\nSample job where I have added foo as a secret in actions workflow:\n\nname: simple secret\nid: secret_env\nenv: \nfoo: ${{ secrets.foo}}\nrun: echo $foo **\n\nabove is example but ignore code syntax issues because of comments format here..\n" ]
[ 3, 0 ]
[]
[]
[ "github_actions" ]
stackoverflow_0072634723_github_actions.txt
Q: React 16: Warning: Expected server HTML to contain a matching in due to State I'm getting the following error using SSR Warning: Expected server HTML to contain a matching <div> in <div>. The issue is on the client when checking the browser width on component mount, and then setting the state of a component to render a mobile version of it instead. But the server is defaulting the desktop version of the container as it is not aware of the browser width. How do I deal with such a case? Can I somehow detect the browser width on the server and render the mobile container before sending to the client? EDIT: For now I've decided to render the container when the component mounts. This way, both server and client side render nothing initially preventing this error. I'm still open to a better solution A: This will solve the issue. // Fix: Expected server HTML to contain a matching <a> in const renderMethod = module.hot ? ReactDOM.render : ReactDOM.hydrate; renderMethod( <BrowserRouter> <RoutersController data={data} routes={routes} /> </BrowserRouter>, document.getElementById('root') ); A: Gatsby A recent feature flag of gatsby (introduced in v2.28, December 2020) ables to server-side render pages in dev environment. This flag is set to true by default. In this case, you might see this error message in the console Warning: Expected server HTML to contain a matching <div> in <div>. You can disable this flag in gatsby.config.js file : module.exports = { flags: { DEV_SSR: false, } } doc : https://www.gatsbyjs.com/docs/reference/release-notes/v2.28/#feature-flags-in-gatsby-configjs A: The current accepted answer doesn’t play well with TypeScript. Here is what works for me. <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8" /> <link rel="shortcut icon" href="/favicon.ico" /> <meta name="viewport" content="width=device-width, initial-scale=1" /> </head> <body> <noscript> You need to enable JavaScript to run this app. </noscript> <div id="root"></div> </body> </html> import React from "react" import { hydrate, render } from "react-dom" import BrowserRouter from "./routers/Browser" const root = document.getElementById("root") var renderMethod if (root && root.innerHTML !== "") { renderMethod = hydrate } else { renderMethod = render } renderMethod(<BrowserRouter />, document.getElementById("root")) A: This message can also occurs due to bad code that doesn't render consistent content between your SSR and CSR, thus hydrate can't resolve. For example SSR returns : ... <div id="root"> <div id="myDiv">My div content</div> </div> ... While CSR returns : ... <div id="root"> <div id="anotherDiv">My other div content</div> </div> ... The best solution in this case is not installing libraries or turning off hydrate but actually fix the inconsistency in your code. Temporaryly removing <script src="/react-bundle-path.js"></script> from index.js can help to compare the exact content rendered by SSR with content rendered by CSR hydrate. A: HTTP Client Hints could help you with this. Another interesting article regarding Client Hints. A: My solution is to use a middleware like express-useragent to detect the browser user agent. Then, in the server side, create a viewsize like {width, height} by the following rules if (ua.isMobile) { return {width: 360, height: 480} } if (ua.isDesktop) { return {width: 768, height: 600} } return {width: 360, height: 480} // default, and for bot Then, it is still somehow a responsive design in SSR. 
A: From https://github.com/vercel/next.js/discussions/17443#discussioncomment-87097 Code that is only supposed to run in the browser should be executed inside useEffect. That's required because the first render should match the initial render of the server. If you manipulate that result it creates a mismatch and React won't be able to hydrate the page successfully. When you run browser only code (like trying to access window) inside useEffect, it will happen after hydration
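As a sketch of that advice applied to the original width-check problem (the 768px breakpoint and the two view components are placeholders, not code from the thread):

import { useEffect, useState } from "react";

function ResponsiveContainer() {
  // The server and the first client render agree on this default,
  // so hydration sees matching HTML.
  const [isMobile, setIsMobile] = useState(false);

  useEffect(() => {
    // Browser-only code runs after hydration, avoiding the mismatch.
    setIsMobile(window.innerWidth < 768);
  }, []);

  // MobileView and DesktopView stand in for the real containers.
  return isMobile ? <MobileView /> : <DesktopView />;
}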
React 16: Warning: Expected server HTML to contain a matching in due to State
I'm getting the following error using SSR Warning: Expected server HTML to contain a matching <div> in <div>. The issue is on the client when checking the browser width on component mount, and then setting the state of a component to render a mobile version of it instead. But the server is defaulting the desktop version of the container as it is not aware of the browser width. How do I deal with such a case? Can I somehow detect the browser width on the server and render the mobile container before sending to the client? EDIT: For now I've decided to render the container when the component mounts. This way, both server and client side render nothing initially preventing this error. I'm still open to a better solution
[ "This will solve the issue.\n// Fix: Expected server HTML to contain a matching <a> in\nconst renderMethod = module.hot ? ReactDOM.render : ReactDOM.hydrate;\nrenderMethod(\n <BrowserRouter>\n <RoutersController data={data} routes={routes} />\n </BrowserRouter>,\n document.getElementById('root')\n);\n\n", "Gatsby\nA recent feature flag of gatsby (introduced in v2.28, December 2020) ables to server-side render pages in dev environment.\nThis flag is set to true by default. In this case, you might see this error message in the console\nWarning: Expected server HTML to contain a matching <div> in <div>.\n\nYou can disable this flag in gatsby.config.js file :\nmodule.exports = {\n flags: {\n DEV_SSR: false,\n }\n}\n\n\ndoc : https://www.gatsbyjs.com/docs/reference/release-notes/v2.28/#feature-flags-in-gatsby-configjs\n", "The current accepted answer doesn’t play well with TypeScript. Here is what works for me.\n<!DOCTYPE html>\n<html lang=\"en\">\n <head>\n <meta charset=\"utf-8\" />\n <link rel=\"shortcut icon\" href=\"/favicon.ico\" />\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1\" />\n </head>\n <body>\n <noscript>\n You need to enable JavaScript to run this app.\n </noscript>\n <div id=\"root\"></div>\n </body>\n</html>\n\nimport React from \"react\"\nimport { hydrate, render } from \"react-dom\"\nimport BrowserRouter from \"./routers/Browser\"\n\nconst root = document.getElementById(\"root\")\nvar renderMethod\nif (root && root.innerHTML !== \"\") {\n renderMethod = hydrate\n} else {\n renderMethod = render\n}\nrenderMethod(<BrowserRouter />, document.getElementById(\"root\"))\n\n", "This message can also occurs due to bad code that doesn't render consistent content between your SSR and CSR, thus hydrate can't resolve.\nFor example SSR returns :\n...\n<div id=\"root\">\n <div id=\"myDiv\">My div content</div>\n</div>\n...\n\nWhile CSR returns :\n...\n<div id=\"root\">\n <div id=\"anotherDiv\">My other div content</div>\n</div>\n...\n\nThe best solution in this case is not installing libraries or turning off hydrate but actually fix the inconsistency in your code.\nTemporaryly removing <script src=\"/react-bundle-path.js\"></script> from index.js can help to compare the exact content rendered by SSR with content rendered by CSR hydrate.\n", "HTTP Client Hints could help you with this.\nAnother interesting article regarding Client Hints.\n", "My solution is to use a middleware like express-useragent to detect the browser user agent.\nThen, in the server side, create a viewsize like {width, height} by the following rules\nif (ua.isMobile) {\n return {width: 360, height: 480}\n}\n\nif (ua.isDesktop) {\n return {width: 768, height: 600}\n}\n\nreturn {width: 360, height: 480} // default, and for bot\n\nThen, it is still somehow a responsive design in SSR.\n", "From https://github.com/vercel/next.js/discussions/17443#discussioncomment-87097\n\nCode that is only supposed to run in the browser should be executed inside useEffect. That's required because the first render should match the initial render of the server. If you manipulate that result it creates a mismatch and React won't be able to hydrate the page successfully.\nWhen you run browser only code (like trying to access window) inside useEffect, it will happen after hydration \n\n" ]
[ 34, 21, 7, 5, 1, 0, 0 ]
[]
[]
[ "client_hints", "isomorphic_javascript", "reactjs" ]
stackoverflow_0046865880_client_hints_isomorphic_javascript_reactjs.txt
Q: Python (RenPy): Textbuttons executing a function they don't explicitly call every time they're pressed In the game I'm working on, I use an array to track the current stats of the player's company, and the following function to edit the array. init python: #The following store item objects, which include an array of their own stats #Stores currently owned equipment Equipment = [] #Stores items available to buy Items = [] #Stores currently equipped equipment Equipped = [] #The company's current stats SecArray = [0, 0, 0, 0, 0, 0] #Called whenever moving something in or out of the currently equipped items. #Pass the current item's stat array, the stat array, and a + or - symbol def arrayedit(array, SecArray, symbol): Notify("The stats have changed") if symbol == "+": SecArray[0] += array[0] SecArray[1] += array[1] SecArray[2] += array[2] SecArray[3] += array[3] SecArray[4] += array[4] SecArray[5] += array[5] if symbol == "-": SecArray[0] -= array[0] SecArray[1] -= array[1] SecArray[2] -= array[2] SecArray[3] -= array[3] SecArray[4] -= array[4] SecArray[5] -= array[5] return() If any items are in the "Equipment" array, however, their stats are added to the current stats every time a textbutton is clicked (so, for example, if an item has 3 in a stat, the player's current stats will increase by 3 every time any button is clicked, counting infinitely upward). Similarly, if any items are in the "Equipped" array, their current stats are subtracted from the player's current stats every time a textbutton is clicked. Items in the "Items" array do not have any effect. The following code is for windows to shop and equip/dequip equipment. screen shopping1(): frame: xpos (config.screen_width*25/64) ypos (config.screen_height*11/64) ysize (config.screen_height*31/64) xsize (config.screen_width*36/64) has side "c r b" viewport: yadjustment tutorials_adjustment mousewheel True vbox: xpos (config.screen_width*2/5) ypos (config.screen_height*3/16) ysize (config.screen_height/2) xsize (config.screen_width/2) for i in Items: if i.kind == "Item": if i.cost <= Money: textbutton "[i.title] $[i.cost]": action [AddToSet(Equipment, i), RemoveFromSet(Items, i), Hide("shopping1"), Return(i.cost)] left_padding 20 xfill True hovered Notify(i.hover) else: null height 10 text i.title alt "" null height 5 for i in Policies: if i.kind == "Policy": if i.cost <= Money: textbutton "[i.title] $[i.cost]": action [AddToSet(OwnedPolicies, i), RemoveFromSet(Policies, i), Hide("shopping1"), Return(i.cost)] left_padding 20 xfill True hovered Notify(i.hover) else: null height 10 text i.title alt "" null height 5 for i in Trainings: if i.kind == "Training": if i.cost <= Money: textbutton "[i.title] $[i.cost]": action [AddToSet(OwnedTrainings, i), RemoveFromSet(Trainings, i), Hide("shopping1"), Return(i.cost)] left_padding 20 xfill True hovered Notify(i.hover) else: null height 10 text i.title alt "" null height 5 bar adjustment tutorials_adjustment style "vscrollbar" textbutton _("Return"): xfill True action [Hide("shopping1")] top_margin 10 screen equipmentedit(): frame: xpos (config.screen_width*5/128) ypos (config.screen_height*2/64) ysize (config.screen_height*47/64) xsize (config.screen_width*19/64) has side "c r b" viewport: yadjustment tutorials_adjustment mousewheel True vbox: null height 10 text "Unequipped Items" alt "" null height 5 for i in Equipment: if i.kind == "Item": textbutton "[i.title] $[i.cost]": action [arrayedit(i.stats, SecArray, "+"), AddToSet(Equipped, i), RemoveFromSet(Equipment, i), Hide("equipmentedit"), 
Return(i.cost)] left_padding 20 xfill True else: null height 10 text i.title alt "" null height 5 null height 10 text "Equipped Items" alt "" null height 5 for i in Equipped: if i.kind == "Item": textbutton "[i.title] $[i.cost]": action [arrayedit(i.stats, SecArray, "-"), AddToSet(Equipment, i), RemoveFromSet(Equipped, i), Hide("equipmentedit"), Return(i.cost)] left_padding 20 xfill True else: null height 10 text i.title alt "" null height 5 bar adjustment tutorials_adjustment style "vscrollbar" textbutton _("Return"): xfill True action [Hide("equipmentedit")] top_margin 10 Outside of this, the arrays and functions used are not called or referenced elsewhere in the program. I believe the "arrayedit" function is being called for items in the equipped and equipment arrays every time a button is clicked, including the return buttons, but I'm unsure of why. Any insight would be greatly appreciated! A: I had the same problem and I believe you're falling victim to Renpy's prediction, as demonstrated by this thread on GitHub: https://github.com/renpy/renpy/issues/3718. Renpy runs through all the code in a screen when it's shown (and multiple other times) including searching any function calls in order to pre-load assets. Usually this shouldn't cause a problem because most of the Renpy calls utilise renpy.restart_interaction() which reverts any changes to variables. Unfortunately it can struggle when it comes to directly calling Python objects. The solution I used for this was to have the screen element call an intermediate function which included the renpy.restart_interaction() call (for some reason putting the restart directly into my Python object's function was causing other problems). Something like this: init python: def intermediateFunc(funcToCall, obj, arg): funcToCall(obj, arg) renpy.restart_interaction() screen myScreen: imagebutton: # ... action Function(intermediateFunc, MyObj.DoSomething, obj, "an argument")
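A related detail worth checking, offered as a hedged suggestion rather than part of the answer above: in a screen's action list, a bare call like arrayedit(...) is evaluated whenever Ren'Py evaluates the screen, not on click. Wrapping it in Ren'Py's built-in Function action defers the call until the button is actually pressed:

textbutton "[i.title] $[i.cost]":
    # Function(...) builds an Action; arrayedit now runs only on click.
    action [Function(arrayedit, i.stats, SecArray, "+"),
            AddToSet(Equipped, i),
            RemoveFromSet(Equipment, i),
            Hide("equipmentedit"),
            Return(i.cost)]
    left_padding 20
    xfill True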
Python (RenPy): Textbuttons executing a function they don't explicitly call every time they're pressed
In the game I'm working on, I use an array to track the current stats of the player's company, and the following function to edit the array. init python: #The following store item objects, which include an array of their own stats #Stores currently owned equipment Equipment = [] #Stores items available to buy Items = [] #Stores currently equipped equipment Equipped = [] #The company's current stats SecArray = [0, 0, 0, 0, 0, 0] #Called whenever moving something in or out of the currently equipped items. #Pass the current item's stat array, the stat array, and a + or - symbol def arrayedit(array, SecArray, symbol): Notify("The stats have changed") if symbol == "+": SecArray[0] += array[0] SecArray[1] += array[1] SecArray[2] += array[2] SecArray[3] += array[3] SecArray[4] += array[4] SecArray[5] += array[5] if symbol == "-": SecArray[0] -= array[0] SecArray[1] -= array[1] SecArray[2] -= array[2] SecArray[3] -= array[3] SecArray[4] -= array[4] SecArray[5] -= array[5] return() If any items are in the "Equipment" array, however, their stats are added to the current stats every time a textbutton is clicked (so, for example, if an item has 3 in a stat, the player's current stats will increase by 3 every time any button is clicked, counting infinitely upward). Similarly, if any items are in the "Equipped" array, their current stats are subtracted from the player's current stats every time a textbutton is clicked. Items in the "Items" array do not have any effect. The following code is for windows to shop and equip/dequip equipment. screen shopping1(): frame: xpos (config.screen_width*25/64) ypos (config.screen_height*11/64) ysize (config.screen_height*31/64) xsize (config.screen_width*36/64) has side "c r b" viewport: yadjustment tutorials_adjustment mousewheel True vbox: xpos (config.screen_width*2/5) ypos (config.screen_height*3/16) ysize (config.screen_height/2) xsize (config.screen_width/2) for i in Items: if i.kind == "Item": if i.cost <= Money: textbutton "[i.title] $[i.cost]": action [AddToSet(Equipment, i), RemoveFromSet(Items, i), Hide("shopping1"), Return(i.cost)] left_padding 20 xfill True hovered Notify(i.hover) else: null height 10 text i.title alt "" null height 5 for i in Policies: if i.kind == "Policy": if i.cost <= Money: textbutton "[i.title] $[i.cost]": action [AddToSet(OwnedPolicies, i), RemoveFromSet(Policies, i), Hide("shopping1"), Return(i.cost)] left_padding 20 xfill True hovered Notify(i.hover) else: null height 10 text i.title alt "" null height 5 for i in Trainings: if i.kind == "Training": if i.cost <= Money: textbutton "[i.title] $[i.cost]": action [AddToSet(OwnedTrainings, i), RemoveFromSet(Trainings, i), Hide("shopping1"), Return(i.cost)] left_padding 20 xfill True hovered Notify(i.hover) else: null height 10 text i.title alt "" null height 5 bar adjustment tutorials_adjustment style "vscrollbar" textbutton _("Return"): xfill True action [Hide("shopping1")] top_margin 10 screen equipmentedit(): frame: xpos (config.screen_width*5/128) ypos (config.screen_height*2/64) ysize (config.screen_height*47/64) xsize (config.screen_width*19/64) has side "c r b" viewport: yadjustment tutorials_adjustment mousewheel True vbox: null height 10 text "Unequipped Items" alt "" null height 5 for i in Equipment: if i.kind == "Item": textbutton "[i.title] $[i.cost]": action [arrayedit(i.stats, SecArray, "+"), AddToSet(Equipped, i), RemoveFromSet(Equipment, i), Hide("equipmentedit"), Return(i.cost)] left_padding 20 xfill True else: null height 10 text i.title alt "" null height 5 null 
height 10 text "Equipped Items" alt "" null height 5 for i in Equipped: if i.kind == "Item": textbutton "[i.title] $[i.cost]": action [arrayedit(i.stats, SecArray, "-"), AddToSet(Equipment, i), RemoveFromSet(Equipped, i), Hide("equipmentedit"), Return(i.cost)] left_padding 20 xfill True else: null height 10 text i.title alt "" null height 5 bar adjustment tutorials_adjustment style "vscrollbar" textbutton _("Return"): xfill True action [Hide("equipmentedit")] top_margin 10 Outside of this, the arrays and functions used are not called or referenced elsewhere in the program. I believe the "arrayedit" function is being called for items in the equipped and equpiment arrays every time a button is clicked, including the return buttons, but I'm unsure of why. Any insight would be greatly appreciated!
[ "I had the same problem and I believe you're you're falling victim to Renpy's prediction, as demonstrated by this thread on GitHub: https://github.com/renpy/renpy/issues/3718.\nRenpy runs through all the code in a screen when it's shown (and multiple other times) including searching any function calls in order to pre-load assets. Usually this shouldn't cause a problem because most of the Renpy calls utilise renpy.restart_interaction() which reverts any changes to variables. Unfortunately it can struggle when it comes to directly calling Python objects.\nThe solution I used for this was to have the screen element call an intermediate function which included the renpy.restart_interaction() call (for some reason putting the restart directly into my Python object's function was causing other problems).\nSomething like this:\ninit python:\n def intermediateFunc(funcToCall, obj, arg):\n funcToCall(obj, arg)\n renpy.restart_interaction()\n\nscreen myScreen:\n imagebutton:\n # ...\n action Function(intermediateFunc, MyObj.DoSomething, obj, \"an argument\")\n\n" ]
[ 0 ]
[]
[]
[ "python", "renpy" ]
stackoverflow_0059776715_python_renpy.txt
Q: Batch File Running but not Following All Steps/Apps Script also hit and miss Ok, firstly let me say that I am a complete beginner aside from a random number generator I created in high school about 20 years ago. I know there is probably a much more elegant way to accomplish my task, but this is currently what I am comfortable with. Task: Move and rename .csv reports from a shared server folder to my Google Drive. At this point my apps script will take over and import into a specified Sheet. The apps scripts are embedded into each sheet (7 sheets total) but they are all the exact same code (changed for the specific files and sheets). Issue 1: The batch script works every time flawlessly when run manually, but when scheduled, sometimes it will not rename my files (but will still move them). Issue 2: The apps script is hit and miss, sometimes it runs perfectly and sometimes it fails with an error "TypeError: Cannot read property 'clearContents' of null". Thus the Sheet is not updated and I have csv files sitting in my drive doing nothing. Batch Script @echo off ren "\\Server\Folder\subfolder\DataDaily-Emb Smalls-*.csv" smalls.csv ren "\\Server\Folder\subfolder\DataDaily-HP & Laser-*.csv" hp.csv ren "\\Server\Folder\subfolder\DataDaily-Emb Hats-*.csv" hats.csv ren "\\Server\Folder\subfolder\DataDaily-Embroidery-*.csv" emb.csv ren "\\Server\Folder\subfolder\DataDaily-Screen Print-*.csv" sp.csv ren "\\Server\Folder\subfolder\DataDaily-Database-*.csv" database.csv robocopy \\Server\Folder\subfolder "G:\My Drive\Dashboard" /MOV /XF *.bat Apps Script Example hobbled together from different posts on this forum, it works on every other sheet except for this one function RecImport() { const csvFolderName = 'FolderName'; var file = DriveApp.getFilesByName("rec.csv").next(); var csvData = Utilities.parseCsv(file.getBlob().getDataAsString()); var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("rec"); sheet.clearContents(); sheet.getRange(1, 1, csvData.length, csvData[0].length).setValues(csvData); DriveApp.getFilesByName("rec.csv").next().setTrashed(true); } Example of other sheet code that works consistently function ImportSmallsCSVfromDrive() { const csvFolderName = 'FolderName'; var file = DriveApp.getFilesByName("smalls.csv").next(); var csvData = Utilities.parseCsv(file.getBlob().getDataAsString()); var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('smalls'); sheet.clearContents(); sheet.getRange(1, 1, csvData.length, csvData[0].length).setValues(csvData); DriveApp.getFilesByName("smalls.csv").next().setTrashed(true); } Ultimately, I am at a loss here as everything looks like it should be working but yet I still have renaming inconsistencies with my batch script and failed executions on my apps script. What am I missing? A: File locks are often problems when you're changing a file on a schedule. You can prove this by copying the files before the scheduled rename. Copying will often succeed while files are locked and then changes can be made. This may not fit your business case (maybe you can't copy them first) but it will help you identify the problem. You can add this to your robocopy to log while scheduled. You should get a detailed error message for the failed files. >> c:\log.txt
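For reference, the logging suggestion applied to the question's robocopy line might look like this (the log path and retry settings are arbitrary):

REM /R and /W bound the retries on locked files; stdout is appended to the log
robocopy \\Server\Folder\subfolder "G:\My Drive\Dashboard" /MOV /XF *.bat /R:2 /W:5 >> C:\robocopy-log.txt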
Batch File Running but not Following All Steps/Apps Script also hit and miss
Ok, firstly let me say that I am a complete beginner aside from a random number generator I created in high school about 20 years ago. I know there is probably a much more elegant way to accomplish my task, but this is currently what I am comfortable with. Task: Move and rename .csv reports from a shared server folder to my Google Drive. At this point my apps script will take over and import into a specified Sheet. The apps scripts are embedded into each sheet (7 sheets total) but they are all the exact same code (changed for the specific files and sheets). Issue 1: The batch script works every time flawlessly when run manually, but when scheduled, sometimes it will not rename my files (but will still move them). Issue 2: The apps script is hit and miss, sometimes it runs perfectly and sometimes it fails with an error "TypeError: Cannot read property 'clearContents' of null". Thus the Sheet is not updated and I have csv files sitting in my drive doing nothing. Batch Script @echo off ren "\\Server\Folder\subfolder\DataDaily-Emb Smalls-*.csv" smalls.csv ren "\\Server\Folder\subfolder\DataDaily-HP & Laser-*.csv" hp.csv ren "\\Server\Folder\subfolder\DataDaily-Emb Hats-*.csv" hats.csv ren "\\Server\Folder\subfolder\DataDaily-Embroidery-*.csv" emb.csv ren "\\Server\Folder\subfolder\DataDaily-Screen Print-*.csv" sp.csv ren "\\Server\Folder\subfolder\DataDaily-Database-*.csv" database.csv robocopy \\Server\Folder\subfolder "G:\My Drive\Dashboard" /MOV /XF *.bat Apps Script Example hobbled together from different posts on this forum, it works on every other sheet except for this one function RecImport() { const csvFolderName = 'FolderName'; var file = DriveApp.getFilesByName("rec.csv").next(); var csvData = Utilities.parseCsv(file.getBlob().getDataAsString()); var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("rec"); sheet.clearContents(); sheet.getRange(1, 1, csvData.length, csvData[0].length).setValues(csvData); DriveApp.getFilesByName("rec.csv").next().setTrashed(true); } Example of other sheet code that works consistently function ImportSmallsCSVfromDrive() { const csvFolderName = 'FolderName'; var file = DriveApp.getFilesByName("smalls.csv").next(); var csvData = Utilities.parseCsv(file.getBlob().getDataAsString()); var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('smalls'); sheet.clearContents(); sheet.getRange(1, 1, csvData.length, csvData[0].length).setValues(csvData); DriveApp.getFilesByName("smalls.csv").next().setTrashed(true); } Ultimately, I am at a loss here as everything looks like it should be working but yet I still have renaming inconsistencies with my batch script and failed executions on my apps script. What am I missing?
[ "File locks are often problems when you're changing a file on a schedule. You can prove this by copying the files before the scheduled rename. Copying will often succeed while files are locked and then changes can be made.\nThis may not fit your business case (maybe you can't copy them first) but it will help you identify the problem.\nYou can add this to your robocopy to log while scheduled. You should get a detailed error message for the failed files.\n>> c:\\log.txt\n\n" ]
[ 0 ]
[]
[]
[ "batch_file", "batch_rename", "csv", "google_apps_script", "google_sheets" ]
stackoverflow_0074677340_batch_file_batch_rename_csv_google_apps_script_google_sheets.txt
Q: How to hide content of .ts in angular? I'm writing calculation logic on a button click which can be viewed in main.ts even when built in prod mode. How can I hide the calculation written inside the event method of the .ts file? In .html - <button class="btn btn-primary" style="margin-left:15px;align-items:center" (click)="Calculate()">Calculate</button> In .ts - Calculate() { // some calculation logic which I want to hide and don't want to expose to front-end user through dev tools} Also, the values are calculated based on the form input the user enters, hence the calculation logic also contains the properties in two-way data binding. A: You should use a server-side language, such as Node.js, to perform the calculation and return the result to the front-end. This way, the calculation logic will not be visible to the user, since it will be executed on the server rather than in the browser.
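As a sketch of what that looks like in practice (the endpoint path, payload shape, and formula are all invented for illustration):

// server.ts (Node/Express): this file is never shipped to the browser
import express from "express";

const app = express();
app.use(express.json());

app.post("/api/calculate", (req, res) => {
  const { a, b } = req.body;   // the values the user typed into the form
  const result = a * b + 42;   // the hidden formula lives only on the server
  res.json({ result });
});

app.listen(3000);

The Angular Calculate() method then just posts the form values, e.g. this.http.post<{ result: number }>('/api/calculate', { a: this.a, b: this.b }).subscribe(r => this.result = r.result), so dev tools only ever see the inputs and the result.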
How to hide content of .ts in angular?
I'm writing calculation logic on a button click which can be viewed in main.ts even when built in prod mode. How can I hide the calculation written inside the event method of the .ts file? In .html - <button class="btn btn-primary" style="margin-left:15px;align-items:center" (click)="Calculate()">Calculate</button> In .ts - Calculate() { // some calculation logic which I want to hide and don't want to expose to front-end user through dev tools} Also, the values are calculated based on the form input the user enters, hence the calculation logic also contains the properties in two-way data binding.
[ "You shold use a server-side language, such as Node.js, to perform the calculation and return the result to the front-end. This way, the calculation logic will not be visible to the user, since it will be executed on the server rather than in the browser.\n" ]
[ 1 ]
[]
[]
[ "abstraction", "angular", "components", "typescript" ]
stackoverflow_0074677401_abstraction_angular_components_typescript.txt
Q: NullInjectorError: No provider for Store I'm receiving the following error when running my unit tests: Error: StaticInjectorError(DynamicTestModule)[BlogService -> Store]: StaticInjectorError(Platform: core)[BlogService -> Store]: NullInjectorError: No provider for Store! Here is the code in my test file: import { TestBed, inject } from '@angular/core/testing'; import { BlogService } from './blog.service'; describe('BlogService', () => { beforeEach(() => { TestBed.configureTestingModule({ providers: [BlogService] }); }); it('should be created', inject([BlogService], (service: BlogService) => { expect(service).toBeTruthy(); })); }); I'm not sure why this error is occurring. I thought the 'inject' call instantiates the service. A: You can use the ngrx mock store: import { provideMockStore } from '@ngrx/store/testing'; beforeEach(() => { TestBed.configureTestingModule({ providers: [provideMockStore({})], }); }); You can define a specific state and pass it as a method parameter. A: As stated here Add the following to the specs.ts file: // Add the import the module from the package import { StoreModule } from '@ngrx/store'; // Add the imported module to the imports array in beforeEach beforeEach(async(() => { TestBed.configureTestingModule({ imports: [ StoreModule.provideStore({}) ], declarations: [ // The component that's being tested ] }) .compileComponents(); })); And if you get the error Property 'provideStore' does not exist on type 'typeof StoreModule', use forRoot instead of provideStore. Also look here, and there is a similar question here. Cheers! A: if your service is referencing an ngrx store then you need to import the ngrx modules. I am making an assumption right now that you are doing that in your AppModule. You need to duplicate that in your TestBed module. I generally create a test ngrx module that does all that and then I can just import that in any spec file that references Store A: import { provideMockStore } from '@ngrx/store/testing'; import { StoreModule } from '@ngrx/store'; TestBed.configureTestingModule({ imports: [StoreModule.forRoot({})], providers: [provideMockStore()], });
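Expanding on the first answer's note about passing state: provideMockStore also accepts an initialState object, so selectors in the service under test see predictable data (the state shape below is invented):

import { provideMockStore } from '@ngrx/store/testing';

TestBed.configureTestingModule({
  providers: [
    BlogService,
    provideMockStore({
      initialState: { blog: { posts: [] } },  // hypothetical slice
    }),
  ],
});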
NullInjectorError: No provider for Store
I'm receiving the following error when running my unit tests: Error: StaticInjectorError(DynamicTestModule)[BlogService -> Store]: StaticInjectorError(Platform: core)[BlogService -> Store]: NullInjectorError: No provider for Store! Here is the code in my test file: import { TestBed, inject } from '@angular/core/testing'; import { BlogService } from './blog.service'; describe('BlogService', () => { beforeEach(() => { TestBed.configureTestingModule({ providers: [BlogService] }); }); it('should be created', inject([BlogService], (service: BlogService) => { expect(service).toBeTruthy(); })); }); I'm not sure why this error is occurring. I thought the 'inject' call instantiates the service.
[ "You can use the ngrx mock store:\nimport { provideMockStore } from '@ngrx/store/testing';\n\nbeforeEach(() => {\n TestBed.configureTestingModule({\n providers: [provideMockStore({})],\n });\n });\n\nYou can define a specific state and pass it as method parameter.\n", "As stated here\nAdd following to the specs.ts file:\n// Add the import the module from the package \nimport { StoreModule } from '@ngrx/store';\n\n // Add the imported module to the imports array in beforeEach \n beforeEach(async(() => {\n TestBed.configureTestingModule({\n imports: [\n StoreModule.provideStore({})\n ],\n declarations: [\n // The component that's being tested\n ]\n })\n .compileComponents();\n }));\n\nAnd if you get error Property 'provideStore' does not exist on type 'typeof StoreModule use forRoot instead of provideStore.\nAlso look here and here is similar question here.\nCheers!\n", "if your service is referencing an ngrx store then you need to import the ngrx modules. I am making an assumption right now that you are doing that in your AppModule. You need to duplicate that in your TestBed module. I generally create a test ngrx module that does all that and then I can just import that in any spec file that references Store\n", "import { provideMockStore } from '@ngrx/store/testing';\nimport { StoreModule } from '@ngrx/store';\n\nTestBed.configureTestingModule({\n imports: [\n StoreModule.forRoot(provideMockStore),\n\n" ]
[ 15, 12, 11, 0 ]
[]
[]
[ "angular", "karma_jasmine", "unit_testing" ]
stackoverflow_0051071603_angular_karma_jasmine_unit_testing.txt
Q: SlackBot with slash commands I need your help, please. I wrote a SlackBOT and enabled the slash command feature, but I see that every member can use the command in Slack (they type "/" and the command appears to them - which I don't want). Can I limit it to a dedicated channel only? Thanks! A: Yeah, you can limit a Slack bot's slash command to only work in a specific channel; you just need to specify the channel where the command should be available when you create the command using the Slack API or the bot's configuration settings: SLACK_BOT_TOKEN="your_bot_token" SLACK_CHANNEL_ID="your_channel_id" # create the slash command curl -X POST https://slack.com/api/commands.create \ -H "Authorization: Bearer $SLACK_BOT_TOKEN" \ -H "Content-type: application/json" \ -d '{ "name": "mycommand", "description": "My custom command", "usage_hint": "Usage hint for my command", "channel_id": "'"$SLACK_CHANNEL_ID"'", "command": "/mycommand" }' The channel_id parameter is used to specify the ID of the channel where the /mycommand command should be available. This will limit the command to only work in that specific channel.
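Since a slash command registered on a Slack app is offered by the client workspace-wide, a common fallback, sketched below with Bolt for Python and invented IDs, is to accept the command anywhere but only act in the dedicated channel:

from slack_bolt import App

app = App(token="xoxb-your-bot-token")   # placeholder token
ALLOWED_CHANNEL = "C0123456789"          # hypothetical channel ID

@app.command("/mycommand")
def handle_command(ack, body, respond):
    ack()  # Slack requires an acknowledgement within 3 seconds
    if body.get("channel_id") != ALLOWED_CHANNEL:
        respond("This command only works in the dedicated channel.")
        return
    respond("Running the command...")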
SlackBot with slash commands
I need your help, please. I wrote a SlackBOT and enabled the slash command feature, but I see that every member can use the command in Slack (they type "/" and the command appears to them - which I don't want). Can I limit it to a dedicated channel only? Thanks!
[ "Yeah you can limit a Slack Bot's slash command to only work in a specific channel, you just need to specify the channel where the command should be available when you create the command using the Slack API or the SlackBOT's configuration settings:\nSLACK_BOT_TOKEN=\"your_bot_token\"\nSLACK_CHANNEL_ID=\"your_channel_id\"\n\n# create the slash command\ncurl -X POST https://slack.com/api/commands.create \\\n -H \"Authorization: Bearer $SLACK_BOT_TOKEN\" \\\n -H \"Content-type: application/json\" \\\n -d '{\n \"name\": \"mycommand\",\n \"description\": \"My custom command\",\n \"usage_hint\": \"Usage hint for my command\",\n \"channel_id\": \"'\"$SLACK_CHANNEL_ID\"'\",\n \"command\": \"/mycommand\"\n }'\n\nThe channel_id parameter is used to specify the ID of the channel where the /mycommand command should be available. This will limit the command to only work in that specific channel.\n" ]
[ 2 ]
[]
[]
[ "slack", "slack_commands" ]
stackoverflow_0074677435_slack_slack_commands.txt
Q: Error: Failed executing DbCommand cannot execute insert command I'm trying to add an entity to one of my tables but an error occurred. You can find below model, controller and output debug: noting that I'm using this approach in my entire project and there is no problem, i don't know why he called the applicationUserId it's not in the related model... model: public class Rate { public int Id { get; set; } public int ProjectId { get; set; } [ForeignKey("ProjectId")] public Projects Project { get; set; } } Controller: var rate = new Rate { ProjectId = CPVM.Id, }; rateRepository.Add(rate); Rep: public void Add(Rate entity) { db.Rate.Add(entity); db.SaveChanges(); } debugger output: Microsoft.EntityFrameworkCore.Database.Command: Error: Failed executing DbCommand (2ms) [Parameters=[@p0='?' (Size = 450), @p1='?' (DbType = Int32), CommandType='Text', CommandTimeout='30'] INSERT INTO [Rate] ([ApplicationUserId], [ProjectId]) VALUES (@p0, @p1); DBContext public class ApplicationDbContext : IdentityDbContext<ApplicationUser> { public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options) : base(options) { } protected override void OnModelCreating(ModelBuilder builder) { base.OnModelCreating(builder); builder.Entity<ApplicationUser>().ToTable("Users"); builder.Entity<IdentityRole>().ToTable("Roles"); builder.Entity<IdentityUserClaim<string>>().ToTable("UserClaims"); builder.Entity<IdentityUserRole<string>>().ToTable("UserRoles"); builder.Entity<IdentityUserLogin<string>>().ToTable("Userlogins"); builder.Entity<IdentityRoleClaim<string>>().ToTable("RoleClaims"); builder.Entity<IdentityUserToken<string>>().ToTable("UserTokens"); builder.Entity<Bids>() .HasKey(b => b.Id); builder.Entity<Languages>() .HasKey(l => l.Id); builder.Entity<Rate>() .HasKey(r => r.Id); builder.Entity<Projects>() .HasKey(p => p.Id); builder.Entity<TranslatorsLanguages>() .HasKey(t => t.Id); builder.Entity<TranslatorsLanguages>() .HasOne(p => p.Languages) .WithMany(b => b.TranslatorsLanguagesList) .HasForeignKey(b => b.FromLanguage); builder.Entity<TranslatorsLanguages>() .HasOne(p => p.Languages) .WithMany(b => b.TranslatorsLanguagesList) .HasForeignKey(b => b.ToLanguage); builder.Entity<TranslatorsLanguages>() .HasOne(p => p.ApplicationUser) .WithMany(b => b.TranslatorLanguagesList) .HasForeignKey(b => b.TranslatorId); builder.Entity<Projects>() .HasOne(p => p.ApplicationUser) .WithMany(b => b.ProjectsList) .HasForeignKey(b => b.CustomerId); builder.Entity<Projects>() .HasOne(p => p.Languages) .WithMany(b => b.ProjectsList) .HasForeignKey(b => b.FromLanguage); builder.Entity<Projects>() .HasOne(p => p.Languages) .WithMany(b => b.ProjectsList) .HasForeignKey(b => b.ToLanguage); builder.Entity<Bids>() .HasOne(p => p.ApplicationUser) .WithMany(b => b.BidsList) .HasForeignKey(b => b.TranslatorId); builder.Entity<Bids>() .HasOne(p => p.Projects) .WithMany(b => b.BidsList) .HasForeignKey(b => b.ProjectId); } public DbSet<Languages> Languages { get; set; } public DbSet<Projects> Projects { get; set; } public DbSet<Rate> Rate { get; set; } public DbSet<TranslatorsLanguages> TranslatorsLanguages { get; set; } public DbSet<Bids> Bids { get; set; } } A: I found that in the applicationUser model there is a reference for Rate model public List<Rate> RateList { get; set; } when i removed it, it works!!
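If the user-to-rate relationship is actually wanted rather than removed, another option is to map it as explicitly optional; the sketch below assumes a nullable ApplicationUserId property is added to Rate:

// In Rate: public string ApplicationUserId { get; set; }  (nullable)
builder.Entity<Rate>()
    .HasOne<ApplicationUser>()
    .WithMany(u => u.RateList)
    .HasForeignKey(r => r.ApplicationUserId)
    .IsRequired(false);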
Error: Failed executing DbCommand cannot execute insert command
I'm trying to add an entity to one of my tables but an error occurred. You can find below model, controller and output debug: noting that I'm using this approach in my entire project and there is no problem, i don't know why he called the applicationUserId it's not in the related model... model: public class Rate { public int Id { get; set; } public int ProjectId { get; set; } [ForeignKey("ProjectId")] public Projects Project { get; set; } } Controller: var rate = new Rate { ProjectId = CPVM.Id, }; rateRepository.Add(rate); Rep: public void Add(Rate entity) { db.Rate.Add(entity); db.SaveChanges(); } debugger output: Microsoft.EntityFrameworkCore.Database.Command: Error: Failed executing DbCommand (2ms) [Parameters=[@p0='?' (Size = 450), @p1='?' (DbType = Int32), CommandType='Text', CommandTimeout='30'] INSERT INTO [Rate] ([ApplicationUserId], [ProjectId]) VALUES (@p0, @p1); DBContext public class ApplicationDbContext : IdentityDbContext<ApplicationUser> { public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options) : base(options) { } protected override void OnModelCreating(ModelBuilder builder) { base.OnModelCreating(builder); builder.Entity<ApplicationUser>().ToTable("Users"); builder.Entity<IdentityRole>().ToTable("Roles"); builder.Entity<IdentityUserClaim<string>>().ToTable("UserClaims"); builder.Entity<IdentityUserRole<string>>().ToTable("UserRoles"); builder.Entity<IdentityUserLogin<string>>().ToTable("Userlogins"); builder.Entity<IdentityRoleClaim<string>>().ToTable("RoleClaims"); builder.Entity<IdentityUserToken<string>>().ToTable("UserTokens"); builder.Entity<Bids>() .HasKey(b => b.Id); builder.Entity<Languages>() .HasKey(l => l.Id); builder.Entity<Rate>() .HasKey(r => r.Id); builder.Entity<Projects>() .HasKey(p => p.Id); builder.Entity<TranslatorsLanguages>() .HasKey(t => t.Id); builder.Entity<TranslatorsLanguages>() .HasOne(p => p.Languages) .WithMany(b => b.TranslatorsLanguagesList) .HasForeignKey(b => b.FromLanguage); builder.Entity<TranslatorsLanguages>() .HasOne(p => p.Languages) .WithMany(b => b.TranslatorsLanguagesList) .HasForeignKey(b => b.ToLanguage); builder.Entity<TranslatorsLanguages>() .HasOne(p => p.ApplicationUser) .WithMany(b => b.TranslatorLanguagesList) .HasForeignKey(b => b.TranslatorId); builder.Entity<Projects>() .HasOne(p => p.ApplicationUser) .WithMany(b => b.ProjectsList) .HasForeignKey(b => b.CustomerId); builder.Entity<Projects>() .HasOne(p => p.Languages) .WithMany(b => b.ProjectsList) .HasForeignKey(b => b.FromLanguage); builder.Entity<Projects>() .HasOne(p => p.Languages) .WithMany(b => b.ProjectsList) .HasForeignKey(b => b.ToLanguage); builder.Entity<Bids>() .HasOne(p => p.ApplicationUser) .WithMany(b => b.BidsList) .HasForeignKey(b => b.TranslatorId); builder.Entity<Bids>() .HasOne(p => p.Projects) .WithMany(b => b.BidsList) .HasForeignKey(b => b.ProjectId); } public DbSet<Languages> Languages { get; set; } public DbSet<Projects> Projects { get; set; } public DbSet<Rate> Rate { get; set; } public DbSet<TranslatorsLanguages> TranslatorsLanguages { get; set; } public DbSet<Bids> Bids { get; set; } }
[ "I found that in the applicationUser model there is a reference for Rate model public List<Rate> RateList { get; set; } when i removed it, it works!!\n" ]
[ 0 ]
[]
[]
[ "asp.net_mvc", "c#" ]
stackoverflow_0074673126_asp.net_mvc_c#.txt
Q: Cloud Firestore collection count Is it possible to count how many items a collection has using the new Firebase database, Cloud Firestore? If so, how do I do that? A: As with many questions, the answer is - It depends. You should be very careful when handling large amounts of data on the front end. On top of making your front end feel sluggish, Firestore also charges you $0.60 per million reads you make. Small collection (less than 100 documents) Use with care - Frontend user experience may take a hit Handling this on the front end should be fine as long as you are not doing too much logic with this returned array. db.collection('...').get().then(snap => { size = snap.size // will return the collection size }); Medium collection (100 to 1000 documents) Use with care - Firestore read invocations may cost a lot Handling this on the front end is not feasible as it has too much potential to slow down the users system. We should handle this logic server side and only return the size. The drawback to this method is you are still invoking Firestore reads (equal to the size of your collection), which in the long run may end up costing you more than expected. Cloud Function: db.collection('...').get().then(snap => { res.status(200).send({length: snap.size}); }); Front End: yourHttpClient.post(yourCloudFunctionUrl).toPromise().then(snap => { size = snap.length // will return the collection size }) Large collection (1000+ documents) Most scalable solution FieldValue.increment() As of April 2019 Firestore now allows incrementing counters, completely atomically, and without reading the data prior. This ensures we have correct counter values even when updating from multiple sources simultaneously (previously solved using transactions), while also reducing the number of database reads we perform. By listening to any document deletes or creates we can add to or remove from a count field that is sitting in the database. See the firestore docs - Distributed Counters Or have a look at Data Aggregation by Jeff Delaney. His guides are truly fantastic for anyone using AngularFire but his lessons should carry over to other frameworks as well. Cloud Function: export const documentWriteListener = functions.firestore .document('collection/{documentUid}') .onWrite((change, context) => { if (!change.before.exists) { // New document Created : add one to count db.doc(docRef).update({ numberOfDocs: FieldValue.increment(1) }); } else if (change.before.exists && change.after.exists) { // Updating existing document : Do nothing } else if (!change.after.exists) { // Deleting document : subtract one from count db.doc(docRef).update({ numberOfDocs: FieldValue.increment(-1) }); } return; }); Now on the frontend you can just query this numberOfDocs field to get the size of the collection. A: Simplest way to do so is to read the size of a "querySnapshot". db.collection("cities").get().then(function(querySnapshot) { console.log(querySnapshot.size); }); You can also read the length of the docs array inside "querySnapshot". querySnapshot.docs.length; Or if a "querySnapshot" is empty by reading the empty value, which will return a boolean value. querySnapshot.empty; A: As far as I know there is no build-in solution for this and it is only possible in the node sdk right now. If you have a db.collection('someCollection') you can use .select([fields]) to define which field you want to select. If you do an empty select() you will just get an array of document references. 
example: db.collection('someCollection').select().get().then( (snapshot) => console.log(snapshot.docs.length) ); This solution is only an optimization for the worst case of downloading all documents and does not scale on large collections! Also have a look at this: How to get a count of number of documents in a collection with Cloud Firestore A: Be careful counting the number of documents for large collections. It is a little bit complex with the Firestore database if you want to have a precalculated counter for every collection. Code like this doesn't work in this case: export const customerCounterListener = functions.firestore.document('customers/{customerId}') .onWrite((change, context) => { // on create if (!change.before.exists && change.after.exists) { return firestore .collection('metadatas') .doc('customers') .get() .then(docSnap => docSnap.ref.set({ count: docSnap.data().count + 1 })) // on delete } else if (change.before.exists && !change.after.exists) { return firestore .collection('metadatas') .doc('customers') .get() .then(docSnap => docSnap.ref.set({ count: docSnap.data().count - 1 })) } return null; }); The reason is that every Cloud Firestore trigger has to be idempotent, as the Firestore documentation says: https://firebase.google.com/docs/functions/firestore-events#limitations_and_guarantees Solution So, in order to prevent multiple executions of your code, you need to manage events and transactions. This is my particular way to handle large collection counters: const executeOnce = (change, context, task) => { const eventRef = firestore.collection('events').doc(context.eventId); return firestore.runTransaction(t => t .get(eventRef) .then(docSnap => (docSnap.exists ? null : task(t))) .then(() => t.set(eventRef, { processed: true })) ); }; const documentCounter = collectionName => (change, context) => executeOnce(change, context, t => { // on create if (!change.before.exists && change.after.exists) { return t .get(firestore.collection('metadatas') .doc(collectionName)) .then(docSnap => t.set(docSnap.ref, { count: ((docSnap.data() && docSnap.data().count) || 0) + 1 })); // on delete } else if (change.before.exists && !change.after.exists) { return t .get(firestore.collection('metadatas') .doc(collectionName)) .then(docSnap => t.set(docSnap.ref, { count: docSnap.data().count - 1 })); } return null; }); Use cases here: /** * Count documents in articles collection. */ exports.articlesCounter = functions.firestore .document('articles/{id}') .onWrite(documentCounter('articles')); /** * Count documents in customers collection. */ exports.customersCounter = functions.firestore .document('customers/{id}') .onWrite(documentCounter('customers')); As you can see, the key to preventing multiple executions is the property called eventId in the context object. If the function has been handled many times for the same event, the event id will be the same in all cases. Unfortunately, you must have an "events" collection in your database. A: In 2020 this is still not available in the Firebase SDK; it is available in Firebase Extensions (Beta), but it's pretty complex to set up and use... A reasonable approach Helpers...
(create/delete seems redundant but is cheaper than onUpdate) export const onCreateCounter = () => async ( change, context ) => { const collectionPath = change.ref.parent.path; const statsDoc = db.doc("counters/" + collectionPath); const countDoc = {}; countDoc["count"] = admin.firestore.FieldValue.increment(1); await statsDoc.set(countDoc, { merge: true }); }; export const onDeleteCounter = () => async ( change, context ) => { const collectionPath = change.ref.parent.path; const statsDoc = db.doc("counters/" + collectionPath); const countDoc = {}; countDoc["count"] = admin.firestore.FieldValue.increment(-1); await statsDoc.set(countDoc, { merge: true }); }; export interface CounterPath { watch: string; name: string; } Exported Firestore hooks export const Counters: CounterPath[] = [ { name: "count_buildings", watch: "buildings/{id2}" }, { name: "count_buildings_subcollections", watch: "buildings/{id2}/{id3}/{id4}" } ]; Counters.forEach(item => { exports[item.name + '_create'] = functions.firestore .document(item.watch) .onCreate(onCreateCounter()); exports[item.name + '_delete'] = functions.firestore .document(item.watch) .onDelete(onDeleteCounter()); }); In action: the building root collection and all sub collections will be tracked, here under the /counters/ root path. Now collection counts will update automatically and eventually! If you need a count, just use the collection path and prefix it with counters. const collectionPath = 'buildings/138faicnjasjoa89/buildingContacts'; const collectionCount = await db .doc('counters/' + collectionPath) .get() .then(snap => snap.get('count')); Limitations As this approach uses a single database and document, it is limited to the Firestore constraint of 1 update per second for each counter. It will be eventually consistent, but in cases where large numbers of documents are added/removed the counter will lag behind the actual collection count. A: Aggregate count query just landed as a preview in Firestore. Announced at the 2022 Firebase Summit: https://firebase.blog/posts/2022/10/whats-new-at-Firebase-Sumit-2022 Excerpt: [Developer Preview] Count() function: With the new count function in Firstore [sic], you can now get the count of the matching documents when you run a query or read from a collection, without loading the actual documents, which saves you a lot of time. Code sample they showed at the summit: [screenshot not reproduced here]. During the Q&A, someone asked about pricing for aggregated queries, and the answer the Firebase team provided was that it'll cost 1 / 1000th of the price of a read (rounded up to the nearest read, see comments below for more details), but will count all records that are part of the aggregate. A: I agree with @Matthew, it will cost a lot if you perform such a query. [ADVICE FOR DEVELOPERS BEFORE STARTING THEIR PROJECTS] Since we have foreseen this situation at the beginning, we can actually make a collection named counters with a document to store all the counters in a field with type number. For example: For each CRUD operation on the collection, update the counter document: When you create a new collection/subcollection: (+1 in the counter) [1 write operation] When you delete a collection/subcollection: (-1 in the counter) [1 write operation] When you update an existing collection/subcollection, do nothing on the counter document: (0) When you read an existing collection/subcollection, do nothing on the counter document: (0) Next time, when you want to get the number of documents in the collection, you just need to query/point to the document field.
[1 read operation] In addition, you can store the collection names in an array, but this will be tricky; the behavior of arrays in Firebase is shown below: // we send this ['a', 'b', 'c', 'd', 'e'] // Firebase stores this {0: 'a', 1: 'b', 2: 'c', 3: 'd', 4: 'e'} // since the keys are numeric and sequential, // if we query the data, we get this ['a', 'b', 'c', 'd', 'e'] // however, if we then delete a, b, and d, // they are no longer mostly sequential, so // we do not get back an array {2: 'c', 4: 'e'} So, if you are not going to delete collections, you can actually use an array to store the list of collection names instead of querying all the collections every time. Hope it helps! A: Increment a counter using admin.firestore.FieldValue.increment: exports.onInstanceCreate = functions.firestore.document('projects/{projectId}/instances/{instanceId}') .onCreate((snap, context) => db.collection('projects').doc(context.params.projectId).update({ instanceCount: admin.firestore.FieldValue.increment(1), }) ); exports.onInstanceDelete = functions.firestore.document('projects/{projectId}/instances/{instanceId}') .onDelete((snap, context) => db.collection('projects').doc(context.params.projectId).update({ instanceCount: admin.firestore.FieldValue.increment(-1), }) ); In this example we increment an instanceCount field in the project each time a document is added to the instances subcollection. If the field doesn't exist yet it will be created and incremented to 1. The incrementation is transactional internally, but you should use a distributed counter if you need to increment more frequently than once per second. It's often preferable to implement onCreate and onDelete rather than onWrite, as onWrite also fires for updates, which means you are spending more money on unnecessary function invocations (if you update the docs in your collection). A: No, there is no built-in support for aggregation queries right now. However there are a few things you could do. The first is documented here. You can use transactions or cloud functions to maintain aggregate information: This example shows how to use a function to keep track of the number of ratings in a subcollection, as well as the average rating. exports.aggregateRatings = firestore .document('restaurants/{restId}/ratings/{ratingId}') .onWrite(event => { // Get value of the newly added rating var ratingVal = event.data.get('rating'); // Get a reference to the restaurant var restRef = db.collection('restaurants').document(event.params.restId); // Update aggregations in a transaction return db.transaction(transaction => { return transaction.get(restRef).then(restDoc => { // Compute new number of ratings var newNumRatings = restDoc.data('numRatings') + 1; // Compute new average rating var oldRatingTotal = restDoc.data('avgRating') * restDoc.data('numRatings'); var newAvgRating = (oldRatingTotal + ratingVal) / newNumRatings; // Update restaurant info return transaction.update(restRef, { avgRating: newAvgRating, numRatings: newNumRatings }); }); }); }); The solution that jbb mentioned is also useful if you only want to count documents infrequently. Make sure to use the select() statement to avoid downloading all of each document (that's a lot of bandwidth when you only need a count). select() is only available in the server SDKs for now, so that solution won't work in a mobile app.
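Several answers here also point at the distributed (sharded) counter pattern from the Firestore docs without showing it. As a rough sketch (Node Admin SDK; the helper names and shard count are illustrative, not taken from any answer above), it looks like this:
// create the counter document plus its shards
function createCounter(ref, numShards) {
  const batch = db.batch();
  batch.set(ref, { numShards: numShards });
  for (let i = 0; i < numShards; i++) {
    batch.set(ref.collection('shards').doc(i.toString()), { count: 0 });
  }
  return batch.commit();
}

// writing to a random shard spreads the 1-write-per-second limit across shards
function incrementCounter(ref, numShards) {
  const shardId = Math.floor(Math.random() * numShards).toString();
  return ref.collection('shards').doc(shardId)
    .update({ count: admin.firestore.FieldValue.increment(1) });
}

// reading the total costs one read per shard
async function getCount(ref) {
  const snap = await ref.collection('shards').get();
  return snap.docs.reduce((sum, doc) => sum + doc.get('count'), 0);
}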
A: UPDATE 11/20 I created an npm package for easy access to a counter function: https://code.build/p/9DicAmrnRoK4uk62Hw1bEV/firestore-counters I created a universal function using all these ideas to handle all counter situations (except queries). The only exception would be when doing so many writes a second, it slows you down. An example would be likes on a trending post. It is overkill on a blog post, for example, and will cost you more. I suggest creating a separate function in that case using shards: https://firebase.google.com/docs/firestore/solutions/counters // trigger collections exports.myFunction = functions.firestore .document('{colId}/{docId}') .onWrite(async (change: any, context: any) => { return runCounter(change, context); }); // trigger sub-collections exports.mySubFunction = functions.firestore .document('{colId}/{docId}/{subColId}/{subDocId}') .onWrite(async (change: any, context: any) => { return runCounter(change, context); }); // add change the count const runCounter = async function (change: any, context: any) { const col = context.params.colId; const eventsDoc = '_events'; const countersDoc = '_counters'; // ignore helper collections if (col.startsWith('_')) { return null; } // simplify event types const createDoc = change.after.exists && !change.before.exists; const updateDoc = change.before.exists && change.after.exists; if (updateDoc) { return null; } // check for sub collection const isSubCol = context.params.subDocId; const parentDoc = `${countersDoc}/${context.params.colId}`; const countDoc = isSubCol ? `${parentDoc}/${context.params.docId}/${context.params.subColId}` : `${parentDoc}`; // collection references const countRef = db.doc(countDoc); const countSnap = await countRef.get(); // increment size if doc exists if (countSnap.exists) { // createDoc or deleteDoc const n = createDoc ? 1 : -1; const i = admin.firestore.FieldValue.increment(n); // create event for accurate increment const eventRef = db.doc(`${eventsDoc}/${context.eventId}`); return db.runTransaction(async (t: any): Promise<any> => { const eventSnap = await t.get(eventRef); // do nothing if event exists if (eventSnap.exists) { return null; } // add event and update size await t.update(countRef, { count: i }); return t.set(eventRef, { completed: admin.firestore.FieldValue.serverTimestamp() }); }).catch((e: any) => { console.log(e); }); // otherwise count all docs in the collection and add size } else { const colRef = db.collection(change.after.ref.parent.path); return db.runTransaction(async (t: any): Promise<any> => { // update size const colSnap = await t.get(colRef); return t.set(countRef, { count: colSnap.size }); }).catch((e: any) => { console.log(e); });; } } This handles events, increments, and transactions. The beauty in this, is that if you are not sure about the accuracy of a document (probably while still in beta), you can delete the counter to have it automatically add them up on the next trigger. Yes, this costs, so don't delete it otherwise. Same kind of thing to get the count: const collectionPath = 'buildings/138faicnjasjoa89/buildingContacts'; const colSnap = await db.doc('_counters/' + collectionPath).get(); const count = colSnap.get('count'); Also, you may want to create a cron job (scheduled function) to remove old events to save money on database storage. You need at least a blaze plan, and there may be some more configuration. You could run it every sunday at 11pm, for example. 
https://firebase.google.com/docs/functions/schedule-functions This is untested, but should work with a few tweaks: exports.scheduledFunctionCrontab = functions.pubsub.schedule('5 11 * * *') .timeZone('America/New_York') .onRun(async (context) => { // get yesterday const yesterday = new Date(); yesterday.setDate(yesterday.getDate() - 1); const eventFilter = db.collection('_events').where('completed', '<=', yesterday); const eventFilterSnap = await eventFilter.get(); eventFilterSnap.forEach(async (doc: any) => { await doc.ref.delete(); }); return null; }); And last, don't forget to protect the collections in firestore.rules: match /_counters/{document} { allow read; allow write: if false; } match /_events/{document} { allow read, write: if false; } Update: Queries Adding to my other answer if you want to automate query counts as well, you can use this modified code in your cloud function: if (col === 'posts') { // counter reference - user doc ref const userRef = after ? after.userDoc : before.userDoc; // query reference const postsQuery = db.collection('posts').where('userDoc', "==", userRef); // add the count - postsCount on userDoc await addCount(change, context, postsQuery, userRef, 'postsCount'); } return delEvents(); Which will automatically update the postsCount in the userDocument. You could easily add other one to many counts this way. This just gives you ideas of how you can automate things. I also gave you another way to delete the events. You have to read each date to delete it, so it won't really save you to delete them later, just makes the function slower. /** * Adds a counter to a doc * @param change - change ref * @param context - context ref * @param queryRef - the query ref to count * @param countRef - the counter document ref * @param countName - the name of the counter on the counter document */ const addCount = async function (change: any, context: any, queryRef: any, countRef: any, countName: string) { // events collection const eventsDoc = '_events'; // simplify event type const createDoc = change.after.exists && !change.before.exists; // doc references const countSnap = await countRef.get(); // increment size if field exists if (countSnap.get(countName)) { // createDoc or deleteDoc const n = createDoc ? 
1 : -1; const i = admin.firestore.FieldValue.increment(n); // create event for accurate increment const eventRef = db.doc(`${eventsDoc}/${context.eventId}`); return db.runTransaction(async (t: any): Promise<any> => { const eventSnap = await t.get(eventRef); // do nothing if event exists if (eventSnap.exists) { return null; } // add event and update size await t.set(countRef, { [countName]: i }, { merge: true }); return t.set(eventRef, { completed: admin.firestore.FieldValue.serverTimestamp() }); }).catch((e: any) => { console.log(e); }); // otherwise count all docs in the collection and add size } else { return db.runTransaction(async (t: any): Promise<any> => { // update size const colSnap = await t.get(queryRef); return t.set(countRef, { [countName]: colSnap.size }, { merge: true }); }).catch((e: any) => { console.log(e); }); } } /** * Deletes events over a day old */ const delEvents = async function () { // get yesterday const yesterday = new Date(); yesterday.setDate(yesterday.getDate() - 1); const eventFilter = db.collection('_events').where('completed', '<=', yesterday); const eventFilterSnap = await eventFilter.get(); eventFilterSnap.forEach(async (doc: any) => { await doc.ref.delete(); }); return null; } I should also warn you that universal functions will run on every onWrite call, period. It may be cheaper to only run the function on onCreate and onDelete instances of your specific collections. As with the NoSQL database we are using, repeating code and data can save you money. A: There is no direct option available. You can't do db.collection("CollectionName").count(). Below are two ways to find the number of documents within a collection. 1: Get all the documents in the collection and then read its size. (Not the best solution) db.collection("CollectionName").get().subscribe(doc=>{ console.log(doc.size) }) With the above code your document reads will be equal to the number of documents in the collection, which is why you should avoid this solution. 2: Create a separate document within your collection which stores the number of documents in the collection. (Best solution) db.collection("CollectionName").doc("counts").get().subscribe(doc=>{ console.log(doc.count) }) Above we created a document named counts to store all the count information. You can update the count document in the following way: Create a Firestore trigger on the counts document; increment the count property of the counts document when a new document is created, and decrement it when a document is deleted. With respect to price (1 document read) and fast data retrieval, the above solution is good. A: As of October 2022, Firestore has introduced a count() method on the client SDKs. Now you can get the count for a query without downloading the documents. For 1000 documents, it will charge you for 1 document read. Web (v9) Introduced in Firebase 9.11.0: const collectionRef = collection(db, "cities"); const snapshot = await getCountFromServer(collectionRef); console.log('count: ', snapshot.data().count); Web V8, Node (Admin) Not Available.
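Note: newer firebase-admin releases do expose the same server-side aggregation through Query.count() (the exact minimum version is an assumption here, so check the release notes). A minimal sketch:
const admin = require('firebase-admin');
admin.initializeApp();
const db = admin.firestore();

// counts documents server-side without downloading them
async function countCities() {
  const snapshot = await db.collection('cities').count().get();
  console.log('count: ', snapshot.data().count);
}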
Android (Kotlin) Introduced in Firestore v24.4.0 (BoM 31.0.0): val query = db.collection("cities") val countQuery = query.count() countQuery.get(AggregateSource.SERVER).addOnCompleteListener { task -> if (task.isSuccessful) { val snapshot = task.result Log.d(TAG, "Count: ${snapshot.count}") } else { Log.d(TAG, "Count failed: ", task.getException()) } } Apple Platforms (Swift) Introduced in Firestore v10.0.0: do { let query = db.collection("cities") let countQuery = query.countAggregateQuery let snapshot = try await countQuery.aggregation(source: AggregateSource.server) print(snapshot.count) } catch { print(error) } A: A workaround is to write a counter in a Firebase doc, which you increment within a transaction every time you create a new entry. You store the count in a field of your new entry (i.e.: position: 4). Then you create an index on that field (position DESC). You can do a skip+limit with a query.Where("position", "<", x).OrderBy("position", DESC). Hope this helps! A: I have tried a lot of different approaches, and finally I improved one of the methods. First, you need to create a separate collection and save all events there. Second, you need to create a new lambda to be triggered on a schedule. This lambda will count the events in the events collection and clear the event documents. Code details in the article: https://medium.com/@ihor.malaniuk/how-to-count-documents-in-google-cloud-firestore-b0e65863aeca A: One fast, money-saving trick: make a doc and store a 'count' variable in Firestore; when a user adds a new doc to the collection, increase that variable, and when a user deletes a doc, decrease it. e.g. updateDoc(doc(db, "Count_collection", "Count_Doc"), {count: increment(1)}); note: use (-1) for decreasing, (1) for increasing the count. How it saves money and time: you (Firebase) don't need to loop through the collection, nor does the browser need to load the whole collection to count the number of docs; all the counts are saved in a doc as a single variable named "count" or whatever, so less than 1 KB of data is used, and it uses only 1 read in Firebase Firestore. A: Solution using pagination with offset & limit: public int collectionCount(String collection) { Integer page = 0; List<QueryDocumentSnapshot> snaps = new ArrayList<>(); findDocsByPage(collection, page, snaps); return snaps.size(); } public void findDocsByPage(String collection, Integer page, List<QueryDocumentSnapshot> snaps) { try { Integer limit = 26000; FieldPath[] selectedFields = new FieldPath[] { FieldPath.of("id") }; List<QueryDocumentSnapshot> snapshotPage; snapshotPage = fireStore() .collection(collection) .select(selectedFields) .offset(page * limit) .limit(limit) .get().get().getDocuments(); if (snapshotPage.size() > 0) { snaps.addAll(snapshotPage); page++; findDocsByPage(collection, page, snaps); } } catch (InterruptedException | ExecutionException e) { e.printStackTrace(); } } findDocsByPage is a recursive method that fetches all pages of the collection; selectedFields optimizes the query to fetch only the id field instead of the full body of each document; limit is the max size of each query page; page defines the initial page for the pagination. From the tests I did, it worked well for collections with up to approximately 120k records! A: Firestore is introducing a new Query.count() that fetches the count of a query without fetching the docs. This would allow you to simply query all collection items and get the count of that query.
Ref: Firebase 10 iOS SDK [JS SDK PR] (https://github.com/firebase/firebase-js-sdk/pull/6608) A: There's a new build in function since version 9.11.0 called getCountFromServer(), which fetches the number of documents in the result set without actually downloading the documents. https://firebase.google.com/docs/reference/js/firestore_#getcountfromserver A: Took me a while to get this working based on some of the answers above, so I thought I'd share it for others to use. I hope it's useful. 'use strict'; const functions = require('firebase-functions'); const admin = require('firebase-admin'); admin.initializeApp(); const db = admin.firestore(); exports.countDocumentsChange = functions.firestore.document('library/{categoryId}/documents/{documentId}').onWrite((change, context) => { const categoryId = context.params.categoryId; const categoryRef = db.collection('library').doc(categoryId) let FieldValue = require('firebase-admin').firestore.FieldValue; if (!change.before.exists) { // new document created : add one to count categoryRef.update({numberOfDocs: FieldValue.increment(1)}); console.log("%s numberOfDocs incremented by 1", categoryId); } else if (change.before.exists && change.after.exists) { // updating existing document : Do nothing } else if (!change.after.exists) { // deleting document : subtract one from count categoryRef.update({numberOfDocs: FieldValue.increment(-1)}); console.log("%s numberOfDocs decremented by 1", categoryId); } return 0; }); A: Along with my npm package adv-firestore-functions above, you can also just use firestore rules to force a good counter: Firestore Rules function counter() { let docPath = /databases/$(database)/documents/_counters/$(request.path[3]); let afterCount = getAfter(docPath).data.count; let beforeCount = get(docPath).data.count; let addCount = afterCount == beforeCount + 1; let subCount = afterCount == beforeCount - 1; let newId = getAfter(docPath).data.docId == request.path[4]; let deleteDoc = request.method == 'delete'; let createDoc = request.method == 'create'; return (newId && subCount && deleteDoc) || (newId && addCount && createDoc); } function counterDoc() { let doc = request.path[4]; let docId = request.resource.data.docId; let afterCount = request.resource.data.count; let beforeCount = resource.data.count; let docPath = /databases/$(database)/documents/$(doc)/$(docId); let createIdDoc = existsAfter(docPath) && !exists(docPath); let deleteIdDoc = !existsAfter(docPath) && exists(docPath); let addCount = afterCount == beforeCount + 1; let subCount = afterCount == beforeCount - 1; return (createIdDoc && addCount) || (deleteIdDoc && subCount); } and use them like so: match /posts/{document} { allow read; allow update; allow create: if counter(); allow delete: if counter(); } match /_counters/{document} { allow read; allow write: if counterDoc(); } Frontend Replace your set and delete functions with these: set async setDocWithCounter( ref: DocumentReference<DocumentData>, data: { [x: string]: any; }, options: SetOptions): Promise<void> { // counter collection const counterCol = '_counters'; const col = ref.path.split('/').slice(0, -1).join('/'); const countRef = doc(this.afs, counterCol, col); const countSnap = await getDoc(countRef); const refSnap = await getDoc(ref); // don't increase count if edit if (refSnap.exists()) { await setDoc(ref, data, options); // increase count } else { const batch = writeBatch(this.afs); batch.set(ref, data, options); // if count exists if (countSnap.exists()) { batch.update(countRef, { count: increment(1), 
docId: ref.id }); // create count } else { // will only run once, should not use // for mature apps const colRef = collection(this.afs, col); const colSnap = await getDocs(colRef); batch.set(countRef, { count: colSnap.size + 1, docId: ref.id }); } batch.commit(); } } delete async delWithCounter( ref: DocumentReference<DocumentData> ): Promise<void> { // counter collection const counterCol = '_counters'; const col = ref.path.split('/').slice(0, -1).join('/'); const countRef = doc(this.afs, counterCol, col); const countSnap = await getDoc(countRef); const batch = writeBatch(this.afs); // if count exists batch.delete(ref); if (countSnap.exists()) { batch.update(countRef, { count: increment(-1), docId: ref.id }); } /* if ((countSnap.data() as any).count == 1) { batch.delete(countRef); }*/ batch.commit(); } see here for more info... J A: This uses counting to create a numeric unique ID. In my use, I will never decrement, even when the document that the ID was needed for is deleted. Upon creation of a collection that needs a unique numeric value: designate a collection appData with one document, set with .doc id only; set uniqueNumericIDAmount to 0 in the Firebase Firestore console; use doc.data().uniqueNumericIDAmount + 1 as the unique numeric id; update the appData collection's uniqueNumericIDAmount with firebase.firestore.FieldValue.increment(1). firebase .firestore() .collection("appData") .doc("only") .get() .then(doc => { var foo = doc.data(); foo.id = doc.id; // your collection that needs a unique ID firebase .firestore() .collection("uniqueNumericIDs") .doc(user.uid)// user id in my case .set({// I use this in login, so this document doesn't // exist yet, otherwise use update instead of set phone: this.state.phone,// whatever else you need uniqueNumericID: foo.uniqueNumericIDAmount + 1 }) .then(() => { // upon success of new ID, increment uniqueNumericIDAmount firebase .firestore() .collection("appData") .doc("only") .update({ uniqueNumericIDAmount: firebase.firestore.FieldValue.increment( 1 ) }) .catch(err => { console.log(err); }); }) .catch(err => { console.log(err); }); }); A: var variable = 0; variable = variable + querySnapshot.count; then, if you need to use it as a String: let stringVariable = String(variable). A: This feature is now supported in Firestore, albeit in Beta. Here are the official Firebase docs A: With the new version of Firebase, you can now run aggregated queries! Simply write .count().get(); after your query. A: As it stands, Firebase only allows server-side count, like this const collectionRef = db.collection('cities'); const snapshot = await collectionRef.count().get(); console.log(snapshot.data().count); Please note this is for Node.js A: New feature available in Firebase/Firestore provides a count of documents in a collection: See this thread to see how to achieve it, with an example. How To Count Number of Documents in a Collection in Firebase Firestore With a WHERE query in react.js A: According to this documentation Cloud Firestore supports the count() aggregation query, which is available in preview. The Flutter/Dart code was missing (at the time of writing this), so I played around with it and the following function seems to work: Future<int> getCount(String path) async { var collection = _fireStore.collection(path); var countQuery = collection.count(); var snapShot = await countQuery.get(source: AggregateSource.server); return snapShot.count; }
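For the WHERE-query case mentioned above, a minimal Web v9 sketch (the collection, field, and value are illustrative, and db is assumed to be an initialized Firestore instance):
import { collection, query, where, getCountFromServer } from 'firebase/firestore';

// count only the documents matching a filter, without downloading them
const q = query(collection(db, 'cities'), where('state', '==', 'CA'));
const snapshot = await getCountFromServer(q);
console.log('matching docs: ', snapshot.data().count);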
Cloud Firestore collection count
Is it possible to count how many items a collection has using the new Firebase database, Cloud Firestore? If so, how do I do that?
[ "As with many questions, the answer is - It depends.\nYou should be very careful when handling large amounts of data on the front end. On top of making your front end feel sluggish, Firestore also charges you $0.60 per million reads you make.\n\nSmall collection (less than 100 documents)\nUse with care - Frontend user experience may take a hit\nHandling this on the front end should be fine as long as you are not doing too much logic with this returned array.\ndb.collection('...').get().then(snap => {\n size = snap.size // will return the collection size\n});\n\n\nMedium collection (100 to 1000 documents)\nUse with care - Firestore read invocations may cost a lot\nHandling this on the front end is not feasible as it has too much potential to slow down the users system. We should handle this logic server side and only return the size.\nThe drawback to this method is you are still invoking Firestore reads (equal to the size of your collection), which in the long run may end up costing you more than expected.\nCloud Function:\ndb.collection('...').get().then(snap => {\n res.status(200).send({length: snap.size});\n});\n\nFront End:\nyourHttpClient.post(yourCloudFunctionUrl).toPromise().then(snap => {\n size = snap.length // will return the collection size\n})\n\n\nLarge collection (1000+ documents)\nMost scalable solution\n\n\nFieldValue.increment()\n\nAs of April 2019 Firestore now allows incrementing counters, completely atomically, and without reading the data prior. This ensures we have correct counter values even when updating from multiple sources simultaneously (previously solved using transactions), while also reducing the number of database reads we perform.\n\nBy listening to any document deletes or creates we can add to or remove from a count field that is sitting in the database.\nSee the firestore docs - Distributed Counters\nOr have a look at Data Aggregation by Jeff Delaney. His guides are truly fantastic for anyone using AngularFire but his lessons should carry over to other frameworks as well.\nCloud Function:\nexport const documentWriteListener = functions.firestore\n .document('collection/{documentUid}')\n .onWrite((change, context) => {\n\n if (!change.before.exists) {\n // New document Created : add one to count\n db.doc(docRef).update({ numberOfDocs: FieldValue.increment(1) });\n } else if (change.before.exists && change.after.exists) {\n // Updating existing document : Do nothing\n } else if (!change.after.exists) {\n // Deleting document : subtract one from count\n db.doc(docRef).update({ numberOfDocs: FieldValue.increment(-1) });\n }\n\n return;\n });\n\n\nNow on the frontend you can just query this numberOfDocs field to get the size of the collection.\n", "Simplest way to do so is to read the size of a \"querySnapshot\".\ndb.collection(\"cities\").get().then(function(querySnapshot) { \n console.log(querySnapshot.size); \n});\n\nYou can also read the length of the docs array inside \"querySnapshot\".\nquerySnapshot.docs.length;\n\nOr if a \"querySnapshot\" is empty by reading the empty value, which will return a boolean value.\nquerySnapshot.empty;\n\n", "As far as I know there is no build-in solution for this and it is only possible in the node sdk right now.\nIf you have a\ndb.collection('someCollection')\n\nyou can use\n.select([fields])\n\nto define which field you want to select. 
If you do an empty select() you will just get an array of document references.\nexample:\ndb.collection('someCollection').select().get().then(\n (snapshot) => console.log(snapshot.docs.length)\n);\nThis solution is only a optimization for the worst case of downloading all documents and does not scale on large collections!\nAlso have a look at this:\nHow to get a count of number of documents in a collection with Cloud Firestore\n", "Be careful counting number of documents for large collections. It is a little bit complex with firestore database if you want to have a precalculated counter for every collection.\nCode like this doesn't work in this case:\nexport const customerCounterListener = \n functions.firestore.document('customers/{customerId}')\n .onWrite((change, context) => {\n\n // on create\n if (!change.before.exists && change.after.exists) {\n return firestore\n .collection('metadatas')\n .doc('customers')\n .get()\n .then(docSnap =>\n docSnap.ref.set({\n count: docSnap.data().count + 1\n }))\n // on delete\n } else if (change.before.exists && !change.after.exists) {\n return firestore\n .collection('metadatas')\n .doc('customers')\n .get()\n .then(docSnap =>\n docSnap.ref.set({\n count: docSnap.data().count - 1\n }))\n }\n\n return null;\n});\n\nThe reason is because every cloud firestore trigger has to be idempotent, as firestore documentation say: https://firebase.google.com/docs/functions/firestore-events#limitations_and_guarantees\nSolution\nSo, in order to prevent multiple executions of your code, you need to manage with events and transactions. This is my particular way to handle large collection counters:\nconst executeOnce = (change, context, task) => {\n const eventRef = firestore.collection('events').doc(context.eventId);\n\n return firestore.runTransaction(t =>\n t\n .get(eventRef)\n .then(docSnap => (docSnap.exists ? null : task(t)))\n .then(() => t.set(eventRef, { processed: true }))\n );\n};\n\nconst documentCounter = collectionName => (change, context) =>\n executeOnce(change, context, t => {\n // on create\n if (!change.before.exists && change.after.exists) {\n return t\n .get(firestore.collection('metadatas')\n .doc(collectionName))\n .then(docSnap =>\n t.set(docSnap.ref, {\n count: ((docSnap.data() && docSnap.data().count) || 0) + 1\n }));\n // on delete\n } else if (change.before.exists && !change.after.exists) {\n return t\n .get(firestore.collection('metadatas')\n .doc(collectionName))\n .then(docSnap =>\n t.set(docSnap.ref, {\n count: docSnap.data().count - 1\n }));\n }\n\n return null;\n });\n\nUse cases here:\n/**\n * Count documents in articles collection.\n */\nexports.articlesCounter = functions.firestore\n .document('articles/{id}')\n .onWrite(documentCounter('articles'));\n\n/**\n * Count documents in customers collection.\n */\nexports.customersCounter = functions.firestore\n .document('customers/{id}')\n .onWrite(documentCounter('customers'));\n\nAs you can see, the key to prevent multiple execution is the property called eventId in the context object. If the function has been handled many times for the same event, the event id will be the same in all cases. Unfortunately, you must have \"events\" collection in your database.\n", "In 2020 this is still not available in the Firebase SDK however it is available in Firebase Extensions (Beta) however it's pretty complex to setup and use...\nA reasonable approach\nHelpers... 
(create/delete seems redundant but is cheaper than onUpdate)\nexport const onCreateCounter = () => async (\n change,\n context\n) => {\n const collectionPath = change.ref.parent.path;\n const statsDoc = db.doc(\"counters/\" + collectionPath);\n const countDoc = {};\n countDoc[\"count\"] = admin.firestore.FieldValue.increment(1);\n await statsDoc.set(countDoc, { merge: true });\n};\n\nexport const onDeleteCounter = () => async (\n change,\n context\n) => {\n const collectionPath = change.ref.parent.path;\n const statsDoc = db.doc(\"counters/\" + collectionPath);\n const countDoc = {};\n countDoc[\"count\"] = admin.firestore.FieldValue.increment(-1);\n await statsDoc.set(countDoc, { merge: true });\n};\n\nexport interface CounterPath {\n watch: string;\n name: string;\n}\n\n\nExported Firestore hooks\n\nexport const Counters: CounterPath[] = [\n {\n name: \"count_buildings\",\n watch: \"buildings/{id2}\"\n },\n {\n name: \"count_buildings_subcollections\",\n watch: \"buildings/{id2}/{id3}/{id4}\"\n }\n];\n\n\nCounters.forEach(item => {\n exports[item.name + '_create'] = functions.firestore\n .document(item.watch)\n .onCreate(onCreateCounter());\n\n exports[item.name + '_delete'] = functions.firestore\n .document(item.watch)\n .onDelete(onDeleteCounter());\n});\n\n\nIn action\nThe building root collection and all sub collections will be tracked.\n\nHere under the /counters/ root path\n\nNow collection counts will update automatically and eventually! If you need a count, just use the collection path and prefix it with counters.\nconst collectionPath = 'buildings/138faicnjasjoa89/buildingContacts';\nconst collectionCount = await db\n .doc('counters/' + collectionPath)\n .get()\n .then(snap => snap.get('count'));\n\nLimitations\nAs this approach uses a single database and document, it is limited to the Firestore constraint of 1 Update per Second for each counter. It will be eventually consistent, but in cases where large amounts of documents are added/removed the counter will lag behind the actual collection count.\n", "Aggregate count query just landed as a preview in Firestore.\nAnnounced at the 2022 Firebase Summit: https://firebase.blog/posts/2022/10/whats-new-at-Firebase-Sumit-2022\nExcerpt:\n\n[Developer Preview] Count() function: With the new count function in\nFirstore [sic], you can now get the count of the matching documents when you\nrun a query or read from a collection, without loading the actual\ndocuments, which saves you a lot of time.\n\nCode sample they showed at the summit:\n\nDuring the Q&A, someone asked about pricing for aggregated queries, and the answer the Firebase team provided was that it'll cost 1 / 1000th of the price of a read (rounded up to the nearest read, see comments below for more details), but will count all records that are part of the aggregate.\n", "I agree with @Matthew, it will cost a lot if you perform such query. \n[ADVICE FOR DEVELOPERS BEFORE STARTING THEIR PROJECTS]\nSince we have foreseen this situation at the beginning, we can actually make a collection namely counters with a document to store all the counters in a field with type number. 
\nFor example:\nFor each CRUD operation on the collection, update the counter document:\n\nWhen you create a new collection/subcollection: (+1 in the counter) [1 write operation]\nWhen you delete a collection/subcollection: (-1 in the counter) [1 write operation]\nWhen you update an existing collection/subcollection, do nothing on the counter document: (0) \nWhen you read an existing collection/subcollection, do nothing on the counter document: (0) \n\nNext time, when you want to get the number of collection, you just need to query/point to the document field. [1 read operation]\nIn addition, you can store the collections name in an array, but this will be tricky, the condition of array in firebase is shown as below:\n// we send this\n['a', 'b', 'c', 'd', 'e']\n// Firebase stores this\n{0: 'a', 1: 'b', 2: 'c', 3: 'd', 4: 'e'}\n\n// since the keys are numeric and sequential,\n// if we query the data, we get this\n['a', 'b', 'c', 'd', 'e']\n\n// however, if we then delete a, b, and d,\n// they are no longer mostly sequential, so\n// we do not get back an array\n{2: 'c', 4: 'e'}\n\nSo, if you are not going to delete the collection , you can actually use array to store list of collections name instead of querying all the collection every time.\nHope it helps!\n", "Increment a counter using admin.firestore.FieldValue.increment:\nexports.onInstanceCreate = functions.firestore.document('projects/{projectId}/instances/{instanceId}')\n .onCreate((snap, context) =>\n db.collection('projects').doc(context.params.projectId).update({\n instanceCount: admin.firestore.FieldValue.increment(1),\n })\n );\n\nexports.onInstanceDelete = functions.firestore.document('projects/{projectId}/instances/{instanceId}')\n .onDelete((snap, context) =>\n db.collection('projects').doc(context.params.projectId).update({\n instanceCount: admin.firestore.FieldValue.increment(-1),\n })\n );\n\nIn this example we increment an instanceCount field in the project each time a document is added to the instances sub collection. If the field doesn't exist yet it will be created and incremented to 1.\nThe incrementation is transactional internally but you should use a distributed counter if you need to increment more frequently than every 1 second.\nIt's often preferable to implement onCreate and onDelete rather than onWrite as you will call onWrite for updates which means you are spending more money on unnecessary function invocations (if you update the docs in your collection).\n", "No, there is no built-in support for aggregation queries right now. However there are a few things you could do.\nThe first is documented here. 
You can use transactions or cloud functions to maintain aggregate information:\nThis example shows how to use a function to keep track of the number of ratings in a subcollection, as well as the average rating.\nexports.aggregateRatings = firestore\n .document('restaurants/{restId}/ratings/{ratingId}')\n .onWrite(event => {\n // Get value of the newly added rating\n var ratingVal = event.data.get('rating');\n\n // Get a reference to the restaurant\n var restRef = db.collection('restaurants').document(event.params.restId);\n\n // Update aggregations in a transaction\n return db.transaction(transaction => {\n return transaction.get(restRef).then(restDoc => {\n // Compute new number of ratings\n var newNumRatings = restDoc.data('numRatings') + 1;\n\n // Compute new average rating\n var oldRatingTotal = restDoc.data('avgRating') * restDoc.data('numRatings');\n var newAvgRating = (oldRatingTotal + ratingVal) / newNumRatings;\n\n // Update restaurant info\n return transaction.update(restRef, {\n avgRating: newAvgRating,\n numRatings: newNumRatings\n });\n });\n });\n});\n\nThe solution that jbb mentioned is also useful if you only want to count documents infrequently. Make sure to use the select() statement to avoid downloading all of each document (that's a lot of bandwidth when you only need a count). select() is only available in the server SDKs for now so that solution won't work in a mobile app.\n", "UPDATE 11/20\nI created an npm package for easy access to a counter function: https://code.build/p/9DicAmrnRoK4uk62Hw1bEV/firestore-counters\n\nI created a universal function using all these ideas to handle all counter situations (except queries).\n\nThe only exception would be when doing so many writes a second, it\nslows you down. An example would be likes on a trending post. It is\noverkill on a blog post, for example, and will cost you more. I\nsuggest creating a separate function in that case using shards:\nhttps://firebase.google.com/docs/firestore/solutions/counters\n\n// trigger collections\nexports.myFunction = functions.firestore\n .document('{colId}/{docId}')\n .onWrite(async (change: any, context: any) => {\n return runCounter(change, context);\n });\n\n// trigger sub-collections\nexports.mySubFunction = functions.firestore\n .document('{colId}/{docId}/{subColId}/{subDocId}')\n .onWrite(async (change: any, context: any) => {\n return runCounter(change, context);\n });\n\n// add change the count\nconst runCounter = async function (change: any, context: any) {\n\n const col = context.params.colId;\n\n const eventsDoc = '_events';\n const countersDoc = '_counters';\n\n // ignore helper collections\n if (col.startsWith('_')) {\n return null;\n }\n // simplify event types\n const createDoc = change.after.exists && !change.before.exists;\n const updateDoc = change.before.exists && change.after.exists;\n\n if (updateDoc) {\n return null;\n }\n // check for sub collection\n const isSubCol = context.params.subDocId;\n\n const parentDoc = `${countersDoc}/${context.params.colId}`;\n const countDoc = isSubCol\n ? `${parentDoc}/${context.params.docId}/${context.params.subColId}`\n : `${parentDoc}`;\n\n // collection references\n const countRef = db.doc(countDoc);\n const countSnap = await countRef.get();\n\n // increment size if doc exists\n if (countSnap.exists) {\n // createDoc or deleteDoc\n const n = createDoc ? 
1 : -1;\n const i = admin.firestore.FieldValue.increment(n);\n\n // create event for accurate increment\n const eventRef = db.doc(`${eventsDoc}/${context.eventId}`);\n\n return db.runTransaction(async (t: any): Promise<any> => {\n const eventSnap = await t.get(eventRef);\n // do nothing if event exists\n if (eventSnap.exists) {\n return null;\n }\n // add event and update size\n await t.update(countRef, { count: i });\n return t.set(eventRef, {\n completed: admin.firestore.FieldValue.serverTimestamp()\n });\n }).catch((e: any) => {\n console.log(e);\n });\n // otherwise count all docs in the collection and add size\n } else {\n const colRef = db.collection(change.after.ref.parent.path);\n return db.runTransaction(async (t: any): Promise<any> => {\n // update size\n const colSnap = await t.get(colRef);\n return t.set(countRef, { count: colSnap.size });\n }).catch((e: any) => {\n console.log(e);\n });;\n }\n}\n\nThis handles events, increments, and transactions. The beauty in this, is that if you are not sure about the accuracy of a document (probably while still in beta), you can delete the counter to have it automatically add them up on the next trigger. Yes, this costs, so don't delete it otherwise.\nSame kind of thing to get the count:\nconst collectionPath = 'buildings/138faicnjasjoa89/buildingContacts';\nconst colSnap = await db.doc('_counters/' + collectionPath).get();\nconst count = colSnap.get('count');\n\nAlso, you may want to create a cron job (scheduled function) to remove old events to save money on database storage. You need at least a blaze plan, and there may be some more configuration. You could run it every sunday at 11pm, for example.\nhttps://firebase.google.com/docs/functions/schedule-functions\nThis is untested, but should work with a few tweaks:\nexports.scheduledFunctionCrontab = functions.pubsub.schedule('5 11 * * *')\n .timeZone('America/New_York')\n .onRun(async (context) => {\n\n // get yesterday\n const yesterday = new Date();\n yesterday.setDate(yesterday.getDate() - 1);\n\n const eventFilter = db.collection('_events').where('completed', '<=', yesterday);\n const eventFilterSnap = await eventFilter.get();\n eventFilterSnap.forEach(async (doc: any) => {\n await doc.ref.delete();\n });\n return null;\n });\n\nAnd last, don't forget to protect the collections in firestore.rules:\nmatch /_counters/{document} {\n allow read;\n allow write: if false;\n}\nmatch /_events/{document} {\n allow read, write: if false;\n}\n\nUpdate: Queries\nAdding to my other answer if you want to automate query counts as well, you can use this modified code in your cloud function:\n if (col === 'posts') {\n\n // counter reference - user doc ref\n const userRef = after ? after.userDoc : before.userDoc;\n // query reference\n const postsQuery = db.collection('posts').where('userDoc', \"==\", userRef);\n // add the count - postsCount on userDoc\n await addCount(change, context, postsQuery, userRef, 'postsCount');\n\n }\n return delEvents();\n\n\nWhich will automatically update the postsCount in the userDocument. You could easily add other one to many counts this way. This just gives you ideas of how you can automate things. I also gave you another way to delete the events. 
You have to read each date to delete it, so it won't really save you to delete them later, just makes the function slower.\n/**\n * Adds a counter to a doc\n * @param change - change ref\n * @param context - context ref\n * @param queryRef - the query ref to count\n * @param countRef - the counter document ref\n * @param countName - the name of the counter on the counter document\n */\nconst addCount = async function (change: any, context: any, \n queryRef: any, countRef: any, countName: string) {\n\n // events collection\n const eventsDoc = '_events';\n\n // simplify event type\n const createDoc = change.after.exists && !change.before.exists;\n\n // doc references\n const countSnap = await countRef.get();\n\n // increment size if field exists\n if (countSnap.get(countName)) {\n // createDoc or deleteDoc\n const n = createDoc ? 1 : -1;\n const i = admin.firestore.FieldValue.increment(n);\n\n // create event for accurate increment\n const eventRef = db.doc(`${eventsDoc}/${context.eventId}`);\n\n return db.runTransaction(async (t: any): Promise<any> => {\n const eventSnap = await t.get(eventRef);\n // do nothing if event exists\n if (eventSnap.exists) {\n return null;\n }\n // add event and update size\n await t.set(countRef, { [countName]: i }, { merge: true });\n return t.set(eventRef, {\n completed: admin.firestore.FieldValue.serverTimestamp()\n });\n }).catch((e: any) => {\n console.log(e);\n });\n // otherwise count all docs in the collection and add size\n } else {\n return db.runTransaction(async (t: any): Promise<any> => {\n // update size\n const colSnap = await t.get(queryRef);\n return t.set(countRef, { [countName]: colSnap.size }, { merge: true });\n }).catch((e: any) => {\n console.log(e);\n });;\n }\n}\n/**\n * Deletes events over a day old\n */\nconst delEvents = async function () {\n\n // get yesterday\n const yesterday = new Date();\n yesterday.setDate(yesterday.getDate() - 1);\n\n const eventFilter = db.collection('_events').where('completed', '<=', yesterday);\n const eventFilterSnap = await eventFilter.get();\n eventFilterSnap.forEach(async (doc: any) => {\n await doc.ref.delete();\n });\n return null;\n}\n\n\nI should also warn you that universal functions will run on every\nonWrite call period. It may be cheaper to only run the function on\nonCreate and on onDelete instances of your specific collections. Like\nthe noSQL database we are using, repeated code and data can save you\nmoney.\n\n", "There is no direct option available. 
You can't do db.collection(\"CollectionName\").count().\nBelow are two ways to find the number of documents in a collection.\n1. Get all the documents in the collection and then read the snapshot's size (not the best solution):\ndb.collection(\"CollectionName\").get().subscribe(doc=>{\nconsole.log(doc.size)\n})\n\nWith the code above, your billed document reads equal the number of documents in the collection, which is why this approach should be avoided.\n2. Create a separate document within your collection that stores the count of documents in the collection (the better solution):\ndb.collection(\"CollectionName\").doc(\"counts\").get().subscribe(doc=>{\nconsole.log(doc.count)\n})\n\nAbove, we created a document named counts to store the count information. You can update the counts document in the following way:\n\nCreate a Firestore trigger on the counts document.\nIncrement the count property of the counts document when a new document is created.\nDecrement the count property of the counts document when a document is deleted.\n\nWith respect to price (Document Read = 1) and fast data retrieval, the above solution works well.\n", "As of October 2022, Firestore has introduced a count() method on the client SDKs. Now you can count the results of a query without downloading the documents.\nFor 1000 documents, it will charge you for 1 document read.\nWeb (v9)\nIntroduced in Firebase 9.11.0:\nconst collectionRef = collection(db, \"cities\");\nconst snapshot = await getCountFromServer(collectionRef);\nconsole.log('count: ', snapshot.data().count);\n\nWeb V8, Node (Admin)\nNot available.\nAndroid (Kotlin)\nIntroduced in Firestore v24.4.0 (BoM 31.0.0):\nval query = db.collection(\"cities\")\nval countQuery = query.count()\ncountQuery.get(AggregateSource.SERVER).addOnCompleteListener { task ->\n    if (task.isSuccessful) {\n        val snapshot = task.result\n        Log.d(TAG, \"Count: ${snapshot.count}\")\n    } else {\n        Log.d(TAG, \"Count failed: \", task.getException())\n    }\n}\n\nApple Platforms (Swift)\nIntroduced in Firestore v10.0.0:\ndo {\n    let query = db.collection(\"cities\")\n    let countQuery = query.countAggregateQuery\n    let snapshot = try await countQuery.aggregation(source: AggregateSource.server)\n    print(snapshot.count)\n} catch {\n    print(error)\n}\n\n", "A workaround is to:\nwrite a counter in a Firebase doc, which you increment within a transaction every time you create a new entry.\nYou store the count in a field of your new entry (i.e. position: 4).\nThen you create an index on that field (position DESC).\nYou can then do a skip+limit with a query.Where(\"position\", \"<\", x).OrderBy(\"position\", DESC).\nHope this helps!\n", "I have tried a lot of different approaches, and finally I improved one of the methods.\nFirst, you need to create a separate collection and save all events there.\nSecond, you need to create a new lambda to be triggered on a schedule. This lambda will count the events in the event collection and clear the event documents.\nCode details are in this article:\nhttps://medium.com/@ihor.malaniuk/how-to-count-documents-in-google-cloud-firestore-b0e65863aeca\n", "One fast, money-saving trick:\nmake a doc and store a 'count' variable in Firestore; when a user adds a new doc to the collection, increment that variable, and when a user deletes a doc, decrement it, e.g.\nupdateDoc(doc(db, \"Count_collection\", \"Count_Doc\"), {count: increment(1)});\nNote: use (-1) to decrease the count, (1) to increase it.\nHow it saves money and time:\n\nyou (Firebase) don't need to loop through the collection, and the browser doesn't need to load the whole collection just to count the docs.\nall the counts are saved in a doc as a single variable named \"count\" (or whatever you prefer), so less than 1 KB of data is used, and it costs only 1 read in Firestore.\n\n", "Solution using pagination with offset & limit:\npublic int collectionCount(String collection) {\n    Integer page = 0;\n    List<QueryDocumentSnapshot> snaps = new ArrayList<>();\n    findDocsByPage(collection, page, snaps);\n    return snaps.size();\n}\n\npublic void findDocsByPage(String collection, Integer page, \n        List<QueryDocumentSnapshot> snaps) {\n    try {\n        Integer limit = 26000;\n        FieldPath[] selectedFields = new FieldPath[] { FieldPath.of(\"id\") };\n        List<QueryDocumentSnapshot> snapshotPage;\n        snapshotPage = fireStore()\n            .collection(collection)\n            .select(selectedFields)\n            .offset(page * limit)\n            .limit(limit)\n            .get().get().getDocuments(); \n        if (snapshotPage.size() > 0) {\n            snaps.addAll(snapshotPage);\n            page++;\n            findDocsByPage(collection, page, snaps);\n        }\n    } catch (InterruptedException | ExecutionException e) {\n        e.printStackTrace();\n    }\n}\n\n\nfindDocsByPage is a recursive method that fetches every page of the collection\n\nselectedFields optimizes the query by fetching only the id field instead of the full document body\n\nlimit is the max size of each query page\n\npage defines the initial page for the pagination\n\n\n\nFrom the tests I did, it worked well for collections with up to approximately 120k records!\n\n", "Firestore is introducing a new Query.count() that fetches the count of a query without fetching the docs.\nThis allows you to simply query all collection items and get the count of that query.\nRef:\n\nFirebase 10 iOS SDK\n[JS SDK PR] (https://github.com/firebase/firebase-js-sdk/pull/6608)\n\n", "There's a new built-in function since version 9.11.0 called getCountFromServer(), which fetches the number of documents in the result set without actually downloading the documents.\nhttps://firebase.google.com/docs/reference/js/firestore_#getcountfromserver\n", "It took me a while to get this working based on some of the answers above, so I thought I'd share it for others to use. 
I hope it's useful.\n'use strict';\n\nconst functions = require('firebase-functions');\nconst admin = require('firebase-admin');\nadmin.initializeApp();\nconst db = admin.firestore();\n\nexports.countDocumentsChange = functions.firestore.document('library/{categoryId}/documents/{documentId}').onWrite((change, context) => {\n\n const categoryId = context.params.categoryId;\n const categoryRef = db.collection('library').doc(categoryId)\n let FieldValue = require('firebase-admin').firestore.FieldValue;\n\n if (!change.before.exists) {\n\n // new document created : add one to count\n categoryRef.update({numberOfDocs: FieldValue.increment(1)});\n console.log(\"%s numberOfDocs incremented by 1\", categoryId);\n\n } else if (change.before.exists && change.after.exists) {\n\n // updating existing document : Do nothing\n\n } else if (!change.after.exists) {\n\n // deleting document : subtract one from count\n categoryRef.update({numberOfDocs: FieldValue.increment(-1)});\n console.log(\"%s numberOfDocs decremented by 1\", categoryId);\n\n }\n\n return 0;\n});\n\n", "Along with my npm package adv-firestore-functions above, you can also just use firestore rules to force a good counter:\nFirestore Rules\nfunction counter() {\n let docPath = /databases/$(database)/documents/_counters/$(request.path[3]);\n let afterCount = getAfter(docPath).data.count;\n let beforeCount = get(docPath).data.count;\n let addCount = afterCount == beforeCount + 1;\n let subCount = afterCount == beforeCount - 1;\n let newId = getAfter(docPath).data.docId == request.path[4];\n let deleteDoc = request.method == 'delete';\n let createDoc = request.method == 'create';\n return (newId && subCount && deleteDoc) || (newId && addCount && createDoc);\n}\n\nfunction counterDoc() {\n let doc = request.path[4];\n let docId = request.resource.data.docId;\n let afterCount = request.resource.data.count;\n let beforeCount = resource.data.count;\n let docPath = /databases/$(database)/documents/$(doc)/$(docId);\n let createIdDoc = existsAfter(docPath) && !exists(docPath);\n let deleteIdDoc = !existsAfter(docPath) && exists(docPath);\n let addCount = afterCount == beforeCount + 1;\n let subCount = afterCount == beforeCount - 1;\n return (createIdDoc && addCount) || (deleteIdDoc && subCount);\n}\n\nand use them like so:\nmatch /posts/{document} {\n allow read;\n allow update;\n allow create: if counter();\n allow delete: if counter();\n}\nmatch /_counters/{document} {\n allow read;\n allow write: if counterDoc();\n}\n\nFrontend\nReplace your set and delete functions with these:\nset\nasync setDocWithCounter(\n ref: DocumentReference<DocumentData>,\n data: {\n [x: string]: any;\n },\n options: SetOptions): Promise<void> {\n\n // counter collection\n const counterCol = '_counters';\n\n const col = ref.path.split('/').slice(0, -1).join('/');\n const countRef = doc(this.afs, counterCol, col);\n const countSnap = await getDoc(countRef);\n const refSnap = await getDoc(ref);\n\n // don't increase count if edit\n if (refSnap.exists()) {\n await setDoc(ref, data, options);\n\n // increase count\n } else {\n const batch = writeBatch(this.afs);\n batch.set(ref, data, options);\n\n // if count exists\n if (countSnap.exists()) {\n batch.update(countRef, {\n count: increment(1),\n docId: ref.id\n });\n // create count\n } else {\n // will only run once, should not use\n // for mature apps\n const colRef = collection(this.afs, col);\n const colSnap = await getDocs(colRef);\n batch.set(countRef, {\n count: colSnap.size + 1,\n docId: ref.id\n });\n }\n 
batch.commit();\n }\n}\n\ndelete\nasync delWithCounter(\n ref: DocumentReference<DocumentData>\n): Promise<void> {\n\n // counter collection\n const counterCol = '_counters';\n\n const col = ref.path.split('/').slice(0, -1).join('/');\n const countRef = doc(this.afs, counterCol, col);\n const countSnap = await getDoc(countRef);\n const batch = writeBatch(this.afs);\n\n // if count exists\n batch.delete(ref);\n if (countSnap.exists()) {\n batch.update(countRef, {\n count: increment(-1),\n docId: ref.id\n });\n }\n /*\n if ((countSnap.data() as any).count == 1) {\n batch.delete(countRef);\n }*/\n batch.commit();\n}\n\nsee here for more info...\nJ\n", "This uses counting to create numeric unique ID. In my use, I will not be decrementing ever, even when the document that the ID is needed for is deleted.\nUpon a collection creation that needs unique numeric value\n\nDesignate a collection appData with one document, set with .doc id only\nSet uniqueNumericIDAmount to 0 in the firebase firestore console\nUse doc.data().uniqueNumericIDAmount + 1 as the unique numeric id\nUpdate appData collection uniqueNumericIDAmount with firebase.firestore.FieldValue.increment(1)\n\nfirebase\n .firestore()\n .collection(\"appData\")\n .doc(\"only\")\n .get()\n .then(doc => {\n var foo = doc.data();\n foo.id = doc.id;\n\n // your collection that needs a unique ID\n firebase\n .firestore()\n .collection(\"uniqueNumericIDs\")\n .doc(user.uid)// user id in my case\n .set({// I use this in login, so this document doesn't\n // exist yet, otherwise use update instead of set\n phone: this.state.phone,// whatever else you need\n uniqueNumericID: foo.uniqueNumericIDAmount + 1\n })\n .then(() => {\n\n // upon success of new ID, increment uniqueNumericIDAmount\n firebase\n .firestore()\n .collection(\"appData\")\n .doc(\"only\")\n .update({\n uniqueNumericIDAmount: firebase.firestore.FieldValue.increment(\n 1\n )\n })\n .catch(err => {\n console.log(err);\n });\n })\n .catch(err => {\n console.log(err);\n });\n });\n\n", "var variable=0\nvariable=variable+querySnapshot.count\n\nthen if you are to use it on a String variable then\nlet stringVariable= String(variable)\n\n", "This feature is now supported in FireStore, albeit in Beta.\nHere are the official Firebase docs\n", "With the new version of Firebase, you can now run aggregated queries!\nSimply write\n.count().get(); \n\nafter your query.\n", "As it stands, firebase only allows server-side count, like this\nconst collectionRef = db.collection('cities');\nconst snapshot = await collectionRef.count().get();\nconsole.log(snapshot.data().count);\n\nPlease not this is for nodeJS\n", "New feature available in Firebase/Firestore provides a count of documents in a collection:\nSee this thread to see how to achieve it, with an example.\nHow To Count Number of Documents in a Collection in Firebase Firestore With a WHERE query in react.js\n", "According to this documentation Cloud Firestore supports the count() aggregation query and is available in preview.\nThe Flutter/Dart code was missing (at the time of writing this) so I played around with it and the following function seems to work:\n Future<int> getCount(String path) async {\n var collection = _fireStore.collection(path);\n var countQuery = collection.count();\n var snapShot = await countQuery.get(source: AggregateSource.server);\n return snapShot.count;\n }\n\n" ]
[ 328, 41, 33, 21, 18, 18, 11, 7, 6, 6, 5, 5, 4, 3, 2, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "firebaseFirestore.collection(\"...\").addSnapshotListener(new EventListener<QuerySnapshot>() {\n @Override\n public void onEvent(QuerySnapshot documentSnapshots, FirebaseFirestoreException e) {\n\n int Counter = documentSnapshots.size();\n\n }\n });\n\n", "So my solution for this problem is a bit non-technical, not super precise, but good enough for me.\n\nThose are my documents. As I have a lot of them (100k+) there are 'laws of big numbers' happening. I can assume that there is less-or-more equal number of items having id starting with 0, 1, 2, etc.\nSo what I do is I scroll my list till I get into id's starting with 1, or with 01, depending on how long you have to scroll\n\n here we are.\nNow, having scrolled so far, I open the inspector and see how much did I scroll and divide it by height of single element\n\nHad to scroll 82000px to get items with id starting with 1. Height of single element is 32px.\nIt means I have 2500 with id starting with 0, so now I multiply it by number of possible 'starting char'. In firebase it can be A-Z, a-z, 0-9 which means it's 24 + 24 + 10 = 58.\nIt means I have ~~2500*58 so it gives roughly 145000 items in my collection.\nSummarizing: What is wrong with you firebase?\n" ]
[ -2, -8 ]
[ "firebase", "google_cloud_firestore" ]
stackoverflow_0046554091_firebase_google_cloud_firestore.txt
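A note alongside the count answers in the record above: the same server-side count aggregation is also available from the Python client. The following is a minimal sketch, not a definitive implementation; it assumes a recent google-cloud-firestore (count() aggregations landed around v2.7), configured default credentials, and an illustrative "cities" collection; the alias name is also an illustrative choice.
from google.cloud import firestore

db = firestore.Client()

def collection_count(path: str) -> int:
    # Build a server-side count aggregation; only the aggregate result
    # is transferred and billed, not the underlying documents.
    agg_query = db.collection(path).count(alias="total")
    results = agg_query.get()  # a list of AggregationResult batches
    return int(results[0][0].value)

print(collection_count("cities"))
As with getCountFromServer() on the web, the point of the aggregation is that pricing scales with the aggregate result, not with one read per counted document.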
Q: Accessing PyTorch model layers and computational graph I am trying to examine the layers of my PyTorch model in a way that allows me to keep track of which layers feed input to others. I've been able to get a list of the layers by using model.modules(), but this list doesn't preserve any information about which layers feed into others in the transformer network I'm analyzing. Is there a way to access each layer and its weights while keeping track of what feeds into where?
A: You can use the nn.ModuleList class from PyTorch, which allows you to create a list of PyTorch modules and easily access their individual layers and weights:
import torch.nn as nn

# define your model
class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.layers = nn.ModuleList([
            nn.Linear(10, 20),
            nn.Linear(20, 30),
            nn.Linear(30, 40)
        ])

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

# create an instance of the model
model = MyModel()

# access the layers and their weights
for i, layer in enumerate(model.layers):
    print(f"Layer {i}:")
    print(f"  weights: {layer.weight.shape}")
    print(f"  bias: {layer.bias.shape}")

# access the input and output sizes of each layer
# (nn.Linear stores weight as (out_features, in_features))
for i, layer in enumerate(model.layers):
    input_size = layer.weight.shape[1]
    output_size = layer.weight.shape[0]

    print(f"Layer {i}:")
    print(f"  input size: {input_size}")
    print(f"  output size: {output_size}")
Accessing PyTorch model layers and computational graph
I am trying to examine the layers of my PyTorch model in a way that allows me to keep track of which layers feed input to others. I've been able to get a list of the layers by using model.modules(), but this list doesn't preserve any information about which layers feed into others in the transformer network I'm analyzing. Is there a way to access each layer and its weights while keeping track of what feeds into where?
[ "You can use the nn.ModuleList class from PyTorch, which allows you to create a list of PyTorch modules and easily access their individual layers and weights:\nimport torch.nn as nn\n\n# define your model\nclass MyModel(nn.Module):\n def __init__(self):\n super(MyModel, self).__init__()\n self.layers = nn.ModuleList([\n nn.Linear(10, 20),\n nn.Linear(20, 30),\n nn.Linear(30, 40)\n ])\n\n def forward(self, x):\n for layer in self.layers:\n x = layer(x)\n return x\n\n# create an instance of the model\nmodel = MyModel()\n\n# access the layers and their weights\nfor i, layer in enumerate(model.layers):\n print(f\"Layer {i}:\")\n print(f\" weights: {layer.weight.shape}\")\n print(f\" bias: {layer.bias.shape}\")\n\n# access the input and output shapes of each layer\nfor i, layer in enumerate(model.layers):\n if i == 0:\n input_shape = (10,)\n else:\n input_shape = model.layers[i-1].weight.shape\n\n output_shape = layer.weight.shape\n\n print(f\"Layer {i}:\")\n print(f\" input shape: {input_shape}\")\n print(f\" output shape: {output_shape}\")\n\n" ]
[ 0 ]
[]
[]
[ "pytorch" ]
stackoverflow_0074677370_pytorch.txt
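The ModuleList answer above lists layers, but it does not recover which layer feeds which, which is what the question actually asks. One way to get real connectivity is symbolic tracing with torch.fx. This is a minimal sketch under stated assumptions: the toy model and its names are illustrative (not from the post), and symbolic tracing can fail on data-dependent control flow in some transformers, in which case forward hooks are a common fallback.
import torch.nn as nn
from torch import fx

# A toy model with a skip connection, so connectivity is non-trivial.
class Toy(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 10)
        self.fc2 = nn.Linear(10, 10)

    def forward(self, x):
        h = self.fc1(x)
        return self.fc2(h) + h  # skip connection

traced = fx.symbolic_trace(Toy())
modules = dict(traced.named_modules())

for node in traced.graph.nodes:
    if node.op == "call_module":
        layer = modules[node.target]
        # node.args holds the upstream graph nodes feeding this layer
        feeders = [str(a) for a in node.args]
        print(f"{node.target} ({type(layer).__name__}) takes input from {feeders}")
        if hasattr(layer, "weight"):
            print("  weight shape:", tuple(layer.weight.shape))
Printing traced.graph (or traced.code) shows the full dataflow, including the add node created by the skip connection, so you can see exactly what feeds into where.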
Q: Input file retrieve back database image_URL record I'm trying to edit a smartphone's details, and there is DataRequired() validation on every input field. However, the file input for the image is empty by default. When I try to edit another field, such as the brand, the file input for the image must also be filled in for the edit to succeed. How can I make the form fall back to the image_URL already stored in the database automatically after submitting?
Input file for image_URL
{% for smartphone in smartphones %}
<div class="form-floating mb-4 justify-content-between">
    <img src="{{ url_for('static',filename = smartphone['image_URL']) }}" style="height: 250px;">
    <input type="file" id="image_URL" name="image_URL" accept="image/*">
</div>
{% endfor %}

Backend in app.py
@app.route('/editSmartphone/<int:id>',methods = ['GET','POST'])
def editSmartphone(id):
    smartphoneID = id
    conn = get_db_connection()
    smartphones = conn.execute('SELECT * FROM Smartphone WHERE id = ?',(smartphoneID,)).fetchall()
    form = editSmartphoneForm(request.form)
    if request.method == 'POST' and form.validate():
        conn.execute('UPDATE Smartphone SET brand = ?,model = ?,processor = ?, ram = ?, colour = ?, battery = ?, lowprice = ?, highprice = ?, screenSize = ?, refreshRate = ?, description = ?, image_URL = ? WHERE id = ?',(form.brand.data, form.model.data, form.processor.data, form.ram.data, form.colour.data, form.battery.data, form.lowprice.data, form.highprice.data, form.screenSize.data, form.refreshRate.data, form.description.data, form.image_URL.data, smartphoneID))
        conn.commit()
        conn.close()
        message = "Smartphone detail has been modified successfully"
        flash(message,'edited')
        return redirect('/manageSmartphone')
    return render_template('editSmartphone.html',smartphones = smartphones, form = form)

A: Your question looks a bit similar to this question, so I'll borrow some elements from that answer here.
You're already getting the current smartphone via the smartphones list, therefore the current image_URL of the phone you're editing should be something like:
current_image_URL = smartphones[0][11]

My approach to this would be to check if form.image_URL.data is empty when editing a phone on your editSmartphone route. You could write some logic to check this before you call your DB update statement:
if form.image_URL.data == "":
    image_URL = current_image_URL
else:
    image_URL = form.image_URL.data

You can see above that we are storing the outcome of this check in image_URL. You can then simply replace form.image_URL.data with image_URL in your DB update statement:
conn.execute('UPDATE Smartphone SET ... image_URL = ? ...',(..., image_URL, ...))

Also, inside of the editSmartphoneForm, make sure to remove the DataRequired() validator on your image_URL.
Hope this helps, or at least gets you on the right track!
Input file retrieve back database image_URL record
I'm trying to edit a smartphone's details, and there is DataRequired() validation on every input field. However, the file input for the image is empty by default. When I try to edit another field, such as the brand, the file input for the image must also be filled in for the edit to succeed. How can I make the form fall back to the image_URL already stored in the database automatically after submitting?
Input file for image_URL
{% for smartphone in smartphones %}
<div class="form-floating mb-4 justify-content-between">
    <img src="{{ url_for('static',filename = smartphone['image_URL']) }}" style="height: 250px;">
    <input type="file" id="image_URL" name="image_URL" accept="image/*">
</div>
{% endfor %}

Backend in app.py
@app.route('/editSmartphone/<int:id>',methods = ['GET','POST'])
def editSmartphone(id):
    smartphoneID = id
    conn = get_db_connection()
    smartphones = conn.execute('SELECT * FROM Smartphone WHERE id = ?',(smartphoneID,)).fetchall()
    form = editSmartphoneForm(request.form)
    if request.method == 'POST' and form.validate():
        conn.execute('UPDATE Smartphone SET brand = ?,model = ?,processor = ?, ram = ?, colour = ?, battery = ?, lowprice = ?, highprice = ?, screenSize = ?, refreshRate = ?, description = ?, image_URL = ? WHERE id = ?',(form.brand.data, form.model.data, form.processor.data, form.ram.data, form.colour.data, form.battery.data, form.lowprice.data, form.highprice.data, form.screenSize.data, form.refreshRate.data, form.description.data, form.image_URL.data, smartphoneID))
        conn.commit()
        conn.close()
        message = "Smartphone detail has been modified successfully"
        flash(message,'edited')
        return redirect('/manageSmartphone')
    return render_template('editSmartphone.html',smartphones = smartphones, form = form)
[ "Your question looks a bit similar to this question, so I'll borrow some elements from that answer here.\nYou're already getting the current smartphone via the smartphones list, therefore the current image_URL of the phone you're editing should be something like:\ncurrent_image_URL = smartphones[0][11]\n\nMy approach to this would be to check if the form.image_URL.data is empty when editing a phone on your editSmartphone route. You could write some logic to check this before you call your DB update statement:\nif form.image_URL.data == \"\":\n image_URL = current_image_URL\nelse:\n image_URL = form.image_URL.data\n\nYou can see above that we are storing the outcome of this check in image_URL. You can then simply replace form.image_URL.data with image_URL in your DB update statement:\nconn.execute('UPDATE Smartphone SET ... image_URL = ? ...',(..., image_URL, ...))\n\nAlso, inside of the editSmartphoneForm, make sure to remove the DataRequired() validator on your image_URL.\nHope this helps, or at least gets you on the right track!\n" ]
[ 0 ]
[]
[]
[ "flask", "html", "sqlite" ]
stackoverflow_0074674409_flask_html_sqlite.txt
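To make the fallback described in the answer above concrete, here is a minimal sketch of the POST branch only, not a drop-in replacement for the route. It assumes the DataRequired() validator has been removed from image_URL, that get_db_connection() uses sqlite3.Row so rows are indexable by column name, and that uploads live under static/; the uploads/ subfolder is an illustrative choice. Note that file inputs arrive in request.files, not request.form.
import os
from flask import request
from werkzeug.utils import secure_filename

# inside editSmartphone(), after opening the connection:
row = conn.execute('SELECT image_URL FROM Smartphone WHERE id = ?',
                   (smartphoneID,)).fetchone()
current_image_URL = row['image_URL']

upload = request.files.get('image_URL')
if upload is None or upload.filename == '':
    # no new file chosen: keep the value already stored in the database
    image_URL = current_image_URL
else:
    # a new file was chosen: save it under static/ and store its relative path
    image_URL = 'uploads/' + secure_filename(upload.filename)
    upload.save(os.path.join('static', image_URL))

# then pass image_URL (instead of form.image_URL.data) to the UPDATE statement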
Q: How to extract HTML data from pandas dataframe column I'm trying to extract certain elements from a block of webpages that I've extracted and put into a pandas column. I've tried lxml and can't get it to pull out very specific phrases in text. What is the Pythonic way to go? Tried this:
def scrape_details(s):
    results = requests.get(s)
    results2 = etree.parse(StringIO(str(results.content)), parser)
    return results2.xpath('//p')

df['data'] = df['URL'].map(lambda x: scrape_details(x))

Returned:
0 The Kingsley School https://www.isc.co.uk/schools/england/warwickshire/royal-leamington-spa/the-kingsley-school/ warwickshire [[], [[]], [], [], [[], [], [], []], [], [[]], [[]], [[]], [[], [], []], [[], [<Element a at 0x7fd3c003fec0>, <Element a at 0x7fd3c0035a80>, <Element a at 0x7fd3c003fec0>, <Element a at 0x7fd3c0035a80>, <Element a at 0x7fd3c003fec0>, <Element a at 0x7fd3c0035a80>]], [[]], [[]], [[], []], [[]], [[], []], [[], []], [[], []], [[], []], [[]], [[]], [[]], [[]], [[]], [[]], [], [], []]
A: What elements are you trying to extract? If it's general HTML elements, you could use the html.parser Python package. I just modified the generic parser class they provide near the bottom of the page a little:
import requests
import pandas as pd

from html.parser import HTMLParser
from html.entities import name2codepoint


class MyHTMLParser(HTMLParser):
    def __init__(self):
        self.parsed_html = []
        super().__init__()

    def handle_starttag(self, tag, attrs):
        self.parsed_html.append(f"Start tag: {tag}")
        for attr in attrs:
            self.parsed_html.append(f" attr: {attr}")

    def handle_endtag(self, tag):
        self.parsed_html.append(f"End tag : {tag}")

    def handle_data(self, data):
        self.parsed_html.append(f"Data : {data.strip()}")

    def handle_comment(self, data):
        self.parsed_html.append(f"Comment : {data.strip()}")

    def handle_entityref(self, name):
        c = chr(name2codepoint[name])
        self.parsed_html.append(f"Named ent: {c}")

    def handle_charref(self, name):
        if name.startswith('x'):
            c = chr(int(name[1:], 16))
        else:
            c = chr(int(name))
        self.parsed_html.append(f"Num ent : {c}")

    def handle_decl(self, data):
        self.parsed_html.append(f"Decl : {data.strip()}")


def scrape_details(url):
    parser = MyHTMLParser()
    html_text = requests.get(url).text
    parser.feed(html_text)
    return parser.parsed_html


df = pd.DataFrame(
    {'URL': ['https://www.isc.co.uk/schools/england/warwickshire/royal-leamington-spa/the-kingsley-school/']})
df['data'] = df['URL'].map(lambda url: scrape_details(url))

for data in df['data']:
    print('\n'.join(data))

some of result:
Data :
Decl : DOCTYPE html
Data :
Start tag: html
 attr: ('lang', 'en')
Data :
Start tag: head
Data :
Start tag: meta
 attr: ('charset', 'UTF-8')
End tag : meta
Data :
Start tag: title
Data : The Kingsley School, Royal Leamington Spa - ISC
End tag : title
Data :
Start tag: meta
 attr: ('name', 'robots')
 attr: ('content', 'index, follow')
End tag : meta
Data :
Start tag: link
 attr: ('rel', 'canonical')
 attr: ('href', '/schools/england/warwickshire/royal-leamington-spa/the-kingsley-school/')
End tag : link
Data :
Start tag: meta
 attr: ('name', 'description')
 attr: ('content', 'Information about Royal Leamington Spa, The Kingsley School')
End tag : meta
Data :
Start tag: meta
 attr: ('name', 'viewport')
 attr: ('content', 'width=device-width, initial-scale=1.0')
Data :
Start tag: meta
 attr: ('name', 'format-detection')
 attr: ('content', 'telephone-no')
End tag : meta
Data :
Comment : Google Tag Manager
Data :
Start tag: script
Data : (function(w,d,s,l,i){w[l]=w[l]||[];w[l].push({'gtm.start':
 new Date().getTime(),event:'gtm.js'});var f=d.getElementsByTagName(s)[0],
 j=d.createElement(s),dl=l!='dataLayer'?'&l='+l:'';j.async=true;j.src=
 'https://www.googletagmanager.com/gtm.js?id='+i+dl;f.parentNode.insertBefore(j,f);
 })(window,document,'script','dataLayer','GTM-TCCJT7P');
End tag : script
Data :
Comment : End Google Tag Manager
Data :
Start tag: script
 attr: ('src', '/bundles/upper?v=C-4SuuHJ23ObmSV6TVgJvlUZTwUHbE9Mit8bUEVmwIo1')
End tag : script
Data :
Start tag: link
 attr: ('href', '/bundles/site?v=_nmQ8YRQiebaCD8eMIdpKjF42e4TM9br-6hkul2424o1')
 attr: ('rel', 'stylesheet')
End tag : link
Data :
Start tag: link
 attr: ('rel', 'apple-touch-icon')
 attr: ('sizes', '180x180')
 attr: ('href', '/favicon/isc/apple-touch-icon.png')
Data :
Start tag: link
 attr: ('rel', 'icon')
 attr: ('type', 'image/png')
 attr: ('sizes', '32x32')
 attr: ('href', '/favicon/isc/favicon-32x32.png')
Data :
Start tag: link
 attr: ('rel', 'icon')
 attr: ('type', 'image/png')
 attr: ('sizes', '16x16')
 attr: ('href', '/favicon/isc/favicon-16x16.png')
Data :
Start tag: link
 attr: ('rel', 'manifest')
 attr: ('href', '/favicon/isc/site.webmanifest')
Data :
Start tag: link
 attr: ('rel', 'mask-icon')
 attr: ('href', '/favicon/isc/safari-pinned-tab.svg')
 attr: ('color', '#5bbad5')
Data :
Start tag: meta
 attr: ('name', 'msapplication-TileColor')
 attr: ('content', '#ffc40d')
Data :
Start tag: meta
 attr: ('name', 'msapplication-config')
 attr: ('content', '/favicon/isc/browserconfig.xml')
Data :
Start tag: meta
 attr: ('name', 'theme-color')
 attr: ('content', '#ffffff')
Data :
Start tag: script
Data : var aId = 0; // mz values
 var sId = 0;
 var mID = 0;
End tag : script
Data :
Start tag: script
 attr: ('src', 'https://www.google.com/recaptcha/api.js')
End tag : script
Data :
Start tag: link
 attr: ('href', 'https://maxcdn.bootstrapcdn.com/font-awesome/4.7.0/css/font-awesome.min.css')
 attr: ('rel', 'stylesheet')
 attr: ('integrity', 'sha384-wvfXpqpZZVQGK6TAh5PVlGOfQNHSoD2xbE+QkPxCAFlNEevoEH3Sl0sibVcOQVnN')
 attr: ('crossorigin', 'anonymous')
Data :
Start tag: script
 attr: ('type', 'application/ld+json')
Data : {
 "@context": "https://schema.org",
 "@type": "School",
 "name": "The Kingsley School",
 "url": "https://www.thekingsleyschool.co.uk/",
 "description": "Kingsley provides a friendly, supportive family environment where individuals are encouraged to enjoy learning, and to aim for and achieve excellence. Our principal aim is to enable all our pupils to fulfil their potential wherever it may lie, through our wide range of subjects, lively extra-curricular activities, small classes and highly effective pastoral care.\r\n",
 "address": {
 "@type": "PostalAddress",
 "addressCountry": "England",
 "addressLocality": "Royal Leamington Spa",
 "streetAddress": "Beauchamp Avenue",
 "postalCode": "CV32 5RD",
 "telephone": "+44 (0)1926 425127"
 },
 "geo": {
 "@type": "GeoCoordinates",
 "latitude": 52.2942769,
 "longitude": -1.5379113
 },
 "email": "[email protected]"
}
End tag : script

UPDATE:
specific terms I want to extract:
Head's name: Mr James Murphy-O'Connor (Principal)
ISC associations: HMC, IAPS, GSA, AGBIS, ISBA
Religious affiliation: Church in Wales
Day/boarding type: Day and Full Boarding
Gender profile: Boys and girls taught separately (diamond structure)
Size: 1236
Boys - age range & pupil numbers:
Day: 3 to 18 (522)
Boarding: 7 to 18 (150)
Sixth form: (156)
Girls - age range & pupil numbers:
Day: 3 to 18 (456)
Boarding: 7 to 18 (108)
Sixth form: (104)
...
How to extract HTML data from pandas dataframe column
I'm trying to extract certain elements from a block of webpages that I've extracted and put into a pandas column. I've tried lxml and can't get it to pull out very specific phrases in text. What is the Pythonic way to go? Tried this:
def scrape_details(s):
    results = requests.get(s)
    results2 = etree.parse(StringIO(str(results.content)), parser)
    return results2.xpath('//p')

df['data'] = df['URL'].map(lambda x: scrape_details(x))

Returned:
0 The Kingsley School https://www.isc.co.uk/schools/england/warwickshire/royal-leamington-spa/the-kingsley-school/ warwickshire [[], [[]], [], [], [[], [], [], []], [], [[]], [[]], [[]], [[], [], []], [[], [<Element a at 0x7fd3c003fec0>, <Element a at 0x7fd3c0035a80>, <Element a at 0x7fd3c003fec0>, <Element a at 0x7fd3c0035a80>, <Element a at 0x7fd3c003fec0>, <Element a at 0x7fd3c0035a80>]], [[]], [[]], [[], []], [[]], [[], []], [[], []], [[], []], [[], []], [[]], [[]], [[]], [[]], [[]], [[]], [], [], []]
[ "What elements are you trying to extract? If its general HTML elements you could use the html.parser python package. I just modified the generic parser class they provide near the bottom of the page a lil:\nimport requests\nimport pandas as pd\n\n\n\n\n\n\n\n\nfrom html.parser import HTMLParser\nfrom html.entities import name2codepoint\n\n\nclass MyHTMLParser(HTMLParser):\n def __init__(self):\n self.parsed_html = []\n super().__init__()\n\n def handle_starttag(self, tag, attrs):\n self.parsed_html.append(f\"Start tag: {tag}\")\n for attr in attrs:\n self.parsed_html.append(f\" attr: {attr}\")\n\n def handle_endtag(self, tag):\n self.parsed_html.append(f\"End tag : {tag}\")\n\n def handle_data(self, data):\n self.parsed_html.append(f\"Data : {data.strip()}\")\n\n def handle_comment(self, data):\n self.parsed_html.append(f\"Comment : {data.strip()}\")\n\n def handle_entityref(self, name):\n c = chr(name2codepoint[name])\n self.parsed_html.append(f\"Named ent: {c}\")\n\n def handle_charref(self, name):\n if name.startswith('x'):\n c = chr(int(name[1:], 16))\n else:\n c = chr(int(name))\n self.parsed_html.append(f\"Num ent : {c}\")\n\n def handle_decl(self, data):\n self.parsed_html.append(f\"Decl : {data.strip()}\")\n\n\ndef scrape_details(url):\n parser = MyHTMLParser()\n html_text = requests.get(url).text\n parser.feed(html_text)\n return parser.parsed_html\n\n\ndf = pd.DataFrame(\n {'URL': ['https://www.isc.co.uk/schools/england/warwickshire/royal-leamington-spa/the-kingsley-school/']})\ndf['data'] = df['URL'].map(lambda url: scrape_details(url))\n\nfor data in df['data']:\n print('\\n'.join(data))\n\nsome of result:\nData : \nDecl : DOCTYPE html\nData :\nStart tag: html\n attr: ('lang', 'en')\nData :\nStart tag: head\nData :\nStart tag: meta\n attr: ('charset', 'UTF-8')\nEnd tag : meta\nData :\nStart tag: title\nData : The Kingsley School, Royal Leamington Spa - ISC\nEnd tag : title\nData :\nStart tag: meta\n attr: ('name', 'robots')\n attr: ('content', 'index, follow')\nEnd tag : meta\nData :\nStart tag: link\n attr: ('rel', 'canonical')\n attr: ('href', '/schools/england/warwickshire/royal-leamington-spa/the-kingsley-school/')\nEnd tag : link\nData :\nStart tag: meta\n attr: ('name', 'description')\n attr: ('content', 'Information about Royal Leamington Spa, The Kingsley School')\nEnd tag : meta\nData :\nStart tag: meta\n attr: ('name', 'viewport')\n attr: ('content', 'width=device-width, initial-scale=1.0')\nData :\nStart tag: meta\n attr: ('name', 'format-detection')\n attr: ('content', 'telephone-no')\nEnd tag : meta\nData :\nComment : Google Tag Manager\nData :\nStart tag: script\nData : (function(w,d,s,l,i){w[l]=w[l]||[];w[l].push({'gtm.start':\n new Date().getTime(),event:'gtm.js'});var f=d.getElementsByTagName(s)[0],\n j=d.createElement(s),dl=l!='dataLayer'?'&l='+l:'';j.async=true;j.src=\n 'https://www.googletagmanager.com/gtm.js?id='+i+dl;f.parentNode.insertBefore(j,f);\n })(window,document,'script','dataLayer','GTM-TCCJT7P');\nEnd tag : script\nData :\nComment : End Google Tag Manager\nData :\nStart tag: script\n attr: ('src', '/bundles/upper?v=C-4SuuHJ23ObmSV6TVgJvlUZTwUHbE9Mit8bUEVmwIo1')\nEnd tag : script\nData :\nStart tag: link\n attr: ('href', '/bundles/site?v=_nmQ8YRQiebaCD8eMIdpKjF42e4TM9br-6hkul2424o1')\n attr: ('rel', 'stylesheet')\nEnd tag : link\nData :\nStart tag: link\n attr: ('rel', 'apple-touch-icon')\n attr: ('sizes', '180x180')\n attr: ('href', '/favicon/isc/apple-touch-icon.png')\nData :\nStart tag: link\n attr: ('rel', 'icon')\n attr: ('type', 
'image/png')\n attr: ('sizes', '32x32')\n attr: ('href', '/favicon/isc/favicon-32x32.png')\nData :\nStart tag: link\n attr: ('rel', 'icon')\n attr: ('type', 'image/png')\n attr: ('sizes', '16x16')\n attr: ('href', '/favicon/isc/favicon-16x16.png')\nData :\nStart tag: link\n attr: ('rel', 'manifest')\n attr: ('href', '/favicon/isc/site.webmanifest')\nData :\nStart tag: link\n attr: ('rel', 'mask-icon')\n attr: ('href', '/favicon/isc/safari-pinned-tab.svg')\n attr: ('color', '#5bbad5')\nData :\nStart tag: meta\n attr: ('name', 'msapplication-TileColor')\n attr: ('content', '#ffc40d')\nData :\nStart tag: meta\n attr: ('name', 'msapplication-config')\n attr: ('content', '/favicon/isc/browserconfig.xml')\nData :\nStart tag: meta\n attr: ('name', 'theme-color')\n attr: ('content', '#ffffff')\nData :\nStart tag: script\nData : var aId = 0; // mz values\n var sId = 0;\n var mID = 0;\nEnd tag : script\nData :\nStart tag: script\n attr: ('src', 'https://www.google.com/recaptcha/api.js')\nEnd tag : script\nData :\nStart tag: link\n attr: ('href', 'https://maxcdn.bootstrapcdn.com/font-awesome/4.7.0/css/font-awesome.min.css')\n attr: ('rel', 'stylesheet')\n attr: ('integrity', 'sha384-wvfXpqpZZVQGK6TAh5PVlGOfQNHSoD2xbE+QkPxCAFlNEevoEH3Sl0sibVcOQVnN')\n attr: ('crossorigin', 'anonymous')\nData :\nStart tag: script\n attr: ('type', 'application/ld+json')\nData : {\n \"@context\": \"https://schema.org\",\n \"@type\": \"School\",\n \"name\": \"The Kingsley School\",\n \"url\": \"https://www.thekingsleyschool.co.uk/\",\n \"description\": \"Kingsley provides a friendly, supportive family environment where individuals are encouraged to enjoy learning, and to aim for and achieve\n excellence. Our principal aim is to enable all our pupils to fulfil their potential wherever it may lie, through our wide range of subjects, lively extra-c\nurricular activities, small classes and highly effective pastoral care.\\r\\n\",\n \"address\": {\n \"@type\": \"PostalAddress\",\n \"addressCountry\": \"England\",\n \"addressLocality\": \"Royal Leamington Spa\",\n \"streetAddress\": \"Beauchamp Avenue\",\n \"postalCode\": \"CV32 5RD\",\n \"telephone\": \"+44 (0)1926 425127\"\n },\n \"geo\": {\n \"@type\": \"GeoCoordinates\",\n \"latitude\": 52.2942769,\n \"longitude\": -1.5379113\n },\n \"email\": \"[email protected]\"\n}\nEnd tag : script\n\nUPDATE:\n\nspecific terms I want to extract:\n\nHead's name: Mr James Murphy-O'Connor (Principal)\nISC associations: HMC, IAPS, GSA, AGBIS, ISBA\n\nReligious affiliation: Church in Wales\n\nDay/boarding type: Day and Full Boarding\n\nGender profile: Boys and girls taught separately (diamond structure)\nSize: 1236\n\nBoys - age range & pupil numbers:\n\nDay: 3 to 18 (522)\n\nBoarding: 7 to 18 (150)\n\nSixth form: (156)\n\nGirls - age range & pupil numbers:\n\nDay: 3 to 18 (456)\n\nBoarding: 7 to 18 (108)\n\nSixth form: (104)\n\n...\n\n" ]
[ 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074675148_pandas_python.txt
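Since the UPDATE in the record above lists labelled "Key: value" pairs, a targeted text scan is often simpler than dumping every tag. Here is a minimal sketch with BeautifulSoup; the label list is an illustrative subset, and it assumes each label and its value share one text node on the page, which should be verified against the live markup.
import requests
import pandas as pd
from bs4 import BeautifulSoup

LABELS = ["Head's name", "Religious affiliation", "Day/boarding type", "Size"]

def scrape_fields(url: str) -> dict:
    soup = BeautifulSoup(requests.get(url).text, "html.parser")
    fields = {}
    # walk every text node and keep the ones that start with a known label
    for text in soup.stripped_strings:
        for label in LABELS:
            if text.startswith(label + ":"):
                fields[label] = text.split(":", 1)[1].strip()
    return fields

df = pd.DataFrame({"URL": [
    "https://www.isc.co.uk/schools/england/warwickshire/royal-leamington-spa/the-kingsley-school/"
]})
# one column per extracted label, alongside the original URL column
df = df.join(df["URL"].map(scrape_fields).apply(pd.Series))
print(df.head())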