Dataset schema:
id: stringlengths 40–40
text: stringlengths 29–2.03k
original_text: stringlengths 3–154k
subdomain: stringclasses, 20 values
metadata: dict
a187c2bf01fd4f84df514b6f796eda24a9ff43b7
Q: Group By Non-Missing Values Dplyr I have a large dataframe (≈ 2M observations) that has many duplicates. I am going to delete those duplicates, but I need to keep the non-duplicate rows, conditioned on another value that is not missing (NA). It can be any value imaginable, as long as there is a non-NA. For example:

data <- airquality
data[4:10,3] <- rep(NA,7)
data[1:5,4] <- NA
library(dplyr)
new.data <- data %>% group_by(Ozone) %>% filter(Wind==????)

Here you can see I am not sure what to filter by, as annotated by the "Wind==????". As long as any value (numeric or nominal) is in the Wind column, I would like to keep these unique values, while deleting the duplicates conditioned on non-missing values. Thank you!

A: We can do

data %>% group_by(Ozone) %>% filter(!duplicated(Wind) & !is.na(Wind))
stackoverflow
{ "language": "en", "length": 136, "provenance": "stackexchange_0000F.jsonl.gz:853617", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44507284" }
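A plain-Python sketch of the same group-and-filter logic may help readers who don't use R. The helper name `dedupe_by_group` and the sample rows are invented for illustration; this mirrors, but is not taken from, the dplyr answer above (`group_by(Ozone) %>% filter(!duplicated(Wind) & !is.na(Wind))`):

```python
# Keep, within each Ozone group, only rows whose Wind value is
# non-missing (not None) and not already seen in that group --
# the same logic as dplyr's
#   group_by(Ozone) %>% filter(!duplicated(Wind) & !is.na(Wind))
def dedupe_by_group(rows, group_key, value_key):
    seen = {}   # group value -> set of value_key values already kept
    kept = []
    for row in rows:
        group = row[group_key]
        value = row[value_key]
        if value is None:                 # drop missing values (NA)
            continue
        if value in seen.setdefault(group, set()):
            continue                      # drop duplicate within the group
        seen[group].add(value)
        kept.append(row)
    return kept

rows = [
    {"Ozone": 41, "Wind": 7.4},
    {"Ozone": 41, "Wind": 7.4},   # duplicate within the Ozone=41 group
    {"Ozone": 41, "Wind": None},  # missing Wind
    {"Ozone": 36, "Wind": 8.0},
]
print(dedupe_by_group(rows, "Ozone", "Wind"))
```

As in the dplyr version, the first occurrence of each Wind value within an Ozone group is kept, and rows with missing Wind are dropped.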
5b6b9e4bff1143f38f0455d978aa785dbba74c87
Q: Should SAML 2.0 protocol be used for desktop application single sign on? I'm trying to implement SSO for a desktop application, and I'm currently doing research on what's the best tool to use. I came across SAML, but from my understanding SAML is really meant for web applications. Is there a way to use the SAML protocol for a desktop app? Example use case: the user logs into the machine, and when he or she clicks on the app icon, they are automatically signed in.

A: Your example use case is IWA - Integrated Windows Auth - which is a browser function rather than a protocol function. It typically happens only on domain-joined PCs. SAML relies on redirects, so if you want to do this via SAML you need to add some kind of browser pop-up. You could use OpenID Connect - far more lightweight, built on REST - or the WS-Fed active profile, i.e. WCF rather than HTTP browser functionality.

A: Before going for SAML 2.0 or OpenID Connect, you may want to ask why you need Single Sign On. The reason we have single sign on is that we want to separate the Identity Provider (or OpenID Provider) from the Service Provider (Relying Party). Furthermore, it can be multiplexed in between; that is how identity federation happens. Of course, the session setup happens through server-to-server communication. But when it comes to Single Logout, it involves either front-channel (UI-to-server) or back-channel (server-to-server) communication. Now to your question:

* You can use SAML 2.0 to exchange the identity, but it is just too heavyweight for this. SAML 2.0 has been widely used in the industry, which is why many people are still using it.
* The newer SSO technology is OpenID Connect, an identity layer on top of OAuth 2.0; you may want to take a look at it. OpenID Connect FAQ
* But ultimately, your UI (web, mobile or desktop) is just checking against either a session or an identity store through backend service calls. And how the backend authenticates the user purely depends on the implementation. Don't mix client with server.
stackoverflow
{ "language": "en", "length": 361, "provenance": "stackexchange_0000F.jsonl.gz:853648", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44507376" }
8aa05ce1ebbde487a99ca3da555395e4206f93d1
Q: how to make a bazel target depend on all targets inside another BUILD file I have a use case where, for the deps of a target, I need to depend on all targets of another BUILD file. That BUILD file has about 100 targets, so it's not realistic for me to write down all of them. Is there a quicker way to specify the dependencies?

A: You could create a macro that loops through all existing rules in a BUILD file and creates a filegroup for them. The target in the other BUILD file would then depend on this filegroup. The Bazel docs have a stub example that finds all cc_library rules and aggregates them together.
stackoverflow
{ "language": "en", "length": 113, "provenance": "stackexchange_0000F.jsonl.gz:853649", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44507380" }
6d34c274375d6c1ebc40627df58901c03b7032d1
Q: SDKMAN 5.5.9+231 says package is not a valid candidate on MacOS I have no idea why this started happening, but my SDKMAN stopped working and only displays the following message for whatever package I want to list, install, or use.

$ sdk list java
Stop! java is not a valid candidate.
$ sdk install java
Stop! java is not a valid candidate.
$ sdk use java 8u131
Stop! java is not a valid candidate.

Just typing sdk list works, though. But I can't do anything. My .bash_profile contains the following:

export JAVA_HOME=$(/usr/libexec/java_home)
export SDKMAN_DIR="/Users/myusername/.sdkman"
[[ -s "/Users/myusername/.sdkman/bin/sdkman-init.sh" ]] && source "/Users/myusername/.sdkman/bin/sdkman-init.sh"

A: Also double-check that you put the target SDK in front of the version. This won't work:

sdk install 9.0.4-openjdk
Stop! 9.0.4-openjdk is not a valid candidate.

Specifying it correctly works:

sdk install java 9.0.4-openjdk

A: The problem was on the server side, something to do with SDKMAN's Candidates API. As pointed out in the GitHub issue, you can get over the problem using the following command:

sdk flush candidates

Make sure to restart your terminal after that.

A: Faced the same issue today. I was still getting the same message after using sdk flush candidates and restarting the terminal, so I had to run sdk update and then restart the terminal. This added the candidates back.

A: My problem was that the following export

export SDKMAN_DIR="$HOME/.sdkman"
[[ -s "$HOME/.sdkman/bin/sdkman-init.sh" ]] && source "$HOME/.sdkman/bin/sdkman-init.sh"

was not at the bottom of my ~/.bashrc file. It happened because I installed other tools which ended up at the bottom of ~/.bashrc. When I moved it to the bottom and restarted the terminal, sdk started to work.
stackoverflow
{ "language": "en", "length": 274, "provenance": "stackexchange_0000F.jsonl.gz:853660", "question_score": "13", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44507424" }
77145607f4a78d53a90b2365559253da9bcef174
Q: Python 3.6: No module named '_sqlite3' I have two versions of Python, 2.7 and 3.6, running on Ubuntu 16.04. Using Python 2.7, the code below runs successfully.

# will compile successfully
import sqlite3

But when using Python 3.6, the same code throws a "No module named _sqlite3" error.

# will throw no module error
import sqlite3

Is there any solution to this problem?
stackoverflow
{ "language": "en", "length": 66, "provenance": "stackexchange_0000F.jsonl.gz:853685", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44507479" }
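A common cause of this error is a Python 3.6 built from source without the SQLite development headers, so the `_sqlite3` C extension was never compiled. The following stdlib-only diagnostic (an illustrative sketch, not from the thread) reports which interpreter is running and whether its `sqlite3` module is usable:

```python
import sys

try:
    import sqlite3
    conn = sqlite3.connect(":memory:")  # open a throwaway in-memory database
    version = sqlite3.sqlite_version    # version of the linked SQLite C library
    conn.close()
    status = "sqlite3 OK (SQLite %s)" % version
except ImportError as exc:  # raised when the _sqlite3 C extension is missing
    status = "sqlite3 missing: %s" % exc

# Shows which interpreter ran, since the machine has several Pythons
print(sys.executable, status)
```

If the import fails, a typical remedy on Ubuntu is to install the SQLite headers (e.g. the `libsqlite3-dev` package) and rebuild that Python; interpreters installed from the Ubuntu repositories already ship with the module.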
955b986011e95f48cc09332df442a7c38a4c7203
Q: Powershell 5.1 in Windows 2012 R2 I would like to install/upgrade PowerShell on my Windows 2012 R2 server. By default W2K12 R2 has PowerShell 4.0, so I've downloaded Windows Management Framework 5.1 as the file W2K12-KB3191565-x64.msu. When I run this, I get the error "The update is not applicable to your computer" - OK... researching Google I found that I have to install the full version of .NET Framework 4.5.2 - so OK, I downloaded dotNetFx45_Full_setup.exe and ran it. Error: "Microsoft .NET Framework 4.5 is already a part of this operating system. You do not need to install the .NET Framework 4.5 redistributable. Same or higher version of .NET Framework 4.5 has already been installed on this computer." OK, trying to install the developer version, NDP452-KB2901951-x86-x64-DevPack.exe. Trying to check if FULL is installed:

(Get-ItemProperty -Path 'HKLM:\Software\Microsoft\NET Framework Setup\NDP\v4\Full' -ErrorAction SilentlyContinue).Version -like '4.5*'

Still false. What is going on? Why is it that hard to install the latest PowerShell 5.1 on Windows 2012? :/

A: Get the update at: https://www.microsoft.com/en-us/download/details.aspx?id=54616 Then select the Windows Server 2012 R2 specific file name.
stackoverflow
{ "language": "en", "length": 171, "provenance": "stackexchange_0000F.jsonl.gz:853686", "question_score": "7", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44507481" }
5597a9da342f43e38cf37729a28c524e53e6b7b7
Q: Can't Modify or Resize Amazon EBS Volume I can't modify or resize an Amazon EBS volume, in us-east-1d (N. Virginia). The instance it's connected to is a t2.medium running CentOS 7. Any help would be appreciated.

A: Update: the answer below was correct when written, but was subsequently superseded by this announcement on June 28, 2018: "Starting today, Elastic Volumes extends support to the Amazon Elastic Block Store (EBS) magnetic (standard) volume type. You can now dynamically increase capacity or change the type of magnetic (standard) volumes with no downtime or performance impact using a simple API call or a few console clicks." https://aws.amazon.com/about-aws/whats-new/2018/06/amazon-ebs-extends-elastic-volumes-to-support-ebs-magnetic--standard--volume-type/ The issue that originally triggered this question should no longer occur. In the screenshot, the volume type shows standard. That's a previous-generation magnetic volume. "The previous generation Magnetic volume type is not supported by the volume modification methods [...]" http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/limitations.html Those can't be resized, so the option is grayed out. Your new instance from the AMI probably has a gp2 SSD volume, which does support resize.

A: Stop the instance that the EBS volume is attached to. Then the modify option should be available for use. If that doesn't work, try detaching the volume. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-expand-volume.html

A: As @michael-sqlbot says, you are using a magnetic "standard" EBS volume; you can convert it to an SSD "gp2" volume following these steps:
* Create a snapshot of your EBS volume
* Create a new PIOPS/General Purpose SSD volume from your EBS snapshot
* Detach the existing magnetic volume from the instance
* Attach the new PIOPS/General Purpose SSD volume
More info: https://n2ws.com/blog/how-to-guides/how-to-change-your-ebs-volume-type
stackoverflow
{ "language": "en", "length": 253, "provenance": "stackexchange_0000F.jsonl.gz:853722", "question_score": "6", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44507584" }
6a6319525d162e992c1c2159c429d71ac133eb95
Q: MVC Shows Directory Listing, Not Loading Application I have a project that for some reason isn't loading; it just shows me the directory listing. I have even tried running it in local IIS to see if it makes a difference, and it doesn't. Things I have tried:
* modules runAllManagedModulesForAllRequests="true"
* remove and reinstall IIS from "manage Windows applications" in Control Panel
If I create a template MVC project in Visual Studio and run it, it runs fine, so that makes me think it's not my configuration. Other users with the same project and web config are having no problem getting the site to load. Also, if I try to go to /Home/Index I get a 404 error. Not sure if that helps in any way. Any other ideas I may be missing here? IIS 10 on Windows 10.

A:
* Check that IIS is installed and the features you require are enabled
* Register ASP.NET in Windows 10 for the Framework version you use: dism /online /enable-feature /all /featurename:IIS-ASPNET45
* Remove the read-only tick and grant IIS permissions on the folder your application is deployed to.

A: You need to go over your IIS configuration again; do re-check whether you have the "Default Document" set for your website in IIS Manager. Also, if you're running directly via Visual Studio, check whether you have selected the relevant project as the "Startup Project" for the solution (visit https://msdn.microsoft.com/en-us/library/a1awth7y.aspx). If the above fail, do re-check the IIS permissions for the particular project (visit https://support.microsoft.com/en-us/help/313075/how-to-configure-web-server-permissions-for-web-content-in-iis). Lastly, as mentioned above, do un-check the "Read Only" flag if it's there. Do share your feedback and let me know if your problem still persists.

A: Maybe a bit too late, but just check if you have these features turned on. After checking and installing ASP.NET 4.8, IIS should be able to load your Home/Index page. Thanks
stackoverflow
{ "language": "en", "length": 309, "provenance": "stackexchange_0000F.jsonl.gz:853726", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44507598" }
5679fa776dd9f1d241da52349998825692f15e6f
Q: Why AWS CloudWatch does not have a Memory usage metric for an Autoscaling group I am trying to create a graph of memory usage for an autoscaling group, but I discovered that there is no such metric. There is a memory usage metric, but it is for individual instances. It is not useful here, since instances keep changing in an autoscaling group. I want to know the technical reason why AWS CloudWatch doesn't provide it. Moreover, I want to know a workaround to achieve it.

A: The metrics that AWS provides can be collected at the hypervisor level. But memory metrics (like disk metrics) come from the OS level. So memory usage is a custom metric that you have to periodically push to CloudWatch. "Monitoring Memory and Disk Metrics for Amazon EC2 Linux Instances" shows how to push your metrics to CloudWatch. Install the scripts (along with credentials, if you are not using an IAM role) before creating your AMI and you are set. Each instance in the autoscaling group will start pushing its memory metric to CloudWatch. Not sure how useful it will be for you.
stackoverflow
{ "language": "en", "length": 181, "provenance": "stackexchange_0000F.jsonl.gz:853731", "question_score": "6", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44507609" }
a9d3f03dd4c6e4344e6517193c11e69f55d1c41c
Q: How to disable the default bundling in angular CLI with Angular 4 I am new to webpack and angular-cli. My problem is that when I create an Angular 4 project using angular-cli, everything works fine with ng serve, but everything gets bundled by default. Webpack bundling info: I am not able to see the component.ts files in the browser to debug. Is there any way to disable the bundling? angular-cli version details:

A: When you do ng serve with the CLI, it will create sourcemap files by default. That means that although the source files are bundled together, you can view the original source files in the debugger and step through them. You find them in the DevTools under the Sources tab, in the folder webpack://. If you want to view your prod build like this, you can add the flag -sm for sourcemaps; in the prod build, there won't be sourcemaps by default: ng serve --prod -sm

A: You can also enable and disable this from the developer tools options. Go to settings (press F12, then F1). Under Sources you can enable and disable source mapping. At deploy time you won't ship the map files, so they will not get downloaded. Developer tool settings

A: Use the following build commands to hide your source code under the Sources tab:
ng build --no-sourcemap (development environment)
ng build --env=prod --no-sourcemap (production environment)
stackoverflow
{ "language": "en", "length": 236, "provenance": "stackexchange_0000F.jsonl.gz:853806", "question_score": "9", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44507799" }
1c8eb61c6685bbd0d62161fd04f03ccfaa9a74de
Q: Object file (.o) vs header file for a single file library? Say my library is just a couple of functions that neatly fit into one file and do not require any external dependencies. Is there then any advantage in compiling that library into an .o object file and distributing it that way, rather than just providing it as a header file? I can't seem to think of any, though I'm just a beginner. And if there is an advantage in using an object file, is there any reason to package that single object file into an archive (.a), rather than distributing the object file by itself?

A: For a small library like this there really is no advantage in implementing it in a .o file - you have to supply a header as well anyway. For larger libraries things become less obvious - linking object code is usually faster than compiling large amounts of C++ text, which you then have to link anyway, but on the other hand header-only libraries are somewhat more convenient to use and to distribute.

A: The difference with an archive (whether it holds one object file or many) is that the library linking mechanism lets you specify a search path where libraries are found automatically by the linker, while you can't do that for single object files. Thus, it doesn't matter whether your library contains one object file or more: providing a library is the correct way. As for "distributing it that way rather than just providing it as a header file": if you can provide all the implementation in a single header, that's the preferable choice, though.

A: The only "advantage" would be if you don't want to give your clients access to your source code implementation but just want to provide a header with function prototypes plus a binary object. If you are fine with clients seeing your implementation, then a header-only library can be a good solution.
stackoverflow
{ "language": "en", "length": 320, "provenance": "stackexchange_0000F.jsonl.gz:853827", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44507849" }
f1338c10c642e18ba2a31bbb70a7ef7ef8e25ae4
Q: Docker refused to connect After I docker-compose build and docker-compose up, if I go to localhost:5000 in my browser (which is the port I exposed in the yml file), I get: "This site can’t be reached. localhost refused to connect." However, if I go to 192.168.99.100:5000, the container loads. Is there a way I can fix this issue?

A: Bind your container port to 127.0.0.1:5000. By default, if you don't specify an interface in the port mapping, Docker binds that port on all available interfaces (0.0.0.0). If you want to bind a port only on the localhost interface (127.0.0.1), you have to specify that interface in the port binding.

Docker:
docker run ... -p 127.0.0.1:5000:5000 ...

Docker Compose:
ports:
- "127.0.0.1:5000:5000"

For further information, check the Docker docs: https://docs.docker.com/engine/userguide/networking/default_network/binding/
stackoverflow
{ "language": "en", "length": 126, "provenance": "stackexchange_0000F.jsonl.gz:853860", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44507979" }
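The interface distinction in the answer (0.0.0.0 = all interfaces, 127.0.0.1 = loopback only) is not Docker-specific. The stdlib sketch below (the helper `bind_probe` is invented for illustration and does not touch Docker) shows the same idea at the socket level:

```python
import socket

def bind_probe(interface):
    """Bind a TCP socket to the given interface on an ephemeral port
    and report the address that was actually bound."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((interface, 0))       # port 0 -> the OS picks a free port
    host, port = s.getsockname()
    s.close()
    return host, port

# "0.0.0.0" listens on all interfaces -- like `-p 5000:5000`;
# "127.0.0.1" restricts to loopback -- like `-p 127.0.0.1:5000:5000`.
print(bind_probe("0.0.0.0"))
print(bind_probe("127.0.0.1"))
```

A port published with plain `-p 5000:5000` behaves like the 0.0.0.0 bind; `-p 127.0.0.1:5000:5000` behaves like the loopback bind.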
a7bf0be81dc1e8bfbb525fa4510e4e4a0518a5f3
Q: checkSelfPermission always returns GRANTED I have an Android app and I'd like to check the camera permission. However, even if I turn it off (in the app settings of the simulator or the real device), the result is always 0 (GRANTED). The simulator and real device I use are on SDK 23, Android M. int permissionCheck = ContextCompat.checkSelfPermission(mActivity, Manifest.permission.CAMERA); In the AndroidManifest.xml, I have : <uses-permission android:name="android.permission.CAMERA" /> When I log this : System.out.println("Build.VERSION.SdkInt : " + VERSION.SDK_INT); System.out.println("permissionCheck : " + permissionCheck); I got this : Build.VERSION.SdkInt : 23 permissionCheck : 0 A: In fact, the targetSdkVersion has to be at least 23 in build.gradle, but the solution to this problem was to use : int permissionCheck = PermissionChecker.checkSelfPermission(getReactApplicationContext(), Manifest.permission.CAMERA); Instead of : int permissionCheck = ContextCompat.checkSelfPermission(mActivity, Manifest.permission.CAMERA); PermissionChecker returns the correct answer, but ContextCompat does not. A: Check your "targetSdkVersion" in "build.gradle"; it must be 23 or above. Maybe the issue is that you have set the build version to 23 but the target version is less than 23. Please make sure all your SDK versions (build, target, compile) are set to 23 or above.
stackoverflow
{ "language": "en", "length": 186, "provenance": "stackexchange_0000F.jsonl.gz:853861", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44507982" }
d7da63b500f4243a27ae0ec579b534bad74b4aea
Stackoverflow Stackexchange Q: Copy Table from Local Sql Database to Azure SQL Database How can I go about copying a table from a local SQL database to an Azure SQL database nightly? Maybe I should use SSIS packages? A: Options (in rough order of preference): * *SQL Server Transactional Replication *SSIS *Azure Data Factory (especially for simple table copies) *SQL Server Snapshot Replication *Linked Server (INSERTs will be row-by-row) *Azure Data Sync (still in preview)
Q: Copy Table from Local Sql Database to Azure SQL Database How can I go about copying a table from a local SQL database to an Azure SQL database nightly? Maybe I should use SSIS packages? A: Options (in rough order of preference): * *SQL Server Transactional Replication *SSIS *Azure Data Factory (especially for simple table copies) *SQL Server Snapshot Replication *Linked Server (INSERTs will be row-by-row) *Azure Data Sync (still in preview) A: Looks like a good job for Azure SQL Data Sync. I won't copy-paste the article here, but the main steps are: Step 1 - Create sync group Step 2 - Add sync members Step 3 - Configure sync group (here you can choose tables) The link I've provided has the details A: I used SSIS; it was the quickest way in the end, using a data flow task
stackoverflow
{ "language": "en", "length": 141, "provenance": "stackexchange_0000F.jsonl.gz:853891", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44508062" }
3029d44775db08ad39ab064d4e75614a23d6e227
Stackoverflow Stackexchange Q: Turning off the auto-capitalization of the initial character of the keyboard in iOS? Flutter I'm using Flutter, and I'm building a login screen. The default behavior of the keyboard on iOS seems to auto-uppercase the initial character. I'd like to turn that off. How do I do it? A: Capitalization in text fields is now disabled by default and can be configured via textCapitalization property: import 'package:flutter/services.dart'; TextField(... textCapitalization: TextCapitalization.words ..) — @dmjones, flutter/flutter
Q: Turning off the auto-capitalization of the initial character of the keyboard in iOS? Flutter I'm using Flutter, and I'm building a login screen. The default behavior of the keyboard on iOS seems to auto-uppercase the initial character. I'd like to turn that off. How do I do it? A: Capitalization in text fields is now disabled by default and can be configured via textCapitalization property: import 'package:flutter/services.dart'; TextField(... textCapitalization: TextCapitalization.words ..) — @dmjones, flutter/flutter A: iOS auto-capitalizes the first letter when a word is a proper noun, e.g. Matt or Brazil. So, when you type a dot in an email address, you are effectively creating a word and iOS wants to "correct" it. You can turn this off with autocorrect: false in a TextField or TextFormField widget. A: The UITextAutocapitalizationType is set in FlutterTextInputPlugin. Currently it isn't configurable and it defaults to UITextAutocapitalizationTypeSentences if the type of the field is TextInputType.text and UITextAutocapitalizationTypeNone otherwise. So basically, you can change the text input type to TextInputType.emailAddress or TextInputType.url and it won't be capitalized. If that isn't enough configurability for you, you'd have to change the Flutter engine.
stackoverflow
{ "language": "en", "length": 187, "provenance": "stackexchange_0000F.jsonl.gz:853908", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44508118" }
cd46e4fdbc534badce7170049a4fcabd62a44393
Stackoverflow Stackexchange Q: ios - Disable Fabric Answers but keep Crashlytics I have integrated Fabric Crashlytics and Answers into my app today. According to German law, the user must have the opportunity to disable the collection of analytics data. I found this solution for Android: https://stackoverflow.com/a/36203869/4543961 Can anyone help me find a similar solution for iOS? It should allow me to disable analytics data collection but keep Crashlytics data collection. A: As of Fabric 1.7.2 you explicitly specify the kits you want to use upon initialization of Fabric: [Fabric with:@[[Crashlytics class], [Answers class]]]; Thus, if the user has disabled collection of analytics data, you may wish not to initialize the Answers kit at all. Generally, just don't log any events and don't set any keys to Answers if the user has disabled data collection.
Q: ios - Disable Fabric Answers but keep Crashlytics I have integrated Fabric Crashlytics and Answers into my app today. According to German law, the user must have the opportunity to disable the collection of analytics data. I found this solution for Android: https://stackoverflow.com/a/36203869/4543961 Can anyone help me find a similar solution for iOS? It should allow me to disable analytics data collection but keep Crashlytics data collection. A: As of Fabric 1.7.2 you explicitly specify the kits you want to use upon initialization of Fabric: [Fabric with:@[[Crashlytics class], [Answers class]]]; Thus, if the user has disabled collection of analytics data, you may wish not to initialize the Answers kit at all. Generally, just don't log any events and don't set any keys to Answers if the user has disabled data collection.
stackoverflow
{ "language": "en", "length": 126, "provenance": "stackexchange_0000F.jsonl.gz:853938", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44508209" }
1909d6ebd03ab648156218c683f466be7f69ec54
Stackoverflow Stackexchange Q: c++ How to check if ostringstream is empty I want to check whether an ostringstream object is empty, i.e. whether it holds no character sequence. I checked the cppreference page for ostringstream, but it was still not clear to me. A: You can check its size the way you would for any other std::ostream subclass: std::ofstream ofs; std::streampos pos = ofs.tellp(); // store current location ofs.seekp(0, std::ios_base::end); // go to end bool empty = (ofs.tellp() == 0); // check size == 0 ? ofs.seekp(pos); // restore location For an input stream (istringstream, ifstream, etc.), you'd use tellg() and seekg() instead. Q: Do you need to flush the stream before getting its size?
Q: c++ How to check if ostringstream is empty I want to check whether an ostringstream object is empty, i.e. whether it holds no character sequence. I checked the cppreference page for ostringstream, but it was still not clear to me. A: You can check its size the way you would for any other std::ostream subclass: std::ofstream ofs; std::streampos pos = ofs.tellp(); // store current location ofs.seekp(0, std::ios_base::end); // go to end bool empty = (ofs.tellp() == 0); // check size == 0 ? ofs.seekp(pos); // restore location For an input stream (istringstream, ifstream, etc.), you'd use tellg() and seekg() instead. Q: Do you need to flush the stream before getting its size? A: In theory, you don't, but the std documentation does not mention anything about it... So the need to do so probably depends on the implementation. On an std::ostream, I'd recommend it if you've inserted bytes recently and are not already located at the end of the stream. In most use cases involving output streams you will be appending to the stream and be located at its far end; in that case it is not necessary to store and restore the seek pointer location. Flushing an std::ostringstream is not an expensive operation, in any case. On an output file stream, I'd personally call flush(), since I would not get the size of the stream in the middle of an operation anyway. To sum it up, my personal advice is to flush output streams before getting their size, unless you know for sure you are at the end. Note that for std::ifstream, the technique above is the only way to get the size of the file.
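For an ostringstream specifically, the tellp() check can be wrapped in a small helper (a minimal sketch; the helper name is mine, and for a freshly constructed ostringstream no seek is needed since the put position starts at 0):

```cpp
#include <cassert>
#include <sstream>

// Minimal sketch: report whether an output string stream has produced any
// characters yet. tellp() returns the current put position, which is 0 for
// an empty std::ostringstream. Checking oss.str().empty() also works, but
// it copies the whole buffer just to inspect its size.
bool stream_is_empty(std::ostringstream& oss) {
    return oss.tellp() == 0;
}
```

With a generic std::ostream you would store, seek, and restore the put pointer as in the answer above, since the current position may not be at the end.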
stackoverflow
{ "language": "en", "length": 278, "provenance": "stackexchange_0000F.jsonl.gz:853948", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44508228" }
1fc2001d6e6831bd2a158c74e271539660a56c40
Stackoverflow Stackexchange Q: Increasing memory limit in Python? I am currently using a function making extremely long dictionaries (used to compare DNA strings) and sometimes I'm getting MemoryError. Is there a way to allot more memory to Python so it can deal with more data at once? A: Python doesn’t limit memory usage on your program. It will allocate as much memory as your program needs until your computer is out of memory. The most you can do is reduce the limit to a fixed upper cap. That can be done with the resource module, but it isn't what you're looking for. You'd need to look at making your code more memory/performance friendly.
Q: Increasing memory limit in Python? I am currently using a function making extremely long dictionaries (used to compare DNA strings) and sometimes I'm getting MemoryError. Is there a way to allot more memory to Python so it can deal with more data at once? A: Python doesn’t limit memory usage on your program. It will allocate as much memory as your program needs until your computer is out of memory. The most you can do is reduce the limit to a fixed upper cap. That can be done with the resource module, but it isn't what you're looking for. You'd need to look at making your code more memory/performance friendly. A: Python raises MemoryError when you hit the limit of your system RAM, unless you've defined a lower limit manually with the resource package. Defining your class with __slots__ lets the Python interpreter know that the attributes/members of your class are fixed, which can lead to significant memory savings! Using __slots__ reduces dict creation by the Python interpreter: it tells the interpreter not to create a per-instance dict internally. If the memory consumed by your Python process continues to grow with time, this seems to be a combination of: * *How the C memory allocator in Python works. This is essentially memory fragmentation, because the allocation cannot call ‘free’ unless the entire memory chunk is unused. But the memory chunk usage is usually not perfectly aligned to the objects that you are creating and using. *Using a number of small strings to compare data. Python interns some strings internally, but creating many small strings still puts load on the interpreter. The best way is to create a worker thread or a single-threaded pool to do your work, then invalidate/kill the worker to free up the resources attached to/used in the worker thread.
Below code creates single thread worker : __slot__ = ('dna1','dna2','lock','errorResultMap') lock = threading.Lock() errorResultMap = [] def process_dna_compare(dna1, dna2): with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor: futures = {executor.submit(getDnaDict, lock, dna_key): dna_key for dna_key in dna1} '''max_workers=1 will create single threadpool''' dna_differences_map={} count = 0 dna_processed = False; for future in concurrent.futures.as_completed(futures): result_dict = future.result() if result_dict : count += 1 '''Do your processing XYZ here''' logger.info('Total dna keys processed ' + str(count)) def getDnaDict(lock,dna_key): '''process dna_key here and return item''' try: dataItem = item[0] return dataItem except: lock.acquire() errorResultMap.append({'dna_key_1': '', 'dna_key_2': dna_key_2, 'dna_key_3': dna_key_3, 'dna_key_4': 'No data for dna found'}) lock.release() logger.error('Error in processing dna :'+ dna_key) pass if __name__ == "__main__": dna1 = '''get data for dna1''' dna2 = '''get data for dna2''' process_dna_compare(dna1,dna2) if errorResultMap != []: ''' print or write to file the errorResultMap''' Below code will help you understand memory usage : import objgraph import random import inspect class Dna(object): def __init__(self): self.val = None def __str__(self): return "dna – val: {0}".format(self.val) def f(): l = [] for i in range(3): dna = Dna() #print “id of dna: {0}”.format(id(dna)) #print “dna is: {0}”.format(dna) l.append(dna) return l def main(): d = {} l = f() d['k'] = l print("list l has {0} objects of type Dna()".format(len(l))) objgraph.show_most_common_types() objgraph.show_backrefs(random.choice(objgraph.by_type('Dna')), filename="dna_refs.png") objgraph.show_refs(d, filename='myDna-image.png') if __name__ == "__main__": main() Output for memory usage : list l has 3 objects of type Dna() function 2021 wrapper_descriptor 1072 dict 998 method_descriptor 778 builtin_function_or_method 759 tuple 667 
weakref 577 getset_descriptor 396 member_descriptor 296 type 180 For more reading on slots, please visit: https://elfsternberg.com/2009/07/06/python-what-the-hell-is-a-slot/ A: Try updating your Python from 32-bit to 64-bit. Simply type python in the command line and you will see which Python you have. The memory available to 32-bit Python is very low.
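The __slots__ point above can be checked directly: a slotted instance has no per-instance __dict__ at all, which is where the savings come from (a minimal sketch; the class names are made up, and exact byte counts vary across Python versions):

```python
import sys

class PlainDNA:
    """Regular class: every instance carries its own attribute __dict__."""
    def __init__(self, val):
        self.val = val

class SlottedDNA:
    """__slots__ tells the interpreter the attribute set is fixed,
    so no per-instance __dict__ is created."""
    __slots__ = ("val",)
    def __init__(self, val):
        self.val = val

plain = PlainDNA("ACGT")
slotted = SlottedDNA("ACGT")

# The plain instance's cost must include its attribute dict;
# the slotted instance has no dict to account for.
plain_cost = sys.getsizeof(plain) + sys.getsizeof(plain.__dict__)
slotted_cost = sys.getsizeof(slotted)
print(plain_cost, slotted_cost)
```

With millions of small objects alive at once (as when comparing DNA strings), the per-instance saving adds up quickly.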
stackoverflow
{ "language": "en", "length": 590, "provenance": "stackexchange_0000F.jsonl.gz:853957", "question_score": "24", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44508254" }
d869d52a01ee38d4bf13e3297a6441c9028a1fae
Stackoverflow Stackexchange Q: Reload PM2 configuration file I have problems with reloading the PM2 configuration file after editing it: { "apps": [ ... { "name": "foo", "script": "foo/index.js", "cwd": "foo", "watch": false } ] } I previously did pm2 restart config.json and pm2 reload config.json and pm2 gracefulReload config.json but they didn't reload the configuration for existing apps (the changes in app config did not apply). The only way that worked for me was: pm2 delete foo pm2 restart config.json How is this supposed to be done? A: If you are using pm2 for local development and have problems with reloading the config you should run: $ pm2 delete ecosystem.config.js This deletes existing services (don't worry, no files will be deleted). Then to reload the configuration run: $ pm2 start ecosystem.config.js (Tip: you may need to replace ecosystem.config.js with your config file name) This is a very rough way of reloading, but it's good if you want a clean slate. It's effective for solving some issues, like the one I had with node-config - I was getting NODE_APP_INSTANCE warnings even though I added instance_var to my ecosystem config.
Q: Reload PM2 configuration file I have problems with reloading the PM2 configuration file after editing it: { "apps": [ ... { "name": "foo", "script": "foo/index.js", "cwd": "foo", "watch": false } ] } I previously did pm2 restart config.json and pm2 reload config.json and pm2 gracefulReload config.json but they didn't reload the configuration for existing apps (the changes in app config did not apply). The only way that worked for me was: pm2 delete foo pm2 restart config.json How is this supposed to be done? A: If you are using pm2 for local development and have problems with reloading the config you should run: $ pm2 delete ecosystem.config.js This deletes existing services (don't worry, no files will be deleted). Then to reload the configuration run: $ pm2 start ecosystem.config.js (Tip: you may need to replace ecosystem.config.js with your config file name) This is a very rough way of reloading, but it's good if you want a clean slate. It's effective for solving some issues, like the one I had with node-config - I was getting NODE_APP_INSTANCE warnings even though I added instance_var to my ecosystem config. A: As the reference states, configurations are no longer reloaded: Starting PM2 v2.1.X, environments are immutable by default, that means they will never be updated unless you tell PM2 to do so; to update configurations, you will need to use the --update-env option. So this should be pm2 startOrReload config.js --update-env A: If you need a COMPLETE PURGE and restart of the configuration: pm2 kill # kill ongoing pm2 processes pm2 flush # OPTIONAL if logs need to be removed pm2 start /path/to/ecosystem.config.js # load config file pm2 save # save current process list / save changes after loading config file This can be useful if you keep getting pm2 feedback: status: errored. If you only want to reload, i.e. 
"refresh" the configuration, use this: pm2 startOrReload config.js --update-env You should try the reload first, and if the configuration is still not refreshed, try the complete purge.
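For reference, the JSON config from the question translated into a minimal ecosystem.config.js might look roughly like this (a sketch; the app name and paths are the question's placeholders, and the env block is shown only because --update-env re-reads it on startOrReload):

```javascript
// ecosystem.config.js -- a minimal sketch mirroring the question's JSON.
// The app name and paths are the question's placeholders, not recommendations.
const config = {
  apps: [
    {
      name: "foo",
      script: "foo/index.js",
      cwd: "foo",
      watch: false,
      // Environment values are immutable between restarts unless you pass
      // --update-env, e.g.: pm2 startOrReload ecosystem.config.js --update-env
      env: {
        NODE_ENV: "production",
      },
    },
  ],
};

module.exports = config;
```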
stackoverflow
{ "language": "en", "length": 328, "provenance": "stackexchange_0000F.jsonl.gz:853976", "question_score": "18", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44508316" }
7698358e5c351a511945446e8867a0d3f56cc7f8
Stackoverflow Stackexchange Q: Is there a way to default to the number pad when using the NamePhonePad Swift keyboard type? Right now when using textField.keyboardType = .namePhonePad, it will automatically start out showing the alpha keyboard, and you then have to press the number toggle to see the number pad. Is there any way to reverse that and have the number pad show first as the default and then be able to manually toggle to the alpha keyboard? I know it's possible to create a custom button to be able to toggle the keyboard type, as well as changing the keyboard type based on what they type into the text field, but I just want a way to change its default first keyboard.
Q: Is there a way to default to the number pad when using the NamePhonePad Swift keyboard type? Right now when using textField.keyboardType = .namePhonePad, it will automatically start out showing the alpha keyboard, and you then have to press the number toggle to see the number pad. Is there any way to reverse that and have the number pad show first as the default and then be able to manually toggle to the alpha keyboard? I know it's possible to create a custom button to be able to toggle the keyboard type, as well as changing the keyboard type based on what they type into the text field, but I just want a way to change its default first keyboard.
stackoverflow
{ "language": "en", "length": 121, "provenance": "stackexchange_0000F.jsonl.gz:853998", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44508376" }
c1bcc7c2560870495e9398b4a46301ed338c983a
Stackoverflow Stackexchange Q: Remove top padding from AlertDialog? So I made an AlertDialog, and there's this nasty top padding on it for some reason when using the default Android Lollipop+ theme, and I can't figure out how to edit it or get rid of it. Here's the code that I use to produce the AlertDialog AlertDialog.Builder builder; if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) { builder = new AlertDialog.Builder(this, android.R.style.Theme_Material_Dialog_Alert); } else { builder = new AlertDialog.Builder(this); } builder.setTitle("Found corrupted files") .setMessage("We've found " + count + " images that are either missing or " + "corrupt. Should we remove these entries from the list?") .setPositiveButton("Yeah", new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface d, int i) { MainActivity.this.removeCorruptedImages(); d.cancel(); } }) .setNegativeButton("No", new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface d, int i) { d.cancel(); } }) .setIcon(R.drawable.ic_warning_white_24dp); AlertDialog al = builder.create(); al.show(); And it produces this: I want to get rid of that padding/blank space above the title. A: Just try this line before showing your AlertDialog AlertDialog al = builder.create(); al.requestWindowFeature(Window.FEATURE_NO_TITLE); al.show();
Q: Remove top padding from AlertDialog? So I made an AlertDialog, and there's this nasty top padding on it for some reason when using the default Android Lollipop+ theme, and I can't figure out how to edit it or get rid of it. Here's the code that I use to produce the AlertDialog AlertDialog.Builder builder; if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) { builder = new AlertDialog.Builder(this, android.R.style.Theme_Material_Dialog_Alert); } else { builder = new AlertDialog.Builder(this); } builder.setTitle("Found corrupted files") .setMessage("We've found " + count + " images that are either missing or " + "corrupt. Should we remove these entries from the list?") .setPositiveButton("Yeah", new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface d, int i) { MainActivity.this.removeCorruptedImages(); d.cancel(); } }) .setNegativeButton("No", new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface d, int i) { d.cancel(); } }) .setIcon(R.drawable.ic_warning_white_24dp); AlertDialog al = builder.create(); al.show(); And it produces this: I want to get rid of that padding/blank space above the title. A: Just try this line before showing your AlertDialog AlertDialog al = builder.create(); al.requestWindowFeature(Window.FEATURE_NO_TITLE); al.show(); A: Did you try setting the view spacing like this : a1.setView(View view, int viewSpacingLeft, int viewSpacingTop, int viewSpacingRight, int viewSpacingBottom)
stackoverflow
{ "language": "en", "length": 187, "provenance": "stackexchange_0000F.jsonl.gz:854022", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44508455" }
4bbc6d001f90c769f19c463d4f11bc14cbbbc990
Stackoverflow Stackexchange Q: Cloudwatch logs on terminal I am using AWS Lambda for my application. For logs, I have to view them in the UI only, which I really don't like. Is there a way I could connect to Cloudwatch logs locally and then see the logs with the tail command? Or could I access the Cloudwatch server to see logs? Basically, I wanted to see logs on my terminal. If there is any way to do that, please let me know. Thanks for your help. A: You can use the AWS CLI to get your logs in real time. See: get-log-events AWS doesn't provide functionality to tail the log. There are a few 3rd-party tools you can use. I have used jorgebastida/awslogs, which was sufficient for my needs. Update 02/25/2021: Thanks to @adavea, I just checked and found AWS has added a new feature to tail the CW logs: aws logs tail --follow (boolean) Whether to continuously poll for new logs.
Q: Cloudwatch logs on terminal I am using AWS Lambda for my application. For logs, I have to view them in the UI only, which I really don't like. Is there a way I could connect to Cloudwatch logs locally and then see the logs with the tail command? Or could I access the Cloudwatch server to see logs? Basically, I wanted to see logs on my terminal. If there is any way to do that, please let me know. Thanks for your help. A: You can use the AWS CLI to get your logs in real time. See: get-log-events AWS doesn't provide functionality to tail the log. There are a few 3rd-party tools you can use. I have used jorgebastida/awslogs, which was sufficient for my needs. Update 02/25/2021: Thanks to @adavea, I just checked and found AWS has added a new feature to tail the CW logs: aws logs tail --follow (boolean) Whether to continuously poll for new logs. A: There are some command line tools like cwtail and awslogs that do a -f follow tail. Your other option is a free tool I created called SenseLogs that does a live tail in your browser. It is 100% browser based. See https://github.com/sensedeep/senselogs/blob/master/README.md for details. A: On my linux/macosx/cygwin console, this will give you the latest log file. 
Substitute $1 with your group name echo aws logs get-log-events --log-group-name /aws/lambda/$1 --log-stream-name `aws logs describe-log-streams --log-group-name /aws/lambda/$1 --max-items 1 --descending --order-by LastEventTime | grep logStreamName | cut -f2 -d: | sed 's/,//'|sed 's/\"/'\''/g'`| sh - Note that you will need to install awscli (https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) I wrapped the above in a sh function function getcw() { echo aws logs get-log-events --log-group-name /aws/lambda/$1 --log-stream-name `aws logs describe-log-streams --log-group-name /aws/lambda/$1 --max-items 1 --descending --order-by LastEventTime | grep logStreamName | cut -f2 -d: | sed 's/,//'|sed 's/\"/'\''/g'`| sh - } and can view the latest log for my logs in chai-lambda-trigger using the command $ getcw chai-lambda-trigger If you wanted just the tail of the output, you could do $ getcw chai-lambda-trigger | tail
stackoverflow
{ "language": "en", "length": 340, "provenance": "stackexchange_0000F.jsonl.gz:854025", "question_score": "6", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44508460" }
833fb55ef408605efc5ba1c19b1f097a0091925a
Stackoverflow Stackexchange Q: Algorithm that numpy is using for `numpy.linalg.lstsq` I am doing some linear algebra and I noticed that numpy.linalg.lstsq() is doing what I need. I assume that np.linalg.lstsq() is using a built-in algorithm from LAPACK or something. What algorithm is numpy.linalg.lstsq using? For some background, assume that I have a matrix A that has more columns than rows. For example: import numpy A = numpy.random.randn(4, 5) print(numpy.linalg.lstsq(A, A)) The resulting printed value happens to be equivalent to the following formula: A.T.dot(np.linalg.inv(A.dot(A.T))).dot(A) If I calculate numpy.linalg.lstsq(A, A) as above I would expect to get the identity matrix. However, it is giving another value. numpy.linalg.lstsq is obviously not using the formula for linear regression, because np.linalg.inv(A.T.dot(A)) is not computable because A.T.dot(A) is singular. You can see from my post on math.stackexchange.com that I am searching for another way to calculate the above algorithm. EDIT: I am not looking for a way to calculate the identity matrix. I am looking to copy the LAPACK formula that numpy.linalg.lstsq is using. Look at my older post above for background information.
Q: Algorithm that numpy is using for `numpy.linalg.lstsq` I am doing some linear algebra and I noticed that numpy.linalg.lstsq() is doing what I need. I assume that np.linalg.lstsq() is using a built-in algorithm from LAPACK or something. What algorithm is numpy.linalg.lstsq using? For some background, assume that I have a matrix A that has more columns than rows. For example: import numpy A = numpy.random.randn(4, 5) print(numpy.linalg.lstsq(A, A)) The resulting printed value happens to be equivalent to the following formula: A.T.dot(np.linalg.inv(A.dot(A.T))).dot(A) If I calculate numpy.linalg.lstsq(A, A) as above I would expect to get the identity matrix. However, it is giving another value. numpy.linalg.lstsq is obviously not using the formula for linear regression, because np.linalg.inv(A.T.dot(A)) is not computable because A.T.dot(A) is singular. You can see from my post on math.stackexchange.com that I am searching for another way to calculate the above algorithm. EDIT: I am not looking for a way to calculate the identity matrix. I am looking to copy the LAPACK formula that numpy.linalg.lstsq is using. Look at my older post above for background information.
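For what it's worth, numpy's documentation states that lstsq is backed by LAPACK's SVD-based driver gelsd, which returns the minimum-norm least-squares solution. That is exactly what applying the Moore-Penrose pseudoinverse gives, so for a wide matrix, lstsq(A, A) yields pinv(A) @ A, a projector onto the row space of A rather than the identity. A small check (a sketch; assumes numpy is installed):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 5))  # wide: more columns than rows
b = rng.standard_normal(4)

# lstsq returns the minimum-norm solution (LAPACK gelsd, SVD-based)
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

# The SVD-based pseudoinverse produces the same vector
x_pinv = np.linalg.pinv(A) @ b

# For a full-row-rank wide A the system is exactly solvable,
# so the residual vanishes
residual = np.linalg.norm(A @ x_lstsq - b)
print(np.allclose(x_lstsq, x_pinv), residual)
```

This also explains the observation in the question: pinv(A) @ A is symmetric idempotent (a projection), not the 5x5 identity, because A has rank at most 4.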
stackoverflow
{ "language": "en", "length": 175, "provenance": "stackexchange_0000F.jsonl.gz:854052", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44508561" }
f973ff06ca6e7e1b55f8678f7a6998ad36cd4778
Stackoverflow Stackexchange Q: SQL Implied Null values in joined table In joining two tables with a one to many relationship, I want a result displaying all the values in the many table with null values in the one table. Here's an example: tbl_Platform: PriKey = PlatformID PlatformID PlatformDesc 1 Application 2 Cloud 3 Storage 4 Backup tbl_Missed: PriKey= CustomerID+Week+PlatformID CustomerID Week Missed Platform ID 49 1 2017-05-19 1 Output desired: CustomerID Week Missed PlatformDesc 49 1 2017-05-19 Application 49 1 null Cloud 49 1 null Storage 49 1 null Backup The closest I've been able to come is using a cross join as follows: SELECT dbo.tbl_Missed.CustomerID, dbo.tbl_Missed.Week, dbo.tbl_Missed.Missed, dbo.tbl_Platform.PlatformDesc FROM dbo.tbl_Platform CROSS JOIN dbo.tbl_MissedSPT Which gives me: 49 1 2017-05-19 Application 49 1 2017-05-19 Cloud 49 1 2017-05-19 Storage 49 1 2017-05-19 Backup A: It seems as if you want to take the value of attribute missed if there is a match in platformID, and null otherwise. Try the following (hope there are no typos): SELECT dbo.tbl_Missed.CustomerID, dbo.tbl_Missed.week, CASE WHEN dbo.tbl_Missed.PlatformID = dbo.tbl_Platform.PlatformID THEN dbo.tbl_Missed.missed ELSE NULL END as missed, dbo.tbl_Platform.PlatformDesc FROM dbo.tbl_Platform CROSS JOIN dbo.tbl_MissedSPT
Q: SQL Implied Null values in joined table In joining two tables with a one to many relationship, I want a result displaying all the values in the many table with null values in the one table. Here's an example: tbl_Platform: PriKey = PlatformID PlatformID PlatformDesc 1 Application 2 Cloud 3 Storage 4 Backup tbl_Missed: PriKey= CustomerID+Week+PlatformID CustomerID Week Missed Platform ID 49 1 2017-05-19 1 Output desired: CustomerID Week Missed PlatformDesc 49 1 2017-05-19 Application 49 1 null Cloud 49 1 null Storage 49 1 null Backup The closest I've been able to come is using a cross join as follows: SELECT dbo.tbl_Missed.CustomerID, dbo.tbl_Missed.Week, dbo.tbl_Missed.Missed, dbo.tbl_Platform.PlatformDesc FROM dbo.tbl_Platform CROSS JOIN dbo.tbl_MissedSPT Which gives me: 49 1 2017-05-19 Application 49 1 2017-05-19 Cloud 49 1 2017-05-19 Storage 49 1 2017-05-19 Backup A: It seems as if you want to take the value of attribute missed if there is a match in platformID, and null otherwise. Try the following (hope there are no typos): SELECT dbo.tbl_Missed.CustomerID, dbo.tbl_Missed.week, CASE WHEN dbo.tbl_Missed.PlatformID = dbo.tbl_Platform.PlatformID THEN dbo.tbl_Missed.missed ELSE NULL END as missed, dbo.tbl_Platform.PlatformDesc FROM dbo.tbl_Platform CROSS JOIN dbo.tbl_MissedSPT
stackoverflow
{ "language": "en", "length": 184, "provenance": "stackexchange_0000F.jsonl.gz:854057", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44508576" }
a890d883853534f05190123c01766cb0cda8b559
Stackoverflow Stackexchange Q: How to determine a Power BI report's width and height for embedding I am trying to embed a Power BI report into an iFrame in a web page. I have a list of reports gathered from the Power BI Rest API, and I would like to dynamically load the reports into an iFrame on the same page. Only problem is, I can't seem to find a way to figure out the report's width and height. I have a fixed-width frame, so I'd like to calculate the needed height somehow (though if I can get the report dimensions / ratios I can figure that part out). I can't access the iFrame content height after load due to javascript cross-domain restrictions. A: Please try adding the code below to your embed config settings: settings: { layoutType: models.LayoutType.Custom, customLayout: { displayOption: models.DisplayOption.FitToPage } } I hope this helps.
Q: How to determine a Power BI report's width and height for embedding I am trying to embed a Power BI report into an iFrame in a web page. I have a list of reports gathered from the Power BI REST API, and I would like to dynamically load the reports into an iFrame on the same page. The only problem is, I can't seem to find a way to figure out the report's width and height. I have a fixed-width frame, so I'd like to calculate the needed height somehow (though if I can get the report dimensions / ratios I can figure that part out). I can't access the iFrame content height after load due to JavaScript cross-domain restrictions.

A: Please try adding the code below to your embed config settings:

settings: {
    layoutType: models.LayoutType.Custom,
    customLayout: {
        displayOption: models.DisplayOption.FitToPage
    }
}

I hope this helps.

A: I'll put my project code here. I have a div element "reportContainer". The iFrame width & height are always 100% and this is not manageable; you can add a height and width to the container div.

<div id="reportContainer" hidden="hidden" style="height:85vh; width:85vw"></div>
@if (Model.ReportMode == Embed.Models.ReportMode.ExistingReport)
{
    <script>
        var embedReportId = "@Model.CurrentReport.EmbedConfig.Id";
        var embedUrl = "@Html.Raw(Model.CurrentReport.EmbedConfig.EmbedUrl)";
        var accessToken = "@Model.CurrentReport.EmbedConfig.EmbedToken.Token";
        var reportContainer = document.getElementById('reportContainer');
        // call embedReport utility function defined inside App.ts
        PowerBIEmbedManager.embedReport(embedReportId, embedUrl, accessToken, reportContainer);
    </script>
}

Please see the rendered image.

A: I like to remove the styling from the iFrame using JavaScript and then rely on CSS. I embed into a div called reportContainer:

<div id="reportContainer"></div>

I use this CSS to style the reportContainer div:

<style>
    #reportContainer {
        min-height: 800px;
        min-width: 1330px;
    }
</style>

I use this JavaScript to remove the style="width:100%;height:100%" attribute from the iFrame and set the iFrame height and width attributes to the height and width of the reportContainer div:

<script>
    // make this a function so you can pass in a DIV name to support the ability
    // to have multiple reports on a page
    function resizeIFrameToFitContent(iFrame) {
        var reportContainer = document.getElementById('reportContainer');
        iFrame.width = reportContainer.clientWidth;
        iFrame.height = reportContainer.clientHeight;
    }

    window.addEventListener('DOMContentLoaded', function (e) {
        // powerbi.js doesn't give the embedded iFrames an ID, so we need to loop to find them,
        // assuming the only iFrames on any of our pages are the ones we are embedding.
        var iframes = document.querySelectorAll("iframe");
        for (var i = 0; i < iframes.length; i++) {
            resizeIFrameToFitContent(iframes[i]);
            // The Power BI JavaScript adds "width:100%;height:100%;" in the style attribute,
            // which causes sizing issues. We'll style it from JavaScript and CSS, so we strip
            // the inline style attribute from the iFrame.
            iframes[i].attributes.removeNamedItem("style");
            //alert(iframes[i].parentNode.id); // gets the parent div containing the iFrame. Can use
            // this to make sure we're resizing the right iFrame if we have multiple reports on one page.
        }
    });
</script>

Now you can easily manage the size of the reportContainer div using CSS. Not sure if this is the best approach, but it has worked well for me. Enjoy.

A: You can't access iFrame content if it's loading from another domain. It's not allowed in browsers. If you can load the content from the domain where your code is located, then it can be done.
stackoverflow
{ "language": "en", "length": 520, "provenance": "stackexchange_0000F.jsonl.gz:854064", "question_score": "7", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44508590" }
bdbea906ed7bca9d96cf015d26be752fb5d023b2
Q: Angular 2 drag & drop I have these functions in my index.html of my Angular 2 project:

<script>
    function allowDrop(ev) {
        ev.preventDefault();
    }

    function drag(ev) {
        ev.dataTransfer.setData("text", "teststring");
    }

    function drop(ev) {
        ev.preventDefault();
        var data = ev.dataTransfer.getData("text");
        console.log(data);
    }
</script>

When I put the functions in my component, it gives the following errors:

Uncaught ReferenceError: drag is not defined at HTMLImageElement.ondragstart
Uncaught ReferenceError: allowDrop is not defined at HTMLDivElement.ondragover

The elements using the functions are in the component HTML:

<div ondrop="drop(event)" ondragover="allowDrop(event)">
    <img src="smiley.png" draggable="true" ondragstart="drag(event)">
</div>
<div ondrop="drop(event)" ondragover="allowDrop(event)">
</div>

A: There's no need to move your code to index.html, just do this: change ondrop, ondragover, ondragstart, event to (drop), (dragover), (dragstart), $event respectively in your template (html).

A: Why are you defining them in the index.html file? If you use any functions in the component HTML, you need to define those functions in the component.ts file. So move these functions into the component class.

A: If you are going to use drag and drop frequently, I recommend you this library: https://www.npmjs.com/package/ng2-dnd

A: I found a solution on how to implement HTML5 drag and drop in Angular 2: http://www.radzen.com/blog/angular-drag-and-drop/

A: Declare a local variable to save data. For more information see: https://developer.mozilla.org/en-US/docs/Web/Events/drop
stackoverflow
{ "language": "en", "length": 200, "provenance": "stackexchange_0000F.jsonl.gz:854090", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44508668" }
e9fad6bff72a747b841e4223fc52fcaba72dbcdd
Q: Dapper parameters not being replaced I've been using Dapper on this project for months and there have been no issues. But now I have to use raw SQL instead of stored procedures with Dapper to create and alter users and roles in the database, since sp_addrolemember is deprecated now. I'm using this as my Dapper code:

conn.Open();
var p = new DynamicParameters();
p.Add("@UserLogin", user);
p.Add("@UserRole", role);
conn.Execute("Create User @UserLogin", p);
conn.Execute("ALTER ROLE @UserRole ADD MEMBER @UserLogin", p);

I keep getting the same error: Invalid syntax at @UserLogin. I'm passing my domain\username as the login, and I've tried it without domain\, too. Same thing. Can someone shed some light on this?

A: I'm not familiar with Dapper, but CREATE USER and ALTER ROLE are DDL statements, and parameters can only stand in for literal values, not for identifiers such as user or role names. So you could try embedding the parameter values into the SQL strings you are sending for execution:

conn.Open();
conn.Execute("Create User " + user);
conn.Execute("ALTER ROLE " + role + " ADD MEMBER " + user);

Note that this comes with security risks (SQL injection) and may cause other problems!
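Because the values must be embedded in the SQL text, quoting the identifiers matters. A hedged, language-neutral sketch of the idea in Python; the quote_name helper is a hypothetical stand-in for SQL Server's QUOTENAME function, bracketing the name and doubling any closing bracket:

```python
def quote_name(identifier):
    # Bracket-quote a SQL Server identifier, doubling any closing bracket
    # so the value cannot break out of the quoting.
    return "[" + identifier.replace("]", "]]") + "]"

def build_create_user(login):
    # DDL cannot take parameters, so the login is embedded, quoted.
    return "CREATE USER " + quote_name(login)

def build_add_member(role, login):
    return "ALTER ROLE " + quote_name(role) + " ADD MEMBER " + quote_name(login)

print(build_create_user("DOMAIN\\someuser"))
print(build_add_member("db_datareader", "DOMAIN\\someuser"))
```

In the real C#/Dapper code the same effect is usually achieved by wrapping the values in QUOTENAME server-side or validating them against a whitelist before concatenation.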
stackoverflow
{ "language": "en", "length": 164, "provenance": "stackexchange_0000F.jsonl.gz:854166", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44508901" }
f9194e7e715a7a7205bf316f4efbd11bfb9f5b58
Q: How to create multiple markers with infoWindow showing google maps api I'm using Google Directions to draw a route between two points and setting a marker, and I need to show a window with some information on both of these markers, but according to Google I can only show one window at a time! Yet in the Uber app they did it with both points. This is what I did:

public void drawRoute(){
    PolylineOptions po;
    if(polyline == null){
        po = new PolylineOptions();
        for(int i = 0, tam = latLngs.size(); i < tam; i++){
            po.add(latLngs.get(i));
        }
        po.color(Color.BLACK).width(10);
        polyline = mMap.addPolyline(po);
        LatLng myCurrentLocation = new LatLng(lat, lon);
        mMap.moveCamera(CameraUpdateFactory.newLatLngZoom(myCurrentLocation, 11));
        Marker mMarker;
        mMarker = mMap.addMarker(new MarkerOptions().position(finalLocaltion).title(finalLocationName));
        mMarker.showInfoWindow();
    }
    else{
        polyline.setPoints(latLngs);
    }
}

The window only appears when I click, and not by default!
stackoverflow
{ "language": "en", "length": 131, "provenance": "stackexchange_0000F.jsonl.gz:854172", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44508915" }
eeeb23b5056123e21bc7a1dc275d97a04ebe6c70
Q: Make pristine Angular form control dirty There is a reactive form in Angular 4, and some control is supposed to be set programmatically at some point. this.form = formBuilder.group({ foo: '' }); ... this.form.controls.foo.setValue('foo'); How can control pristine/dirty state be affected? Currently I'm using both form and foo pristine states, something like: <form [formGroup]="form"> <input [formControl]="form.controls.foo"> </form> <p *ngIf="form.controls.foo.pristine"> {{ form.controls.foo.errors | json }} </p> <button [disabled]="form.pristine">Submit</button> If pristine/dirty is supposed to designate only human interaction and can't be changed programmatically, what solution would be preferable here? A: Each instance of formControl has markAsDirty() and markAsPristine() methods (inherited from AbstractControl), so, you should be able to run this.form.controls.foo.markAsPristine() or better, using reactive forms API: this.form.get('foo').markAsPristine() or even this.form.markAsPristine() the same may be done with markAsDirty() method
stackoverflow
{ "language": "en", "length": 127, "provenance": "stackexchange_0000F.jsonl.gz:854194", "question_score": "17", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44508982" }
a4d2fa0eed31a563ba1e6eacfe4269944649fe4c
Q: Spring Cloud Feign Non blocking I/O or Asynchronous Call I am developing microservices using the Spring Cloud platform, where service1 calls multiple other microservices, e.g. service2, service3, service4, etc. These services can be called in parallel, and service1 will aggregate the result. Can I use Spring Cloud Feign (http://cloud.spring.io/spring-cloud-static/Dalston.SR1/#spring-cloud-feign) to generate a REST client and call the services asynchronously, or should I use Spring 4's AsyncRestTemplate to call the services asynchronously?

A: I have used CompletableFuture to chain async calls to multiple microservices using the Feign client, however I was ultimately not successful. Please go through the link below for further information. What I understood is that Feign is not designed for asynchronous invocation or zero-copy I/O. https://github.com/OpenFeign/feign/issues/361
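Independently of Feign, the fan-out/aggregate pattern that service1 needs is language-agnostic: issue the downstream calls in parallel, then gather the results. Below is a hedged Python illustration of that pattern using a thread pool; call_service is a stand-in for a real blocking REST client call, not any actual API:

```python
from concurrent.futures import ThreadPoolExecutor

def call_service(name):
    # Stand-in for a blocking REST call to one downstream microservice.
    return {"service": name, "status": "ok"}

def aggregate(service_names):
    # Fan out the blocking calls onto a pool, then gather the results.
    # This is roughly what chaining CompletableFutures over a blocking
    # client achieves on the Java side.
    with ThreadPoolExecutor(max_workers=len(service_names)) as pool:
        results = list(pool.map(call_service, service_names))
    return {r["service"]: r["status"] for r in results}

print(aggregate(["service2", "service3", "service4"]))
```

The total latency becomes roughly the slowest downstream call instead of the sum of all of them, which is the whole point of parallelizing the fan-out.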
stackoverflow
{ "language": "en", "length": 117, "provenance": "stackexchange_0000F.jsonl.gz:854209", "question_score": "16", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44509028" }
f52f75621a30f9adec404d38820adb72c77ebaf2
Q: Can't connect to local MySQL server through socket (From time to time) I have a LAMP stack setup. Occasionally, I get the following error message when I open some page from the browser:

Error creating the connection!: SQLSTATE[HY000] [2002] Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

I think the server was configured correctly. The problem happens about every two months. Every time I reboot the Linux server, or restart MySQL, the problem is gone. I want to fix this problem permanently. Can anyone give me some idea? Much appreciated.

EDIT The problem occurred again and I checked the mysqld.sock file; it was not there. Do you have any idea how to fix the problem? – Ryan Jul 23 at 16:24

A: If your my.cnf file (usually in the /etc/mysql/ folder) is correctly configured with socket=/var/run/mysqld/mysqld.sock, change #bind-address = 127.0.0.1 to bind-address = localhost. You can check whether MySQL is running with the following command:

mysqladmin -u root -p status

Try changing the permissions on the mysql folder. If you are working locally, you can try:

sudo chmod -R 755 /var/run/mysqld/

and then restart MySQL. Good luck.

A: Could it be the log file getting too large, and rebooting flushes it? See the docs on server maintenance and log files. Also see the discussion at DigitalOcean; it appears to be confirmed by a discussion at Server Fault.

A: You could try changing the permission of your MySQL sock file like this:

chmod 777 '/var/run/mysqld/mysqld.sock'

This is a test to see whether, whatever user mysqld runs as, it can access your mysqld.sock file. So restart MySQL after changing the permission of mysqld.sock. You also need to check that your sock folder can be accessed by any mysqld process.

A: If the mysqld.sock file doesn't exist, your config file is not correct. Check your MySQL config file in /etc/mysql/my.cnf and find the socket config, just as Vanya Avchyan says. I think the socket config is /var/run/mysqld/mysqld.sock, but your mysql process may in fact run with its sock file in another place. I used to have that problem; the real socket file existed at /tmp/mysqld.sock. So run sudo find / -name 'mysqld.sock' to find the real sock file, change my.cnf to this real place, and restart MySQL. It may work.
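A quick diagnostic for the last answer is to check which socket path the config actually declares before hunting for the file. The small helper below is hypothetical (not part of MySQL); it just pulls the first uncommented socket= line out of my.cnf-style text:

```python
def find_socket_path(config_text):
    # Return the value of the first uncommented "socket=" line, else None.
    for line in config_text.splitlines():
        line = line.strip()
        if line.startswith("#") or "=" not in line:
            continue  # skip comments and section headers like [mysqld]
        key, _, value = line.partition("=")
        if key.strip() == "socket":
            return value.strip()
    return None

sample = """
[mysqld]
# socket=/old/path/mysqld.sock
socket=/var/run/mysqld/mysqld.sock
bind-address = localhost
"""
print(find_socket_path(sample))
```

If the path it reports differs from the one in the client's error message, that mismatch, rather than permissions, is the thing to fix.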
stackoverflow
{ "language": "en", "length": 380, "provenance": "stackexchange_0000F.jsonl.gz:854228", "question_score": "7", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44509067" }
2a303ab0c0d6f752eac865767532f1a226b77031
Q: [AADSTS65001]: The user or administrator has not consented to use the application I am writing a test application which uses AAD to acquire a token, and this call succeeds for one user with TenantId "abc" but fails for another user with TenantId "xyz" with the message:

The user or administrator has not consented to use the application with ID f5986c14-cdb9-4e68-a89e-119d15b33afc. Send an interactive authorization request for this user and resource.

Please note:

* I have created one native application in my AAD
* I have added those users from another tenant into the Users list of the directory in the User role, and we also granted the permissions for the native app to all the users in the directory in the Windows Azure management portal

Screenshot Here

A: The IT administrator of the company with the xyz domain ([email protected]) has to give consent on behalf of the whole company so that the users of that company will be able to use your application. Here is a very good example of the flow: https://blog.mastykarz.nl/implementing-admin-consent-multitenant-office-365-applications-implicit-oauth-flow/

A: Whether it is a native or a web application, if you want users on another tenant to be able to use the application, consent is required for the application first. The figure you linked in the post only grants the permission for the tenant where the app is registered. AidaNow already showed how to use adal.js to grant the admin consent. We can also make an HTTP request to grant the admin consent easily by using the prompt parameter. Here is the request for your reference (refer to this link):

https://login.microsoftonline.com/common/oauth2/authorize?
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
&response_type=code
&redirect_uri=http%3A%2F%2Flocalhost%2Fmyapp%2F
&response_mode=query
&resource=https%3A%2F%2Fservice.contoso.com%2F
&state=12345
&prompt=admin_consent

More detail about consent can be found in the document below: How to sign in any Azure Active Directory (AD) user using the multi-tenant application pattern
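The admin-consent request above is an ordinary authorize URL with prompt=admin_consent appended, so it can be assembled mechanically. A sketch in Python; the client id, redirect URI, resource and state are the placeholder values from the sample request, not real ones:

```python
from urllib.parse import urlencode

def build_admin_consent_url(client_id, redirect_uri, resource, state):
    params = {
        "client_id": client_id,
        "response_type": "code",
        "redirect_uri": redirect_uri,
        "response_mode": "query",
        "resource": resource,
        "state": state,
        # prompt=admin_consent asks an administrator to consent for the whole tenant
        "prompt": "admin_consent",
    }
    return "https://login.microsoftonline.com/common/oauth2/authorize?" + urlencode(params)

url = build_admin_consent_url(
    "6731de76-14a6-49ae-97bc-6eba6914391e",
    "http://localhost/myapp/",
    "https://service.contoso.com/",
    "12345",
)
print(url)
```

urlencode takes care of percent-encoding the redirect URI and resource, matching the %3A%2F%2F sequences in the sample.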
stackoverflow
{ "language": "en", "length": 289, "provenance": "stackexchange_0000F.jsonl.gz:854247", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44509123" }
1dc47cc1b7d7a7bb419d4a836632dd384a767b83
Q: Error:Could not find com.google.gms:google-services:3.0.0 I have searched for a solution to this problem everywhere, but could not find a satisfying answer. I am trying to set up a sign-in page on my already built application by using Firebase, but I came across the error in the title in the first phase of the setup. Even though I went through all the steps on this link, I still got the error as shown below:

Error:Could not find com.google.gms:google-services:3.0.0.
Searched in the following locations:
file:/C:/Program Files/Android/Android Studio/gradle/m2repository/com/google/gms/google-services/3.0.0/google-services-3.0.0.pom
file:/C:/Program Files/Android/Android Studio/gradle/m2repository/com/google/gms/google-services/3.0.0/google-services-3.0.0.jar
Required by:
    project :

I also downloaded all the necessary SDK tools as explained in the Firebase instructions, but still cannot locate any folder under the name /gms (looks like it was not downloaded). Any help is appreciated, thanks.

A: Step 0 Instead of 3.0.0, make it 3.1.0: change classpath 'com.google.gms:google-services:3.0.0' to classpath 'com.google.gms:google-services:3.1.0'

Step 1 To solve this problem, go to "C:\Program Files\Android\Android Studio\gradle\m2repository\com\google\gms\google-services\3.1.0". If you do not find any such folder, create it.

Step 2 Download the two files google-services-3.1.0.jar (http://central.maven.org/maven2/com/google/gms/google-services/3.1.0/google-services-3.1.0.jar) and google-services-3.1.0.pom (https://repo1.maven.org/maven2/com/google/gms/google-services/3.1.0/google-services-3.1.0.pom), and paste them into the folder. Hope it helps.

A: Modify your project build.gradle:

buildscript {
    repositories {
        jcenter() // helps me to resolve this issue
    }
    dependencies {
        ...
        classpath "com.google.gms:google-services:3.1.0"
    }
}
...

A: In the third step of adding Firebase to your app, did you just add the line classpath 'com.google.gms:google-services:3.1.0' to your app build.gradle, not the project build.gradle?

A: I had the same error. I had changed the name of the google-services.json file to googleservices.json, and this was causing the error. The error was solved when I deleted all the earlier copies of the json file, downloaded the file again, and followed the remaining procedure as-is.

A: I got this problem today. I made the mistake of writing the line classpath 'com.google.gms:google-services:3.1.0' in the module-level build.gradle, not the project-level build.gradle. After I moved that line to the project-level build.gradle, the error was fixed.
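The folder path built in Steps 1 and 2 follows directly from the Maven coordinates: dots in the group become path separators, followed by artifact/version/artifact-version.ext, per the standard Maven repository layout. A small sketch of that mapping (the helper itself is hypothetical, for illustration only):

```python
def maven_paths(coordinates):
    # Map "group:artifact:version" to the jar/pom paths inside a Maven repository.
    group, artifact, version = coordinates.split(":")
    base = "{}/{}/{}/{}-{}".format(
        group.replace(".", "/"), artifact, version, artifact, version
    )
    return base + ".jar", base + ".pom"

jar, pom = maven_paths("com.google.gms:google-services:3.1.0")
print(jar)
print(pom)
```

Appending these relative paths to the m2repository root reproduces both the folder to create and the two download URLs from Step 2.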
stackoverflow
{ "language": "en", "length": 313, "provenance": "stackexchange_0000F.jsonl.gz:854251", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44509131" }
52257011f19dff40426f73e24ec0871335e72c94
Q: Docker: What are the best practices when tagging images for an environment I have multiple environments. They are debug, dev, and prod. I'd like to refer to an image by the latest dev (latest) or dev (version 1.1) or prod (latest). How would I go about tagging builds and pushes? My first thought was to create separate repositories for each environment debug, dev, and prod. But I am starting to wonder if I can do this with just one repository. If its possible to do with one container what would the syntax be when building and pushing? A: There's two schools of thought, stable tagging, where you update a single tag, and unique tagging. Each have their pros and cons. Stable tags can create instability when deploying to self healing clusters as a new node might pull a new version, while the rest of the cluster is running a slightly older version. Unique tagging is a best practice for deployment. However, to manage base image updates of OS & Framework patching, you'll want to build upon stable tags in your dockerfile, and enable automatic container builds. For a more detailed walk through, with visuals, here's a post: https://blogs.msdn.microsoft.com/stevelasker/2018/03/01/docker-tagging-best-practices-for-tagging-and-versioning-docker-images/ A: This is what has worked best for me and my team and I recommend it: I recommend having a single repo per project for all environments, it is easier to manage. Especially if you have microservices, then your project is composed by multiple microservices. Managing one repo per env per project is a pain. For example, I have a users api. The docker repo is users. And this repo is used by alpha, dev and beta. We create an env variable called $DOCKER_TAG in our CI/CD service and set it at the time the build is created, like this: DOCKER_TAG: $(date +%Y%m%d).$BUILD_NUMBER => This is in bash. Where $BUILD_NUMBER is previously set by the build being run when the CI/CD run is triggered. 
For example, when we merge a PR, a build is triggered, as build no. 1, so $BUILD_NUMBER: 1. The resulting tag looks like this when used: 20171612.1 so our docker image is: users:20171612.1 Why this format? * *It allows us to deploy the same tag on different environments with a run task. *It helps us keep track when an image was created and what build it belongs to. *Through the build number, we can find the commit information and map all together as needed, nice for trobleshooting. *It allows us to use the same docker repo per project. *It is nice to know when we created the image from the tag itself. So, when we merge, we create a single build. Then that build is deployed as needed to the different environments. We don't create an independent build per environment. And we keep track on what's deployed where. If there's a bug in an environment with certain tag, we pull such tag, build and trobleshoot and reproduce the issue under that condition. If we find an issue, we have the build number in the tag 20171612.1 so we know the build no. 1 has the issue. We check our CI/CD service and that tells us what commit is the most current. We check out that commit hash from git and debug and fix the issue. Then we deploy it as a hotfix, for example. If you don't have a CI/CD yet, and you are doing this manually, just set the tag in that format manually (pretty much type the full string as is) and instead of a build number, use a commit short git hash (if you are using git): 20170612.ed73d4f So you know what is the most current commit so you can troubleshoot issues with a specific image and map back to the code to create fixes as needed. You can also define any other suffix for your tag that maps to the code version, so you can easily troubleshoot (e.g. map to git tags if you are using those). Try it, adjust it as need it and do what it works best for you and your team. There's many ways to go around tagging. We tried many and this one is our favorite so far. 
Hope this is of help. A: I think "latest" is for the last production image. It is what I expect in the Docker Hub, even when there are no images in development. On the other hand, you can use tags like, for example, 0.0.1-dev. When this image is finished you can tag it again and push, and the repository will detect that the layers are already in the repository. Now, when a candidate version is about to come out to production, you keep only the semantic version, despite it not yet being in a production environment. That's what I would do.
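The date-plus-build-number tag scheme from the bash one-liner above can be sketched in Python; the build number and the sample date are stand-ins for whatever the CI system would actually supply:

```python
from datetime import date

def make_docker_tag(build_number, build_date=None):
    """Build a tag like 20171216.1: the date in %Y%m%d form plus the CI build number."""
    # build_date defaults to today; in CI the date comes from `date +%Y%m%d`
    # and build_number from something like $BUILD_NUMBER.
    build_date = build_date or date.today()
    return "{}.{}".format(build_date.strftime("%Y%m%d"), build_number)

# The repo name stays stable per service; only the tag changes per build.
image = "users:" + make_docker_tag(1, date(2017, 12, 16))
print(image)  # users:20171216.1
```

Deploying the same build to different environments then just means reusing this one tag, which is the point of the scheme described above.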
stackoverflow
{ "language": "en", "length": 803, "provenance": "stackexchange_0000F.jsonl.gz:854329", "question_score": "23", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44509375" }
128c858ed790fcaea8366a1fcdfe2311bad87676
Q: String concatenation takes place on the next line in python Python has a simple concatenation using the + operator. But I am observing something unusual. I tried: final_path = '/home/user/' + path + '/output' where path is a string variable I want to concatenate. print final_path gives me: /home/user/path /output instead of /home/user/path/output Why is it going to the next line? Is the forward slash causing the issue? I tried using the escape character as well, but it does not work. A: From the looks of your code, it may be the variable path that is the problem. Check to see if path has a newline at the end. Escape characters start with a backslash \ and not a forward slash /. A: Maybe it depends on which string is contained in the variable path. If it ends with a newline ('\n'), this could explain why the string variable final_path is printed on 2 lines. Regards. A: This happens when path comes from another file, for example a .txt you are importing the data from. I solved this by adding path.strip(), which removes whitespace before the string and the trailing newline after it. Just add .strip() to your variable. A: As victor said, your path variable has '\n' added at the end implicitly, so you can use this trick to overcome the problem: final_path = '/home/user/' + path.strip('\n') + '/output'
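The failure described above is easy to reproduce: a trailing '\n' left on path (for example by readline() when reading from a file) splits the printed result across two lines, and .strip() removes it:

```python
# Simulate a path read from a file: readline() keeps the trailing newline.
path = "path\n"

broken = '/home/user/' + path + '/output'
print(broken)        # prints on two lines because of the embedded '\n'
print(repr(broken))  # '/home/user/path\n/output'

fixed = '/home/user/' + path.strip() + '/output'
print(fixed)         # /home/user/path/output
```

Printing with repr() is a quick way to spot the hidden newline that a plain print hides.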
stackoverflow
{ "language": "en", "length": 236, "provenance": "stackexchange_0000F.jsonl.gz:854377", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44509493" }
3ad1e11cc98f56449ed99fd07574656e3f7674a3
Q: Convert a Transform to a RectTransform In my Unity project, I am creating objects dynamically through scripts. var btnExit = new GameObject("Player " + ID + "'s Exit Button"); btnExit.transform.SetParent(UI.transform); I need to set the anchors and pivot of the object. I should be able to do that using its RectTransform component, as I do it when I create objects in the scene editor. myRectTransform.anchorMin = new Vector2(1, 0); myRectTransform.anchorMax = new Vector2(0, 1); myRectTransform.pivot = new Vector2(0.5f, 0.5f); But the object's transform component is not a RectTransform, just the normal Transform. So I can't cast it to use those properties I need. RectTransform myRectTransform = (RectTransform)btnExit.transform; So how can I properly use the power of the RectTransform class on an object that I initialise through scripts instead of in the scene editor? A: RectTransform rectTransform = transform.GetComponent<RectTransform>(); RectTransform rectTransform = (transform as RectTransform); RectTransform rectTransform = (RectTransform)transform; As long as your object has a RectTransform, these are all valid ways to get the RectTransform from the built-in transform reference. A: You can directly add a RectTransform component to the GameObject, and the Transform component will change to a RectTransform. For example: var btnExit = new GameObject("Player " + ID + "'s Exit Button"); btnExit.AddComponent<RectTransform>(); Then the cast will work: RectTransform myRectTransform = (RectTransform)btnExit.transform; P.S. Without adding the component, the above cast will throw the error InvalidCastException: Cannot cast from source type to destination type. But with the component added, the cast works fine. The above code is tested in Unity 2018.2.5f1 (64bit). A: You can't cast Transform to RectTransform. That's what GetComponent is used for. Use GetComponent on the btnExit variable and you will be able to get the RectTransform from the Button. 
RectTransform myRectTransform = btnExit.GetComponent<RectTransform>(); Note: You should only do this after calling SetParent, since SetParent will make Unity automatically attach a RectTransform to btnExit if the parent object is a UI object such as a Canvas. You can also use AddComponent to attach a RectTransform to that GameObject. If you don't do any of these, you will get null since no RectTransform is attached to the Button.
stackoverflow
{ "language": "en", "length": 350, "provenance": "stackexchange_0000F.jsonl.gz:854416", "question_score": "6", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44509636" }
67ea3ff6db4a342a3be402c6d2d93474b874b64c
Q: Why are all of the classes in Rakudo's src/core/Int.pm declared with my? Looking at the source for Int, I see that all of the classes are declared with my, which I would have thought would make them private and not available outside that file. But, they obviously are. Why do they need to be declared that way? my class Rat { ... } my class X::Numeric::DivideByZero { ... } my class X::NYI::BigInt { ... } my class Int { ... } my subset UInt of Int where {not .defined or $_ >= 0}; my class Int does Real { # declared in BOOTSTRAP I figure that BOOTSTRAP comment has something to do with it. In the Perl6/Metamodel/BOOTSTRAP.nqp there are lines like: my stub Int metaclass Perl6::Metamodel::ClassHOW { ... }; A: The files in Rakudo's src/core/ directory are not compiled as separate modules with their own private file-level scope, but concatenated into a single file such as gen/moar/CORE.setting during the build process. Semantically, this 'setting' (known as a 'prelude' in other languages) forms an outer lexical scope implicitly surrounding your program. The design is described in S02: Pseudo-packages, and parts of that section have made it into the official documentation.
stackoverflow
{ "language": "en", "length": 199, "provenance": "stackexchange_0000F.jsonl.gz:854425", "question_score": "7", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44509676" }
96fe762d4af2422372a880cf796a0e97923c1232
Q: Error: "There was an error running the selected code generator: Package restore failed" I am trying to add a controller to my solution in an ASP.NET Core project: When I try to do so I get this error: I get the same message for adding minimal dependencies and full dependencies for the controller. A: I encountered this issue with net5.0, specifically against version 5.0.5 of some dependencies. I downgraded my NuGet packages from 5.0.5 to 5.0.4 for these: "Microsoft.AspNetCore.Diagnostics.EntityFrameworkCore" Version="5.0.4" "Microsoft.AspNetCore.Identity.EntityFrameworkCore" Version="5.0.4" "Microsoft.AspNetCore.Identity.UI" Version="5.0.4" "Microsoft.EntityFrameworkCore.Tools" Version="5.0.4" A: I just recently ran into the same issue. I resolved it by eventually taking a look at each individual .csproj file included in my solution and fixing all the versions of the Microsoft libraries that were included. I changed the metapackage that I was referencing from "Microsoft.AspNetCore.All" to "Microsoft.AspNetCore.App", then loaded up the reference list on NuGet for the "App" package and removed any references to libraries that are already included in the metapackage. I then made sure that I fixed the versions of any outstanding packages to match the version of the metapackage that the project automatically chooses, i.e. in my case 2.2.0. Something that tripped me up: if you have multiple projects included in your solution, you need to make sure that they reference the same metapackage, as a version mismatch between projects included in your solution will give you this issue too. <ItemGroup> <PackageReference Include="Microsoft.AspNetCore.All" Version="2.2.5" /> <PackageReference Include="Microsoft.AspNetCore.Authentication.JwtBearer" Version="2.2.0" /> <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="2.2.0" /> <PackageReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Design" Version="2.2.3" /> </ItemGroup> Changed to this. 
<ItemGroup> <PackageReference Include="Microsoft.AspNetCore.App" /> <PackageReference Include="Microsoft.AspNetCore.Razor.Design" Version="2.2.0" PrivateAssets="All" /> <PackageReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Design" Version="2.2.0" /> </ItemGroup> A: I also had this issue. "Add Controller > API Controller with actions, using Entity Framework" would give the "Package Restore Failed" error. As Anish stated, it seems to be due to package versions being misaligned. I was able to resolve this issue using "Manage NuGet Packages for Solution", then performing an "Update All". This set my AspNetCore version to 2.1.5 and resolved my "Package Restore Failed" error, but then led to another error, "NETCore version 2.1.5 not found". Apparently the scaffolding code generator needs the AspNetCore and the NETCore versions to be in sync, so I manually downloaded and installed NETCore version 2.1.5 from Microsoft Downloads. This worked, and I was finally able to generate controllers. A: I just had this problem whilst adding a controller to a Core API with Entity Framework project. I'm using VS 16.8.5 with the most recent EF Core, version 5.03. The class containing my DbContext class referenced EF 5.03. I (eventually!) noticed whilst browsing NuGet that the various code generation packages (none of which were referenced in my .csproj file, I think because ASP.NET Core ships as a framework since 3.0, but correct me if I am wrong!), and in particular Microsoft.VisualStudio.Web.CodeGeneration.EntityFrameworkCore, were 5.02. I didn't touch my ASP.NET project; instead I downgraded the other EF projects to 5.02 and it solved the problem. A: I resolved it by updating two packages, Microsoft.VisualStudio.Web.CodeGeneration and Microsoft.VisualStudio.Web.CodeGeneration.Design. Their versions should match the other package versions in the application. A: Just update the NuGet packages from the NuGet Package Manager. A: I have the same error, 
and updating the NuGet packages fixed it (Tools -> NuGet Package Manager -> Manage NuGet Packages for Solution -> click on Installed). You need to select versions with matching dependencies. For example, these versions work fine for me: Microsoft.AspNetCore.Identity.EntityFrameworkCore version 3.1.12 Microsoft.EntityFrameworkCore.Tools version 3.1.12 Microsoft.EntityFrameworkCore.SqlServer version 3.1.12 Microsoft.VisualStudio.Web.CodeGeneration.Design version 3.1.5 A: I had the problem with a Blazor Server application, version 5.0.5, and Microsoft Identity scaffolding. The highest available version of the CodeGeneration.Design package was 5.0.2, so I downgraded the other Microsoft packages (especially EntityFramework) to 5.0.2 and it solved the problem. A: I was getting the same error while making a new controller. I fixed it like this: VS only had the Offline Package source and could not resolve the packages needed. Add the online reference: Tools > NuGet Package Manager > Package Manager Settings > Package Sources Add source: https://api.nuget.org/v3/index.json A: * *VS2019 [5.0]. *Update NuGet packages (Tools -> NuGet Package Manager -> Manage NuGet Packages for Solution -> click on the Updates tab, select all and run update). *Solution -> Clean *Solution -> Build *Create a controller. I tried everything, but the above method worked for me. A: If no answer works for you, try running the code generator from the command line. For my sln with multiple projects, on .NET 5 with some NuGet packages at 5.0.5 and some at 5.0.2, only the code generator through the command line worked. Make sure it is installed, 
or install it with the following command: dotnet tool install -g dotnet-aspnet-codegenerator or update it with the following command: dotnet tool update -g dotnet-aspnet-codegenerator The basic code generator commands can be found here. Some of them are (generator - operation): area - Scaffolds an Area; controller - Scaffolds a controller; identity - Scaffolds Identity; razorpage - Scaffolds Razor Pages; view - Scaffolds a view. For example: dotnet-aspnet-codegenerator identity --dbContext MyDbContextClass To get help: dotnet-aspnet-codegenerator [YourGenerator] -h A: Had exactly the same problem; in my situation the code generator was missing. I added this item into the ItemGroup in .csproj: <PackageReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Design" Version="2.2.0" /> A: What fixed it for me, after I couldn't scaffold IdentityFramework, was: * *Checking VS2019 for updates. *Updating NuGet packages (Tools -> NuGet Package Manager -> Manage NuGet Packages for Solution -> click on the Updates tab, select all and run update). *Retrying the Identity scaffolding. A: The problem is that you have some older versions of NuGet packages installed in your project; when you try to scaffold, ASP.NET Core tries to install the latest packages required for scaffolding, and at this point ASP.NET Core throws this exception. I suggest you update your NuGet packages and then try scaffolding. A: Cleaning the solution showed me an error that NuGet packages needed to be updated! I updated them and built the solution. The build succeeded and I was able to create the controller class. A: I was trying to add an MVC controller with views, using EF, for an MVC project on .NET 5.0. Using the following specific NuGet package versions worked for me; the problem was solved by using a lower version than Microsoft.EntityFrameworkCore for both Microsoft.EntityFrameworkCore.SqlServer and Microsoft.EntityFrameworkCore.Tools. These packages are referenced in the MVC project. 
The correct packages are: <ItemGroup> <PackageReference Include="Microsoft.AspNetCore.Mvc.Razor.RuntimeCompilation" Version="5.0.7" /> <PackageReference Include="Microsoft.EntityFrameworkCore" Version="5.0.7" /> <PackageReference Include="Microsoft.EntityFrameworkCore.SqlServer" Version="5.0.6" /> <PackageReference Include="Microsoft.EntityFrameworkCore.Tools" Version="5.0.6"> <PrivateAssets>all</PrivateAssets> <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets> </PackageReference> <PackageReference Include="Microsoft.EntityFrameworkCore.Design" Version="5.0.7" /> <PackageReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Design" Version="5.0.2" /> </ItemGroup> A: I also faced this same error when I was trying to scaffold the Identity template. I resolved this issue by updating the NuGet packages of the two major projects of concern (I mean the two projects that have something to do with what I was implementing). A: I am also facing this issue. Please follow these steps: * *Clean your solution *Open the NuGet manager *Check this version: Microsoft.EntityFrameworkCore (I am using 5.0.8) *Check this version: Microsoft.EntityFrameworkCore.Design (I am using 5.0.8) *Check this version: Microsoft.EntityFrameworkCore.Tools (I am using 5.0.8) *Check this version: Microsoft.EntityFrameworkCore.SqlServer (I am using 5.0.8) *Check this version: Microsoft.VisualStudio.Web.CodeGeneration.Design (I am using 5.0.2) * *After that, rebuild your solution and create the scaffolded controller A: I know that some of you might still be facing the same issue. I just did the following in VS 2022: 1- Checked the dependencies on the current project. 2- Removed all of them 3- Went to dependencies, added a lower version, then cleaned the solution and added the views. A: I also faced the same issue. Here is how I solved the issue "There was an error running the selected code generation, 'Package restore failed. 
Rolling back package changes for web'". 1- If your solution has multiple projects, check their target .NET frameworks (in my case it was .NET Standard 1.6 for the class libraries & .NETCoreApp 1.0 for the web project; I changed it to .NETCoreApp 1.1). 2- Once the frameworks are the same, clean the web project, rebuild, and add a new controller. If it's successful, fine; otherwise you might encounter another error, e.g. 'There was an error running the code generator: 'No executable found matching command "dotnet-aspnet-codegenerator"'. If you have a project.json file, open it; otherwise open the .csproj.user project in Notepad and add the following. Please note that based on your .NET version you might have a different version no. You may find instructions in the ScaffoldingReadMe.txt file if it is generated in your project. A: All I had to do was open the properties of my web project and change the TargetFramework from 2.1 to 2.2, or to match whatever version of the framework your business and object layers are using. A: Had the same problem, but updating all the NuGet packages solved it. Right click on the <your project name> file -> Manage NuGet Packages -> Updates -> Select all packages -> Update A: I'm running .NET Core (and Entity Framework Core) 3.1.x. I got this exact error and tried updating all the NuGet packages and the other relevant solutions already mentioned in the answers here. The issue was simply that my database server was not running (it runs on a local VM). In other words, my database context (i.e. ApplicationDbContext) mentioned in the 'Add Controller...' window was not able to access the db. Once I started the db server, my scaffolding was created without issue. Keep in mind also that the model/class (i.e. table) that the controller and views were referencing had not been created yet (I hadn't run add-migration yet). So, it just needed the db connection only. It's kind of a silly (obvious?) 
solution, but very misleading when looking at the 'Package Restore Failed' error message. A: I had a similar issue with the Entity Framework Core SQLite NuGet packages. Installing the sqlite and sqlite core packages fixed this; probably a needed package was missing. Also make sure SQL Server and Server Agent are running. Check those in SQL Server Configuration > SQL Server Services > right click on SQL Server or Server Agent and start the service, then restart the server. Guess this might help someone. A: I just updated EntityFrameworkCore from version 3.1.10 to 3.1.13 and it solved the problem. My project file looks like: <Project Sdk="Microsoft.NET.Sdk.Web"> <PropertyGroup> <TargetFramework>netcoreapp3.1</TargetFramework> </PropertyGroup> <ItemGroup> <PackageReference Include="Microsoft.EntityFrameworkCore" Version="3.1.13" /> <PackageReference Include="Microsoft.EntityFrameworkCore.Sqlite" Version="3.1.13" /> <PackageReference Include="Microsoft.EntityFrameworkCore.SqlServer" Version="3.1.13" /> <PackageReference Include="Microsoft.EntityFrameworkCore.Tools" Version="3.1.13"> <PrivateAssets>all</PrivateAssets> <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets> </PackageReference> <PackageReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Design" Version="3.1.5" /> </ItemGroup> </Project> A: I had this same issue when creating a new 'Identity' scaffolded item. I managed to get this working by removing everything within the <ItemGroup> tags in the csproj file and running the code generator. The generator then installs the packages that it needs.
stackoverflow
{ "language": "en", "length": 1781, "provenance": "stackexchange_0000F.jsonl.gz:854432", "question_score": "28", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44509694" }
387c095a1f4f36b5d24bd373d87d0b3e435049d7
Q: How to get sum of two different columns with Laravel Query Builder? I'm trying to get the sum of two different columns using the Laravel query builder. The plain SQL query below works just fine, but I can't get it to work with the Laravel query. SELECT SUM(logins_sun + logins_mon) FROM users_stats WHERE id = 7; // returns: 1034 Here's what I have tried. $stats = DB::table('users_stats')->where('id', '=', '7')->sum('logins_sun', '+', 'logins_mon'); // returns: 587.0 And here is my DB structure. +----+------------+------------+ | id | logins_sun | logins_mon | +----+------------+------------+ | 7 | 587 | 447 | +----+------------+------------+ It was supposed to return 1034, but the Laravel query is returning only 587.0. How can I get it working? A: You can try with the sum() method like: DB::table('users_stats') ->where('id', '7') ->sum(\DB::raw('logins_sun + logins_mon')); A: Try passing a callback to the sum() and do the addition there, like: $stats = DB::table('users_stats')->where('id', '=', '7')->sum(function ($row) { return $row->logins_sun + $row->logins_mon; }); A: sum is an aggregate function and only takes one argument. It will sum the values of each row in a column. In your case, the query only returns one row, so the sum is just the value of that one column (the first argument passed to sum()). There may be some better way to do it, but I think you should be able to use a raw expression to return the sum of the two columns. $stats = DB::table('users_stats') ->select(DB::raw('logins_sun + logins_mon')) ->where('id', '=', '7'); A: You can run direct raw SQL in Laravel in the following way: $sql = "SELECT SUM(logins_sun + logins_mon) FROM users_stats WHERE id = :ID"; $result = DB::select($sql,['ID'=>7]);
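The SQL behind the DB::raw answers can be checked outside Laravel. A minimal SQLite sketch with the row from the question shows why summing the column expression gives 1034, while summing a single column (what the original sum('logins_sun', ...) call effectively did) gives 587:

```python
import sqlite3

# In-memory database reproducing the users_stats row from the question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users_stats (id INTEGER, logins_sun INTEGER, logins_mon INTEGER)")
conn.execute("INSERT INTO users_stats VALUES (7, 587, 447)")

# SUM over the column *expression*, as in the raw-SQL answers.
total = conn.execute(
    "SELECT SUM(logins_sun + logins_mon) FROM users_stats WHERE id = ?", (7,)
).fetchone()[0]
print(total)  # 1034

# SUM over a single column only.
single = conn.execute(
    "SELECT SUM(logins_sun) FROM users_stats WHERE id = ?", (7,)
).fetchone()[0]
print(single)  # 587
```

Because SUM accepts any expression, wrapping logins_sun + logins_mon in DB::raw simply forwards that expression to the database, which is exactly what the plain SQL query in the question does.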
stackoverflow
{ "language": "en", "length": 277, "provenance": "stackexchange_0000F.jsonl.gz:854434", "question_score": "11", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44509697" }
465d1de101075d0b3e76c4454f25c757bc5a396a
Q: IText AllowAssembly and StandardEncryption I am attempting to digitally sign a PDF document while still allowing the modification of annotations and the adding and removing of pages, using itextsharp, Version=4.1.6.0. Below is my current code: var signatureStamper = PdfStamper.CreateSignature(pdfReader, memoryStream, '\0', null); signatureStamper.SetEncryption(null, Encoding.UTF8.GetBytes(certificationBundle.Password), PdfWriter.ALLOW_PRINTING | PdfWriter.ALLOW_MODIFY_ANNOTATIONS | PdfWriter.AllowAssembly, PdfWriter.STANDARD_ENCRYPTION_128); With this configuration, however, I am still unable to add and remove pages. Am I using PdfWriter.AllowAssembly incorrectly? A: I am attempting to digitally sign a pdf document while still allowing for the modification of annotations and allowing the adding and removing of pages Addition or removal of pages is never allowed for signed documents, cf. this stack overflow answer. At most you are allowed to do the following: * *Adding signature fields *Adding or editing annotations *Supplying form field values *Digitally signing
stackoverflow
{ "language": "en", "length": 136, "provenance": "stackexchange_0000F.jsonl.gz:854446", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44509732" }
d0b573969728989da259242e333b06563f4e9a18
Q: Spring Boot Swagger API not working Here's my pom.xml: <dependency> <groupId>io.springfox</groupId> <artifactId>springfox-swagger2</artifactId> <version>2.7.0</version> </dependency> <dependency> <groupId>io.springfox</groupId> <artifactId>springfox-swagger-ui</artifactId> <version>2.7.0</version> </dependency> I am using version 1.5.3.RELEASE of Spring Boot. Here's my swagger config file: @Configuration @EnableSwagger2 public class SwaggerConfig { @Bean public Docket swagger() { return new Docket(DocumentationType.SWAGGER_2) .select() .apis(RequestHandlerSelectors.any()) .paths(PathSelectors.any()) .build(); } } Here's my WebSecurityConfig.java: @Override public void configure(WebSecurity web) throws Exception { web.ignoring().antMatchers("/v2/api-docs", "/configuration/ui", "/swagger-resources", "/configuration/security", "/swagger-ui.html", "/webjars/**"); } When I do a GET from the endpoint http://localhost:8080/v2/api-docs I get my JSON back: { "swagger": "2.0", "info": { "description": "Api Documentation", "version": "1.0", "title": "Api Documentation", "termsOfService": "urn:tos", "contact": {}, "license": { "name": "Apache 2.0", "url": "http://www.apache.org/licenses/LICENSE-2.0" } }, "host": "localhost:8080", "basePath": "/", //ETC } But when I try to access the UI at localhost:8080/swagger-ui.html I get a blank page that looks like this: If I click on the page, I get prompted with this What am I doing wrong? Is this some sort of Spring Security issue? A: You can suggest the API description path to Swagger in the application config, using the springfox.documentation.swagger.v2.path property, e.g. springfox.documentation.swagger.v2.path: /rest/docs in application.yml. I've posted an example on github. 
A: If you use version V3 (io.springfox >= 3.0.0): <dependency> <groupId>io.springfox</groupId> <artifactId>springfox-boot-starter</artifactId> <version>3.0.0</version> </dependency> Java code @Configuration @EnableSwagger2 public class SwaggerConfig { @Bean public Docket api() { return new Docket(DocumentationType.SWAGGER_2).select() .apis(RequestHandlerSelectors.basePackage("Your Controller package name")) .paths(PathSelectors.any()).build(); } } V3 browser URL -> http://localhost:8080/swagger-ui/#/ Run (required): mvn clean A: It's very likely spring-security is preventing your endpoints from being discovered. Try changing your ant matchers to the following and see if that helps. The security/ui configuration endpoints were incorrect. @Override public void configure(WebSecurity web) throws Exception { web.ignoring().antMatchers( "/v2/api-docs", "/swagger-resources/configuration/ui", "/swagger-resources", "/swagger-resources/configuration/security", "/swagger-ui.html", "/webjars/**"); } A: Change the Swagger version to 2.9.2 and it will work. <dependency> <groupId>io.springfox</groupId> <artifactId>springfox-swagger2</artifactId> <version>2.9.2</version> </dependency>
stackoverflow
{ "language": "en", "length": 299, "provenance": "stackexchange_0000F.jsonl.gz:854455", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44509767" }
7f23605267e1c16e139f89d296297e718d04756e
Stackoverflow Stackexchange Q: request headers(CSRF) missing in wkwebview We are trying to open our app URL in WKWebView, which used to work fine in UIWebView. Our app is based on Angular 2 and Node.js and validates a CSRF header; this flow works with UIWebView, but the same is not working in WKWebView, as the x-xsrf-token is missing from the request headers. Not sure what might be going wrong here. Below is the difference in request headers
Q: request headers(CSRF) missing in wkwebview We are trying to open our app URL in WKWebView, which used to work fine in UIWebView. Our app is based on Angular 2 and Node.js and validates a CSRF header; this flow works with UIWebView, but the same is not working in WKWebView, as the x-xsrf-token is missing from the request headers. Not sure what might be going wrong here. Below is the difference in request headers
stackoverflow
{ "language": "en", "length": 69, "provenance": "stackexchange_0000F.jsonl.gz:854480", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44509867" }
82dc5261a407542627eb0826478081188c75fc66
Stackoverflow Stackexchange Q: How to count only the number elements in a list (PROLOG) count([],0). count([_|Tail], N) :- count(Tail, N1), N is N1 + 1. This counts all the elements, but I need to count only the numbers. A: Prolog has an ISO built-in predicate number/1 that checks whether the given parameter is a number. We can simply use an if-then-else statement that either increments the count (N is N1 + 1) or keeps it unchanged (N = N1), like: count([],0). count([H|Tail], N) :- count(Tail, N1), ( number(H) -> N is N1 + 1 ; N = N1 ).
Q: How to count only the number elements in a list (PROLOG) count([],0). count([_|Tail], N) :- count(Tail, N1), N is N1 + 1. This counts all the elements, but I need to count only the numbers. A: Prolog has an ISO built-in predicate number/1 that checks whether the given parameter is a number. We can simply use an if-then-else statement that either increments the count (N is N1 + 1) or keeps it unchanged (N = N1), like: count([],0). count([H|Tail], N) :- count(Tail, N1), ( number(H) -> N is N1 + 1 ; N = N1 ). A: You could use the number/1 ISO built-in predicate: count([],0). count([H|Tail], N) :- number(H), count(Tail, N1), N is N1 + 1. count([H|Tail], N) :- \+number(H), count(Tail, N).
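The filtering idea in these Prolog answers, keep an element only when it is a number, can be mirrored outside Prolog. A rough Python sketch (the helper name count_numbers is made up, and Python's notion of "number" is only an approximation of Prolog's number/1):

```python
def count_numbers(items):
    """Count only the numeric elements of a list.

    bool is excluded explicitly: Python's True/False are ints,
    which has no analogue in Prolog's number/1 check.
    """
    return sum(
        isinstance(x, (int, float, complex)) and not isinstance(x, bool)
        for x in items
    )

print(count_numbers([1, "a", 2.5, [3], 4]))  # 3
```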
stackoverflow
{ "language": "en", "length": 115, "provenance": "stackexchange_0000F.jsonl.gz:854547", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44510056" }
de6b99ff660c2547f8ea88b3ac7b65672ced4276
Stackoverflow Stackexchange Q: What good is a hash code if it doesn't identify a unique object? Java API - Class Object Hash Code: It is not required that if two objects are unequal according to the equals(java.lang.Object) method, then calling the hashCode method on each of the two objects must produce distinct integer results. However, the programmer should be aware that producing distinct integer results for unequal objects may improve the performance of hash tables. What consistency is achieved through hashing if two objects can produce distinct integer results? It seems odd that two different objects can return the same hash value. A: There are only 2^32 integral values a hash code can have (in Java, at least). If you have more possible objects than that, then two different objects having the same hash value is unavoidable. We do our best to avoid these "hash collisions", but often it's just impossible mathematically (see the pigeonhole principle). We typically try to design hash functions such that their outputs are uniformly distributed within their range, thereby making collisions rare.
Q: What good is a hash code if it doesn't identify a unique object? Java API - Class Object Hash Code: It is not required that if two objects are unequal according to the equals(java.lang.Object) method, then calling the hashCode method on each of the two objects must produce distinct integer results. However, the programmer should be aware that producing distinct integer results for unequal objects may improve the performance of hash tables. What consistency is achieved through hashing if two objects can produce distinct integer results? It seems odd that two different objects can return the same hash value. A: There are only 2^32 integral values a hash code can have (in Java, at least). If you have more possible objects than that, then two different objects having the same hash value is unavoidable. We do our best to avoid these "hash collisions", but often it's just impossible mathematically (see the pigeonhole principle). We typically try to design hash functions such that their outputs are uniformly distributed within their range, thereby making collisions rare. A: What good is a hash code if it doesn't identify a unique object? The hash code doesn't let you identify a unique object, but it does let you identify a unique group that the object is in. You then only have to consider that group to find your object. For example, a HashMap might have 500 items divided between 1,000 groups. Thanks to the hash code, it'll immediately know which of those 1,000 groups to look at, rejecting the 999 others. Whether that group has 0, 1 or even 6 elements, it's still a huge time saver compared to looking through all 500.
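The "groups" picture from the second answer can be made concrete with a toy separate-chaining table. This is an illustrative Python sketch, not Java's actual HashMap: colliding hash codes only mean two keys share a bucket, and equality checks settle the rest.

```python
class ToyHashMap:
    """Minimal separate-chaining hash table to illustrate bucket lookup."""

    def __init__(self, n_buckets=8):
        self.buckets = [[] for _ in range(n_buckets)]

    def put(self, key, value):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for i, (k, _) in enumerate(bucket):
            if k == key:                  # equality decides within the bucket
                bucket[i] = (key, value)
                return
        bucket.append((key, value))

    def get(self, key):
        # The hash code picks ONE bucket; the other buckets are never scanned.
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for k, v in bucket:
            if k == key:
                return v
        raise KeyError(key)

m = ToyHashMap(n_buckets=2)   # 2 buckets guarantee collisions for 3 keys
for word in ("alpha", "beta", "gamma"):
    m.put(word, len(word))
print(m.get("beta"))  # 4, found despite colliding hash codes
```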
stackoverflow
{ "language": "en", "length": 278, "provenance": "stackexchange_0000F.jsonl.gz:854564", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44510112" }
bf06f20bdf8aeb79a365318b31a417b278275e9e
Stackoverflow Stackexchange Q: Flutter - Changing the color of the appbar is not working I am trying to change the background color of AppBar but it is not working. When choosing the color 0x673AB7 according to the image below, AppBar turns gray instead of purple. import "package:flutter/material.dart"; void main() { runApp(new ControlleApp()); } class ControlleApp extends StatelessWidget { @override Widget build(BuildContext context) { return new MaterialApp( title: "Controlle Financeiro", home: new HomePage(), ); } } class HomePage extends StatelessWidget { @override Widget build(BuildContext context) { return new Scaffold( appBar: new AppBar( backgroundColor: new Color(0x673AB7), ), ); } } A: It looks like your color is completely transparent. Try changing the color to 0xFF673AB7
Q: Flutter - Changing the color of the appbar is not working I am trying to change the background color of AppBar but it is not working. When choosing the color 0x673AB7 according to the image below, AppBar turns gray instead of purple. import "package:flutter/material.dart"; void main() { runApp(new ControlleApp()); } class ControlleApp extends StatelessWidget { @override Widget build(BuildContext context) { return new MaterialApp( title: "Controlle Financeiro", home: new HomePage(), ); } } class HomePage extends StatelessWidget { @override Widget build(BuildContext context) { return new Scaffold( appBar: new AppBar( backgroundColor: new Color(0x673AB7), ), ); } } A: It looks like your color is completely transparent. Try changing the color to 0xFF673AB7 A: As @Randal mentioned, you are using the hex code without an alpha value. If you do not specify it, the color will be completely transparent. So, you can use the first two hex digits as alpha and the other six for RGB. Have a look at the Color class source code. There is a comment like below: /// Construct a color from the lower 32 bits of an [int]. /// /// The bits are interpreted as follows: /// /// * Bits 24-31 are the alpha value. /// * Bits 16-23 are the red value. /// * Bits 8-15 are the green value. /// * Bits 0-7 are the blue value. /// /// In other words, if AA is the alpha value in hex, RR the red value in hex, /// GG the green value in hex, and BB the blue value in hex, a color can be /// expressed as `const Color(0xAARRGGBB)`. /// /// For example, to get a fully opaque orange, you would use `const /// Color(0xFFFF9000)` (`FF` for the alpha, `FF` for the red, `90` for the /// green, and `00` for the blue).
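The 0xAARRGGBB layout quoted from the Color source is easy to verify with plain bit arithmetic. A quick Python sketch (not Flutter code; the argb helper is made up):

```python
def argb(value):
    """Split a 32-bit 0xAARRGGBB color value into its four channels."""
    return {
        "alpha": (value >> 24) & 0xFF,
        "red":   (value >> 16) & 0xFF,
        "green": (value >> 8) & 0xFF,
        "blue":  value & 0xFF,
    }

print(argb(0x673AB7))    # alpha == 0x00: fully transparent, hence the grey bar
print(argb(0xFF673AB7))  # alpha == 0xFF: fully opaque purple
```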
stackoverflow
{ "language": "en", "length": 292, "provenance": "stackexchange_0000F.jsonl.gz:854567", "question_score": "7", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44510123" }
5c1cfe6529d3e2c784040b9f43f9b2429069650a
Stackoverflow Stackexchange Q: Boto + Python + AWS S3: How to get last_modified attribute of specific file? It’s possible to get last_modified attribute: import boto3 s3 = boto3.resource('s3') bucket_name = 'bucket-one' bucket = s3.Bucket(bucket_name) for obj in bucket.objects.all(): print obj.last_modified But it does so for all the objects in a bucket. How can I get the last_modified attribute for one specific object/file under a bucket? A: Hope this helps you.... This will get the last modified time of a particular object in an S3 bucket import boto3 import time from pprint import pprint s3 = boto3.resource('s3',region_name='S3_BUCKET_REGION_NAME') bucket='BUCKET_NAME' key='OBJECT_NAME' summaryDetails=s3.ObjectSummary(bucket,key) timeFormat=summaryDetails.last_modified formatedTime=timeFormat.strftime("%Y-%m-%d %H:%M:%S") pprint( 'Bucket name is '+ bucket + ' and key name is ' + key + ' and last modified at time '+ formatedTime) Thanks....
Q: Boto + Python + AWS S3: How to get last_modified attribute of specific file? It’s possible to get last_modified attribute: import boto3 s3 = boto3.resource('s3') bucket_name = 'bucket-one' bucket = s3.Bucket(bucket_name) for obj in bucket.objects.all(): print obj.last_modified But it does so for all the objects in a bucket. How can I get the last_modified attribute for one specific object/file under a bucket? A: Hope this helps you.... This will get the last modified time of a particular object in an S3 bucket import boto3 import time from pprint import pprint s3 = boto3.resource('s3',region_name='S3_BUCKET_REGION_NAME') bucket='BUCKET_NAME' key='OBJECT_NAME' summaryDetails=s3.ObjectSummary(bucket,key) timeFormat=summaryDetails.last_modified formatedTime=timeFormat.strftime("%Y-%m-%d %H:%M:%S") pprint( 'Bucket name is '+ bucket + ' and key name is ' + key + ' and last modified at time '+ formatedTime) Thanks.... A: bucket_name = 'bucket-one' file_name = 'my_file' object = s3.Object(bucket_name, file_name) print object.last_modified or simply: print s3.Object(bucket_name, file_name).last_modified
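As a side note, last_modified comes back from boto3 as a timezone-aware datetime (UTC), so comparisons need an aware reference time. The age check below is a sketch that runs without any AWS access (the helper name is made up):

```python
from datetime import datetime, timedelta, timezone

def is_older_than(last_modified, days, now=None):
    """True if an S3 object's last_modified is more than `days` old.

    last_modified is timezone-aware (UTC), as boto3 returns it, so the
    reference time must be aware too; comparing naive vs aware raises.
    """
    now = now or datetime.now(timezone.utc)
    return now - last_modified > timedelta(days=days)

stamp = datetime(2017, 6, 12, 15, 30, tzinfo=timezone.utc)
print(is_older_than(stamp, days=30))  # True (relative to today)
```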
stackoverflow
{ "language": "en", "length": 143, "provenance": "stackexchange_0000F.jsonl.gz:854597", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44510201" }
8311921ee5150a6b0b380e93311b50af3d5c7d9a
Stackoverflow Stackexchange Q: sphal namespace is not configured for this process Problem I am debugging my app using a physical phone via USB, and I'm getting the following notification in logcat: I/vndksupport: sphal namespace is not configured for this process. Loading /system/lib/hw/gralloc.ranchu.so from the current namespace instead. Device Samsung Galaxy S7 running Android Marshmallow. The only permission I'm using is ACCESS_WIFI_STATE Attempts to solve this issue Searching for sphal does not help, as there are not any Google results. Same for gralloc.ranch.so A: The answers here are misleading: It is unclear what your app is and what your device is (e.g. custom ROM or not; I'll assume a Samsung stock ROM and device), but the reason for your error is that you are most probably using native APIs that have been restricted by the new linker (per the vndk error, from Oreo onward, including master builds in between). I do see you mentioned Marshmallow, but the root cause is the same. Your app should use different API sets.
Q: sphal namespace is not configured for this process Problem I am debugging my app using a physical phone via USB, and I'm getting the following notification in logcat: I/vndksupport: sphal namespace is not configured for this process. Loading /system/lib/hw/gralloc.ranchu.so from the current namespace instead. Device Samsung Galaxy S7 running Android Marshmallow. The only permission I'm using is ACCESS_WIFI_STATE Attempts to solve this issue Searching for sphal does not help, as there are not any Google results. Same for gralloc.ranch.so A: The answers here are misleading: It is unclear what your app is and what your device is (e.g. custom ROM or not; I'll assume a Samsung stock ROM and device), but the reason for your error is that you are most probably using native APIs that have been restricted by the new linker (per the vndk error, from Oreo onward, including master builds in between). I do see you mentioned Marshmallow, but the root cause is the same. Your app should use different API sets. A: I was experiencing the same problem, but on Android Things. Solved by the following steps: * *Install the app *Reboot the phone *Restart the app - IT WORKS Previously I got an error like "vndksupport: sphal namespace is not configured for this process". Now, after the reboot, it doesn't have this problem.
stackoverflow
{ "language": "en", "length": 221, "provenance": "stackexchange_0000F.jsonl.gz:854716", "question_score": "7", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44510562" }
7451043221dcc7e8e9749f5498bff26b8657c2b0
Stackoverflow Stackexchange Q: Google BigQuery case insensitive match How would I run the following query, like I would again mysql: SELECT * FROM [integrated-myth-15.testing_data_100k] WHERE title='down in la' Now it will match if I have a perfect case-sensitive string match, but how would I do it case insensitive for "down in la"? I'm working from the Web console. A: The best way is to append "IGNORE CASE" at the end of your query. SELECT * FROM [integrated-myth-15.testing_data_100k] WHERE title='down in la' IGNORE CASE Note: This will work only with legacy SQL As mentioned in the official documentations: String functions operate on string data. String constants must be enclosed with single or double quotes. String functions are case-sensitive by default. You can append IGNORE CASE to the end of a query to enable case- insensitive matching. IGNORE CASE works only on ASCII characters and only at the top level of the query.
Q: Google BigQuery case insensitive match How would I run the following query, like I would again mysql: SELECT * FROM [integrated-myth-15.testing_data_100k] WHERE title='down in la' Now it will match if I have a perfect case-sensitive string match, but how would I do it case insensitive for "down in la"? I'm working from the Web console. A: The best way is to append "IGNORE CASE" at the end of your query. SELECT * FROM [integrated-myth-15.testing_data_100k] WHERE title='down in la' IGNORE CASE Note: This will work only with legacy SQL As mentioned in the official documentations: String functions operate on string data. String constants must be enclosed with single or double quotes. String functions are case-sensitive by default. You can append IGNORE CASE to the end of a query to enable case- insensitive matching. IGNORE CASE works only on ASCII characters and only at the top level of the query. A: The standard way to do this is using LOWER or UPPER on the input string, e.g.: #legacySQL SELECT * FROM [integrated-myth-15.testing_data_100k] WHERE LOWER(title) = 'down in la'; Or: #standardSQL SELECT * FROM `integrated-myth-15.testing_data_100k` WHERE LOWER(title) = 'down in la'; A: Excuse me if this is way off. I have not used the product, I'm reading the docs to research it. I've found the following which might be of use. CONTAINS_SUBSTR Performs a normalized, case-insensitive search to see if a value exists as a substring in an expression. Returns TRUE if the value exists, otherwise returns FALSE. https://cloud.google.com/bigquery/docs/reference/standard-sql/string_functions#contains_substr This is interesting because the case sensitivity seems to be built-in to the function, which tells me that there may be others which work this way, and that it'll just work the way most people would expect it to :) COLLATE Also, I wonder whether you can apply a collation at query time to help. 
https://cloud.google.com/bigquery/docs/reference/standard-sql/collation-concepts#collate_define -- Assume there is a table with this column declaration: CREATE TABLE table_a ( col_a STRING COLLATE 'und:ci', col_b STRING COLLATE '', col_c STRING, col_d STRING COLLATE 'und:ci' ); -- This runs. Column 'b' has a collation specification and the -- column 'c' does not. SELECT STARTS_WITH(col_b_expression, col_c_expression) FROM table_a; -- This runs. Column 'a' and 'd' have the same collation specification. SELECT STARTS_WITH(col_a_expression, col_d_expression) FROM table_a; -- This runs. Even though column 'a' and 'b' have different -- collation specifications, column 'b' is considered the default collation -- because it's assigned to an empty collation specification. SELECT STARTS_WITH(col_a_expression, col_b_expression) FROM table_a; -- This works. Even though column 'a' and 'b' have different -- collation specifications, column 'b' is updated to use the same -- collation specification as column 'a'. SELECT STARTS_WITH(col_a_expression, COLLATE(col_b_expression, 'und:ci')) FROM table_a; -- This runs. Column 'c' does not have a collation specification, so it uses the -- collation specification of column 'd'. SELECT STARTS_WITH(col_c_expression, col_d_expression) FROM table_a;
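Whichever engine runs it, the LOWER-both-sides approach from the first answer is just "normalize, then compare". A Python illustration of the same idea and of its main Unicode caveat (this is not BigQuery semantics, just the general principle):

```python
def ci_equal(a, b):
    """Case-insensitive equality by normalizing both sides."""
    # str.lower() is the closest analogue of SQL LOWER();
    # str.casefold() is more aggressive and handles e.g. the German sharp s.
    return a.casefold() == b.casefold()

print(ci_equal("Down In LA", "down in la"))   # True
print("Straße".lower(), "Straße".casefold())  # 'straße' vs 'strasse'
```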
stackoverflow
{ "language": "en", "length": 465, "provenance": "stackexchange_0000F.jsonl.gz:854719", "question_score": "16", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44510581" }
3eca111ddf6a9d262f02019031184147a5ee3c1e
Stackoverflow Stackexchange Q: How can I call a generator function inside mapDispatchToProps? This is really a simple JS question; I'm sure the problem is one of scope. I want to do something like this, but this is incorrect syntax. Basically, I want an event in my component to dispatch a different action each time the event happens. This whole approach may be wrong, in which case I would like to know how this should be done. const mapDispatchToProps = dispatch => { return { function* getNextSection() { yield dispatch(local_actions.general) yield dispatch(local_actions.fixbugs) yield dispatch(local_actions.resumefinish) } } } A: Interesting. The following code works: const action1 = () => ({ type: 'action1' }) const action2 = () => ({ type: 'action2' }) function* actionGenerator() { yield action1() yield action2() } // A generator returns an iterator, // it has to be stored in a variable const actionIterator = actionGenerator() const myAction = () => actionIterator.next().value connect(mapStateToProps, { myAction })(MyComponent) Then you can use it like: this.props.myAction() // action1 this.props.myAction() // action2
Q: How can I call a generator function inside mapDispatchToProps? This is really a simple JS question; I'm sure the problem is one of scope. I want to do something like this, but this is incorrect syntax. Basically, I want an event in my component to dispatch a different action each time the event happens. This whole approach may be wrong, in which case I would like to know how this should be done. const mapDispatchToProps = dispatch => { return { function* getNextSection() { yield dispatch(local_actions.general) yield dispatch(local_actions.fixbugs) yield dispatch(local_actions.resumefinish) } } } A: Interesting. The following code works: const action1 = () => ({ type: 'action1' }) const action2 = () => ({ type: 'action2' }) function* actionGenerator() { yield action1() yield action2() } // A generator returns an iterator, // it has to be stored in a variable const actionIterator = actionGenerator() const myAction = () => actionIterator.next().value connect(mapStateToProps, { myAction })(MyComponent) Then you can use it like: this.props.myAction() // action1 this.props.myAction() // action2
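The answer's pattern, create the iterator once and step it on every call, is language-agnostic. The same shape in Python (illustrative only, no Redux involved):

```python
def action_generator():
    """Yield one 'action' per step, in a fixed order."""
    yield {"type": "general"}
    yield {"type": "fixbugs"}
    yield {"type": "resumefinish"}

# Calling the generator function returns an iterator; store it ONCE.
# Re-creating it on every call would restart from the first action.
actions = action_generator()

def next_action():
    return next(actions, None)  # None once the sequence is exhausted

print(next_action())  # {'type': 'general'}
print(next_action())  # {'type': 'fixbugs'}
```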
stackoverflow
{ "language": "en", "length": 163, "provenance": "stackexchange_0000F.jsonl.gz:854722", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44510591" }
8e594560255c9150c07c8a3634d530ed41e969ad
Stackoverflow Stackexchange Q: Armadillo C++ doesn't find matrix inverse I'm using Armadillo & C++ and I'm trying to find the inverse of a Matrix, however, the inverse just returns the matrix itself. It seems to me that there isn't any computation. Also, there are no errors thrown. I am using the following header: #include <armadillo> using namespace std; using namespace arma; and I have been using Armadillo for a couple days and ran through several matrix manipulations that work properly. Input: mat A = randu<mat>(5,5); A.print("A: "); mat B = inv(A); B.print("inv(A): "); Output: A: 0.0013 0.1741 0.9885 0.1662 0.8760 0.1933 0.7105 0.1191 0.4508 0.9559 0.5850 0.3040 0.0089 0.0571 0.5393 0.3503 0.0914 0.5317 0.7833 0.4621 0.8228 0.1473 0.6018 0.5199 0.8622 inv(A): 0.0013 0.1741 0.9885 0.1662 0.8760 0.1933 0.7105 0.1191 0.4508 0.9559 0.5850 0.3040 0.0089 0.0571 0.5393 0.3503 0.0914 0.5317 0.7833 0.4621 0.8228 0.1473 0.6018 0.5199 0.8622 Process finished with exit code 0 Question: Why isn't inv(ofAMatrix) working, any hints or ideas? Thanks!
Q: Armadillo C++ doesn't find matrix inverse I'm using Armadillo & C++ and I'm trying to find the inverse of a Matrix, however, the inverse just returns the matrix itself. It seems to me that there isn't any computation. Also, there are no errors thrown. I am using the following header: #include <armadillo> using namespace std; using namespace arma; and I have been using Armadillo for a couple days and ran through several matrix manipulations that work properly. Input: mat A = randu<mat>(5,5); A.print("A: "); mat B = inv(A); B.print("inv(A): "); Output: A: 0.0013 0.1741 0.9885 0.1662 0.8760 0.1933 0.7105 0.1191 0.4508 0.9559 0.5850 0.3040 0.0089 0.0571 0.5393 0.3503 0.0914 0.5317 0.7833 0.4621 0.8228 0.1473 0.6018 0.5199 0.8622 inv(A): 0.0013 0.1741 0.9885 0.1662 0.8760 0.1933 0.7105 0.1191 0.4508 0.9559 0.5850 0.3040 0.0089 0.0571 0.5393 0.3503 0.0914 0.5317 0.7833 0.4621 0.8228 0.1473 0.6018 0.5199 0.8622 Process finished with exit code 0 Question: Why isn't inv(ofAMatrix) working, any hints or ideas? Thanks! A: This works just fine with Armadillo 7.900.1 with Intel (R) MKL backend and Clang 5.0. You should never take the inverse of a matrix unless it is absolutely necessary. Also you have to make sure that the inverse actually exists otherwise the algorithm will happily output garbage. If you want to compute the inverse of A to find x as in x = A-1 b it is better to solve the linear system A x = b instead. These solvers are much faster and have way better convergence. 
#include <armadillo> int main() { arma::mat A = { { 0.0013 , 0.1741 , 0.9885 , 0.1662 , 0.8760 } , { 0.1933 , 0.7105 , 0.1191 , 0.4508 , 0.9559 } , { 0.5850 , 0.3040 , 0.0089 , 0.0571 , 0.5393 } , { 0.3503 , 0.0914 , 0.5317 , 0.7833 , 0.4621 } , { 0.8228 , 0.1473 , 0.6018 , 0.5199 , 0.8622 } }; A.print("A: "); arma::mat B = arma::inv(A); B.print("inv(A): "); arma::mat I = A*B; I.print("I: "); } Output: A: 0.0013 0.1741 0.9885 0.1662 0.8760 0.1933 0.7105 0.1191 0.4508 0.9559 0.5850 0.3040 0.0089 0.0571 0.5393 0.3503 0.0914 0.5317 0.7833 0.4621 0.8228 0.1473 0.6018 0.5199 0.8622 inv(A): 0.4736 -1.7906 4.4377 2.2515 -2.4784 2.9108 -3.1697 12.1159 7.7356 -11.1675 2.5212 -2.8557 6.8074 4.7142 -6.1801 -1.0317 0.9400 -2.3230 0.2413 1.3297 -2.0869 3.6766 -9.6555 -6.9062 8.9447 I: 1.0000e+00 1.1340e-16 -1.8134e-15 -6.4918e-16 -4.8899e-17 7.6334e-17 1.0000e+00 -9.1810e-16 -9.4668e-16 8.7907e-16 2.5424e-16 -4.3981e-16 1.0000e+00 9.2981e-16 -2.0864e-15 9.3036e-17 -2.6745e-17 7.5137e-16 1.0000e+00 -8.1372e-16 4.3422e-16 -4.2293e-16 1.1321e-15 1.0687e-15 1.0000e+00 A: "Works for me" as they. 
Driving this from R and RcppArmadillo: First, we read the matrix and use the generalized inverse from the MASS package: R> M <- as.matrix(read.table(text="0.0013 0.1741 0.9885 0.1662 0.8760 0.1933 0.7105 0.1191 0.4508 0.9559 0.5850 0.3040 0.0089 0.0571 0.5393 0.3503 0.0914 0.5317 0.7833 0.4621 0.8228 0.1473 0.6018 0.5199 0.8622")) M <- as.matrix(read.table(text="0.0013 0.1741 0.9885 0.1662 0.8760 + 0.1933 0.7105 0.1191 0.4508 0.9559 + 0.5850 0.3040 0.0089 0.0571 0.5393 + 0.3503 0.0914 0.5317 0.7833 0.4621 + 0.8228 0.1473 0.6018 0.5199 0.8622")) R> M V1 V2 V3 V4 V5 [1,] 0.0013 0.1741 0.9885 0.1662 0.8760 [2,] 0.1933 0.7105 0.1191 0.4508 0.9559 [3,] 0.5850 0.3040 0.0089 0.0571 0.5393 [4,] 0.3503 0.0914 0.5317 0.7833 0.4621 [5,] 0.8228 0.1473 0.6018 0.5199 0.8622 R> MASS::ginv(M) [,1] [,2] [,3] [,4] [,5] [1,] 0.473579 -1.790599 4.43767 2.251542 -2.47842 [2,] 2.910752 -3.169657 12.11587 7.735612 -11.16755 [3,] 2.521167 -2.855651 6.80743 4.714239 -6.18015 [4,] -1.031667 0.940028 -2.32302 0.241345 1.32967 [5,] -2.086858 3.676647 -9.65548 -6.906203 8.94472 R> The we use RcppArmadillo: R> Rcpp::cppFunction("arma::mat armaInv(arma::mat x) { return arma::inv(x); }", depends="RcppArmadillo") R> armaInv(M) [,1] [,2] [,3] [,4] [,5] [1,] 0.473579 -1.790599 4.43767 2.251542 -2.47842 [2,] 2.910752 -3.169657 12.11587 7.735612 -11.16755 [3,] 2.521167 -2.855651 6.80743 4.714239 -6.18015 [4,] -1.031667 0.940028 -2.32302 0.241345 1.32967 [5,] -2.086858 3.676647 -9.65548 -6.906203 8.94472 R> Same answer both ways.
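A small numeric aside to the first answer's caveat: an inverse exists only when the determinant is non-zero, and checking A * inv(A) against the identity is the quickest sanity test. A pure-Python 2x2 sketch of both points (no Armadillo involved; helper names are made up):

```python
def inv2(m):
    """Closed-form inverse of a 2x2 matrix [[a, b], [c, d]]."""
    (a, b), (c, d) = m
    det = a * d - b * c
    if det == 0:
        raise ValueError("singular matrix: inverse does not exist")
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul2(x, y):
    """2x2 matrix product."""
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[4.0, 7.0], [2.0, 6.0]]
B = inv2(A)
C = matmul2(A, B)   # should be (numerically) the identity
print(B)
print(C)
```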
stackoverflow
{ "language": "en", "length": 627, "provenance": "stackexchange_0000F.jsonl.gz:854734", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44510612" }
570cf322c5ceecc79aaeaca860caf110e3909010
Stackoverflow Stackexchange Q: What are discrete animations? The MDN animation documentation refers to animation type being discrete. What does this mean? A: Discrete animations proceed from one keyframe to the next without any interpolation. Think of it the way you normally would think of an animation - one image to the next. Interpolation is inbetweening - filling in space between the main images (in the case of computer graphics these are found from formulas). In traditional hand-drawn animation, the main artist would produce the keyframes, and an assistant would draw the inbetweens. So discrete animation is like hand-drawn animation done without the inbetweens of an assistant.
Q: What are discrete animations? The MDN animation documentation refers to animation type being discrete. What does this mean? A: Discrete animations proceed from one keyframe to the next without any interpolation. Think of it the way you normally would think of an animation - one image to the next. Interpolation is inbetweening - filling in space between the main images (in the case of computer graphics these are found from formulas). In traditional hand-drawn animation, the main artist would produce the keyframes, and an assistant would draw the inbetweens. So discrete animation is like hand-drawn animation done without the inbetweens of an assistant.
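The keyframe-versus-inbetween distinction is easy to state numerically. A small Python sketch contrasting discrete stepping with linear interpolation, the simplest form of inbetweening (illustrative only; in CSS, a discretely animated property flips from one value to the next at the midpoint of the interval):

```python
def discrete(k0, k1, t):
    """Discrete 'interpolation': no inbetweens, just a flip.

    Modeled on the CSS behavior of flipping at the midpoint, where
    t is the 0..1 progress between two keyframes.
    """
    return k0 if t < 0.5 else k1

def lerp(k0, k1, t):
    """Linear interpolation: the inbetweens an assistant would draw."""
    return k0 + (k1 - k0) * t

for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(t, discrete(0, 10, t), lerp(0, 10, t))
```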
stackoverflow
{ "language": "en", "length": 104, "provenance": "stackexchange_0000F.jsonl.gz:854749", "question_score": "17", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44510663" }
e1d30474d1fbbaa92de25b41007cdfd117796542
Stackoverflow Stackexchange Q: No matching distribution found for coremltools I tried to use coremltools to convert a caffemodel to an mlmodel on my Mac. After running " pip install -U coremltools ", I got this: " Collecting coremltools Could not find a version that satisfies the requirement coremltools (from versions: ) No matching distribution found for coremltools " My Python version is "Python 2.7.10", numpy version is "numpy (1.12.1)", protobuf version is "protobuf (3.2.0)". I used " pip search coremltools ", and got " coremltools (0.3.0) - Community Tools for CoreML ", but " pip install coremltools==0.3 " got " Could not find a version that satisfies the requirement coremltools==0.3 (from versions: ) No matching distribution found for coremltools==0.3 " wtf ? Does anyone get this as well ? A: Try installing coremltools in a virtualenv that runs Python 2.7. Note that it currently doesn't work with Python 3.x Installing virtualenv Once virtualenv is installed, create a new environment that runs Python 2.7 virtualenv --python=/usr/bin/python2.7 <DIR> Next, activate the environment source <DIR>/bin/activate Then proceed with installing coremltools per usual pip install -U coremltools
Q: No matching distribution found for coremltools I tried to use coremltools to convert a caffemodel to an mlmodel on my Mac. After running " pip install -U coremltools ", I got this: " Collecting coremltools Could not find a version that satisfies the requirement coremltools (from versions: ) No matching distribution found for coremltools " My Python version is "Python 2.7.10", numpy version is "numpy (1.12.1)", protobuf version is "protobuf (3.2.0)". I used " pip search coremltools ", and got " coremltools (0.3.0) - Community Tools for CoreML ", but " pip install coremltools==0.3 " got " Could not find a version that satisfies the requirement coremltools==0.3 (from versions: ) No matching distribution found for coremltools==0.3 " wtf ? Does anyone get this as well ? A: Try installing coremltools in a virtualenv that runs Python 2.7. Note that it currently doesn't work with Python 3.x Installing virtualenv Once virtualenv is installed, create a new environment that runs Python 2.7 virtualenv --python=/usr/bin/python2.7 <DIR> Next, activate the environment source <DIR>/bin/activate Then proceed with installing coremltools per usual pip install -U coremltools A: I installed Python 3.6 (I think all versions above 2.7 will cause this problem). I had converted my default Python version to 2.7, but it still did not work. Then I used another Mac with Python 2.7 as the default, and the error did not appear again.
And now, I installed coremltools successfully: " Collecting coremltools Downloading coremltools-0.3.0-py2.7-none-any.whl (1.4MB) 100% |████████████████████████████████| 1.4MB 171kB/s Requirement already up-to-date: numpy>=1.6.2 in /Library/Python/2.7/site-packages (from coremltools) Requirement already up-to-date: protobuf>=3.1.0 in /Library/Python/2.7/site-packages (from coremltools) Requirement already up-to-date: six>=1.9 in /Library/Python/2.7/site-packages (from protobuf>=3.1.0->coremltools) Requirement already up-to-date: setuptools in /Library/Python/2.7/site-packages (from protobuf>=3.1.0->coremltools) Installing collected packages: coremltools Successfully installed coremltools-0.3.0 " A: CoreMLTools requires Python 2.7 coremltools-0.4.0-py2.7 https://pypi.python.org/pypi/coremltools * *cd ~/Virtualenvs *virtualenv project_folder *cd project_folder *source bin/activate *pip install -U coremltools Recommended Homebrew and Python Installation Homebrew Install (pre-Python Installation) The macOS default PATH is /usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin. You'll want to change it so that some Homebrew installations like Python will take precedence over stock macOS binaries. To make these changes, open ~/.bash_profile. vim ~/.bash_profile … and add these 4 lines: # Ensure user-installed binaries take precedence export PATH=/usr/local/bin:$PATH # Load .bashrc if it exists test -f ~/.bashrc && source ~/.bashrc Since the above directives will take effect on the next login, source the file to ensure it takes effect for the current session: source ~/.bash_profile Python and Virtualenvs Installations brew install python pip install virtualenv mkdir -p ~/Virtualenvs cd ~/Virtualenvs virtualenv project_folder cd project_folder source bin/activate pip install -U coremltools
http://satoshi.blogs.com/ml/2017/06/installing-coremltools-on-macos.html A: I installed python 3.6x But i couldn't install coremltool with it. Work around it is to go for virtualenv. If command: pip install virtualenv doesn't work just use latest command from python 3.6x i.e. pip3 install virtualenv. Hopefully, it should work. Cheers
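The common thread in these answers is interpreter mismatch: coremltools 0.x shipped only CPython 2.7 wheels (note the py2.7-none-any.whl filename above), so pip running under any other interpreter reports "No matching distribution found". A minimal, generic sketch (plain stdlib, not coremltools-specific) of checking the active interpreter before attempting the install:

```python
import sys

# coremltools 0.x shipped only CPython 2.7 wheels, so pip running under 3.x
# finds no matching distribution. Check which interpreter pip would use.
major, minor = sys.version_info[:2]
compatible = (major, minor) == (2, 7)
msg = "Python {}.{}: {}".format(
    major, minor,
    "OK for coremltools 0.x" if compatible else "needs a 2.7 virtualenv",
)
print(msg)
```

Running this with the same interpreter that backs your pip (e.g. `python -c ...`) tells you immediately whether a 2.7 virtualenv is needed.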
stackoverflow
{ "language": "en", "length": 482, "provenance": "stackexchange_0000F.jsonl.gz:854760", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44510701" }
39ac52e89f0f660869b32066cf280b5ac59f61f1
Stackoverflow Stackexchange Q: random sample of size N in Athena I'm trying to obtain a random sample of N rows from Athena. But since the table from which I want to draw this sample is huge, the naive SELECT id FROM mytable ORDER BY RANDOM() LIMIT 100 takes forever to run, presumably because the ORDER BY requires all data to be sent to a single node, which then shuffles and orders the data. I know about TABLESAMPLE, but that allows one to sample some percentage of rows rather than some number of them. Is there a better way of doing this? A: Athena actually runs on Presto, so you can use TABLESAMPLE to get a random sample of your table. Let's say you want a 10% sample of your table; your query will be something like: SELECT id FROM mytable TABLESAMPLE BERNOULLI(10) Note that there are both BERNOULLI and SYSTEM sampling methods. Here is the documentation for it.
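If you need exactly N rows rather than a percentage, one client-side alternative (not part of the answer above, and the names here are hypothetical) is reservoir sampling over the streamed result set — uniform, single pass, O(N) memory:

```python
import random

def reservoir_sample(rows, n, seed=None):
    """Keep a uniform random sample of exactly n items from an iterable,
    using O(n) memory regardless of the input size."""
    rng = random.Random(seed)
    sample = []
    for i, row in enumerate(rows):
        if i < n:
            sample.append(row)
        else:
            # Replace an existing element with decreasing probability n/(i+1)
            j = rng.randint(0, i)
            if j < n:
                sample[j] = row
    return sample

# Example over a large in-memory iterable; in practice `rows` could be a
# cursor streaming Athena query results page by page.
sample = reservoir_sample(range(100_000), 100, seed=42)
```

A coarse TABLESAMPLE first (to cut the scanned volume) combined with reservoir sampling on the client gives exactly N rows without the global ORDER BY.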
stackoverflow
{ "language": "en", "length": 153, "provenance": "stackexchange_0000F.jsonl.gz:854765", "question_score": "25", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44510714" }
eb8156d49ff1e1aea35beded4253195263bc6efd
Stackoverflow Stackexchange Q: When to use RPC over WebSocket? I have 2 components which need to communicate with each other in a bidirectional way. For now I have 2 approaches: one is setting up an RPC server on both sides to establish the two-way communication, the other is using WebSocket. Could anyone help me compare the pros and cons of RPC and WebSocket? Thanks A: WebSocket is a message-based transport, while RPC is a communication pattern. If you want routed RPCs over WebSocket, then take a look at the WAMP protocol (http://wamp-proto.org). This avoids having to set up a server/opening a port on each component, and allows them to communicate from behind NATs. Full disclosure: I am deeply involved in the WAMP ecosystem, but the protocol is open, as are most of the implementations.
A: As gzost says, RPC is a type of protocol and WebSockets are a transport mechanism. RPEP is another protocol for RPC and event-based communication that has a JavaScript implementation for node.js and the browser. It's much simpler than WAMP (WAMP's spec is about 6 times larger) and it's more flexible as a result.
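To make the "pattern vs transport" distinction concrete, here is a minimal framework-free Python sketch (names and method table hypothetical) of RPC layered on any message transport such as a WebSocket: each request carries an id and a method name, and the response echoes the id so replies can be matched to calls.

```python
import json

# Hypothetical method table exposed by one component.
METHODS = {"add": lambda a, b: a + b}

def handle_message(raw):
    """Dispatch one RPC request message and build the reply envelope."""
    req = json.loads(raw)
    result = METHODS[req["method"]](*req["params"])
    # Echo the request id so the caller can match the response to the call.
    return json.dumps({"id": req["id"], "result": result})

# In a real system `raw` would arrive as a WebSocket frame.
reply = handle_message(json.dumps({"id": 1, "method": "add", "params": [2, 3]}))
```

Protocols like WAMP or RPEP standardize exactly this envelope (plus routing, errors, and events) so both components don't have to invent it ad hoc.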
stackoverflow
{ "language": "en", "length": 184, "provenance": "stackexchange_0000F.jsonl.gz:854769", "question_score": "7", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44510719" }
0a1b9c9a2370a49e6f59793968635c9ac2425ce1
Stackoverflow Stackexchange Q: GPU-accelerated video processing with ffmpeg I want to use ffmpeg to accelerate video encode and decode with an NVIDIA GPU. From NVIDIA's website: NVIDIA GPUs contain one or more hardware-based decoder and encoder(s) (separate from the CUDA cores) which provides fully-accelerated hardware-based video decoding and encoding for several popular codecs. With decoding/encoding offloaded, the graphics engine and the CPU are free for other operations. My question is: can I use CUDA cores to encode and decode video, maybe faster? A: FFmpeg provides a subsystem for hardware acceleration, which includes NVIDIA: https://trac.ffmpeg.org/wiki/HWAccelIntro In order to enable support for GPU-assisted encoding with an NVIDIA GPU, you need:

* A supported GPU
* Supported drivers for your operating system
* The NVIDIA Codec SDK
* ffmpeg configured with --enable-nvenc (default if the drivers are detected while configuring)
A: Quick use on a supported GPU: CUDA ffmpeg -hwaccel cuda -i input output CUVID ffmpeg -c:v h264_cuvid -i input output Full hardware transcode with NVDEC and NVENC: ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input -c:v h264_nvenc -preset slow output If ffmpeg was compiled with support for libnpp, it can be used to insert a GPU-based scaler into the chain: ffmpeg -hwaccel_device 0 -hwaccel cuda -i input -vf scale_npp=-1:720 -c:v h264_nvenc -preset slow output.mkv Source: https://trac.ffmpeg.org/wiki/HWAccelIntro A: As Mike mentioned, ffmpeg wraps some of these HW accelerations. You should use it instead of going for more low-level approaches (the official NVIDIA libs) first! The table shows that NVENC is probably your candidate. But: be careful and do some benchmarking. While GPU encoders should be very fast, they are also worse than CPU ones in terms of visual quality. The thing to check here is: does a GPU encoder compete with a CPU encoder when some quality at some given bitrate is targeted? 
I would say no no no (except for very high bitrates or very bad quality), but that's something which depends on your use-case. GPU encoding is not a silver bullet providing only advantages. A: For AMD cards, use these -vcodec options: Windows: h264_amf hevc_amf Linux: h264_vaapi hevc_vaapi ffmpeg -i input.mp4 -b:v 10400k -vcodec h264_amf -vf crop=1920:848:0:116 -c:a copy output.mp4 ffmpeg -i input.mp4 -b:v 10400k -vcodec hevc_amf -vf crop=1920:848:0:116 -c:a copy output.mp4 ffmpeg -i input.mp4 -b:v 10400k -vcodec h264_vaapi -vf crop=1920:848:0:116 -c:a copy output.mp4 ffmpeg -i input.mp4 -b:v 10400k -vcodec hevc_vaapi -vf crop=1920:848:0:116 -c:a copy output.mp4
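The flag soup above is easy to get wrong in scripts. A small Python helper (hypothetical, not from the answers) that assembles the NVDEC→NVENC command list for subprocess makes the moving parts explicit — ffmpeg itself must still be installed and NVENC-capable for the command to actually run:

```python
def build_nvenc_cmd(src, dst, vcodec="h264_nvenc", preset="slow"):
    """Assemble an ffmpeg argv list for a full NVDEC -> NVENC transcode."""
    return [
        "ffmpeg",
        "-hwaccel", "cuda",                # decode on the GPU
        "-hwaccel_output_format", "cuda",  # keep frames in GPU memory
        "-i", src,
        "-c:v", vcodec,                    # encode on the GPU
        "-preset", preset,
        dst,
    ]

cmd = build_nvenc_cmd("input.mp4", "output.mp4")
# subprocess.run(cmd, check=True)  # uncomment to actually transcode
```

Building the command as a list (rather than a shell string) also sidesteps quoting problems with filter graphs like scale_npp=-1:720.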
stackoverflow
{ "language": "en", "length": 382, "provenance": "stackexchange_0000F.jsonl.gz:854784", "question_score": "67", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44510765" }
5b8dfa27caf50cc2da74d62899dfc4d2aa7e84b5
Stackoverflow Stackexchange Q: Changing the field separator of awk to newline The -F option lets you specify the field separator for awk, but using '\n' as the field separator doesn't work, that is, it doesn't make $1 the first line of the input, $2 the second line, and so on. I suspect that this is because awk looks for the field separator within each line. Is there a way to get around this with awk, or some other Linux command? Basically, I want to separate my input by newline characters and put them into an Excel file. I'm still warming up to Linux and shell scripts, which is the reason for my lack of creativity with this problem. Thank you! A: You may need to override the input record separator (RS), whose default is newline. See my example below, $ cat test.txt a b c d $ awk 'BEGIN{ RS = "" ; FS = "\n" }{print $1,$2,$3,$4}' test.txt a b c d
A: Note that you can change both the input and output record separator, so you can do something like this to achieve a similar result to the accepted answer: cat test.txt a b c d $ awk -v ORS=" " '{print $1}' test.txt a b c d A: One can simplify it to just the following, with a minor caveat of an extra trailing space and no trailing newline: % echo "a\nb\nc\nd" a b c d % echo "a\nb\nc\nd" | mawk 8 ORS=' ' a b c d % To rectify that, plus handle the edge case of no trailing newline in the input, one can modify it to: % echo -n "a\nb\nc\nd" | mawk 'NF-=_==$NF' FS='\n' RS='^$' | odview 0000000 543301729 174334051 a b c d \n 141 040 142 040 143 040 144 012 a sp b sp c sp d nl 97 32 98 32 99 32 100 10 61 20 62 20 63 20 64 0a 0000010 % echo "a\nb\nc\nd" | mawk 'NF -= (_==$NF)' FS='\n' RS='^$' | odview 0000000 543301729 174334051 a b c d \n 141 040 142 040 143 040 144 012 a sp b sp c sp d nl 97 32 98 32 99 32 100 10 61 20 62 20 63 20 64 0a 0000010
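The transformation these awk one-liners perform — treat each line as a field, rejoin with spaces — is easy to sanity-check outside awk. An illustrative Python equivalent (not a replacement for the answers above):

```python
text = "a\nb\nc\nd\n"

# splitlines() treats each line as one "field", mirroring FS="\n" with RS=""
# in awk; joining with a space mirrors printing with OFS/ORS set to " ".
fields = text.splitlines()
joined = " ".join(fields)
```

The same idea (split on newlines, write delimited fields) is also how you'd hand the data to a CSV writer for the Excel step the question mentions.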
stackoverflow
{ "language": "en", "length": 374, "provenance": "stackexchange_0000F.jsonl.gz:854798", "question_score": "9", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44510813" }
bf2610c4dc749bf213e9457b635febd875e42e33
Stackoverflow Stackexchange Q: Git in terminal console - Android Studio - Autocomplete Is there any way to make the built-in terminal console of Android Studio (3.0 Canary) autocomplete git commands like the new IntelliJ IDEA 2017 does? -- EDITED I have installed a clean copy of IntelliJ IDEA 2017.1.4 on Windows 10 and noticed that this functionality is not actually provided by IntelliJ itself. I'm trying to identify the plugin that provides it. -- EDITED The solution is just to set an external bash terminal and restart Android Studio, as @lidkxx pointed out... A: If you love bash & Linux commands but are forced to use Windows for development, you will love this trick. The console is colored like on Linux and autocomplete works way better than cmd/PowerShell. Requirement: Git installed with git-bash. Open Android Studio, go to File > Settings, open Tools > Terminal and set "Shell Path" to C:\Program Files\Git\bin\bash.exe
A: Not sure if there's anything out of the box, but maybe try Preferences... -> search for Terminal. There's a shell path field and you can choose your shell to be anything you like. I am using zsh with the autosuggestions plugin and autocomplete works like a charm. Edit: You probably need to restart Android Studio for this change to take effect. A: For me, 2017.2 on Mac does not autocomplete anything. Probably you are using a different shell there, so you just need to configure Android Studio to use it as well, in Settings - Tools - Terminal. A: Maybe it's late. To achieve this in Windows you can use PowerShell along with posh-git. Here are the steps: 1- In Android Studio -> Settings -> Tools -> Terminal -> set the Shell path to powershell.exe 2- Install posh-git: 1- Verify you have PowerShell 2.0 or better with $PSVersionTable.PSVersion 2- Verify execution of scripts is allowed with Get-ExecutionPolicy (should be RemoteSigned or Unrestricted). If scripts are not enabled, run PowerShell as Administrator and call Set-ExecutionPolicy RemoteSigned -Scope CurrentUser -Confirm. 
3- Verify that git can be run from PowerShell. If the command is not found, you will need to add a git alias or add %ProgramFiles%\Git\cmd to your PATH environment variable. 4- Clone the posh-git repository to your local machine. 5- From the posh-git repository directory, run .\install.ps1. Enjoy ;)
stackoverflow
{ "language": "en", "length": 372, "provenance": "stackexchange_0000F.jsonl.gz:854815", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44510860" }
720c3fc57a1407cfdb51ef52d1e28bb930363d0c
Stackoverflow Stackexchange Q: Creating an ArrayList of unique items in an ArrayList I would like my code to create an ArrayList (uniquePinyinArrayList) of unique items from an existing ArrayList (pinyinArrayList) which contains duplicates. The "println" commands do not execute (I think they should execute when a duplicate from the pinyinArrayList is found in uniquePinyinArrayList) fun uniquePinyinArray(pinyinArrayList: ArrayList<String>) { val uniquePinyinArrayList = ArrayList<String>() for(currentPinyin in pinyinArrayList){ if (currentPinyin in uniquePinyinArrayList){ // do nothing println("already contained"+currentPinyin) println("uniquePinyin"+uniquePinyinArrayList) } else { uniquePinyinArrayList.add(currentPinyin) } } } I have also tried if (uniquePinyinArrayList.contains(currentPinyin)){ , though this also didn't work. Edit: This method actually gets run for each word from my list of source-words, and hence multiple ArrayLists were created. To fix this, I made a single ArrayList object for uniquePinyin outside of this loop. Things work as expected now! A: Check out the distinct() function, it will do all of this for you! fun main(args: Array<String>) { val listOfThings = listOf("A", "B", "C", "A", "B", "C") val distinctThings = listOfThings.distinct() println(listOfThings) // [A, B, C, A, B, C] println(distinctThings) // [A, B, C] } https://kotlinlang.org/api/latest/jvm/stdlib/kotlin.collections/distinct.html
A: You can convert your array list to a set: Set<String> foo = new HashSet<String>(pinyinArrayList);
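For comparison — and to illustrate why distinct() preserves first-occurrence order while a plain HashSet need not — here is the same dedup expressed in Python (illustrative only, not Kotlin):

```python
items = ["A", "B", "C", "A", "B", "C"]

# dict preserves insertion order (Python 3.7+), so this keeps the first
# occurrence of each element, like Kotlin's distinct(); a plain set() would
# not guarantee the original order.
distinct = list(dict.fromkeys(items))
```

The design point carries over: choose the set-based conversion when order doesn't matter, and an order-preserving dedup when it does.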
stackoverflow
{ "language": "en", "length": 193, "provenance": "stackexchange_0000F.jsonl.gz:854846", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44510942" }
ea4a0c70ea66a6bfaea9cb52ac33c9a3599b84a4
Stackoverflow Stackexchange Q: Remove single elements from a vector I have a vector M containing single elements and repeats. I want to delete all the single elements, turning something like [1 1 2 3 4 5 4 4 5] into [1 1 4 5 4 4 5]. I thought I'd try to get the count of each element and then use the index to delete what I don't need, something like this: uniq = unique(M); list = [uniq histc(M,uniq)]; But I'm stuck here and not sure how to go forward. Can anyone help? A: Here is a solution using unique, histcounts and ismember: tmp=unique(M) ; %finding unique elements of M %Now keeping only those elements in tmp which appear only once in M tmp = tmp(histcounts(M,[tmp tmp(end)])==1); %Thanks to rahnema for his insight on this [~,ind] = ismember(tmp,M); %finding the indexes of these elements in M M(ind)=[]; histcounts was introduced in R2014b. For earlier versions, hist can be used by replacing that line with this: tmp=tmp(hist(M,tmp)==1);
A: You can get the result with the following code: A = [a.', ones(length(a),1)]; [C,~,ic] = unique(A(:,1)); result = [C, accumarray(ic,A(:,2))]; a = A(~ismember(A(:,1),result(result(:,2) == 1))).'; The idea is: append a column of ones to a', then accumarray based on the first column (the elements of a). After that, find the elements of the first column whose accumulated sum in the second column is 1 -- these are the elements that appear only once in a. Finally, remove them from the first column of A. 
A: Here is a cheaper alternative: [s ii] = sort(a); x = [false s(2:end)==s(1:end-1)] y = [x(2:end)|x(1:end-1) x(end)] z(ii) = y; result = a(z); Assuming the input is a = 1 1 8 8 3 1 4 5 4 6 4 5 we sort the list s and get the index of the sorted list ii s= 1 1 1 3 4 4 4 5 5 6 8 8 We can find the index of the repeated elements: for that we check whether each element is equal to the previous element x = 0 1 1 0 0 1 1 0 1 0 0 1 However in x the first element of each block is omitted; to find it we can apply [or] between each element and the previous element y = 1 1 1 0 1 1 1 1 1 0 1 1 We now have a sorted logical index of the repeated elements. It should be reordered to the original order. For that we use the index of the sorted elements ii: z = 1 1 1 1 0 1 1 1 1 0 1 1 Finally we use z to extract only the repeated elements. result = 1 1 8 8 1 4 5 4 4 5 Here is the result of a test in Octave* for the following input: a = randi([1 100000],1,10000000); -------HIST-------- Elapsed time is 5.38654 seconds. ----ACCUMARRAY------ Elapsed time is 2.62602 seconds. -------SORT-------- Elapsed time is 1.83391 seconds. -------LOOP-------- Doesn't complete in 15 seconds. *Since histcounts hasn't been implemented in Octave, I used hist instead of histcounts. You can test it online A: X = [1 1 2 3 4 5 4 4 5]; Y = X; A = unique(X); for i = 1:length(A) idx = find(X==A(i)); if length(idx) == 1 Y(idx) = NaN; end end Y(isnan(Y)) = []; Then, Y would be [1 1 4 5 4 4 5]. It detects all single elements, makes them NaN, and then removes all NaN elements from the vector.
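The same filter is compact enough in Python to verify the intended semantics against the question's example (illustrative only — the MATLAB/Octave answers above are the actual solutions):

```python
from collections import Counter

def keep_repeated(seq):
    """Keep only elements whose value occurs more than once, preserving order."""
    counts = Counter(seq)
    return [x for x in seq if counts[x] > 1]

result = keep_repeated([1, 1, 2, 3, 4, 5, 4, 4, 5])
```

Like the histcounts approach, this is two passes over the data (one to count, one to filter), which is why it scales to large vectors where the find-in-a-loop answer does not.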
stackoverflow
{ "language": "en", "length": 573, "provenance": "stackexchange_0000F.jsonl.gz:854853", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44510962" }
2ed1916d5f26a0a8b62139075eb70fd4265067b0
Stackoverflow Stackexchange Q: how to create a stacked DNC in tensorflow? There are 3 implementations of DNC using TensorFlow that I have seen: DeepMind's, Mostafa-Samir's and Siraj Raval's. Basically what I want to understand is how to create a layer of DNC that can be used anywhere (a Keras-like layer). For instance: create 2 layers of DNC. Can I do it with TensorFlow (or using one of those 3 implementations)? And is there any tutorial/guideline for doing it? Thanks!
stackoverflow
{ "language": "en", "length": 75, "provenance": "stackexchange_0000F.jsonl.gz:854860", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44510982" }
61c83c1f380c09afa1074c9d104a236fc47897f6
Stackoverflow Stackexchange Q: How to make a QCombobox only display content (not editable, not selectable) I have a problem with QComboBox: I have a QComboBox for configuring a color, and I use QColorDialog with the QComboBox. In display mode, I just want to display the value of the QComboBox to the user; the user should not be able to edit the value or select another value from the QComboBox. I tried 2 solutions:

* set property editable = false: the user can still choose another value by selecting from the combobox
* set property enabled = false: the user cannot edit or select, but the color in the combobox is grey, not the value that I configured, e.g. red.

I googled but didn't find any answers. Can somebody help me? A: You could disallow changes by creating a slot for currentIndexChanged and changing it back.
stackoverflow
{ "language": "en", "length": 126, "provenance": "stackexchange_0000F.jsonl.gz:854874", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44511024" }
4b920c77a75965e73d87df1f0eb014c866c46039
Stackoverflow Stackexchange Q: SQLAlchemy, prevent duplicate rows I'm wondering if it's possible to prevent committing duplicates to the database. For example, presume there is a class as follows class Employee(Base): id = Column(Integer, primary_key=True) name = Column(String) If I were to make a series of these objects, employee1 = Employee(name='bob') employee2 = Employee(name='bob') session.add_all([employee1, employee2]) session.commit() I would like only a single row to be added to the database, and employee1 and employee2 to point to the same object in memory (if possible). Is there functionality within SQLAlchemy to accomplish this? Or would I need to ensure duplicates don't exist programmatically? A: An alternate get_or_create() solution (sess is an existing Session and log a logger, both assumed to exist at module level): from sqlalchemy.orm.exc import NoResultFound # ... def get_or_create(model, **kwargs): """ Usage: class Employee(Base): __tablename__ = 'employee' id = Column(Integer, primary_key=True) name = Column(String, unique=True) get_or_create(Employee, name='bob') """ instance = get_instance(model, **kwargs) if instance is None: instance = create_instance(model, **kwargs) return instance def create_instance(model, **kwargs): """Create an instance.""" try: instance = model(**kwargs) sess.add(instance) sess.flush() except Exception as msg: mtext = 'model:{}, args:{} => msg:{}' log.error(mtext.format(model, kwargs, msg)) sess.rollback() raise return instance def get_instance(model, **kwargs): """Return the first instance found.""" try: return sess.query(model).filter_by(**kwargs).first() except NoResultFound: return
A: You could create a class method to get or create an Employee -- get it if it exists, otherwise create it: @classmethod def get_or_create(cls, name): exists = db.session.query(Employee.id).filter_by(name=name).scalar() is not None if exists: return db.session.query(Employee).filter_by(name=name).first() return cls(name=name) employee1 = Employee(name='bob') db.session.add(employee1) employee2 = Employee(name='bob') employee1 == employee2 # 
False bob1 = Employee.get_or_create(name='bob') if bob1 not in db.session: db.session.add(bob1) len(add_to_session) # 1 bob2 = Employee.get_or_create(name='bob') if bob2 not in db.session: db.session.add(bob2) len(add_to_session) # 1 bob1 == bob2 # True A: There are at least 2 approaches:

* The database approach: create a relevant primary key; with SQLAlchemy you would define, e.g. based on your minimalistic example, name = Column('First Name', String(20), primary_key=True)
* The coding approach: check whether the attribute (or set of attributes) already exists in the table, and otherwise create it. See relevant code examples here.

In terms of performance, I believe the database approach is better. It is also the one which makes more sense.
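The database-level guard the last answer argues for can be demonstrated with nothing but the stdlib — sqlite3 standing in for the real backend here. A UNIQUE constraint makes duplicates impossible no matter what the application code does, which is why it beats check-then-insert under concurrency:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT UNIQUE)")

# INSERT OR IGNORE silently skips rows that would violate the constraint;
# a plain INSERT would raise sqlite3.IntegrityError instead.
for name in ("bob", "bob"):
    conn.execute("INSERT OR IGNORE INTO employee (name) VALUES (?)", (name,))
conn.commit()

rows = conn.execute("SELECT name FROM employee").fetchall()
```

With SQLAlchemy, the equivalent is `unique=True` on the column plus catching IntegrityError (or a dialect-specific upsert) — the get_or_create helpers above are then just a convenience layer, not the actual safety net.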
stackoverflow
{ "language": "en", "length": 342, "provenance": "stackexchange_0000F.jsonl.gz:854883", "question_score": "8", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44511046" }
09f269b7f27551c74438ce8599badf94a341b239
Stackoverflow Stackexchange Q: How to change Drawable color dynamically for buttons I have drawable xml up.xml as in the following code <?xml version="1.0" encoding="utf-8"?> <layer-list xmlns:android="http://schemas.android.com/apk/res/android" > <item> <rotate android:fromDegrees="45" android:toDegrees="45" android:pivotX="-40%" android:pivotY="87%" > <shape android:shape="rectangle" > <solid android:color="@color/green" /> </shape> </rotate> </item> </layer-list> and I have this drawable attached to a button as follows <Button android:id="@+id/bill_amount_up" android:layout_width="30dp" android:layout_height="30dp" android:background="@drawable/up" /> I'm trying to change the color of the solid android:color="@color/green" in the up.xml file dynamically in my code. I tried the following, but it didn't work. ((GradientDrawable)billAmountUp.getBackground()).setColor(color); I get the following error java.lang.ClassCastException: android.graphics.drawable.LayerDrawable cannot be cast to android.graphics.drawable.GradientDrawable Can anyone help? Thank you. A: try this my friend Drawable myIcon = getResources().getDrawable( R.drawable.button ); ColorFilter filter = new LightingColorFilter( Color.BLACK, Color.BLACK); myIcon.setColorFilter(filter); Kotlin code val myIcon =ContextCompat.getDrawable(this@LoginActivity,R.drawable.button) val filter: ColorFilter = LightingColorFilter(Color.BLACK, Color.BLACK) myIcon.colorFilter = filter
stackoverflow
{ "language": "en", "length": 136, "provenance": "stackexchange_0000F.jsonl.gz:854900", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44511099" }
ab9af89b515a1f9f01d45dde5017189ca8051986
Stackoverflow Stackexchange Q: Template function overload for type containing a type I'm trying to do the following: #include <iostream> #include <vector> #include <tuple> #include <list> template <typename T> void f(T t) { std::cout << "1" << std::endl; } template <typename T, typename V> void f(T<std::tuple<V>> t) { std::cout << "2" << std::endl; } int main() { f(std::list<double>{}); // should use first template f(std::vector<std::tuple<int>>{}); // should use second template } What is the simplest way to do this in C++14? I thought that I could sort of pattern match in this way but the compiler won't have it. A: The template parameter T is used as a template-name, so it should be declared as template template parameter. e.g. template <template <typename...> class T, typename V> // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ void f(T<std::tuple<V>> t) { std::cout << "2" << std::endl; } LIVE
Q: Template function overload for type containing a type I'm trying to do the following: #include <iostream> #include <vector> #include <tuple> #include <list> template <typename T> void f(T t) { std::cout << "1" << std::endl; } template <typename T, typename V> void f(T<std::tuple<V>> t) { std::cout << "2" << std::endl; } int main() { f(std::list<double>{}); // should use first template f(std::vector<std::tuple<int>>{}); // should use second template } What is the simplest way to do this in C++14? I thought that I could sort of pattern match in this way but the compiler won't have it. A: The template parameter T is used as a template-name, so it should be declared as template template parameter. e.g. template <template <typename...> class T, typename V> // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ void f(T<std::tuple<V>> t) { std::cout << "2" << std::endl; } LIVE
stackoverflow
{ "language": "en", "length": 135, "provenance": "stackexchange_0000F.jsonl.gz:854905", "question_score": "6", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44511121" }
3adfb3102b803a2a6eb52b6bfebcb7730f39f733
Stackoverflow Stackexchange Q: Build error: missing babel-preset-expo in expo mobile app I'm new to react-native and am in the early stages of creating an app with Expo. I had a working app until installing redux. Currently I am getting the following error from the XDE: Problem checking node_modules dependencies: Unexpected end of JSON input and the following from the ios simulator: Building JavaScript bundle: error TransformError: ../app/main.js: Couldn't find preset "babel-preset-expo" relative to directory "../app/" I believe my node modules contain valid JSON. It should be noted that I'm using a more current version of react-native than expo. A: I experienced this issue when I tried moving to expo version 21.0.0. You should try to delete your node modules and use yarn to install. package.json dependencies:{ "babel-preset-expo" : "^4.0.0", "expo": "^21.0.0", "react-native": "https://github.com/expo/react-native/archive/sdk-21-0.2.tar.gz" } my .babelrc { "presets": ["babel-preset-expo"], "env": { "development": { "plugins": ["transform-react-jsx-source"] } } }
Q: Build error: missing babel-preset-expo in expo mobile app I'm new to react-native and am in the early stages of creating an app with Expo. I had a working app until installing redux. Currently I am getting the following error from the XDE: Problem checking node_modules dependencies: Unexpected end of JSON input and the following from the ios simulator: Building JavaScript bundle: error TransformError: ../app/main.js: Couldn't find preset "babel-preset-expo" relative to directory "../app/" I believe my node modules contain valid JSON. It should be noted that I'm using a more current version of react-native than expo. A: I experienced this issue when I tried moving to expo version 21.0.0. You should try to delete your node modules and use yarn to install. package.json dependencies:{ "babel-preset-expo" : "^4.0.0", "expo": "^21.0.0", "react-native": "https://github.com/expo/react-native/archive/sdk-21-0.2.tar.gz" } my .babelrc { "presets": ["babel-preset-expo"], "env": { "development": { "plugins": ["transform-react-jsx-source"] } } }
stackoverflow
{ "language": "en", "length": 146, "provenance": "stackexchange_0000F.jsonl.gz:854915", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44511152" }
369d498176482ac204ace178ef3f177f3b55ec19
Stackoverflow Stackexchange Q: R Shiny error "X11 font -adobe-helvetica-%s-%s-*-*-%d-*-*-*-*-*-*-*, face 1 at size 9 could not be loaded" I am programming in R and trying to display a plot using ggplot, but I am getting: Error: X11 font -adobe-helvetica-%s-%s-*-*-%d-*-*-*-*-*-*-*, face 1 at size 9 could not be loaded I am running on Ubuntu 16.04. (screenshot of the error attached)
Q: R Shiny error "X11 font -adobe-helvetica-%s-%s-*-*-%d-*-*-*-*-*-*-*, face 1 at size 9 could not be loaded" I am programming in R and trying to display a plot using ggplot, but I am getting: Error: X11 font -adobe-helvetica-%s-%s-*-*-%d-*-*-*-*-*-*-*, face 1 at size 9 could not be loaded I am running on Ubuntu 16.04. (screenshot of the error attached)
stackoverflow
{ "language": "en", "length": 55, "provenance": "stackexchange_0000F.jsonl.gz:854927", "question_score": "8", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44511186" }
d2eef4824959f5f87996d70d3c6df3250657ca91
Stackoverflow Stackexchange Q: Connect botkit to my service restful api I installed botkit locally and it is working perfectly with Slack. Now, I want to connect the bot with an external RESTful API to ask, for example: HUMAN: How many clients do you have connected? Bot: The bot internally executes a query over the REST API of my service and then answers Bot: There are 21 clients connected. Any suggestions? A: We do a similar operation and it's pretty simple. Use some sort of HTTP client to make a GET to your endpoint. We use the request npm package. Then you just have to call bot.reply in the callback. To kick off the interaction I'm using ambient to listen to any channel the bot is invited to, but you could set that to direct_message if that's how you roll. var request = require('request'); module.exports = function(controller) { controller.hears(['How many clients'], 'ambient', function(bot, message) { request('http://api.com/totalUsers', function (err, response, body) { console.log('error: ', err); // Handle the error if one occurred console.log('statusCode: ', response && response.statusCode); // Check 200 or such console.log('This is the count of users: ', body.usersCount); bot.reply(message, 'There are ' + body.usersCount + ' clients connected'); }); }); };
Q: Connect botkit to my service restful api I installed botkit locally and it is working perfectly with Slack. Now, I want to connect the bot with an external RESTful API to ask, for example: HUMAN: How many clients do you have connected? Bot: The bot internally executes a query over the REST API of my service and then answers Bot: There are 21 clients connected. Any suggestions? A: We do a similar operation and it's pretty simple. Use some sort of HTTP client to make a GET to your endpoint. We use the request npm package. Then you just have to call bot.reply in the callback. To kick off the interaction I'm using ambient to listen to any channel the bot is invited to, but you could set that to direct_message if that's how you roll. var request = require('request'); module.exports = function(controller) { controller.hears(['How many clients'], 'ambient', function(bot, message) { request('http://api.com/totalUsers', function (err, response, body) { console.log('error: ', err); // Handle the error if one occurred console.log('statusCode: ', response && response.statusCode); // Check 200 or such console.log('This is the count of users: ', body.usersCount); bot.reply(message, 'There are ' + body.usersCount + ' clients connected'); }); }); };
stackoverflow
{ "language": "en", "length": 198, "provenance": "stackexchange_0000F.jsonl.gz:854971", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44511333" }
28a4791b08bccfe1e490fc532617397a31c6e2c0
Stackoverflow Stackexchange Q: Antd - is there any way to change the background color of the Card title? By adding the style attribute I can only change the color of the body part of the Card component. How can I change the title part as well? <Card title='Card title' bordered loading={this.onLoading()} style={{ backgroundColor: '#aaaaaa' }}> <Row type='flex' justify='center'> <h1>Card content</h1> </Row> </Card> A: Finally I had no choice but to use CSS to get around it. The structure of the antd card is like <div class="ant-card ant-card-bordered"> <div class="ant-card-head" \> <div class="ant-card-body" \> and .ant-card-head has the background style as #fff. If I do something like <Card style={{background:"#aaa"}}, it will override the .ant-card class. If I do <Card bodyStyle={{background:"#aaa"}}, it will override the .ant-card-body class, and I couldn't find any direct way to override the .ant-card-head class, so I ended up using CSS to set the .ant-card-body background to none and it works.
Q: Antd - is there any way to change the background color of the Card title? By adding the style attribute I can only change the color of the body part of the Card component. How can I change the title part as well? <Card title='Card title' bordered loading={this.onLoading()} style={{ backgroundColor: '#aaaaaa' }}> <Row type='flex' justify='center'> <h1>Card content</h1> </Row> </Card> A: Finally I had no choice but to use CSS to get around it. The structure of the antd card is like <div class="ant-card ant-card-bordered"> <div class="ant-card-head" \> <div class="ant-card-body" \> and .ant-card-head has the background style as #fff. If I do something like <Card style={{background:"#aaa"}}, it will override the .ant-card class. If I do <Card bodyStyle={{background:"#aaa"}}, it will override the .ant-card-body class, and I couldn't find any direct way to override the .ant-card-head class, so I ended up using CSS to set the .ant-card-body background to none and it works. A: For antd version 4.x You can use the headStyle and bodyStyle properties to change the background color <Card title="Card title" headStyle={{ backgroundColor: '#5c6cfa', color: '#ffffff' }} bodyStyle={{ backgroundColor: '#a9bbff' }} bordered={false} style={{ width: 300 }} > <p>Card content</p> <p>Card content</p> <p>Card content</p> </Card> OR You can customize the css classes .ant-card-head { background: #5c6cfa; color: #ffffff; } .ant-card-body { background: #a9bbff; } Screenshot: A: You have a typo. style:{{backgroundColor:'#aaaaaa'}} should be style={{ backgroundColor: '#aaaaaa' }} and it works for me: Using the same code and it does work: I may need to inspect your page to know why it doesn't work for you A: The following worked for me.
I had to remove the background color of the entire card and set a different background color both for the Head and Body content: <Card title="Track title" style={{backgroundColor: 'rgba(255, 255, 255, 0.0)', border: 0 }} headStyle={{backgroundColor: 'rgba(255, 255, 255, 0.4)', border: 0 }} bodyStyle={{backgroundColor: 'rgba(255, 0, 0, 0.4)', border: 0 }} > <Card.Meta description="Track author" /> </Card> Result: Hope it helps!
stackoverflow
{ "language": "en", "length": 318, "provenance": "stackexchange_0000F.jsonl.gz:855007", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44511421" }
43a496800f88dd98025200ccc0b630e9a4cb43cc
Stackoverflow Stackexchange Q: How to do an explicit fall-through in C The newer versions of gcc offer -Wimplicit-fallthrough, which is great to have for most switch statements. However, I have one switch statement where I want to allow fall-throughs from all case statements. Is there a way to do an explicit fall through? I'd prefer to avoid having to compile with -Wno-implicit-fallthrough for this file. EDIT: I'm looking for a way to make the fall through explicit (if it's possible), not to turn off the warning via a compiler switch or pragma. A: GCC fallthrough magic comments You should not use this if you can help it, it is insane, but good to know about: int main(int argc, char **argv) { (void)argv; switch (argc) { case 0: argc = 1; // fall through case 1: argc = 2; }; } prevents the warning on GCC 7.4.0 with: gcc -Wall -Wextra main.c man gcc describes how different comments may or may not be recognized depending on the value of: -Wimplicit-fallthrough=n C++17 [[fallthrough]] attribute C++17 got a standardized syntax for this: GCC 7, -Wimplicit-fallthrough warnings, and portable way to clear them?
Q: How to do an explicit fall-through in C The newer versions of gcc offer -Wimplicit-fallthrough, which is great to have for most switch statements. However, I have one switch statement where I want to allow fall-throughs from all case statements. Is there a way to do an explicit fall through? I'd prefer to avoid having to compile with -Wno-implicit-fallthrough for this file. EDIT: I'm looking for a way to make the fall through explicit (if it's possible), not to turn off the warning via a compiler switch or pragma. A: GCC fallthrough magic comments You should not use this if you can help it, it is insane, but good to know about: int main(int argc, char **argv) { (void)argv; switch (argc) { case 0: argc = 1; // fall through case 1: argc = 2; }; } prevents the warning on GCC 7.4.0 with: gcc -Wall -Wextra main.c man gcc describes how different comments may or may not be recognized depending on the value of: -Wimplicit-fallthrough=n C++17 [[fallthrough]] attribute C++17 got a standardized syntax for this: GCC 7, -Wimplicit-fallthrough warnings, and portable way to clear them? A: You should be able to use GCC diagnostic pragmas to disable that particular warning for your source file or some portion of a source file. Try putting this at the top of your file: #pragma GCC diagnostic ignored "-Wimplicit-fallthrough" A: Use __attribute__ ((fallthrough)) switch (condition) { case 1: printf("1 "); __attribute__ ((fallthrough)); case 2: printf("2 "); __attribute__ ((fallthrough)); case 3: printf("3\n"); break; }
stackoverflow
{ "language": "en", "length": 251, "provenance": "stackexchange_0000F.jsonl.gz:855013", "question_score": "31", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44511436" }
722b560946cd751a4f94979b463f60afabca5dba
Stackoverflow Stackexchange Q: How do I hide a bootstrap-select? I need to hide a select statement for part of an web application. At one point in the code I say $("#select1").addClass("hidden"); to hide it. Until I decided to use bootstrap-select it worked fine. But since I added the class="selectpicker" it no longer hides when told to. I can see the 'hidden' has been added to the class statement using web inspector but the select is not actually hidden. How do I make this work? A: bootstrap-select convert your select tag to a list of buttons. You should hide or show its parent instead of itself to avoid css override. Example: <div class="form-group" id="form-group-1"> <label for="select1">Select list:</label> <select class="form-control selectpicker" id="select1"> <option>1</option> <option>2</option> <option>3</option> <option>4</option> </select> </div> <button id="btn-hide">Hide select</button> <button id="btn-show">Show select</button> In this case we will hide or show #form-group-1 instead of #select1: $("#btn-hide").click(function(){ $("#form-group-1").hide(); }); $("#btn-show").click(function(){ $("#form-group-1").show(); }); Please take a look at my JSFiddle.
Q: How do I hide a bootstrap-select? I need to hide a select statement for part of an web application. At one point in the code I say $("#select1").addClass("hidden"); to hide it. Until I decided to use bootstrap-select it worked fine. But since I added the class="selectpicker" it no longer hides when told to. I can see the 'hidden' has been added to the class statement using web inspector but the select is not actually hidden. How do I make this work? A: bootstrap-select convert your select tag to a list of buttons. You should hide or show its parent instead of itself to avoid css override. Example: <div class="form-group" id="form-group-1"> <label for="select1">Select list:</label> <select class="form-control selectpicker" id="select1"> <option>1</option> <option>2</option> <option>3</option> <option>4</option> </select> </div> <button id="btn-hide">Hide select</button> <button id="btn-show">Show select</button> In this case we will hide or show #form-group-1 instead of #select1: $("#btn-hide").click(function(){ $("#form-group-1").hide(); }); $("#btn-show").click(function(){ $("#form-group-1").show(); }); Please take a look at my JSFiddle. A: You can also try <select class="form-control selectpicker" id="your_id"> <option>1</option> <option>2</option> <option>3</option> <option>4</option> </select> and to hide or show use $('#your_id').selectpicker('hide'); $('#your_id').selectpicker('show');
stackoverflow
{ "language": "en", "length": 177, "provenance": "stackexchange_0000F.jsonl.gz:855034", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44511501" }
7bd0c4a7e771d1f6b3c11a27c0b90707bb4651b3
Stackoverflow Stackexchange Q: Tess4J Mac: NoClassDefFoundError I'm trying to use Tess4J in my project. It doesn't include .dylib files for Mac, so I've built my own Tesseract and am using the .dylib from the Tesseract build. I'm able to load the native library with no issue, and I believe have the Tess4J library linked properly, since I can import it with no issue. However, when I try to create a new instance of Tesseract using: Tesseract t = new Tesseract(); I'm getting the following error: Exception in thread "main" java.lang.NoClassDefFoundError: com/sun/jna/Pointer at com.ddc.fmwscanner.main.FmwScanner.main(FmwScanner.java:21) Caused by: java.lang.ClassNotFoundException: com.sun.jna.Pointer at java.net.URLClassLoader.findClass(URLClassLoader.java:381) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) The only possible problem I can think of is that my Mac Tesseract install version is 3.0.5, whereas Tess4J's .dll files are named "libtesseract3051.dll", indicating that there might be version mismatch between the Tess4J .jar and the .dylib. Any guidance is appreciated! A: Okay, I figured this out. The Tess4J download includes a "lib" folder. I included this whole folder as a dependency in my project, and am no longer getting NoClassDefFound-related errors.
Q: Tess4J Mac: NoClassDefFoundError I'm trying to use Tess4J in my project. It doesn't include .dylib files for Mac, so I've built my own Tesseract and am using the .dylib from the Tesseract build. I'm able to load the native library with no issue, and I believe have the Tess4J library linked properly, since I can import it with no issue. However, when I try to create a new instance of Tesseract using: Tesseract t = new Tesseract(); I'm getting the following error: Exception in thread "main" java.lang.NoClassDefFoundError: com/sun/jna/Pointer at com.ddc.fmwscanner.main.FmwScanner.main(FmwScanner.java:21) Caused by: java.lang.ClassNotFoundException: com.sun.jna.Pointer at java.net.URLClassLoader.findClass(URLClassLoader.java:381) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) The only possible problem I can think of is that my Mac Tesseract install version is 3.0.5, whereas Tess4J's .dll files are named "libtesseract3051.dll", indicating that there might be version mismatch between the Tess4J .jar and the .dylib. Any guidance is appreciated! A: Okay, I figured this out. The Tess4J download includes a "lib" folder. I included this whole folder as a dependency in my project, and am no longer getting NoClassDefFound-related errors.
stackoverflow
{ "language": "en", "length": 176, "provenance": "stackexchange_0000F.jsonl.gz:855057", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44511562" }
4a8b20d8c77c44419978685a18e14479ac22c601
Stackoverflow Stackexchange Q: How can i set layoutmanager to RecycleView using kotlin How can i set layoutmanager to RecycleView using kotlin as java code below: mRecyclerView.setLayoutManager(mLinearLayoutManager); A: Following two lines sets orientation to vertical mRecyclerView.layoutManager = LinearLayoutManager(this, LinearLayoutManager.VERTICAL ,false) OR mRecyclerView.layoutManager = LinearLayoutManager(this) mRecyclerView.layoutManager = LinearLayoutManager(this, LinearLayoutManager.HORIZONTAL ,false) sets horizontal orientation To set grid layout, mRecyclerView.layoutManager = GridLayoutManager(this, spanCount)
Q: How can i set layoutmanager to RecycleView using kotlin How can i set layoutmanager to RecycleView using kotlin as java code below: mRecyclerView.setLayoutManager(mLinearLayoutManager); A: Following two lines sets orientation to vertical mRecyclerView.layoutManager = LinearLayoutManager(this, LinearLayoutManager.VERTICAL ,false) OR mRecyclerView.layoutManager = LinearLayoutManager(this) mRecyclerView.layoutManager = LinearLayoutManager(this, LinearLayoutManager.HORIZONTAL ,false) sets horizontal orientation To set grid layout, mRecyclerView.layoutManager = GridLayoutManager(this, spanCount) A: You can try using below solution val mRecyclerView= v.findViewById<RecyclerView>(R.id.rec) //id RecyclerView mRecyclerView.layoutManager = LinearLayoutManager(this, LinearLayoutManager.HORIZONTAL,false) A: You can use recyclerView.layoutManager = LinearLayoutManager(context) // default orientation is vertical // if you want horizontal recyclerview // recyclerView.layoutManager = LinearLayoutManager(this, RecyclerView.HORIZONTAL, false) A: You can do like this val linearLayoutManager = LinearLayoutManager(this) linearLayoutManager.orientation = LinearLayoutManager.VERTICAL recyclerview!!.layoutManager = linearLayoutManager recyclerview!!.isNestedScrollingEnabled = true recyclerview!!.setHasFixedSize(true) A: use RecyclerView.HORIZONTAL for AndroidX instead of LinearLayoutManager.HORIZONTAL var vegetableList: RecyclerView = findViewById(R.id.list_vegetable) vegetableList.layoutManager = LinearLayoutManager(this, RecyclerView.HORIZONTAL, false) A: Choose the layout: * *LinearLayoutManager(context). // vertical *LinearLayoutManager(context, LinearLayoutManager.HORIZONTAL, false) // horizontal *GridLayoutManager(context, numberOfColumns) // grid Then apply the layout using Kotlin's apply() which removes repetition. 
val rv = view.findViewById(R.id.recyclerView) as RecyclerView rv.apply { layoutManager = LinearLayoutManager(context) adapter = recyclerViewAdapter() setHasFixedSize(true) ... } It can also be set in XML like this: app:layoutManager="androidx.recyclerview.widget.LinearLayoutManager" For more info see: here and here. A: Simply write this to set LayoutManager // Define this globally lateinit var recyclerView: RecyclerView // Initialize this after `activity` or `fragment` is created recyclerView = findViewById(R.id.recyclerView) as RecyclerView recyclerView.setHasFixedSize(true) recyclerView.layoutManager = LinearLayoutManager(activity!!) as RecyclerView.LayoutManager A: I had same issue, reason was I had initialize recyclerView as var recyclerView = findViewById<View>(R.id.recycleView) Make sure you initialize as below var recyclerView = findViewById<View>(R.id.recycleView) as RecyclerView A: Apply plugin in your app build apply plugin: 'kotlin-android-extensions' For my case view id of RecyclerView is my_recycler_view. In your java file write - my_recycler_view.layoutManager = LinearLayoutManager(context) By default LinearLayoutManager(context) will set vertical orientation, update it as per need. A: You can set using this code: binding.recyclerView.setHasFixedSize(true) binding.recyclerView.layoutManager = LinearLayoutManager(this ,LinearLayoutManager.VERTICAL ,false) binding.recyclerView.adapter = customAdapter(this ,getList()) A: private var mRecyclerView: RecyclerView? = null mRecyclerView?.layoutManager = LinearLayoutManager(activity) A: If you are working with Kotlin android. Declare lateinit variable smoothScroller lateinit var smoothScroller: SmoothScroller Within OnCreate Mehtod Initilize the smoothScroller override fun onCreate(savedInstanceState: Bundle?) 
{ super.onCreate(savedInstanceState) smoothScroller = object : LinearSmoothScroller(context) { override fun getVerticalSnapPreference(): Int { return SNAP_TO_START } } } Finally, check if the adapter is initialized or not. Commit old dataset changes if there are any. Set the position. Start the scroller. if (this@ChooseTemplate::genericAdapter.isInitialized && this@ChooseTemplate::smoothScroller.isInitialized) { this@ChooseTemplate.genericAdapter!!.notifyDataSetChanged() smoothScroller.setTargetPosition(this@ChooseTemplate.templatesrowIndex); dataBinding!!.rvTemplate.layoutManager!!.startSmoothScroll(this@ChooseTemplate.smoothScroller); } dataBinding is a binding object. rvTemplate is a RecyclerView. genericAdapter is the adapter for rvTemplate. templatesrowIndex is the index to check which row item is currently selected. A: recyclerView.layoutManager = LinearLayoutManager(context) or recyclerView.layoutManager = GridLayoutManager(context, spanCount)
stackoverflow
{ "language": "en", "length": 438, "provenance": "stackexchange_0000F.jsonl.gz:855105", "question_score": "19", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44511692" }
51ddffc7427f035ad561a232162566c2d637673e
Stackoverflow Stackexchange Q: How to load checkpoint and inference with C++ for tensorflow? I have Tensorflow with the Python API and got these checkpoint model files: model.ckpt-17763.data-00000-of-00001 model.ckpt-17763.index model.ckpt-17763.meta But I want a C/C++ shared library (.so file) when integrating into production. So I need to load these model files, run inference with C++ code, and compile to a shared library. Is there some tutorial or sample for doing this? A: You can write C++ code to load and use your graph with the instructions given here. You can use the files here to make a CMake project with tensorflow outside the TF repository, and compile your library. However, you'll still need the .ckpt files next to your .so; I don't know how to integrate them inside it. There are a lot of questions about that on S.O., and a few tutorials (see the two cited in this answer) but since tensorflow is evolving fast, they quickly become outdated, and it's always a bit of struggle to get it to work properly (totally feasible, and getting easier, though).
Q: How to load checkpoint and inference with C++ for tensorflow? I have Tensorflow with the Python API and got these checkpoint model files: model.ckpt-17763.data-00000-of-00001 model.ckpt-17763.index model.ckpt-17763.meta But I want a C/C++ shared library (.so file) when integrating into production. So I need to load these model files, run inference with C++ code, and compile to a shared library. Is there some tutorial or sample for doing this? A: You can write C++ code to load and use your graph with the instructions given here. You can use the files here to make a CMake project with tensorflow outside the TF repository, and compile your library. However, you'll still need the .ckpt files next to your .so; I don't know how to integrate them inside it. There are a lot of questions about that on S.O., and a few tutorials (see the two cited in this answer) but since tensorflow is evolving fast, they quickly become outdated, and it's always a bit of struggle to get it to work properly (totally feasible, and getting easier, though).
stackoverflow
{ "language": "en", "length": 175, "provenance": "stackexchange_0000F.jsonl.gz:855122", "question_score": "7", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44511745" }
3393b75302779029eed063ac722d0d005de50b1c
Stackoverflow Stackexchange Q: Does Meteor support Mongo 3.4.2? In the Meteor change log (http://docs.meteor.com/changelog.html), the latest place that talks about the supported Mongo version is Meteor 1.4 (http://docs.meteor.com/changelog.html#v1420160725), which was announced a year ago, and the version of Mongo supported is 3.2. I tested Mongo 3.4.2 on my local machine, and it works. I want to know if the Meteor 1.4.2.3 I use can work well with the latest stable Mongo version 3.4.2 (https://jira.mongodb.org/browse/SERVER?selectedTab=com.atlassian.jira.jira-projects-plugin:versions-panel&subset=-1)?
Q: Does Meteor support Mongo 3.4.2? In the Meteor change log (http://docs.meteor.com/changelog.html), the latest place that talks about the supported Mongo version is Meteor 1.4 (http://docs.meteor.com/changelog.html#v1420160725), which was announced a year ago, and the version of Mongo supported is 3.2. I tested Mongo 3.4.2 on my local machine, and it works. I want to know if the Meteor 1.4.2.3 I use can work well with the latest stable Mongo version 3.4.2 (https://jira.mongodb.org/browse/SERVER?selectedTab=com.atlassian.jira.jira-projects-plugin:versions-panel&subset=-1)?
stackoverflow
{ "language": "en", "length": 69, "provenance": "stackexchange_0000F.jsonl.gz:855124", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44511751" }
645b15e21ecbdbf6aac41ba4c7056d8fafcfcc10
Stackoverflow Stackexchange Q: How to catch I/O exception (exactly I/O, not std::exception) I tried the example program from here (with mingw-w64). The program crashed. So I edited it: #include <iostream> // std::cerr #include <fstream> // std::ifstream int main() { std::ifstream file; file.exceptions(std::ifstream::failbit | std::ifstream::badbit); try { file.open("not_existing.txt"); while (!file.eof()) file.get(); file.close(); } catch (std::ifstream::failure e) { std::cerr << "Exception opening/reading/closing file\n"; } catch (const std::exception& e) { std::cerr << "should not reach this"; } return 0; } Now it runs, but prints should not reach this, while I was expecting it to print Exception opening/reading/closing file. Why is my expectation wrong? EDIT: since this seems to be an important point, here's the exact version of my compiler: mingw-w64 version "x86_64-6.2.0-posix-sjlj-rt_v5-rev1", i.e. GCC version 6.2 A: This may be a MinGW bug. I get the expected result using MacOS Clang 802.0.42. The expected output being: Exception opening/reading/closing file This might be a known regression: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=66145
Q: How to catch I/O exception (exactly I/O, not std::exception) I tried the example program from here (with mingw-w64). The program crashed. So I edited it: #include <iostream> // std::cerr #include <fstream> // std::ifstream int main() { std::ifstream file; file.exceptions(std::ifstream::failbit | std::ifstream::badbit); try { file.open("not_existing.txt"); while (!file.eof()) file.get(); file.close(); } catch (std::ifstream::failure e) { std::cerr << "Exception opening/reading/closing file\n"; } catch (const std::exception& e) { std::cerr << "should not reach this"; } return 0; } Now it runs, but prints should not reach this, while I was expecting it to print Exception opening/reading/closing file. Why is my expectation wrong? EDIT: since this seems to be an important point, here's the exact version of my compiler: mingw-w64 version "x86_64-6.2.0-posix-sjlj-rt_v5-rev1", i.e. GCC version 6.2 A: This may be a MinGW bug. I get the expected result using MacOS Clang 802.0.42. The expected output being: Exception opening/reading/closing file This might be a known regression: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=66145
stackoverflow
{ "language": "en", "length": 153, "provenance": "stackexchange_0000F.jsonl.gz:855127", "question_score": "7", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44511755" }
f9abaf40dd76195877fa605077a2246e6515a3ca
Stackoverflow Stackexchange Q: Binary Operator Cannot be Applied to two SCNDebugOptions I am just trying to set two flags for the debug options. Why is this a problem in Swift 4? A: Instead of doing "|", use a set: sceneView.debugOptions = [ARSCNDebugOptions.showFeaturePoints,ARSCNDebugOptions.showWorldOrigin]
Q: Binary Operator Cannot be Applied to two SCNDebugOptions I am just trying to set two flags for the debug options. Why is this a problem in Swift 4? A: Instead of doing "|", use a set: sceneView.debugOptions = [ARSCNDebugOptions.showFeaturePoints,ARSCNDebugOptions.showWorldOrigin] A: SCNDebugOptions conforms to the protocol OptionSet, which conforms to the SetAlgebra protocol, and SetAlgebra conforms to the ExpressibleByArrayLiteral protocol. public struct SCNDebugOptions : OptionSet {...} protocol OptionSet : RawRepresentable, SetAlgebra {...} public protocol SetAlgebra : Equatable, ExpressibleByArrayLiteral {...} That's why you can't use the pipe (|) sign for multiple arguments. Instead, use an array. sceneView.debugOptions = [.showFeaturePoints, .showWorldOrigin]
stackoverflow
{ "language": "en", "length": 94, "provenance": "stackexchange_0000F.jsonl.gz:855149", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44511823" }
3b663fb9f902843656dcc0d9d9067e3a53891a95
Stackoverflow Stackexchange Q: Get subscriber id from in-app purchase I have created a small app that has in-app subscription products. I want to fetch the subscriber ID via API after the transaction has occurred. I searched extensively and found that this ID is available only in a report "Subscriber Report" Subscriber ID | BigInt | The randomly generated Subscriber ID that is unique to each customer and developer Is there a receipt response where this ID might be available? Can I map the transaction id received in the receipt response to a subscriber? Thanks! A: No, the subscriber ID in Apple's "Subscriber Report" is internal, so the information is actually anonymous. Looking on the bright side, at WWDC 2017 Apple announced that they will provide the subscription details for users as part of the receipt, so you will be able to get all the information in the subscriber report for a given user. See the part on 'Voluntary Churn' in https://medium.com/joytunes/wwdc-2017-amazing-new-features-for-subscriptions-676662a7d993
Q: Get subscriber id from in-app purchase I have created a small app that has in-app subscription products. I want to fetch the subscriber ID via API after the transaction has occurred. I searched extensively and found that this ID is available only in a report "Subscriber Report" Subscriber ID | BigInt | The randomly generated Subscriber ID that is unique to each customer and developer Is there a receipt response where this ID might be available? Can I map the transaction id received in the receipt response to a subscriber? Thanks! A: No, the subscriber ID in Apple's "Subscriber Report" is internal, so the information is actually anonymous. Looking on the bright side, at WWDC 2017 Apple announced that they will provide the subscription details for users as part of the receipt, so you will be able to get all the information in the subscriber report for a given user. See the part on 'Voluntary Churn' in https://medium.com/joytunes/wwdc-2017-amazing-new-features-for-subscriptions-676662a7d993
stackoverflow
{ "language": "en", "length": 158, "provenance": "stackexchange_0000F.jsonl.gz:855163", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44511882" }
1a5d1b74dd377ce53089118ef0fe9bffb38e025f
Stackoverflow Stackexchange Q: How to map through an array of input files? I have two functions: one that turns files into data URLs and another that returns a promise with the result: fileToDataURL(file) { var reader = new FileReader() return new Promise(function (resolve, reject) { reader.onload = function (event) { resolve(event.target.result) } reader.readAsDataURL(file) }) } getDataURLs (target) { // target => <input type="file" id="file"> return Promise.all(target.files.map(fileToDataURL)) } target.files.map returns: TypeError: target.files.map is not a function. How to modify getDataURLs so it returns an array with the data URLs? A: While Patrick Roberts' answer is true, you may face an issue in TypeScript: Type 'FileList' is not an array type or a string type. Use compiler option '--downlevelIteration' to allow iterating of iterators. ts(2569) You can find a complete answer about --downlevelIteration in the post Why downlevelIteration is not on by default?. In our case, to iterate on a FileList, instead of the spread operator, use Array.from(): function getDataURLs(target) { // target => <input type="file" id="file"> return Promise.all(Array.from(target.files).map(fileToDataURL)) }
Q: How to map through an array of input files? I have two functions: one that turns files into data URLs and another that returns a promise with the result: fileToDataURL(file) { var reader = new FileReader() return new Promise(function (resolve, reject) { reader.onload = function (event) { resolve(event.target.result) } reader.readAsDataURL(file) }) } getDataURLs (target) { // target => <input type="file" id="file"> return Promise.all(target.files.map(fileToDataURL)) } target.files.map returns: TypeError: target.files.map is not a function. How to modify getDataURLs so it returns an array with the data URLs? A: While Patrick Roberts' answer is true, you may face an issue in TypeScript: Type 'FileList' is not an array type or a string type. Use compiler option '--downlevelIteration' to allow iterating of iterators. ts(2569) You can find a complete answer about --downlevelIteration in the post Why downlevelIteration is not on by default?. In our case, to iterate on a FileList, instead of the spread operator, use Array.from(): function getDataURLs(target) { // target => <input type="file" id="file"> return Promise.all(Array.from(target.files).map(fileToDataURL)) } A: function getDataURLs(target) { // target => <input type="file" id="file"> return Promise.all([...target.files].map(fileToDataURL)) } FileList is not an Array, and does not inherit from Array, but it does implement the iterable protocol, so you can use the spread syntax to get it as an array. In case you're wondering how to check if a class like FileList supports the spread syntax, you can do this: console.log(FileList.prototype[Symbol.iterator]); If that returns a function (which it does), then that returned function is a generator function that is invoked on an instance of the class by the spread syntax.
stackoverflow
{ "language": "en", "length": 258, "provenance": "stackexchange_0000F.jsonl.gz:855176", "question_score": "13", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44511925" }
b5638b4854def1b70ca41945c091c6ba3147bda4
Stackoverflow Stackexchange Q: python Postgresql CREATE DATABASE IF NOT EXISTS is error I tried to learn about postgresql using python. I want to use the condition CREATE DATABASE IF NOT EXISTS, but I always get an error. The error is: File "learn_postgres.py", line 27, in InitDatabase cursor.execute("CREATE DATABASE IF NOT EXISTS python_db") psycopg2.ProgrammingError: syntax error at or near "NOT" LINE 1: CREATE DATABASE IF NOT EXISTS python_db A: You could query pg_catalog.pg_database to check if the database exists, like this: SELECT datname FROM pg_catalog.pg_database WHERE datname = 'python_db' Then from here you can add the logic to create your db.
Q: python Postgresql CREATE DATABASE IF NOT EXISTS is error I tried to learn about postgresql using python. I want to use the condition CREATE DATABASE IF NOT EXISTS, but I always get an error. The error is: File "learn_postgres.py", line 27, in InitDatabase cursor.execute("CREATE DATABASE IF NOT EXISTS python_db") psycopg2.ProgrammingError: syntax error at or near "NOT" LINE 1: CREATE DATABASE IF NOT EXISTS python_db A: You could query pg_catalog.pg_database to check if the database exists, like this: SELECT datname FROM pg_catalog.pg_database WHERE datname = 'python_db' Then from here you can add the logic to create your db. A: Postgres does not support the condition IF NOT EXISTS in the CREATE DATABASE clause; however, IF EXISTS is supported on DROP DATABASE. There are two options: * *drop & recreate cursor.execute('DROP DATABASE IF EXISTS python_db') cursor.execute('CREATE DATABASE python_db') # rest of the script *check the catalog first & branch the logic in Python cursor.execute("SELECT 1 FROM pg_catalog.pg_database WHERE datname = 'python_db'") exists = cursor.fetchone() if not exists: cursor.execute('CREATE DATABASE python_db') # rest of the script A: from psycopg2 import sql from psycopg2.errors import DuplicateDatabase ... conn.autocommit = True cursor = conn.cursor() try: cursor.execute(sql.SQL('CREATE DATABASE {}').format(sql.Identifier(DB_NAME))) except DuplicateDatabase: pass
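The check-then-create logic from the answers above can be sketched as runnable Python. The `FakeCursor` class below is a hypothetical stand-in for a psycopg2 cursor so the branch logic can be exercised without a server; against a real connection you would also need `conn.autocommit = True` (PostgreSQL's CREATE DATABASE cannot run inside a transaction block) and `psycopg2.sql.Identifier` for safe quoting, as the last answer shows.

```python
import re

def ensure_database(cursor, name):
    """Create database `name` if it does not exist; return True if it was created."""
    if not re.fullmatch(r"\w+", name):  # crude guard; use psycopg2.sql.Identifier in real code
        raise ValueError("unsafe database name")
    cursor.execute("SELECT 1 FROM pg_catalog.pg_database WHERE datname = %s", (name,))
    if cursor.fetchone():
        return False
    cursor.execute("CREATE DATABASE %s" % name)  # needs autocommit on a real connection
    return True

class FakeCursor:
    """Hypothetical stand-in for a psycopg2 cursor (no real server involved)."""
    def __init__(self):
        self.existing = {"postgres"}  # pretend only the default database exists
        self._row = None
    def execute(self, query, params=()):
        if query.startswith("SELECT"):
            self._row = (1,) if params[0] in self.existing else None
        elif query.startswith("CREATE DATABASE"):
            self.existing.add(query.split()[-1])
    def fetchone(self):
        return self._row

cur = FakeCursor()
print(ensure_database(cur, "python_db"))  # True: the database was missing, so it is "created"
print(ensure_database(cur, "python_db"))  # False: the second call finds it in the catalog
```

Against a live server only the cursor object differs; the two SQL statements stay the same.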
stackoverflow
{ "language": "en", "length": 199, "provenance": "stackexchange_0000F.jsonl.gz:855192", "question_score": "10", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44511958" }
f067d199f3b09beb923c5f6e229fb97e1d559370
Stackoverflow Stackexchange Q: Angular 4 form array incorrect value in input on removeAt() I've created a Plunker to demonstrate the problem https://embed.plnkr.co/pgu7szf9ySwZSitOA5dq/ If you remove #2, you see the #5 show up twice in the last two boxes. I cannot figure out why this is happening. A: You should nest your FormArray in a FormGroup like this: export class AppComponent implements OnInit { public formG: FormGroup; public formArray: FormArray; constructor(private fb: FormBuilder) { } ngOnInit() { this.createForm(); } createForm() { this.formArray = this.fb.array([ this.fb.control(1), this.fb.control(2), this.fb.control(3), this.fb.control(4), this.fb.control(5), ]); this.formG = this.fb.group({ farray: this.formArray }); console.log(this.formArray); } addQuestion() { this.formArray.push(this.fb.control('')); } removeQuestion(i) { this.formArray.removeAt(i); } } And the template: <div class="container" [formGroup]="formG"> <div formArrayName="farray"> <div class="row form-inline" *ngFor="let question of formArray.controls; let i = index"> <textarea class="form-control" [formControlName]="i"></textarea> <button (click)="removeQuestion(i)" class="btn btn-secondary">Remove</button> </div> </div> </div> <button (click)="addQuestion()" class="btn btn-secondary">Add</button> Form in action: https://embed.plnkr.co/hJ0NMmzGezjMzWfYufaV/
Q: Angular 4 form array incorrect value in input on removeAt() I've created a Plunker to demonstrate the problem https://embed.plnkr.co/pgu7szf9ySwZSitOA5dq/ If you remove #2, you see the #5 show up twice in the last two boxes. I cannot figure out why this is happening. A: You should nest your FormArray in a FormGroup like this: export class AppComponent implements OnInit { public formG: FormGroup; public formArray: FormArray; constructor(private fb: FormBuilder) { } ngOnInit() { this.createForm(); } createForm() { this.formArray = this.fb.array([ this.fb.control(1), this.fb.control(2), this.fb.control(3), this.fb.control(4), this.fb.control(5), ]); this.formG = this.fb.group({ farray: this.formArray }); console.log(this.formArray); } addQuestion() { this.formArray.push(this.fb.control('')); } removeQuestion(i) { this.formArray.removeAt(i); } } And the template: <div class="container" [formGroup]="formG"> <div formArrayName="farray"> <div class="row form-inline" *ngFor="let question of formArray.controls; let i = index"> <textarea class="form-control" [formControlName]="i"></textarea> <button (click)="removeQuestion(i)" class="btn btn-secondary">Remove</button> </div> </div> </div> <button (click)="addQuestion()" class="btn btn-secondary">Add</button> Form in action: https://embed.plnkr.co/hJ0NMmzGezjMzWfYufaV/
stackoverflow
{ "language": "en", "length": 140, "provenance": "stackexchange_0000F.jsonl.gz:855207", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44512014" }
573f8ca25d0b90143bf77ab5f39fb931ca6a5b13
Stackoverflow Stackexchange Q: Update existing virtualenv to use Python 3.6 I have an existing virtualenv called 'edge'. It uses Python 3.5.2. I have upgraded my Python interpreter to 3.6 and I want the 'edge' env to use 3.6 instead. What command should I use to update edge's interpreter? I searched on SO but all the answers I can find are for creating a new env. In my case, I don't want to create a new env. A: All binary packages installed for python3.5 (for example numpy or simplejson) are not compatible with python3.6 (they are not ABI compatible). As such, you can't upgrade / downgrade a virtualenv to a different version of python. Your best bet would be to create a new virtualenv based on the packages installed in the original virtualenv. You can get close by doing the following: edge/bin/pip freeze > reqs.txt virtualenv edge2 -p python3.6 edge2/bin/pip install -r reqs.txt Note that virtualenvs generally aren't movable, so if you want it to exist at edge you'll probably want the following procedure instead: edge/bin/pip freeze > reqs.txt mv edge edge_old virtualenv edge -p python3.6 edge/bin/pip install -r reqs.txt # optionally: rm -rf edge_old
Q: Update existing virtualenv to use Python 3.6 I have an existing virtualenv called 'edge'. It uses Python 3.5.2. I have upgraded my Python interpreter to 3.6 and I want the 'edge' env to use 3.6 instead. What command should I use to update edge's interpreter? I searched on SO but all the answers I can find are for creating a new env. In my case, I don't want to create a new env. A: All binary packages installed for python3.5 (for example numpy or simplejson) are not compatible with python3.6 (they are not ABI compatible). As such, you can't upgrade / downgrade a virtualenv to a different version of python. Your best bet would be to create a new virtualenv based on the packages installed in the original virtualenv. You can get close by doing the following: edge/bin/pip freeze > reqs.txt virtualenv edge2 -p python3.6 edge2/bin/pip install -r reqs.txt Note that virtualenvs generally aren't movable, so if you want it to exist at edge you'll probably want the following procedure instead: edge/bin/pip freeze > reqs.txt mv edge edge_old virtualenv edge -p python3.6 edge/bin/pip install -r reqs.txt # optionally: rm -rf edge_old
stackoverflow
{ "language": "en", "length": 192, "provenance": "stackexchange_0000F.jsonl.gz:855209", "question_score": "9", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44512025" }
7c9cbadf210f6f381f12d72516cfc3ce062d37a0
Stackoverflow Stackexchange Q: Large file upload - Request gets cancelled I am trying to upload video files of size 40-50mb. The progress bar freezes at a certain point, and if I observe the Network tab in Google Chrome, the request gets cancelled; there is no error, and the HTTP response header is empty. However, this works for both image/video files which are around 10-15mb. My code: Dropzone.autoDiscover = false; var myDropzone = new Dropzone("#my-awesome-dropzone", { maxFiles: 1, parallelUploads: 100, acceptedFiles: '.3gp,.3gp2,.h261,.h263,.h264,.jpgv,.jpm,.jpgm,.mp4,.mp4v,.mpg4,.mpeg,.mpg,.mpe,.m1v,.m2v,.ogv,.qt,.mov,.fli,.flv,.mks,.mkv,.wmv,.avi,.movie,.smv,.g3,.jpeg,.jpg,.jpe,.png,.btif,.sgi,.svg,.tiff,.tif', previewTemplate: previewTemplate, previewsContainer: "#previews", autoProcessQueue: false, clickable: ".fileinput-button", }); P.S: It is not a server side issue, as I have tried uploading without Dropzone and everything works smoothly. A: Did you use dropzone.js version >= 4.4.0 and an ajax request? If so, you must set the timeout (in ms) in your configuration. It specifies the timeout value for the xhr (ajax) request, and the default value is only 30s. Source: http://www.dropzonejs.com/#configuration
Q: Large file upload - Request gets cancelled I am trying to upload video files of size 40-50mb. The progress bar freezes at a certain point, and if I observe the Network tab in Google Chrome, the request gets cancelled; there is no error, and the HTTP response header is empty. However, this works for both image/video files which are around 10-15mb. My code: Dropzone.autoDiscover = false; var myDropzone = new Dropzone("#my-awesome-dropzone", { maxFiles: 1, parallelUploads: 100, acceptedFiles: '.3gp,.3gp2,.h261,.h263,.h264,.jpgv,.jpm,.jpgm,.mp4,.mp4v,.mpg4,.mpeg,.mpg,.mpe,.m1v,.m2v,.ogv,.qt,.mov,.fli,.flv,.mks,.mkv,.wmv,.avi,.movie,.smv,.g3,.jpeg,.jpg,.jpe,.png,.btif,.sgi,.svg,.tiff,.tif', previewTemplate: previewTemplate, previewsContainer: "#previews", autoProcessQueue: false, clickable: ".fileinput-button", }); P.S: It is not a server side issue, as I have tried uploading without Dropzone and everything works smoothly. A: Did you use dropzone.js version >= 4.4.0 and an ajax request? If so, you must set the timeout (in ms) in your configuration. It specifies the timeout value for the xhr (ajax) request, and the default value is only 30s. Source: http://www.dropzonejs.com/#configuration A: It has a timeout; whenever it's exceeded, the request gets cancelled. Just put timeout: 180000, in the options. It would be: Dropzone.autoDiscover = false; var myDropzone = new Dropzone("#my-awesome-dropzone", { maxFiles: 1, timeout: 180000, parallelUploads: 100, acceptedFiles: '.3gp,.3gp2,.h261,.h263,.h264,.jpgv,.jpm,.jpgm,.mp4,.mp4v,.mpg4,.mpeg,.mpg,.mpe,.m1v,.m2v,.ogv,.qt,.mov,.fli,.flv,.mks,.mkv,.wmv,.avi,.movie,.smv,.g3,.jpeg,.jpg,.jpe,.png,.btif,.sgi,.svg,.tiff,.tif', previewTemplate: previewTemplate, previewsContainer: "#previews", autoProcessQueue: false, clickable: ".fileinput-button", }); A: The first step is to check with the server, as sometimes nginx or other server tools will look at the header for the file size and reject files exceeding a certain size. 
If the server is working fine but the upload still fails due to a network bandwidth issue, the server will not return an error, so the timeout needs to be handled on the client side. Here timeout comes into action. Dropzone.autoDiscover = false; var myDropzone = new Dropzone("#my-dropzone", { maxFiles: 1, timeout: 9000, /*milliseconds*/ autoProcessQueue: false }); myDropzone.on("sending", function(file, xhr, formData) { /*Called just before each file is sent*/ xhr.ontimeout = (() => { /*Executes in case of timeout only*/ console.log('Server Timeout') }); } A: Enable chunking and set parallelChunkUploads to false, with this config: maxFilesize: 1000,//1000MB parallelUploads: 1, chunking: true, // enable chunking forceChunking: false, // forces chunking when file.size < chunkSize parallelChunkUploads: false, chunkSize: 2000000 //// chunk size 2,000,000 bytes (~2MB)
stackoverflow
{ "language": "en", "length": 348, "provenance": "stackexchange_0000F.jsonl.gz:855228", "question_score": "24", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44512084" }
d39e4e3c51e5f3484ef7da1843a5b4db6df8d04d
Stackoverflow Stackexchange Q: Wait for asynchronous block before continuing I have a function, let's call it "a", that runs some code and then returns a string "x". "x" is updated in an asynchronous code block and then returned. How would I go about making the program wait to return x until after the asynchronous code runs? func a() -> String { //code //code var x: String async block { x = "test" } return x } A: You can use a completion closure for this func a(completion: @escaping (_ value:String)->()) { var x: String = "" async block { x = "test" completion(x) //when x has a new value } } //Call like this (value will be available when the completion block is returned) a { (value) in print(value) }
Q: Wait for asynchronous block before continuing I have a function, let's call it "a", that runs some code and then returns a string "x". "x" is updated in an asynchronous code block and then returned. How would I go about making the program wait to return x until after the asynchronous code runs? func a() -> String { //code //code var x: String async block { x = "test" } return x } A: You can use a completion closure for this func a(completion: @escaping (_ value:String)->()) { var x: String = "" async block { x = "test" completion(x) //when x has a new value } } //Call like this (value will be available when the completion block is returned) a { (value) in print(value) } A: Like everyone pointed out, you can use a completion handler (closure) to perform the operation. But you can also wait for the asynchronous call to be completed using DispatchSemaphore. The semaphore obtains a lock when it makes the wait call, and it's released when it's signaled from the asynchronous block. func a()->String{ var x = "" let semaphore = DispatchSemaphore(value: 0) DispatchQueue.main.async { x = "test" semaphore.signal() } semaphore.wait() return x }
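The DispatchSemaphore pattern in the last answer is language-agnostic. As a rough Python analogy (threading stands in for GCD here, purely for illustration), the caller blocks on a semaphore until the background block signals it:

```python
import threading

def a() -> str:
    result = {}
    done = threading.Semaphore(0)   # counterpart of DispatchSemaphore(value: 0)

    def block():                    # the "async block"
        result["x"] = "test"
        done.release()              # counterpart of semaphore.signal()

    threading.Thread(target=block).start()
    done.acquire()                  # counterpart of semaphore.wait(): blocks until signaled
    return result["x"]

print(a())  # prints: test
```

As with the Swift version, blocking like this stalls the calling thread, so the completion-closure approach from the first answer is usually preferable when the caller can stay asynchronous.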
stackoverflow
{ "language": "en", "length": 196, "provenance": "stackexchange_0000F.jsonl.gz:855229", "question_score": "6", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44512090" }
0ac1110357ce001559048d45b0e522274fb6f764
Stackoverflow Stackexchange Q: How to calculate gradients in ResNet architecture? I assume that somehow the gradient at each layer will be 0.1. The gradient of a plain/stacked network at a layer can be computed by accumulating the gradient as shown. In the ResNet, the gradient is also propagated by the skip connection. So, how can I obtain the gradient of x as in the above figure? Is it 0.1x0.1+0.1 or 0.1? A: I have added the gradient calculation in the above diagram. The gradient delta_x is the sum of the incoming gradient delta_y and the product of the gradients delta_y and delta_F. So in your example, it should be 0.1x0.1x0.1+0.1. But note that in the actual calculation of delta_F, delta_y gets multiplied by weight_1 and gets passed/blocked depending on whether ReLU is active and then gets multiplied by weight_2.
Q: How to calculate gradients in ResNet architecture? I assume that somehow the gradient at each layer will be 0.1. The gradient of a plain/stacked network at a layer can be computed by accumulating the gradient as shown. In the ResNet, the gradient is also propagated by the skip connection. So, how can I obtain the gradient of x as in the above figure? Is it 0.1x0.1+0.1 or 0.1? A: I have added the gradient calculation in the above diagram. The gradient delta_x is the sum of the incoming gradient delta_y and the product of the gradients delta_y and delta_F. So in your example, it should be 0.1x0.1x0.1+0.1. But note that in the actual calculation of delta_F, delta_y gets multiplied by weight_1 and gets passed/blocked depending on whether ReLU is active and then gets multiplied by weight_2.
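The sum rule in the answer — for a residual block y = x + F(x), dy/dx = 1 + F'(x), so the upstream gradient flows once through the identity skip path and once scaled by the branch derivative — can be checked numerically. A scalar Python sketch (the quadratic F is an arbitrary stand-in for the block's weight layers):

```python
def F(x):
    return 0.5 * x * x      # toy residual branch; F'(x) = x

def block(x):
    return x + F(x)         # y = x + F(x): identity skip plus residual branch

x0, h = 2.0, 1e-6
numeric = (block(x0 + h) - block(x0 - h)) / (2 * h)  # central finite difference
analytic = 1.0 + x0         # 1 from the skip path + F'(x0) from the branch
print(round(numeric, 6), analytic)  # prints: 3.0 3.0

delta_y = 0.1                    # upstream gradient arriving at the block output
delta_x = delta_y * (1.0 + x0)   # = delta_y + delta_y * F'(x0): sum of both paths
print(delta_x)                   # ~0.3 (floating point)
```

With every local factor equal to 0.1 as in the question, the skip path still delivers the upstream 0.1 intact, which is why delta_x is a sum (product term + 0.1) rather than a shrinking product — the usual explanation of why ResNets resist vanishing gradients.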
stackoverflow
{ "language": "en", "length": 131, "provenance": "stackexchange_0000F.jsonl.gz:855237", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44512126" }
d4d320b161edc36a84d9775da6b38eb0711a26bb
Stackoverflow Stackexchange Q: Is it possible to change the universal app to iPhone only once it is uploaded to the App Store? I am trying to upload a new version of my app which is only compatible with iPhone. Earlier it was uploaded as a universal app. Is it possible to disable the iPad compatibility during the next upload to the App Store? Does anyone have an idea about this? A: Hopefully this will help. iOS App change universal to iPhone only Developers who wish to issue updates, but remove device support, have three choices: Fix their app so that it can work on the devices they originally set out to support. Target a newer version of iOS that requires a newer device. Remove their app from the store, and upload the new app with a different bundle ID. Switch universal app to iPhone only app * *Remove this app from the App Store *Create a new bundle & use it for your new app version *Deploy the app to the store For more ideas visit the link above.
Q: Is it possible to change the universal app to iPhone only once it is uploaded to the App Store? I am trying to upload a new version of my app which is only compatible with iPhone. Earlier it was uploaded as a universal app. Is it possible to disable the iPad compatibility during the next upload to the App Store? Does anyone have an idea about this? A: UIRequiredDeviceCapabilities would help you. Put the telephony value like this in Info.plist (iPhone specific...) Of course you need to submit your app again. UIRequiredDeviceCapabilities (Array or Dictionary - iOS) lets iTunes and the App Store know which device-related features an app requires in order to run. iTunes and the mobile App Store use this list to prevent customers from installing apps on a device that does not support the listed capabilities.
stackoverflow
{ "language": "en", "length": 242, "provenance": "stackexchange_0000F.jsonl.gz:855243", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44512139" }
bd4c79b420208f1ae1df2c74cea32ab2df9f311b
Stackoverflow Stackexchange Q: undefined reference to `cv::Stitcher::createDefault(bool)' I am trying to stitch images using OpenCV in C++, and when the program is compiled, it's throwing errors for Stitcher stitcher = Stitcher::createDefault(); undefined reference to `cv::Stitcher::createDefault(bool)' and for Stitcher::Status status = stitcher.stitch(vImg, rImg); undefined reference to `cv::Stitcher::stitch(cv::_InputArray const&, cv::_OutputArray const&)' Please help me in fixing this error. Thanks in advance. A: This error indicates the compiler has a declaration for these functions but the linker cannot find a definition. Try checking your linker flags.
Q: undefined reference to `cv::Stitcher::createDefault(bool)' I am trying to stitch images using OpenCV in C++, and when the program is compiled, it's throwing errors for Stitcher stitcher = Stitcher::createDefault(); undefined reference to `cv::Stitcher::createDefault(bool)' and for Stitcher::Status status = stitcher.stitch(vImg, rImg); undefined reference to `cv::Stitcher::stitch(cv::_InputArray const&, cv::_OutputArray const&)' Please help me in fixing this error. Thanks in advance. A: This error indicates the compiler has a declaration for these functions but the linker cannot find a definition. Try checking your linker flags.
stackoverflow
{ "language": "en", "length": 77, "provenance": "stackexchange_0000F.jsonl.gz:855351", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44512462" }
49b25f1368c72f0b9036006d0d22327693941b5c
Stackoverflow Stackexchange Q: How to make Android Java HMAC SHA256 as in PHP? I have this code in PHP: $str=base64_encode('1234'); $key='1234'; print(base64_encode(hash_hmac('sha256', $str, $key,true))); And what is the code for Android Java (Android Studio)? This code gives a different result than in PHP: import android.support.v7.app.AppCompatActivity; import android.os.Bundle; import android.util.Base64; import android.util.Log; import javax.crypto.Mac; import javax.crypto.spec.SecretKeySpec; private String hash_hmac(String str, String secret) throws Exception{ Mac sha256_HMAC = Mac.getInstance("HmacSHA256"); byte[] string = str.getBytes(); String stringInBase64 = Base64.encodeToString(string, Base64.DEFAULT); SecretKeySpec secretKey = new SecretKeySpec(secret.getBytes(), "HmacSHA256"); sha256_HMAC.init(secretKey); String hash = Base64.encodeToString(sha256_HMAC.doFinal(stringInBase64.getBytes()), Base64.DEFAULT); return hash; } String str = "1234"; String key = "1234"; try { Log.d("HMAC:", hash_hmac(str,key)); } catch (Exception e) { Log.d("HMAC:","stop"); e.printStackTrace(); } But in native Java it works fine. I cannot resolve this ;( Maybe there are limits on the Android platform or device? A: You are converting your input string to base64; that's why it's not matching. Here is the correct code - private String hash_hmac(String str, String secret) throws Exception{ Mac sha256_HMAC = Mac.getInstance("HmacSHA256"); SecretKeySpec secretKey = new SecretKeySpec(secret.getBytes(), "HmacSHA256"); sha256_HMAC.init(secretKey); String hash = Base64.encodeToString(sha256_HMAC.doFinal(str.getBytes()), Base64.DEFAULT); return hash; }
Q: How to make Android Java HMAC SHA256 as in PHP? I have this code in PHP: $str=base64_encode('1234'); $key='1234'; print(base64_encode(hash_hmac('sha256', $str, $key,true))); And what is the code for Android Java (Android Studio)? This code gives a different result than in PHP: import android.support.v7.app.AppCompatActivity; import android.os.Bundle; import android.util.Base64; import android.util.Log; import javax.crypto.Mac; import javax.crypto.spec.SecretKeySpec; private String hash_hmac(String str, String secret) throws Exception{ Mac sha256_HMAC = Mac.getInstance("HmacSHA256"); byte[] string = str.getBytes(); String stringInBase64 = Base64.encodeToString(string, Base64.DEFAULT); SecretKeySpec secretKey = new SecretKeySpec(secret.getBytes(), "HmacSHA256"); sha256_HMAC.init(secretKey); String hash = Base64.encodeToString(sha256_HMAC.doFinal(stringInBase64.getBytes()), Base64.DEFAULT); return hash; } String str = "1234"; String key = "1234"; try { Log.d("HMAC:", hash_hmac(str,key)); } catch (Exception e) { Log.d("HMAC:","stop"); e.printStackTrace(); } But in native Java it works fine. I cannot resolve this ;( Maybe there are limits on the Android platform or device? A: You are converting your input string to base64; that's why it's not matching. Here is the correct code - private String hash_hmac(String str, String secret) throws Exception{ Mac sha256_HMAC = Mac.getInstance("HmacSHA256"); SecretKeySpec secretKey = new SecretKeySpec(secret.getBytes(), "HmacSHA256"); sha256_HMAC.init(secretKey); String hash = Base64.encodeToString(sha256_HMAC.doFinal(str.getBytes()), Base64.DEFAULT); return hash; }
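For comparison outside Java/PHP, the same computation can be reproduced with the Python standard library. The sketch below shows how the digest changes both when the base64 step is dropped (the accepted answer's change) and when a trailing newline sneaks into the base64 string — something Android's Base64.DEFAULT flag can introduce, and a common source of Android-vs-PHP mismatches:

```python
import base64
import hashlib
import hmac

def hmac_sha256_b64(message: bytes, key: bytes) -> str:
    """base64(HMAC-SHA256(message, key)), like PHP's base64_encode(hash_hmac('sha256', ..., true))."""
    return base64.b64encode(hmac.new(key, message, hashlib.sha256).digest()).decode("ascii")

key = b"1234"
b64_input = base64.b64encode(b"1234")                    # b"MTIzNA==" - PHP hashes this string
php_style = hmac_sha256_b64(b64_input, key)              # same bytes as the PHP snippet hashes
raw_style = hmac_sha256_b64(b"1234", key)                # the accepted answer's version
newline_style = hmac_sha256_b64(b64_input + b"\n", key)  # base64 string with a trailing newline

print(php_style)
print(raw_style)
print(newline_style)
# All three digests differ: the HMAC input must match byte-for-byte on both sides.
```

The takeaway: pick one canonical input on both ends — either drop the base64 step everywhere, or use a non-wrapping base64 encoding (Android offers a NO_WRAP flag) so no line terminator is appended.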
stackoverflow
{ "language": "en", "length": 172, "provenance": "stackexchange_0000F.jsonl.gz:855360", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44512493" }
8c74c6c13adfb2800563fe10e02d9652795b6d92
Stackoverflow Stackexchange Q: How to make hover effect stay even after unhover? I have created a simple JSFiddle for the problem. Here is the link: https://jsfiddle.net/tnkh/Loewjnr3/ CSS: .container{ background: white; display:flex; justify-content: center; align-items: center; height:50px } .circle { display: inline-block; width: 20px; height: 20px; background: #0f3757; -moz-border-radius: 50px; -webkit-border-radius: 50px; border-radius: 50px; margin-left:10px; float:left; transition: all 0.3s ease } .circle:hover { background:orange; } Basically over here, I can hover on any circle to change its color. I would like to ask how I could make the orange color stay on any particular circle that I hovered on after the mouse has moved away to the white container? Any script or CSS animation I could use to solve the problem? A: Just add a mouseover event to the .circle element and write an active CSS class which has a background-color property; when the event occurs, remove the active class from any other .circle and add it to the current element JS $(".container span.circle").on('mouseover',function(){ $(".circle").removeClass('active');//remove from other elements $(this).addClass('active'); }); CSS .active { background:orange; transition: all 0.5s ease } Updated Fiddle
Q: How to make a hover effect stay even after unhover? I have created a simple JSFiddle for the problem. Here is the link: https://jsfiddle.net/tnkh/Loewjnr3/ CSS: .container{ background: white; display:flex; justify-content: center; align-items: center; height:50px } .circle { display: inline-block; width: 20px; height: 20px; background: #0f3757; -moz-border-radius: 50px; -webkit-border-radius: 50px; border-radius: 50px; margin-left:10px; float:left; transition: all 0.3s ease } .circle:hover { background:orange; } Basically, I can hover over any circle to change its color. How could I make the orange color stay on any particular circle that I hovered over, even after the mouse moves away to the white container? Is there any script or CSS animation I could use to solve the problem? A: Just add a mouseover event to the .circle elements and write an active CSS class with a background-color property; when the event occurs, remove the active class from every .circle and add it to the current element. JS $(".container span.circle").on('mouseover',function(){ $(".circle").removeClass('active');//remove from other elements $(this).addClass('active'); }); CSS .active { background:orange; transition: all 0.5s ease } Updated Fiddle A: Using jQuery you can add a class to an element as such: $(element).on('hover', function() { // this if you're hovering over the element that would change, otherwise rename 'this' to whatever element class or id you want to change $(this).addClass('NameOfClass'); }); You can then have that class in CSS: .NameOfClass { background-color: orange; } And then just remove that class when you want. 
A: Change .circle:hover to .hover .hover { background:orange; transition: all 0.5s ease } A) IF you want orange color be forever : Use this jquery $(document).ready(function(){ $('.circle').hover(function(){ $(this).addClass('hover'); }) }) $(document).ready(function(){ $('.circle').hover(function(){ $(this).addClass('hover'); }) }) .container{ background: white; display:flex; justify-content: center; align-items: center; height:50px } .circle { display: inline-block; width: 20px; height: 20px; background: #0f3757; -moz-border-radius: 50px; -webkit-border-radius: 50px; border-radius: 50px; margin-left:10px; float:left; transition: all 0.5s ease } .hover { background:orange; transition: all 0.5s ease } <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script> <div class= "container"> <span class="circle"></span> <span class="circle"></span> <span class="circle"></span> <span class="circle"></span> </div> B) If you want orange color be until mouse move on other circle Use this jquery $(document).ready(function(){ $('.circle').hover(function(){ $('.circle').removeClass('hover'); $(this).addClass('hover'); }) }) $(document).ready(function(){ $('.circle').hover(function(){ $('.circle').removeClass('hover'); $(this).addClass('hover'); }) }) .container{ background: white; display:flex; justify-content: center; align-items: center; height:50px } .circle { display: inline-block; width: 20px; height: 20px; background: #0f3757; -moz-border-radius: 50px; -webkit-border-radius: 50px; border-radius: 50px; margin-left:10px; float:left; transition: all 0.5s ease } .hover { background:orange; transition: all 0.5s ease } <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script> <div class= "container"> <span class="circle"></span> <span class="circle"></span> <span class="circle"></span> <span class="circle"></span> </div> A: You can use Jquery to set a class when the mouse is hovered. 
Then the class will remain set even after the mouse moves away. $(".circle").hover(function() { $(this).addClass("hovered"); }); I have created a jsfiddle to demonstrate. A: $( ".circle" ).mouseover(function(){ $(this).css('background','orange') } ) https://jsfiddle.net/rtxq9fnu/
stackoverflow
{ "language": "en", "length": 454, "provenance": "stackexchange_0000F.jsonl.gz:855368", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44512514" }
14b120936fcc2e0de15576fb402f979684536e0a
Stackoverflow Stackexchange Q: SAML IdP - AWS Cognito/IAM as an Identity Provider I know services such as Auth0 can act as both SAML IdPs and integrate with third party IdPs. It would seem that Cognito can only integrate with other third party IdPs as a service provider; it can't actually perform the role of an IdP. The use case is we have our apps creating users in Cognito. We'd like to use a third party application which can integrate with a SAML IdP to support SSO. Is this possible with Cognito or would we need to use something like Auth0? A: Currently, Cognito is an OIDC IdP and not a SAML IdP. If an application supports OIDC, you can use Cognito to connect to that. We have recently released in public beta a new feature that allows you to federate identity from another SAML IdP. Here's the blog entry: https://aws.amazon.com/blogs/mobile/amazon-cognito-user-pools-supports-federation-with-saml/ We will consider your request for future releases.
Q: SAML IdP - AWS Cognito/IAM as an Identity Provider I know services such as Auth0 can act as both SAML IdPs and integrate with third party IdPs. It would seem that Cognito can only integrate with other third party IdPs as a service provider; it can't actually perform the role of an IdP. The use case is we have our apps creating users in Cognito. We'd like to use a third party application which can integrate with a SAML IdP to support SSO. Is this possible with Cognito or would we need to use something like Auth0? A: Currently, Cognito is an OIDC IdP and not a SAML IdP. If an application supports OIDC, you can use Cognito to connect to that. We have recently released in public beta a new feature that allows you to federate identity from another SAML IdP. Here's the blog entry: https://aws.amazon.com/blogs/mobile/amazon-cognito-user-pools-supports-federation-with-saml/ We will consider your request for future releases. A: A Cognito user pool by itself is not a SAML provider yet. But if you would like to use a Cognito user pool, and also use it as a SAML provider, you'll have to allow users to sign in through a real external SAML federated identity provider, such as AWS SSO, by integrating the Cognito user pool with the external SAML IdP. And your app should not directly add a user to the Cognito user pool; instead, you will need to add users to your external SAML IdP, such as AWS SSO. During the sign-in process, Cognito will automatically add the external user to your user pool. (See https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-saml-idp-authentication.html)
stackoverflow
{ "language": "en", "length": 265, "provenance": "stackexchange_0000F.jsonl.gz:855379", "question_score": "13", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44512540" }
affbad9a04679b8f8792e386b38f7617e0e03517
Stackoverflow Stackexchange Q: Python error: missing 1 required positional argument: 'self' I'm brand new to Python, and I'm trying to learn how to work with classes. Does anyone know how come this is not working? Any additional tips about the keyword "self" would be greatly appreciated. The code: class Enemy: life = 3 def attack(self): print('ouch!') self.life -= 1 def checkLife(self): if self.life <= 0: print('I am dead') else: print(str(self.life) + "life left") enemy1 = Enemy enemy1.attack() enemy1.checkLife() The error: C:\Users\Liam\AppData\Local\Programs\Python\Python36-32\python.exe C:/Users/Liam/PycharmProjects/YouTube/first.py Traceback (most recent call last): File "C:/Users/Liam/PycharmProjects/YouTube/first.py", line 16, in <module> enemy1.attack() TypeError: attack() missing 1 required positional argument: 'self' Process finished with exit code 1 A: Enemy is the class. Enemy() is an instance of the class Enemy. You need to initialise the class, enemy1 = Enemy() enemy1.attack() enemy1.checkLife()
Q: Python error: missing 1 required positional argument: 'self' I'm brand new to Python, and I'm trying to learn how to work with classes. Does anyone know how come this is not working? Any additional tips about the keyword "self" would be greatly appreciated. The code: class Enemy: life = 3 def attack(self): print('ouch!') self.life -= 1 def checkLife(self): if self.life <= 0: print('I am dead') else: print(str(self.life) + "life left") enemy1 = Enemy enemy1.attack() enemy1.checkLife() The error: C:\Users\Liam\AppData\Local\Programs\Python\Python36-32\python.exe C:/Users/Liam/PycharmProjects/YouTube/first.py Traceback (most recent call last): File "C:/Users/Liam/PycharmProjects/YouTube/first.py", line 16, in <module> enemy1.attack() TypeError: attack() missing 1 required positional argument: 'self' Process finished with exit code 1 A: Enemy is the class. Enemy() is an instance of the class Enemy. You need to initialise the class, enemy1 = Enemy() enemy1.attack() enemy1.checkLife()
stackoverflow
{ "language": "en", "length": 130, "provenance": "stackexchange_0000F.jsonl.gz:855406", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44512630" }
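The distinction the answer draws — Enemy (the class object) versus Enemy() (an instance) — can be demonstrated directly. This sketch reuses the record's Enemy class (with the instantiation fix applied) to show why the unparenthesized assignment triggers the missing-self TypeError:

```python
class Enemy:
    life = 3

    def attack(self):
        self.life -= 1  # creates/updates the *instance* copy of life

    def check_life(self):
        return "I am dead" if self.life <= 0 else f"{self.life} life left"

enemy_class = Enemy    # no parentheses: just another name for the class itself
enemy = Enemy()        # parentheses instantiate the class

try:
    enemy_class.attack()           # unbound call: nothing is passed as `self`
except TypeError as exc:
    print("TypeError:", exc)

enemy.attack()                     # bound call: `enemy` is passed as `self`
print(enemy.check_life())          # 2 life left
```

Calling through an instance binds that instance to self automatically; calling through the bare class leaves self unfilled, which is exactly the error in the traceback above.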
330bc626908312eefd683dd52a3a08ae4a276384
Stackoverflow Stackexchange Q: openjdk code compilation/ IDE setup I am trying to understand the OpenJDK VM code, specifically the GC code base. I tried to open it in CLion, but it shows a lot of errors. Is there a document which explains how to set up and navigate the code? A: The OpenJDK source distribution includes a NetBeans project, nbproject - just open this project in the NetBeans IDE with the C/C++ development pack. The project already contains configurations for Solaris, Linux, and MacOS. Here are step-by-step instructions (I didn't check them): * *http://marcelinorc.com/2016/02/17/using-netbeans-to-hack-openjdk9-in-ubuntu/ *https://dzone.com/articles/hack-openjdk-netbeans-ide In the case of CLion you can use the following instructions. If you are interested in the hotspot project, you can use this CMakeLists.txt cmake_minimum_required(VERSION 3.6) project(hotspot) set(CMAKE_CXX_STANDARD 98) set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -D_GNU_SOURCE \ -D_REENTRANT \ -DLINUX -DINCLUDE_SUFFIX_OS=_linux -DVM_LITTLE_ENDIAN \ -DTARGET_COMPILER_gcc \ -DAMD64 -DHOTSPOT_LIB_ARCH='amd64' -DINCLUDE_SUFFIX_CPU=_x86 -D_LP64 -DTARGET_ARCH_x86 \ -DCOMPILER1 -DCOMPILER2") include_directories( src/share/vm src/os/linux/vm src/cpu/x86/vm src/os_cpu/linux_x86/vm src/share/vm/precompiled) set(SOURCE_FILES // CLion will generate includes here automatically on project initialization ) add_executable(hotspot ${SOURCE_FILES})
Q: openjdk code compilation/ IDE setup I am trying to understand the OpenJDK VM code, specifically the GC code base. I tried to open it in CLion, but it shows a lot of errors. Is there a document which explains how to set up and navigate the code? A: The OpenJDK source distribution includes a NetBeans project, nbproject - just open this project in the NetBeans IDE with the C/C++ development pack. The project already contains configurations for Solaris, Linux, and MacOS. Here are step-by-step instructions (I didn't check them): * *http://marcelinorc.com/2016/02/17/using-netbeans-to-hack-openjdk9-in-ubuntu/ *https://dzone.com/articles/hack-openjdk-netbeans-ide In the case of CLion you can use the following instructions. If you are interested in the hotspot project, you can use this CMakeLists.txt cmake_minimum_required(VERSION 3.6) project(hotspot) set(CMAKE_CXX_STANDARD 98) set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -D_GNU_SOURCE \ -D_REENTRANT \ -DLINUX -DINCLUDE_SUFFIX_OS=_linux -DVM_LITTLE_ENDIAN \ -DTARGET_COMPILER_gcc \ -DAMD64 -DHOTSPOT_LIB_ARCH='amd64' -DINCLUDE_SUFFIX_CPU=_x86 -D_LP64 -DTARGET_ARCH_x86 \ -DCOMPILER1 -DCOMPILER2") include_directories( src/share/vm src/os/linux/vm src/cpu/x86/vm src/os_cpu/linux_x86/vm src/share/vm/precompiled) set(SOURCE_FILES // CLion will generate includes here automatically on project initialization ) add_executable(hotspot ${SOURCE_FILES})
stackoverflow
{ "language": "en", "length": 151, "provenance": "stackexchange_0000F.jsonl.gz:855502", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44512922" }
30e7095b8ed350b83da61390bc4d08a0be1faa64
Stackoverflow Stackexchange Q: Visual Studio code: C++ syntax highlighting for classes Is there a way to get Visual Studio Code (Linux) to highlight custom classes and data types for C++? I want this so that when I create a function that returns a certain data type, it will highlight it correctly and help with readability at a glance. A: Here we have the solution! The VS Code C++ team has released this in an Insiders build, and it can do syntactic/lexical and semantic colorization.
Q: Visual Studio code: C++ syntax highlighting for classes Is there a way to get Visual Studio Code (Linux) to highlight custom classes and data types for C++? I want this so that when I create a function that returns a certain data type, it will highlight it correctly and help with readability at a glance. A: Here we have the solution! The VS Code C++ team has released this in an Insiders build, and it can do syntactic/lexical and semantic colorization.
stackoverflow
{ "language": "en", "length": 76, "provenance": "stackexchange_0000F.jsonl.gz:855507", "question_score": "7", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44512933" }
14151ee7743ad3ba5e92e8c159f4b1b20f90a668
Stackoverflow Stackexchange Q: How to use PHP7 code in HTML file with the help of .htaccess setting I am using a Linux server on which I have installed PHP 7.*. I want to use PHP code in an HTML file. Right now the PHP code is rendered as plain text in the web page. I am using the following code in my .htaccess file, but it is not working. AddHandler x-httpd-php .html .htm and AddHandler php7-script .php .html .htm and <FilesMatch "\.html?$"> SetHandler application/x-httpd-php7 </FilesMatch> But none of these are working. A: After you have installed php7.0-cgi (sudo apt install php7.0-cgi) you can add to your .htaccess: AddHandler php70-cgi .php This tells Apache to run PHP on any file with the extension ".php" using the module called php70-cgi, which is afaik modules/php70-cgi.so. A reason why it's not working could be the webserver settings in /etc/apache2/sites-available/default: if there is AllowOverride "None", set it to "All"; otherwise you can only make settings in <Directory> and not in .htaccess <Directory /var/www/> ... AllowOverride All ... </Directory>
Q: How to use PHP7 code in HTML file with the help of .htaccess setting I am using a Linux server on which I have installed PHP 7.*. I want to use PHP code in an HTML file. Right now the PHP code is rendered as plain text in the web page. I am using the following code in my .htaccess file, but it is not working. AddHandler x-httpd-php .html .htm and AddHandler php7-script .php .html .htm and <FilesMatch "\.html?$"> SetHandler application/x-httpd-php7 </FilesMatch> But none of these are working. A: After you have installed php7.0-cgi (sudo apt install php7.0-cgi) you can add to your .htaccess: AddHandler php70-cgi .php This tells Apache to run PHP on any file with the extension ".php" using the module called php70-cgi, which is afaik modules/php70-cgi.so. A reason why it's not working could be the webserver settings in /etc/apache2/sites-available/default: if there is AllowOverride "None", set it to "All"; otherwise you can only make settings in <Directory> and not in .htaccess <Directory /var/www/> ... AllowOverride All ... </Directory>
stackoverflow
{ "language": "en", "length": 165, "provenance": "stackexchange_0000F.jsonl.gz:855512", "question_score": "6", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44512947" }
a28bcb8d2e80562e8ed171f63b1dbc30b484e0aa
Stackoverflow Stackexchange Q: ipython notebook multiple instances on different ports I would like to have multiple instances of ipython notebook running on different ports for the same user. Is it possible? Something like a list of ports for 'NotebookApp.port' with a default one. A: Just run jupyter notebook a second time; it will automatically select another port to use.
Q: ipython notebook multiple instances on different ports I would like to have multiple instances of ipython notebook running on different ports for the same user. Is it possible? Something like a list of ports for 'NotebookApp.port' with a default one. A: Just run jupyter notebook a second time; it will automatically select another port to use. A: Run jupyter notebook --port=8090 and change 8090 to the port you want.
stackoverflow
{ "language": "en", "length": 69, "provenance": "stackexchange_0000F.jsonl.gz:855516", "question_score": "9", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44512968" }
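The "automatically select another port" behavior mentioned above boils down to probing ports until a bind succeeds. A rough Python illustration of that fallback logic (a simplification for explanation, not Jupyter's actual implementation), using only the standard library:

```python
import socket

def find_open_port(start: int = 8888, attempts: int = 10) -> int:
    """Return the first port in [start, start + attempts) that we can bind,
    mimicking how a server falls back when its preferred port is taken."""
    for port in range(start, start + attempts):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            try:
                sock.bind(("127.0.0.1", port))
            except OSError:
                continue  # port busy -- try the next one
            return port
    raise RuntimeError("no free port found")

print(find_open_port())
```

With one notebook already listening on 8888, a second probe starting at 8888 would land on the next free port — which matches what happens when jupyter notebook is run twice.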
22af8d61b62fe4252b355fa12671e597aab86685
Stackoverflow Stackexchange Q: Main Thread Checker: UI API called on a background thread: -[UIApplication delegate] Xcode 9 seems to be reporting a lot of main-thread warnings for calls to UIApplication properties. Even though the UI is not being updated, this is particularly cumbersome due to the volume of logs it produces in a default environment. 4 TestApp 0x0000000101c262e0 __39-[ViewController viewDidLoad]_block_invoke + 196 5 libdispatch.dylib 0x0000000102279654 _dispatch_call_block_and_release + 24 6 libdispatch.dylib 0x0000000102279614 _dispatch_client_callout + 16 7 libdispatch.dylib 0x0000000102289008 _dispatch_queue_serial_drain + 716 8 libdispatch.dylib 0x000000010227ce58 _dispatch_queue_invoke + 340 9 libdispatch.dylib 0x000000010228a1c4 _dispatch_root_queue_drain_deferred_wlh + 412 10 libdispatch.dylib 0x00000001022917fc _dispatch_workloop_worker_thread + 868 11 libsystem_pthread.dylib 0x00000001ac6771e8 _pthread_wqthread + 924 12 libsystem_pthread.dylib 0x00000001ac676e40 start_wqthread + 4 A: If these reporting messages confuse you, uncheck them: * *Edit Scheme... *Uncheck "Main Thread Checker" in Run > Diagnostics
Q: Main Thread Checker: UI API called on a background thread: -[UIApplication delegate] Xcode 9 seems to be reporting a lot of main-thread warnings for calls to UIApplication properties. Even though the UI is not being updated, this is particularly cumbersome due to the volume of logs it produces in a default environment. 4 TestApp 0x0000000101c262e0 __39-[ViewController viewDidLoad]_block_invoke + 196 5 libdispatch.dylib 0x0000000102279654 _dispatch_call_block_and_release + 24 6 libdispatch.dylib 0x0000000102279614 _dispatch_client_callout + 16 7 libdispatch.dylib 0x0000000102289008 _dispatch_queue_serial_drain + 716 8 libdispatch.dylib 0x000000010227ce58 _dispatch_queue_invoke + 340 9 libdispatch.dylib 0x000000010228a1c4 _dispatch_root_queue_drain_deferred_wlh + 412 10 libdispatch.dylib 0x00000001022917fc _dispatch_workloop_worker_thread + 868 11 libsystem_pthread.dylib 0x00000001ac6771e8 _pthread_wqthread + 924 12 libsystem_pthread.dylib 0x00000001ac676e40 start_wqthread + 4 A: If these reporting messages confuse you, uncheck them: * *Edit Scheme... *Uncheck "Main Thread Checker" in Run > Diagnostics A: Check also the ARKit template Xcode project's Main Thread Checker log console. If a UIApplication or UIApplicationDelegate method is called from another thread, you can disable the thread checking as in CGN's answer, but that will disable this checker completely. You can also subclass what is necessary and call the superclass method on the main thread. This way you can still use the Main Thread Checker in other places in the code.
stackoverflow
{ "language": "en", "length": 193, "provenance": "stackexchange_0000F.jsonl.gz:855520", "question_score": "8", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44512977" }
e285568ed95cb577c9cc0b76207686a050f639b2
Stackoverflow Stackexchange Q: Mass/bulk update in rails without using update_all with a single query? I want to update multiple rows using a single query in Active Record. I can't use update_all because it skips validations. Is there any way to do this in Rails Active Record? A: If you want to update multiple records without instantiating the models, the update_all method should do the trick: Updates all records in the current relation with details given. This method constructs a single SQL UPDATE statement and sends it straight to the database. It does not instantiate the involved models and it does not trigger Active Record callbacks or validations. However, values passed to #update_all will still go through Active Record's normal type casting and serialization. E.g.: # Update all books with 'Rails' in their title Book.where('title LIKE ?', '%Rails%').update_all(author: 'David') As I understood, it even accepts an array as a parameter, allowing us to provide different hashes to update different records in their corresponding order, like in an SQL UPDATE statement. Correct me, somebody, if I'm wrong.
Q: Mass/bulk update in rails without using update_all with a single query? I want to update multiple rows using a single query in Active Record. I can't use update_all because it skips validations. Is there any way to do this in Rails Active Record? A: If you want to update multiple records without instantiating the models, the update_all method should do the trick: Updates all records in the current relation with details given. This method constructs a single SQL UPDATE statement and sends it straight to the database. It does not instantiate the involved models and it does not trigger Active Record callbacks or validations. However, values passed to #update_all will still go through Active Record's normal type casting and serialization. E.g.: # Update all books with 'Rails' in their title Book.where('title LIKE ?', '%Rails%').update_all(author: 'David') As I understood, it even accepts an array as a parameter, allowing us to provide different hashes to update different records in their corresponding order, like in an SQL UPDATE statement. Correct me, somebody, if I'm wrong. A: Mass update without using update_all can be achieved using the activerecord-import gem. Please refer to this gem for more information and method details. Example: Let's say there is a table named "Services" with a "booked" column. We want to update its value using the gem outside the loop. services.each do |service| service.booked = false service.updated_at = DateTime.current if service.changed? end ProvidedService.import services.to_ary, on_duplicate_key_update: { columns: %i[booked updated_at] } activerecord-import by default does not update the "updated_at" column, so we have to update it explicitly. A: It sounds like you are looking for Update records - from the documentation (updating multiple records; different columns with different data) you can do it like this. Example: people = { 1 => { "first_name" => "David" }, 2 => { "first_name" => "Jeremy" } } Person.update(people.keys, people.values)
stackoverflow
{ "language": "en", "length": 304, "provenance": "stackexchange_0000F.jsonl.gz:855553", "question_score": "8", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44513074" }
79a63d49f93fa80d7fb63896746d85272736632a
Stackoverflow Stackexchange Q: How to display psd file using php? I was trying to read a PSD file from the database, as with a PDF or images, but I failed. Is there any way to read or display a PSD file in the browser? A: You can use ImageMagick for this: $im = new Imagick("image.psd"); foreach($im as $layer) { // do something with each $layer // example: save all layers to separate PNG files $layer->writeImage("layer" . ++$i . ".png"); } Refer to this question too: PHP: Get the position (x, y) of the layer of PSD file OR You could use something like this: PSD Library - Read .psd files without any 3rd party libraries.
Q: How to display psd file using php? I was trying to read a PSD file from the database, as with a PDF or images, but I failed. Is there any way to read or display a PSD file in the browser? A: You can use ImageMagick for this: $im = new Imagick("image.psd"); foreach($im as $layer) { // do something with each $layer // example: save all layers to separate PNG files $layer->writeImage("layer" . ++$i . ".png"); } Refer to this question too: PHP: Get the position (x, y) of the layer of PSD file OR You could use something like this: PSD Library - Read .psd files without any 3rd party libraries. A: You can use the code below. It uses a web API, but it's free and has no limitations. <?php $url = 'http://server.com/image.psd'; $data = json_decode(file_get_contents('http://api.rest7.com/v1/image_convert.php?url=' . $url . '&format=png')); if (@$data->success !== 1) { die('Failed'); } $image = file_get_contents($data->file); file_put_contents('rendered_page.png', $image); An additional bonus: it reads not only PSD images but many, many more formats. You can get more examples here: http://rest7.com/image_convert I am not affiliated with this website, but I have used it for over a week now, so if you have questions I can try to answer :)
stackoverflow
{ "language": "en", "length": 196, "provenance": "stackexchange_0000F.jsonl.gz:855562", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44513104" }
a9c95f24fbe04bd70114f76d1be8c4011574ff58
Stackoverflow Stackexchange Q: Remove characters from a string that occur in another string in Kotlin Let me preface this by saying I am really new to Kotlin but am a little familiar with Python. My goal is to remove all the occurrences of the characters in one string from another string via some sort of function. I can show you how I would do this in Python: def removechars(s, chars): return s.translate(None, chars) And I can use it like this: print(removechars("The quick brown fox jumped over the sleazy dog!", "qot")) It would give this output: The uick brwn fx jumped ver he sleazy dg! How can I do something similar in Kotlin? A: You can use Regex (the equivalent module in Python would be re): fun removeChars(s: String, c: String) = s.replace(Regex("[$c]"), "") println(removeChars("The quick brown fox jumped over the sleazy dog!", "qot")) Output: The uick brwn fx jumped ver he sleazy dg!
Q: Remove characters from a string that occur in another string in Kotlin Let me preface this by saying I am really new to Kotlin but am a little familiar with Python. My goal is to remove all the occurrences of the characters in one string from another string via some sort of function. I can show you how I would do this in Python: def removechars(s, chars): return s.translate(None, chars) And I can use it like this: print(removechars("The quick brown fox jumped over the sleazy dog!", "qot")) It would give this output: The uick brwn fx jumped ver he sleazy dg! How can I do something similar in Kotlin? A: You can use Regex (the equivalent module in Python would be re): fun removeChars(s: String, c: String) = s.replace(Regex("[$c]"), "") println(removeChars("The quick brown fox jumped over the sleazy dog!", "qot")) Output: The uick brwn fx jumped ver he sleazy dg! A: I suggest using filterNot() in Kotlin: "Mississippi".filterNot { c -> "is".contains(c)} This should output "Mpp". A: I'm not familiar with Kotlin, but I would declare both strings and a character variable, then do a For...Next statement with the character being assigned in turn to each letter you want removed, and search for the letter(s) in the altered string. It probably isn't the most efficient way of doing it, but if you're okay with that slight delay in run time, it should work.
stackoverflow
{ "language": "en", "length": 237, "provenance": "stackexchange_0000F.jsonl.gz:855579", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44513142" }
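For comparison with the Python 2 translate call in the question, the same character-stripping in Python 3 — where str.translate takes a mapping built by str.maketrans rather than a deletechars argument — can be sketched like this:

```python
def remove_chars(s: str, chars: str) -> str:
    """Remove every occurrence of each character in `chars` from `s`."""
    # maketrans("", "", chars) builds a table that maps each char in `chars` to None.
    return s.translate(str.maketrans("", "", chars))

print(remove_chars("The quick brown fox jumped over the sleazy dog!", "qot"))
# -> The uick brwn fx jumped ver he sleazy dg!
```

The result matches the Kotlin Regex and filterNot answers: remove_chars("Mississippi", "is") also gives "Mpp".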
73b48833f9556579ee8bbe62a29333c1af5db8c8
Stackoverflow Stackexchange Q: How to get Redux Form data in another Component How can I pass the data from Redux Form so that I can access that data in App.js? Here is the code I have written using redux-form. After I click on submit, the data should be passed from form.js to app.js, and the data should be displayed on the page. Here is form.js: const Form=({fields:{name,address}})=>( <form> <center> <div> <label>First Name</label> <Field type="text" component="input" placeholder="Name" name="name"/> </div> <div> <label>Address</label> <Field type="text" component="input" placeholder="Phone" name="phone" /> </div> <button type="submit">Submit</button> </center> </form> ) export default reduxForm({ form: 'form', fields: ['name', 'address'] })(Form); How can I pass this inputted data to app.js? A: What you need to do is use getFormValues to get the Redux field values. So in App.js you can have: import {getFormValues} from 'redux-form'; .. const App = (props) => { var {name, phone} = props.formStates console.log(name, phone); } function mapStateToProps(state) { return { formStates: getFormValues('form')(state) // here 'form' is the name you have given your redux form } } export default connect(mapStateToProps)(App)
Q: How to get Redux Form data in another Component How can I pass the data from Redux Form so that I can access that data in App.js? Here is the code I have written using redux-form. After I click on submit, the data should be passed from form.js to app.js, and the data should be displayed on the page. Here is form.js: const Form=({fields:{name,address}})=>( <form> <center> <div> <label>First Name</label> <Field type="text" component="input" placeholder="Name" name="name"/> </div> <div> <label>Address</label> <Field type="text" component="input" placeholder="Phone" name="phone" /> </div> <button type="submit">Submit</button> </center> </form> ) export default reduxForm({ form: 'form', fields: ['name', 'address'] })(Form); How can I pass this inputted data to app.js? A: What you need to do is use getFormValues to get the Redux field values. So in App.js you can have: import {getFormValues} from 'redux-form'; .. const App = (props) => { var {name, phone} = props.formStates console.log(name, phone); } function mapStateToProps(state) { return { formStates: getFormValues('form')(state) // here 'form' is the name you have given your redux form } } export default connect(mapStateToProps)(App) A: The above-mentioned answer is absolutely correct; however, when your form component gets unmounted, the state may get destroyed and then it will not be available in any other component. To fix this you can add destroyOnUnmount: false to your reduxForm wrapper like this: export default reduxForm({ form: 'form', destroyOnUnmount: false, })(Form) Now you can get the form data in any component by following the above answer. Hope this helps.
stackoverflow
{ "language": "en", "length": 240, "provenance": "stackexchange_0000F.jsonl.gz:855619", "question_score": "9", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44513287" }
d5ac6f22d4b9a1d9f03ca586e5f02493003e301a
Stackoverflow Stackexchange Q: How can I disable automatic updates in ionic-cli 3.3.0? I need to stop the Ionic CLI from checking for updates whenever I run an ionic command. This prevents me from even running the app offline. An illustration: C:\Users\TO-004\Desktop\EzyMarketplace\EzyExtension-App-2017-master>ionic serve ? The Ionic CLI has an update available (3.3.0 => 3.4.0)! Would you like to install it? (Y/n) A: Ionic seems to compare the release date to the timestamp of "lastCommand" in ~/.ionic/config.json. If that date is later than the release date, it will not ask you the question. Just change the timestamp to a future date - not an ideal solution, but it seems to work.
Q: How can I disable automatic updates in ionic-cli 3.3.0? I need to stop the Ionic CLI from checking for updates whenever I run an ionic command. This prevents me from even running the app offline. An illustration: C:\Users\TO-004\Desktop\EzyMarketplace\EzyExtension-App-2017-master>ionic serve ? The Ionic CLI has an update available (3.3.0 => 3.4.0)! Would you like to install it? (Y/n) A: Ionic seems to compare the release date to the timestamp of "lastCommand" in ~/.ionic/config.json. If that date is later than the release date, it will not ask you the question. Just change the timestamp to a future date - not an ideal solution, but it seems to work. A: In 3.5.0 you can turn off the CLI's interactive mode by either using the --no-interactive flag or by manually setting cliFlags.interactive to false in ~/.ionic/config.json. A: My problem was resolved with: ionic --no-interactive -v A: You can change it permanently by setting daemon.updates to false in ~/.ionic/config.json A: You can change it permanently by setting daemon.updates to false in ~/.ionic/config.json, e.g. { .... "daemon" :{ "updates": false } ... }
stackoverflow
{ "language": "en", "length": 176, "provenance": "stackexchange_0000F.jsonl.gz:855624", "question_score": "6", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44513298" }
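The ~/.ionic/config.json tweak described in the answers is easy to script. A hypothetical helper (the file path and the daemon.updates key are as described in the answers; the script itself is not part of the Ionic tooling) that flips the flag while preserving the rest of the config:

```python
import json
import os
import tempfile
from pathlib import Path

def disable_ionic_update_checks(config_path: Path) -> dict:
    """Set daemon.updates = false in an Ionic CLI config file, creating keys as needed,
    and return the resulting config dict."""
    config = json.loads(config_path.read_text()) if config_path.exists() else {}
    config.setdefault("daemon", {})["updates"] = False
    config_path.write_text(json.dumps(config, indent=2))
    return config

# Demo against a throwaway file; for real use, point it at ~/.ionic/config.json.
fd, tmp_name = tempfile.mkstemp(suffix=".json")
os.close(fd)
tmp = Path(tmp_name)
tmp.write_text('{"daemon": {"updates": true}}')
print(disable_ionic_update_checks(tmp)["daemon"]["updates"])  # False
tmp.unlink()
```

Other keys in the file (such as lastCommand or cliFlags.interactive) are left untouched, so the same helper could be extended for the other workarounds mentioned above.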