Dataset schema: id (string, length 40), text (string, 29 to 2.03k characters), original_text (string, 3 to 154k characters), subdomain (string, 20 classes), metadata (dict).
a383a7aed6cbd7ad69d8fab38ad2678be7a2bcac
Q: Is it possible to use msmdpump approach when connecting to azure analysis service For on-prem analysis services (reference https://learn.microsoft.com/en-us/sql/analysis-services/instances/configure-http-access-to-analysis-services-on-iis-8-0) it is possible to configure http endpoint (which you can use for implementing custom authentication). Is there a way to expose http endpoint also for azure version of analysis services ? I tried playing with msmdpump.ini and all I got was various errors. UPDATE Looking at reflected Microsoft.AnalysisServices.AdomdClient.dll - the azure endpoint actually IS http endpoint. The communication goes like: POST https://[yourregion].asazure.windows.net/webapi/clusterResolve {"serverName":"your_as_server_name"} Reply: {"clusterFQDN":"[prefix]-[yourregion].asazure.windows.net", "coreServerName":"your_as_server_name", "tenantId":"... tenantID"} And then POST https://[prefix]-[yourregion].asazure.windows.net/webapi/xmla Authorization: Bearer your_azure_ad_jwt_here x-ms-xmlaserver: your_as_server_name // xmla request inside the body So in theory one should be able to leverage that to create a http proxy. However neither of those is documented/officially suported. A: I tried this and made it work for case of Execute (you can use Execute + Statement for most tasks) With second request you need three more headers (not sure about User-Agent): User-Agent: XmlaClient SOAPAction: urn:schemas-microsoft-com:xml-analysis:Execute x-ms-xmlacaps-negotiation-flags: 1,0,0,0,0
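A minimal sketch of the two undocumented calls described above, written with Python's requests library. The region, server name, and bearer token are placeholders (assumptions), and since Microsoft does not officially support these endpoints they may change without notice:

```python
import requests

region = "westeurope"                      # assumption: your Azure AS region
server_name = "your_as_server_name"        # assumption: placeholder server name
token = "your_azure_ad_jwt_here"           # assumption: a valid Azure AD access token

# Step 1: resolve the cluster that hosts the server
resolve = requests.post(
    f"https://{region}.asazure.windows.net/webapi/clusterResolve",
    json={"serverName": server_name},
).json()

# Step 2: send the XMLA Execute request to the resolved cluster
xmla_body = "<!-- your XMLA Execute envelope goes here -->"
response = requests.post(
    f"https://{resolve['clusterFQDN']}/webapi/xmla",
    headers={
        "Authorization": f"Bearer {token}",
        "x-ms-xmlaserver": server_name,
        "User-Agent": "XmlaClient",
        "SOAPAction": "urn:schemas-microsoft-com:xml-analysis:Execute",
        "x-ms-xmlacaps-negotiation-flags": "1,0,0,0,0",
    },
    data=xmla_body,
)
print(response.status_code)
```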
stackoverflow
{ "language": "en", "length": 164, "provenance": "stackexchange_0000F.jsonl.gz:857343", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44518734" }
2ac89aaa76f15984c527a3a0b993b13ad71f5c75
Q: IBOutlet in protocol implementaion I have the following protocol: protocol TextViewInputField { var indexPath: IndexPath? { get set } var textView: UITextView { get set } var lblPlaceHolder: UILabel { get set } func updatePHHiddenState() } a cell TMStyle2Cell implements this protocol as follows: class TMStyle2Cell: UITableViewCell,TextViewInputField { @IBOutlet var lblPlaceHolder: UILabel! @IBOutlet var textView: UITextView! @IBOutlet var viewSeperator: UIView! var indexPath: IndexPath? func updatePHHiddenState() { } } Why am I getting the following error? TMStyle2Cell does not confirm to protocol TextVeiwInputField. A: Example of protocol. Tested in Swift 4.2. @objc protocol ImageRepresentable { var imageView: UIImageView! { get set } } And for view. class ViewA: UIView, ImageRepresentable { @IBOutlet weak var imageView: UIImageView! } For your case. @objc protocol TextViewInputField { var indexPath: IndexPath? { get set } var textView: UITextView! { get set } var lblPlaceHolder: UILabel! { get set } func updatePHHiddenState() } A: The types in your protocol and your implementation aren't matching. You need: protocol TextViewInputField { var indexPath: IndexPath? { get set } var textView: UITextView! { get set } var lblPlaceHolder: UILabel! { get set } func updatePHHiddenState() } If you use weak IBOutlets, you also need to include that: protocol TextViewInputField { var indexPath: IndexPath? { get set } weak var textView: UITextView! { get set } weak var lblPlaceHolder: UILabel! { get set } func updatePHHiddenState() } Finally, small point, but the set part of your protocol probably isn't necessary.
stackoverflow
{ "language": "en", "length": 241, "provenance": "stackexchange_0000F.jsonl.gz:857345", "question_score": "6", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44518737" }
d0a2bb012ae7487d13df3258094d53a5ddb79e29
Q: Camunda Process Definition Deployment via API I'm trying to deploy a process definition from a file using the following code DeploymentBuilder deploymentBuilder = repositoryService.createDeployment().name(definitionName); deploymentBuilder.addInputStream(definitionName, definitionFileInputStream); String deploymentId = deploymentBuilder.deploy().getId(); System.out.println(deploymentId); The above code runs successfully and the new deploymentId is printed out. Later, I tried to list the deployed process definitions using the following code List<ProcessDefinition> definitions = repositoryService.createProcessDefinitionQuery().list(); System.out.println(definitions.size()); The above code runs successfully but the output is always 0. I've done some investigations and found that in the ACT_GE_BYTEARRAY table an entry with the corresponding deploymentId exists and the BYTES_ column contains that contents of the definitions file. I have also found that there is no corresponding entry found in ACT_RE_PROCDEF table. Is there something messing? from the API and the examples I found it seems that the above code shall suffice, or is there a missing step? Thanks for your help A: It turned out that the issue was related to definitionName (thanks thorben!) as it has to ends on either .bpmn20.xml or .bpmn. After further testing, the suffix is required for the following definitionName of the code deploymentBuilder.addInputStream(definitionName, definitionFileInputStream); Leaving the following definitionName without the suffix is fine repositoryService.createDeployment().name(definitionName); A: It seems that you forget the isExecutable flag on your deployed process definitions. Please check if your process model contains a isExecutable flag. If you use the camunda modeler simply set this option in the properties panel of the process. If you call #deploy() with non-executable definitions an deployment will created, but the process definition are not deployed since they are not executable. On the latest version of camunda platform (7.7), a new method called #deployWithResult() was added to the DeploymentBuilder. This method returns the deployed process definitions, so it is easy to check if process definitions are deployed.
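A hedged sketch of the fix, combined with the deployWithResult() check mentioned in the answer. It is a fragment that reuses the question's own variables (definitionName, definitionFileInputStream); the essential part is the .bpmn suffix on the resource name passed to addInputStream:

```java
// Resource name must end with .bpmn (or .bpmn20.xml), otherwise the engine stores
// the bytes without parsing any process definitions.
DeploymentWithDefinitions deployment = repositoryService.createDeployment()
    .name(definitionName)                                        // no suffix needed here
    .addInputStream(definitionName + ".bpmn", definitionFileInputStream)
    .deployWithResult();                                         // available since Camunda 7.7

// deployWithResult() exposes the parsed definitions, so an empty or null list means
// nothing executable was deployed (for example, the isExecutable flag is not set).
List<ProcessDefinition> deployed = deployment.getDeployedProcessDefinitions();
System.out.println(deployed == null ? 0 : deployed.size());
```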
stackoverflow
{ "language": "en", "length": 295, "provenance": "stackexchange_0000F.jsonl.gz:857354", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44518759" }
41a63e4a54d7e8753399ea36151dbacf9bc2cede
Q: How to configure SAML2 authentication for a loopbackJS application I would like to secure a loopback based app using SAML2.0 and OneLogin. I believe I should use the loopback-component-passport and passport-saml modules in order to achieve my goal. However I'm really struggling to find any good documentation that could help me to implement my use case. Seems like the provided sample is outdated and not so accurate. Would you have any useful pointers or advice that'd help me to get started. Thanks A: SAML authentication in Loopback is poorly documented, but supported. Reading the source code of passport-configurator tells us that the following configuration of providers.json will work: "saml": { "name": "saml", "authScheme" : "saml", "module": "passport-saml", "callbackURL": "", "entryPoint": "", "issuer": "", "audience": "", "certPath": "", "privateCertPath": "", "decryptionPvkPath": "", ... } Here the ellipsis indicates any additional options from the passport-saml provider. Note that no special processing is performed on these options; so, for instance, you will need to pass certPath, privateCertPath, etc. as strings rather than paths to files. See how passport is configured using these properties here. A: So, I don't think there is a clear explanation in Loopback's docs about this, so what I would do is try to figure out how to configure the SAML provider in the prviders.json correctly in order to generate the right passport auth strategy (In your case, you should follow the passport-saml docs to figure out the exact parameters you need to pass). Loopback is using the loopback-component-passport module to read the provider and create the Passport strategies. You can dig into this file to figure out how exactly they are doing it.
stackoverflow
{ "language": "en", "length": 275, "provenance": "stackexchange_0000F.jsonl.gz:857374", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44518819" }
86d5ebb3b0e670a837cfe49a54554ff599919a9e
Q: Add GitHub URL in Jenkins Blue Ocean view When I run a job in Jenkins and view it in the Blue Ocean view, it shows me the short Git hash on the top left. I would like this hash to be clickable and link to the GitHub page of that particular commit. Is there a way to do this? This is the hash I'm talking about. A: This is a bug in Blue Ocean and we aim to have it fixed for the 1.2 release. Watch this ticket for updates. A: The icon with the box and arrow should be in the top right corner and allow you to go to the old UI to view more details on that build.
stackoverflow
{ "language": "en", "length": 122, "provenance": "stackexchange_0000F.jsonl.gz:857408", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44518971" }
3c3e40c36f23e636035aa3fe8d1a8483ca560c01
Q: How can I use one forEach loop to iterate over two different arrays? I'd like to be able to log the values of both of these arrays onto the console with one forEach loop. As of now, I'm only iterating over the first one because that's all I know how to do! Is this possible? var array1 = [1, 2, 3, 4]; var array2 = [5, 6, 7, 8]; array1.forEach(function(item){ console.log(item) }); A: If both arrays have the same length then you can use index to log elements from other array. var array1 = [1, 2, 3, 4]; var array2 = [5, 6, 7, 8]; array1.forEach(function(item, index){ console.log(item, array2[index]) }); A: You can use concat to join them together, so: var array1 = [1, 2, 3, 4]; var array2 = [5, 6, 7, 8]; array1.concat(array2).forEach(function(item){ console.log(item) }); Prints 1, 2, 3, 4, 5, 6, 7, 8 on separate lines. A: var array1 = [1, 2, 3, 4]; var array2 = [5, 6, 7, 8]; array1.forEach(function(item, i){ console.log(item, array1[i]) // logs first array console.log(item, array2[i])// logs second array }); A: An alternative solution is to merge the arrays into one, like so, assuming order is important to you: const combined = array1.map((array1val, index) => [array1val, array2[index]]) combined.forEach(vals => { vals.forEach(console.log) }) A: we can also use 'for' loop to iterate both array like this: for(i=0;i < array1.length || i < array2.length;i++){ console.log(array1[i] + '----' + array2[i]) }
stackoverflow
{ "language": "en", "length": 237, "provenance": "stackexchange_0000F.jsonl.gz:857416", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44519001" }
9281438fb74cdd280e9782b38e801707e1c5386d
Q: Prevent Algolia indexing certain post types I have the algolia search working on my site but in the backend its also indexing a post type which should never be indexed (as it contains private infromation). Is there a setting somewhere where I can say "NEVER index {post_type}"? A: If you're speaking about an Algolia for WordPress plugin, you can definitely define a function/hook to decide whether a post type should be indexed or not. For instance you could write: <?php /** * @param bool $should_index * @param WP_Post $post * * @return bool */ function exclude_post_types( $should_index, WP_Post $post ) { if ( false === $should_index ) { return false; } // Add all post types you don't want to make searchable. $excluded_post_types = array( 'myprivatetype' ); return ! in_array( $post->post_type, $excluded_post_types, true ); } // Hook into Algolia to manipulate the post that should be indexed. add_filter( 'algolia_should_index_searchable_post', 'exclude_post_types', 10, 2 ); You can read more on: https://community.algolia.com/wordpress/indexing-flow.html#indexing-decision
stackoverflow
{ "language": "en", "length": 160, "provenance": "stackexchange_0000F.jsonl.gz:857437", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44519051" }
3ce24e60bb9c7c86caffa99eca238e1453352134
Q: How to do velocity concatenation? I have three variables that are extracted from date: date, month and year separately. I want to concatenate them into one variable and then convert into a date format. I am trying like this #set( $str = "$date_curr1$month_curr1$year_curr1" ) #set( $dateFormated = $dateTool.toDate("ddMMyyyy", $str)) A: There are several mistake in your code DateTool dateformat wrong Your format should be dd-MM-yyyy not ddMMyyyy. Velocity string concatenation we need to use always variables and set in velocity always #set I have added this map contextMap.put("dateTool",new DateTool()); contextMap.put("date_curr1","14"); contextMap.put("month_curr1","06"); contextMap.put("year_curr1","2017"); And velocity file #set($concat ="-") #set( $str = "$date_curr1$concat$month_curr1$concat$year_curr1 ") $str #set( $dateFormated = $dateTool.toDate("dd-MM-yyyy",$str)) $dateFormated Output 14-06-2017
stackoverflow
{ "language": "en", "length": 111, "provenance": "stackexchange_0000F.jsonl.gz:857445", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44519085" }
39d228f90445d6fdd6b0f7f8a535457cee9bdefc
Q: How to fix the mutex lock? Google play has reported an ANR in my application, following is a snippet of the log associated with the ANR, "HeapTaskDaemon" daemon prio=5 tid=6 Blocked | group="system" sCount=1 dsCount=0 obj=0x12c9bc40 self=0xb93a6020 | sysTid=15299 nice=0 cgrp=default sched=0/0 handle=0xb40ed930 | state=S schedstat=( 0 0 0 ) utm=6793 stm=227 core=0 HZ=100 | stack=0xb3feb000-0xb3fed000 stackSize=1038KB | held mutexes= native: pc 0000000000016a5c /system/lib/libc.so (syscall+32) native: pc 00000000000f6ed9 /system/lib/libart.so (_ZN3art17ConditionVariable9TimedWaitEPNS_6ThreadExi+120) native: pc 00000000001d7265 /system/lib/libart.so (_ZN3art2gc13TaskProcessor7GetTaskEPNS_6ThreadE+240) native: pc 00000000001d7711 /system/lib/libart.so (_ZN3art2gc13TaskProcessor11RunAllTasksEPNS_6ThreadE+72) native: pc 000000000000037f /data/dalvik-cache/arm/system@[email protected] (Java_dalvik_system_VMRuntime_runHeapTasks__+74) at dalvik.system.VMRuntime.runHeapTasks (Native method) - waiting to lock an unknown object at java.lang.Daemons$HeapTaskDaemon.run (Daemons.java:355) at java.lang.Thread.run (Thread.java:818) Please throw some insight into how to get this fixed. Note :- You can find the entire crash log at this link :- https://pastebin.com/pxzLLnBt
stackoverflow
{ "language": "en", "length": 127, "provenance": "stackexchange_0000F.jsonl.gz:857452", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44519124" }
6e28b414ba7c80592b22a419eea72cd4e22fa277
Q: force link to open firefox instead of safari in mobile Please forgive my english. As you may know, Webrtc is not compatible with Safari IOS on iphone. So I need to force link to open firefox instead of safari. I found solution for chrome: googlechromes://google.com If I do the same for firefox: firefox://google.com It open firefox but doesn't load the url. It will just display firefox with the previous url I open on my last firefox session. So I made a search and I found these: Force link to open in mobile safari from a web app with javascript Force link to open in Chrome iOS Facebook App browser - force link to open in Safari But none of these solutions answer to my specific question. Can someone already faced same issue? Thanks in advance. Kind regards Gauthier A: Firefox URL scheme to do that would look like: firefox://open-url?url=https://google.com There is an open in Firefox library that can help with escaping: https://github.com/mozilla-mobile/firefox-ios-open-in-client Firefox also has a bug open to add support in IntentKit https://bugzilla.mozilla.org/show_bug.cgi?id=1399801 A: I have a couple of Safari bookmarks that I use for this type of thing. These are helpful when I am using Safari, but there is some formatting issue and so I want to quickly/easily open the same page in another iOS browser. HTH. Safari Bookmark Name: Open In Firefox Bookmark URL: javascript:location.href=%22firefox%3A%2F%2Fopen-url%3Furl%3D%22+location.href; Safari Bookmark Name: Open In Chrome Bookmark URL: javascript:location.href=%22googlechrome%22+location.href.substring(4);
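A small sketch of building such a link from page JavaScript, using the open-url scheme from the answer; encodeURIComponent provides the escaping that the linked library otherwise handles:

```javascript
// Redirect the current page to Firefox for iOS via its URL scheme.
var target = window.location.href;                      // or any URL you want to open
var firefoxUrl = "firefox://open-url?url=" + encodeURIComponent(target);
window.location.href = firefoxUrl;
```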
stackoverflow
{ "language": "en", "length": 238, "provenance": "stackexchange_0000F.jsonl.gz:857476", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44519184" }
3745a55f735aa66a40b4de42c7fc56d08a49b638
Q: Add Dictionaries Keys Values to Array [JavaScript] Here is a sample of my data structure in JavaScript: var list = [{"username":"admin1"}, {"username":"admin2"}, {"username":"admin3"}, {"username":"admin4"}, {"username":"admin5"}]; How can I add each "username" to a new array (var result = [])? A sample of the final data structure would be: var result = ["admin1", "admin2", "admin3", "admin4", "admin5"]; Thank you * A: Use Array#map method to generate an array by iterating over the elements. var list = [{"username":"admin1"}, {"username":"admin2"}, {"username":"admin3"}, {"username":"admin4"}, {"username":"admin5"}]; var res = list.map(function(o) { return o.username }); console.log(res); With ES6 arrow function: var list = [{"username":"admin1"}, {"username":"admin2"}, {"username":"admin3"}, {"username":"admin4"}, {"username":"admin5"}]; let res = list.map(o => o.username); console.log(res); A: var list = [{"username":"admin1"}, {"username":"admin2"}, {"username":"admin3"}, {"username":"admin4"}, {"username":"admin5"}]; var array=[]; for (var key in list) { let value = list[key]; console.log(value.username); array.push(value.username); } <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
stackoverflow
{ "language": "en", "length": 135, "provenance": "stackexchange_0000F.jsonl.gz:857505", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44519294" }
e1e9ce917dcc66f8049801be8d0d4f1a4bf41481
Q: how to add presto SQLalchemy URI to be connected in airbnb data visualization tool superset i have both presto and superset setup. presto is working well, can be accessed by command: . /app/hadoop/setjdk8.sh;bin/presto-cli --server http://myserver:8070 --catalog hive --schema default And tested ok with a sql query select count(*) on a hive table. superset is also setup on same server, and web UI is OK to access. but always fail to connect to presto when trying to do "Add Database" action to presto. The SQLAlchemy URI is input as: presto://myserver:8070/default Same error always pops out when "Test Connection" button is clicked. As to URI, after presto://, hostname, localhost, 127.0.0.1, ip all are tried, all end into 502 popout. Here is the error pic, A: You have the right URL, you just need to pass the schema as a query parameter and maybe drop the port number like this: presto://myserver/?schema=default Please make sure that your presto server is actually listening on port 8070 as its default is 8080 and superset usually connects fine without adding the port number. A: the parameter --server http://myserver:8070 is kind of weird ... have you tried just --server myserver:8070 ??? A: presto://example.com:8080/system This worked for me with presto 316 version . system is catalog name
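A quick way to sanity-check the URI outside Superset is to open it with SQLAlchemy directly. This sketch assumes the PyHive Presto dialect is installed (pip install 'pyhive[presto]') and that myserver:8070 really is the coordinator's HTTP port:

```python
from sqlalchemy import create_engine, text

# catalog/schema form; the query-parameter form presto://myserver/?schema=default
# from the answer should behave the same way
engine = create_engine("presto://myserver:8070/hive/default")

with engine.connect() as conn:
    for row in conn.execute(text("SHOW TABLES")):
        print(row)
```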
stackoverflow
{ "language": "en", "length": 209, "provenance": "stackexchange_0000F.jsonl.gz:857507", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44519298" }
53fbc67510ff01173c008c787db70df2e3c43414
Q: Issue running javascript library (photo sphere viewer) I wanted to use Photo Sphere Viewer in my project. So I ran npm i photo-sphere-viewer It appears to have downloaded the modules. Then inside my project I did: import PhotoSphereViewer from 'photo-sphere-viewer/dist/photo-sphere-viewer'; But I get error: Failed to compile. Error in ./~/photo-sphere-viewer/dist/photo-sphere-viewer.js Module not found: 'D.js' in /home/ghy/WebstormProjects/mia-map/node_modules/photo-sphere-viewer/dist @ ./~/photo-sphere-viewer/dist/photo-sphere-viewer.js 9:4-55 Can anyone help me spot what is wrong? PS. I think following line is causing issue (inside photo-sphere-viewer.js): if (typeof define === 'function' && define.amd) { define(['three', 'D.js', 'uevent', 'doT'], factory); } but I am even surprised why it enters inside this if as I didn't know I had require.js installed. PPS. I have a react application created by create-react-app A: Mine works with webpack using: import { Viewer } from 'photo-sphere-viewer'; const viewer = new Viewer({ container: document.querySelector('#viewer'), panorama: 'path/to/photo.jpg' }); You can install with npm or yarn npm install photo-sphere-viewer # or yarn add photo-sphere-viewer
stackoverflow
{ "language": "en", "length": 157, "provenance": "stackexchange_0000F.jsonl.gz:857520", "question_score": "6", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44519320" }
b1e3bbba9c2e70df7858cb187cc305b5a28ecf9b
Q: How to Step Into import statement when Python debugging in PyCharm? import numpy as np is just calling Includes/numpy/__init__, right? So, how to step into this while debugging? F7 doesn't work. Setting breakpoint inside numpy doesn't work either. A: Lets say you have a script in a project in PyCharm where you import numpy: import numpy as np a = np.zeros(100) You need to find the __ init __ .py of numpy (for instance in site-packages/numpy/__ init __.py). You can find it in External Libraries/ site-packages. It should be located right below your project folder in PyCharm Project view. Once you have located the file, open it and set a breakpoint at the first line of a code. Code starts like this: try: __NUMPY_SETUP__ except NameError: __NUMPY_SETUP__ = False To be able to reach the breakpoint in the __ init __.py of numpy, go back to the script and run it in the debug mode. This will bring you to the place in __ init __.py of numpy where the breakpoint is.
stackoverflow
{ "language": "en", "length": 173, "provenance": "stackexchange_0000F.jsonl.gz:857521", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44519322" }
e03ca0d25421339bc6630dc85c41702c1595bb97
Q: Entity Framework Core SelectMany then Include I cant seem to figure out how to get EF Core to include / load related objects when using SelectMany. context.MyObject .Where(w => w.Id == Id) .SelectMany(m => m.SubObject) .Include(i => i.AnotherType) Would have thought something like the above would work, however the collapsed SubObject collection has the AnotherObject being null and not included. Been searching for hours. Any help would be appreciated. Thanks A: Would have thought something like the above would work It used to work in EF6, but currently is not supported by EF Core - it allows you to use eager load only the entity which the query starts with, as mentioned in the Loading Related Data - Ignored Includes section of the documentation: If you change the query so that it no longer returns instances of the entity type that the query began with, then the include operators are ignored. So to get the eager loading work in your scenario, the query should be like this (assuming you have inverse navigation or FK property in SubObject): context.SubObject .Where(so => so.Object.Id == Id) // or so.ObjectId == Id .Include(i => i.AnotherType)
stackoverflow
{ "language": "en", "length": 192, "provenance": "stackexchange_0000F.jsonl.gz:857523", "question_score": "7", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44519325" }
2bc838f4975b34f6a052ca0a89836100fc4479dc
Q: How to get position of element React | Redux (.getBoundingClientRect() + .getWrappedInstance()) I am trying to find a position of a component rendered with React. I am using redux, so I had to modify the connect function to: export default connect(mapStateToProps, mapDispatchToProps,null,{ withRef: true })(MyComponent); Now I am getting a component back when I call: class OneColumn extends Component { componentDidMount(){ var input = this.refs.myComponentsRef.getWrappedInstance(); console.log(input); // Outputs > TheComponentIWant {props: Object, context: Object, refs: Object, updater: Object, _reactInternalInstance: ReactCompositeComponentWrapper…} var inputRect = input.getBoundingClientRect(); console.log(inputRect); // Outputs error: Uncaught TypeError: input.getBoundingClientRect is not a function } //... } So I can get the component, however I cannot then get the boundingClientRect. Anything I am overlooking? Please help:) A: I found an alternate solution that circumvents this problem: This post describes and refers to another post Basically when the components starts rendering the components give a callback function where you store the positions of those components.
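A rough sketch of that callback-style approach, assuming the connected component can simply be wrapped in a plain DOM element; measuring the wrapper avoids reaching through getWrappedInstance entirely:

```jsx
class OneColumn extends Component {
  measure = (node) => {
    // node is a real DOM element, so getBoundingClientRect is available here
    if (node) {
      this.inputRect = node.getBoundingClientRect();
      console.log(this.inputRect);
    }
  };

  render() {
    // hypothetical wrapper div around the connected child component
    return (
      <div ref={this.measure}>
        <MyComponent />
      </div>
    );
  }
}
```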
stackoverflow
{ "language": "en", "length": 156, "provenance": "stackexchange_0000F.jsonl.gz:857563", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44519452" }
ff9ff21df61d315a0be5c508f6e76df1695fc114
Q: OkHttp3 MockWebServer with IdlingResource Is there a standard way to combine a MockWebServer with an IdlingResource? I guess one has to implement a custom MockDispatcher but the way to go is not so evident. Has anybody tried that already?
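This record has no answer. One approach that is commonly used, offered here as an assumption rather than a documented recipe, is to synchronize on the client side instead of the mock server: register the app's OkHttpClient as an IdlingResource (for example via the com.jakewharton.espresso:okhttp3-idling-resource artifact) so Espresso waits while calls to the MockWebServer are in flight:

```kotlin
// okHttpClient is assumed to be the client your app uses to hit the MockWebServer.
// Espresso then idles until the client's dispatcher has no running or queued calls.
val idlingResource = OkHttp3IdlingResource.create("okhttp", okHttpClient)
IdlingRegistry.getInstance().register(idlingResource)

// ... run the test against the MockWebServer ...

IdlingRegistry.getInstance().unregister(idlingResource)
```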
stackoverflow
{ "language": "en", "length": 40, "provenance": "stackexchange_0000F.jsonl.gz:857568", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44519464" }
34c255dfc6303365d94ed6ae3e8cd38b571b7a0c
Q: How can I change value of prop in react? How to change the value of props, how to setProps, suppose the value of this.props.contact.name is John, I want to change it to Johnny. How can I do this? For example: changeValue(){ this.props.contact.name='Johnny' } A: You would change the prop in the parent component, as that is what holds the value of the prop itself. This would force a re-render of any child components that use the specific prop being changed. If you want to intercept the props as they're sent, you can use the lifecycle method componentWillReceiveProps. A: I would suggest rather then change the props value you can pass the function into props and then change the parent component state so it will change the child component props like your Parent Component should be class SendData extends React.Component{ constructor(props) { super(props); this.state = { images: [ 'http://via.placeholder.com/350x150', 'http://via.placeholder.com/350x151' ], currentImage: 0 }; this.fadeImage=this.fadeImage.bind(this); } fadeImage(e) { e.preventDefault(); this.setState({currentImage: (this.state.currentImage + 1) % this.state.images.length}) } render() { return( <FadeImage images={this.state.images} currentImage={this.state.currentImage} fadeImage={this.fadeImage}/> ) } } your Child Component should be like class FadeImage extends React.Component { constructor(props) { super(props); } render() { return ( <div className="image"> <CSSTransitionGroup transitionName="example" transitionEnterTimeout={300} transitionLeaveTimeout={300} > <section> <button className="button" onClick={this.props.fadeImage.bind(this)}>Click!</button> <img src={this.props.images[this.props.currentImage]}/></section> </CSSTransitionGroup> </div> ); } } Please check working example here Demo A: Props are immutable, that means you can not change them! https://facebook.github.io/react/docs/components-and-props.html If you want to save a new value build it as a state and use this.setState(...) https://facebook.github.io/react/docs/state-and-lifecycle.html
stackoverflow
{ "language": "en", "length": 248, "provenance": "stackexchange_0000F.jsonl.gz:857572", "question_score": "11", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44519475" }
75e6558506c686cbe67c35b52c27c8cd66c4e197
Q: Android localization - How to use values folder -b qualifier Android resources folder values: what is the values-b option I read about here? What I want to do is have my app support two different languages, French and Spanish. But I had a thought: it would be much more organized if, instead of doing strings-es.xml and strings-fr.xml, I could do this: values-es | Strings.xml values-fr | Strings.xml This way, if there are any other things that should be localized, they can easily go into the respective folders. Is this possible? A: The syntax of -b+ allows you to include the script of the language, for languages that use multiple scripts like Serbian, whilst the old method doesn't allow this. The default format that Android uses (as of 7.1) is as follows: values-sr is Serbian in the Cyrillic script, values-b+sr+Latn is Serbian in the Latin script. It is known as a BCP 47 language tag. BCP 47 documentation A: Using '-b' is a newer way of supporting locales. You can support locales using the method you indicate above, but you can also support locales by putting them in directories like this: values-b+es/strings.xml values-b+fr/strings.xml
stackoverflow
{ "language": "en", "length": 195, "provenance": "stackexchange_0000F.jsonl.gz:857586", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44519520" }
5287e901034a63dfb0718546a3bcae33b4ed5ee6
Q: .NET-Core byte array to Image how do I convert a byte[] to an Image in .NET Core? I found this: using (var ms = new MemoryStream(byteArrayIn)) { return Image.FromStream(ms); } but it seems like Image doesnt exist in .NET-Core. A: No indeed it is not released yet. you can use the library ImageSharp. A: To return an image from a byte array, you can either: * *return base64 Byte[] profilePicture = await _db.Players .Where(p => p.Id == playerId) .Select(p => p.ProfilePicture) .FirstOrDefaultAsync(); return Ok(Convert.ToBase64String(profilePicture)); And then you can use any online tool that converts base64 to image to test it. *or return FileContentResult File(byte[] fileContents, string contentType) return File(profilePicture, "image/png"); You should be able to test this from Postman or anything similar as the picture will show up in the body of the response there. A: If you are using javascript img tag: html <img id="logoPic" src="" /> js: document.getElementById("logoPic").src = "data:image/png;base64," + yourByte; with insertCell: row.insertCell(2).innerHTML = "<img src='" + "data:image/png;base64," + yourByte + "'></>"; with td: "<td>"+ "<img src='" + "data:image/png;base64," + yourByte + "'></>" + "</td>" If you are using jquery: html <img id="logoPic" src="" /> JQuery: $('#logoPic').attr('src', `data:image/png;base64,${YourByte}`);
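For completeness, a hedged sketch of the ImageSharp route mentioned in the first answer, assuming the SixLabors.ImageSharp NuGet package; decoding the byte array is a single call:

```csharp
using System;
using SixLabors.ImageSharp;

// Decode the byte array straight into an ImageSharp Image.
using (Image image = Image.Load(byteArrayIn))
{
    Console.WriteLine($"{image.Width}x{image.Height}");
}
```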
stackoverflow
{ "language": "en", "length": 193, "provenance": "stackexchange_0000F.jsonl.gz:857597", "question_score": "6", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44519553" }
1991045720390be0bb214c95864416e1274a7f4e
Q: Print message into R markdown console while knitting I'm wondering if there is a way to print a message containing one of the variables into Rmarkdown console at the end of knitting? I'm currently preparing an R Markdown template for others to use. I would like to inform the users about the readability index of the document once it finishes compiling. I have already found a way to calculate it in one of the chunks. Now I would like to automatically print it to the person who compiles it in a way that it is not shown in the final document. Any ideas? A: Ben Bolkers comment is the perfect answer. I ended up putting: {r cache = FALSE, echo = F, warning = F, message = F} message(rdblty) at the very bottom of the markdown file. rdblty is the variable calculated before that I wanted to print out. A: How about ? cat(file="readability.txt")
stackoverflow
{ "language": "en", "length": 155, "provenance": "stackexchange_0000F.jsonl.gz:857600", "question_score": "6", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44519564" }
fe4c18649ddf778295fc3c5ffa313efc796dd3f8
Q: Running Iperf Server and Client using Multithreading in Python causes Segmentation fault A main class calls two other classes(IperfServer and IperfClient) and I'm trying to run them using multithreading. I am using the python wrapper class for iperf3. Both the classes are initiated but while running Iperf, I get segmentation Fault. CODE SNIPPET: class IperfServer(threading.Thread): def __init__(self): threading.Thread.__init__(self) def run(self): print("1") server = iperf3.Server() print("2") server.port = 5201 response = server.run() class IperfClient(threading.Thread): def __init__(self): threading.Thread.__init__(self) def run(self): print("3") connection = http.client.HTTPSConnection("abc.efg") print("4") connection.request(method="GET", url="/hij/") response = connectn.getresponse() connectn.close() print("5") client = iperf3.Client() client.run() class IperfAgent(object): thread1 = IperfClient() thread2 = IperfServer() thread1.start() thread2.start() OUTPUT: 3 1 Segmentation Fault I'm a newbie to python and multithreading. Could someone help me figure out the mistake I am making? A: Try running it in a subprocess (see multiprocessing.Process) instead of a thread. It appears that iperf_defaults requires being run on a main thread.
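A minimal sketch of the multiprocessing suggestion, assuming the iperf3 Python wrapper and a server and client on the same machine; each side gets its own process, so libiperf effectively runs on a main thread:

```python
import multiprocessing
import time
import iperf3

def run_server():
    server = iperf3.Server()
    server.port = 5201
    server.run()

def run_client():
    client = iperf3.Client()
    client.server_hostname = '127.0.0.1'
    client.port = 5201
    result = client.run()
    print(result)

if __name__ == '__main__':
    srv = multiprocessing.Process(target=run_server, daemon=True)
    srv.start()
    time.sleep(1)          # give the server a moment to start listening
    run_client()
    srv.terminate()
```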
stackoverflow
{ "language": "en", "length": 152, "provenance": "stackexchange_0000F.jsonl.gz:857665", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44519799" }
8078ddea2987c2b54ec4e0702b79389e9456b5e0
Q: Misbehavior of UITableView willDisplayCell method There is a UITableView of posts. Seen posts id's are saved in sqlite I want to show, seen posts in orange color and others in black. But when I set orange color for seen post in willDisplayCell method some cells are colored orange incorrectly, otherwise the print log ("Color it") is correct. override func tableView(tableView: UITableView, willDisplayCell cell: UITableViewCell, forRowAtIndexPath indexPath: NSIndexPath) { let post = postDataSource.posts[indexPath.row] print(post.id) let cellPost = cell as? PostListViewCell if post.isRead.boolValue == true { print("Color it") cellPost!.body.textColor = UIColor.orangeColor() cellPost!.title.textColor = UIColor.orangeColor() } } For instance, if just one post is seen, "Color it" is printed once. and it's correct. But some other cells are colored orange without "Color it" log. A: Try completing the if statement if (post.isRead.boolValue == true) { print("Color it") cellPost!.body.textColor = UIColor.orangeColor() cellPost!.title.textColor = UIColor.orangeColor() }else{ cellPost!.body.textColor = UIColor.blackColor() cellPost!.title.textColor = UIColor.blackColor()} A: 1.Understanding of Reusable table-view cell object From Apple Documentation For performance reasons, a table view’s data source should generally reuse UITableViewCell objects when it assigns cells to rows in its tableView(_:cellForRowAt:) method. A table view maintains a queue or list of UITableViewCell objects that the data source has marked for reuse. Call this method from your data source object when asked to provide a new cell for the table view. This method dequeues an existing cell if one is available or creates a new one using the class or nib file you previously registered. If no cell is available for reuse and you did not register a class or nib file, this method returns nil. 2.Usage of prepareForReuse() From Apple Documentation If a UITableViewCell object is reusable—that is, it has a reuse identifier—this method is invoked just before the object is returned from the UITableView method dequeueReusableCell(withIdentifier:) . For performance reasons, you should only reset attributes of the cell that are not related to content, for example, alpha, editing, and selection state. The table view's delegate in tableView(_:cellForRowAt:) should always reset all content when reusing a cell. If the cell object does not have an associated reuse identifier, this method is not called. If you override this method, you must be sure to invoke the superclass implementation. Another way to do manually resetting attributes of the cell which already described by @RJiryes.
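A small companion sketch (Swift 2 syntax, matching the question) that puts the reset in the cell itself via prepareForReuse, so a recycled cell can never keep the orange color from a previous row; the outlet names are the ones from the question:

class PostListViewCell: UITableViewCell {
    @IBOutlet weak var title: UILabel!
    @IBOutlet weak var body: UILabel!

    override func prepareForReuse() {
        super.prepareForReuse()
        // Reset presentation state before the cell is handed out again
        title.textColor = UIColor.blackColor()
        body.textColor = UIColor.blackColor()
    }
}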
stackoverflow
{ "language": "en", "length": 381, "provenance": "stackexchange_0000F.jsonl.gz:857710", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44519945" }
19526e73f3cf899672c53c03fd5e684443f1b1d0
Stackoverflow Stackexchange Q: Does Rust support using an infix operator as a function? I am writing a function that does piecewise multiplication of two arrays. xs.iter() .zip(ys).map(|(x, y)| x * y) .sum() In some other languages, I can pass (*) as a function to map. Does Rust have this feature? A: Rust does not have any syntax to pass infix operators, mostly because it is redundant anyway. In Rust, each operator maps to a trait: * maps to the std::ops::Mul trait, for example. Therefore, using * directly should be using std::ops::Mul::mul: xs.iter().zip(ys).map(Mul::mul).sum(); However there are several difficulties: * *Generally, iterators yield references while Mul is implemented for plain values, *Mul::mul expects two arguments, xs.zip(ys) yields a single element (a tuple of two elements). So, you need to go from reference to value and then "unpack" the tuple and... it ends up being shorter to use a closure.
Q: Does Rust support using an infix operator as a function? I am writing a function that does piecewise multiplication of two arrays. xs.iter() .zip(ys).map(|(x, y)| x * y) .sum() In some other languages, I can pass (*) as a function to map. Does Rust have this feature? A: Rust does not have any syntax to pass infix operators, mostly because it is redundant anyway. In Rust, each operator maps to a trait: * maps to the std::ops::Mul trait, for example. Therefore, using * directly should be using std::ops::Mul::mul: xs.iter().zip(ys).map(Mul::mul).sum(); However there are several difficulties: * *Generally, iterators yield references while Mul is implemented for plain values, *Mul::mul expects two arguments, xs.zip(ys) yields a single element (a tuple of two elements). So, you need to go from reference to value and then "unpack" the tuple and... it ends up being shorter to use a closure. A: No. The * operator is implemented in std::Ops::Mul, but it can't be used directly: use std::ops::Mul::mul; fn main() { let v1 = vec![1, 2, 3]; let v2 = vec![1, 2, 3]; println!("{:?}", v1.iter().zip(v2).map(|(x, y)| mul).collect()); } Will result in the following error: error[E0253]: `mul` is not directly importable --> <anon>:1:5 | 1 | use std::ops::Mul::mul; | ^^^^^^^^^^^^^^^^^^ cannot be imported directly You could introduce your own function using the * operator, but there wouldn't be much added value :). A: Nnnyes. Sorta kinda not really. You can't write an operator as a name. But most operators are backed by traits, and you can write the name of those, so a * b is effectively Mul::mul(a, b), and you can pass Mul::mul as a function pointer. But that doesn't help in this case because Iterator::map is expecting a FnMut((A, B)) -> C, and the binary operators all implement FnMut(A, B) -> C. Now, you could write an adapter for this, but you'd need one for every combination of arity and mutability. And you'd have to eat a heap allocation and indirection or require a nightly compiler. Or, you could write your own version of Iterator::map on an extension trait that accepts higher arity functions for iterators of tuples... again, one for each arity... Honestly, it's simpler to just use a closure.
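A self-contained sketch contrasting the closure with the fully qualified trait method; both compile on stable Rust, and the closure version is the idiomatic one for exactly the reasons the answers give:

use std::ops::Mul;

fn main() {
    let xs = [1.0_f64, 2.0, 3.0];
    let ys = [4.0_f64, 5.0, 6.0];

    // Closure: unpacks the (x, y) tuple that zip yields.
    let dot: f64 = xs.iter().zip(ys.iter()).map(|(x, y)| x * y).sum();

    // Trait method: expects two arguments and values, so the tuple still has
    // to be unpacked and the references dereferenced by hand.
    let dot2: f64 = xs.iter().zip(ys.iter()).map(|(x, y)| Mul::mul(*x, *y)).sum();

    assert!((dot - dot2).abs() < 1e-12);
    println!("{}", dot);
}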
stackoverflow
{ "language": "en", "length": 366, "provenance": "stackexchange_0000F.jsonl.gz:857723", "question_score": "6", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44519975" }
dd9aa7440a3721fe218757d0e48777ac9169ee87
Stackoverflow Stackexchange Q: What is the page size for 32 and 64 bit versions of Windows OS? I want to know the default page size for virtual memory in Windows for both 32 and 64 bit versions. For example, the page size of Linux (x86) is 4 KB. A: Call GetSystemInfo, or better GetNativeSystemInfo, and look at the dwPageSize member of the SYSTEM_INFO structure. However, under Windows the page size on both x86 and x64 is currently 0x1000, i.e. 4 KB.
Q: What is the page size for 32 and 64 bit versions of Windows OS? I want to know the default page size for virtual memory in Windows for both 32 and 64 bit versions. For example, the page size of Linux (x86) is 4 KB. A: Call GetSystemInfo, or better GetNativeSystemInfo, and look at the dwPageSize member of the SYSTEM_INFO structure. However, under Windows the page size on both x86 and x64 is currently 0x1000, i.e. 4 KB.
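A small Windows-only C program that queries the value at run time instead of hard-coding 4 KB; note that dwAllocationGranularity is a different, larger value that is often confused with the page size:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    SYSTEM_INFO si;
    GetNativeSystemInfo(&si);
    printf("page size:              %lu bytes\n", si.dwPageSize);              /* 4096 on x86/x64 */
    printf("allocation granularity: %lu bytes\n", si.dwAllocationGranularity); /* typically 65536 */
    return 0;
}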
stackoverflow
{ "language": "en", "length": 76, "provenance": "stackexchange_0000F.jsonl.gz:857746", "question_score": "12", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44520047" }
cbd1cfabb5ee3a045f537a61e2f66d0281a48d6b
Stackoverflow Stackexchange Q: C# calculation returns 0 I am new to C# and this code always returns 0.00 and I don't know the reason. Could anyone help? It is a console program and a possible input is sofia 1500, for which the result should be 120.00 using System; namespace TradeComissions { class TradeComissions { static void Main(string[] args) { var town = Console.ReadLine().ToLower(); var amount = double.Parse(Console.ReadLine()); double result = 0.0; if(town == "sofia") { if (amount >= 0 && amount <= 500) { result = amount * (5 / 100); } else if (amount >= 500 && amount <= 1000) { result = amount * (7 / 100); } else if (amount >= 1000 && amount <= 10000) { result = amount * (8 / 100); } else if (amount > 10000) { result = amount * (12 / 100); } } Console.WriteLine("{0:f2}", result); } } } A: You are doing a division between two integers where the denominator is bigger than the numerator, e.g. 5 / 100; that operation results in an integer too (zero). Do this instead: result = amount * (5.0 / 100);
Q: C# calculation returns 0 I am new to C# and this code always returns 0.00 and I don't know the reason. Could anyone help? It is a console program and a possible input is sofia 1500, for which the result should be 120.00 using System; namespace TradeComissions { class TradeComissions { static void Main(string[] args) { var town = Console.ReadLine().ToLower(); var amount = double.Parse(Console.ReadLine()); double result = 0.0; if(town == "sofia") { if (amount >= 0 && amount <= 500) { result = amount * (5 / 100); } else if (amount >= 500 && amount <= 1000) { result = amount * (7 / 100); } else if (amount >= 1000 && amount <= 10000) { result = amount * (8 / 100); } else if (amount > 10000) { result = amount * (12 / 100); } } Console.WriteLine("{0:f2}", result); } } } A: You are doing a division between two integers where the denominator is bigger than the numerator, e.g. 5 / 100; that operation results in an integer too (zero). Do this instead: result = amount * (5.0 / 100); A: This will work: static void Main(string[] args) { var town = Console.ReadLine().ToLower(); var amount = double.Parse(Console.ReadLine()); double result = 0.0; if (town == "sofia") { if (amount >= 0 && amount <= 500) { result = amount * (0.05); } else if (amount >= 500 && amount <= 1000) { result = amount * (0.07); } else if (amount >= 1000 && amount <= 10000) { result = amount * (0.08); } else if (amount > 10000) { result = amount * (0.12); } } double f = result; Console.WriteLine("{0:f2}", result); Console.ReadKey(); }
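A compilable sketch of three equivalent ways to avoid the integer division; the amount is illustrative and uses the 5% band rather than the 1500/8% case from the question:

using System;

class Program
{
    static void Main()
    {
        double amount = 400;
        // 5 / 100 is integer division and evaluates to 0; any of these avoids it:
        double a = amount * (5.0 / 100);  // a double literal forces floating-point division
        double b = amount * 5 / 100;      // multiply first: amount * 5 is already a double
        double c = amount * 0.05;         // or just write the fraction directly
        Console.WriteLine("{0:f2} {1:f2} {2:f2}", a, b, c);  // 20.00 20.00 20.00
    }
}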
stackoverflow
{ "language": "en", "length": 277, "provenance": "stackexchange_0000F.jsonl.gz:857756", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44520073" }
98adea4d3df6707f2583b7fcb8bccd9d93d84fe2
Stackoverflow Stackexchange Q: Method Chaining in Julia I read https://github.com/JuliaLang/julia/issues/5571 which made me think I could break lines like that due to some of the comments: a = [x*5 for x in 0:20 if x>4] scale(y) = (x)-> y*x filter(y) = x -> [z for z in x if z>y] a|>(x->x/3) |>scale(2) |>filter(4) |>println But I get the error: ERROR: LoadError: syntax: "|>" is not a unary operator in include_from_node1(::String) at ./loading.jl:488 in process_options(::Base.JLOptions) at ./client.jl:265 in _start() at ./client.jl:321 Am I forced to use a|>(x->x/3)|>scale(2)|>filter(4)|>println? A: You can move the |> operators to the line-ends: julia> a|>(x->x/3)|> scale(2)|> filter(4)|> println This syntax is because the parser needs to decide unambiguously when a statement ends. (actually, I've asked a question about such an issue myself and got a good answer. see Why is `where` syntax in Julia sensitive to new-line?)
Q: Method Chaining in Julia I read https://github.com/JuliaLang/julia/issues/5571 which made me think I could break lines like that due to some of the comments: a = [x*5 for x in 0:20 if x>4] scale(y) = (x)-> y*x filter(y) = x -> [z for z in x if z>y] a|>(x->x/3) |>scale(2) |>filter(4) |>println But I get the error: ERROR: LoadError: syntax: "|>" is not a unary operator in include_from_node1(::String) at ./loading.jl:488 in process_options(::Base.JLOptions) at ./client.jl:265 in _start() at ./client.jl:321 Am I forced to use a|>(x->x/3)|>scale(2)|>filter(4)|>println? A: You can move the |> operators to the line-ends: julia> a|>(x->x/3)|> scale(2)|> filter(4)|> println This syntax is because the parser needs to decide unambiguously when a statement ends. (actually, I've asked a question about such an issue myself and got a good answer. see Why is `where` syntax in Julia sensitive to new-line?)
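A runnable version of the pipeline using the trailing-operator style from the answer; filter is renamed keepover here only to avoid shadowing Base.filter, which is my own adjustment rather than part of the question:

a = [x * 5 for x in 0:20 if x > 4]
scale(y) = x -> y * x
keepover(y) = x -> [z for z in x if z > y]

# Each |> at the end of a line tells the parser the statement continues.
a |> (x -> x / 3) |>
    scale(2) |>
    keepover(4) |>
    println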
stackoverflow
{ "language": "en", "length": 138, "provenance": "stackexchange_0000F.jsonl.gz:857759", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44520097" }
3c83de820c21faea82b32815e38f781635c48e39
Stackoverflow Stackexchange Q: Run maven using a different version of Java than the one used to run maven I'm using Maven 3.2.5 to build my project; this version of Maven is compatible with Java 6. I want to build a project that enforces using Java 5 with maven-enforcer-plugin. If I set JAVA_HOME to the Java 5 directory, I can't run Maven, and if I set JAVA_HOME to the Java 6 directory, the enforcer plugin complains about the Java version. Is there a way to tell Maven to use a different version of Java from the one that is used to run Maven? A: This is a situation where you need to use toolchains, which solve exactly this problem: for example running Maven on JDK 7 (say Maven 3.5.0) while the code needs to be compiled and tested with JDK 6. See the documentation about toolchains.
Q: Run maven using a different version of Java than the one used to run maven I'm using Maven 3.2.5 to build my project; this version of Maven is compatible with Java 6. I want to build a project that enforces using Java 5 with maven-enforcer-plugin. If I set JAVA_HOME to the Java 5 directory, I can't run Maven, and if I set JAVA_HOME to the Java 6 directory, the enforcer plugin complains about the Java version. Is there a way to tell Maven to use a different version of Java from the one that is used to run Maven? A: This is a situation where you need to use toolchains, which solve exactly this problem: for example running Maven on JDK 7 (say Maven 3.5.0) while the code needs to be compiled and tested with JDK 6. See the documentation about toolchains. A: You can use the maven compiler plugin to set up which javac you want to use, for example: <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <version>3.6.1</version> <configuration> <verbose>true</verbose> <fork>true</fork> <executable>WRITE HERE THE PATH TO YOUR JAVAC 1.5</executable> <compilerVersion>1.5</compilerVersion> </configuration> </plugin> A: Right-click on the project * *Click Properties *Click on Project Facets *Select the Java version *Click on Apply
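A sketch of the two pieces a toolchains setup needs; the jdkHome path and versions are placeholders for whatever JDK is installed on the build machine:

<!-- ~/.m2/toolchains.xml -->
<toolchains>
  <toolchain>
    <type>jdk</type>
    <provides>
      <version>1.6</version>
    </provides>
    <configuration>
      <jdkHome>/path/to/jdk1.6.0_45</jdkHome>
    </configuration>
  </toolchain>
</toolchains>

<!-- pom.xml: let the build pick that JDK while Maven itself runs on a newer one -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-toolchains-plugin</artifactId>
  <version>1.1</version>
  <executions>
    <execution>
      <goals>
        <goal>toolchain</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <toolchains>
      <jdk>
        <version>1.6</version>
      </jdk>
    </toolchains>
  </configuration>
</plugin>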
stackoverflow
{ "language": "en", "length": 195, "provenance": "stackexchange_0000F.jsonl.gz:857776", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44520163" }
4bb39c4a690be09cbb09fc6839dee02fde7ea78a
Stackoverflow Stackexchange Q: Using pdf.js on a node server I want to convert a pdf to an image server-side, using node.js. My input for this task is pdf's url, and the desired output is a base64 string, representing an image. I've decided to try pdf.js (https://github.com/mozilla/pdf.js) and node-canvas (https://github.com/Automattic/node-canvas) - my plan is to read the pdf, render it to canvas and get image's base64 from the canvas. But pdf.js plays up server-side, I create a get document task, as described in examples: import pdfjs from 'pdfjs-dist'; const t = pdfjs.getDocument('http://cdn.mozilla.net/pdfjs/helloworld.pdf'); t.promise.then(function (doc) { console.log('got doc'); console.log(doc); }) .catch(err => { console.log(err); }); But nothing just happens. Promise neither resolves, nor rejects. How can I fix that and make it work? What am I doing wrong? Maybe there's another solution, that would allow me to get converted image's base 64 without storing it to the filesystem (all pdf to image converters for node I've seen so far save images to drive, but that's not the desired behaviour for me)? A: Try this piece of Code: https://github.com/mozilla/pdf.js/blob/master/examples/node/pdf2png/pdf2png.js Try using Async/Await inside Async function const doc = await pdfjs.getDocument('http://cdn.mozilla.net/pdfjs/helloworld.pdf').promise; const page = await doc.getPage(1);
Q: Using pdf.js on a node server I want to convert a pdf to an image server-side, using node.js. My input for this task is pdf's url, and the desired output is a base64 string, representing an image. I've decided to try pdf.js (https://github.com/mozilla/pdf.js) and node-canvas (https://github.com/Automattic/node-canvas) - my plan is to read the pdf, render it to canvas and get image's base64 from the canvas. But pdf.js plays up server-side, I create a get document task, as described in examples: import pdfjs from 'pdfjs-dist'; const t = pdfjs.getDocument('http://cdn.mozilla.net/pdfjs/helloworld.pdf'); t.promise.then(function (doc) { console.log('got doc'); console.log(doc); }) .catch(err => { console.log(err); }); But nothing just happens. Promise neither resolves, nor rejects. How can I fix that and make it work? What am I doing wrong? Maybe there's another solution, that would allow me to get converted image's base 64 without storing it to the filesystem (all pdf to image converters for node I've seen so far save images to drive, but that's not the desired behaviour for me)? A: Try this piece of Code: https://github.com/mozilla/pdf.js/blob/master/examples/node/pdf2png/pdf2png.js Try using Async/Await inside Async function const doc = await pdfjs.getDocument('http://cdn.mozilla.net/pdfjs/helloworld.pdf').promise; const page = await doc.getPage(1);
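A sketch of the full pipeline the question describes (pdf.js page, node-canvas, base64), assuming the pdfjs-dist and canvas packages are installed; the require path, the getViewport signature and the sample URL have all changed between pdf.js releases, so treat this as a starting point rather than a drop-in:

const pdfjs = require('pdfjs-dist/legacy/build/pdf.js');  // older releases: require('pdfjs-dist')
const { createCanvas } = require('canvas');

async function firstPageToBase64(url) {
  const doc = await pdfjs.getDocument(url).promise;
  const page = await doc.getPage(1);
  const viewport = page.getViewport({ scale: 1.5 });        // older API: page.getViewport(1.5)
  const canvas = createCanvas(viewport.width, viewport.height);
  await page.render({ canvasContext: canvas.getContext('2d'), viewport }).promise;
  // toDataURL returns "data:image/png;base64,..."; strip the prefix to keep only the payload
  return canvas.toDataURL('image/png').split(',')[1];
}

firstPageToBase64('https://mozilla.github.io/pdf.js/web/compressed.tracemonkey-pldi-09.pdf')
  .then(b64 => console.log('got', b64.length, 'base64 characters'))
  .catch(console.error);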
stackoverflow
{ "language": "en", "length": 190, "provenance": "stackexchange_0000F.jsonl.gz:857806", "question_score": "9", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44520248" }
d21e00e6a0dae8f9e35c1b5097d365b46baf80dc
Stackoverflow Stackexchange Q: Hibernate -validator group sequence provider getDefaultSequenceProvider gets null as input I am using the hibernate validator group sequence and want to execute the groups in a sequence based on business rules. But the input to the groupSequenceProvider for its getValidationGroups is always null, and hence custom sequence never gets added. My request object: @GroupSequenceProvider(BeanSequenceProvider.class) public class MyBean { @NotEmpty private String name; @NotNull private MyType type; @NotEmpty(groups = Special.class) private String lastName; // Getters and setters } Enum type: public enum MyType { FIRST, SECOND } My custom sequence provider: public class BeanSequenceProvider implements DefaultGroupSequenceProvider<MyBean> { @Override public List<Class<?>> getValidationGroups(MyBean object) { final List<Class<?>> classes = new ArrayList<>(); classes.add(MyBean.class); if (object != null && object.getType() == MyType.SECOND) { classes.add(Special.class); } return classes; } } Group annotation: public interface Special { } When I execute the above code, I get the input MyBean object as null and cannot add the custom sequence. What am I missing? I am using hibernate-validator version as 5.4.1.Final
Q: Hibernate -validator group sequence provider getDefaultSequenceProvider gets null as input I am using the hibernate validator group sequence and want to execute the groups in a sequence based on business rules. But the input to the groupSequenceProvider for its getValidationGroups is always null, and hence custom sequence never gets added. My request object: @GroupSequenceProvider(BeanSequenceProvider.class) public class MyBean { @NotEmpty private String name; @NotNull private MyType type; @NotEmpty(groups = Special.class) private String lastName; // Getters and setters } Enum type: public enum MyType { FIRST, SECOND } My custom sequence provider: public class BeanSequenceProvider implements DefaultGroupSequenceProvider<MyBean> { @Override public List<Class<?>> getValidationGroups(MyBean object) { final List<Class<?>> classes = new ArrayList<>(); classes.add(MyBean.class); if (object != null && object.getType() == MyType.SECOND) { classes.add(Special.class); } return classes; } } Group annotation: public interface Special { } When I execute the above code, I get the input MyBean object as null and cannot add the custom sequence. What am I missing? I am using hibernate-validator version as 5.4.1.Final
stackoverflow
{ "language": "en", "length": 163, "provenance": "stackexchange_0000F.jsonl.gz:857819", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44520306" }
867b955b2caba3ceb0ac1f82b5b94dd7e128a6e8
Stackoverflow Stackexchange Q: ReSharper Error: Cannot convert instance argument type 'System.Security.Principal.IPrincipal In a .NET Core class project, I have implemented IClaimsService of IdentityServer4.Services. I have the following code in that class protected virtual IEnumerable<Claim> GetStandardSubjectClaims(IPrincipal subject) { var claims = new List<Claim> { new Claim(JwtClaimTypes.Subject, subject.GetSubjectId()), new Claim(JwtClaimTypes.AuthenticationTime, subject.GetAuthenticationTimeEpoch().ToString(), ClaimValueTypes.Integer), new Claim(JwtClaimTypes.IdentityProvider, subject.GetIdentityProvider()), }; claims.AddRange(subject.GetAuthenticationMethods()); return claims; } In the above code subject is marked with a red curly underline, and the error is as follows Cannot convert instance argument type 'System.Security.Principal.IPrincipal [System.Security.Principal, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a]' to 'System.Security.Principal.IPrincipal [System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]' Only ReSharper shows this as an error, and I am not sure how to fix it. The error also suppresses other warnings; only after filtering it out does ReSharper show the other warnings
Q: ReSharper Error: Cannot convert instance argument type 'System.Security.Principal.IPrincipal In a .NET Core class project, I have implemented IClaimsService of IdentityServer4.Services. I have the following code in that class protected virtual IEnumerable<Claim> GetStandardSubjectClaims(IPrincipal subject) { var claims = new List<Claim> { new Claim(JwtClaimTypes.Subject, subject.GetSubjectId()), new Claim(JwtClaimTypes.AuthenticationTime, subject.GetAuthenticationTimeEpoch().ToString(), ClaimValueTypes.Integer), new Claim(JwtClaimTypes.IdentityProvider, subject.GetIdentityProvider()), }; claims.AddRange(subject.GetAuthenticationMethods()); return claims; } In the above code subject is marked with a red curly underline, and the error is as follows Cannot convert instance argument type 'System.Security.Principal.IPrincipal [System.Security.Principal, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a]' to 'System.Security.Principal.IPrincipal [System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]' Only ReSharper shows this as an error, and I am not sure how to fix it. The error also suppresses other warnings; only after filtering it out does ReSharper show the other warnings
stackoverflow
{ "language": "en", "length": 118, "provenance": "stackexchange_0000F.jsonl.gz:857823", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44520318" }
21f74b0b786f74b4e6f67e0465ae6d0ad6ea3377
Stackoverflow Stackexchange Q: Can there be a race condition between __has_include() and the subsequent #include? Consider the following C++1z code that uses __has_include(): #if __has_include(<optional>) # include <optional> # define have_optional 1 #else # define have_optional 0 #endif Can there be a race condition between __has_include(<optional>) and the subsequent #include <optional> or does the standard guarantee a race-free behavior? For example, in an (improbable) situation where the header file gets deleted right after the __has_include() check, the #include would unexpectedly fail. A: While I would argue that it's very much an implementation-specific issue, this #include reference says A __has_include result of 1 only means that a header or source file with the specified name exists. It does not mean that the header or source file, when included, would not cause an error or would contain anything useful. So you should not count on a subsequent #include directive succeeding. The above linked reference actually continues the above quote with mentioning that a compiler with both C++14 and C++17 modes could have __has_include as an extension of its C++14 mode, with an example of using just <optional> could lead to __has_include(<optional>) succeeding in C++14 mode but the actual #include failing.
Q: Can there be a race condition between __has_include() and the subsequent #include? Consider the following C++1z code that uses __has_include(): #if __has_include(<optional>) # include <optional> # define have_optional 1 #else # define have_optional 0 #endif Can there be a race condition between __has_include(<optional>) and the subsequent #include <optional> or does the standard guarantee a race-free behavior? For example, in an (improbable) situation where the header file gets deleted right after the __has_include() check, the #include would unexpectedly fail. A: While I would argue that it's very much an implementation-specific issue, this #include reference says A __has_include result of 1 only means that a header or source file with the specified name exists. It does not mean that the header or source file, when included, would not cause an error or would contain anything useful. So you should not count on a subsequent #include directive succeeding. The above linked reference actually continues the above quote with mentioning that a compiler with both C++14 and C++17 modes could have __has_include as an extension of its C++14 mode, with an example of using just <optional> could lead to __has_include(<optional>) succeeding in C++14 mode but the actual #include failing.
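A small compilable illustration of the usual defensive pattern; it also guards the check itself with defined(__has_include) for compilers that lack the operator, which is an addition beyond the question's snippet:

#include <cstdio>

#if defined(__has_include)
#  if __has_include(<optional>)
#    include <optional>
#    define HAVE_OPTIONAL 1
#  endif
#endif
#ifndef HAVE_OPTIONAL
#  define HAVE_OPTIONAL 0
#endif

int main() {
    // The check and the #include happen in the same preprocessing run of the
    // same translation unit, so in practice the window for the file to
    // disappear between the two is tiny - but, as noted above, not guaranteed.
    std::printf("have <optional>: %d\n", HAVE_OPTIONAL);
}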
stackoverflow
{ "language": "en", "length": 196, "provenance": "stackexchange_0000F.jsonl.gz:857825", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44520327" }
75e8ed7111a3dfa5cdfaea721969cc36254026a3
Stackoverflow Stackexchange Q: how to publish comment to a post using facebook graph api explorer? I am using the Facebook Graph API Explorer to explore and test different things, but even after extensive research I have no idea how to post a comment against a post. I am doing this: (screenshot omitted). This is the second try: (screenshot omitted).
Q: how to publish comment to a post using facebook graph api explorer? I am using the Facebook Graph API Explorer to explore and test different things, but even after extensive research I have no idea how to post a comment against a post. I am doing this: (screenshot omitted). This is the second try: (screenshot omitted).
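For reference, a hedged sketch of the request the Explorer would issue: comments are published with a POST to /{post-id}/comments carrying a message field. The IDs, token and API version below are placeholders, and the permission required depends on whether a user or page token is used:

POST https://graph.facebook.com/v2.9/{POST_ID}/comments
Content-Type: application/x-www-form-urlencoded

message=Nice+post!&access_token={ACCESS_TOKEN}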
stackoverflow
{ "language": "en", "length": 53, "provenance": "stackexchange_0000F.jsonl.gz:857865", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44520431" }
e0be8a3f0fe2f073979a13636234dc909357378b
Stackoverflow Stackexchange Q: Entity Framework : Use stored procedure to return raw table result Is it possible in EF to use stored procedures to return raw DataTable/DataSet like in classic ADO.net instead of returning a mapped/translated entity? A: EF is built on top of ADO.NET, so whenever you need to you can directly access the DbConnection for a DbContext and use it directly. Eg using (var db = new MyDbContext()) { db.Database.Connection.Open(); var con = (SqlConnection)db.Database.Connection; var cmd = new SqlCommand("exec MyProc", con); DataTable dt = new DataTable(); using (var rdr = cmd.ExecuteReader()) { dt.Load(rdr); } //. . .
Q: Entity Framework : Use stored procedure to return raw table result Is it possible in EF to use stored procedures to return raw DataTable/DataSet like in classic ADO.net instead of returning a mapped/translated entity? A: EF is built on top of ADO.NET, so whenever you need to you can directly access the DbConnection for a DbContext and use it directly. Eg using (var db = new MyDbContext()) { db.Database.Connection.Open(); var con = (SqlConnection)db.Database.Connection; var cmd = new SqlCommand("exec MyProc", con); DataTable dt = new DataTable(); using (var rdr = cmd.ExecuteReader()) { dt.Load(rdr); } //. . . A: Yes it is possible. Below I described how I did this. A stored procedure is part of your database. Therefore it is best to add the stored procedure to the class which accesses your database and creates the database model: your dbContext. public class MyDbContext : DbContext { // TODO: add DbSet properties #region stored procedures // TODO (1): add a function that calls the stored procedure // TODO (2): add a function to check if the stored procedure exists // TODO (3): add a function that creates the stored procedure // TODO (4): make sure the stored procedure is created when the database is created #endregion stored procedure } TODO (1): Procedure that calls the stored procedure: private const string MyStoredProcedureName = ...; private const string myParamName1 = ...; private const string myParamName2 = ...; public void CallMyStoredProcedure(MyParameters parameters) { object[] functionParameters = new object[] { new SqlParameter(myParamName1, parameters.GetParam1Value(), new SqlParameter(myParamName2, parameters.GetParam2Value(), ... // if needed add more parameters }; const string sqlCommand = @"Exec " + MyStoredProcedureName + " @" + myParamName1 + ", @" + myParamName2 ... // if needed add more parameters ; this.Database.ExecutSqlCommand(sqlComman, functionParameters); } TODO (2) Check if stored procedure exists // returns true if MyStoredProcedure exists public bool MyStoredProcedureExists() { return this.ProcedureExists(MyStoredProcedureName); } // returns true if stored procedure with procedureName exists public bool StoredProcedureExists(string procedureName) { object[] functionParameters = new object[] { new SqlParameter(@"procedurename", procedureName), }; string query = @"select [name] from sys.procedures where name= @procedurename"; return this.Database.SqlQuery<string>(query, functionParameters) .AsEnumerable() // bring to local memory .Where(item => item == procedureName) // take only the procedures with desired name .Any(); // true if there is such a procedure } TODO (3) Create the stored procedure: public void CreateProcedureUpdateUsageCosts(bool forceRecreate) { bool storedProcedureExists = this.MyStoredProcedureExists(); // only create (or update) if not exists or if forceRecreate: if (!storedProcedureExists || forceRecreate) { // create or alter: Debug.WriteLine("Create stored procedures"); // use a StringBuilder to create the SQL command that will create the // stored procedure var x = new StringBuilder(); // ALTER or CREATE? if (!storedProcedureExists) { x.Append(@"CREATE"); } else { x.Append(@"ALTER"); } // procedure name: x.Append(@" procedure "); x.AppendLine(MyStoredProcedureName); // parameters: (only as an example) x.AppendLine(@"@ReportPeriod int,"); x.AppendLine(@"@CustomerContractId bigint,"); x.AppendLine(@"@CallType nvarChar(80),"); // etc. 
// code x.AppendLine(@"as"); x.AppendLine(@"begin"); // only as example some of my procedure x.AppendLine(@"Merge [usagecosts]"); x.AppendLine(@"Using (Select @ReportPeriod as reportperiod,"); x.AppendLine(@" @CustomerContractId as customercontractId,"); ... x.AppendLine(@"end"); // execute the created SQL command this.Database.ExecuteSqlCommand(x.ToString()); } } TODO (4) Make sure the stored procedure is created when the database is created In MyDbContext: protected override void OnModelCreating(DbModelBuilder modelBuilder) { // TODO: if needed add fluent Api to build model // create stored procedure this.CreateProcedureUpdateUsageCosts(true); } Usage: using (var dbContext = new MyDbContext(...)) { MyParameters parms = FillMyParameters(); dbContext.CallMyStoredProcedure(parms); } A: Check this: https://stackoverflow.com/a/73263659/1898992. You just need to replace the sql query with your stored procedure.
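A variant of the first answer that lets ADO.NET handle the procedure call and its parameters instead of concatenating an "exec ..." string; the procedure name and parameter are placeholders:

using System.Data;
using System.Data.Common;

public static class ProcRunner
{
    public static DataTable RunProcAsDataTable(MyDbContext db)
    {
        using (DbCommand cmd = db.Database.Connection.CreateCommand())
        {
            cmd.CommandText = "MyProc";                     // placeholder procedure name
            cmd.CommandType = CommandType.StoredProcedure;

            DbParameter p = cmd.CreateParameter();
            p.ParameterName = "@CustomerId";                // placeholder parameter
            p.Value = 42;
            cmd.Parameters.Add(p);

            db.Database.Connection.Open();
            var table = new DataTable();
            using (DbDataReader reader = cmd.ExecuteReader())
            {
                table.Load(reader);
            }
            return table;
        }
    }
}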
stackoverflow
{ "language": "en", "length": 569, "provenance": "stackexchange_0000F.jsonl.gz:857895", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44520529" }
edf1722e38807ba29cdbc2960634230fe21480a2
Stackoverflow Stackexchange Q: R markdown, hiding the library output I am working on an R Markdown presentation. I am trying to show the usage of the cast function. However, since the reshape package is necessary to run the cast function, I need to load the reshape library as below. {r package_options, echo=TRUE} library(reshape) cast(datam, isim~Ay, value="Sirano") However, after knitting, I get the package startup output shown in the picture; I just need to see the name of the library on the screen, which is library(reshape), and I still want it loaded so the cast function can run, but I don't want to see the package messages. Could someone help with that? A: If you want to hide all these messages you have to put: ```{r,warning=FALSE,message=FALSE} library(reshape) ```
Q: R markdown, hiding the library output I am working on an R Markdown presentation. I am trying to show the usage of the cast function. However, since the reshape package is necessary to run the cast function, I need to load the reshape library as below. {r package_options, echo=TRUE} library(reshape) cast(datam, isim~Ay, value="Sirano") However, after knitting, I get the package startup output shown in the picture; I just need to see the name of the library on the screen, which is library(reshape), and I still want it loaded so the cast function can run, but I don't want to see the package messages. Could someone help with that? A: If you want to hide all these messages you have to put: ```{r,warning=FALSE,message=FALSE} library(reshape) ```
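An alternative worth knowing, shown as a sketch that reuses the question's data (datam, isim, Ay, Sirano are the question's own names): silence only the library() call itself, so any other messages the chunk produces still appear:

suppressPackageStartupMessages(library(reshape))
cast(datam, isim ~ Ay, value = "Sirano")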
stackoverflow
{ "language": "en", "length": 127, "provenance": "stackexchange_0000F.jsonl.gz:857943", "question_score": "19", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44520686" }
ec904cca70f01db7686ebbcdeb11a654d2b9a726
Stackoverflow Stackexchange Q: How to display props or state in alert? How to display props or state in alert box ? how to display it on alert box as we print on console. alert('Props',this.props); A: Alert takes only one parameter. If you want to do something like that you should write: alert('props' + this.props);
Q: How to display props or state in alert? How to display props or state in alert box ? how to display it on alert box as we print on console. alert('Props',this.props); A: Alert takes only one parameter. If you want to do something like that you should write: alert('props' + this.props); A: You need to use JSON.stringify to convert it to string. alert('props: ' + JSON.stringify(this.props));
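A small follow-up: the two extra JSON.stringify arguments make the alert readable when props or state are nested objects:

// null = no replacer function, 2 = indent with two spaces
alert('props: ' + JSON.stringify(this.props, null, 2));
alert('state: ' + JSON.stringify(this.state, null, 2));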
stackoverflow
{ "language": "en", "length": 67, "provenance": "stackexchange_0000F.jsonl.gz:857965", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44520745" }
f4a495b56bac7935cc0d1e796fb27aceed37b237
Stackoverflow Stackexchange Q: how to close all ionic 2 modals? I need to close all current modal popups and log out the user in ionic 2 application when the device goes to idle. I used following methods to close popups in the home component. this.viewController.dismiss().then(_ => { console.log("modal dismiss"); }).catch(error => { console.log(error) }); and this.navController.popAll().then(_ => { console.log("modal dismiss"); }).catch(error => { console.log(error); }) But it throws following error You can't remove all pages in the navigation stack. nav.pop() is probably called too many times. and does not close any popup. Anyone knows how to do it? A: viewController.dismiss() only closes the current Modal. Meaning you will have to keep a reference to all open modals in and call dismiss() on each one. You can set use navController.setRoot(LoginPage) (link) to show the login-page.
Q: how to close all ionic 2 modals? I need to close all current modal popups and log out the user in ionic 2 application when the device goes to idle. I used following methods to close popups in the home component. this.viewController.dismiss().then(_ => { console.log("modal dismiss"); }).catch(error => { console.log(error) }); and this.navController.popAll().then(_ => { console.log("modal dismiss"); }).catch(error => { console.log(error); }) But it throws following error You can't remove all pages in the navigation stack. nav.pop() is probably called too many times. and does not close any popup. Anyone knows how to do it? A: viewController.dismiss() only closes the current Modal. Meaning you will have to keep a reference to all open modals in and call dismiss() on each one. You can set use navController.setRoot(LoginPage) (link) to show the login-page. A: This worked for me constructor(public navCtrl: NavController,public ionicApp: IonicApp){} this.viewCtrl.dismiss().then(_=>{ let activePortal = this.ionicApp._modalPortal.getActive() if (activePortal) { activePortal.dismiss(); //can use another .then here } }); A: We can create a recursive function which takes a current active modal reference and dismiss that modal after that we will call on the that function again in dismiss() event. Here is the code , recursive function public dismissAllModal () { let activeModal = this.ionicApp._modalPortal.getActive(); if (activeModal) { activeModal.dismiss().then(() => { this.dismissAllModal() }); } } Call the function where we should remove all modals (I placed in login page in constructor) this.viewCtrl.dismiss().then(_ => { this.dismissAllModal() }) A: Ok so I had the same issue and I solved it this way. The solution is to store all the modal instances using a service in an array. Then use a loop and dismiss all the modals refering them from that array. modal.html <ion-button (click)="openModal()"> Open Modal <ion-button> <ion-button (click)="close()"> Close Modals <ion-button> modal.service.ts modalInst=[]; i=0; storeModal(x) { this.modalInst[this.i]=x; this.i++; } modal.ts openModal() { var modal = await this.viewCtrl.create({ component: ModalPage }); this.service.storeModal(modal);// storing modal instances in an array return await modal.present(); } close() { for(var i=0; i< this.service.modalInst.length; i++) { this.service.modalInst[i].dismiss(); } }
stackoverflow
{ "language": "en", "length": 330, "provenance": "stackexchange_0000F.jsonl.gz:858008", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44520883" }
b335f42ae4176d924ad996ffba27e4ff9bb69010
Stackoverflow Stackexchange Q: can't get node-ffi module working just for testing purposes I created a small DLL in Delphi. The code being: library MyDll; uses SysUtils, Classes, Vcl.Dialogs; function DllMessage(const echo: string): string; export; begin //ShowMessage('Hello world from a Delphi DLL') ; Result := 'Echo: ' + echo; end; exports DllMessage; begin end I just want to run something as simple as this with this nodejs code: var ffi = require('ffi'); console.log("1"); var mylib = ffi.Library('MyDll', {'DllMessage': [ 'string', [ 'string' ] ] }); console.log("2"); var outstring = mylib.DllMessage('abc'); console.log("3" + outstring); The problem is that I see "1" and "2" on the console and nothing else. Please help, any idea? Thanks in advance A: Pass your string data as a buffer: var message = Buffer.from('abc') var outstring = mylib.DllMessage(message); console.log("3" + outstring);
Q: can't get node-ffi module working just for testing purposes I created a small DLL in Delphi. The code being: library MyDll; uses SysUtils, Classes, Vcl.Dialogs; function DllMessage(const echo: string): string; export; begin //ShowMessage('Hello world from a Delphi DLL') ; Result := 'Echo: ' + echo; end; exports DllMessage; begin end I just want to run something as simple as this with this nodejs code: var ffi = require('ffi'); console.log("1"); var mylib = ffi.Library('MyDll', {'DllMessage': [ 'string', [ 'string' ] ] }); console.log("2"); var outstring = mylib.DllMessage('abc'); console.log("3" + outstring); The problem is that I see "1" and "2" on the console and nothing else. Please help, any idea? Thanks in advance A: Pass your string data as a buffer: var message = Buffer.from('abc') var outstring = mylib.DllMessage(message); console.log("3" + outstring);
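A hedged sketch of what is probably the underlying issue: a managed Delphi string cannot safely cross a C-style DLL boundary, while node-ffi's 'string' type expects a plain char pointer. Exporting PAnsiChar with the cdecl convention (node-ffi's default) is one way to line the two sides up; treat the details below as assumptions rather than a verified fix:

library MyDll;

uses
  SysUtils;

var
  LastResult: AnsiString;   // keeps the returned buffer alive after the call (not thread-safe)

function DllMessage(echo: PAnsiChar): PAnsiChar; cdecl;
begin
  LastResult := 'Echo: ' + AnsiString(echo);
  Result := PAnsiChar(LastResult);
end;

exports
  DllMessage;

begin
end.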
stackoverflow
{ "language": "en", "length": 129, "provenance": "stackexchange_0000F.jsonl.gz:858031", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44520948" }
a15911b6328927c7f9be68e0fec72c4a4a1c8caa
Stackoverflow Stackexchange Q: Uncaught Error : cannot find module I don't know why it cannot find module. I think loader in webpack.config.js is wrong but I don't know how to fix it. Uncaught Error: Cannot find module "./components/Images" at webpackMissingModule (app.js:3) at Object._hyphenPattern (app.js:3) at webpack_require (bootstrap 8f180ae…:19) at module.exports (bootstrap 8f180ae…:65) at bootstrap 8f180ae…:65 webpack.config.js var webpack = require('webpack') var path = require('path') module.exports = { entry: { app: './src/app.js' }, output: { filename: 'build/bundle.js', sourceMapFilename: 'build/bundle.map' }, devtool: '#source-map', module: { loaders: [ { loader: 'babel-loader', exclude: /(node_modules)/, query: { presets: ['react', 'es2015'] } } ] } } ./src/components/Images.js import React, { Component } from 'react' class Images extends Component { render() { return ( <div> Images Component </div> ) } } export default Images ./src/app.js import React, { Component } from 'react' import ReactDOM from 'react-dom' import Images from './components/Images' class App extends Component { render() { return ( <div> This is the react App! more stuff!! <Images /> </div> ) } } ReactDOM.render(<App />, document.getElementById('root'))
Q: Uncaught Error : cannot find module I don't know why it cannot find module. I think loader in webpack.config.js is wrong but I don't know how to fix it. Uncaught Error: Cannot find module "./components/Images" at webpackMissingModule (app.js:3) at Object._hyphenPattern (app.js:3) at webpack_require (bootstrap 8f180ae…:19) at module.exports (bootstrap 8f180ae…:65) at bootstrap 8f180ae…:65 webpack.config.js var webpack = require('webpack') var path = require('path') module.exports = { entry: { app: './src/app.js' }, output: { filename: 'build/bundle.js', sourceMapFilename: 'build/bundle.map' }, devtool: '#source-map', module: { loaders: [ { loader: 'babel-loader', exclude: /(node_modules)/, query: { presets: ['react', 'es2015'] } } ] } } ./src/components/Images.js import React, { Component } from 'react' class Images extends Component { render() { return ( <div> Images Component </div> ) } } export default Images ./src/app.js import React, { Component } from 'react' import ReactDOM from 'react-dom' import Images from './components/Images' class App extends Component { render() { return ( <div> This is the react App! more stuff!! <Images /> </div> ) } } ReactDOM.render(<App />, document.getElementById('root'))
stackoverflow
{ "language": "en", "length": 167, "provenance": "stackexchange_0000F.jsonl.gz:858033", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44520958" }
8d35be8043fefc5c7bed3fc778360bf116482c33
Stackoverflow Stackexchange Q: Regex to exclude non-word Characters but leave spaces I am trying to write a Regex to stop a user from entering invalid characters into a postcode field. From this link I managed to exclude all "Non-word" characters like so. Regex regex = new Regex(@"[\W_]+"); string cleanText = regex.Replace(messyText, "").ToUpper(); But this also excludes the "Space" characters. I am sure this is possible but I find regex very confusing! Can someone help out with an explanation of the regex pattern used? A: You can invert your character class to make it a negated character class like this: [^\sa-zA-Z0-9]+ This will match any character except a whitespace or alphanumerical character. RegEx Demo (as this is not a .NET regex)
Q: Regex to exclude non-word Characters but leave spaces I am trying to write a Regex to stop a user from entering invalid characters into a postcode field. From this link I managed to exclude all "Non-word" characters like so. Regex regex = new Regex(@"[\W_]+"); string cleanText = regex.Replace(messyText, "").ToUpper(); But this also excludes the "Space" characters. I am sure this is possible but I find regex very confusing! Can someone help out with an explanation of the regex pattern used? A: You can invert your character class to make it a negated character class like this: [^\sa-zA-Z0-9]+ This will match any character except a whitespace or alphanumerical character. RegEx Demo (as this is not a .NET regex) A: Assuming valid postcodes comprise only alphanumeric characters, you may replace with an empty string anything but alphanumerics and spaces: Regex regex = new Regex(@"[^a-zA-Z0-9\s]"); string cleanText = regex.Replace(messyText, "").ToUpper(); Please note that \s includes tabs, newlines and some other non-printable characters. You may not want to consider them valid. If this is the case, just list the whitespace character literally: [^a-zA-Z0-9 ] A: You may use character class subtraction: [\W_-[\s]]+ It matches one or more non-word and underscore symbols with the exception of any whitespace characters. To exclude just horizontal whitespace characters use [\p{Zs}\t] in the subtraction part: [\W_-[\p{Zs}\t]]+ To exclude just vertical whitespace characters (line break chars) use [\n\v\f\r\u0085\u2028\u2029] in the subtraction part: [\W_-[\n\v\f\r\u0085\u2028\u2029]]+ A: This regex will capture everything except letters, digits, and spaces. [^\w\s\d]|_ The ^ inside the [ ] will cause the regex to look for everything except letters, digits, and spaces.
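A compilable sketch of the negated-class approach from the answers, applied to a made-up postcode string:

using System;
using System.Text.RegularExpressions;

class Program
{
    static void Main()
    {
        string messyText = "sw1a 1aa_!";   // made-up input
        // Remove everything that is not a letter, digit or space, then upper-case.
        string cleanText = Regex.Replace(messyText, @"[^a-zA-Z0-9 ]+", "").ToUpper();
        Console.WriteLine(cleanText);       // SW1A 1AA
    }
}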
stackoverflow
{ "language": "en", "length": 264, "provenance": "stackexchange_0000F.jsonl.gz:858043", "question_score": "7", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44520990" }
922150a0517f8ae7465212ede46c9efc717bcfb8
Stackoverflow Stackexchange Q: How can we achieve multi-tenant option in the Spring scheduler? We have implemented a multi-tenant option in our application. Each tenant has its own separate DB. Using an application filter I can determine and assign the tenant from the request. How can we do the same in the Spring Boot scheduler? @Component public class Scheduler { @Scheduled(fixedRate = 5000) public void reminderEmail() { // how can we fetch the data from the right tenant DB? // since there is no request, how can we get the tenant name for fetching the right tenant DB? } } Please let me know how we can achieve this. A: Something like: ... public class TenantContext { private static final ThreadLocal<String> CONTEXT = new ThreadLocal<>(); public static void setTenantId(String tenantId) { CONTEXT.set(tenantId); } public static String getTenantId() { return CONTEXT.get(); } ... } then your Filter or Spring MVC interceptor could do this just before chaining the request: String tenantId = request.getHeader(TENANT_HEADER_NAME); TenantContext.setTenantId(tenantId); and reset it on the way back: TenantContext.setTenantId(null); To use it in a thread not related to an HTTP request you could just do: TenantContext.setTenantId("tenant_1"); More can be found in my blog post Multi-tenant applications using Spring Boot, JPA, Hibernate and Postgres
Q: How can we achieve multi-tenant option in the Spring scheduler? We have implemented a multi-tenant option in our application. Each tenant has its own separate DB. Using an application filter I can determine and assign the tenant from the request. How can we do the same in the Spring Boot scheduler? @Component public class Scheduler { @Scheduled(fixedRate = 5000) public void reminderEmail() { // how can we fetch the data from the right tenant DB? // since there is no request, how can we get the tenant name for fetching the right tenant DB? } } Please let me know how we can achieve this. A: Something like: ... public class TenantContext { private static final ThreadLocal<String> CONTEXT = new ThreadLocal<>(); public static void setTenantId(String tenantId) { CONTEXT.set(tenantId); } public static String getTenantId() { return CONTEXT.get(); } ... } then your Filter or Spring MVC interceptor could do this just before chaining the request: String tenantId = request.getHeader(TENANT_HEADER_NAME); TenantContext.setTenantId(tenantId); and reset it on the way back: TenantContext.setTenantId(null); To use it in a thread not related to an HTTP request you could just do: TenantContext.setTenantId("tenant_1"); More can be found in my blog post Multi-tenant applications using Spring Boot, JPA, Hibernate and Postgres A: If you are using a multitenant setup similar to the one at this link: https://www.ricston.com/blog/multitenancy-jpa-spring-hibernate-part-1/ and/or you have a default tenant, the easiest way to accomplish this is to add a static method to your CurrentTenantIdentifierResolverImpl class that changes the default tenant for asynchronous tasks that have no session. This is because the scheduled task will always use the default tenant. CurrentTenantIdentifierResolverImpl.java private static String DEFAULT_TENANTID = "tenantId1"; public static void setDefaultTenantForScheduledTasks(String tenant) { DEFAULT_TENANTID = tenant; } ScheduledTask.java @Scheduled(fixedRate=20000) public void runTasks() { CurrentTenantIdentifierResolverImpl.setDefaultTenantForScheduledTasks("tenantId2"); //do something CurrentTenantIdentifierResolverImpl.setDefaultTenantForScheduledTasks("tenantId1"); } Then after the scheduled task is complete change it back. That is how we accomplished it and it works for our needs. A: If you're using a request to determine which tenant is currently active and using the tenant to determine database connections, then it's impossible to do anything involving the database from a scheduled task, since the scheduled task has no tenant id
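Building on the ThreadLocal TenantContext from the first answer, a sketch of a scheduled job that simply iterates over the known tenants; the tenant ids and the use of Java 9+ List.of are assumptions:

import java.util.List;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class ReminderScheduler {

    private static final List<String> TENANTS = List.of("tenant_1", "tenant_2");

    @Scheduled(fixedRate = 5000)
    public void reminderEmail() {
        for (String tenant : TENANTS) {
            try {
                TenantContext.setTenantId(tenant);
                // ... query this tenant's database and send its reminders ...
            } finally {
                TenantContext.setTenantId(null);   // never leak the id into the next iteration
            }
        }
    }
}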
stackoverflow
{ "language": "en", "length": 350, "provenance": "stackexchange_0000F.jsonl.gz:858055", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44521035" }
90a26247b15e4379421a9612a922b57f00c91afa
Stackoverflow Stackexchange Q: PDF Viewer in Qt I am trying to create a PDF viewer inside Qt using Adobe Reader's ActiveX control, but that requires Adobe Reader to be installed. Is it possible to create a PDF viewer without installing Adobe Reader? A: There is the QtPdf module in Qt Labs. It comes with a Widgets-based PdfViewer example, which works out of the box. It can be easily incorporated into any Qt app - we are incorporating it into one of our QML applications by creating a wrapper. Qt blog announcement here.
Q: PDF Viewer in Qt I am trying to create a PDF viewer inside Qt using Adobe Reader's ActiveX control, but that requires Adobe Reader to be installed. Is it possible to create a PDF viewer without installing Adobe Reader? A: There is the QtPdf module in Qt Labs. It comes with a Widgets-based PdfViewer example, which works out of the box. It can be easily incorporated into any Qt app - we are incorporating it into one of our QML applications by creating a wrapper. Qt blog announcement here.
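A bare-bones sketch of rendering a page with QPdfDocument from that labs module; class and method names are taken from the Qt 5.1x QtPdf API and may differ in the version you build against, and the file path is a placeholder:

#include <QApplication>
#include <QLabel>
#include <QPixmap>
#include <QPdfDocument>   // requires the Qt PDF (labs) module in the build

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QPdfDocument doc;
    doc.load("example.pdf");                    // placeholder path

    QLabel label;
    // Render page 0 to an image and show it; a real viewer would use the
    // PdfViewer example's widget instead of a plain QLabel.
    label.setPixmap(QPixmap::fromImage(doc.render(0, QSize(600, 800))));
    label.show();

    return app.exec();
}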
stackoverflow
{ "language": "en", "length": 87, "provenance": "stackexchange_0000F.jsonl.gz:858064", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44521053" }
4db826672a1726a95aa083b1090ac9c664f565a5
Stackoverflow Stackexchange Q: Attribute selection in h2o I am a beginner with h2o and I want to know whether there are any attribute (feature) selection capabilities in the h2o framework that can be applied to H2OFrames. A: No, there are currently no feature selection functions in H2O -- my advice would be to use Lasso regression (in H2O this means use GLM with alpha = 1.0) to do the feature selection, or simply allow whatever machine learning algorithm (e.g. GBM) you are planning to use to use all the features (they'll tend to ignore the bad ones, but it could still degrade performance of the algorithm to have bad features in the training data). If you'd like, you can make a feature request by filling out a ticket on the H2O-3 JIRA. This seems like a nice feature to have.
Q: Attribute selection in h2o I am a beginner with h2o and I want to know whether there are any attribute (feature) selection capabilities in the h2o framework that can be applied to H2OFrames. A: No, there are currently no feature selection functions in H2O -- my advice would be to use Lasso regression (in H2O this means use GLM with alpha = 1.0) to do the feature selection, or simply allow whatever machine learning algorithm (e.g. GBM) you are planning to use to use all the features (they'll tend to ignore the bad ones, but it could still degrade performance of the algorithm to have bad features in the training data). If you'd like, you can make a feature request by filling out a ticket on the H2O-3 JIRA. This seems like a nice feature to have. A: In my opinion, yes. My way is to use AutoML to train on your data. After training, you get a lot of models. Use the h2o.get_model method or the H2O web UI to inspect a model you like; from it you can get the variable importances frame, then pick your features.
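A sketch of the Lasso-style selection the first answer suggests, assuming a local CSV with a 'target' column; the file path and column names are placeholders:

import h2o
from h2o.estimators.glm import H2OGeneralizedLinearEstimator

h2o.init()
frame = h2o.import_file("train.csv")                      # placeholder dataset
predictors = [c for c in frame.columns if c != "target"]  # placeholder response name

# alpha = 1.0 gives a pure L1 (Lasso) penalty; lambda_search picks its strength.
glm = H2OGeneralizedLinearEstimator(alpha=1.0, lambda_search=True)
glm.train(x=predictors, y="target", training_frame=frame)

coefs = glm.coef()   # dict of column -> coefficient
selected = [name for name, value in coefs.items() if name != "Intercept" and value != 0]
print(selected)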
stackoverflow
{ "language": "en", "length": 182, "provenance": "stackexchange_0000F.jsonl.gz:858075", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44521080" }
714a24a428e825f657d82664732dddfccf757707
Stackoverflow Stackexchange Q: Android: How to launch emulator from command line? I am using Windows 10 (64 bit) and Android Studio 2.3. My Android emulator is located in folder: d:\Programs\Android\avd.android\avd\Nexus_4_4.avd\ Suppose I'm in folder D:\temp. How can I launch my emulator from folder D:\temp (from the PC's command line)? A: Steps: * *Copy the emulator folder path C:\Users\{User}\AppData\Local\Android\Sdk\emulator from your system *Add it to the Path system environment variable. *List available emulators: emulator -list-avds *Start the emulator: emulator -avd {myEmulator}
Q: Android: How to launch emulator from command line? I am using Windows 10 (64 bit) and Android Studio 2.3. My Android emulator is located in folder: d:\Programs\Android\avd.android\avd\Nexus_4_4.avd\ Suppose I'm in folder D:\temp. How from folder D:\temp I can launch my emulator (from command line of pc)? A: Step for : * *Copy this cd C:\Users{User}\AppData\Local\Android\Sdk\emulator from your system *Paste this on System Variable Path Setting. *List available emulators: emulator -list-avds *Start the emulator: emulator -avd {myEmulator} A: None of the above answers helped me on a windows 10 machine. This is what a window user needs to do: * *Go to the emulator folder: cd C:\Users\{User}\AppData\Local\Android\Sdk\emulator *List available emulators: emulator -list-avds *Start the emulator: emulator -avd {myEmulator} A: For WINDOWS only 1. first step to locate your AppData in your user folder if its hidden then in file-mangaer navigate to view-hidden items(check mark) cd C:\Users\UserName\AppData\Local\android\sdk\emulator 2. second step is to see avd(android virtual devices) available emulator -list-avds 3. Third steo is to launch avd emulator -avd Pixel_2_API_29 A: Open command prompt anywhere and use the following command * *To get the list of available emulator emulator -list-avds *To open a emulator emulator -avd Nexus_5X_API_23 A: Run the following command in cmd: * *Cd C:\Users\Username\Appdata\local\Android\Sdk\Emulator *.\emulator -avd Andro (Emulator Name) You can make a shortcut by using a bat file. Step 1 - Save the Path C:\Users\Username\AppData\Local\Android\Sdk\emulator in Environment variable. Step 2 - Then make a .bat file and write the command emaulator -avd (Emaulator Name) A: For Windows Users: * *Open CMD Then First Locate your emulator folder directory cd c:\Users\<Your name>\AppData\Local\Android\Sdk\tools emulator.exe -list-avds emulator.exe -avd emulator-name or emulator-Id That's It. you are done. A: I could only successfully run the command from the tools folder (Windows 10): cd %ANDROID_HOME%/tools To get the list of available virtual devices: emulator -list-avds To run it: emulator -avd Nexus_5X_API_24 You can then put this at a .bat file: cd %ANDROID_HOME%/tools emulator -avd YOUR_VIRTUAL_DEVICE_ID A: For Mac users, open Terminal and run below commands - * *Navigate to the emulator folder directory(where SDK is installed). In my case - $ cd /Users/username/Library/Android/sdk/emulator *Then check for available emulators - $ ./emulator -list-avds *And finally - $ ./emulator -avd **nameofthedevice** -netdelay none -netspeed full A: Simple steps i followed : * *cd ~/Library/Android/sdk/tools/ *./emulator -list-avds *Pick one of your available emulator name and run ./emulator -avd Nexus_5X_API_24 A: Emulator can also be opened from here * *Open the location C:\Users{User_Name}\AppData\Local\Android\Sdk\tools *Now, open cmd from the current location. *Type the command emulator.exe -avd {NameOfYourEmulator} This worked for me in Windows 10. To know the list of emulators. 
use the command emulator -list-avds A: * *Copy the emulator.exe file location and add it to the global environment variable so it is accessible globally on your system. All of the above may have worked before, but in 2020 emulator.exe is in a different folder: C:\Users\{username}\AppData\Local\Android\emulator for me it was C:\Users\sachin\AppData\Local\Android\emulator 2. Simply run in a terminal emulator -list-avds emulator -avd {emulator name} A: Try this way, it's fast and there is no need to append to the system path: Open "Run" with Win+R and put cmd /K "cd C:\Users\{username}\AppData\Local\Android\Sdk\emulator&&emulator @{device_name}" change {username} to your username and {device_name} to your device name A: 1. First, locate your AppData in your user folder; if it's hidden, then in File Manager navigate to View > Hidden items (check mark) cd C:\Users\UserName\AppData\Local\android\sdk\emulator 2. Second, list the AVDs (Android Virtual Devices) available: emulator -list-avds 3. Third, launch the AVD: emulator -avd Pixel_2_API_29
stackoverflow
{ "language": "en", "length": 565, "provenance": "stackexchange_0000F.jsonl.gz:858076", "question_score": "15", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44521081" }
cfe5ca2a87d6d9d8e7105a3b61fa2d070ab64889
Stackoverflow Stackexchange Q: elm-css: How to give a value to `opacity` I am playing around with elm-css. Most of the things work as I expect them. But I am not able to give a correct value to the Css.opacity function. Here is what I have tried: Css.opacity 0.5 which gives the error: Function `opacity` is expecting the argument to be: Css.Number compatible But it is: Float The Css.Number is a type alias in the following form: type alias Number compatible = { compatible | value : String, number : Compatible } But I don't understand how to create a valid value for the Css.opacity function... A: You can create input for opacity by using one of the "unitless" functions, like Css.int or Css.num. For example: -- 42% opaque translucent = Css.opacity (Css.num 0.42) It is "unitless" because the CSS property of opacity does not define a unit like px or percent.
Q: elm-css: How to give a value to `opacity` I am playing around with elm-css. Most of the things work as I expect them. But I am not able to give a correct value to the Css.opacity function. Here is what I have tried: Css.opacity 0.5 which gives the error: Function `opacity` is expecting the argument to be: Css.Number compatible But it is: Float The Css.Number is a type alias in the following form: type alias Number compatible = { compatible | value : String, number : Compatible } But I don't understand how to create a valid value for the Css.opacity function... A: You can create input for opacity by using one of the "unitless" functions, like Css.int or Css.num. For example: -- 42% opaque translucent = Css.opacity (Css.num 0.42) It is "unitless" because the CSS property of opacity does not define a unit like px or percent.
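For comparison, a couple more values built with the same unitless helpers (a minimal sketch; it assumes the same import Css as the snippets above, and the binding names are just illustrative):

-- assuming: import Css
fullyOpaque =
    Css.opacity (Css.int 1)

halfSeeThrough =
    Css.opacity (Css.num 0.5)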
stackoverflow
{ "language": "en", "length": 149, "provenance": "stackexchange_0000F.jsonl.gz:858090", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44521171" }
9fc0491cc1a01a5b453e0159a3e5dc7a5c051062
Stackoverflow Stackexchange Q: What is the difference between Coroutine, Coroutine2 and Fiber? There are 3 kinds of lightweight threads with manual low-latency context switching in Boost: * *Boost.Coroutine: http://www.boost.org/doc/libs/1_64_0/libs/coroutine/doc/html/index.html *Boost.Coroutine2: http://www.boost.org/doc/libs/1_64_0/libs/coroutine2/doc/html/index.html *Boost.Fiber: http://www.boost.org/doc/libs/1_64_0/libs/fiber/doc/html/index.html What is the difference between Coroutine1, Coroutine2 and Fiber in Boost? A: boost.coroutine is non-C++11 and therefore requires the use of a private API from boost.context (which is the reason it is deprecated). boost.coroutine2 and boost.fiber require C++11 and use callcc()/continuation (implements context switch, call-with-current-continuation) from boost.context. boost.coroutine and boost.coroutine2 implement coroutines, while boost.fiber provides fibers (== lightweight, cooperative userland threads, green threads, ...) with an API similar to std::thread. The difference between coroutines and fibers is described in N4024: Distinguishing coroutines and fibers - in short: fibers are switched by an internal scheduler while coroutines use no internal scheduler.
Q: What is the difference between Coroutine, Coroutine2 and Fiber? There are 3 kinds of lightweight threads with manual low-latency context switching in Boost: * *Boost.Coroutine: http://www.boost.org/doc/libs/1_64_0/libs/coroutine/doc/html/index.html *Boost.Coroutine2: http://www.boost.org/doc/libs/1_64_0/libs/coroutine2/doc/html/index.html *Boost.Fiber: http://www.boost.org/doc/libs/1_64_0/libs/fiber/doc/html/index.html What is the difference between Coroutine1, Coroutine2 and Fiber in Boost? A: boost.coroutine is non-C++11 and therefore requires the use of a private API from boost.context (which is the reason it is deprecated). boost.coroutine2 and boost.fiber require C++11 and use callcc()/continuation (implements context switch, call-with-current-continuation) from boost.context. boost.coroutine and boost.coroutine2 implement coroutines, while boost.fiber provides fibers (== lightweight, cooperative userland threads, green threads, ...) with an API similar to std::thread. The difference between coroutines and fibers is described in N4024: Distinguishing coroutines and fibers - in short: fibers are switched by an internal scheduler while coroutines use no internal scheduler.
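To make the scheduler distinction concrete, here is a minimal C++11 sketch (it assumes Boost with the coroutine2 and fiber libraries available and linked; the names and values are purely illustrative): the coroutine is resumed explicitly by its caller, while the fiber is handed to boost.fiber's internal scheduler.

#include <boost/coroutine2/all.hpp>
#include <boost/fiber/all.hpp>
#include <iostream>

int main() {
    // coroutine2: the caller drives every switch; no scheduler is involved
    boost::coroutines2::coroutine<int>::pull_type numbers(
        [](boost::coroutines2::coroutine<int>::push_type& yield) {
            for (int i = 0; i < 3; ++i) yield(i);  // suspend back to the caller
        });
    for (int n : numbers) std::cout << "coroutine produced " << n << '\n';

    // fiber: std::thread-like API; an internal scheduler decides what runs next
    boost::fibers::fiber f([] {
        std::cout << "hello from a fiber\n";
        boost::this_fiber::yield();  // hand control to the scheduler
    });
    f.join();
    return 0;
}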
stackoverflow
{ "language": "en", "length": 125, "provenance": "stackexchange_0000F.jsonl.gz:858091", "question_score": "15", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44521178" }
3e01d5cdf33c7094659fcf75a755406061eaa834
Stackoverflow Stackexchange Q: if let syntax in Python Coming from Swift I'm having a bit of a hard time working with .None type of Python. I have a few functions which might return None if the object I'm looking for is not found in the array. Then I have to nest my code as follows: varA = self.getVariableByName(Variables.varA) if varA is None: varB = self.getVariableByName(Variables.varB) varC = self.getVariableByName(Variables.varC) if varB is not None and varC is not None: # Do something with varB and varC In Swift I used to be able to bind the variables in the if statement let f = getVariableByName if f(Variables.varA) == nil, let varB = f(Variables.varB), let varC = f(Variables.varC) { // Do something with varB and varC } What is a more 'pythonic' way of dealing with None? A: you can use the ":=" operator: if varB := getVariableByName(Variables.varB): pass
Q: if let syntax in Python Coming from Swift I'm having a bit of a hard time working with .None type of Python. I have a few functions which might return None if the object I'm looking for is not found in the array. Then I have to nest my code as follows: varA = self.getVariableByName(Variables.varA) if varA is None: varB = self.getVariableByName(Variables.varB) varC = self.getVariableByName(Variables.varC) if varB is not None and varC is not None: # Do something with varB and varC In Swift I used to be able to bind the variables in the if statement let f = getVariableByName if f(Variables.varA) == nil, let varB = f(Variables.varB), let varC = f(Variables.varC) { // Do something with varB and varC } What is a more 'pythonic' way of dealing with None? A: you can use the ":=" operator: if varB := getVariableByName(Variables.varB): pass A: You can use the walrus operator (:=) from python 3.8 and upwards to make the assignment statement into an expression. Then you can check whether the assignment resulted in None within the if statement to decide whether to continue or not. if (var_a := self.get_variable_by_name(Variables.var_a)) is not None: var_b = self.get_variable_by_name(Variables.var_b) var_c = self.get_variable_by_name(Variables.var_c) if var_b is not None and var_c is not None: # Do stuff... A: I believe the nicest way to handle this case is exception handling. E.g. make self.getVariableByName raise an Exception if the element is not found. Then you could do: try: varA = self.getVariableByName(Variables.varA) except RuntimeError: varB = self.getVariableByName(Variables.varB) varC = self.getVariableByName(Variables.varC) # do something with varB and varC to get the equivalent of your Swift example. If you cannot/do not want to change self.getVariableByName, the best you can do is: f = self.getVariableByName if f(Variables.varA): varB, varC = f(Variables.varB), f(Variables.varC) if not (varB is None or varC is None): # do something with varB and varC A: if not hasattr(Variables, 'varA'): varB = Variables.varB but sure varB can be missing to, so it's good to check if there is such attribute or to use getattr() with default value or to catch AttributeError exception.
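Putting those answers together, a rough equivalent of the Swift multi-binding using the walrus operator (Python 3.8+; get_variable_by_name and do_something are hypothetical stand-ins for self.getVariableByName and the body of the block):

if (get_variable_by_name(Variables.var_a) is None
        and (var_b := get_variable_by_name(Variables.var_b)) is not None
        and (var_c := get_variable_by_name(Variables.var_c)) is not None):
    # var_b and var_c are bound and known to be non-None here
    do_something(var_b, var_c)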
stackoverflow
{ "language": "en", "length": 346, "provenance": "stackexchange_0000F.jsonl.gz:858104", "question_score": "9", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44521223" }
2e6b48d352c899ca2dbc1d4a8659e97be4987571
Stackoverflow Stackexchange Q: Get length of array in ngFor after pipes transformation I have the following template: <div *ngFor="let item of myArray | customPipe1 | customPipe2; let l = length"> Here is the length of my ngFor : {{l}} </div> Unfortunately length doesn't exist in ngFor. How can I work around this issue to have the length available inside my ngFor? A: Another solution could be the following <div *ngFor="let item of myArray | customPipe1 | customPipe2; let l = count"> Here is the length of my ngFor : {{l}} </div> Plunker Example See also * *https://github.com/angular/angular/blob/master/packages/common/src/directives/ng_for_of.ts#L15-L17
Q: Get length of array in ngFor after pipes transformation I have the following template: <div *ngFor="let item of myArray | customPipe1 | customPipe2; let l = length"> Here is the length of my ngFor : {{l}} </div> Unfortunately length doesn't exist in ngFor. How can I work around this issue to have the length available inside my ngFor? A: Another solution could be the following <div *ngFor="let item of myArray | customPipe1 | customPipe2; let l = count"> Here is the length of my ngFor : {{l}} </div> Plunker Example See also * *https://github.com/angular/angular/blob/master/packages/common/src/directives/ng_for_of.ts#L15-L17 A: <div *ngFor="let item of myArray | customPipe1 | customPipe2 as result"> Here is the length of my ngFor : {{result.length}} </div> See also https://angular.io/api/common/NgForOf A: Well, this is not well documented in the Angular docs; you can find it in the Angular source code at - https://github.com/angular/angular/blob/master/packages/common/src/directives/ng_for_of.ts *ngFor accepts a count param. constructor(public $implicit: T, public ngForOf: NgIterable<T>, public index: number,public count: number) { } So we can get the count like - <div *ngFor="let item of items | pipe; let l = count"> <div>{{l}}</div> </div> A: @Günter Zöchbauer {{(myArray | customPipe)?.length}} - Working as Expected A: This might sound dirty (it is) But const count = document.getElementById('id-of-wrapper').childElementCount; will do the trick. You need to call this when something changes. Pros * *Does not require recalculation of the pipe *Works *count available outside of the loop Cons * *dirty (a bit) *not the proper Angular way *dependency on the DOM
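For completeness, a sketch that combines the `as` alias and the `count` local in one template (the pipe and property names are the question's own placeholders):

<div *ngFor="let item of myArray | customPipe1 | customPipe2 as filtered; let i = index; let l = count">
  {{i + 1}} of {{l}} (same as {{filtered.length}}): {{item}}
</div>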
stackoverflow
{ "language": "en", "length": 241, "provenance": "stackexchange_0000F.jsonl.gz:858111", "question_score": "16", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44521254" }
7e9e06c0b2d71bc44f8fc168edfc254fc0275d2b
Stackoverflow Stackexchange Q: watch an object in controller Angularjs var app = angular.module('myApp', []); app.controller('myCtrl', function($scope) { $scope.form = { name: 'my name', age: '25' } $scope.$watch('form', function(newVal, oldVal){ console.log(newVal); },true); $scope.foo= function(){ $scope.form.name = 'abc'; $scope.form.age = $scope.form.age++; } $scope.foo(); $scope.bar= function(){ $scope.form.name = 'xyz'; $scope.form.age = $scope.form.age++; } $scope.bar(); }); <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.9/angular.min.js"></script> <div ng-app="myApp" ng-controller="myCtrl">{{form}}<br> <label>Name:</label> <input type="text" ng-model="form.name"/><br> <label>Age:</label> <input type="text" ng-model="form.age"/> </div> In the controller I changed the 'form' object's properties four times, but $watch is not firing. If I change it from the UI (input) it works. A: In your case I assume that the Angular digest cycle did not have time to run during all your rapid updates. Here are some references to better understand the digest cycle of Angular 1.X: * *https://www.thinkful.com/projects/understanding-the-digest-cycle-528/ https://www.ng-book.com/p/The-Digest-Loop-and-apply/ To prevent that kind of behaviour, you can force the digest to run using the $timeout dependency (which calls $apply): https://docs.angularjs.org/api/ng/service/$timeout Example: $timeout(function(){ $scope.form.name = name; $scope.form.age = $scope.form.age++; }) I have created a jsfiddle to illustrate your case: http://jsfiddle.net/IgorMinar/ADukg/
Q: watch an object in controller Angularjs var app = angular.module('myApp', []); app.controller('myCtrl', function($scope) { $scope.form = { name: 'my name', age: '25' } $scope.$watch('form', function(newVal, oldVal){ console.log(newVal); },true); $scope.foo= function(){ $scope.form.name = 'abc'; $scope.form.age = $scope.form.age++; } $scope.foo(); $scope.bar= function(){ $scope.form.name = 'xyz'; $scope.form.age = $scope.form.age++; } $scope.bar(); }); <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.9/angular.min.js"></script> <div ng-app="myApp" ng-controller="myCtrl">{{form}}<br> <label>Name:</label> <input type="text" ng-model="form.name"/><br> <label>Age:</label> <input type="text" ng-model="form.age"/> </div> In the controller I changed the 'form' object's properties four times, but $watch is not firing. If I change it from the UI (input) it works. A: In your case I assume that the Angular digest cycle did not have time to run during all your rapid updates. Here are some references to better understand the digest cycle of Angular 1.X: * *https://www.thinkful.com/projects/understanding-the-digest-cycle-528/ https://www.ng-book.com/p/The-Digest-Loop-and-apply/ To prevent that kind of behaviour, you can force the digest to run using the $timeout dependency (which calls $apply): https://docs.angularjs.org/api/ng/service/$timeout Example: $timeout(function(){ $scope.form.name = name; $scope.form.age = $scope.form.age++; }) I have created a jsfiddle to illustrate your case: http://jsfiddle.net/IgorMinar/ADukg/ A: We can use $watchGroup for this: $scope.$watchGroup(['form.name', 'form.age'], function(newValues, oldValues, scope) { //... });
stackoverflow
{ "language": "en", "length": 180, "provenance": "stackexchange_0000F.jsonl.gz:858112", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44521257" }
54a7415ba75371c0f005f3eca9405eaf23d473c0
Stackoverflow Stackexchange Q: Angular2 - Parameter $event implicitly has 'any' type My main concern is that $event shows error in this line: starClick($event) { Parameter $event implicitly has 'any' type I also wonder - according to Angular2 docs, what $event does is captures the event object, so let me ask stupid question - why we don't call it $object? Because it made me confuse quite a bit until I finally realised whats happening here.. import { Component } from '@angular/core'; @Component({ moduleId: module.id, selector: 'stars', template: ` <span class="glyphicon glyphicon-star-empty" (click)= "starClick($event)"></span> ` }) export class StarsComponent { starClick($event) { if($event.target.className == "glyphicon glyphicon-star-empty") { $event.target.className = "glyphicon glyphicon-star"; } else{ $event.target.className = "glyphicon glyphicon-star-empty"; } } } A: You are getting this error because of the below flag in tsconfig.json "noImplicitAny": true You have 2 ways to fix this. 1. "noImplicitAny": false //Make this false 2. Mention the event type in component code, for eg. For (click)="onClick($event)" should be captured in your component as onClick(event: KeyboardEvent) and (mouseover)='onMouseOver($event)' should be captured as onMouseOver(event: MouseEvent)
Q: Angular2 - Parameter $event implicitly has 'any' type My main concern is that $event shows an error on this line: starClick($event) { Parameter $event implicitly has 'any' type I also wonder - according to the Angular2 docs, what $event does is capture the event object, so let me ask a stupid question - why don't we call it $object? It confused me quite a bit until I finally realised what's happening here. import { Component } from '@angular/core'; @Component({ moduleId: module.id, selector: 'stars', template: ` <span class="glyphicon glyphicon-star-empty" (click)= "starClick($event)"></span> ` }) export class StarsComponent { starClick($event) { if($event.target.className == "glyphicon glyphicon-star-empty") { $event.target.className = "glyphicon glyphicon-star"; } else{ $event.target.className = "glyphicon glyphicon-star-empty"; } } } A: You are getting this error because of the below flag in tsconfig.json "noImplicitAny": true You have 2 ways to fix this. 1. "noImplicitAny": false //Make this false 2. Mention the event type in the component code, e.g. (click)="onClick($event)" should be captured in your component as onClick(event: MouseEvent) and (mouseover)='onMouseOver($event)' should be captured as onMouseOver(event: MouseEvent) A: I think that this is a warning and not an error, that you probably see in your code editor. You can avoid this simply by putting any as the type of your parameter like: starClick($event:any) { ... $event is a convention of Angular and you should only use it in the HTML like: <input type="text" (yourCustomEvent)="onYourCustomEventHandle($event)"> And you can name it as you wish in the typescript code like this: onYourCustomEventHandle(apples){ ... } onYourCustomEventHandle(oranges){ ... } just name it whatever makes more sense to you. When using custom events you use $event to pass data to your handler. When using it with the regular events like: <button (click)="onClick($event)" (mouseover)='onMouseOver($event)'> you simply get the event object as a parameter in your code, but again you can name it as you wish in the code: onClick(myClickEvent){ ... } onClick(e){ ... } onMouseOver(event){ ... } onMouseOver(johnny){ ... } But don't forget to always name it $event inside the HTML A: I had the same problem and I solved it by changing the code to starClick($event: any). This is how it looks: import { Component } from '@angular/core'; @Component({ moduleId: module.id, selector: 'stars', template: ` <span class="glyphicon glyphicon-star-empty" (click)= "starClick($event)"></span> ` }) export class StarsComponent { starClick($event: any) { if($event.target.className == "glyphicon glyphicon-star-empty") { $event.target.className = "glyphicon glyphicon-star"; } else{ $event.target.className = "glyphicon glyphicon-star-empty"; } } }
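Applying the second option to the original star toggle, a TypeScript sketch that keeps noImplicitAny enabled (the cast of event.target to HTMLElement is an assumption about the element that fired the click):

starClick(event: MouseEvent) {
  const target = event.target as HTMLElement;
  // flip between the empty and filled star classes
  target.className = target.className === 'glyphicon glyphicon-star-empty'
    ? 'glyphicon glyphicon-star'
    : 'glyphicon glyphicon-star-empty';
}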
stackoverflow
{ "language": "en", "length": 395, "provenance": "stackexchange_0000F.jsonl.gz:858141", "question_score": "8", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44521323" }
b180d28b07d5ff3de0977abbac224b3de6a6c815
Stackoverflow Stackexchange Q: Reactjs: Stop rerendering on a particular state change with shouldComponentUpdate() I have multiple setState calls in my component on load. My page is re-rendering on click of dropdown values, because on-click I'm storing those values via setState. Here, to stop re-rendering on click, I'm using the below code. shouldComponentUpdate() { return false } Now, because of the above function, my values are not being stored in state on page load. Is there anything extra I need to do? A: Your state changes don't show in your component because it doesn't re-render at all, since you have returned false from the shouldComponentUpdate method. If you want to restrict rendering on a particular state change then just check for that state and return false, else return true, like shouldComponentUpdate(nextProps, nextState) { if(this.state.dropdownVal != nextState.dropdownVal) { return false } return true } However, it's not such a bad thing to re-render, because React's rendering process is very efficient and you would want to show the updated state in the DOM more often than not.
Q: Reactjs: Stop rerendering on a particular state change with shouldComponentUpdate() I have multiple setState calls in my component on load. My page is re-rendering on click of dropdown values, because on-click I'm storing those values via setState. Here, to stop re-rendering on click, I'm using the below code. shouldComponentUpdate() { return false } Now, because of the above function, my values are not being stored in state on page load. Is there anything extra I need to do? A: Your state changes don't show in your component because it doesn't re-render at all, since you have returned false from the shouldComponentUpdate method. If you want to restrict rendering on a particular state change then just check for that state and return false, else return true, like shouldComponentUpdate(nextProps, nextState) { if(this.state.dropdownVal != nextState.dropdownVal) { return false } return true } However, it's not such a bad thing to re-render, because React's rendering process is very efficient and you would want to show the updated state in the DOM more often than not. A: You have to use shouldComponentUpdate() like shouldComponentUpdate(nextProps, nextState) { return !Immutable.is(this.state.synergy, nextState.synergy) } A: The answer: you need to re-render. There's nothing wrong with this. However, if you have a lot of logic built in to your render statements (which you shouldn't do; logic should be happening in your containers) this can be cumbersome. Use element keys efficiently, and there's nothing wrong with re-rendering. You're looking for a way to subvert the design patterns of React, when you should instead be conforming in MOST cases.
stackoverflow
{ "language": "en", "length": 257, "provenance": "stackexchange_0000F.jsonl.gz:858166", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44521391" }
ce6c0eef6481b270bce74b37448a61de3fde6015
Stackoverflow Stackexchange Q: Angular/ Ui-Grid (ng-grid) / angular is not defined I am trying to get Angular´s ui-grid running and use the following code: import {NgModule} from "@angular/core"; import {BrowserModule} from "@angular/platform-browser"; import {RouterModule} from "@angular/router"; import {BosOverviewComponent} from "./bosoverview.component"; import {UiGridModule} from 'angular-ui-grid'; @NgModule({ declarations: [ BosOverviewComponent ], imports: [ BrowserModule, RouterModule, UiGridModule ], exports: [ BosOverviewComponent ], }) export class BusinessObjectsModule { } Using npm start, I always get the following error: Uncaught ReferenceError: angular is not defined at ui-grid.js:8 at Object.../../../../angular-ui-grid/ui-grid.js (ui-grid.js:10) at __webpack_require__ (inline.bundle.js:55) at Object.../../../../angular-ui-grid/index.js (index.js:1) at __webpack_require__ (inline.bundle.js:55) at Object.../../../../../src/app/views/businessobjects/businessobjects.module.ts (bosoverview.component.ts:7) at __webpack_require__ (inline.bundle.js:55) at Object.../../../../../src/app/app.module.ts (app.helpers.ts:66) at __webpack_require__ (inline.bundle.js:55) at Object.../../../../../src/main.ts (environment.ts:8) What should I do? Thanks! A: As you can see: UI-Grid is currently compatible with Angular versions ranging from 1.4.x to 1.6.x. You are trying to use it at Angular 2+...
Q: Angular/ Ui-Grid (ng-grid) / angular is not defined I am trying to get Angular´s ui-grid running and use the following code: import {NgModule} from "@angular/core"; import {BrowserModule} from "@angular/platform-browser"; import {RouterModule} from "@angular/router"; import {BosOverviewComponent} from "./bosoverview.component"; import {UiGridModule} from 'angular-ui-grid'; @NgModule({ declarations: [ BosOverviewComponent ], imports: [ BrowserModule, RouterModule, UiGridModule ], exports: [ BosOverviewComponent ], }) export class BusinessObjectsModule { } Using npm start, I always get the following error: Uncaught ReferenceError: angular is not defined at ui-grid.js:8 at Object.../../../../angular-ui-grid/ui-grid.js (ui-grid.js:10) at __webpack_require__ (inline.bundle.js:55) at Object.../../../../angular-ui-grid/index.js (index.js:1) at __webpack_require__ (inline.bundle.js:55) at Object.../../../../../src/app/views/businessobjects/businessobjects.module.ts (bosoverview.component.ts:7) at __webpack_require__ (inline.bundle.js:55) at Object.../../../../../src/app/app.module.ts (app.helpers.ts:66) at __webpack_require__ (inline.bundle.js:55) at Object.../../../../../src/main.ts (environment.ts:8) What should I do? Thanks! A: As you can see: UI-Grid is currently compatible with Angular versions ranging from 1.4.x to 1.6.x. You are trying to use it at Angular 2+... A: ktretyak is right. UI-Grid is only officially supported by AngularJS, not Angular 2+. Fortunately you have options. First, you may be able to make it work anyways. This github repo, for instance, claims to be an example of UI-Grid working in Angular 2+. And the dev team does have a plan to upgrade UI-Grid to Angular 5+, if you are willing to wait. It may be a better idea, though, to find a library that is officially supported on Angular 2+ now. Reading through this discussion on the UI-Grid site points you at several data grids that are compatible with Angular 2+. * *If you are already using Angular Material they have a mat-table *ngx-datatable is an Angular component for presenting large and complex data. It has all the features you would expect from any other table but in a light package with no external dependencies. (from the github page) *PrimeNG makes a set of UI components for Angular 2+. They're open source, on github, and they have a pretty slick datatable. *ag-grid makes "The Best HTML 5 Grid In The World" according to their website. They have a version works with Angular 2+. They have a freemium model, but their free version looks nice. *angular2-datatable is yet another open-source option that, of course, officially supports Angular 2+. So, don't stress yourself out trying to shoehorn in a library that isn't really built to do what you want it to. It's much easier to use one that is designed to work with Angular 2+.
stackoverflow
{ "language": "en", "length": 393, "provenance": "stackexchange_0000F.jsonl.gz:858176", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44521427" }
059f4c8d77d5b40ec43a1bad7a60157595740226
Stackoverflow Stackexchange Q: How to invoke a method from ajax success function? I've a jQuery with ajax using to fetch some data from a servlet <script type="text/javascript"> $(document).ready(function() { $.ajax({ url:'ServiceToFetchDocType', type:'post', cache:false, success: function(response){ //some data fetched from ServiceToFetchDocType //Need to invoke another method here } }); </script> Is it possible to invoke another method inside the success function and get some value? I've very new to jQuery and ajax, any kind of help is appreciated. A: you can do something like this var invokeAfterSuccess = function() { } var successFunction = function(response) { /* do something here */ invokeAfterSuccess() } $.ajax({ url:'ServiceToFetchDocType', type:'post', cache:false, success: successFunction }) /*--------------- OR -------------*/ $.ajax({ url:'ServiceToFetchDocType', type:'post', cache:false }).done(successFunction)
Q: How to invoke a method from ajax success function? I have a jQuery ajax call that fetches some data from a servlet <script type="text/javascript"> $(document).ready(function() { $.ajax({ url:'ServiceToFetchDocType', type:'post', cache:false, success: function(response){ //some data fetched from ServiceToFetchDocType //Need to invoke another method here } }); </script> Is it possible to invoke another method inside the success function and get some value? I'm very new to jQuery and ajax, any kind of help is appreciated. A: you can do something like this var invokeAfterSuccess = function() { } var successFunction = function(response) { /* do something here */ invokeAfterSuccess() } $.ajax({ url:'ServiceToFetchDocType', type:'post', cache:false, success: successFunction }) /*--------------- OR -------------*/ $.ajax({ url:'ServiceToFetchDocType', type:'post', cache:false }).done(successFunction) A: <script type="text/javascript"> $(document).ready(function() { $.ajax({ url:'ServiceToFetchDocType', type:'post', cache:false, success: function(response){ Myfunction(); //this is how you can call the function } }); function Myfunction(){ alert("hiii") } }); </script> // that's how it will work A: $(document).ready(function() { $.ajax({ url: 'ServiceToFetchDocType', type: 'post', cache: false, success: function(response) { /* invoke your function*/ yourFunction(); } }); }); A: success: function (data) { TypeOfReportDropdown.closest("form").find("div[id$='MonitoringChemicalData']")[0].innerHTML = data; var hours = $("#UptimeHourYear").val(); var emissions = round(parseFloat(($("#AverageMassLoadOut").val() * hours) / 1000), 1); $("#Emissions").val(emissions); $("#TotalEmissions").val(emissions); $(this).delay(3000).queue(function () { var emissieTotal = 0; var totalHAP = 0; $('[data-emissie]').each(function () { emissieTotal += Number($(this).data('emissie')); var hap = $(this).data('hap'); if (hap == "True") { totalHAP += Number($(this).data('emissie')); } }); var emissieFinalTotal = round(emissieTotal, 3); $('#TotalEmissionForOnlineInstruments').html(emissieFinalTotal); var totalHAPFinal = round(totalHAP, 3); $('#TotalHAPForOnlineInstruments').html(totalHAPFinal); }); }
stackoverflow
{ "language": "en", "length": 236, "provenance": "stackexchange_0000F.jsonl.gz:858179", "question_score": "7", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44521431" }
9fd2690cd048e90719eb1835e109c2381c4884da
Stackoverflow Stackexchange Q: SonarQube: How to apply multiple quality profiles to one project? I would like to use SonarQube 6.3.1 to analyze the Java and Kotlin code of an Android project. Therefore, I installed the Android Lint plugin besides the preinstalled SonarJava aka. Sonar way plugin. Both show up in the Java language dropdown in the Administration section of the project as shown in the screenshot. * *How can I apply multiple profiles at the same time? *Where can I find other profiles suitable for Java/Kotlin/Android projects? Related posts * *Sonarqube: use multiple custom quality profiles for a single multilanguage project...? A: You apply both profiles by creating a third profile that contains all the rules in each of your source profiles. The easiest way to accomplish that is to * *Go to Quality Profiles and create a new profile *Now in the list of profiles, click on the rule count for one of your source profiles. This takes you to the list of rules active in that profile *Use the Bulk Change > Activate In... option to turn those rules on in your new profile *Return to step 2 with the next source profile in your list.
Q: SonarQube: How to apply multiple quality profiles to one project? I would like to use SonarQube 6.3.1 to analyze the Java and Kotlin code of an Android project. Therefore, I installed the Android Lint plugin besides the preinstalled SonarJava aka. Sonar way plugin. Both show up in the Java language dropdown in the Administration section of the project as shown in the screenshot. * *How can I apply multiple profiles at the same time? *Where can I find other profiles suitable for Java/Kotlin/Android projects? Related posts * *Sonarqube: use multiple custom quality profiles for a single multilanguage project...? A: You apply both profiles by creating a third profile that contains all the rules in each of your source profiles. The easiest way to accomplish that is to * *Go to Quality Profiles and create a new profile *Now in the list of profiles, click on the rule count for one of your source profiles. This takes you to the list of rules active in that profile *Use the Bulk Change > Activate In... option to turn those rules on in your new profile *Return to step 2 with the next source profile in your list.
stackoverflow
{ "language": "en", "length": 196, "provenance": "stackexchange_0000F.jsonl.gz:858195", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44521481" }
621723399262899d6048aa41c70bb8aab178aa09
Stackoverflow Stackexchange Q: curl: (5) Could not resolve proxy: DELETE; Unknown error I am using hadoop apache 2.7.1 on centos 7 and I want to delete a file(file1) by using webhdfs commands. curl -i -x DELETE "http://192.168.25.21:50070/webhdfs/v1/hadoophome/file1/?user.name=root&op=DELETE&recursive=true" But I am getting this error: curl: (5) Could not resolve proxy: DELETE; Unknown error I edited bashrc file as following : export http_proxy="" export https_proxy="" export ftp_proxy="" And source the file to save changes source ~/.bashrc But with the same error. So I tried to set no proxy in the culr command as curl -i -x --noproxy localhost DELETE "http://192.168.25.21:50070/webhdfs/v1/hadoophome/file1/?user.name=root&op=DELETE&recursive=true" With this error: curl: (5) Could not resolve proxy: --noproxy; Unknown error What should I edit to exclude this proxy? Thanks. A: -x stands for proxy. You should be using -X to specify the request method. So the command would be, curl -i -X DELETE "http://192.168.25.21:50070/webhdfs/v1/hadoophome/file1/?user.name=root&op=DELETE&recursive=true" Refer curl(1) for options.
Q: curl: (5) Could not resolve proxy: DELETE; Unknown error I am using hadoop apache 2.7.1 on centos 7 and I want to delete a file(file1) by using webhdfs commands. curl -i -x DELETE "http://192.168.25.21:50070/webhdfs/v1/hadoophome/file1/?user.name=root&op=DELETE&recursive=true" But I am getting this error: curl: (5) Could not resolve proxy: DELETE; Unknown error I edited bashrc file as following : export http_proxy="" export https_proxy="" export ftp_proxy="" And source the file to save changes source ~/.bashrc But with the same error. So I tried to set no proxy in the culr command as curl -i -x --noproxy localhost DELETE "http://192.168.25.21:50070/webhdfs/v1/hadoophome/file1/?user.name=root&op=DELETE&recursive=true" With this error: curl: (5) Could not resolve proxy: --noproxy; Unknown error What should I edit to exclude this proxy? Thanks. A: -x stands for proxy. You should be using -X to specify the request method. So the command would be, curl -i -X DELETE "http://192.168.25.21:50070/webhdfs/v1/hadoophome/file1/?user.name=root&op=DELETE&recursive=true" Refer curl(1) for options.
stackoverflow
{ "language": "en", "length": 146, "provenance": "stackexchange_0000F.jsonl.gz:858234", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44521604" }
79b1f6b094c5ac5cbb43d33c90082ae7caa6bdd3
Stackoverflow Stackexchange Q: heroku commands and push giving me 'CLI is deprecated. Please reinstall' error I have had no problem pushing to heroku, but now whenever I write a command such as: heroku create heroku open git push heroku master It gives me this error: $ heroku open ▸ heroku-cli: This CLI is deprecated. Please reinstall from ▸ https://cli.heroku.com What should I do to stop this? I have ran heroku for sites that have given me no problem before but now the error is showing. I am using the same devise and the authentication details have not changed. I also restarted the computer (it's a mac). A: For Windows users. After installing an updated version of the cli I ended up with a 32 bit version in C:\Program Files (x86)\Heroku (called the Heroku Toolbelt - the old one), and a 64 bit version in 'C:\Program Files\Heroku' (called the Heroku CLI - the new one). Both were in my %PATH% environment variable, but the older one took precedence as it was higher up. Resolution is either: * *Remove C:\Program Files (x86)\Heroku from the system environment variables or *Uninstall the heroku toolbelt
Q: heroku commands and push giving me 'CLI is deprecated. Please reinstall' error I have had no problem pushing to heroku, but now whenever I write a command such as: heroku create heroku open git push heroku master It gives me this error: $ heroku open ▸ heroku-cli: This CLI is deprecated. Please reinstall from ▸ https://cli.heroku.com What should I do to stop this? I have run heroku for sites that have given me no problem before but now the error is showing. I am using the same device and the authentication details have not changed. I also restarted the computer (it's a mac). A: For Windows users. After installing an updated version of the CLI I ended up with a 32 bit version in C:\Program Files (x86)\Heroku (called the Heroku Toolbelt - the old one), and a 64 bit version in 'C:\Program Files\Heroku' (called the Heroku CLI - the new one). Both were in my %PATH% environment variable, but the older one took precedence as it was higher up. The resolution is either: * *Remove C:\Program Files (x86)\Heroku from the system environment variables or *Uninstall the Heroku Toolbelt A: I was getting the same error and I fixed it with: $ brew update $ brew upgrade heroku I hope it helps! A: Just because you have the CLI installed does not mean when you run heroku that it is running the newly installed version. First, run which heroku to see where the heroku binary is that you're running. If it's not /usr/local/bin/heroku you'll need to either delete that file, or edit your PATH environment variable so /usr/local/bin takes precedence. If it is /usr/local/bin/heroku likely you need to update the symlink. If you run brew doctor it will tell you if the symlinks are not set correctly. A: Upgrade your heroku cli with homebrew as follows: brew upgrade heroku If you see the Error: heroku not installed message, install it again: brew install heroku. You can also see the following output: The formula built, but is not symlinked into /usr/local Could not symlink bin/heroku Target /usr/local/bin/heroku already exists. You may want to remove it: rm '/usr/local/bin/heroku' To force the link and overwrite all conflicting files: brew link --overwrite heroku To list all files that would be deleted: brew link --overwrite --dry-run heroku Possible conflicting files are: /usr/local/bin/heroku -> /usr/local/heroku/bin/heroku In this case just follow the instructions and run: brew link --overwrite heroku Test if you still have the deprecation message, for example: heroku logs Hope this helps. A: All the solutions above didn't work for me as my brew wasn't compatible with OS X 10.12 If you get the following warning: Warning: You are using OS X 10.12. We do not provide support for this pre-release version. You may encounter build failures or other breakages. Here is what worked for me: Try to update brew: brew update You may encounter a new permission issue as I did: Error: /usr/local must be writable! If so, simply run the following: sudo chgrp -R admin /usr/local sudo chmod -R g+w /usr/local brew update Now, when you have an updated brew that is compatible with Mac OS 10.12, all you need to do is update heroku; you can just upgrade it: brew upgrade heroku Or uninstall and then install it: brew uninstall heroku rm -rf ~/.local/share/heroku ~/.config/heroku ~/.cache/heroku brew install heroku To test your updated Heroku just try heroku logs Good luck! 
A: I had originally installed heroku as a ruby gem, so I had to run: $ gem uninstall heroku Then reinstall the new version from Homebrew $ brew install heroku A: I just asked the Heroku support and they advised me to reinstall Heroku-cli via homebrew and it worked like a charm. Cheers
stackoverflow
{ "language": "en", "length": 621, "provenance": "stackexchange_0000F.jsonl.gz:858267", "question_score": "18", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44521712" }
159a998bcd054dd58d345ab74b1b0372b77058da
Stackoverflow Stackexchange Q: AttributeError: module 'pygal' has no attribute 'Worldmap' I'm trying to: import pygal wm = pygal.Worldmap() but it raises: AttributeError: module 'pygal' has no attribute 'Worldmap' Can anyone tell me what the problem is? A: You are probably looking at old documentation from what I can tell. The most recent docs state that you first need to install the map plugin with: pip install pygal_maps_world and then use it as: import pygal mm = pygal.maps.world.World()
Q: AttributeError: module 'pygal' has no attribute 'Worldmap' I'm trying to: import pygal wm = pygal.Worldmap() but it raises: AttributeError: module 'pygal' has no attribute 'Worldmap' Can anyone tell me what the problem is? A: You are probably looking at old documentation from what I can tell. The most recent docs state that you first need to install the map plugin with: pip install pygal_maps_world and then use it as: import pygal mm = pygal.maps.world.World()
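A small end-to-end sketch once the plugin is installed (the group names, values and output file name are illustrative; country codes are lower-case ISO 3166-1 alpha-2):

from pygal.maps.world import World  # provided by the pygal_maps_world plugin

chart = World()
chart.title = 'A couple of highlighted countries'
chart.add('Group A', ['fr', 'de'])           # plain list of country codes
chart.add('Group B', {'us': 310, 'ca': 34})  # or a dict with a value per country
chart.render_to_file('worldmap.svg')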
stackoverflow
{ "language": "en", "length": 75, "provenance": "stackexchange_0000F.jsonl.gz:858279", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44521747" }
4afd20aed95c3c579df60e3c5e95db02170118e9
Stackoverflow Stackexchange Q: Android O Geofence Trigger Delay I have a navigation app that used geofences(Creating and Monitoring Geofences). Google says (Background Location Limits): The average responsiveness for a geofencing event is every couple of minutes or so. On previous Android versions was no delay. But on Android O, the app shows a delay of ~4-5 minutes after leaving a geofence(approximately 50% of cases). Tested on ODP2 Has anybody faced a similar problem? A: Google added new paragraph to Create and monitor geofences doc Alerts can be late. The geofence service doesn't continuously query for location, so expect some latency when receiving alerts. Usually the latency is less than 2 minutes, even less when the device has been moving. If Background Location Limits are in effect, the latency is about 2-3 minutes on average. If the device has been stationary for a significant period of time, the latency may increase (up to 6 minutes).
Q: Android O Geofence Trigger Delay I have a navigation app that used geofences(Creating and Monitoring Geofences). Google says (Background Location Limits): The average responsiveness for a geofencing event is every couple of minutes or so. On previous Android versions was no delay. But on Android O, the app shows a delay of ~4-5 minutes after leaving a geofence(approximately 50% of cases). Tested on ODP2 Has anybody faced a similar problem? A: Google added new paragraph to Create and monitor geofences doc Alerts can be late. The geofence service doesn't continuously query for location, so expect some latency when receiving alerts. Usually the latency is less than 2 minutes, even less when the device has been moving. If Background Location Limits are in effect, the latency is about 2-3 minutes on average. If the device has been stationary for a significant period of time, the latency may increase (up to 6 minutes).
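There is no client-side switch that restores the pre-O latency; the knobs you do have are the responsiveness hint on the geofence itself and keeping the app out of the background-limited state. A hedged Java sketch of where that hint lives (play-services-location API; the coordinates, request ID, context and geofencePendingIntent are illustrative assumptions):

import com.google.android.gms.location.Geofence;
import com.google.android.gms.location.GeofencingRequest;
import com.google.android.gms.location.LocationServices;

// assumes a Context `context` and a PendingIntent `geofencePendingIntent` for the receiver
Geofence fence = new Geofence.Builder()
        .setRequestId("home")
        .setCircularRegion(52.52, 13.40, 150 /* metres */)
        .setExpirationDuration(Geofence.NEVER_EXPIRE)
        .setTransitionTypes(Geofence.GEOFENCE_TRANSITION_ENTER | Geofence.GEOFENCE_TRANSITION_EXIT)
        .setNotificationResponsiveness(0) // 0 = as responsive as the platform allows; larger values trade latency for battery
        .build();

GeofencingRequest request = new GeofencingRequest.Builder()
        .setInitialTrigger(GeofencingRequest.INITIAL_TRIGGER_ENTER)
        .addGeofence(fence)
        .build();

LocationServices.getGeofencingClient(context)
        .addGeofences(request, geofencePendingIntent);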
stackoverflow
{ "language": "en", "length": 152, "provenance": "stackexchange_0000F.jsonl.gz:858280", "question_score": "8", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44521754" }
90a92015002958afdc4a33ccfc51dcf36b0ae825
Stackoverflow Stackexchange Q: How to use gcloud sql export and not add "CREATE DATABASE" and "USE" on the sql file I want to use the gcloud sql instance export command to export a database, to be imported to another database on the same server. The problem is that using: gcloud sql instances export instancename gs://bucket/dbname.sql.gz -d=dbname adds the below to the top of the sql file: CREATE DATABASE /*!32312 IF NOT EXISTS*/ `dbname` /*!40100 DEFAULT CHARACTER SET utf8 */; USE `dbname`; Since I want to be able to import the sql file to another database with the gcloud sql instance import, the USE dbname makes the import go to the dbname database instead of other one. So is there a way for me to export the database but don't add that to the file? I've searched the documentation of the command and didn't found anything related to that. A: Just manually edit the dump file, currently there is no existing functionality which would prepare an export according to your requirements.
Q: How to use gcloud sql export and not add "CREATE DATABASE" and "USE" on the sql file I want to use the gcloud sql instance export command to export a database, to be imported to another database on the same server. The problem is that using: gcloud sql instances export instancename gs://bucket/dbname.sql.gz -d=dbname adds the below to the top of the sql file: CREATE DATABASE /*!32312 IF NOT EXISTS*/ `dbname` /*!40100 DEFAULT CHARACTER SET utf8 */; USE `dbname`; Since I want to be able to import the sql file to another database with the gcloud sql instance import, the USE dbname makes the import go to the dbname database instead of the other one. So is there a way for me to export the database without adding that to the file? I've searched the documentation of the command and didn't find anything related to that. A: Just manually edit the dump file; currently there is no existing functionality which would prepare an export according to your requirements.
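A hedged sketch of that manual edit with standard tools (bucket, file and database names are the question's placeholders, the import flags simply mirror the export command above, and the exact sed patterns depend on what the dump actually contains):

gsutil cp gs://bucket/dbname.sql.gz .
gunzip dbname.sql.gz
# drop the CREATE DATABASE / USE preamble so the import targets whatever database you choose
sed -i '/^CREATE DATABASE /d; /^USE `dbname`;/d' dbname.sql
gzip dbname.sql
gsutil cp dbname.sql.gz gs://bucket/dbname-clean.sql.gz
gcloud sql instances import instancename gs://bucket/dbname-clean.sql.gz -d=otherdb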
stackoverflow
{ "language": "en", "length": 168, "provenance": "stackexchange_0000F.jsonl.gz:858314", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44521847" }
51877f0fca75defd49cd59d84701f315a0d2cc4e
Stackoverflow Stackexchange Q: Waypoints with CoffeeScript I want to be notified of an event based on scrolling. I found Waypoints, which can solve my problem, but all the examples I am finding use jQuery and Reactjs. How can I use it in CoffeeScript? I am using the below code. It is being fired every time, but I want it to fire only when it reaches waypoint-header. I have this div in repeating mode, I mean this div is available after some list items (after every 20 items in the list). Please help me to solve this. $(window).scroll -> waypoint = new Waypoint( element: document.getElementById('waypoint-header'), handler:(direction) -> console.debug 'hello' ) A: Here is an example in CoffeeScript with no jQuery or React: waypoint = new Waypoint element: document.getElementById('waypoint-header'), handler: (direction) -> console.log 'hello' You don't need to add an event listener, the Waypoints library does it itself. working codepen
Q: Waypoints with CoffeeScript I want to be notified of an event based on scrolling. I found Waypoints, which can solve my problem, but all the examples I am finding use jQuery and Reactjs. How can I use it in CoffeeScript? I am using the below code. It is being fired every time, but I want it to fire only when it reaches waypoint-header. I have this div in repeating mode, I mean this div is available after some list items (after every 20 items in the list). Please help me to solve this. $(window).scroll -> waypoint = new Waypoint( element: document.getElementById('waypoint-header'), handler:(direction) -> console.debug 'hello' ) A: Here is an example in CoffeeScript with no jQuery or React: waypoint = new Waypoint element: document.getElementById('waypoint-header'), handler: (direction) -> console.log 'hello' You don't need to add an event listener, the Waypoints library does it itself. working codepen A: If I get it right, it is normal for the current code to be fired on any scroll event. If you just need it to trigger once when it reaches waypoint-header, I guess you should just create a waypoint without any scroll event, as indicated here: waypoint = new Waypoint element: document.getElementById('waypoint-header'), handler:(direction) -> console.debug 'hello' To be notified for each element inside the list, I recommend changing the ids to a class and trying this: waypoint = $(".waypoint-header").waypoint -> element: document.getElementById('waypoint-header'), handler:(direction) -> console.debug 'hello'
stackoverflow
{ "language": "en", "length": 221, "provenance": "stackexchange_0000F.jsonl.gz:858352", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44521981" }
9ce4a37c9d61dc763f3e857ff13d5de9b2783eb8
Stackoverflow Stackexchange Q: Change docker log messages location I met a problem with docker logging and after reading a lot of sources didn't find solution: is there a way to discard messages of docker daemon in /var/log/messages and select another location? A: Ok, I know that this question is quite old but I don't think it has been answered well and no correct answer has been stated. First of all the reason why it saves messages to that particular place starts in rsyslog configuration (/etc/rsyslog.conf) with the line: $ModLoad imjournal # provides access to the systemd journal So, because docker saves information to systemd journal it ends at /var/log/messages. To be able to save it to other places, you have to create a rule like the following at /etc/rsyslog.d/docker.conf. $FileCreateMode 0644 template(name="DockerLogFileName" type="list") { constant(value="/var/log/docker/") property(name="syslogtag" securepath="replace" \ regex.expression="docker/\\(.*\\)\\[" regex.submatch="1") constant(value="/docker.log") } if $programname == 'dockerd' then \ /var/log/docker/combined.log if $programname == 'dockerd' then \ if $syslogtag contains 'docker/' then \ ?DockerLogFileName else /var/log/docker/no_tag/docker.log $FileCreateMode 0600 I found the information for this configuration here: https://www.simulmedia.com/blog/2016/02/19/centralized-docker-logging-with-rsyslog/
Q: Change docker log messages location I ran into a problem with docker logging and after reading a lot of sources couldn't find a solution: is there a way to discard messages of the docker daemon in /var/log/messages and select another location? A: Ok, I know that this question is quite old but I don't think it has been answered well and no correct answer has been stated. First of all, the reason why it saves messages to that particular place starts in the rsyslog configuration (/etc/rsyslog.conf) with the line: $ModLoad imjournal # provides access to the systemd journal So, because docker saves information to the systemd journal, it ends up in /var/log/messages. To be able to save it to other places, you have to create a rule like the following at /etc/rsyslog.d/docker.conf. $FileCreateMode 0644 template(name="DockerLogFileName" type="list") { constant(value="/var/log/docker/") property(name="syslogtag" securepath="replace" \ regex.expression="docker/\\(.*\\)\\[" regex.submatch="1") constant(value="/docker.log") } if $programname == 'dockerd' then \ /var/log/docker/combined.log if $programname == 'dockerd' then \ if $syslogtag contains 'docker/' then \ ?DockerLogFileName else /var/log/docker/no_tag/docker.log $FileCreateMode 0600 I found the information for this configuration here: https://www.simulmedia.com/blog/2016/02/19/centralized-docker-logging-with-rsyslog/ A: Configure rsyslog to isolate the Docker logs into their own file. To do this create /etc/rsyslog.d/10-docker.conf and copy the following content into the file. # Docker logging daemon.* { /var/mylog stop } In summary this will write all logs for the daemon category to /var/mylog then stop processing that log entry so it isn't written to the system's default syslog file. A: According to the Docker documentation, you can specify a different driver either as a command-line argument for the docker daemon or (preferably) in the daemon.json config file. Several drivers are available, e.g. for Syslog, HTTP-based logging, ... Update Here's an example configuration section for Syslog (from the documentation): { "log-driver": "syslog", "log-opts": { "syslog-address": "udp://1.2.3.4:1111" } }
stackoverflow
{ "language": "en", "length": 293, "provenance": "stackexchange_0000F.jsonl.gz:858365", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44522024" }
df6de5a795013474c654ed735e7bd246464a61bd
Stackoverflow Stackexchange Q: Android style a SeekBar I want to create a style for my app's SeekBars. I just want to change the color of the thumb and the bar itself. I managed to change the thumb color like this: <style name="MySeekBar" parent="@style/Widget.AppCompat.SeekBar"> <item name="android:thumbTint"> @color/turquoise </item> </style> but I'm not sure how to change the bar's color. I know how to do it with code, but not in the xml: seekBar.getProgressDrawable().setColorFilter(getContext().getResources().getColor(R.color.turquoise), Mode.SRC_ATOP); How can I style the color filter with a style defined in xml? A: You can use android:progressTint to change the color of the progress part of the seek bar, and android:progressBackgroundTint to change the bar background's color, in your style
Q: Android style a SeekBar I want to create a style for my app's SeekBars. I just want to change the color of the thumb and the bar itself. I managed to change the thumb color like this: <style name="MySeekBar" parent="@style/Widget.AppCompat.SeekBar"> <item name="android:thumbTint"> @color/turquoise </item> </style> but I'm not sure how to change the bar's color. I know how to do it with code, but not in the xml: seekBar.getProgressDrawable().setColorFilter(getContext().getResources().getColor(R.color.turquoise), Mode.SRC_ATOP); How can I style the color filter with a style defined in xml? A: You can use android:progressTint to change the color of the progress part of the seek bar, and android:progressBackgroundTint to change the bar background's color, in your style A: Replace the android:color attributes with the color you prefer. background.xml <shape xmlns:android="http://schemas.android.com/apk/res/android" android:shape="line"> <stroke android:width="1dp" android:color="#D9D9D9"/> <corners android:radius="1dp" /> </shape> progress.xml <shape xmlns:android="http://schemas.android.com/apk/res/android" android:shape="line"> <stroke android:width="1dp" android:color="#2EA5DE"/> <corners android:radius="1dp" /> </shape> style.xml <layer-list xmlns:android="http://schemas.android.com/apk/res/android" > <item android:id="@android:id/background" android:drawable="@drawable/seekbar_background"/> <item android:id="@android:id/progress"> <clip android:drawable="@drawable/seekbar_progress" /> </item> </layer-list> OR USE A THEME ON YOUR SEEKBAR AS FOLLOWS Since SeekBar uses colorAccent by default, you can create a new style with your custom color as colorAccent, then use the theme attribute to apply it to the SeekBar. <SeekBar> . . android:theme="@style/MySeekBar" . </SeekBar> in the @style: <style name="MySeekBar" parent="@style/Widget.AppCompat.SeekBar" > <item name="colorAccent">#ffff00</item> </style>
stackoverflow
{ "language": "en", "length": 208, "provenance": "stackexchange_0000F.jsonl.gz:858370", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44522038" }
8b3d3d044f6ad9d658b8e09750efd6986ff32b70
Stackoverflow Stackexchange Q: Elasticsearch sorting based on several fields and time condition I have a little online store application and I would like to sort items based on their active price. So there is a normal price and optionally a discounted price. But the discounted price is valid for a time range only and the index is not updated when discount ends. For example: {price: 2.00, discount:{price:1.10, start:2016-08-11, end:2016-09-14}} Is it possible to include such condition on sorting? The goal is to ensure the correct sort order any given time. Edit: At the moment the ES version is rather old but update to the latest is not an issue. A: This is for Elasticsearch 5 and 6 (you didn't mention, as I said, the version and the mapping of that discount field). Also, you need to explicitly pass as a parameter the current date and time in milliseconds whenever you run that query: "sort": { "_script": { "type": "number", "script": { "lang": "painless", "source": "if (doc['discount.price']!=null && doc['discount.start'].date.getMillis() < params['time'] && doc['discount.end'].date.getMillis() > params['time']) return doc['discount.price'].value; else return doc['price'].value", "params": { "time": 1512717897000 } }, "order": "desc" } }
Q: Elasticsearch sorting based on several fields and time condition I have a little online store application and I would like to sort items based on their active price. So there is a normal price and optionally a discounted price. But the discounted price is valid for a time range only and the index is not updated when discount ends. For example: {price: 2.00, discount:{price:1.10, start:2016-08-11, end:2016-09-14}} Is it possible to include such condition on sorting? The goal is to ensure the correct sort order any given time. Edit: At the moment the ES version is rather old but update to the latest is not an issue. A: This is for Elasticsearch 5 and 6 (you didn't mention, as I said, the version and the mapping of that discount field). Also, you need to explicitly pass as a parameter the current date and time in milliseconds whenever you run that query: "sort": { "_script": { "type": "number", "script": { "lang": "painless", "source": "if (doc['discount.price']!=null && doc['discount.start'].date.getMillis() < params['time'] && doc['discount.end'].date.getMillis() > params['time']) return doc['discount.price'].value; else return doc['price'].value", "params": { "time": 1512717897000 } }, "order": "desc" } }
stackoverflow
{ "language": "en", "length": 187, "provenance": "stackexchange_0000F.jsonl.gz:858417", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44522219" }
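To show how the script sort from the Elasticsearch answer above fits into a full request, here is a sketch of a complete search body; the index name products and the match_all query are placeholders, the time parameter must be filled in with the current epoch milliseconds by the calling code on every request, and depending on the Elasticsearch version the null check may need to be written as doc['discount.price'].size() != 0 instead.

POST /products/_search
{
  "query": { "match_all": {} },
  "sort": {
    "_script": {
      "type": "number",
      "script": {
        "lang": "painless",
        "source": "if (doc['discount.price'] != null && doc['discount.start'].date.getMillis() < params['time'] && doc['discount.end'].date.getMillis() > params['time']) return doc['discount.price'].value; else return doc['price'].value",
        "params": { "time": 1512717897000 }
      },
      "order": "desc"
    }
  }
}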
c5e9f87b445eb61489ad2a5a0d75c02e18a6a146
Stackoverflow Stackexchange Q: How to read a CSV file that starts with a specific substring in Python pandas? Say I have a CSV file whose name is like: Pokémon_Pikachu.csv. Is there a way to read it if I give only the first substring (Pokémon) and still read it in Pandas? A: import glob import pandas as pd for file in glob.glob("Pokémon*.csv"): print (file) this will get you the csv file names that start with Pokémon and if you want to read all the csv files into one, main_df = pd.DataFrame() for file in glob.glob("Pokémon*.csv"): df = pd.read_csv(file) if main_df.empty: main_df = df else: main_df = main_df.join(df, how='outer') print(main_df.head())
Q: How to read a CSV file that starts with a specific substring in Python pandas? Say I have a CSV file whose name is like: Pokémon_Pikachu.csv. Is there a way to read it if I give only the first substring (Pokémon) and still read it in Pandas? A: import glob import pandas as pd for file in glob.glob("Pokémon*.csv"): print (file) this will get you the csv file names that start with Pokémon and if you want to read all the csv files into one, main_df = pd.DataFrame() for file in glob.glob("Pokémon*.csv"): df = pd.read_csv(file) if main_df.empty: main_df = df else: main_df = main_df.join(df, how='outer') print(main_df.head()) A: http://www.pythonforbeginners.com/code-snippets-source-code/python-os-listdir-and-endswith you can use os.listdir() to get a list of the contents of a directory, then filter those with string.startswith(substring) or string.endswith(substring). That would give you the filename(s) that you could put into pd.read_csv(filename) A: I am not sure what you mean but I suppose that you have files in a directory with the prefix Pokemon. The solution is: import pandas as pd import os import glob for file in glob.glob(os.path.join(input_dir, 'Pokemon_*.csv')): pd.read_csv(file)
stackoverflow
{ "language": "en", "length": 179, "provenance": "stackexchange_0000F.jsonl.gz:858421", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44522233" }
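As a small follow-up to the pandas answer above, a sketch that stacks every matching file with pd.concat instead of join, which avoids index collisions when the CSVs share the same columns; the Pokémon prefix comes from the question and the snippet assumes at least one file matches.

import glob
import pandas as pd

# read every CSV whose name starts with the given prefix
frames = [pd.read_csv(path) for path in glob.glob("Pokémon*.csv")]

# stack them row-wise into a single DataFrame
main_df = pd.concat(frames, ignore_index=True)
print(main_df.head())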
902c79f6430dd998f0b425cac303e5dced41a9ad
Stackoverflow Stackexchange Q: C++ Default constructor not inherited with "using" when move and copy constructors present class A{ public: A(){}; }; class B : public A{ public: using A::A; B(const B&) = default; B( B&&) = default; }; B b; The compiler (g++ (5.4.0-6ubuntu1) / c++11) says "no matching function for call to B::B()" and lists the copy and move constructors as candidates. If I comment those defaulted ones out then it compiles. What causes this? And what difference does it make that they are explicitly defaulted? If those 2 lines weren't there they would be defaulted anyway. A: If you declare any constructors, the default constructor is not implicitly generated, you can generate it by adding a = default for it as well: class B : public A { public: B() = default; B(const B&) = default; B( B&&) = default; }; This has changed with C++17 (as pointed out by other answer).
Q: C++ Default constructor not inherited with "using" when move and copy constructors present class A{ public: A(){}; }; class B : public A{ public: using A::A; B(const B&) = default; B( B&&) = default; }; B b; The compiler (g++ (5.4.0-6ubuntu1) / c++11) says "no matching function for call to B::B()" and lists the copy and move constructors as candidates. If I comment those defaulted ones out then it compiles. What causes this? And what difference does it make that they are explicitly defaulted? If those 2 lines weren't there they would be defaulted anyway. A: If you declare any constructors, the default constructor is not implicitly generated, you can generate it by adding a = default for it as well: class B : public A { public: B() = default; B(const B&) = default; B( B&&) = default; }; This has changed with C++17 (as pointed out by other answer). A: The default constructor cannot be inherited, the standard explicitly says so. Quoting C++11 12.9 [class.inhctor]/3 (emphasis mine) (*): For each non-template constructor in the candidate set of inherited constructors other than a constructor having no parameters or a copy/move constructor having a single parameter, a constructor is implicitly declared with the same constructor characteristics unless there is a user-declared constructor with the same signature in the class where the using-declaration appears. ... This means that for the default constructor, normal rules apply as if the using A::A; declaration wasn't there. So the presence of any other constructor declaration (such as the copy & move constructor) causes the default constructor not to be implicitly declared. Note that you can easily add it back by explicitly defaulting it: class B : public A{ public: using A::A; B() = default; B(const B&) = default; B( B&&) = default; }; (*) The same wording is present in C++14 (n4140), at the same location. I can't seem to find equivalent wording in C++1z (looking through n4582) A: Before C++17, the default constructor of the base class won't be inherited via using: All candidate inherited constructors that aren't the default constructor or the copy/move constructor and whose signatures do not match user-defined constructors in the derived class, are implicitly declared in the derived class. (until C++17) After C++17 the code works fine. Before that, the default constructor won't be inherited from the base class, and won't be generated for class B because copy/move constructor are provided. If no user-declared constructors of any kind are provided for a class type (struct, class, or union), the compiler will always declare a default constructor as an inline public member of its class. That's why if you comment copy/move constructor out it compiles. You can add the definition explicitly as a pre-C++17 workaround. e.g. class B : public A { public: B(const B&) = default; B( B&&) = default; B() = default; }; The code compiles with gcc8.
stackoverflow
{ "language": "en", "length": 481, "provenance": "stackexchange_0000F.jsonl.gz:858454", "question_score": "11", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44522331" }
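To make the pre-C++17 fix above concrete, here is a self-contained sketch that compiles with -std=c++11 or -std=c++14 only because B() = default; is spelled out; removing that line reproduces the "no matching function for call to B::B()" error from the question.

#include <iostream>

struct A {
    A() {}
};

struct B : A {
    using A::A;            // inherits A's constructors (but not the default one before C++17)
    B() = default;         // explicitly bring the default constructor back
    B(const B&) = default;
    B(B&&) = default;
};

int main() {
    B b;                   // now well-formed
    std::cout << "constructed\n";
}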
a7920d00a15e35c166a88c2d0d08ea21ede1a3a9
Stackoverflow Stackexchange Q: How to make QueryDSL and Lombok work together When a method or variable is annotated with Lombok annotation, the maven plugin will complain by processing the source generation for JPA. I get this kind of failure in the console logs: symbol: class __ location: class ServiceBaseMessage C:\workspaces\[...]\service\ServiceBaseMessage.java:44: error: cannot find symbol @Getter(onMethod = @__({ @JsonProperty("TYPE") })) How to make the apt-maven-plugin and queryDSL processor for JPA annotations work together with lombok annotations ? A: here is the syntax for GRADLE users (maven users please have a look at the other answers) // this adds lombok correctly to your project then you configure the jpa processor plugins { ... id 'io.franzbecker.gradle-lombok' version '1.7' } project.afterEvaluate { project.tasks.compileQuerydsl.options.compilerArgs = [ "-proc:only", "-processor", project.querydsl.processors() + ',lombok.launch.AnnotationProcessorHider$AnnotationProcessor' ] } here is a working version of QueryDSL and Lombok. Dependencies are imported by plugins, therefore no dependencies need to be declared: buildscript { repositories { mavenCentral() } } plugins { id 'io.franzbecker.gradle-lombok' version '1.7' id "com.ewerk.gradle.plugins.querydsl" version "1.0.9" } querydsl { jpa = true } // plugin needed so that the project.afterEvaluate { project.tasks.compileQuerydsl.options.compilerArgs = [ "-proc:only", "-processor", project.querydsl.processors() + ',lombok.launch.AnnotationProcessorHider$AnnotationProcessor' ] } dependencies { compile group: 'com.querydsl', name: 'querydsl-jpa', version: '4.1.3' }
Q: How to make QueryDSL and Lombok work together When a method or variable is annotated with Lombok annotation, the maven plugin will complain by processing the source generation for JPA. I get this kind of failure in the console logs: symbol: class __ location: class ServiceBaseMessage C:\workspaces\[...]\service\ServiceBaseMessage.java:44: error: cannot find symbol @Getter(onMethod = @__({ @JsonProperty("TYPE") })) How to make the apt-maven-plugin and queryDSL processor for JPA annotations work together with lombok annotations ? A: here is the syntax for GRADLE users (maven users please have a look at the other answers) // this adds lombok correctly to your project then you configure the jpa processor plugins { ... id 'io.franzbecker.gradle-lombok' version '1.7' } project.afterEvaluate { project.tasks.compileQuerydsl.options.compilerArgs = [ "-proc:only", "-processor", project.querydsl.processors() + ',lombok.launch.AnnotationProcessorHider$AnnotationProcessor' ] } here is a working version of QueryDSL and Lombok. Dependencies are imported by plugins, therefore no dependencies need to be declared: buildscript { repositories { mavenCentral() } } plugins { id 'io.franzbecker.gradle-lombok' version '1.7' id "com.ewerk.gradle.plugins.querydsl" version "1.0.9" } querydsl { jpa = true } // plugin needed so that the project.afterEvaluate { project.tasks.compileQuerydsl.options.compilerArgs = [ "-proc:only", "-processor", project.querydsl.processors() + ',lombok.launch.AnnotationProcessorHider$AnnotationProcessor' ] } dependencies { compile group: 'com.querydsl', name: 'querydsl-jpa', version: '4.1.3' } A: This solution worked for me. Add lombok.launch.AnnotationProcessorHider$AnnotationProcessor in your apt-maven-plugin configuration. <plugin> <groupId>com.mysema.maven</groupId> <artifactId>apt-maven-plugin</artifactId> <executions> <execution> <goals> <goal>process</goal> </goals> <configuration> <outputDirectory>target/generated-sources/java</outputDirectory> <processor>com.querydsl.apt.jpa.JPAAnnotationProcessor,lombok.launch.AnnotationProcessorHider$AnnotationProcessor</processor> </configuration> </execution> </executions> </plugin> It seems also to be working the same way with gradle: See https://github.com/ewerk/gradle-plugins/issues/59#issuecomment-247047011 A: Below pom snippet works for me with Querydsl, Lombok, Mapstruct all together by maven-compiler-plugin <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <version>3.8.1</version> <configuration> <source>1.8</source> <target>1.8</target> <encoding>UTF-8</encoding> <annotationProcessors> <annotationProcessor>lombok.launch.AnnotationProcessorHider$AnnotationProcessor</annotationProcessor> <annotationProcessor>com.querydsl.apt.jpa.JPAAnnotationProcessor</annotationProcessor> <annotationProcessor>org.mapstruct.ap.MappingProcessor</annotationProcessor> </annotationProcessors> <annotationProcessorPaths> <path> <groupId>com.querydsl</groupId> <artifactId>querydsl-apt</artifactId> <version>${querydsl.version}</version> </path> <path> <groupId>org.projectlombok</groupId> <artifactId>lombok</artifactId> <version>${lombok.version}</version> </path> <path> <groupId>org.mapstruct</groupId> <artifactId>mapstruct-processor</artifactId> <version>${org.mapstruct.version}</version> </path> <path> <groupId>javax.annotation</groupId> <artifactId>javax.annotation-api</artifactId> <version>1.3.1</version> </path> <path> <groupId>org.eclipse.persistence</groupId> <artifactId>javax.persistence</artifactId> 
<version>2.0.0</version> </path> </annotationProcessorPaths> </configuration> </plugin> A: For gradle, follow the exact same order sourceSets { generated { java { srcDirs = ['build/generated/sources/annotationProcessor/java/main'] } } } dependencies { api 'com.querydsl:querydsl-jpa:4.4.0' annotationProcessor 'org.projectlombok:lombok' annotationProcessor('com.querydsl:querydsl-apt:4.4.0:jpa') annotationProcessor('javax.annotation:javax.annotation-api') }
stackoverflow
{ "language": "en", "length": 325, "provenance": "stackexchange_0000F.jsonl.gz:858496", "question_score": "13", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44522494" }
7acf6f047ce16ed762811603821dd36be87922fd
Stackoverflow Stackexchange Q: Displaying php result in HTML using AJAX I am trying to display the result of my php in different <div>-s. My plan is that I make a query in my php and display the result in JSON format. Due to the JSON format my result can be displaying in different <div>. How can I reach that for example the "name" can be displayed between <div> tags? The example result of php: [ { "id": 0, "name": "example1", "title": "example2" }, { "id": 0, "name": "example1", "title": "example2" } ] The attempt: <div class="result"></div> <script> $.ajax({ type:'GET', url:'foo.php', data:'json', success: function(data){ $('.result').html(data); } }); </script> A: Hi try parsing the returned data into a JSON object : <div class="result"></div> <script> $.ajax({ type:'GET', url:'foo.php', data:'json', success: function(data){ // added JSON parse var jsonData = JSON.parse(data); // iterate through every object $.each(jsonData, function(index, element) { // do what you want with each JSON object $('.result').append('<div>' + element.name + '</div>'); }); } }); </script> // PS code is untested. // Regards
Q: Displaying php result in HTML using AJAX I am trying to display the result of my php in different <div>-s. My plan is that I make a query in my php and display the result in JSON format. Due to the JSON format my result can be displaying in different <div>. How can I reach that for example the "name" can be displayed between <div> tags? The example result of php: [ { "id": 0, "name": "example1", "title": "example2" }, { "id": 0, "name": "example1", "title": "example2" } ] The attempt: <div class="result"></div> <script> $.ajax({ type:'GET', url:'foo.php', data:'json', success: function(data){ $('.result').html(data); } }); </script> A: Hi try parsing the returned data into a JSON object : <div class="result"></div> <script> $.ajax({ type:'GET', url:'foo.php', data:'json', success: function(data){ // added JSON parse var jsonData = JSON.parse(data); // iterate through every object $.each(jsonData, function(index, element) { // do what you want with each JSON object $('.result').append('<div>' + element.name + '</div>'); }); } }); </script> // PS code is untested. // Regards A: Do it like below:- success: function(data){ var newhtml = ''; //newly created string variable $.each(data, function(i, item) {//iterate over json data newhtml +='<div>'+ item.name +'</div>'; // get the name, wrap it inside a div and append it to the newly created string variable }); $('.result').html(newhtml); // put value of newly created div into result element } A: You can create it like this: for (var i = 0; i < data.length; i++) { var id = $("<div></div>").html(data[i].id); var name = $("<div></div>").html(data[i].name); var title = $("<div></div>").html(data[i].title); $('.result').append(id).append(name).append(title); }
stackoverflow
{ "language": "en", "length": 253, "provenance": "stackexchange_0000F.jsonl.gz:858509", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44522534" }
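One detail worth adding to the AJAX entry above: data:'json' in the original attempt is not a jQuery option, the key is dataType. As a sketch, assuming foo.php returns the JSON array shown in the question, setting dataType lets jQuery parse the response itself, so the success callback already receives an array and JSON.parse is not needed.

$.ajax({
  type: 'GET',
  url: 'foo.php',
  dataType: 'json',                    // ask jQuery to parse the response as JSON
  success: function (data) {
    $.each(data, function (i, item) {
      // one <div> per record; .text() avoids injecting raw HTML
      $('.result').append($('<div></div>').text(item.name));
    });
  }
});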
4a42aee1d3037b8f5ee21970a3d6452ede435d66
Stackoverflow Stackexchange Q: Space Characters in Cognito Password We are currently seeing InvalidParameterException when attempting to create users with space characters in their password. Is there any way to update the password policy to enable this, is it a bug or expected behavior? A: The password policy of Cognito is described in the userPool settings here: http://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-policies.html Currently, Cognito does not support space characters in the password. This is not a bug, but the expected behavior right now. We will consider this request for future releases
Q: Space Characters in Cognito Password We are currently seeing InvalidParameterException when attempting to create users with space characters in their password. Is there any way to update the password policy to enable this, is it a bug or expected behavior? A: The password policy of Cognito is described in the userPool settings here: http://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-policies.html Currently, Cognito does not support space characters in the password. This is not a bug, but the expected behavior right now. We will consider this request for future releases A: Spaces now seem to be supported. From https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-policies.html: Special character from the following set. The space character is also treated as a special character. Leading or trailing whitespace does not seem to be supported though - https://github.com/aws-amplify/amplify-js/issues/869#issuecomment-524921236 is currently the only reference I can find to this limitation, but I have experienced the following a couple of times: software.amazon.awssdk.services.cognitoidentityprovider.model.InvalidPasswordException Password does not conform to policy: Password must satisfy regular expression pattern: ^\S.*\S$ (Service: CognitoIdentityProvider, Status Code: 400, Request ID: uuid, Extended Request ID: null) This seems to be a pre-check by Cognito before checking against the configured password policy.
stackoverflow
{ "language": "en", "length": 184, "provenance": "stackexchange_0000F.jsonl.gz:858533", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44522600" }
16a486e05032b786d001b01dc9478dfa4717a4b8
Stackoverflow Stackexchange Q: Difference between NSRange and NSMakeRange Is there any difference between: NSRange(location: 0, length: 5) and: NSMakeRange(0, 5) Because Swiftlint throws a warning when I use NSMakeRange, but I don't know why. Thanks for the Help :-) A: The main difference is that NSRange(location: 0, length: 24) is the auto-generated struct init method in Swift and NSMakeRange(0, 24) is just a predefined inline function that sets location and length NS_INLINE NSRange NSMakeRange(NSUInteger loc, NSUInteger len) { NSRange r; r.location = loc; r.length = len; return r; } In general, the result is the same, but if you're writing Swift use the first one and if you're writing ObjC code use the second ;)
Q: Difference between NSRange and NSMakeRange Is there any difference between: NSRange(location: 0, length: 5) and: NSMakeRange(0, 5) Because Swiftlint throws a warning when I use NSMakeRange, but I don't know why. Thanks for the Help :-) A: The main difference is that NSRange(location: 0, length: 24) is the auto-generated struct init method in Swift and NSMakeRange(0, 24) is just a predefined inline function that sets location and length NS_INLINE NSRange NSMakeRange(NSUInteger loc, NSUInteger len) { NSRange r; r.location = loc; r.length = len; return r; } In general, the result is the same, but if you're writing Swift use the first one and if you're writing ObjC code use the second ;) A: The only difference between them is that NSRange(location: 0, length: 5) is an initializer for NSRange while NSMakeRange(0, 5) is a function which creates a new NSRange instance (by using the same initializer inside most likely) and actually is redundant in Swift. Swift has simply inherited it from Objective-C. I would stick to the former
stackoverflow
{ "language": "en", "length": 162, "provenance": "stackexchange_0000F.jsonl.gz:858545", "question_score": "29", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44522626" }
fae253a743cadd37dd071afd4d29ddf6d47162f1
Stackoverflow Stackexchange Q: I can't understand source code implementation of string.kt in kotlin In kotlin source code, I can't understand how to implement of length of String.kt , it is as following: package kotlin public class String : Comparable<String>, CharSequence { companion object {} /** * Returns a string obtained by concatenating this string with the string representation of the given [other] object. */ public operator fun plus(other: Any?): String public override val length: Int public override fun get(index: Int): Char public override fun subSequence(startIndex: Int, endIndex: Int): CharSequence public override fun compareTo(other: String): Int} var len:Int = "abc".length; // len = 3 where to run the length?? where to implement the length function? A: The string functions are examples of what Kotlin considers Intrinsic functions. They are defined based on the platform they are running on and you won't be able to find an implementation of them in the source code. For the JVM they will be directly mapped to the corresponding native java.lang.String methods. This ensures there is no runtime overhead and leverages the optimizations done in the java standard library.
Q: I can't understand source code implementation of string.kt in kotlin In kotlin source code, I can't understand how to implement of length of String.kt , it is as following: package kotlin public class String : Comparable<String>, CharSequence { companion object {} /** * Returns a string obtained by concatenating this string with the string representation of the given [other] object. */ public operator fun plus(other: Any?): String public override val length: Int public override fun get(index: Int): Char public override fun subSequence(startIndex: Int, endIndex: Int): CharSequence public override fun compareTo(other: String): Int} var len:Int = "abc".length; // len = 3 where to run the length?? where to implement the length function? A: The string functions are examples of what Kotlin considers Intrinsic functions. They are defined based on the platform they are running on and you won't be able to find an implementation of them in the source code. For the JVM they will be directly mapped to the corresponding native java.lang.String methods. This ensures there is no runtime overhead and leverages the optimizations done in the java standard library.
stackoverflow
{ "language": "en", "length": 181, "provenance": "stackexchange_0000F.jsonl.gz:858570", "question_score": "6", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44522697" }
fbd1b2097e41d74c0a0a3188475ddc33facc3a63
Stackoverflow Stackexchange Q: telegram botfather doesn't allow making more bots I'm trying to make a Telegram bot. I have made 20 bots so far and now when I select /newbot from BotFather it says this: "That I cannot do. You come to me asking for more than 20 bots. But you don't ask with respect. You don't offer friendship. You don't even think to call me Botfather" How should I make more bots? Thanks A: It already said that you can't create more bots with this account. There is no way to do that except deleting an unused one. You might want to create the bot via a friend's account; that's the simplest way.
Q: telegram botfather doesn't allow making more bots I'm trying to make a Telegram bot. I have made 20 bots so far and now when I select /newbot from BotFather it says this: "That I cannot do. You come to me asking for more than 20 bots. But you don't ask with respect. You don't offer friendship. You don't even think to call me Botfather" How should I make more bots? Thanks A: It already said that you can't create more bots with this account. There is no way to do that except deleting an unused one. You might want to create the bot via a friend's account; that's the simplest way.
stackoverflow
{ "language": "en", "length": 111, "provenance": "stackexchange_0000F.jsonl.gz:858583", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44522745" }
b7d41248b79bd761f3b758fe7ef07e47def4741b
Stackoverflow Stackexchange Q: how to get path of picture taken by camera ios swift I was using the imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo delegate method for getting the url of the image which user chose on photo gallery. But when I try to get URL for image taken by camera its return nill. Here is my code. func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) { if (imagePicker.sourceType == UIImagePickerControllerSourceType.camera) { let data = UIImagePNGRepresentation(pickedImage) UIImageWriteToSavedPhotosAlbum(pickedImage, nil, nil, nil) if let image = info[UIImagePickerControllerOriginalImage] as? UIImage { let imageURL = info[UIImagePickerControllerReferenceURL] as? NSURL print(imageURL) } } A: The image you have taken is not saved and has not any name yet. You need to save the image before you can get the path for the image. That´s why it returns nil. If you select an image from the image library instead of taking one with the camera you´ll get a path.
Q: how to get path of picture taken by camera ios swift I was using the imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo delegate method for getting the url of the image which user chose on photo gallery. But when I try to get URL for image taken by camera its return nill. Here is my code. func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) { if (imagePicker.sourceType == UIImagePickerControllerSourceType.camera) { let data = UIImagePNGRepresentation(pickedImage) UIImageWriteToSavedPhotosAlbum(pickedImage, nil, nil, nil) if let image = info[UIImagePickerControllerOriginalImage] as? UIImage { let imageURL = info[UIImagePickerControllerReferenceURL] as? NSURL print(imageURL) } } A: The image you have taken is not saved and has not any name yet. You need to save the image before you can get the path for the image. That´s why it returns nil. If you select an image from the image library instead of taking one with the camera you´ll get a path. A: Rashwan L Answer worked for me as well and I was not able to save image with UIImageToSavedPhotoAlbum so written CameraImageManager class method for saving, loading and getting URL for upload image. You can also make it as extension of Image and URL class CameraImageManager { static func saveImage(imageName: String, image: UIImage) { guard let documentsDirectory = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first else { return } let fileName = imageName let fileURL = documentsDirectory.appendingPathComponent(fileName) guard let data = image.jpegData(compressionQuality: 1) else { return } //Checks if file exists, removes it if so. if FileManager.default.fileExists(atPath: fileURL.path) { do { try FileManager.default.removeItem(atPath: fileURL.path) print("Removed old image") } catch let removeError { print("couldn't remove file at path", removeError) } } do { try data.write(to: fileURL) } catch let error { print("error saving file with error", error) } } static func getImagePathFromDiskWith(fileName: String) -> URL? { let documentDirectory = FileManager.SearchPathDirectory.documentDirectory let userDomainMask = FileManager.SearchPathDomainMask.userDomainMask let paths = NSSearchPathForDirectoriesInDomains(documentDirectory, userDomainMask, true) if let dirPath = paths.first { let imageUrl = URL(fileURLWithPath: dirPath).appendingPathComponent(fileName) return imageUrl } return nil } static func loadImageFromDiskWith(fileName: String) -> UIImage? { let documentDirectory = FileManager.SearchPathDirectory.documentDirectory let userDomainMask = FileManager.SearchPathDomainMask.userDomainMask let paths = NSSearchPathForDirectoriesInDomains(documentDirectory, userDomainMask, true) if let dirPath = paths.first { let imageUrl = URL(fileURLWithPath: dirPath).appendingPathComponent(fileName) let image = UIImage(contentsOfFile: imageUrl.path) return image } return nil } } A: After you saved the image into the Photos Album, you can get the path in response callback func saveImage(image: UIImage, completion: @escaping (Error?) -> ()) { UIImageWriteToSavedPhotosAlbum(image, self, #selector(image(path:didFinishSavingWithError:contextInfo:)), nil) } @objc private func image(path: String, didFinishSavingWithError error: NSError?, contextInfo: UnsafeMutableRawPointer?) { debugPrint(path) // That's the path you want }
stackoverflow
{ "language": "en", "length": 417, "provenance": "stackexchange_0000F.jsonl.gz:858589", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44522756" }
0217fc23a76f658c206b80b3e7b8efd755f6b4f9
Stackoverflow Stackexchange Q: how to redirect to home page after submitting redux-form? I have created a redux form, which when clicked on submit button doesn't redirect to the app.js page the url changes to this http://localhost:3000/?name=Cook&phone=45625465265&email=cook%40yahoo&work=Engineer&city= this is what i have written- form.js const Form=({handleSubmit})=>( <form onSubmit={handleSubmit}> <center> <div> <label>First Name</label> <Field type="text" component="input" placeholder="Name" name="name"/> </div> <div> <label>Address</label> <Field type="text" component="input" placeholder="Phone" name="phone" /> </div> <button type="submit">Submit</button> </center> </form> ) export default reduxForm({ form: 'form', fields: ['name', 'address'] })(Form); add.js class AddNew extends Component{ handleSubmit = (values) => { console.log('This Values',values) } render() { return ( <div style={{ padding: 150 }}> <Add onSubmit={this.handleSubmit}/> </div> )} } A: In order to Redirect, what you can do is that in the handleSubmit function of AddNew component, you can call dynamic routing with this.props.history.push() Now supposing your App.js is specified in the Routes with path /app You can do the following import {withRouter} from 'react-router' class AddNew extends Component{ handleSubmit = (values) => { console.log('This Values',values) this.props.history.push('/app'); } render() { return ( <div style={{ padding: 150 }}> <Add onSubmit={this.handleSubmit}/> </div> )} } export default withRouter(AddNew);
Q: how to redirect to home page after submitting redux-form? I have created a redux form, which when clicked on submit button doesn't redirect to the app.js page the url changes to this http://localhost:3000/?name=Cook&phone=45625465265&email=cook%40yahoo&work=Engineer&city= this is what i have written- form.js const Form=({handleSubmit})=>( <form onSubmit={handleSubmit}> <center> <div> <label>First Name</label> <Field type="text" component="input" placeholder="Name" name="name"/> </div> <div> <label>Address</label> <Field type="text" component="input" placeholder="Phone" name="phone" /> </div> <button type="submit">Submit</button> </center> </form> ) export default reduxForm({ form: 'form', fields: ['name', 'address'] })(Form); add.js class AddNew extends Component{ handleSubmit = (values) => { console.log('This Values',values) } render() { return ( <div style={{ padding: 150 }}> <Add onSubmit={this.handleSubmit}/> </div> )} } A: In order to Redirect, what you can do is that in the handleSubmit function of AddNew component, you can call dynamic routing with this.props.history.push() Now supposing your App.js is specified in the Routes with path /app You can do the following import {withRouter} from 'react-router' class AddNew extends Component{ handleSubmit = (values) => { console.log('This Values',values) this.props.history.push('/app'); } render() { return ( <div style={{ padding: 150 }}> <Add onSubmit={this.handleSubmit}/> </div> )} } export default withRouter(AddNew);
stackoverflow
{ "language": "en", "length": 181, "provenance": "stackexchange_0000F.jsonl.gz:858616", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44522811" }
422f37feb879c1b5bd29c17beba2b0b22cd12252
Stackoverflow Stackexchange Q: How to use wkhtmltopdf without install it to the server I'm using a free web hosting solution (000freehosting.com) that doesn't allow me to install any new tools, so I'm wondering if I could use wkhtmltopdf without installing it on the server. Thank you. A: This requires installation from the admin. You'll need to use something like DomPDF that doesn't have any dependencies and can be added as a folder to your web directory.
Q: How to use wkhtmltopdf without install it to the server I'm using a free web hosting solution (000freehosting.com) that doesn't allow me to install any new tools, so I'm wondering if I could use wkhtmltopdf without installing it on the server. Thank you. A: This requires installation from the admin. You'll need to use something like DomPDF that doesn't have any dependencies and can be added as a folder to your web directory.
stackoverflow
{ "language": "en", "length": 74, "provenance": "stackexchange_0000F.jsonl.gz:858633", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44522873" }
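To make the DomPDF suggestion above concrete, here is a minimal sketch assuming the library has been unpacked into a dompdf/ folder inside the web directory and is loaded through its bundled autoloader; the file names and the HTML string are placeholders, and the exact autoloader path can differ between DomPDF releases.

<?php
require_once __DIR__ . '/dompdf/autoload.inc.php'; // adjust to wherever the library was unpacked

use Dompdf\Dompdf;

$dompdf = new Dompdf();
$dompdf->loadHtml('<h1>Hello from shared hosting</h1>');
$dompdf->setPaper('A4', 'portrait');
$dompdf->render();              // build the PDF in memory
$dompdf->stream('example.pdf'); // send it to the browser as a download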
7e0cd643a6959236fe892594388938e63a06330a
Stackoverflow Stackexchange Q: WooCommerce hook for order creation from admin In my custom plugin (working in WooCommerce 2.6.x and 3.x), I need to get the order ID when a new order is created. I tried different hooks but they work only when the customer creates an order and not when an order is created from admin. I tried: * *woocommerce_new_order *woocommerce_thankyou *woocommerce_checkout_order_processed *woocommerce_checkout_update_order_meta Update Finally I used this: add_action('wp_insert_post', function($order_id) { if(!did_action('woocommerce_checkout_order_processed') && get_post_type($order_id) == 'shop_order' && validate_order($order_id)) { order_action($order_id); } }); where validate_order is: function validate_order($order_id) { $order = new \WC_Order($order_id); $user_meta = get_user_meta($order->get_user_id()); if($user_meta) return true; return false; } Thanks to validate_order the action isn't executed when you start to create the order. I use !did_action('woocommerce_checkout_order_processed') because I don't want that the action is executed if the order is created by a customer (I have a specific action for that, using woocommerce_checkout_order_processed). A: woocommerce_new_order hook is called after order creation: add_action('woocommerce_new_order', function ($order_id) { // ... }, 10, 1);
Q: WooCommerce hook for order creation from admin In my custom plugin (working in WooCommerce 2.6.x and 3.x), I need to get the order ID when a new order is created. I tried different hooks but they work only when the customer creates an order and not when an order is created from admin. I tried: * *woocommerce_new_order *woocommerce_thankyou *woocommerce_checkout_order_processed *woocommerce_checkout_update_order_meta Update Finally I used this: add_action('wp_insert_post', function($order_id) { if(!did_action('woocommerce_checkout_order_processed') && get_post_type($order_id) == 'shop_order' && validate_order($order_id)) { order_action($order_id); } }); where validate_order is: function validate_order($order_id) { $order = new \WC_Order($order_id); $user_meta = get_user_meta($order->get_user_id()); if($user_meta) return true; return false; } Thanks to validate_order the action isn't executed when you start to create the order. I use !did_action('woocommerce_checkout_order_processed') because I don't want that the action is executed if the order is created by a customer (I have a specific action for that, using woocommerce_checkout_order_processed). A: woocommerce_new_order hook is called after order creation: add_action('woocommerce_new_order', function ($order_id) { // ... }, 10, 1); A: If you are using the admin page .../wp-admin/post-new.php?post_type=shop_order to create the new order then there may not be a WooCommerce hook to do this as this order is created by the WordPress core. However, the WordPress action 'save_post_shop_order' will be called with the $post_ID which is the order id. See function wp_insert_post() in ...\wp-includes\post.php. A: You can also use the woocommerce_process_shop_order_meta hook, which is triggered when an order is manually created from the WordPress admin.
stackoverflow
{ "language": "en", "length": 234, "provenance": "stackexchange_0000F.jsonl.gz:858643", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44522910" }
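A sketch of the save_post_shop_order route mentioned in the second answer above; the guards and the my_plugin_* function names are assumptions added for illustration, and because this hook also fires when an existing order is edited, the callback may still need its own "already processed" flag (for example a piece of order meta).

add_action('save_post_shop_order', 'my_plugin_handle_admin_order', 10, 3);

function my_plugin_handle_admin_order($post_id, $post, $update) {
    if (defined('DOING_AUTOSAVE') && DOING_AUTOSAVE) {
        return; // ignore autosaves
    }
    if ('auto-draft' === $post->post_status || wp_is_post_revision($post_id)) {
        return; // ignore the placeholder draft and revisions
    }
    // $post_id is the order ID of the order saved from the admin screen
    my_plugin_order_action($post_id);
}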
e67bb0f9defca146c87eb85e0bec7f1431eb0180
Stackoverflow Stackexchange Q: Sklearn Gaussian Regression - Memory Error I have those two vectors: x, size 3*46208 y, size 1*46208 I want to fit those data to a Gaussian model using the Sklearn library (in Python). I do this like this: kernel = ConstantKernel() + Matern(length_scale=1, nu=5/2) + WhiteKernel(noise_level=1) gp = gaussian_process.GaussianProcessRegressor(kernel=kernel) gp.fit(X, y_norm) Which gives me the following error: MemoryError It works if I only take 1000 rows instead of 46208, but crashes if I take 10000. If I do the maths, with a float taking 8 bytes, we would need (for the 10000 rows): 8 * 10000 * 4 = 320000 bytes = 320 Mb For me it should work, but I may be mistaken. Any ideas, suggestions ? PS: I am using the PyCharm IDE Thanks! A: 10k should not be a problem; in practice the only limitation is the memory available to the Python interpreter on your system. You can force the garbage collector to free memory with gc.collect() or increase the swap size
Q: Sklearn Gaussian Regression - Memory Error I have those two vectors: x, size 3*46208 y, size 1*46208 I want to fit those data to a Gaussian model using the Sklearn library (in Python). I do this like this: kernel = ConstantKernel() + Matern(length_scale=1, nu=5/2) + WhiteKernel(noise_level=1) gp = gaussian_process.GaussianProcessRegressor(kernel=kernel) gp.fit(X, y_norm) Which gives me the following error: MemoryError It works if I only take 1000 rows instead of 46208, but crashes if I take 10000. If I do the maths, with a float taking 8 bytes, we would need (for the 10000 rows): 8 * 10000 * 4 = 320000 bytes = 320 Mb For me it should work, but I may be mistaken. Any ideas, suggestions ? PS: I am using the PyCharm IDE Thanks! A: 10k should not be a problem; in practice the only limitation is the memory available to the Python interpreter on your system. You can force the garbage collector to free memory with gc.collect() or increase the swap size
stackoverflow
{ "language": "en", "length": 161, "provenance": "stackexchange_0000F.jsonl.gz:858660", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44522978" }
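One step of reasoning that the sklearn entry above leaves out: GaussianProcessRegressor builds and factorizes an n-by-n kernel matrix, so memory grows with the square of the number of rows, not linearly as the 8 * 10000 * 4 estimate in the question assumes. A quick check with the same 8 bytes per float64:

n_small, n_full = 10_000, 46_208
bytes_per_float = 8

print(f"{n_small} rows -> kernel of {n_small**2 * bytes_per_float / 1e9:.1f} GB")  # about 0.8 GB
print(f"{n_full} rows -> kernel of {n_full**2 * bytes_per_float / 1e9:.1f} GB")    # about 17.1 GB

So the MemoryError at these sizes is expected for an exact Gaussian process fit rather than something gc.collect() can work around; subsampling the data or switching to an approximate or sparse GP implementation are the usual remedies.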
1b11b0656dd33b65bd823421bf2f40deb4abb8d3
Stackoverflow Stackexchange Q: Visual Studio Code: Use Beyond Compare diff tool The default diff tool of Visual Studio Code is nice, but I'd like to replace it with my beloved Beyond Compare. It's easy to configure in Visual Studio 2017. It's also configured as the git difftool. I assumed there should be an extension for Beyond Compare, but I couldn't find one. Also google only delivered results concerning the full VS IDE, but nothing to VSC. Any suggestions? A: There is a better solution now, the "Compare Helper" extension: https://marketplace.visualstudio.com/items?itemName=keewek.compare-helper Once installed and configured, you can select files or folders from the explorer and compare them via the context menu. Works like a charm, and configuration is trivial: "compareHelper.defaultExternalTools": { "folders": "bcompare", "images": "bcompare", "text": "bcompare" }, "compareHelper.externalTools": [ { "name": "bcompare", "path": "C:/Program Files/Beyond Compare 4/BCompare.exe", "compares": ["text", "folders", "images"] } ],
Q: Visual Studio Code: Use Beyond Compare diff tool The default diff tool of Visual Studio Code is nice, but I'd like to replace it with my beloved Beyond Compare. It's easy to configure in Visual Studio 2017. It's also configured as the git difftool. I assumed there should be an extension for Beyond Compare, but I couldn't find one. Also google only delivered results concerning the full VS IDE, but nothing to VSC. Any suggestions? A: There is a better solution now, the "Compare Helper" extension: https://marketplace.visualstudio.com/items?itemName=keewek.compare-helper Once installed and configured, you can select files or folders from the explorer and compare them via the context menu. Works like a charm, and configuration is trivial: "compareHelper.defaultExternalTools": { "folders": "bcompare", "images": "bcompare", "text": "bcompare" }, "compareHelper.externalTools": [ { "name": "bcompare", "path": "C:/Program Files/Beyond Compare 4/BCompare.exe", "compares": ["text", "folders", "images"] } ], A: I would file an issue/enhancement on Microsoft's Github @ the VSCode repo: https://github.com/Microsoft/vscode Best case, it's doable and someone there can direct you pretty quick on how to accomplish it; worst case it's added as an enhancement request and added into Code itself in due time. A: I came here searching for a solution to use Beyond Compare from within the VS Code sidebar explorer, which is probably not exactly what the OP was after. However, maybe he or others might still find this useful: There is an extension called "Windows Explorer Context Menu" which adds the option to show the native shell context menu for a selected file or folder in the VS Code explorer. Once the extension is installed, you can right-click a file or folder, choose Context Menu - Selected and then the desired Beyond Compare operation from the native shell menu. Unfortunately it does not recognise multiple selected files, so in order to compare two files or folders you have to do this twice, first Select left file/folder for Compare and then Compare (so tbh it's not really easier than just doing a Reveal in Explorer, but at least you can stay inside the VS Code context). A: Try this extension: GitDiffer - Visual Studio Marketplace It works for me on Windows 10, here is my .gitconfig settings [difftool "sourcetree"] cmd = 'C:/Program Files/Beyond Compare 4/BComp.exe' \"$LOCAL\" \"$REMOTE\" [mergetool "sourcetree"] cmd = 'C:/Program Files/Beyond Compare 4/BComp.exe' \"$LOCAL\" \"$REMOTE\" \"$BASE\" \"$MERGED\" trustExitCode = true [merge] tool = sourcetree [diff] guitool = sourcetree A: For VS Code on Mac OS 1. Install the VS Code Compare Helper Extension 2. Install Beyond Compare command line tools Inside of Beyond Compare install the command line tools from the menu 3. Add the following to your VSCode settings.json "compareHelper.defaultExternalTools": { "folders": "bcompare", "images": "bcompare", "text": "bcompare" }, "compareHelper.externalTools": [ { "name": "bcompare", "path": "bcompare", "compares": [ "text", "folders", "images" ], "args": [ "${FOLDER_ITEM_1}", "${FOLDER_ITEM_2}" ] } ],
stackoverflow
{ "language": "en", "length": 468, "provenance": "stackexchange_0000F.jsonl.gz:858726", "question_score": "31", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44523201" }
0a3f699fc1637c36461e848ef4e015a631a198e7
Stackoverflow Stackexchange Q: Spring Data JPA: Named method without JpaRepository I would like to have a single-method interface with the method: boolean existsByStrAndStatus(String str, Character status); and have it work as-is as a named method. However, all the examples I've seen of this inherit from JpaRepository and I don't want to inherit from this interface as any implementation that I write (for testing purposes) also need to inherit all the built-in convenience methods that JpaRepository provide such as findAll, flush etc.. I am well aware of mocking frameworks, but I am looking for a solution that doesn't involve using for example Mockito. Is there an alternative to JpaRepository where I can still @Autowire this repository as I see fit, but if I need to write an implementation I only need to implement my own method? A: I think you should create custom implementation of your interface: class TestRepository implements Repository Which will implement only your custom method and rest leave unimplemented. Then you can use it in your tests.
Q: Spring Data JPA: Named method without JpaRepository I would like to have a single-method interface with the method: boolean existsByStrAndStatus(String str, Character status); and have it work as-is as a named method. However, all the examples I've seen of this inherit from JpaRepository and I don't want to inherit from this interface as any implementation that I write (for testing purposes) also need to inherit all the built-in convenience methods that JpaRepository provide such as findAll, flush etc.. I am well aware of mocking frameworks, but I am looking for a solution that doesn't involve using for example Mockito. Is there an alternative to JpaRepository where I can still @Autowire this repository as I see fit, but if I need to write an implementation I only need to implement my own method? A: I think you should create custom implementation of your interface: class TestRepository implements Repository Which will implement only your custom method and rest leave unimplemented. Then you can use it in your tests.
stackoverflow
{ "language": "en", "length": 167, "provenance": "stackexchange_0000F.jsonl.gz:858756", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44523325" }
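To flesh out the Spring Data answer above, a sketch of what extending the plain Repository marker interface looks like; Thing and its Long id are assumed entity details, and the point is that only the declared query method is exposed, so a hand-written test double has exactly one method to implement and none of the JpaRepository surface.

import org.springframework.data.repository.Repository;

// Only the single derived query method is exposed; no findAll, flush, etc.
public interface ThingRepository extends Repository<Thing, Long> {
    boolean existsByStrAndStatus(String str, Character status);
}

// Trivial hand-rolled stub for tests, no mocking framework required
class AlwaysExistsThingRepository implements ThingRepository {
    @Override
    public boolean existsByStrAndStatus(String str, Character status) {
        return true;
    }
}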
88e3312bbc600d538f4d1f5f5c8d5b6da02332c3
Stackoverflow Stackexchange Q: Adding virtual specifier in a derived class Consider the following code: struct virtualfoo { virtualfoo() {} virtual ~virtualfoo() {} virtual double doStuff() = 0; }; struct realbar : virtualfoo { realbar() {} virtual ~realbar() {} virtual double doStuff(); }; Since I want to implement doStuff() for realbar, virtual isn't mandatory. But if I get this right, it won't hurt to have the virtual specifier next to realbar::doStuff(), does it? What side effects could I get with using/not using virtual? A: The virtual keyword is not necessary in the derived class. However, it makes the code clearer. Also, in C++11 the override keyword was introduced, which allows the source code to clearly specify that a member function is intended to override a base class method. With the override keyword the compiler will check the base class(es) to see if there is a virtual function with this exact signature. And if there is not, the compiler will throw an error.
Q: Adding virtual specifier in a derived class Consider the following code: struct virtualfoo { virtualfoo() {} virtual ~virtualfoo() {} virtual double doStuff() = 0; }; struct realbar : virtualfoo { realbar() {} virtual ~realbar() {} virtual double doStuff(); }; Since I want to implement doStuff() for realbar, virtual isn't mandatory. But if I get this right, it won't hurt to have the virtual specifier next to realbar::doStuff(), does it? What side effects could I get with using/not using virtual? A: The virtual keyword is not necessary in the derived class. However, it makes the code clearer. Also, in C++11 the override keyword was introduced, which allows the source code to clearly specify that a member function is intended to override a base class method. With the override keyword the compiler will check the base class(es) to see if there is a virtual function with this exact signature. And if there is not, the compiler will throw an error. A: It doesn't matter whether you explicitly declare realbar::doStuff as virtual, since it is implicitly virtual due to virtualfoo::doStuff being virtual. So no side effects; realbar::doStuff will be virtual anyway. Confer, for example, this online C++ draft standard: 10.3 Virtual functions (2) If a virtual member function vf is declared in a class Base and in a class Derived, derived directly or indirectly from Base, a member function vf with the same name, parameter-type-list (8.3.5), cv-qualification, and ref-qualifier (or absence of same) as Base::vf is declared, then Derived::vf is also virtual (whether or not it is so declared) and it overrides Base::vf. ...
stackoverflow
{ "language": "en", "length": 255, "provenance": "stackexchange_0000F.jsonl.gz:858760", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44523336" }
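Since the first answer in the entry above brings up override, here is a compact sketch of how the derived class from the question is usually written after C++11; repeating virtual becomes optional and the compiler rejects the declaration if the base has no matching virtual function.

struct realbar : virtualfoo {
    realbar() {}
    ~realbar() override {}
    double doStuff() override;   // compile error if virtualfoo::doStuff() does not exist or is not virtual
};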
1f5ffca266eb843fc20ef53ea1d4a8f31a16d6f8
Stackoverflow Stackexchange Q: Java "new File()" does not create file This will be a really dumb question, but i can't seem to create a new file in java to save my life. It always throws java.io.FileNotFoundException: Users/username/Documents/testProject/test.txt (No such file or directory) I have tried like this: File newFile = new File("Users/username/Documents/testProject/test.txt"); and tried this: File newFile = new File("/Users/username/Documents/testProject/test.txt"); What am i doing wrong? Edit: apparently the issue wasn't there. I was trying to read from an empty file later on in the code, sorry folks. A: new File("...") does not create a new file. It creates a new object (in memory) containing a filename. You can then perform operations like exists(), canRead() and isDirectory() on it, and you can invoke createNewFile() to create an actual file out of it.
Q: Java "new File()" does not create file This will be a really dumb question, but i can't seem to create a new file in java to save my life. It always throws java.io.FileNotFoundException: Users/username/Documents/testProject/test.txt (No such file or directory) I have tried like this: File newFile = new File("Users/username/Documents/testProject/test.txt"); and tried this: File newFile = new File("/Users/username/Documents/testProject/test.txt"); What am i doing wrong? Edit: apparently the issue wasn't there. I was trying to read from an empty file later on in the code, sorry folks. A: new File("...") does not create a new file. It creates a new object (in memory) containing a filename. You can then perform operations like exists(), canRead() and isDirectory() on it, and you can invoke createNewFile() to create an actual file out of it. A: In addition to Mike's answer: on Windows you will probably need to write double backslashes (\\) rather than a single backslash (\) in path literals, since the backslash is an escape character in Java strings. I am not sure if this applies in every situation, but if you still get errors, try this.
stackoverflow
{ "language": "en", "length": 176, "provenance": "stackexchange_0000F.jsonl.gz:858767", "question_score": "6", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44523347" }
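A small sketch of the createNewFile() route described in the answer above, with the parent directories created first, since a FileNotFoundException complaining about "No such file or directory" usually means one of the folders in the path is missing; the path is taken from the question.

import java.io.File;
import java.io.IOException;

public class CreateFileDemo {
    public static void main(String[] args) throws IOException {
        File newFile = new File("/Users/username/Documents/testProject/test.txt");
        newFile.getParentFile().mkdirs();           // make sure the directory chain exists
        boolean created = newFile.createNewFile();  // returns false if the file already existed
        System.out.println("created: " + created);
    }
}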
a39070368016614aaf1d439e150a4c5489752284
Stackoverflow Stackexchange Q: How do I exclude files from css lint in vscode VS Code seems to have a built in CSS linter, which is great. But there's a lot of code that I don't want to lint, as shown in the below screenshot. How do I exclude certain directories from being linted? A: Unfortunately, VS Code's built-in linter doesn't have an exclusion setting. You can only disable specific rules (see css. properties in settings.json). You could try using the stylelint extension with the ignoreFiles setting (see this answer) instead.
Q: How do I exclude files from css lint in vscode VS Code seems to have a built in CSS linter, which is great. But there's a lot of code that I don't want to lint, as shown in the below screenshot. How do I exclude certain directories from being linted? A: Unfortunately, VS Code's built-in linter doesn't have an exclusion setting. You can only disable specific rules (see css. properties in settings.json). You could try using the stylelint extension with the ignoreFiles setting (see this answer) instead. A: Use: { "css.validate": false } A: I had to use: { "css.lint.unknownProperties": "ignore" } In my settings.json A: One solution is to exclude the folder in */.vscode/settings.json like this: { "files.exclude": { "**/app/**/*.js.map": true, "**/app/**/*.js": true, "**/node_modules/**": true } }
stackoverflow
{ "language": "en", "length": 128, "provenance": "stackexchange_0000F.jsonl.gz:858771", "question_score": "7", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44523355" }
52a2e810aeba5299e334feb11566eb8c8002074a
Stackoverflow Stackexchange Q: TimeSpan addition and subtraction I need to add 2 timespans taken from a textbox formatted like this: mm:ss.fff For example: 00:59,800 + 00:02,300 - the result should be 01:02.100, but instead I have 01:02,060. I think I have a problem with my conversion below: string Sum1 = "00:" + "00:59,800"; Sum1 = Sum1.Replace(',', '.'); double FSum1 = TimeSpan.Parse(Sum1).TotalSeconds; string Sum2 = "00:" + "00:02,300"; Sum2 = Sum2.Replace(',', '.'); double FSum2 = TimeSpan.Parse(Sum2).TotalSeconds; double SumResult = FSum1 + FSum2; maskedTextBoxSumResult.Text = TimeSpan.FromMinutes(SumResult).ToString(@"hh\:mm\:ss\.fff"); Also, I need to do the same with the subtraction. Thanks for your help. A: Are you looking for TimeSpan.ParseExact? string left = "00:59,800"; string right = "00:02,300"; var result = TimeSpan.ParseExact(left, @"mm\:ss\,fff", CultureInfo.InvariantCulture) + TimeSpan.ParseExact(right, @"mm\:ss\,fff", CultureInfo.InvariantCulture); Console.Write(result.ToString(@"mm\:ss\.fff")); Outcome: 01:02.100
Q: TimeSpan addition and subtraction I need to add 2 timespans taken from a textbox formatted like this: mm:ss.fff For example: 00:59,800 + 00:02,300 - the result should be 01:02.100, but instead I have 01:02,060. I think I have a problem with my conversion below: string Sum1 = "00:" + "00:59,800"; Sum1 = Sum1.Replace(',', '.'); double FSum1 = TimeSpan.Parse(Sum1).TotalSeconds; string Sum2 = "00:" + "00:02,300"; Sum2 = Sum2.Replace(',', '.'); double FSum2 = TimeSpan.Parse(Sum2).TotalSeconds; double SumResult = FSum1 + FSum2; maskedTextBoxSumResult.Text = TimeSpan.FromMinutes(SumResult).ToString(@"hh\:mm\:ss\.fff"); Also, I need to do the same with the subtraction. Thanks for your help. A: Are you looking for TimeSpan.ParseExact? string left = "00:59,800"; string right = "00:02,300"; var result = TimeSpan.ParseExact(left, @"mm\:ss\,fff", CultureInfo.InvariantCulture) + TimeSpan.ParseExact(right, @"mm\:ss\,fff", CultureInfo.InvariantCulture); Console.Write(result.ToString(@"mm\:ss\.fff")); Outcome: 01:02.100 A: Here is my solution TimeSpan t1 = TimeSpan.Parse(maskedTextBoxSum1.Text); TimeSpan t2 = TimeSpan.Parse(maskedTextBoxSum2.Text); TimeSpan t3 = t1.Add(t2); maskedTextBoxSumResult.Text = t3.ToString(@"hh\:mm\:ss\.fff"); Thanks A: Don't modify the string to get it to parse. Use the correct cultural info instead: string Sum1 = "00:" + "00:59,800"; string Sum2 = "00:" + "00:02,300"; var frfr = new System.Globalization.CultureInfo("fr-FR"); var FSum1 = TimeSpan.Parse(Sum1, frfr); var FSum2 = TimeSpan.Parse(Sum2, frfr);; var SumResult = FSum1 + FSum2; SumResult.ToString(@"hh\:mm\:ss\.fff").Dump(); A: Many of your conversions are unnecessary. Try this: static void Main() { string Sum1 = "00:" + "00:59,800"; Sum1 = Sum1.Replace(',', '.'); var FSum1 = TimeSpan.Parse(Sum1); string Sum2 = "00:" + "00:02,300"; Sum2 = Sum2.Replace(',', '.'); var FSum2 = TimeSpan.Parse(Sum2); var SumResult = FSum1 + FSum2; var bo = SumResult.ToString(@"hh\:mm\:ss\.fff"); Console.WriteLine(bo); Console.ReadLine(); } A: You don't need convert to double to sum the values. string Sum1 = "00:" + "00:59,800"; string Sum2 = "00:" + "00:02,300"; TimeSpan sumResult = TimeSpan.Parse(Sum1) + TimeSpan.Parse(Sum2); maskedTextBoxSumResult.Text = sumResult.ToString(@"hh:mm:ss.fff");
stackoverflow
{ "language": "en", "length": 280, "provenance": "stackexchange_0000F.jsonl.gz:858774", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44523366" }
e3a0b22d4c75779062b6d6bc952fb275c752be6f
Stackoverflow Stackexchange Q: player.getDuration() and player.getCurrentTime() is not function error? I have read similar questions posted but none did help my case. Here is my YouTube iframe: <iframe id="myframe" style="border: solid 4px #37474F; "> </iframe> I added the src attribute dynamically. Here is my player: var player; function onYouTubeIframeAPIReady() { player = new YT.Player('myframe', { events: { 'onReady': onPlayerReady, 'onStateChange': onPlayerStateChange } }); } I am trying to do player.getCurrentTime(); and player.getDuration(); but both keep giving the "function not defined" error. A: The YouTube player API sets player.getCurrentTime not in the constructor but later (possibly when the video is loaded). Likely, this is a bug. To see that this is true add console.log(player.getCurrentTime) before each call. You will see that this is undefined initially and later changes to a function. Work around this by testing whether player.getCurrentTime exists. If not, assume a return value of 0.0. Example: var videoPos = !player.getCurrentTime ? 0.0 : player.getCurrentTime(); The same problem seems to happen with getDuration.
Q: player.getDuration() and player.getCurrentTime() is not function error? I have read similar questions posted but none did help my case. Here is my YouTube iframe: <iframe id="myframe" style="border: solid 4px #37474F; "> </iframe> I added the src attribute dynamically. Here is my player: var player; function onYouTubeIframeAPIReady() { player = new YT.Player('myframe', { events: { 'onReady': onPlayerReady, 'onStateChange': onPlayerStateChange } }); } I am trying to do player.getCurrentTime(); and player.getDuration(); but both keep giving the "function not defined" error. A: The YouTube player API sets player.getCurrentTime not in the constructor but later (possibly when the video is loaded). Likely, this is a bug. To see that this is true add console.log(player.getCurrentTime) before each call. You will see that this is undefined initially and later changes to a function. Work around this by testing whether player.getCurrentTime exists. If not, assume a return value of 0.0. Example: var videoPos = !player.getCurrentTime ? 0.0 : player.getCurrentTime(); The same problem seems to happen with getDuration. A: As usr said, it's likely a bug. And here's another solution: var yourVariable = player.getCurrentTime() || 0;
stackoverflow
{ "language": "en", "length": 179, "provenance": "stackexchange_0000F.jsonl.gz:858787", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44523396" }
b2e4254c95cd610d4004be416616abcb82ec3dfd
Stackoverflow Stackexchange Q: Is a single AWS Cognito Region, us-west-2 for example, suitable for serving Canada, US and Puerto Rico? I'm looking at my options for a managed sign-on service and AWS Cognito looks promising. I notice that its user pools etc. do not currently replicate across regions. I wanted to confirm that one region, us-west-# for example (or us-east-#), would be sufficient for an application that has users spread across Canada, the US and Puerto Rico. A: In general, not only in the case of Cognito, the closer your users are to the data center that hosts your services, the better. This is simply so you can minimize the propagation delays between your clients and the data center hosting your service. Therefore, if you have to choose one region, choose the one that the majority of your clients are closest to. AWS Cognito does not replicate userPools across regions at the moment. Therefore, if you want to use the AccessToken against that userPool you need to go to the region that the userPool resides in. Now, every other service that accepts accessTokens will accept your token in any region, inside or outside AWS.
Q: Is a single AWS Cognito Region, us-west-2 for example, suitable for serving Canada, US and Puerto Rico? I'm looking at my options for a managed sign-on service and AWS Cognito looks promising. I notice that its user pools etc. do not currently replicate across regions. I wanted to confirm that one region, us-west-# for example (or us-east-#), would be sufficient for an application that has users spread across Canada, the US and Puerto Rico. A: In general, not only in the case of Cognito, the closer your users are to the data center that hosts your services, the better. This is simply so you can minimize the propagation delays between your clients and the data center hosting your service. Therefore, if you have to choose one region, choose the one that the majority of your clients are closest to. AWS Cognito does not replicate userPools across regions at the moment. Therefore, if you want to use the AccessToken against that userPool you need to go to the region that the userPool resides in. Now, every other service that accepts accessTokens will accept your token in any region, inside or outside AWS. A: I'm adding this supplementary detail to the question as a reference for the token types that Cognito returns, as I just found it by googling some of the info in the answer above. Using the AccessToken against the userPool would be done for things like updating the user's account information. That would require using the region the pool resides in, since pools are not replicated. http://docs.aws.amazon.com/cognito/latest/developerguide/amazon-cognito-user-pools-using-tokens-with-identity-providers.html ID Token The ID token is represented as a JSON Web Token (JWT). The token contains claims about the identity of the authenticated user. For example, it includes claims such as name, family_name, phone_number, etc. For more information about standard claims, see the OpenID Connect specification. A client app can use this identity information inside the application. The ID token can also be used to authenticate users against your resource servers or server applications. When an ID token is used outside of the application against your web APIs, you must verify the signature of the ID token before you can trust any claims inside the ID token. The ID token expires one hour after the user authenticates. You should not process the ID token in your client or web API after it has expired. Access Token The access token is also represented as a JSON Web Token (JWT). It contains claims about the authenticated user, but unlike the ID token, it does not include all of the user's identity information. The primary purpose of the access token is to authorize operations in the context of the user in the user pool. For example, you can use the access token against Amazon Cognito Identity to update or delete user attributes. The access token can also be used with any of your web APIs to make access control decisions and authorize operations in the context of the user. As with the ID token, you must first verify the signature of the access token in your web APIs before you can trust any claims inside the access token. The access token expires one hour after the user authenticates. It should not be processed after it has expired. Refresh Token The refresh token can only be used against Amazon Cognito to retrieve a new access or ID token. By default, the refresh token expires 30 days after the user authenticates.
When you create an app for your user pool, you can set the app's Refresh token expiration (days) to any value between 1 and 3650.
stackoverflow
{ "language": "en", "length": 606, "provenance": "stackexchange_0000F.jsonl.gz:858800", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44523447" }
c0451741ca95681994c13a58ac2f293dd9f29a8f
Stackoverflow Stackexchange Q: How to integrate Material Components Web with an Angular CLI project? I'm following the Getting Started instructions for adding Material Components for web, and I'm getting the following error on Step 4: <script>mdc.autoInit()</script> -> Uncaught ReferenceError: mdc is not defined at (index):17 As mentioned, this is an Angular CLI project, so the script is being loaded from the .angular-cli.json file: "scripts": [ "../node_modules/material-components-web/dist/material-components-web.js" ] Is there another place in the CLI setup I should be loading the file, or making the call to mdc.autoInit()? I've also looked at the Angular 2 Framework Integration example, and it was no help. It doesn't use the CLI, and it never calls mdc.autoInit(). A: In every project there is an index.html file. You need to put your scripts there.
Q: How to integrate Material Components Web with an Angular CLI project? I'm following the Getting Started instructions for adding Material Components for web, and I'm getting the following error on Step 4: <script>mdc.autoInit()</script> -> Uncaught ReferenceError: mdc is not defined at (index):17 As mentioned, this is an Angular CLI project, so the script is being loaded from the .angular-cli.json file: "scripts": [ "../node_modules/material-components-web/dist/material-components-web.js" ] Is there another place in the CLI setup I should be loading the file, or making the call to mdc.autoInit()? I've also looked at the Angular 2 Framework Integration example, and it was no help. It doesn't use the CLI, and it never calls mdc.autoInit(). A: In every project there is an index.html file. You need to put your scripts there.
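The error likely happens because scripts listed in .angular-cli.json are emitted in a bundle injected at the end of the page, so an inline <script>mdc.autoInit()</script> in index.html runs before the mdc global exists. A minimal sketch of the suggestion above — the assets path and markup here are assumptions, not taken from the question:
<!-- src/index.html (sketch; assumes the library file was copied into assets) -->
<body>
  <app-root></app-root>
  <!-- load the library first so the mdc global is defined -->
  <script src="assets/material-components-web.js"></script>
  <!-- only then is it safe to auto-initialize the components -->
  <script>mdc.autoInit();</script>
</body>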
stackoverflow
{ "language": "en", "length": 126, "provenance": "stackexchange_0000F.jsonl.gz:858821", "question_score": "4", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44523515" }
53c31d344b146ed1cfbda66b871882a437b1421e
Stackoverflow Stackexchange Q: Get rows inserted after the last fetch from a Mysql table without primary key So, I have a table which has 3 columns. Customer_number, login_hash and some_hash The customer_number is not the auto increment id and the table is being indexed on login_hash. Now the table is updated every hour and new entries are being added. I have to get the new entries, use them for some calls and then store the resulting data. My plan is to always store the last row number in some last_row environment variable and then retrieve values after that row number till the last record. Then update the last_row number. How do I achieve this? And is there any better approach to this problem? I know this is a bad table design but I have to deal with this and can't change it.
Q: Get rows inserted after the last fetch from a Mysql table without primary key So, I have a table which has 3 columns. Customer_number, login_hash and some_hash The customer_number is not the auto increment id and the table is being indexed on login_hash. Now the table is updated every hour and new entries are being added. I have to get the new entries, use them for some calls and then store the resulting data. My plan is to always store the last row number in some last_row environment variable and then retrieve values after that row number till the last record. Then update the last_row number. How do I achieve this? And is there any better approach to this problem? I know this is a bad table design but I have to deal with this and can't change it.
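A rough SQL sketch of the plan described above — the table name new_entries and the offset value are hypothetical, and note that without an auto-increment or timestamp column MySQL does not guarantee row order, so this only works if the insert order happens to be stable:
-- last_row was 120000 after the previous run (hypothetical value)
SELECT customer_number, login_hash, some_hash
FROM new_entries                              -- hypothetical table name
LIMIT 18446744073709551615 OFFSET 120000;     -- everything after the first 120000 rows
-- afterwards, store the new total back into the last_row variable:
SELECT COUNT(*) FROM new_entries;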
stackoverflow
{ "language": "en", "length": 140, "provenance": "stackexchange_0000F.jsonl.gz:858822", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44523516" }
f3c6fcbf943a0790dd4ea80f4b447c30129a8a5e
Stackoverflow Stackexchange Q: Airflow: pass {{ ds }} as param to PostgresOperator i would like to use execution date as parameter to my sql file: i tried dt = '{{ ds }}' s3_to_redshift = PostgresOperator( task_id='s3_to_redshift', postgres_conn_id='redshift', sql='s3_to_redshift.sql', params={'file': dt}, dag=dag ) but it doesn't work. A: dt = '{{ ds }}' Doesn't work because Jinja (the templating engine used within airflow) does not process the entire Dag definition file. For each Operator there are fields which Jinja will process, which are part of the definition of the operator itself. In this case, you can make the params field (which is actually called parameters, make sure to change this) templated if you extend the PostgresOperator like this: class MyPostgresOperator(PostgresOperator): template_fields = ('sql','parameters') Now you should be able to do: s3_to_redshift = MyPostgresOperator( task_id='s3_to_redshift', postgres_conn_id='redshift', sql='s3_to_redshift.sql', parameters={'file': '{{ ds }}'}, dag=dag )
Q: Airflow: pass {{ ds }} as param to PostgresOperator i would like to use execution date as parameter to my sql file: i tried dt = '{{ ds }}' s3_to_redshift = PostgresOperator( task_id='s3_to_redshift', postgres_conn_id='redshift', sql='s3_to_redshift.sql', params={'file': dt}, dag=dag ) but it doesn't work. A: dt = '{{ ds }}' Doesn't work because Jinja (the templating engine used within airflow) does not process the entire Dag definition file. For each Operator there are fields which Jinja will process, which are part of the definition of the operator itself. In this case, you can make the params field (which is actually called parameters, make sure to change this) templated if you extend the PostgresOperator like this: class MyPostgresOperator(PostgresOperator): template_fields = ('sql','parameters') Now you should be able to do: s3_to_redshift = MyPostgresOperator( task_id='s3_to_redshift', postgres_conn_id='redshift', sql='s3_to_redshift.sql', parameters={'file': '{{ ds }}'}, dag=dag ) A: PostgresOperator / JDBCOperator inherit from BaseOperator. One of the input parameters of BaseOperator is params: self.params = params or {} # Available in templates! So, you should be able to use it without creating a new class: (even though params is not included into template_fields) t1 = JdbcOperator( task_id='copy', sql='copy.sql', jdbc_conn_id='connection_name', params={'schema_name':'public'}, dag=dag ) SQL statement (copy.sql) might look like: copy {{ params.schema_name }}.table_name from 's3://.../table_name.csv' iam_role 'arn:aws:iam::<acc_num>:role/<role_name>' csv IGNOREHEADER 1 Note: copy.sql resides at the same location where the DAG is located. OR you can define "template_searchpath" variable in "default_args" and specify absolute path to the folder where template file resides. For example: 'template_searchpath': '/home/user/airflow/templates/'
stackoverflow
{ "language": "en", "length": 246, "provenance": "stackexchange_0000F.jsonl.gz:858839", "question_score": "15", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44523567" }
4883af3a8d0cf5266893e1207fd748d3c8c03b65
Stackoverflow Stackexchange Q: getUserMedia() is not allowed in localhost - Safari 11 Trying to call getUserMedia from an insecure document. I'm testing the Safari 11 tech preview. Got this error while trying to run a basic peer example on localhost. Does anyone experience the same, or is localhost treated as an insecure origin in Safari 11? Any flag or setting to allow this in Safari? Currently I'm using ngrok to tunnel it via HTTPS and accessing it on the same machine. A: In the latest version of Safari, the option to allow media capture from insecure sites is located directly in the Web Inspector window:
Q: getUserMedia() is not allowed in localhost - Safari 11 Trying to call getUserMedia from an insecure document. I'm testing the Safari 11 tech preview. Got this error while trying to run a basic peer example on localhost. Does anyone experience the same, or is localhost treated as an insecure origin in Safari 11? Any flag or setting to allow this in Safari? Currently I'm using ngrok to tunnel it via HTTPS and accessing it on the same machine. A: In the latest version of Safari, the option to allow media capture from insecure sites is located directly in the Web Inspector window:
stackoverflow
{ "language": "en", "length": 146, "provenance": "stackexchange_0000F.jsonl.gz:858851", "question_score": "17", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44523609" }
272af5c4720d8f718fec523b3a3c75edea2e8b30
Stackoverflow Stackexchange Q: Animated marker with Mapbox GL JS I would like to animate (having an animated gif, or a png sequence) a marker with mapbox gl js. Does anyone has any link/doc/ressource talking about it? I can't find nothing but marker animation along lines. Thanks in advance A: Sure, you can use this example and create its variations based on your needs. For example, use following code to display gif marker: mapboxgl.accessToken = '<api key>'; var map = new mapboxgl.Map({ container: 'map', style: 'mapbox://styles/mapbox/streets-v9', center: [-74.50, 40], zoom: 9 }); function addeMarkerToMap(map, coordinates) { // create a DOM element for the marker var el = document.createElement('div'); el.className = 'marker'; // add marker to map new mapboxgl.Marker(el) .setLngLat(coordinates) .addTo(map); } addeMarkerToMap(map, [-74.5, 40]); body { margin: 0; padding: 0; } #map { position: absolute; top: 0; bottom: 0; width: 100%; } .marker { background-image: url(https://media.giphy.com/media/Bfa45K0r6cCIw/giphy.gif); width: 32px; height: 32px; } <script src='https://api.tiles.mapbox.com/mapbox-gl-js/v0.51.0/mapbox-gl.js'></script> <link href='https://api.tiles.mapbox.com/mapbox-gl-js/v0.51.0/mapbox-gl.css' rel='stylesheet' /> <div id='map' class="myMap"></div> PS: Replace <api key> with your actual API key to run the example code without authorization warning.
Q: Animated marker with Mapbox GL JS I would like to animate (having an animated gif, or a png sequence) a marker with mapbox gl js. Does anyone has any link/doc/ressource talking about it? I can't find nothing but marker animation along lines. Thanks in advance A: Sure, you can use this example and create its variations based on your needs. For example, use following code to display gif marker: mapboxgl.accessToken = '<api key>'; var map = new mapboxgl.Map({ container: 'map', style: 'mapbox://styles/mapbox/streets-v9', center: [-74.50, 40], zoom: 9 }); function addeMarkerToMap(map, coordinates) { // create a DOM element for the marker var el = document.createElement('div'); el.className = 'marker'; // add marker to map new mapboxgl.Marker(el) .setLngLat(coordinates) .addTo(map); } addeMarkerToMap(map, [-74.5, 40]); body { margin: 0; padding: 0; } #map { position: absolute; top: 0; bottom: 0; width: 100%; } .marker { background-image: url(https://media.giphy.com/media/Bfa45K0r6cCIw/giphy.gif); width: 32px; height: 32px; } <script src='https://api.tiles.mapbox.com/mapbox-gl-js/v0.51.0/mapbox-gl.js'></script> <link href='https://api.tiles.mapbox.com/mapbox-gl-js/v0.51.0/mapbox-gl.css' rel='stylesheet' /> <div id='map' class="myMap"></div> PS: Replace <api key> with your actual API key to run the example code without authorization warning.
stackoverflow
{ "language": "en", "length": 174, "provenance": "stackexchange_0000F.jsonl.gz:858869", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44523655" }
1536ca355d10a75ccab59280cfa47c56fbe0f5f1
Stackoverflow Stackexchange Q: Scope for HealthDataTypes of Google Fit I want to read HealthDataTypes. Which Scope must I set when creating GoogleApiClient? .addScope(new Scope(????)) A: Google Fit provides fitness API scopes here. It is a list of specific scopes from which you can choose.
Q: Scope for HealthDataTypes of Google Fit I want to read HealthDataTypes. Which Scope must I set when creating GoogleApiClient? .addScope(new Scope(????)) A: Google Fit provides fitness API scopes here. It is a list of specific scopes from which you can choose. A: Based from this documentation, Google Fit restricts write access for the data types in HealthDataTypes to only certain developers because health data is potentially sensitive. Apps need user permission to read and write data of a restricted type. Any application can read fitness data of a restricted data type, but only Google-approved applications can write data of this type. If you would like to write to a restricted data type: * *Send an email to [email protected] and request to be added to the whitelist of apps allowed to write data of a restricted type to Google Fit. Provide a brief description of the data types you would like access to. *If the data from your application can originate from connected devices, please include the following details about your use case and connected devices: * *Data Type(s) to be written to. *Device model. *Validation Protocols Met (e.g. ESH 2002, BHS, ISO15197:2013). A: Ok, I found the correct answer by myself.. you can create a FittnessOption object using the required data types, and the get the implied scopes for it: GFitUtils.buildFitnessOptions( readTypes, writeTypes ).getImpliedScopes(); (where readTypes and writeTypes are lists of DataTypes in this way, you won't need to harcode the values from the google fit site
stackoverflow
{ "language": "en", "length": 248, "provenance": "stackexchange_0000F.jsonl.gz:858879", "question_score": "7", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44523685" }
b6616b6c04dff3b140526a906787b0e25c2a7113
Stackoverflow Stackexchange Q: How to set a code to automatically add today's date to a filename? Every day I have to change a date "YYMMDD" manually then run a code. I would like to find a way to make this change automatically. So I can just run the code without having to manually enter today's date. In the example below I try to read in the file on june 12th, 2017. task<- read.csv("\pattern~file_170612.txt", sep = " ", header=F, stringsAsFactors = F) A: If it's today's date you need: * *Sys.Date() to get today's date *strftime() or similar to get it in a correct format. %y%m%d means "year in 2 digits, month in 2 digits, day in 2 digits" (see ?strftime ). If you need year in 4 digits, use %Y instead of %y. *paste0() to create the filename a short example: thedate <- strftime(Sys.Date(),"%y%m%d") thefile <- paste0("/pattern~file_",thedate,".txt") thefile # [1] "/pattern~file_170613.txt" On a sidenote: using backward slashes for file paths is not the best idea in R. You better use forward slashes.
Q: How to set a code to automatically add today's date to a filename? Every day I have to change a date "YYMMDD" manually then run a code. I would like to find a way to make this change automatically. So I can just run the code without having to manually enter today's date. In the example below I try to read in the file on june 12th, 2017. task<- read.csv("\pattern~file_170612.txt", sep = " ", header=F, stringsAsFactors = F) A: If it's today's date you need: * *Sys.Date() to get today's date *strftime() or similar to get it in a correct format. %y%m%d means "year in 2 digits, month in 2 digits, day in 2 digits" (see ?strftime ). If you need year in 4 digits, use %Y instead of %y. *paste0() to create the filename a short example: thedate <- strftime(Sys.Date(),"%y%m%d") thefile <- paste0("/pattern~file_",thedate,".txt") thefile # [1] "/pattern~file_170613.txt" On a sidenote: using backward slashes for file paths is not the best idea in R. You better use forward slashes.
stackoverflow
{ "language": "en", "length": 170, "provenance": "stackexchange_0000F.jsonl.gz:858887", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44523726" }
823e023e362cf50284569508e153b3b3ee722eac
Stackoverflow Stackexchange Q: Use @ApiParam or @ApiModelProperty in swagger? Both the following annotations work for adding metadata to swagger-ui docs. Which one should be prefered, and why? public class MyReq { @ApiModelProperty(required = true, value = "the persons name") @ApiParam(required = true, value = "the persons name") private String name; } @RestController public class MyServlet { @RequestMapping("/") public void test(MyReq req) { } } A: There is a huge difference between the two. They are both used to add metadata to swagger but they add different metadata. @ApiParam is for parameters. It is usually defined in the API Resource request class. Example of @ApiParam: /users?age=50 it can be used to define parameter age and the following fields: * *paramType: query *name: age *description: age of the user *required: true @ApiModelProperty is used for adding properties for models. You will use it in your model class on the model properties. Example: model User has name and age as properties: name and age then for each property you can define the following: For age: * *type: integer, *format": int64, *description: age of the user, Check out the fields each denote in the swagger objects: @ApiModelProperty- https://github.com/OAI/OpenAPI-Specification/blob/master/versions/1.2.md#529-property-object @ApiParam - https://github.com/OAI/OpenAPI-Specification/blob/master/versions/1.2.md#524-parameter-object
Q: Use @ApiParam or @ApiModelProperty in swagger? Both the following annotations work for adding metadata to swagger-ui docs. Which one should be prefered, and why? public class MyReq { @ApiModelProperty(required = true, value = "the persons name") @ApiParam(required = true, value = "the persons name") private String name; } @RestController public class MyServlet { @RequestMapping("/") public void test(MyReq req) { } } A: There is a huge difference between the two. They are both used to add metadata to swagger but they add different metadata. @ApiParam is for parameters. It is usually defined in the API Resource request class. Example of @ApiParam: /users?age=50 it can be used to define parameter age and the following fields: * *paramType: query *name: age *description: age of the user *required: true @ApiModelProperty is used for adding properties for models. You will use it in your model class on the model properties. Example: model User has name and age as properties: name and age then for each property you can define the following: For age: * *type: integer, *format": int64, *description: age of the user, Check out the fields each denote in the swagger objects: @ApiModelProperty- https://github.com/OAI/OpenAPI-Specification/blob/master/versions/1.2.md#529-property-object @ApiParam - https://github.com/OAI/OpenAPI-Specification/blob/master/versions/1.2.md#524-parameter-object
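To make the distinction concrete, a small hedged sketch mirroring the age example above — the Spring MVC wiring and class names are assumptions, not part of the original question: @ApiModelProperty sits on a model field, while @ApiParam sits on a resource method parameter.
import io.swagger.annotations.ApiModelProperty;
import io.swagger.annotations.ApiParam;
import java.util.Collections;
import java.util.List;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

class User {
    @ApiModelProperty(value = "age of the user", required = true, example = "50")
    private int age;   // documented as a model property
}

@RestController
class UserController {
    @GetMapping("/users")
    public List<User> findUsers(
            @ApiParam(value = "age of the user", required = true)
            @RequestParam("age") int age) {   // documented as a query parameter
        return Collections.emptyList();       // placeholder body for the sketch
    }
}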
stackoverflow
{ "language": "en", "length": 195, "provenance": "stackexchange_0000F.jsonl.gz:858903", "question_score": "16", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44523777" }
7eb49084715a522239996db484a563567f863246
Stackoverflow Stackexchange Q: Cron job for git pull I am using crontab to set up a cron job in order to pull from git; manually these commands work, but from the cron job they seem not to: * * * * * * * * * *cd /var/www/project/ && git pull How can that be fixed? A: It's likely due to sudo permissions required on /var/. Make a shell script cron_pull.sh anywhere and run it on cron. In cron_pull.sh: #!/bin/bash cd /var/www/project git pull Run sudo chmod +x cron_pull.sh In sudo crontab -e: * * * * * /path/to/cron_pull.sh
Q: Cron job for git pull I am using crontab to set up a cron job in order to pull from git; manually these commands work, but from the cron job they seem not to: * * * * * * * * * *cd /var/www/project/ && git pull How can that be fixed? A: It's likely due to sudo permissions required on /var/. Make a shell script cron_pull.sh anywhere and run it on cron. In cron_pull.sh: #!/bin/bash cd /var/www/project git pull Run sudo chmod +x cron_pull.sh In sudo crontab -e: * * * * * /path/to/cron_pull.sh
stackoverflow
{ "language": "en", "length": 92, "provenance": "stackexchange_0000F.jsonl.gz:858960", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44523994" }
25ca5b1b606537772139a17fd1551428206a99a8
Stackoverflow Stackexchange Q: 'Authorize' only specific routes of a resource There is this method authorizeResource() which applies specific policies to all routes (except the index route). Is there a way to apply policies only on specific routes, analogous to this function: Route::resource('photo', 'PhotoController', ['only' => [ 'index', 'show' ]]); A: Yes, authorizeResource accepts an $options array as a third parameter. Just pass null for the second argument and the syntax for options is the same as it is for route middleware. public function __construct() { $this->authorizeResource(Photo::class, null, [ 'only' => ['create', 'store'], ]); }
Q: 'Authorize' only specific routes of a resource There is this method authorizeResource() which applies specific policies to all routes (except the index route). Is there a way to apply policies only on specific routes, analogous to this function: Route::resource('photo', 'PhotoController', ['only' => [ 'index', 'show' ]]); A: Yes, authorizeResource accepts an $options array as a third parameter. Just pass null for the second argument and the syntax for options is the same as it is for route middleware. public function __construct() { $this->authorizeResource(Photo::class, null, [ 'only' => ['create', 'store'], ]); } A: You can realistically define middleware in the controller: public PhotoController extends Controller { public function __construct() { $this->middleware("can:save,photo")->only(["save","edit"]); //You get the idea } } This assumes you've written a proper policy (check https://laravel.com/docs/5.4/authorization) A: Despite pointed out by @JeffPucket in his answer, the only option didn't work for me. I'm running Laravel 5.5 and what did work was the inverse logic: public function __construct() { $this->authorizeResource(Photo::class, null, [ 'except' => [ 'index', 'show' ], ]); } Notice that you should pass to that option the actions (controller's methods) you don't want to apply your policy. In this case, index and show will bypass the authorization middleware. Just for comparison, here are the results from php artisan route:list when using each option: only +--------+-----------+------------------------+-----------------+------------------------------------------------+--------------------------------------------------+ | Domain | Method | URI | Name | Action | Middleware | +--------+-----------+------------------------+-----------------+------------------------------------------------+--------------------------------------------------+ | | POST | comment | comment.store | App\Http\Controllers\CommentController@store | web,auth,can:create,App\Http\Controllers\Comment | | | GET|HEAD | comment | comment.index | App\Http\Controllers\CommentController@index | web,auth,can:view,App\Http\Controllers\Comment | | | GET|HEAD | comment/create | comment.create | App\Http\Controllers\CommentController@create | web,auth,can:create,App\Http\Controllers\Comment | | | GET|HEAD | comment/{comment} | comment.show | App\Http\Controllers\CommentController@show | web,auth,can:view,comment | | | PUT|PATCH | comment/{comment} | comment.update | App\Http\Controllers\CommentController@update | web,auth,can:update,comment | | | DELETE | comment/{comment} | comment.destroy | App\Http\Controllers\CommentController@destroy | web,auth,can:delete,comment | | | GET|HEAD | comment/{comment}/edit | comment.edit | App\Http\Controllers\CommentController@edit | web,auth,can:update,comment | +--------+-----------+------------------------+-----------------+------------------------------------------------+--------------------------------------------------+ except +--------+-----------+------------------------+-----------------+------------------------------------------------+--------------------------------------------------+ | Domain | Method | URI | Name | Action | Middleware | +--------+-----------+------------------------+-----------------+------------------------------------------------+--------------------------------------------------+ | | POST | comment | comment.store | App\Http\Controllers\CommentController@store | web,auth,can:create,App\Http\Controllers\Comment | | | GET|HEAD | comment | 
comment.index | App\Http\Controllers\CommentController@index | web,auth | | | GET|HEAD | comment/create | comment.create | App\Http\Controllers\CommentController@create | web,auth,can:create,App\Http\Controllers\Comment | | | GET|HEAD | comment/{comment} | comment.show | App\Http\Controllers\CommentController@show | web,auth | | | PUT|PATCH | comment/{comment} | comment.update | App\Http\Controllers\CommentController@update | web,auth,can:update,comment | | | DELETE | comment/{comment} | comment.destroy | App\Http\Controllers\CommentController@destroy | web,auth,can:delete,comment | | | GET|HEAD | comment/{comment}/edit | comment.edit | App\Http\Controllers\CommentController@edit | web,auth,can:update,comment | +--------+-----------+------------------------+-----------------+------------------------------------------------+--------------------------------------------------+ As you can see above, the middleware is only applied to specific routes when using except. Perhaps this is a bug in the framework. But it's hard to confirm that since this option doesn't seem to be documented. Even details on authorizeResource() method are non-existing.
stackoverflow
{ "language": "en", "length": 461, "provenance": "stackexchange_0000F.jsonl.gz:858982", "question_score": "3", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44524073" }
9747f8da50ef265d252173c2956825d9f6a8565f
Stackoverflow Stackexchange Q: Update Array containing objects using spread operator I have an array containing objects in javascript / typescript. let array = [{id:1,name:'One'}, {id:2, name:'Two'}, {id:3, name: 'Three'}] How can I update name of the second element (with id 2) and copy the array to a new array using javascript spread (...) operator? A: Using Spred Operator, you can update particular array value using following method let array = [ { id: 1, name: "One" }, { id: 2, name: "Two" }, { id: 3, name: "Three" }, ]; const label = "name"; const newValue = "Two Updated"; // Errow comes if index was string, so make sure it was integer const index = 1; // second element, const updatedArray = [ ...array.slice(0, index), { // here update data value ...array[index], [label]: newValue, }, ...array.slice(index + 1), ]; console.log(updatedArray);
Q: Update Array containing objects using spread operator I have an array containing objects in javascript / typescript. let array = [{id:1,name:'One'}, {id:2, name:'Two'}, {id:3, name: 'Three'}] How can I update name of the second element (with id 2) and copy the array to a new array using javascript spread (...) operator? A: Using Spred Operator, you can update particular array value using following method let array = [ { id: 1, name: "One" }, { id: 2, name: "Two" }, { id: 3, name: "Three" }, ]; const label = "name"; const newValue = "Two Updated"; // Errow comes if index was string, so make sure it was integer const index = 1; // second element, const updatedArray = [ ...array.slice(0, index), { // here update data value ...array[index], [label]: newValue, }, ...array.slice(index + 1), ]; console.log(updatedArray); A: You can use a mix of .map and the ... spread operator You can set the value after you've created your new array let array = [{id:1,name:'One'}, {id:2, name:'Two'}, {id:3, name: 'Three'}]; let array2 = array.map(a => {return {...a}}) array2.find(a => a.id == 2).name = "Not Two"; console.log(array); console.log(array2); .as-console-wrapper { max-height: 100% !important; top: 0; } Or you can do it in the .map let array = [{id:1,name:'One'}, {id:2, name:'Two'}, {id:3, name: 'Three'}]; let array2 = array.map(a => { var returnValue = {...a}; if (a.id == 2) { returnValue.name = "Not Two"; } return returnValue }) console.log(array); console.log(array2); .as-console-wrapper { max-height: 100% !important; top: 0; } A: There are a few ways to do this. I would suggest using Array.map : let new_array = array.map(element => element.id == 2 ? {...element, name : 'New Name'} : element); or with Object.assign : let new_array = array.map(element => element.id == 2 ? Object.assign({}, element, {name : 'New Name'}) : element); Map returns a new array, so you shouldn't need the array spread operator. A: We can use let array = [{id:1,name:'One'}, {id:2, name:'Two'}, {id:3, name: 'Three'}]; let array2 = [...array] array2.find(a => a.id == 2).name = "Not Two"; console.log(array2); A: You can simply use map() and change the element there. here is the code--- array_copy = array.map((element) => { console.log(element.id); if (element.id === 2) { element.name = "name changed"; } return element; }); console.log(array_copy); Here the main array also gets modified, as elements inside the array are objects and it references to the same location even in the new array. 
A: You can do it like this in map, no need for spread: const array = [{id:1,name:'One'}, {id:2, name:'Two'}, {id:3, name: 'Three'}] const updatedArray = array.map(a => { if (a.id == 2) { a.name = 'New Name'; } return a; }); A: Merging properties from filterQueryParams to selectedLaws (existing solutions did not suit me): if (this.filterQueryParams && Object.prototype.toString.call(this.filterQueryParams) === '[object Array]') { for (const law of this.filterQueryParams) { if (law as Laws.LawDetail) { const selectedLaw = this.selectedLaws.find(x => x.languageCode === law.languageCode); if (selectedLaw) { for (const propName of Object.keys(law)) { selectedLaw[propName] = law[propName]; } } else { this.selectedLaws.push(law); } } } } A: import React,{useState} from 'react'; export function App(props) { const[myObject,setMyObject] = useState({ "Name":"", "Age":"" }); const[myarray, setmyarray] = useState([]); const addItem =() =>{ setMyObject({...myObject,"Name":"Da","Age":"20"}); setmyarray([...myarray, 1]); }; console.log(myarray);console.log(myObject); return ( <div className='App'> <h1>Hello React.</h1> <h2>Start editing to see some magic happen!</h2> <button onClick={addItem}>Add me</button> </div> ); } // Log to console console.log('Hello console') A: let array = [{id:1,name:'One'}, {id:2, name:'Two'}, {id:3, name: 'Three'}]; let array2 =[...array.slice(0, 0), Object.assign({}, array[0], { name:'new one' //change any property of idx }),...array.slice(0 + 1)] console.log(array); console.log(array2); [...array.slice(0, idx), Object.assign({}, array[idx], { x:new_x //change any property of idx }),...array.slice(idx + 1)]
stackoverflow
{ "language": "en", "length": 596, "provenance": "stackexchange_0000F.jsonl.gz:858999", "question_score": "37", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44524121" }
3f527d75d63416795dca38da2d5ba635cf82369c
Stackoverflow Stackexchange Q: How to add a description to an Excel AddIn How do I add a description to an Excel AddIn like in the picture below: You can see the Euro Tools AddIn has one. A: Like so: File, Info, Properties, Advanced properties. Click the Summary tab. The Title is what shows up as the name of your add-in. The Comments section shows up as description.
Q: How to add a description to an Excel AddIn How do I add a description to an Excel AddIn like in the picture below: You can see the Euro Tools AddIn has one. A: Like so: File, Info, Properties, Advanced properties. Click the Summary tab. The Title is what shows up as the name of your add-in. The Comments section shows up as description.
stackoverflow
{ "language": "en", "length": 65, "provenance": "stackexchange_0000F.jsonl.gz:859006", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44524131" }
b87c7111a3c1aa53dd886732acc414052e8cbf11
Stackoverflow Stackexchange Q: Iterate over $PATH variable using shell script If I type echo $PATH in the terminal I get the following result: /usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/home/fnorbert/.local/bin:/home/fnorbert/bin I want to iterate over these paths using a shell script, but I do not know how to do that. I tried the following: for i in 1 2 3 do echo $PATH | cut -d':' -f$i done This prints the first three paths, but I want to represent every path with the variable i if it is possible. A: Simplest way is probably to change IFS to a colon and let the word splitting do it: IFS=: for p in $PATH ; do echo "$p" done But that might trigger file name globbing, in the weird case that your PATH contained characters like *?[]. You'd need to use set -f to avoid that. Though changing IFS might be considered problematic anyway, since it affects the rest of the script. So in Bash, we could just split the paths to an array with read -a, this doesn't have a problem with glob characters either: IFS=: read -a paths <<< "$PATH" for p in "${paths[@]}" ; do echo "$p" done
Q: Iterate over $PATH variable using shell script If I type echo $PATH in the terminal I get the following result: /usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/home/fnorbert/.local/bin:/home/fnorbert/bin I want to iterate over these paths using a shell script, but I do not know how to do that. I tried the following: for i in 1 2 3 do echo $PATH | cut -d':' -f$i done This prints the first three paths, but I want to represent every path with the variable i if it is possible. A: Simplest way is probably to change IFS to a colon and let the word splitting do it: IFS=: for p in $PATH ; do echo "$p" done But that might trigger file name globbing, in the weird case that your PATH contained characters like *?[]. You'd need to use set -f to avoid that. Though changing IFS might be considered problematic anyway, since it affects the rest of the script. So in Bash, we could just split the paths to an array with read -a, this doesn't have a problem with glob characters either: IFS=: read -a paths <<< "$PATH" for p in "${paths[@]}" ; do echo "$p" done A: with echo: echo "${PATH//:/$'\n'}" sed: sed 's/:/\n/g' <<< "$PATH" tr: tr ':' '\n' <<< "$PATH" python: python -c "import os; print os.environ['PATH'].replace(':', '\n')" for iterate use for: for i in ${PATH//:/ }; do echo $i; done A: Here is a trivial solution, that extends your attempt a little for i in $( echo "$PATH" | cut -d: -f 1- --output-delimiter=" " ) ; do echo "$i" ; done One trick that is used here is -f 1- to specify all fields instead of just one. Another trick is to use the --output-delimiter option. This solution is suffers from sensitivity to special characters in directory names. Consider the following example PATH="help me":\*:now for i in $( echo "$PATH" | cut -d: -f 1- --output-delimiter=" " ) ; do echo "$i" ; done This would output help me foo now That is, * *spaces in directory names will not be treated correctly *special characters, such as * will be expanded by the shell into a list of files in the current directory (foo is the name of a file residing in the current directory) But if your PATH does not contain anything special, this would work. Otherwise or rather in all cases take the solution that uses read. A: You can use read with delimiter set as : while read -d ':' p; do echo "$p" done <<< "$PATH:" A: You can use awk for the requirement, echo $PATH | awk -F: '{ for (i=1; i<=NF; i++) {print $i}}' This is well efficient in performance.
stackoverflow
{ "language": "en", "length": 445, "provenance": "stackexchange_0000F.jsonl.gz:859027", "question_score": "8", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44524190" }
9d74d0c6aa1869ce35c91704d214109c6d0e9c23
Stackoverflow Stackexchange Q: GIT Log show only recent entries I am trying to utilize cmd prompt to get the last 10 commits with the author, commit hash, and description to utilize in a form. I have been experimenting with git log --pretty=short, however, the output seems to go forever. I would like to know how to reduce the amount of commits returned to the last 10 commits utilizing the git log command. I plan on extracting the information into a data structure for later use. A: The following command will display the last 10 commits: git log -n 10 --pretty=short
Q: GIT Log show only recent entries I am trying to utilize cmd prompt to get the last 10 commits with the author, commit hash, and description to utilize in a form. I have been experimenting with git log --pretty=short, however, the output seems to go forever. I would like to know how to reduce the amount of commits returned to the last 10 commits utilizing the git log command. I plan on extracting the information into a data structure for later use. A: The following command will display the last 10 commits: git log -n 10 --pretty=short A: Run: git log -n <number-of-commits> --pretty=short For all options, see: https://git-scm.com/docs/git-log
stackoverflow
{ "language": "en", "length": 110, "provenance": "stackexchange_0000F.jsonl.gz:859040", "question_score": "10", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44524225" }
5937591c8839c9aaa4aee603a8f6d8fe7da9a5f9
Stackoverflow Stackexchange Q: Using proxy like fiddler with fetch api How can I set a proxy using Fetch API. I'm developing a node.js application and I'd like to have a look at the response body of an HTTPS response. I'm using this npm package that uses node http inside: https://www.npmjs.com/package/isomorphic-fetch I tried to set the env variables like: set https_proxy=http://127.0.0.1:8888 set http_proxy=http://127.0.0.1:8888 set NODE_TLS_REJECT_UNAUTHORIZED=0 but it seems to work only with request NPM node module. I always get the following message: http://127.0.0.1:8888 (node:30044) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 1): FetchError: request to https://www.index.hu failed, reason: read ECONNRESET A: @jimliang has posted a solution for node-fetch. He used https://github.com/TooTallNate/node-https-proxy-agent fetch('https://www.google.com',{ agent:new HttpsProxyAgent('http://127.0.0.1:8580')}) .then(function(res){ //... })
Q: Using proxy like fiddler with fetch api How can I set a proxy using Fetch API. I'm developing a node.js application and I'd like to have a look at the response body of an HTTPS response. I'm using this npm package that uses node http inside: https://www.npmjs.com/package/isomorphic-fetch I tried to set the env variables like: set https_proxy=http://127.0.0.1:8888 set http_proxy=http://127.0.0.1:8888 set NODE_TLS_REJECT_UNAUTHORIZED=0 but it seems to work only with request NPM node module. I always get the following message: http://127.0.0.1:8888 (node:30044) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 1): FetchError: request to https://www.index.hu failed, reason: read ECONNRESET A: @jimliang has posted a solution for node-fetch. He used https://github.com/TooTallNate/node-https-proxy-agent fetch('https://www.google.com',{ agent:new HttpsProxyAgent('http://127.0.0.1:8580')}) .then(function(res){ //... })
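A slightly fuller sketch of that answer with the imports it needs — the Fiddler port 8888 from the question is assumed, and newer https-proxy-agent versions export the class as a named export ({ HttpsProxyAgent }) rather than the default shown here:
const fetch = require('isomorphic-fetch');             // or require('node-fetch')
const HttpsProxyAgent = require('https-proxy-agent');  // default export in older versions

const agent = new HttpsProxyAgent('http://127.0.0.1:8888');
fetch('https://www.index.hu', { agent })
  .then(res => res.text())
  .then(body => console.log(body.slice(0, 200)))       // peek at the response body
  .catch(err => console.error(err));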
stackoverflow
{ "language": "en", "length": 113, "provenance": "stackexchange_0000F.jsonl.gz:859046", "question_score": "6", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44524236" }
574627ff3b1f8c0a45704ca0c6802d2dfdb60f7a
Stackoverflow Stackexchange Q: How to customize Froala in angular 2? I have added the Froala editor in my Angular 2 app and it works, I just cant find how to customize the toolbar, to show buttons that I want (bold, italic, underline, etc), any help? https://github.com/froala/angular2-froala-wysiwyg A: You can add option like this: <div *ngIf="homeIsInEditMode" [froalaEditor]="options" [(ngModel)]="homeMessage.libelleMessage"> </div> And in component add options you want: public options: Object = { placeholderText: 'Edit Your Content Here!', charCounterCount: false, toolbarButtons: ['bold', 'italic', 'underline', 'paragraphFormat','alert'], toolbarButtonsXS: ['bold', 'italic', 'underline', 'paragraphFormat','alert'], toolbarButtonsSM: ['bold', 'italic', 'underline', 'paragraphFormat','alert'], toolbarButtonsMD: ['bold', 'italic', 'underline', 'paragraphFormat','alert'], } You can get more details in the official documentation.
Q: How to customize Froala in angular 2? I have added the Froala editor in my Angular 2 app and it works, I just cant find how to customize the toolbar, to show buttons that I want (bold, italic, underline, etc), any help? https://github.com/froala/angular2-froala-wysiwyg A: You can add option like this: <div *ngIf="homeIsInEditMode" [froalaEditor]="options" [(ngModel)]="homeMessage.libelleMessage"> </div> And in component add options you want: public options: Object = { placeholderText: 'Edit Your Content Here!', charCounterCount: false, toolbarButtons: ['bold', 'italic', 'underline', 'paragraphFormat','alert'], toolbarButtonsXS: ['bold', 'italic', 'underline', 'paragraphFormat','alert'], toolbarButtonsSM: ['bold', 'italic', 'underline', 'paragraphFormat','alert'], toolbarButtonsMD: ['bold', 'italic', 'underline', 'paragraphFormat','alert'], } You can get more details in the official documentation. A: add this import 'froala-editor/js/plugins/font_size.min.js'; import 'froala-editor/js/plugins/font_family.min.js'; import 'froala-editor/js/plugins/emoticons.min.js'; import 'froala-editor/js/plugins/colors.min.js'; to a module where you want to insert editor I think it is a very bad implementation, but it work A: You can find an example here of setting custom options in the Angular Demo. The editor has 4 options for controlling the toolbar, as explained on https://www.froala.com/wysiwyg-editor/examples/toolbar-buttons: toolbarButtons for large devices (≥ 1200px) toolbarButtonsMD for medium devices (≥ 992px) toolbarButtonsSM for small devices (≥ 768px) toolbarButtonsXS for extra small devices (< 768px)
stackoverflow
{ "language": "en", "length": 189, "provenance": "stackexchange_0000F.jsonl.gz:859099", "question_score": "5", "source": "stackexchange", "timestamp": "2023-03-29T00:00:00", "url": "https://stackoverflow.com/questions/44524408" }