Dataset columns: date — string (length 10); nb_tokens — int64 (60 to 629k); text_size — int64 (234 to 1.02M); content — string (length 234 to 1.02M).
2018/03/20
1,213
4,125
<issue_start>username_0: I have XML similar in structure to the below example stored in a Camel header as a String: ``` xxx aaa bbb ccc John 4324234 New Book dsdaassda ... ... ``` I would like to extract the library element into another Camel header: ``` xxx aaa bbb ccc ``` How can I achieve this using the Camel Java DSL? (My main source of confusion is applying the XPath to an exchange header item rather than the body)<issue_comment>username_1: This should work: ``` .xpath("[your XPath expression]", "[the name of the header to select from]") ``` Your use case could be done like this ``` from("direct:input") .setHeader("newHeader").xpath("[your XPath]", "[xmlHeader]") .to("..."); ``` You can find this also in the [Camel docs](http://camel.apache.org/xpath.html), Section `Using XPath on Headers`. And it is for example used in this [Camel unit test](https://github.com/apache/camel/blob/master/camel-core/src/test/java/org/apache/camel/builder/xml/XPathHeaderNameTest.java), in the route at the bottom of the file. Upvotes: 3 [selected_answer]<issue_comment>username_2: Try this. ``` @Override public void configure() throws Exception { XPathExpression xPathExpression = new XPathExpression(); xPathExpression.setHeaderName("completexml"); //assuming the header name of the xml is completexml xPathExpression.setExpression("/list/library"); from("direct:abc") .setHeader("OriginalBody", simple("${body}")) //get original body .split(xPathExpression) //split according to xPathExpression .setHeader("library", simple("${body}")) //set new header with only library element .setBody(simple("${header.OriginalBody}")) //set the original body back //.log("${header.completexml}") //.log("${header.library}") //.log("${body}") ; } ``` As per the answer [here](https://stackoverflow.com/questions/18483320/in-camel-how-can-i-split-a-route-using-xpath-on-a-header), we could give a second argument to xpath which is of type `String`. But with the latest version, I don't see any such method on xpath that takes a second argument as a String. Maybe it was removed in newer versions. What I am doing is storing the original body in a header (since after the split, the original body is replaced by the result of the split), then restoring it.
Upvotes: 0 <issue_comment>username_3: Here's a working example of a JUnit test: ``` @Override protected RouteBuilder createRouteBuilder() { return new RouteBuilder() { @Override public void configure() throws Exception { from("direct:start") .routeId("xpathTest") .log(LoggingLevel.INFO, "XML input : \n${body}") .setHeader("XNODE").xpath("/list/library", NodeList.class) .setBody().xpath("/list/library", NodeList.class) .to("log:end?level=INFO&showAll=true&multiline=true"); } }; } ``` Output log: ``` 2018-03-22 00:50:00,141 [main ] INFO end - Exchange[ , Id: ID-BlackMirror-10897-1521668997689-0-2 , ExchangePattern: InOnly , Properties: {CamelCreatedTimestamp=Thu Mar 22 00:49:59 MSK 2018, CamelMessageHistory=[DefaultMessageHistory[routeId=xpathTest, node=log1], DefaultMessageHistory[routeId=xpathTest, node=setHeader1], DefaultMessageHistory[routeId=xpathTest, node=setBody1], DefaultMessageHistory[routeId=xpathTest, node=to1]], CamelToEndpoint=log://end?level=INFO&multiline=true&showAll=true} , Headers: {breadcrumbId=ID-BlackMirror-10897-1521668997689-0-1, XNODE=org.apache.xml.dtm.ref.DTMNodeList@5acf93bb} , BodyType: org.apache.xml.dtm.ref.DTMNodeList , Body: xxx aaa bbb ccc , Out: null: ``` So for the header you need the expression: ``` .setHeader("XNODE").xpath("/list/library", NodeList.class) ``` and for the body: ``` .setBody().xpath("/list/library", NodeList.class) ``` Upvotes: 0 <issue_comment>username_4: You can do it like this: ``` .setHeader("yourHeaderName", xpath("//*[local-name()='library']").stringResult()) ``` So you will have the library structure to work with Upvotes: 0
2018/03/20
320
1,125
<issue_start>username_0: How to pass the data of a hidden input through AJAX: It's not a form, just a hidden input. ``` $.ajax({ type: 'POST', url: 'messages-get-new.php', data: ?????????? success: function (data) { console.log('Submission was successful.'); console.log(data); $container.append(data); }, error: function (data) { console.log('An error occurred.'); console.log(data); }, }); ```<issue_comment>username_1: {'message-ids' : $('#message-ids').val()} Upvotes: 1 <issue_comment>username_2: ***use the below code*** ``` data:{ yourparameter: $("#message-ids").val()} ``` Upvotes: 0 <issue_comment>username_3: As others have said, use ``` {'message-ids' : $('#message-ids').val()} ``` The explanation being, ``` {'message-ids' //this is your key that ties to the value : $('#message-ids') //this uses jQuery to get the element that has an id of "message-ids" //The # sign is used to find something with an id that matches what comes after the # sign .val()} //This simply returns the value of that element ``` Upvotes: 3 [selected_answer]
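Putting the selected answer together with the snippet from the question, a complete call might look like this (a sketch; the `message-ids` id and the endpoint are taken from the thread, everything else is unchanged):

```js
// Read the hidden input's value with jQuery and send it as a POST field
$.ajax({
    type: 'POST',
    url: 'messages-get-new.php',
    data: { 'message-ids': $('#message-ids').val() },
    success: function (data) {
        console.log('Submission was successful.');
        console.log(data);
        $container.append(data);
    },
    error: function (data) {
        console.log('An error occurred.');
        console.log(data);
    }
});
```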
2018/03/20
771
3,071
<issue_start>username_0: I need to send the response before executing some part of the code using PHP. I am explaining my code below. ``` $name=$_POST['name']; $email=$_POST['email']; $mobile=$_POST['mobile']; $description=mysqli_real_escape_string($connect,$_POST['description']); $date=date('m/d/Y H:i:s A'); $sql=mysqli_query($connect,'INSERT INTO db_feedback (name,email,mobile,description,date_added) values ("'.$name.'","'.$email.'","'.$mobile.'","'.$description.'","'.$date.'")'); if ($sql) { $result=array("msg"=>"Thank You !! Your feedback is sucessfully submitted.","status"=>1); echo json_encode($result); $email='<EMAIL>'; $name='Admin'; $msgSub="User Feedback"; $message=$description; $uname=$name; $uphone=$mobile; $uemail=$email; ob_start(); include "feedbackTemplate.php"; $msg_body=ob_get_clean(); $is_send=sendMail($email,'<EMAIL>',$msgSub,$msg_body); }else{ $result=array("msg"=>"Failed","status"=>0); echo json_encode($result); } ``` Here I am sending the email after inserting the user data into the database. I need `echo json_encode($result);` to send the response to the user before the email code part executes, and the email part to execute afterwards.<issue_comment>username_1: The safest way is to put the task "send this email to that person" into the database, and then fire a script from crontab to process those tasks. The script could be scheduled to run, for example, every 5 minutes. But be aware that it should not take more than 5 minutes to process the tasks, so that more than one instance won't be running at the same time. I usually restricted the count of tasks processed per script run and used the `timeout` shell command to kill the script if it took too much time. Nowadays I use Redis or RabbitMQ and forking workers. Look up "php background tasks" in Google and check [php-enqueue](https://enqueue.forma-pro.com/). Upvotes: 0 <issue_comment>username_2: If you're running `php-fpm`, which you should since it's 2018 and all popular web servers support it, then you can use the [fastcgi\_finish\_request](http://php.net/fastcgi_finish_request) function. What does it do for you? You do your database insert, you send a response to the user, you call `fastcgi_finish_request`, the output is sent to the user, and after that function you invoke your mailing code. The long-running task is executed after the user receives the response. Code: ``` echo json_encode($result); // Output is sent to user, but below code resumes execution fastcgi_finish_request(); // This basically resumes in background include "feedbackTemplate.php"; $msg_body=ob_get_clean(); $is_send=sendMail($email,'<EMAIL>',$msgSub,$msg_body); ``` **Note: this is only available when you run PHP using** `php-fpm`. Upvotes: 1
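As a rough illustration of username_1's queue approach (the `mail_queue` table, its columns, and the `$pdo` connection are assumptions for this sketch, not from the thread; `sendMail` is the thread's own helper):

```php
<?php
// Producer: instead of sending the mail inline, queue it
$stmt = $pdo->prepare(
    'INSERT INTO mail_queue (recipient, subject, body, created_at) VALUES (?, ?, ?, NOW())');
$stmt->execute([$email, $msgSub, $msg_body]);

// Worker (run from crontab, e.g. every 5 minutes): process a bounded batch
$rows = $pdo->query(
    'SELECT id, recipient, subject, body FROM mail_queue ORDER BY id LIMIT 50');
foreach ($rows as $row) {
    if (sendMail($row['recipient'], '<EMAIL>', $row['subject'], $row['body'])) {
        // remove the task only after the mail actually went out
        $pdo->prepare('DELETE FROM mail_queue WHERE id = ?')->execute([$row['id']]);
    }
}
```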
2018/03/20
611
2,201
<issue_start>username_0: I am trying to run `cordova emulate android` but I am getting this error. I have installed `gradle`, and I also configured `$ANDROID_HOME` path variable. **The error message:** FAILURE: Build failed with an exception. * What went wrong: A problem occurred configuring root project 'android'. > > Could not resolve all files for configuration ':classpath'. > Could not download proguard-base.jar (net.sf.proguard:proguard-base:5.3.3) > Could not get resource '<https://jcenter.bintray.com/net/sf/proguard/proguard-base/5.3.3/proguard-base-5.3.3.jar>'. > Response 304: Not Modified has no content! > > > How can I solve this error?
2018/03/20
778
2,760
<issue_start>username_0: For instance, in a Kubernetes cluster environment created using kubeadm, there are management Pods with the same IP address as the host's IP address like below. ``` [root@master1 ~]# kubectl get pods -n kube-system -o wide NAME READY STATUS RESTARTS AGE IP NODE etcd-master1 1/1 Running 13 49d 10.91.111.113 master1 kube-apiserver-master1 1/1 Running 18 49d 10.91.111.113 master1 kube-controller-manager-master1 1/1 Running 29 49d 10.91.111.113 master1 kube-proxy-6vrvb 1/1 Running 13 81d 10.91.111.113 master1 kube-proxy-gsxcg 1/1 Running 13 81d 10.91.111.114 worker1 kube-proxy-lcjvw 1/1 NodeLost 1 64d 10.91.111.115 worker2 kube-scheduler-master1 1/1 Running 26 49d 10.91.111.113 master1 ``` I would like to know how to write a yaml file for assigning the same IP address as the host to pod.
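The management pods shown above get the node's address because they run with host networking. A minimal sketch of a pod spec using the same mechanism (the pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: host-network-pod
spec:
  hostNetwork: true   # share the node's network namespace -> pod IP == node IP
  containers:
  - name: app
    image: nginx
```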
2018/03/20
537
1,674
<issue_start>username_0: > > This is the code right here. I can't fix this thing. Please help > > > ``` php $strNewsQuery = ' SELECT TOP 15 N.[title], C.[title] AS category, C.[icon] AS icon, N.[datetime], N.[forenlink], N.[text] FROM [' . $_CONFIG['db_databases']['web'] . '].[dbo].[web_news] AS N LEFT JOIN [' . $_CONFIG['db_databases']['web'] . '].[dbo].[web_newscategories] AS C ON C.ncatid = N.category ORDER BY nid DESC '; $dbNewsQuery = @odbc_exec($odbc_connect, $strNewsQuery); if(@odbc_num_rows($dbNewsQuery) > 0) { while($mixedNewsEntry = @odbc_fetch_array($dbNewsQuery)) { echo ' ![](images/icons/news/'.$mixedNewsEntry['icon'].'></span> </div> <div class=) ' . $mixedNewsEntry['title'] . ' ' . nl2br($mixedNewsEntry['text']) . ' ' . $_LANG['publish_date'] . ': ' . date($_CONFIG['web_date_format']['full'], strtotime($mixedNewsEntry['datetime'])) . ' ' . $_LANG['oclock_optional'] . ' '; if(!empty($mixedNewsEntry['forenlink'])) { echo ' ' . $_LANG['news_link_to_forum'] . ': [READ MORE>>>](' . htmlspecialchars($mixedNewsEntry['forenlink']) . ' "Forum"). '; } echo ' '; } } else { echo createMessage($_LANG['no_entry'], 'hint'); } ? ``` > > How can I limit this SQL to 5 rows? I tried the "limit 0,5" method at the end of "ORDER BY nid DESC" but it does not work, and I tried other methods like the Java syntax. > > ><issue_comment>username_1: Change `SELECT TOP 15` to `SELECT TOP 5` Upvotes: 2 [selected_answer]<issue_comment>username_2: The SQL for selecting 5 rows is: `SELECT * FROM TABLE1 ORDER BY id DESC LIMIT 5;` Upvotes: 0
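Applied to the query from the question, the accepted suggestion amounts to nothing more than this (a sketch; `webdb` stands in for the database name read from `$_CONFIG['db_databases']['web']`):

```sql
SELECT TOP 5
    N.[title], C.[title] AS category, C.[icon] AS icon,
    N.[datetime], N.[forenlink], N.[text]
FROM [webdb].[dbo].[web_news] AS N
LEFT JOIN [webdb].[dbo].[web_newscategories] AS C ON C.ncatid = N.category
ORDER BY nid DESC
```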
2018/03/20
621
2,340
<issue_start>username_0: I have an input stream as wso2event on WSO2 SP, and it contains a timestamp and a payload. The payload is in XML format. How can I parse this payload into variables? I see this doc <https://wso2-extensions.github.io/siddhi-map-xml/api/4.0.11/> but I don't know how to get the incoming wso2event to the sourcemapper and then parse it into variables.<issue_comment>username_1: **siddhi-map-\*** extensions are used to map input/output event attributes at the source/sink level. Since this xml payload is an attribute of another stream, you can use the **siddhi-execution-map** extension to create a map from that xml. Then you can handle the xml payload as a hashmap within the siddhi app. Please refer to the documentation[1] for more details on this. [1] <https://wso2-extensions.github.io/siddhi-execution-map/> Upvotes: 3 [selected_answer]<issue_comment>username_2: OK, I realized that there is a bug in the <https://wso2-extensions.github.io/siddhi-execution-map/api/latest/> **createFromXML** function. This function correctly parses the xml file over each element, but the HashMap is badly defined: because the function is called recursively, every cycle creates a new map and overrides the data in the old map. So at the end, there is a map with only one key. ``` private Object getMapFromXML(OMElement parentElement) throws XMLStreamException { Map<String, Object> topLevelMap = new HashMap<>(); Iterator iterator = parentElement.getChildElements(); while (iterator.hasNext()) { OMElement streamAttributeElement = (OMElement) iterator.next(); String key = streamAttributeElement.getQName().toString(); Object value; if (streamAttributeElement.getFirstElement() != null) { value = getMapFromXML(streamAttributeElement); } else { logger.info("getFirstElement is null now, iam in else - " + key); String elementText = streamAttributeElement.getText(); if (elementText.equals("true") || elementText.equals("false")) { value = Boolean.parseBoolean(elementText); } else { if (NumberUtils.isNumber(elementText)) { try { value = numberFormat.parse(elementText); } catch (ParseException e) { value = elementText; } } else { value = elementText; } } } topLevelMap.put(key, value); } return topLevelMap; } ``` topLevelMap should be declared as a private global variable. Could someone make a ticket on the wso2 GitHub to resolve this bug, please? Upvotes: 1
2018/03/20
1,469
5,018
<issue_start>username_0: Quoted from C++ Primer 5th **19.2.1. The dynamic\_cast Operator** > > A dynamic\_cast has the following form: > > > ``` dynamic_cast<type*>(e) dynamic_cast<type&>(e) dynamic_cast<type&&>(e) ``` > > where type must be a class type and (ordinarily) names a class that has virtual > functions. In the first case, `e` must be a valid pointer (§ 2.3.2, p. 52); in the second, `e` > must be an lvalue; and in the third, `e` must not be an lvalue. > > > In all cases, **the type of `e` must be either a class type that is publicly derived from > the target type, a public base class of the target type, or the same as the target > type. If `e` has one of these types, then the cast will succeed. Otherwise, the cast fails.** > > If a dynamic\_cast to a pointer type fails, the result is 0. If a dynamic\_cast to a > reference type fails, the operator throws an exception of type `bad_cast` > > > However, here I've written a code snippet: ``` struct A {}; struct B : private A // note: *private* inheritance { A* test() { return dynamic_cast<A*>(this); } }; int main() { B b; if(b.test()==nullptr) throw 1; } ``` In the code snippet above, `A` is just a private base of `B`, which is not taken into account by the C++ Primer. However, **this code snippet could be compiled and run without error**. Has the primer made a mistake?<issue_comment>username_1: This is all in all an unfortunate phrasing on the primer's part. It bunched the two types of casts one can do into one sentence, and then misspoke as a result. Casting to a base class doesn't require a runtime cast operation. It is, [as username_2 says](https://stackoverflow.com/a/49381799/817643), a purely static construct. And as username_2 quoted, it requires an **accessible** base, not a public one. So your code is all good and well. For a runtime cast (a downcast) the C++ standard places a requirement on the operand and the types involved in a dynamic cast in order for it to succeed. The class must be publicly derived; otherwise the implementation isn't obligated to make a successful cast down the inheritance chain. I mean, it could in theory make the cast successful, but according to the specification [*"the runtime check fails"*](https://timsong-cpp.github.io/cppwp/expr.dynamic.cast#8.3), which doesn't leave much leeway. But either way there's nothing wrong in your program that would make it fail to compile, nor is there anything there that would cause any sort of runtime error. --- If we change your code to cast down, and not cast up, here's an [example that doesn't even build](http://coliru.stacked-crooked.com/a/183f7faa7f8eee0f): ``` struct A {}; struct B : private A // note: *private* inheritance { A* test(B* p) { return dynamic_cast<A*>(p); } friend B* foo(A*); }; B* foo(A* a) { return dynamic_cast<B*>(a); } int main() { B b; *foo(&b); } ``` `A` is an accessible base of `B` in `foo`, and yet, the cast is ill-formed. --- The minimal change which will bring the primer back on course is to turn *"a class type that is **publicly** derived from the target type"* into *"a class type that is **accessibly** derived from the target type"*. Since there's nothing of the sort in the [publicly available errata](http://ptgmedia.pearsoncmg.com/images/9780321714114/errata/9780321714114_errata_10-31-12.html), we can guess it's an editorial mistake that is yet to be pointed out. Upvotes: 4 [selected_answer]<issue_comment>username_2: A derived-to-base `dynamic_cast` is static, not dynamic, and only requires the base to be accessible in the context of the cast (and unambiguous).
See [[expr.dynamic.cast]/5](https://timsong-cpp.github.io/cppwp/expr.dynamic.cast#5). Upvotes: 3 <issue_comment>username_3: The bolded passages are all obviously wrong. The dynamic cast expression that deals with class types can be logically subdivided into two cases. * `type` is a class type which is a base class of the *statically determined* type of `e`. In this case dynamic_cast is more or less synonymous with static_cast. In particular, `type` must be an accessible and unambiguous (but not necessarily public) base class of the type of `e`. * `type` is a class type which is not a base class of the *statically determined* type of `e`. In this case, a runtime check takes place. This is further subdivided into the *downcast* and *crosscast* cases. The differences between those are only important in the case of multiple inheritance and/or non-public inheritance. In either of these cases, `type` must be a type of some subobject of the full object of `e`. The primer never says whether it means the static or the dynamic type of `e`, but in either case the description is completely wrong. For the cast to succeed, * `type` does not need to be related to the *static* type of `e` in any way * `type` must be a base class of the *dynamic* type of `e` or that type itself (but not a proper derived class of it), with further restrictions related to accessibility and non-ambiguity. Upvotes: 1
2018/03/20
342
1,381
<issue_start>username_0: I have installed pandas using -> pip install --upgrade pandas. "Requirement already up-to-date:--------" But when I use: ``` import pandas as pd ``` in Spyder, this is the error ``` import pandas as pd Traceback (most recent call last): File "", line 1, in import pandas as pd ModuleNotFoundError: No module named 'pandas' ``` I tried most of the solutions provided in other Stack Overflow questions but nothing seems to be working. Please help. Thank you<issue_comment>username_1: The most common reason to encounter this error is an incompatibility between pip installs and Python versions. Though the error log mentions that your pandas module is not available even after you've installed it, a likely way to run into this error is the presence of multiple/incompatible versions of Python, where pandas was not installed for the specific Python version you are actually running. So first check the Python version that you're currently on using `python -V`. If you're on **Python 2.x**, use `pip install pandas`. If you're on **Python 3.x**, use `pip3 install pandas`. Then proceed with your script. This should in most cases solve your issue. Upvotes: 1 <issue_comment>username_2: There must be two different Pythons on your system: 1) Anaconda 2) From Python.exe. Try to install **pandas** from the conda command window. Upvotes: 0
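One way to rule out the multiple-interpreter problem is to run pip through the exact interpreter Spyder uses (a sketch; first check which interpreter is configured in Spyder's preferences):

```
# confirm which interpreter you are on
python3 -V

# install pandas into that same interpreter
python3 -m pip install pandas
```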
2018/03/20
463
1,331
<issue_start>username_0: How do I create a select query on Oracle that will select the present year and the present year + 1? Sample table: ``` Col1 2016-2017 2017-2018 2018-2019 2018-2019 ``` ``` select count(*) from tablename where Col1 = sysdate||-||sysdate+1; ``` The output must be 2, because the present year is 2018 and the present year incremented by 1 is 2019, which becomes 2018-2019. How do I create a correct and proper select query that works like this? This is for the source query of a dashboard in Oracle APEX 5.1.<issue_comment>username_1: First you need to get the year from `sysdate`: ``` SELECT EXTRACT(year FROM sysdate) from dual ; ``` Using your own concatenation then leads to what you are looking for: ``` SELECT EXTRACT(year FROM sysdate) as currentyear, (EXTRACT(year FROM sysdate) + 1) as nextyear, EXTRACT(year FROM sysdate) || ' - ' || (EXTRACT(year FROM sysdate) + 1) as concat from dual ; ``` Upvotes: 2 <issue_comment>username_2: Use `TO_CHAR` ``` SELECT COUNT (*) FROM tablename WHERE Col1 = TO_CHAR (SYSDATE, 'YYYY') || '-' || (TO_CHAR (SYSDATE, 'YYYY') + 1); ``` Upvotes: 0 <issue_comment>username_3: Use `TO_CHAR` and `ADD_MONTHS` ``` SELECT count(*) FROM tablename WHERE Col1=TO_CHAR(SYSDATE,'YYYY')||'-'||TO_CHAR(ADD_MONTHS(SYSDATE,12),'YYYY') ``` Upvotes: 1
2018/03/20
734
2,561
<issue_start>username_0: I have a LAMP testing server (Apache 2.4.25), and in order to test HTTP/2 I have configured SSL with a self-signed certificate. Everything works except jQuery Ajax requests, which worked perfectly over http but now return a 404 error over https. The PHP framework behind it is CodeIgniter. Could it be the self-signed certificate that causes the problem? Is it considered a cross-domain request? Here is the ajax code ``` // initial retrieval of the selection cart info $.ajax({ url: base_url + 'ajax_selection/getNumItems/', type: 'POST', // csrf protection data: {'<?php echo $this->security->get_csrf_token_name(); ?>': '<?php echo $this->security->get_csrf_hash(); ?>'}, dataType: 'json', error: function (jqXHR, textStatus, errorThrown) { if (textStatus === 'timeout') { alert("Problème de connexion : Vérifiez votre connexion internet"); } else { throw "errorThrown : " + errorThrown + " | textStatus : " + textStatus + " | Error : AjaxContent has not a valid path"; } }, success: function (data, jqXHR, textStatus) { // notification of the addition to the selection list //console.log('selected Item '+data);//test ok $('.link-selection').html(data); } }); ``` How can I deal with this problem? Thank you for your help [edit] here is the htaccess ``` # All other URLs will be redirected to the index.php file. #RewriteRule blog$ wp/index.php #RewriteRule ^(.*)$ index.php/$1 [L] RewriteEngine On RewriteBase / RewriteCond %{ENV:REDIRECT_STATUS} ^$ RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^(.*)$ index.php?url=$1 [QSA,L] # Permanent redirect from formation-keyword-year pages to /formation/keyword/year RewriteRule ^formations-([a-z]+)-([0-9]+).html /formations/$1/$2 [L,R=301] RewriteRule ^formations-([a-z]+).html /formations/$1/$2 [L,R=301] #RewriteCond %{REQUEST_FILENAME} !-f #RewriteCond %{REQUEST_FILENAME} !-d #RewriteRule ^(.*)$ index.php?/$1 [L] ```<issue_comment>username_1: Add the Access-Control-Allow-Origin header from the server ``` Access-Control-Allow-Origin: (your website url) ``` It worked for me, I hope it works for you as well. Upvotes: 0 <issue_comment>username_2: OK, I found the problem: it was related to the Apache conf. I had missed the directive ``` AllowOverride All ``` in the `Directory` section. Sorry, and thanks for your help! Upvotes: 1
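For completeness, a minimal sketch of the server-side fix username_2 describes, placed in the Apache 2.4 virtual host or main configuration (the DocumentRoot path is an assumption):

```apache
<Directory /var/www/html>
    # Without AllowOverride All the .htaccess rewrite rules are ignored,
    # so the CodeIgniter front controller never receives the Ajax URL -> 404
    AllowOverride All
    Require all granted
</Directory>
```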
2018/03/20
363
932
<issue_start>username_0: Say I have a field numCommande with a string "1610131223ZVV40", where 161013 is a date in the form yymmdd. Is there any way in SQL to extract that 13/10/2016 date from the string field? Thanks!<issue_comment>username_1: ``` SELECT TO_CHAR(TO_DATE(SUBSTR(Column_Name,1,6), 'YYMMDD'),'DD/MM/YYYY') FROM TableName ``` Live Demo > > <http://sqlfiddle.com/#!4/ce715/5> > > > Upvotes: 2 [selected_answer]<issue_comment>username_2: If the 'date' is always the first six characters, you can extract those with a plain `substr()` call: ``` substr(numCommande, 1, 6) ``` which gives you `'161013'`; and then convert that string to a date with a suitable format model: ``` select to_date(substr(numCommande, 1, 6), 'RRMMDD') from your_table; ``` Quick demo with a static value instead: ``` select to_date(substr('1610131223ZVV40', 1, 6), 'RRMMDD') from dual; TO_DATE(SU ---------- 2016-10-13 ``` Upvotes: 2
2018/03/20
507
1,636
<issue_start>username_0: I am working on a Spring-MVC application in which I am trying to stream mp3 data. Unfortunately, it directly triggers a download whenever a byte array of information is sent in response. I found a few other links, but most of them are tied to a user interface, so not very useful. What are the requirements for streaming an mp3 file? Here is the download code I have. ``` @RequestMapping(value = "/getsong/{token}") public ResponseEntity getsong(@PathVariable("token") String token, HttpServletResponse response) { try { Path path = Paths.get(FILE_LOCATION); response.setContentType("audio/mp3"); response.setHeader("Content-Disposition", "attachment; filename=\"" + "song.mp3" + "\""); response.setContentLength((int) Files.size(path)); Files.copy(path, response.getOutputStream()); response.flushBuffer(); } catch (Exception ignored) { } return null; } ``` Thank you.
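One note on the snippet above: the `Content-Disposition: attachment` header is exactly what asks the browser to download the response. For in-browser playback the disposition would have to be `inline` (a sketch; note that seeking additionally requires handling HTTP Range requests, which this does not add):

```java
response.setContentType("audio/mpeg");
// "inline" lets the browser render/play the stream instead of saving it
response.setHeader("Content-Disposition", "inline; filename=\"song.mp3\"");
```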
2018/03/20
1,307
5,148
<issue_start>username_0: My question is this: what is the best way (or at least an effective way) to write to a file from multiple processes? **Note:** I am using c++11 and I want this to run on any platform (i.e. pure c++ code only). I have done some research and here is what I have concluded: 1. In my processes I have multiple threads. This is easily handled within each process using a mutex to serialise access to the file. 2. A c++/c++11 mutex or condition variable cannot be used to serialise between processes. 3. I need some sort of external semaphore / lock file to act as a "mutex"... but I am not sure how to go about doing this. I have seen applications use things like creating a ".lock" file when in use. But for multiple rapid accesses it seems like this may not work (i.e. after one process has decided the file does not exist another could create it and then the first process will also try to create it) because the operation to test and create the file is not atomic. **Note:** Each process always writes one entire line at a time. I had thought that this might be enough to make the operation "atomic" (in that a whole line would get buffered before the next one), but this does not appear to be the case (unless I have my code wrong) since I (rarely) get a mangled line. Here is a code snippet of how I am doing a write (in case it is relevant): ``` // in c'tor m_osFile.open("test.txt", std::fstream::out | std::fstream::app); // in write func (std::string data) osFile << data << std::endl; ``` This must be a common-ish issue, but I have not yet found a workable solution to it. Any code snippets would be welcome.<issue_comment>username_1: Usually the operating system provides special functions for locking files that are guaranteed to be atomic (like [lockf](https://stackoverflow.com/questions/2057784/locking-files-in-linux-with-c-c) on Linux or [LockFile(Ex)](https://msdn.microsoft.com/en-us/library/windows/desktop/aa365203(v=vs.85).aspx) on Windows). As of now, the C++ standard library provides no such functionality, so a platform-independent approach to such facilities is provided by e.g. [Boost.Interprocess](http://www.boost.org/doc/libs/1_66_0/doc/html/boost/interprocess/file_lock.html). Upvotes: 2 <issue_comment>username_2: Well, I can imagine two scenarios. Since you didn't specify in your question how the processes are spawned, I imagine two situations: 1. Your first process spawns the second process (e.g. using `fork()`). 2. The two processes are started separately in your environment. In the first scenario, simple mutual exclusion of access to the wanted resource (a mutex) between your processes should work fine. This will prevent a process from accessing a resource that is being used by the other process. The second scenario is a bit more complex; it would require that each process acknowledges the existence of the other. A similar issue has already been discussed [here](https://stackoverflow.com/questions/5210945/atomic-file-creation-on-linux), where an interesting link is given on [how to avoid race conditions](https://www.dwheeler.com/secure-programs/Secure-Programs-HOWTO/avoid-race.html). I would also consider checking the `O_EXCL` and `O_CREAT` flags for this purpose. Upvotes: 1 <issue_comment>username_3: > > My question is this: what is the best way (or at least an effective > way) to write to a file from multiple processes? > > > The best way is... don't do it! This really seems to be a sort of log (appending).
I would just let every process write its own file and then merge them when needed. This is the common approach at least, and here is the rationale. Any kind of intra-process locking is not going to work. Open files have buffering at the OS level, even after being closed on some OSes (Windows). You cannot perform file locking if you want a portable solution ("I want this to run on any platform"): you may even run into performance penalties/undefined behavior depending on the filesystem being used (e.g. Samba, NFS). Writing concurrently and reliably to a single file is in fact a system-dependent activity, today. I don't mean that it is not possible - DB engines and other applications do it reliably, but it's a customized operation. As a good alternative, you can let one process act as a collector (as proposed by <NAME>), and all the rest as producers, but this is not going to be a reliable alternative: logs need to get to disk "simply": if a bug can cause the logs not to be written, the purpose of the log is lost. However, you can consider this approach, decoupling the processes and letting the messages between them be exchanged reliably and efficiently: in this case you can use a messaging solution like [RabbitMQ](http://www.rabbitmq.com/). In this case all the processes publish their "lines" to the message broker, and one more process consumes such messages and writes them to the file. Upvotes: 3 [selected_answer]<issue_comment>username_4: You could declare your file descriptor and a mutex (condition?) associated with it in a shared memory between all the processes. Upvotes: 0
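A minimal sketch of the portable file-lock route from the first answer, using Boost.Interprocess (the file names are assumptions; `file_lock` only serializes *processes*, so the per-process mutex from point 1 of the question is still needed for threads):

```cpp
#include <boost/interprocess/sync/file_lock.hpp>
#include <boost/interprocess/sync/scoped_lock.hpp>
#include <fstream>
#include <string>

void writeLine(const std::string& data)
{
    // The lock file must already exist before file_lock is constructed.
    boost::interprocess::file_lock fl("test.txt.lock");
    boost::interprocess::scoped_lock<boost::interprocess::file_lock> lock(fl);

    std::ofstream osFile("test.txt", std::ios::out | std::ios::app);
    osFile << data << std::endl; // std::endl flushes before the lock is released
}
```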
2018/03/20
886
3,330
<issue_start>username_0: I get the following `warning: parameter 'tupleValue' set but not used [-Wunused-but-set-parameter]`, but the parameter is explicitly used in the return statement. Does it really mean that the return value is ignored somewhere later in the call sequence and the compiler is just cool enough to optimize it out on this line? Here is the code: ``` template typename removeLastType::type internalRemoveData(intValues, std::tuple tupleValue) { return std::forward_as_tuple(std::get(tupleValue)...); } ``` The complete source is here: <https://github.com/copperspice/cs_signal/blob/master/src/cs_internal.h#L316> Compiled on Windows: ``` -- The C compiler identification is GNU 5.3.0 -- The CXX compiler identification is GNU 5.3.0 ```<issue_comment>username_1: Not an answer actually, but you have undefined behavior in your code, as @RinatVeliakhmedov has pointed out. The return value of the `internalRemoveData` function is a `std::tuple` that consists of dangling references, i.e., references to no-longer existing objects. Consider the following simplified case: ``` struct X { X() { std::cout << "def ctor\n"; } ~X() { std::cout << "dtor\n"; } void f() { std::cout << "f\n"; } }; template auto f(std::tuple t) { return std::forward_as_tuple(std::get(t)...); } int main() { auto res = f<0>(std::tuple{}); std::cout << "---" << std::endl; // std::get<0>(res).f(); // would use no-longer existing object } ``` This program prints out: ``` def ctor dtor --- ``` After the `f` function has finished, no object of `X` exists anymore. --- Note that this problem can be very dangerous, since the program can behave as expected. I tried it in Wandbox with the `std::get<0>(res).f();` statement uncommented, and the program printed out `f` and did not manifest any error. Undefined behavior includes expected behavior as well. Upvotes: 2 <issue_comment>username_2: As one of the authors of the original code, I think the behavior here is well defined, but the reason is a bit subtle. Here is my logic. If I have missed something, let's keep the discussion open. The internalRemoveData method receives a tuple of values, and returns a tuple of values with exactly the same data types, minus the last element. Considering each element in the tuple separately, there are only three possible cases that matter. 1. If the element is a value type, the forward_as_tuple function will yield an lvalue reference. This lvalue reference will be assigned to the corresponding element of the return tuple, which is a value type. This will copy from the input to the output, no dangling reference. 2. If the element is an lvalue reference, it must have already been a reference to some valid original object. It will be forwarded as an lvalue reference, which is then bound to the corresponding lvalue reference in the output. 3. If the element is an rvalue reference, it also must have already been a reference to some valid original object. We get the same behavior as the lvalue reference case. The key to this is seeing that the return type is *not* the same as the type which forward_as_tuple returns. If this question gets closed, look for further discussion on our YouTube channel where the original question was asked: <https://www.youtube.com/watch?v=uK-2jzB41c4> Upvotes: 1
2018/03/20
1,039
4,229
<issue_start>username_0: first: ``` public class VolatileTest{ public volatile int inc = 0; public void increase(){ inc++; } public static void main(String[] args) { VolatileTest test = new VolatileTest(); for(int i = 0 ; i < 2 ; i ++){ new Thread(){ public void run(){ for(int j = 0 ; j < 1000 ; j++) test.increase(); } }.start(); } while(Thread.activeCount() > 1)Thread.yield(); System.out.println(test.inc); } } ``` second: ``` public class VolatileTest{ public volatile int inc = 0; public void increase(){ inc++; } public static void main(String[] args) { VolatileTest test = new VolatileTest(); new Thread(){ public void run(){ for(int j = 0 ; j < 1000 ; j++) test.increase(); } }.start(); new Thread(){ public void run(){ for(int j = 0 ; j < 1000 ; j++) test.increase(); } }.start(); while(Thread.activeCount() > 1)Thread.yield(); System.out.println(test.inc); } } ``` The first one uses a for loop and the second one doesn't, and that is the only difference; but the first one gets a result smaller than 2000, while the second gets a result equal to 2000. Why?<issue_comment>username_1: Consider the operation that you do inside the `increase` method. You first read the existing value, then increment it and write it back. There are several instructions here, and they can be interrupted. The reason behind getting a value which is < 2000 is a race condition. Using the keyword `volatile` does not guarantee atomicity. To guarantee atomicity you have to use a lock. Try this out. ``` private final Object lock = new Object(); public void increase() { synchronized (lock) { inc++; } } ``` Another alternative is to use an `AtomicInteger` here. So your code will now look like this. ``` public AtomicInteger inc = new AtomicInteger(0); public void increase() { inc.incrementAndGet(); } ``` This also guarantees atomicity, as the name implies. Upvotes: 2 <issue_comment>username_2: The result `2000` of the second test is not guaranteed by the JLS; you can let the thread sleep for some time before the increment to make it "break" more easily: ``` public void increase(){ try { Thread.sleep(20); } catch (Exception e) { } inc++; } ``` You might get: ``` 1997 1999 ``` or some other unpredictable results. `volatile` can guarantee that the changes to a variable are always visible to other threads, but it cannot guarantee that the actions on this variable are atomic. Suppose `i = 1`; thread1 and thread2 might read `1` at the same time, increment it to `2`, then write it back, which leads to a wrong result. Upvotes: 1 <issue_comment>username_3: That's just coincidence. Both variants are equally broken, but the second defines two distinct classes doing the same thing, so before starting the second thread, this additional class has to be loaded, verified and initialized. This overhead gives the first thread a head start, raising the chance of completing entirely before the second even starts. So the race condition does not materialize then, but since this execution is not guaranteed, it's still a broken program containing the possibility of data races. Running the same program in an environment with faster class loading/initialization or an ahead-of-time strategy may exhibit the same behavior as with the first variant. Note that likewise, it's not guaranteed that the first variant will experience lost updates. It may still happen that starting the second thread is slow enough to allow the first one to complete without data races. Even if both threads run, the system's thread scheduling policy may change the likelihood of experiencing lost updates.
Also, the entire loop could get optimized to a single increment by `1000`; that would not contradict the requirements for `volatile` variables, even if the current version of the HotSpot JVM doesn't do that. Upvotes: 2 [selected_answer]
2018/03/20
399
1,595
<issue_start>username_0: Let's say my model looks something like this. ``` public class OrganizationDTO { public int Id { get; set; } public string Name { get; set; } public List<StorageDTO> Storages { get; set; } = new List<StorageDTO>(); } public class StorageDTO { public int Id { get; set; } public string Name { get; set; } public List<OrganizationDTO> Organizations { get; set; } = new List<OrganizationDTO>(); } ``` An organization may have multiple storages, so basically I want to duplicate the organization per storage in my table. I want to show records in the table like below Organization1 | Storage1 Organization1 | Storage2 Organization1 | Storage3 Organization1 | Storage4 Organization2 | Storage1 Organization2 | Storage2 I tried to group them by the Storage collection but it didn't duplicate my records like I want. ``` var result = Context.Storages .SelectMany(org => org.Organizations) .GroupBy(org => org) .Select(grouping => grouping.FirstOrDefault()) .OrderBy(org => org.Name) .ProjectTo(); ```<issue_comment>username_1: Looks like you just want to flatten the relationship. ``` var query = from storage in Context.Storages from org in storage.Organizations select new {Storage = storage, Organization = org }; var result = query .OrderBy(row => row.Organization.Name) .ProjectTo(); ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: ``` var result = Context.Organizations .SelectMany(organizationDTO =>organizationDTO .Storages .Select(storage => new {organizationDTO, storage})); ``` Upvotes: 2
2018/03/20
283
762
<issue_start>username_0: I'm trying to get a regex to find all the parts of this string separated by pipes. My attempt thus far finds the 233, but when I try to catch the alphanumeric string, it fails. ``` (\d+)?|([A-Z+0-9+])? ``` The string is ``` 233|LTE02ERBS00126|N/A|ULSA|1|1|263655|/company/ul_spectrum_files/MeContext=LTE02ERBS00126,ManagedElement=1,1,1,20180305.123252ULSA_SAMPLE|1520253172|1520253172 ```<issue_comment>username_1: For this kind of match you should use a regexp with negation: you should match every character that is not a pipe. I think you should use something like `[^\v]` Upvotes: -1 <issue_comment>username_2: I agree with username_1 about using negation. This will catch everything until it hits a | ``` [^\|]+ ``` Upvotes: 0
2018/03/20
700
2,435
<issue_start>username_0: I am currently creating a module to execute a set of code on hookActionProductCancel. The module is running well and I would like to send out an email after execution. ``` $template_path = $this->local_path . 'mails/'; Mail::Send((int)(Configuration::get('PS_LANG_DEFAULT')), 'xxx', //need to change the template directory to point to custom module 'Subject', array( '{aaa}' => $bbb, '{bbb}' => $ccc, '{ccc}' => $ddd, '{ddd}' => $eee ), $to, null, null, null); ``` I have created the templates and placed the files as follows: 1. ../mails/en/xxx.html 2. ../mails/en/xxx.txt While I understand the default location for email templates is the above, how do I use templates placed in my custom module's directory? I have created a directory - ../modules/custommodule/mails/ - and placed both files there, but have not been successful in pointing to it. Any guidance is appreciated. Thank you.<issue_comment>username_1: You can specify the template path in [Mail::Send()](https://github.com/PrestaShop/PrestaShop/blob/1.6.1.x/classes/Mail.php#L87). You can see that the 11th parameter is `$template_path`, so you just need to specify that (if you're calling the send method from the main module class you can use `$this->local_path . 'mails/'`). The `$template_path` parameter must be a server file path, not a URI, because the method uses `file_exists()` to check if a template exists. The method will detect from your module path that it is in fact a custom module template. Now the method will first check if you have a mail template in `themes/shop_theme/modules/mymodule/mails/iso_lang/xxx.html`, then in `modules/mymodule/mails/iso_lang/xxx.html`, and load the first template it finds. The same goes for txt files. **Edit:** How to execute the method correctly: ``` Mail::Send( (int)(Configuration::get('PS_LANG_DEFAULT')), 'xxx', //need to change the template directory to point to custom module 'Subject', array( '{aaa}' => $bbb, '{bbb}' => $ccc, '{ccc}' => $ddd, '{ddd}' => $eee ), $to, null, null, null, null, null, $this->local_path . 'mails/' // 11th parameter is template path ) ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: Yes, I agree with 'username_1': you have to pass the 11th parameter correctly, and if you face an error with `$this->local_path . 'mails/'` then try ``` $this->module->getLocalPath().'mails/' ``` Upvotes: 1
2018/03/20
606
1,967
<issue_start>username_0: I'm already using the [r2\_score](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.r2_score.html) function but don't understand how I can get the "adjusted" R^2 score from this. The description on this page doesn't mention it - maybe it's the adjusted score by default?<issue_comment>username_1: Adjusted R2 requires the number of independent variables as well. That's why it cannot be calculated by such a standalone metrics function (we are not providing information about how y_pred was calculated). However, you can calculate the adjusted R2 from R2 with the simple formula given [here](https://en.wikipedia.org/wiki/Coefficient_of_determination#Adjusted_R2): Adjusted R2 = 1 - (1 - R2) * (n - 1) / (n - p - 1), where n is the number of observations in the sample and p is the number of independent variables in the model. Upvotes: 6 [selected_answer]<issue_comment>username_2: alternatively... ``` # adjusted R-squared 1 - ( 1-model.score(X, y) ) * ( len(y) - 1 ) / ( len(y) - X.shape[1] - 1 ) ``` Upvotes: 3 <issue_comment>username_3: ### Simple calculation of Adj. R2 ``` Adj_r2 = 1 - (1-r2_score(y, y_pred)) * (len(y)-1)/(len(y)-X.shape[1]-1) ``` Upvotes: 2 <issue_comment>username_4: The wikipedia [page](https://en.wikipedia.org/wiki/Coefficient_of_determination#Adjusted_R2) has been revised over the course of time in regards to this formula. To match the current state this would be the appropriate formula: Adj r2 = 1-(1-R2)*(n-1)/(n-p) with sklearn you could write some re-usable code such as: ```py import numpy as np from sklearn.metrics import r2_score def r2(actual: np.ndarray, predicted: np.ndarray): """ R2 Score """ return r2_score(actual, predicted) def adjr2(actual: np.ndarray, predicted: np.ndarray, rowcount: np.int, featurecount: np.int): """ Adjusted R2 Score """ return 1-(1-r2(actual,predicted))*(rowcount-1)/(rowcount-featurecount) ``` Upvotes: 2
2018/03/20
644
2,506
<issue_start>username_0: I'm new to Akka and I'm trying to use it similarly to the Executor framework, to fire off some tasks asynchronously, as such: ``` override def receive: Receive = { case msg: (Int, ListMap[Double,Double], ListMap[Double,Double]) => processHour(msg._1, msg._2, msg._3) case msg: String => println("You sent me a String: " + msg) case _ => unhandled() } ``` My questions would be: 1. How can I get rid of the compiler warning *"non-variable type argument Int in type pattern (Int, scala.collection.immutable.ListMap[Double,Double], scala.collection.immutable.ListMap[Double,Double]) is unchecked since it is eliminated by erasure"* on the code above? I would understand ListMap type erasure, but the compiler seems to complain about the Int as well. 2. My message (payload) is pretty big (each ListMap has some tens of thousands of rows). The processing consists of displaying the Maps in a plot and computing a sort of 'distance' between the two of them. Is there a preferred alternative to sending this as a message, to gain more from parallelism? I'm thinking along the lines of storing the data outside the actor scope.<issue_comment>username_1: 1. Don't send `String`s and tuples with `Int`s and `List`s. Define a proper protocol with case classes you can pattern-match on. 2. Depends on your definition of "ok". You could try to tweak `maximum-frame-size` (see e.g. [here](https://stackoverflow.com/questions/31038115/akka-net-sending-huge-messages-maximum-frame-size)), but then you would soon have to also tweak all kinds of heartbeat timeouts, because your actors would become increasingly unresponsive, and at some point parts of your system would begin to declare other parts of the system dead and quarantine each other. With [artery](https://doc.akka.io/docs/akka/2.5/remoting-artery.html?language=scala) this should no longer be a problem. Upvotes: 2 <issue_comment>username_2: This warning is very common when working with pattern matching. The compiler alerts you that these type arguments only exist at compile time, so it won't differentiate between ListMap[Double,Double] and ListMap[Any, Any]. There are some workarounds using TypeTags. I would give Akka Typed a chance, to create a more expressive protocol. An example of how to migrate from the classic version to the Typed version is detailed here: <https://www.bbva.com/en/akka-actors-evolution-type-safe-distributed-computations/>. For processing big datasets like in your case I'd use Spark. Upvotes: 3 [selected_answer]
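A sketch of the case-class protocol username_1 suggests (the message and field names are made up); matching on a case class avoids the erased tuple pattern entirely, so the warning disappears:

```scala
import scala.collection.immutable.ListMap

// One explicit message type instead of a raw tuple
case class ProcessHour(hour: Int,
                       reference: ListMap[Double, Double],
                       sample: ListMap[Double, Double])

override def receive: Receive = {
  case ProcessHour(hour, reference, sample) =>
    processHour(hour, reference, sample)
  case msg: String =>
    println("You sent me a String: " + msg)
  case _ =>
    unhandled()
}
```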
2018/03/20
3,017
4,606
<issue_start>username_0: I have a `data.frame` object in R , for which I want to have non-unique row names: ``` 38.40000 41.75200 44.38400 44.18400 45.37600 37.49600 41.36800 33.93600 38.00800 42.51200 46.49600 40.48000 45.40800 46.32800 43.78400 39.88800 38.84000 40.56800 42.03200 38.89185 45.53846 50.08462 39.91538 36.95385 34.96154 39.74615 38.01538 35.75385 35.54615 36.69231 35.20769 38.05385 39.29231 37.96923 37.30000 36.86923 39.19231 38.81538 43.69231 38.06400 46.05176 41.69412 38.80000 37.75529 39.67529 39.07765 39.17647 38.24941 39.58588 38.63529 38.30588 41.87765 38.97412 40.13647 42.27294 38.24471 35.41647 40.80000 38.07059 42.11294 44.20000 43.42857 44.80000 35.20000 35.91429 37.82857 51.45714 44.68571 46.68571 48.74286 41.25091 39.45455 38.17091 40.70182 40.39273 41.28727 40.63636 41.50909 41.68364 41.29455 45.06909 41.09818 40.02909 42.50182 42.34909 39.84727 41.42909 40.47273 40.28000 40.51636 41.25091 39.45455 38.17091 40.70182 40.39273 41.28727 40.63636 41.50909 41.68364 41.29455 40.87407 39.27704 44.13630 43.25037 35.86667 37.30667 38.76148 40.74667 38.93333 43.16148 37.47259 37.73630 38.34370 39.00148 36.96889 37.76593 39.14667 37.92593 37.62963 38.89185 ``` The rownames I want for this dummy data would be `B,C,C,B,E2,E3`. However, I am aware that R does not allow non-unique rownames. I have seven possible rowname classes for my complete dataset, `A,B,C,D,E1,E2,E3`. I was trying to write a script in R (but failing), that takes a vector of my non-unique rownames, and adds numbers to the elements 1,2,3... so on, depending on the length of that character in the vector. Thus for this dummy data, the vector I would have is `B-1,C-1,C-2,B-2,E2-1,E3-1`, and my final matrix would be: ``` B-1 38.40000 41.75200 44.38400 44.18400 45.37600 37.49600 41.36800 33.93600 38.00800 42.51200 46.49600 40.48000 45.40800 46.32800 43.78400 39.88800 38.84000 40.56800 42.03200 38.89185 C-1 45.53846 50.08462 39.91538 36.95385 34.96154 39.74615 38.01538 35.75385 35.54615 36.69231 35.20769 38.05385 39.29231 37.96923 37.30000 36.86923 39.19231 38.81538 43.69231 38.06400 C-2 46.05176 41.69412 38.80000 37.75529 39.67529 39.07765 39.17647 38.24941 39.58588 38.63529 38.30588 41.87765 38.97412 40.13647 42.27294 38.24471 35.41647 40.80000 38.07059 42.11294 B-2 44.20000 43.42857 44.80000 35.20000 35.91429 37.82857 51.45714 44.68571 46.68571 48.74286 41.25091 39.45455 38.17091 40.70182 40.39273 41.28727 40.63636 41.50909 41.68364 41.29455 E2-1 45.06909 41.09818 40.02909 42.50182 42.34909 39.84727 41.42909 40.47273 40.28000 40.51636 41.25091 39.45455 38.17091 40.70182 40.39273 41.28727 40.63636 41.50909 41.68364 41.29455 E3-1 40.87407 39.27704 44.13630 43.25037 35.86667 37.30667 38.76148 40.74667 38.93333 43.16148 37.47259 37.73630 38.34370 39.00148 36.96889 37.76593 39.14667 37.92593 37.62963 38.89185 ```<issue_comment>username_1: If you *really* want to do this, then this will work ``` uniqify_names <- function(names_vector) { names <- unique(names_vector) count_table <- rep(0, length(names)) names(count_table) <- names # works because R has weird symbol lookup update_name <- function(name) { new_name <- paste0(name, ".", count_table[name]) count_table[name] <<- count_table[name] + 1 new_name } vapply(names_vector, update_name, FUN.VALUE = "character") } ``` It works like this: ``` > non_unique_names <- c("A", "B", "A", "A", "B", "C", "A", "B", "C") > uniqify_names(non_unique_names) A B A A B C A B C "A.0" "B.0" "A.1" "A.2" "B.1" "C.0" "A.3" "B.2" "C.1" ``` You can set the row names using `rownames` from 
this vector. Upvotes: 2 [selected_answer]<issue_comment>username_2: You can try this. ```r df <- data.frame(row_name = c('B','C','C','B','E2','E3')) library(dplyr) df <- df %>% group_by(row_name) %>% mutate(count = sequence(n()), unique_row_name = paste(row_name, count, sep="-")) ``` `df$unique_row_name` is your candidate! Upvotes: 0 <issue_comment>username_3: Since this question is how to: > > take a vector of non-unique rownames and add numbers to the elements 1,2,3 > > > I will ignore the PCA part (recommendations to use matrix or not to use rownames at all). To generate wanted vector of names you can use this: ``` foo <- c("A", "B", "C", "C", "B", "E", "E") paste0(foo, "-", sapply(seq_along(foo), function(x) sum(foo[1:x] == foo[x]))) [1] "A-1" "B-1" "C-1" "C-2" "B-2" "E-1" "E-2" ``` Upvotes: 1
2018/03/20
454
1,797
<issue_start>username_0: My daemon keeps querying the DB on a cron-like basis. In every iteration, (a) the daemon makes a DB query, (b) receives some documents from the DB, and (c) processes those results. I want to emit *the number of documents returned by the query* to Datadog. What is the right metric type?<issue_comment>username_1: The [gauge metric](https://docs.datadoghq.com/developers/metrics/#gauges "Gauge metric") type will do the job here, given that your query does not run more than once within 10 seconds. If that is not the case, go for the [count metric](https://docs.datadoghq.com/developers/metrics/#count "count metric"). The flush interval in Datadog is 10 seconds by default; if you use a **gauge metric** and the metric is reported more than once in a flush interval, the Datadog agent only sends the last value, ignoring the previous ones. For a **count metric**, in contrast, the agent sums up all the values reported in the flush interval. More details about the flush interval [here](https://help.datadoghq.com/hc/en-us/articles/211545826-Why-histogram-stats-are-all-the-same-inaccurate-Characteristics-of-Datadog-histograms- "here"). Upvotes: 2 <issue_comment>username_2: The best metric type would be a `histogram` metric. This will take multiple values and pre-aggregate them within a flush window, so you will be able to get things like min/max/sum/avg and various percentiles. If you run multiple times within a flush window: * `count` would combine multiple values together, so you would lose the individual numbers, meaning you couldn't easily distinguish between the process returning a lot of documents, and it returning only a few but being called a lot * `gauge`, as mentioned in @username_1's answer, would only keep the latest, making it harder to get things like the max/min count. Upvotes: 1
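For instance, with the DogStatsD client from the `datadog` Python package, the daemon could report the count once per iteration as a histogram (the metric name and agent address are assumptions):

```python
from datadog import initialize, statsd

initialize(statsd_host="127.0.0.1", statsd_port=8125)

def report_query_result(documents):
    # one value per iteration; the agent pre-aggregates
    # min/max/avg/count/percentiles per flush interval
    statsd.histogram("daemon.query.documents_returned", len(documents))
```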
2018/03/20
846
2,408
<issue_start>username_0: I am trying to install a BIND server on CentOS 7. I configured named.config and the zone file db.myroot-servers.loc. `named-checkconf` works properly, but when I run `named-checkzone` the following error occurs: **[root@localhost ~]# named-checkconf [root@localhost ~]# sudo named-checkzone myroot-servers.loc /etc/named/zones/db.myroot-servers.loc /etc/named/zones/db.myroot-servers.loc:2: SOA record not at top of zone (IN.myroot-servers.loc) zone myroot-servers.loc/IN: loading from master file /etc/named/zones/db.myroot-servers.loc failed: not at top of zone zone myroot-servers.loc/IN: not loaded due to errors.** named.config ``` acl "trusted" { 10.128.10.11; # ns1 - can be set to localhost 10.128.20.12; # ns2 10.128.100.101; # host1 10.128.200.102; # host2 }; options { listen-on port 53 { 127.0.0.1; 10.128.10.11; }; # listen-on-v6 port 53 { ::1; }; allow-transfer { 10.128.20.12; }; # disable zone transfers by default allow-query { trusted; }; # allows queries from "trusted" clients }; include "/etc/named/named.conf.local"; ``` db.myroot-servers.loc ``` $TTL 604800 IN SOA ns1.myroot-servers.loc. admin.myroot-servers.loc. ( 3 ; Serial 604800 ; Refresh 86400 ; Retry 2419200 ; Expire 604800 ) ; Negative Cache TTL ; ; name servers - NS records IN NS ns1.myroot-servers.loc. IN NS ns2.myroot-servers.loc. ; name servers - A records ns1.myroot-servers.loc. IN A 10.128.10.11 ns2.myroot-servers.loc. IN A 10.128.20.12 ; 10.128.0.0/16 - A records host1.myroot-servers.loc. IN A 10.128.100.101 host2.myroot-servers.loc. IN A 10.128.200.102 ```<issue_comment>username_1: Try adding `$ORIGIN myroot-servers.loc.` at the top of the zone file. Upvotes: 1 <issue_comment>username_2: Here is what made a difference in my experiment. Regarding `dnssec-signzone`, I had the wrong domain in the "-o" flag argument for the given zone file, so it was saying (in the syslog)... loading from master file /etc/bind/Zones/db.somedomain.tld.zone.**signed** failed: *not at top of zone* *[ BIND 9.11.5-P1-1ubuntu2.5-Ubuntu ]* Upvotes: 0
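Note how the error message shows `(IN.myroot-servers.loc)`: `named-checkzone` has taken `IN` for the record's owner name, because the SOA line in the zone file starts without one. A sketch of the same record with an explicit `@` owner (`@` stands for the zone origin):

```
$TTL    604800
@       IN      SOA     ns1.myroot-servers.loc. admin.myroot-servers.loc. (
                3         ; Serial
                604800    ; Refresh
                86400     ; Retry
                2419200   ; Expire
                604800 )  ; Negative Cache TTL
```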
2018/03/20
665
2,055
<issue_start>username_0: For reference, this is what the paragraph looks like. ``` {180319 arun S B} first set of chars. first chars. {180316 yyay S B} second set of chars. second line. {180314 ramaw S B} third line. third line. third line. {180309 jfds S B} fouth line {180221 shrbsd S B} fifth line.fith line part 2. {180214 shrbs S B} sixth line. ``` From this I need to extract the first two lines, like: ``` {180319 arun S B} first set of chars. first chars. {180316 yyay S B} second set of chars. second line. ``` I don't have any idea how to do this in Tcl; can you please suggest a way? I have searched online for many hours, but I'm new to Tcl, so it is hard for me to follow. Thanks<issue_comment>username_1: There's a couple of ways to do this: 1. split the paragraph into lines and join the first 2 lines: ``` set lines [split $para \n] set first2 [lrange $lines 0 1] set wanted [join $first2 \n] # or set wanted [join [lrange [split $para \n] 0 1] \n] ``` 2. find the position of the 2nd newline and take the characters from the start of the paragraph up to that position ``` set firstnewline [string first \n $para] set secondnewline [string first \n $para $firstnewline+1] set wanted [string range $para 0 $secondnewline-1] ``` You can also get the 2nd newline index with ``` set secondnewline [lindex [regexp -all -inline -indices \n $para] 1 0] ``` Tcl commands are documented here: <https://tcl.tk/man/tcl8.6/TclCmd/contents.htm> Upvotes: 2 [selected_answer]<issue_comment>username_2: Working Tcl code follows: ``` set file [open c:/filename.txt] set file_device [read $file] set data [split $file_device "\n"] for {set count 0} {$count < 2} {incr count} { # one line is printed on every iteration; # open opens the file at the given path, # read reads the opened file, # and split on "\n" marks the end of each line puts [lindex $data $count] } close $file ``` This will work for sure. Upvotes: 0
2018/03/20
587
2,011
<issue_start>username_0: How can I check whether an enum is equal to one of several possible cases? Is there a better solution than: ``` [.orderedAscending, .orderedSame].contains(Calendar.current.compare(date, to: range.from, toGranularity: .day)) ``` In my opinion it is a little unclear, and I'm looking for something better. ### Edit: According to @Martin_R's answer, in this example I can of course use a negated comparison: ``` Calendar.current.compare(date1, to: date2, toGranularity: .day) != .orderedDescending ``` `ComparisonResult` used here is only an example. I'm looking for a more general solution. For example, what if the enum had more cases?
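For what it's worth, a small Swift sketch (not from an answer in this thread) of one common pattern: putting the acceptable cases in a `Set`, which reads clearly and scales as the enum grows:

```swift
let result = Calendar.current.compare(date, to: range.from, toGranularity: .day)

// the set of cases we accept; works because ComparisonResult is Hashable
let acceptable: Set<ComparisonResult> = [.orderedAscending, .orderedSame]

if acceptable.contains(result) {
    // date is on or before range.from, at day granularity
}
```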
2018/03/20
444
1,659
<issue_start>username_0: I'm trying to configure GitLab / Git in multi-user and multi-project mode on Linux. I have two local repositories. **The problem:** If I create a repository specifying the login in the URL, the local config will contain this credential, and we can only use this one. So we lose the ability to know who pushed what to the remote: ``` git clone https: // login@ url / xxx/ yyy.git ``` **The bad solution:** Re-specify the remote in the global config => we will not be able to work on other repositories. **The ideal solution:** We should be able to clone a repository without specifying the user in the URL: ``` git clone https: // @ url / xxx/ yyy.git ``` Git would then ask for the user / password every time. Except that GitLab does not allow it and gives the following error: ``` git clone https://url/xxx/yyy.git Initialized empty Git repository in /path/.git/ error: The requested URL returned error: 401 Unauthorized while accessing https://url/xxx/yyy.git/info/refs fatal: HTTP request failed ``` Thanks in advance for your help<issue_comment>username_1: Please check the current `git` version installed on your local machine. The command for that would be: `git --version` Make sure you have git 1.7.10 or later. From that version on it should prompt for the username and password. The Gitlab remote URLs are (without the specific user name): * HTTPS: `https://domain/owner_name/repository_name.git` * SSH: `git@domain:owner_name/repository_name.git` Upvotes: 2 [selected_answer]<issue_comment>username_2: You need to use the latest version of git. Try installing that; after that, you will be able to clone it. Upvotes: 0
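A short sketch of what that looks like in practice (the host and project path are placeholders): with git >= 1.7.10 you can leave the username out of the URL entirely, let git prompt each developer, and optionally cache the entered credentials:

```
# clone without embedding anyone's login in the remote URL
git clone https://gitlab.example.com/xxx/yyy.git

# optional: cache entered credentials in memory for an hour
git config --global credential.helper 'cache --timeout=3600'
```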
2018/03/20
423
1,661
<issue_start>username_0: I am trying to enable navigation in a Kendo grid using arrow keys. I have seen many examples where arrow keys are used to navigate only between editable cells. There are 50+ columns in my Kendo grid, of which 20+ cells are editable. I have created a keydown handler in dataBound which works fine only for editable cells. If I move to a non-editable cell, key press events stop working. **Current Working Logic:** Left and right arrow keys work only inside editable cells. Once we reach a non-editable cell, they stop working. **Desired Working Logic:** When we press an arrow key, if the cell is editable it should go into edit mode; if the cell is non-editable, the focus should stay on it, and pressing the left/right arrow key should move the cursor to the previous/next cell. I have created a [telerik dojo](http://dojo.telerik.com/eLaWeCIj). **UPDATE** This dojo works as expected in IE, but in Chrome it does not. Basically the `Right arrow key` should work like `TAB` and the `Left arrow key` should work like `SHIFT+TAB`.
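As a rough, framework-agnostic sketch (this is plain DOM, not the Kendo API): arrow-key navigation between cells can be done by walking sibling `td` elements, assuming the grid's cells carry a `tabindex` so they are focusable, as Kendo's navigatable grids do; the `#grid` selector is an assumption:

```js
var grid = document.querySelector("#grid");
grid.addEventListener("keydown", function (e) {
  var cell = e.target.closest("td");
  if (!cell) return;
  var next = e.key === "ArrowRight" ? cell.nextElementSibling
           : e.key === "ArrowLeft"  ? cell.previousElementSibling
           : null;
  if (next) {            // move focus the way TAB / SHIFT+TAB would
    next.focus();
    e.preventDefault();
  }
});
```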
2018/03/20
3,766
13,033
<issue_start>username_0: I am getting this weird error anytime i try to build apk ``` Process 'command 'C:\Users\<NAME>\AppData\Local\Android\sdk\build-tools\27.0.3\aapt.exe'' finished with non-zero exit value 1 ``` here is the logcat ``` org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':app:processDebugResources'. at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:103) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:73) at org.gradle.api.internal.tasks.execution.OutputDirectoryCreatingTaskExecuter.execute(OutputDirectoryCreatingTaskExecuter.java:51) at org.gradle.api.internal.tasks.execution.SkipUpToDateTaskExecuter.execute(SkipUpToDateTaskExecuter.java:59) at org.gradle.api.internal.tasks.execution.ResolveTaskOutputCachingStateExecuter.execute(ResolveTaskOutputCachingStateExecuter.java:54) at org.gradle.api.internal.tasks.execution.ValidatingTaskExecuter.execute(ValidatingTaskExecuter.java:59) at org.gradle.api.internal.tasks.execution.SkipEmptySourceFilesTaskExecuter.execute(SkipEmptySourceFilesTaskExecuter.java:101) at org.gradle.api.internal.tasks.execution.FinalizeInputFilePropertiesTaskExecuter.execute(FinalizeInputFilePropertiesTaskExecuter.java:44) at org.gradle.api.internal.tasks.execution.CleanupStaleOutputsExecuter.execute(CleanupStaleOutputsExecuter.java:88) at org.gradle.api.internal.tasks.execution.ResolveTaskArtifactStateTaskExecuter.execute(ResolveTaskArtifactStateTaskExecuter.java:62) at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:52) at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:54) at org.gradle.api.internal.tasks.execution.ExecuteAtMostOnceTaskExecuter.execute(ExecuteAtMostOnceTaskExecuter.java:43) at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:34) at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker$1.run(DefaultTaskGraphExecuter.java:248) at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:336) at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:328) at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:199) at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:110) at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker.execute(DefaultTaskGraphExecuter.java:241) at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker.execute(DefaultTaskGraphExecuter.java:230) at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker.processTask(DefaultTaskPlanExecutor.java:123) at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker.access$200(DefaultTaskPlanExecutor.java:79) at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker$1.execute(DefaultTaskPlanExecutor.java:104) at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker$1.execute(DefaultTaskPlanExecutor.java:98) at org.gradle.execution.taskgraph.DefaultTaskExecutionPlan.execute(DefaultTaskExecutionPlan.java:623) at 
org.gradle.execution.taskgraph.DefaultTaskExecutionPlan.executeWithTask(DefaultTaskExecutionPlan.java:578) at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker.run(DefaultTaskPlanExecutor.java:98) at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:63) at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:46) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:55) at java.lang.Thread.run(Thread.java:745) Caused by: org.gradle.tooling.BuildException: Failed to process resources, see aapt output above for details. at com.android.build.gradle.internal.res.LinkApplicationAndroidResourcesTask.invokeAaptForSplit(LinkApplicationAndroidResourcesTask.java:546) at com.android.build.gradle.internal.res.LinkApplicationAndroidResourcesTask.doFullTaskAction(LinkApplicationAndroidResourcesTask.java:266) at com.android.build.gradle.internal.tasks.IncrementalTask.taskAction(IncrementalTask.java:106) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.gradle.internal.reflect.JavaMethod.invoke(JavaMethod.java:73) at org.gradle.api.internal.project.taskfactory.IncrementalTaskAction.doExecute(IncrementalTaskAction.java:50) at org.gradle.api.internal.project.taskfactory.StandardTaskAction.execute(StandardTaskAction.java:39) at org.gradle.api.internal.project.taskfactory.StandardTaskAction.execute(StandardTaskAction.java:26) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter$1.run(ExecuteActionsTaskExecuter.java:124) at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:336) at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:328) at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:199) at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:110) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeAction(ExecuteActionsTaskExecuter.java:113) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:95) ... 33 more Caused by: com.android.ide.common.process.ProcessException: Failed to execute aapt at com.android.builder.core.AndroidBuilder.processResources(AndroidBuilder.java:813) at com.android.build.gradle.internal.res.LinkApplicationAndroidResourcesTask.invokeAaptForSplit(LinkApplicationAndroidResourcesTask.java:524) ... 
50 more Caused by: java.util.concurrent.ExecutionException: com.android.ide.common.process.ProcessException: Error while executing process C:\Users\<NAME>\AppData\Local\Android\sdk\build-tools\27.0.3\aapt.exe with arguments {package -f --no-crunch -I C:\Users\<NAME>\AppData\Local\Android\sdk\platforms\android-27\android.jar -M \\?\G:\Muslim chat\MyNetwork\app\build\intermediates\manifests\full\debug\AndroidManifest.xml -S G:\Muslim chat\MyNetwork\app\build\intermediates\res\merged\debug -m -J \\?\G:\Muslim chat\MyNetwork\app\build\generated\source\r\debug -F G:\Muslim chat\MyNetwork\app\build\intermediates\res\debug\resources-debug.ap_ --custom-package com.rewardoapps.network -0 apk --output-text-symbols \\?\G:\Muslim chat\MyNetwork\app\build\intermediates\symbols\debug --no-version-vectors} at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:503) at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:482) at com.google.common.util.concurrent.AbstractFuture$TrustedFuture.get(AbstractFuture.java:79) at com.android.builder.internal.aapt.AbstractAapt.link(AbstractAapt.java:34) at com.android.builder.core.AndroidBuilder.processResources(AndroidBuilder.java:809) ... 51 more Caused by: com.android.ide.common.process.ProcessException: Error while executing process C:\Users\<NAME>\AppData\Local\Android\sdk\build-tools\27.0.3\aapt.exe with arguments {package -f --no-crunch -I C:\Users\<NAME>\AppData\Local\Android\sdk\platforms\android-27\android.jar -M \\?\G:\Muslim chat\MyNetwork\app\build\intermediates\manifests\full\debug\AndroidManifest.xml -S G:\Muslim chat\MyNetwork\app\build\intermediates\res\merged\debug -m -J \\?\G:\Muslim chat\MyNetwork\app\build\generated\source\r\debug -F G:\Muslim chat\MyNetwork\app\build\intermediates\res\debug\resources-debug.ap_ --custom-package com.rewardoapps.network -0 apk --output-text-symbols \\?\G:\Muslim chat\MyNetwork\app\build\intermediates\symbols\debug --no-version-vectors} at com.android.build.gradle.internal.process.GradleProcessResult.buildProcessException(GradleProcessResult.java:73) at com.android.build.gradle.internal.process.GradleProcessResult.assertNormalExitValue(GradleProcessResult.java:48) at com.android.builder.internal.aapt.AbstractProcessExecutionAapt$1.onSuccess(AbstractProcessExecutionAapt.java:78) at com.android.builder.internal.aapt.AbstractProcessExecutionAapt$1.onSuccess(AbstractProcessExecutionAapt.java:74) at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1237) at com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:399) at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:911) at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:822) at com.google.common.util.concurrent.AbstractFuture.set(AbstractFuture.java:664) at com.google.common.util.concurrent.SettableFuture.set(SettableFuture.java:48) at com.android.build.gradle.internal.process.GradleProcessExecutor$1.run(GradleProcessExecutor.java:58) Caused by: org.gradle.process.internal.ExecException: Process 'command 'C:\Users\<NAME>\AppData\Local\Android\sdk\build-tools\27.0.3\aapt.exe'' finished with non-zero exit value 1 at org.gradle.process.internal.DefaultExecHandle$ExecResultImpl.assertNormalExitValue(DefaultExecHandle.java:389) at com.android.build.gradle.internal.process.GradleProcessResult.assertNormalExitValue(GradleProcessResult.java:46) ... 
9 more ``` and this is my build.gradle ``` apply plugin: 'com.android.application' android { compileSdkVersion 27 buildToolsVersion '27.0.3' defaultConfig { applicationId "com.rewardoapps.network" minSdkVersion 15 targetSdkVersion 26 versionCode 55 versionName '4.3' } dexOptions { jumboMode true } buildTypes { release { minifyEnabled false proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro' } } lintOptions { checkReleaseBuilds false // Or, if you prefer, you can continue to check for errors in release builds, // but continue the build even when errors are found: abortOnError false } productFlavors { } } dependencies { compile fileTree(include: ['*.jar'], dir: 'libs') compile 'com.mcxiaoke.volley:library:1.0.19' compile 'com.github.chrisbanes.photoview:library:1.2.3' compile 'com.facebook.android:facebook-android-sdk:4.+' compile 'com.pkmmte.view:circularimageview:1.1' compile 'com.melnykov:floatingactionbutton:1.3.0' compile 'com.squareup.okhttp:okhttp:2.5.0' compile 'com.github.bumptech.glide:glide:3.7.0' compile 'com.android.support:appcompat-v7:26.1.0' compile 'com.android.support:design:26.1.0' compile 'com.android.support:recyclerview-v7:26.1.0' compile 'com.android.support:cardview-v7:26.1.0' compile 'com.squareup.picasso:picasso:2.5.2' compile 'com.github.bumptech.glide:glide:3.7.0' compile 'com.balysv:material-ripple:1.0.2' compile 'com.google.firebase:firebase-core:11.6.2' compile 'com.google.firebase:firebase-ads:11.6.2' compile 'com.google.firebase:firebase-messaging:11.6.2' compile 'com.google.android.gms:play-services-gcm:11.6.2' compile 'com.google.android.gms:play-services-ads:11.6.2' compile files('libs/YouTubeAndroidPlayerApi.jar') } apply plugin: 'com.google.gms.google-services' ```
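The trace only says that aapt exited with status 1; the actual resource error is printed by aapt itself. A possible diagnostic step (an assumption, not from the thread) is to re-run just the failing task with more logging so that output becomes visible:

```
gradlew :app:processDebugResources --info --stacktrace
```

(That is the Windows form matching the paths above; on Linux/macOS it would be `./gradlew`.)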
2018/03/20
1,206
3,248
<issue_start>username_0: I am trying to show the weekends between 2 dates in a formatted way. For example: ``` $start = strtotime(date('Y-m-d')); $end = strtotime('2018-06-12'); for ($i = $start; $i <= $end; $i = strtotime("+1 day", $i)) { //show weekends as // saturday and sunday - march 24-25, 2018 // saturday - march 31, 2018 // sunday - april 1, 2018 // saturday and sunday - april 7-8, 2018 //......... } ``` As you can see above, I need to group the weekend days, and if the Saturday and Sunday fall in two different months, show them separately. Can someone please help me with how to do this?<issue_comment>username_1: Try this: ``` $start = strtotime(date('Y-m-d')); $end = strtotime('2018-06-12'); for ($i = $start; $i <= $end; $i = strtotime("+1 day", $i)) { $date = date('D Y-m-d N', $i); $n = (int)date('N', $i); if ($n > 5) { echo $date . ' --- '; } } ``` date('N', $i) will give you the number of the weekday (1 = Monday, 7 = Sunday), and you check whether it is greater than 5 (6 or 7, i.e. Saturday or Sunday). Upvotes: 0 <issue_comment>username_2: This should do: ``` $start = strtotime('2018-03-18'); $end = strtotime('2018-06-12'); for ($i = $start; $i <= $end; $i = strtotime("+1 day", $i)) { if (date('w', $i) == 6) { list ($currentDate, $currentMonth, $currentYear) = explode(' ', date('j F Y', $i)); $i = strtotime("+1 day", $i); list ($nextDate, $nextMonth, $nextYear) = explode(' ', date('j F Y', $i)); if ($currentMonth == $nextMonth) { echo 'Saturday and Sunday - ' . $currentMonth. ' ' . $currentDate . '-' . ($currentDate + 1) . ', ' . $currentYear . PHP_EOL; continue; } echo 'Saturday - ' . $currentMonth . ' ' . $currentDate . ', ' . $currentYear . PHP_EOL; echo 'Sunday - ' . $nextMonth . ' ' . $nextDate . ', ' . $nextYear . PHP_EOL; continue; } elseif (date('w', $i) == 0) { echo 'Sunday - ' . date('F j, Y', $i) . PHP_EOL; } } ``` Upvotes: 1 [selected_answer]<issue_comment>username_3: Here is one method which creates an array that I can later implode to get the string format you want. The array is built on year, month, week and day. Then it's just a matter of iterating and echoing. ``` $start = strtotime(date('Y-m-d')); $end = strtotime("2018-06-12"); for ($i = $start; $i <= $end;) { If(date("N", $i) == 6){ $arr[date("Y", $i)][date("F", $i)][date("W", $i)][date("l", $i)] = date("d", $i); $i+= 86400; $arr[date("Y", $i)][date("F", $i)][date("W", $i)][date("l", $i)] = date("d", $i); $i+= 86400*6; }Else If(date("N", $i) == 7){ $arr[date("Y", $i)][date("F", $i)][date("W", $i)][date("l", $i)] = date("d", $i); $i+= 86400*6; }Else{ $i+= 86400; } } Foreach($arr as $year => $years){ Foreach($years as $month => $months){ Foreach($months as $week){ Echo implode(" and ",array_keys($week)) . " - " . $month . " " . Implode("-", $week) . ", ". $year . "\n"; } } } ``` <https://3v4l.org/nHe0d> Edit: forgot to output month. Edit 2: changed the initial loop to not loop all days. Should make it slightly faster. Edit 3: found a bug in the code. Corrected. Upvotes: 1
2018/03/20
1,394
2,744
<issue_start>username_0: I'm developing a script in python2 to generate an xml file with information that is in a database (psql database), but I get the following error: > Cannot serialize datetime.datetime(2018, 2, 4, 23, 5) (type datetime) > The code is the following: ``` for row in rows: Jobs = ET.SubElement(JobList, 'Jobs') ........ scheduledTime.text = row[7] startTime.text = row[8] endTime.text = row[9] ........ myJobList = ET.tostring(JobList) ``` and the data returned by the fetchall of the query is: ``` (3090, 'Backup-Local.2018-02-04_23.05.00_57', 'Backup-Local', 'B', 'F', 1, 'T', datetime.datetime(2018, 2, 4, 23, 5), datetime.datetime(2018, 2, 4, 23, 5, 2), datetime.datetime(2018, 2, 4, 23, 5, 20), datetime.datetime(2018, 2, 4, 23, 5, 20), 1517785520L, 349, 1515088289, 488, 386893432L, 397505297L, 0, 0, 2, 16, 0, 0, 0, 0, 0, '', 'File') ``` I want to know how I can 'translate' the returned datetime to a string, or whether a datetime type exists in XML?<issue_comment>username_1: Searching for your error message on Google turns up this post: [How to overcome "datetime.datetime not JSON serializable"?](https://stackoverflow.com/questions/11875770/how-to-overcome-datetime-datetime-not-json-serializable) There the `isoformat()` function is recommended for the conversion, and there is a broader discussion worth reading. Upvotes: 2 [selected_answer]<issue_comment>username_2: You can use the `datetime` module's `strftime` method to convert a datetime object to a string object. **Ex:** ``` import datetime s = (3090, 'Backup-Local.2018-02-04_23.05.00_57', 'Backup-Local', 'B', 'F', 1, 'T', datetime.datetime(2018, 2, 4, 23, 5), datetime.datetime(2018, 2, 4, 23, 5, 2), datetime.datetime(2018, 2, 4, 23, 5, 20), datetime.datetime(2018, 2, 4, 23, 5, 20), 1517785520L, 349, 1515088289, 488, 386893432L, 397505297L, 0, 0, 2, 16, 0, 0, 0, 0, 0, '', 'File') res = [] for i in s: if isinstance(i, datetime.datetime): res.append(i.strftime("%Y-%m-%d %H:%M:%S")) #Convert datetime to string. else: res.append(i) print(res) print(tuple(res)) ``` **Output:** ``` [3090, 'Backup-Local.2018-02-04_23.05.00_57', 'Backup-Local', 'B', 'F', 1, 'T', '2018-02-04 23:05:00', '2018-02-04 23:05:02', '2018-02-04 23:05:20', '2018-02-04 23:05:20', 1517785520L, 349, 1515088289, 488, 386893432L, 397505297L, 0, 0, 2, 16, 0, 0, 0, 0, 0, '', 'File'] (3090, 'Backup-Local.2018-02-04_23.05.00_57', 'Backup-Local', 'B', 'F', 1, 'T', '2018-02-04 23:05:00', '2018-02-04 23:05:02', '2018-02-04 23:05:20', '2018-02-04 23:05:20', 1517785520L, 349, 1515088289, 488, 386893432L, 397505297L, 0, 0, 2, 16, 0, 0, 0, 0, 0, '', 'File') ``` Upvotes: 2
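To make the `isoformat()` suggestion from the accepted answer concrete, a tiny sketch (results shown as comments):

```python
import datetime

dt = datetime.datetime(2018, 2, 4, 23, 5)
dt.isoformat()                    # '2018-02-04T23:05:00', matches XML's dateTime
dt.strftime("%Y-%m-%d %H:%M:%S")  # '2018-02-04 23:05:00'
```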
2018/03/20
1,023
2,071
<issue_start>username_0: TypeError: Converting circular structure to JSON at JSON.stringify () ``` var userInfo = currentUser.children; if(currentUser.type ==1){ userInfo.push(currentUser); //console.log(typeof userInfo); } ``` This produces the error "Converting circular structure to JSON". How can I get around this? Please guide me.
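The cycle here presumably comes from pushing `currentUser` into its own `children` array. One general workaround (a sketch, not from the thread) is a `JSON.stringify` replacer that skips objects it has already seen:

```js
// drop repeated object references so stringify never revisits a node
function safeStringify(obj) {
  const seen = new WeakSet();
  return JSON.stringify(obj, function (key, value) {
    if (typeof value === "object" && value !== null) {
      if (seen.has(value)) return undefined; // break the cycle
      seen.add(value);
    }
    return value;
  });
}

console.log(safeStringify(userInfo));
```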
2018/03/20
580
2,192
<issue_start>username_0: I am currently creating a program using Java + Selenium WebDriver and am having difficulty with a looping technique in Java. Below is my code. Assume, > > rt = {Capacity Utilization,Overlay Report} > > > fq = {15 Minute,Hourly,Daily} > > > Code:: ``` for( String rt : rep_type ) { new Select(driver.findElement(By.name("reporttype"))).selectByVisibleText(rt); for( String fq : freq ){ try { Thread.sleep(2000); new Select(driver.findElement(By.name("frequency"))).selectByVisibleText(fq); Thread.sleep(3000); } catch (Exception e){ e.printStackTrace(); } Thread.sleep(1500); try { WebElement selectElement = driver.findElement(By.id("firstPeriod")); Select select = new Select(selectElement); List<WebElement> opts = select.getOptions(); ------ ----- ---- -- - ``` The problem is, when 'rt' equals "Overlay Report", there is no element for this report matching the strings in the 'fq' array. Hence, it proceeds to the next code within the same loop. How can I jump back to the start of the outer loop, which is ``` for( String rt : rep_type ) { ``` so it will not proceed to the following code when no condition is met for the 'fq' array?<issue_comment>username_1: You're looking for either the `break` or `continue` keywords. When called from within a loop they will either break out of the loop or skip to the end of the current iteration. So, your potential solution will look like this: ``` for( String rt : rep_type ) { if ("Overlay Report".equals(rt)) { continue; } // rest of code ``` Upvotes: 1 <issue_comment>username_2: You can label your loop and use it just like this: ``` myLoopForInitialCycle: for(Smth smth : smths) { ... for(Smth smth2: smths2) { if (smth.equals(smth2)) { continue myLoopForInitialCycle; } } } ``` This smells of goto-style structure and should be used sparingly, but I think it is just what you are asking for. Upvotes: 2
2018/03/20
344
1,177
<issue_start>username_0: I am testing Yesware tracked emails. I am sending a test email to my own email address, and when clicking the link in the email it shows that the link was opened from my IP address, but the link is also opened from different IPs at the same time. I'm just worried that my email is hacked. Confused; any suggestions? The IP addresses are 172.16.17.32, 192.168.127.12, 172.16.58.3
2018/03/20
629
2,693
<issue_start>username_0: [![enter image description here](https://i.stack.imgur.com/dUugb.jpg)](https://i.stack.imgur.com/dUugb.jpg) As you already know, **Android Oreo** introduced a new design for **media controls**: rather than selecting a singular color for the **notification based on the app's color**, media playback notifications can instead **draw colors out of the album artwork**. Android then uses those colors to make a notification that blends the artwork into the notification while making the notification itself pop in your **notification shade**. May I know how we can do this?<issue_comment>username_1: <https://developer.android.com/reference/android/app/Notification.Builder.html> To fill with a solid colour: ``` .setColorized(boolean colorize) ``` Set whether this notification should be colorized. - A media template lets the user control media currently playing from an app. The collapsed view displays up to three actions, and the large icon can show a related image, such as an album cover. The expanded view displays up to five actions with a larger image, or six actions if no image is displayed. Colors from provided images automatically color the notification's background and other elements. (<https://material.io/guidelines/patterns/notifications.html#notifications-templates>) - Media notifications automatically get colorized based on the album art. The Palette API takes the album art, extracts some colors, and applies them to the media player notification. Upvotes: 0 <issue_comment>username_2: This is MediaStyle for notifications. You need to set MediaStyle and the media session token, and that's it. For example: ``` NotificationCompat.Builder builder = new NotificationCompat.Builder(this, CHANNEL_ID); //building some actions... builder.setSmallIcon(R.mipmap.ic_launcher) .setStyle(new android.support.v4.media.app.NotificationCompat.MediaStyle() .setShowActionsInCompactView(0, 1, 2) .setShowCancelButton(true) .setMediaSession(mediaSessionCompat.getSessionToken())) .setCategory(NotificationCompat.CATEGORY_TRANSPORT) .setVisibility(NotificationCompat.VISIBILITY_PUBLIC) .setShowWhen(false) .setContentTitle("Title Name") .setContentText("Content text") .setSmallIcon(R.drawable.pause) .setWhen(0) .setAutoCancel(true) .setLargeIcon(icon); ``` You can find a tutorial here: [Tutorial](https://medium.com/google-developers/migrating-mediastyle-notifications-to-support-android-o-29c7edeca9b7) Upvotes: 2
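For the manual route the first answer hints at, a rough sketch with the support library's Palette API (the `albumArt` bitmap and the fallback color are assumptions, not from the thread): derive a color from the artwork and hand it to the builder together with `setColorized(true)`:

```java
Palette.from(albumArt).generate(palette -> {
    // fall back to black if no dominant swatch could be extracted
    int color = palette.getDominantColor(Color.BLACK);
    builder.setColor(color)
           .setColorized(true);
});
```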
2018/03/20
652
2,373
<issue_start>username_0: In my C++ code I get events about data that has to be inserted in the database. The events are produced by different threads, and I have a BlockingConcurrentQueue that is used in a producer-consumer model. Every thread writes (produces) into the queue something like: ``` INSERT INTO CHAT_COMMENTS (chat_comment_id, comment) values (3,'This is a comment'; ``` The above string is generated as ``` sprintf(insert_statement, "INSERT INTO CHAT_COMMENTS (chat_comment_id, comment) values (%d,'%s')",e->id,e->comment); ``` A scheduler runs every once in a while and executes all of these insert statements in a MySQL database. Now, the problem is that comments might have some MySQL special characters, as shown below: ``` cout << comment; // produces "this_% LIKE 'a comment """\m/'DROP TABLE USERS" INSERT INTO CHAT_COMMENTS (chat_comment_id, comment) values (3,'this_% LIKE 'a comment """\m/'DROP TABLE USERS'); ``` Is there a way to handle these cases? I know that one can use prepared statements, but I am looking for something different, as PreparedStatements are not thread safe and also because I want to execute the queries above in batches.<issue_comment>username_1: You're probably looking for the [`mysql_real_escape_string()`](https://dev.mysql.com/doc/refman/5.7/en/mysql-real-escape-string.html) function. Upvotes: 1 <issue_comment>username_2: Don't try and code around SQL injection. ---------------------------------------- Use prepared statements. If you need to ensure that the prepared statement (or connection) is only used on one thread at a time, associate it with a `std::mutex` (or similar) that you lock while it is being used. ``` class threaded_statement; class threaded_connection { std::unique_ptr<sql::Connection> conn; std::mutex mut; public: threaded_statement prepareStatement(const char * stmt) { std::lock_guard<std::mutex> guard(mut); return { conn->prepareStatement(stmt) }; } }; class threaded_statement { std::unique_ptr<sql::PreparedStatement> stmt; std::mutex mut; public: threaded_statement(sql::PreparedStatement * stmt) : stmt(stmt) {} void setInt(int pos, int value) { std::lock_guard<std::mutex> guard(mut); stmt->setInt(pos, value); } void setString(int pos, std::string value) { std::lock_guard<std::mutex> guard(mut); stmt->setString(pos, value); } // etc ... void execute() { std::lock_guard<std::mutex> guard(mut); stmt->execute(); } }; ``` Upvotes: 0
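If you do stay with plain string building for batching, a minimal sketch of the escaping route from the first answer, using the MySQL C API; `mysql` is an assumed open `MYSQL*` handle and `MAX_COMMENT_LEN` an assumed bound, and the escape buffer must be at least 2*length+1 bytes:

```cpp
char escaped[2 * MAX_COMMENT_LEN + 1];

// escapes quotes, backslashes, etc. according to the connection's charset
mysql_real_escape_string(mysql, escaped, e->comment, strlen(e->comment));

sprintf(insert_statement,
        "INSERT INTO CHAT_COMMENTS (chat_comment_id, comment) values (%d,'%s')",
        e->id, escaped);
```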
2018/03/20
824
2,523
<issue_start>username_0: I like how easy it is to write some variables to console output in C++ using `qDebug` from Qt: ``` int a = b = c = d = e = f = g = 1; qDebug() << a << b << c << d << e << f << g; ``` Result: ``` 1 1 1 1 1 1 1 ``` In comparison, using `std::cout` would require me to add the spacing and newline manually to get the same result: ``` std::cout << a << " " << b << " " << c << " " << d << " " << e << " " << f << " " << g << "\n"; ``` While I often use Qt, I sometimes work on projects where adding the Qt framework just to get access to qDebug would be overkill. And although it is not hard to write a new class that behaves similarly to `qDebug`, I am wondering if any established alternative to `std::cout` with similar behavior to `qDebug` already exists? **Edit:** What I am looking for is ideally an *established* library (or snippet, but I prefer something existing over rolling my own) that I can always use as my go-to solution when I need something like this. It could be header-only, or a large logging library that is much used and well-tested, or a simple, small snippet. The point is that it should be small and/or standard enough that other collaborators would be okay with including it in a project just for debugging/logging purposes. **Edit 2**: To clarify: It would be great to have a solution that both inserts spaces between the variables and newlines for each statement: ``` myDebug << 1 << 2 << 3; myDebug << 4 << 5 << 6; ``` Should return: ``` 1 2 3 4 5 6 ```<issue_comment>username_1: ``` #include <iostream> struct debugcout { }; template <class T> debugcout& operator<<(debugcout& os, const T& x) { std::cout << x << ' '; return os; } inline debugcout debug{}; ``` Usage: ``` int main() { debug << 1 << 2 << 3; } ``` Upvotes: 3 <issue_comment>username_2: ``` #include <iostream> class myDebug { bool is_first{true}; bool is_last{true}; public: myDebug() = default; myDebug(myDebug const &) = delete; myDebug & operator = (myDebug const &) = delete; myDebug & operator = (myDebug &&) = delete; myDebug(myDebug && dc) noexcept : is_first{false} { dc.is_last = false; } ~myDebug() { if (is_last) std::cout << '\n'; } template <class T> friend myDebug operator<<(myDebug db, const T& x) { if (db.is_first) db.is_first = false; else std::cout << ' '; std::cout << x; return db; } }; ``` ``` int main() { myDebug() << 1 << 2 << 3; myDebug() << 4 << 5 << 6; } ``` **`[Demo](http://coliru.stacked-crooked.com/a/24df0dca1903450a)`** Upvotes: 3 [selected_answer]
2018/03/20
321
1,261
<issue_start>username_0: Is it possible to have the Dropdown context menu work for the whole table row? We can render the Dropdown using the cell render method, but that way it doesn't cover the whole table cell because of cell padding. Thus the user has to click precisely on the displayed text, instead of anywhere in the cell.<issue_comment>username_1: For the context menu you can use the `Dropdown` or `Popover` components with some CSS customization, but the problem is that the popup menu doesn't appear next to the mouse pointer; it appears relative to the wrapped element. You can check **[here](https://codesandbox.io/s/n90579l9pm)** what I meant above. I have made a custom Popup component. It looks somewhat ugly to me, but it works fine. **[You can check it here](https://codesandbox.io/s/rm23kroqyo)**. Upvotes: 5 [selected_answer]<issue_comment>username_2: > For the context menu you can use the `Dropdown` or `Popover` components with some CSS customization, but the problem is that the popup menu doesn't appear next to the mouse pointer; it appears relative to the wrapped element. You can check it [here](https://codesandbox.io/s/n90579l9pm). It seems antd has since fixed the popup position issue of the `Dropdown` component. Upvotes: 2
2018/03/20
954
3,896
<issue_start>username_0: I am trying to get the default language from the browser and I use the following code to get it: ``` var languages = HttpContext.Request.UserLanguages; ``` Since the above is not supported with .NET Core 2, I tested with: ``` var requestContext = Request.HttpContext.Features.Get<IRequestCultureFeature>(); ``` However, it returns null. What is the correct way or alternative to get the language?<issue_comment>username_1: You have to add the localization middleware to enable parsing of the culture header, and then get the value through IRequestCultureFeature. Check this link: <https://github.com/aspnet/Mvc/issues/3549> Upvotes: 0 <issue_comment>username_2: You need to add the localisation middleware to be able to get the `IRequestCultureFeature` feature: ``` public void Configure(IApplicationBuilder app) { //... //Add this: app.UseRequestLocalization(); //... } ``` Now in your controller you can request the feature like this: ``` var requestCulture = Request.HttpContext.Features.Get<IRequestCultureFeature>(); ``` Upvotes: 4 <issue_comment>username_3: You can get the browser language from the [Request Headers](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers) Write on your controller: ``` //For example --> browserLang = 'en-US' var browserLang= Request.Headers["Accept-Language"].ToString().Split(";").FirstOrDefault()?.Split(",").FirstOrDefault(); ``` Upvotes: 3 <issue_comment>username_4: `IRequestCultureFeature` provides the first matched language, which is supported by your application. Declaration of supported languages is defined in Configure() of your Startup class (see [example](https://github.com/aspnet/Localization/blob/master/samples/LocalizationSample/Startup.cs)). If you still need all accepted languages as a simple `string[]` like the older `Request.UserLanguages` property, then use the `HeaderDictionaryTypeExtensions.GetTypedHeaders()` extension defined in the `Microsoft.AspNetCore.Http` namespace: ``` // In your action method. var languages = Request.GetTypedHeaders() .AcceptLanguage ?.OrderByDescending(x => x.Quality ?? 1) // Quality defines priority from 0 to 1, where 1 is the highest. .Select(x => x.Value.ToString()) .ToArray() ?? Array.Empty<string>(); ``` The array `languages` contains the list of accepted languages according to the priority parameter `q`. The language with the highest priority comes first. To get the default language, take the first element of the array `languages`. As an extension method: ``` using System; using System.Collections.Generic; using System.Linq; using Microsoft.AspNetCore.Http; public static class HttpRequestExtensions { public static string[] GetUserLanguages(this HttpRequest request) { return request.GetTypedHeaders() .AcceptLanguage ?.OrderByDescending(x => x.Quality ?? 1) .Select(x => x.Value.ToString()) .ToArray() ?? Array.Empty<string>(); } } ``` Upvotes: 6 [selected_answer]<issue_comment>username_5: Maybe my solution will be useful to someone. The difference is that if the headers do not have an Accept-Language, the default language is used. This code is for .NET 6. 
``` builder.Services.Configure<RequestLocalizationOptions>(config => { CultureInfo en = new CultureInfo("en"); CultureInfo ru = new CultureInfo("ru"); var langs = new List<CultureInfo> { en, ru }; config.DefaultRequestCulture = new RequestCulture(en); config.SupportedCultures = langs; config.SupportedUICultures = langs; }); app.UseRequestLocalization(); public IActionResult Index() { var languageRequest = HttpContext.Features.Get<IRequestCultureFeature>()?.RequestCulture.Culture.Name; // use switch with languageRequest or whatever logic you need } ``` Also, if you do not use builder.Services.Configure, then languageRequest will contain the full name of the requested culture, for example "en-US", "ru-RU", etc... Upvotes: 0
2018/03/20
1,056
4,055
<issue_start>username_0: The condition I have is: ``` nvl(GECM_ICP_PKG.GECM_GET_PARAMETER_VALUE_FNC('GECM_SCX_VALIDATION','REN_SCX_VALIDATION',v_org_name,NULL,NULL),'N') = 'Y' SELECT nvl(GECM_ICP_PKG.GECM_GET_PARAMETER_VALUE_FNC('GECM_SCX_VALIDATION','REN_SCX_VALIDATION',v_org_name,NULL,NULL),'N') FROM DUAL; ``` My requirement is to pass values from the `org_id` column of the `ap_supplier_sites_all` table as the `v_org_name` parameter, like a for loop: I want to check all the org_ids automatically, instead of passing each `org_id` into the `v_org_name` parameter by hand.
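A minimal PL/SQL sketch of such a loop (assuming the function signature shown in the question; the DBMS_OUTPUT line is only illustrative):

```sql
BEGIN
  FOR r IN (SELECT DISTINCT org_id FROM ap_supplier_sites_all) LOOP
    IF NVL(GECM_ICP_PKG.GECM_GET_PARAMETER_VALUE_FNC(
             'GECM_SCX_VALIDATION', 'REN_SCX_VALIDATION',
             r.org_id, NULL, NULL), 'N') = 'Y' THEN
      DBMS_OUTPUT.PUT_LINE('Validation enabled for org_id ' || r.org_id);
    END IF;
  END LOOP;
END;
/
```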
2018/03/20
3,178
8,364
<issue_start>username_0: There is a big javascript library (~ 40 000 lines of code) and an application which uses less than **50%** of the library's code. There is a test which utilizes all the needed functionality from the library and can produce a coverage report. How can I programmatically remove every unused line of code, relying on the test? **Note**: the coverage report contains the indices of lines which were executed, but it is not accurate: closing braces are ignored, lines with method names are marked as executed even if the method body was not, etc.<issue_comment>username_1: This approach will not work, I fear. Not that easily, and not with the data you have available, at least. 1. The coverage report for your *test which utilizes all the needed functionality* is using which coverage metric? Does it exercise all values, conditions and possible combinations thereof? If not, you may miss out on usage of some part of the code. 2. If your coverage report is not accurate, you cannot rely on it for removal actions. Given a sufficiently good test suite, you can however use the code coverage reports for hints. Remove code that is reported as unused, re-run your tests and check whether they still pass. Repeat until no more code snippets can be removed. Upvotes: 1 <issue_comment>username_2: You can use some JavaScript automation tools to remove your unwanted code. First you need to install one of the JS libraries below (Node.js required): [tree-shaking](https://www.engineyard.com/blog/tree-shaking) [UglifyJS2](https://github.com/mishoo/UglifyJS2#compressor-options) Visit either of these sites. Or you can find unused code using Chrome DevTools (but be careful and test many cases before removing anything, because this process only flags code that was not executed by your particular usage or test cases): [Remove unused JS code with Chrome DevTools](https://www.afasterweb.com/2017/04/30/using-chrome-to-identify-unused-code/) This worked fine for my case (but my JS code is less than 10k lines). Upvotes: 2 <issue_comment>username_3: You can try to use: ``` npm install -g fixmyjs fixmyjs ``` This is the [fixmyjs project](https://github.com/jshint/fixmyjs); it is a great tool for cleanup, though it appears to lack compatibility with some versions of ECMAScript. Upvotes: 3 <issue_comment>username_4: [Closure Compiler](https://closure-compiler.appspot.com) provides some quite advanced unused-code cleanup features. Some examples: Removing a dead code block -------------------------- ``` function hello(name) { alert('Hello, ' + name); } function hi(name) { alert('Hi, ' + name); } hello('New user 1'); hello('New user 2'); ``` Compiles to: ``` alert("Hello, New user 1"); alert("Hello, New user 2"); ``` completely stripping away the `hi` function and inlining `hello`. 
([live demo](https://closure-compiler.appspot.com/home#code%3D%252F%252F%2520%253D%253DClosureCompiler%253D%253D%250A%252F%252F%2520%2540compilation_level%2520ADVANCED_OPTIMIZATIONS%250A%252F%252F%2520%2540output_file_name%2520default.js%250A%252F%252F%2520%2540formatting%2520pretty_print%250A%252F%252F%2520%253D%253D%252FClosureCompiler%253D%253D%250A%250A%252F%252F%2520ADD%2520YOUR%2520CODE%2520HERE%250A%250Afunction%2520hello(name)%2520%257B%250A%2520%2520alert('Hello%252C%2520'%2520%252B%2520name)%253B%250A%257D%250A%250Afunction%2520hi(name)%2520%257B%250A%2520%2509alert('Hi%252C%2520'%2520%252B%2520name)%253B%250A%257D%250A%250Ahello('New%2520user%25201')%253B%250Ahello('New%2520user%25202')%253B%250A)) Moving to a more complicated case --------------------------------- As the code gets more complicated, it finds new ways to optimize. For example: ``` let greeted = 0; function hello(name) { greeted += 1; alert('Hello, ' + name); } function hi(name) { greeted += 1; alert('Hi, ' + name); } hello('New user ' + greeted); hello('New user ' + greeted); ``` Becomes: ``` var a = 0; function b() { var c = "New user " + a; a += 1; alert("Hello, " + c); } b(); b(); ``` ([live demo](https://closure-compiler.appspot.com/home#code%3D%252F%252F%2520%253D%253DClosureCompiler%253D%253D%250A%252F%252F%2520%2540compilation_level%2520ADVANCED_OPTIMIZATIONS%250A%252F%252F%2520%2540output_file_name%2520default.js%250A%252F%252F%2520%2540formatting%2520pretty_print%250A%252F%252F%2520%253D%253D%252FClosureCompiler%253D%253D%250A%250A%252F%252F%2520ADD%2520YOUR%2520CODE%2520HERE%250Alet%2520greeted%2520%253D%25200%253B%250A%250Afunction%2520hello(name)%2520%257B%250A%2520%2520greeted%2520%252B%253D%25201%253B%250A%2520%2520alert('Hello%252C%2520'%2520%252B%2520name)%253B%250A%257D%250A%250Afunction%2520hi(name)%2520%257B%250A%2520%2520greeted%2520%252B%253D%25201%253B%250A%2520%2509alert('Hi%252C%2520'%2520%252B%2520name)%253B%250A%257D%250A%250Ahello('New%2520user%2520'%2520%252B%2520greeted)%253B%250Ahello('New%2520user%2520'%2520%252B%2520greeted)%253B%250A%250A)) Make sure you turn on the `ADVANCED_OPTIMIZATIONS` compilation level to enable dead code removal. Upvotes: 3 <issue_comment>username_5: There are two techniques to eliminate dead code and it is possible using javascript build systems- webpack. 1. **Dead code elimination (DCE)** : compiler optimisation- It excludes which is not needed in the program. 2. **Tree Shaking** It works in reverse direction, includes only what is actually needed in the program. [Click here](https://www.thomann.io/blog/post/webpack_conditional_compilation_dead_code_elimination) for detailed configuration. Upvotes: 2 <issue_comment>username_6: In order to automatically remove unused code from bundle, we have: 1. Tree shaking 2. Ugliy and Minification tools such as uglifyjs, Terser 3. Google closure compiler (best results) However, in order to find the unused assets, to remove manually, you can use [deadfile](https://m-izadmehr.github.io/deadfile/#/) library: <https://m-izadmehr.github.io/deadfile/> It can simply find unused files, in any JS project. 
Without any config, it supports ES6, JSX, and Vue files: [![enter image description here](https://i.stack.imgur.com/d13Io.png)](https://i.stack.imgur.com/d13Io.png) Upvotes: 2 <issue_comment>username_7: You can process the report by leaving only the lines specified in the "ranges" in the "text" ```js const coverageReport = [ { "url": "http://127.0.0.1:8080/index.js", "ranges": [ { "start": 96, "end": 197 }, { "start": 294, "end": 434 }, { "start": 469, "end": 482 }, { "start": 511, "end": 574 }, { "start": 598, "end": 606 }, { "start": 630, "end": 655 } ], "text": "function unused() {\n console.log('unused1');console.log('unused2');\n console.log('unused3');\n}\n\nfunction used() {\n console.log('used1');console.log('used2');\n console.log('used3');\n}\n\nused();\n\nfunction unused2() {\n console.log('unused1');console.log('unused2');\n console.log('unused3');\n}\n\nfunction used2() {\n console.log('used1');console.log('used2');\n console.log('used3');\n}\n\nused2();\n\nif (true) {\n console.log('used4');\n} else {\n console.log('unused4');\n}\n\nif (false) {\n console.log('unused5');\n} else {\n console.log('used5');\n}\n\ntrue ? console.log('used6') : console.log('unused6');\nfalse ? console.log('unused7') : console.log('used7');\n" } ]; const deleteUnusedCode = (fileObjs) => { const outFileObjs = []; for (const fileObj of fileObjs) { const {url, ranges, text} = fileObj; let outText = ''; for (const {start, end} of ranges) { outText += text.slice(start, end); } outFileObjs.push({url, text: outText}) } return outFileObjs } const showFileObjs = (fileObjs) => { for (const {url, text} of fileObjs) { console.log(url); console.log(text); console.log(); } } console.log('before:'); showFileObjs(coverageReport); console.log('after:'); showFileObjs(deleteUnusedCode(coverageReport)); ``` Yes, after this we have only the js code used, but unfortunately it is not valid. You can manually fix the errors and then everything will be ok. I wrote a project in which I showed the use of a coverage report to generate files based on it: <https://github.com/fosemberg/coverage-report-handler> Upvotes: 0
2018/03/20
650
1,927
<issue_start>username_0: I'm trying to get monthly aggregated data from a legacy table, meaning the date columns are strings: ``` amount date_create 100 2018-01-05 200 2018-02-03 300 2018-01-22 ``` However, the command ``` Select DATE_TRUNC(DATE date_create, MONTH) as month, sum(amount) as amount_m from table group by 1 ``` returns the following error: > Error: Syntax error: Expected ")" but got identifier "date_create" > Why does this query not run, and what can be done to avoid the issue? Thanks<issue_comment>username_1: Try adding quotes around date_create: ``` Select DATE_TRUNC('date_create', MONTH) as month, sum(amount) as amount_m from table group by 1 ``` Upvotes: 0 <issue_comment>username_2: It looks like you meant to cast `date_create` instead of using the `DATE` keyword (which is how you construct a literal value) there. Try this instead: ``` Select DATE_TRUNC(DATE(date_create), MONTH) as month, sum(amount) as amount_m from table GROUP BY 1 ``` Upvotes: 3 [selected_answer]<issue_comment>username_3: I figured it out: `date_trunc(cast(date_create as date), MONTH) as Month` Upvotes: 1 <issue_comment>username_4: Another option for BigQuery Standard SQL - using the [`PARSE_DATE`](https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-and-operators#parse_date) function ```sql #standardSQL WITH `project.dataset.table` AS ( SELECT 100 amount, '2018-01-05' date_create UNION ALL SELECT 200, '2018-02-03' UNION ALL SELECT 300, '2018-01-22' ) SELECT DATE_TRUNC(PARSE_DATE('%Y-%m-%d', date_create), MONTH) AS month, SUM(amount) AS amount_m FROM `project.dataset.table` GROUP BY 1 ``` with result as ```sql Row month amount_m 1 2018-01-01 400 2 2018-02-01 200 ``` In practice I prefer PARSE_DATE over CAST, as the former documents the expected data format. Upvotes: 1
2018/03/20
1,221
3,178
<issue_start>username_0: I'm trying to transform this unicode value: > string_value = u'd\xe9cid\xe9' > to > string_value = u'décidé' > I feel like I've tried everything: ``` decoded_str = string_value.decode('utf-8') ``` or ``` string_value = str(string_value) decoded_str = string_value.encode('latin1').decode('utf-8') ``` or ``` string_value = string_value.decode('latin-1') ``` for this one the result is: > d\xc3\xa9cid\xc3\xa9 > I have the same result if I do: ``` string_value = string_value.encode('utf-8') ``` I've read from: [How do I convert 'blah \xe9 blah' to 'blah é blah'](https://stackoverflow.com/questions/27197507/how-do-i-convert-blah-xe9-blah-to-blah-%C3%A9-blah) also from: [Why does Python print unicode characters when the default encoding is ASCII?](https://stackoverflow.com/questions/2596714/why-does-python-print-unicode-characters-when-the-default-encoding-is-ascii) and: [How do I convert a unicode to a string at the Python level?](https://stackoverflow.com/questions/2783079/how-do-i-convert-a-unicode-to-a-string-at-the-python-level) EDIT: My problem is I need to use the data. I mean, if I have: > string_value = u'mai 2017 \u2013 Aujourd\u2019hui' > which is: > mai 2017 – Aujourd’hui > I want to do: ``` string_list = string_value.split('-') ``` But the result is: ``` [u'mai 2017 \u2013 Aujourd\u2019hui'] ``` And I would like: ``` ['mai 2017', 'Aujourd’hui'] ``` NEW EDIT: I understand that I'm going in the wrong direction, thanks to your answer. \xe9 is the right representation of 'é' and it's not a problem. My real issue is why does json.loads() transform 'mai 2017 – Aujourd’hui' to 'mai 2017 \u2013 Aujourd\u2019hui' ?<issue_comment>username_1: I am not sure what you're asking: `\xe9` is a representation of the code point 233 (`e9` in hexadecimal), [which simply is the letter "é"](http://www.codetable.net/decimal/233): ``` >>> u'é' == u'\xe9' True ``` Your confusion might stem from the fact that the `repr` of a Python string is (in Python 2) in ASCII, so non-ASCII characters are escaped. The Python console displays a value using `repr` if you do not `print` it explicitly: ``` >>> print(repr(u'é')) u'\xe9' >>> print(repr(u'\xe9')) u'\xe9' ``` However, when you print the value, then that conversion doesn't happen and everything works as expected: ``` >>> print(u'é') é >>> print(u'\xe9') é ``` Also note that in Python 3, `repr` returns Unicode: ``` Python 3.5.2 (default, Nov 23 2017, 16:37:01) [GCC 5.4.0 20160609] on linux Type "help", "copyright", "credits" or "license" for more information. >>> print(repr(u'\xe9')) 'é' ``` **Update after the question was edited:** As pointed out in the comments, `\u2013` is not the same character as `-` (just as `a` and `b` are separate characters). So you'll need to split on `\u2013` instead of splitting on `-`. Upvotes: 2 <issue_comment>username_2: [splitting a string with a unicode delimiter?](https://stackoverflow.com/questions/42323347/splitting-a-string-with-a-unicode-delimiter) so... ``` print string_value.split(u"\u2013") ``` Upvotes: 1 [selected_answer]
2018/03/20
917
3,438
<issue_start>username_0: Sorry for bad English :) I ran into a problem in my application. I want to implement a post search by the post title, post text, and its tags. I have a search form and I want ALL posts that contain this word in the title or text or tags to be displayed, but right now it works differently. For example, I have 2 posts: one of them contains the word "hello" in the header, and the other has the tag "hello". I want both of these posts to be displayed, but currently only the one that has the tag is displayed, and the second one with the word "hello" in the header is ignored, and I don't know why. I know that the problem is with my `Post.search` method. Please help me, thanks in advance!

**Here is my code:**

**Post.rb:**

```
class Post < ApplicationRecord
  mount_uploader :image, ImageUploader
  validates :body, presence: true, length: { maximum: 500}
  validates :title, presence: true, length: { maximum: 50}
  validates :adress, length: { maximum: 50}
  belongs_to :user
  has_many :taggings
  has_many :tags, through: :taggings
  has_many :comments, dependent: :destroy
  has_many :favorites, dependent: :destroy

  def self.search(keywords)
    if keywords
      joins(:tags).where("lower (title) ILIKE :value OR lower (body) ILIKE :value OR lower (tags.name) ILIKE :value", value: "%#{keywords.downcase}%")
    else
      all.order("created_at DESC")
    end
  end
end
```

**Post Controller index action:**

```
def index
  @posts = Post.search(params[:keywords]).uniq
end
```<issue_comment>username_1: Try the following

**Option One**

```
Post.includes(:tags).where("lower (posts.title) ILIKE :value OR lower (posts.body) ILIKE :value OR lower (tags.name) LIKE :value", value: "%#{params[:keywords].downcase}%" ).references(:tags)
```

Or something like this directly in your controller index action

**Option Two**

```
def index
  if params[:keywords].present?
    @posts = Post.includes(:tags).where("lower (posts.title) ILIKE :value OR lower (posts.body) ILIKE :value OR lower (tags.name) LIKE :value", value: "%#{params[:keywords].downcase}%" ).references(:tags)
  else
    @posts = Post.all.order(created_at: :DESC)
  end
end
```

**Option Three**

```
def index
  if params[:keywords].present?
    @posts = Post.where('true')
    @posts = @posts.where('lower (title) ILIKE' => "%#{params[:keywords]}%")
    @posts = @posts.where('lower (body) ILIKE' => "%#{params[:keywords]}%")
    @posts = @posts.includes(:tags).where('lower (tags.name)' => "%#{params[:keywords]}%")
    @posts = @posts.order(created_at: :DESC).uniq
  else
    @posts = Post.all.order(created_at: :DESC)
  end
end
```

For complex search you can see this [`RailsCast`](http://railscasts.com/episodes/343-full-text-search-in-postgresql?view=asciicast) tutorial for [`pg_search`](https://github.com/Casecommons/pg_search)

Hope it helps.

Upvotes: 3 [selected_answer]<issue_comment>username_2: I think it's because of "joins". You currently have an "inner join", so posts that have no matching row in the "tags" table are removed from the result. You would need a right/left join instead.

Upvotes: 0
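A variant of the same idea, sketched with `left_outer_joins` (available from Rails 5, which is an assumption about the asker's setup) — note that on PostgreSQL `ILIKE` is already case-insensitive, so the `lower()` calls and `downcase` are redundant:

```ruby
def self.search(keywords)
  return order(created_at: :desc) if keywords.blank?

  left_outer_joins(:tags)
    .where("posts.title ILIKE :v OR posts.body ILIKE :v OR tags.name ILIKE :v",
           v: "%#{keywords}%")
    .distinct
end
```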
2018/03/20
749
2,621
<issue_start>username_0: I have a file that contains names of directories and some other information, but the names always come first. The file looks like this:

```
/home/user/Desktop/IS/proj_1/sch/text 4 2018-03-14 07:41:01
/home/user/Desktop/IS/file1.txt 3 2018-03-14 16:50:01
...
```

I have a variable "name" that contains this for example:

```
/home/user/Desktop/IS/file1.txt
```

And I need to delete that one particular line from the file somehow. I've searched many posts and tried using various quotations and ways of expansion, but nothing did the trick. If I type it in directly, it deletes the line without problem, but I'm having a hard time doing it from a variable. This is what I came up with but it still doesn't work.

```
sed -i '/"$name"/d' $File_name
```
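A sketch of two common fixes, assuming GNU sed and the variable names from the question: single quotes stop `$name` from expanding at all, and the slashes inside the path collide with sed's `/.../` address delimiters.

```bash
# 1) fixed-string match, no regex metacharacters to worry about:
grep -vF "$name" "$File_name" > "$File_name.tmp" && mv "$File_name.tmp" "$File_name"

# 2) sed with a different address delimiter (note the value is still treated
#    as a regex, so dots in the path can match any character):
sed -i "\|$name|d" "$File_name"
```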
2018/03/20
470
1,739
<issue_start>username_0: Consider the following code:

```
auditlog.getMessages()
    .stream()
    .filter(m -> messageId.equals(m.getMessageid()))
    .findFirst()
    .orElseThrow(NoMessageFoundException::new)
```

`NoMessageFoundException` is a custom unchecked exception, extending from `RuntimeException`.

When `findFirst()` returns an empty optional I expect a `NoMessageFoundException` to be thrown; however, the code just carries on. Is it impossible to do this with unchecked exceptions?

I could change `NoMessageFoundException` to a checked exception, but then I would have to write a `try/catch` block or some sort of wrapper to catch the exception as explained [here](http://www.baeldung.com/java-lambda-exceptions), but I wish to not do that. Any ideas?<issue_comment>username_1: There is no limitation on the type of Exception that can be thrown:

```
public <X extends Throwable> T orElseThrow(Supplier<? extends X> exceptionSupplier) throws X {
    if (value != null) {
        return value;
    } else {
        throw exceptionSupplier.get();
    }
}
```

If the code "carries on", it means that a message is found.

Upvotes: 3 [selected_answer]<issue_comment>username_2: If the exception is not getting thrown, it is because there is at least one element remaining after the filter action. See this example:

```
public class ASFasf {
    public static void main(String[] args) {
        List<Integer> l = Arrays.asList(1, 2, 3, 4, 5);
        Integer iR = l.stream().filter(x -> x > 100).findFirst().orElseThrow(NoMessageFoundException::new);
        System.out.println(iR);
    }
}

class NoMessageFoundException extends RuntimeException {
    public NoMessageFoundException() {
        super("Opala!!");
    }
}
```

`iR` will never get printed, and a `NoMessageFoundException` is thrown....

Upvotes: 1
2018/03/20
641
1,991
<issue_start>username_0: ```
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('hostname', username='test1234', password='<PASSWORD>')

path = ['/home/test/*.txt', '/home/test1/*.file', '/home/check/*.xml']
for i in path:
    for j in glob.glob(i):
        print j

client.close()
```

I am trying to list the wildcard files on a remote server by using `glob.glob`. But `glob.glob()` is not working. Using Python 2.6.

The remote server contains these files: `/home/test1/check.file`, `/home/test1/validate.file`, `/home/test1/vali.file`

Can anyone please help with this issue.<issue_comment>username_1: The `glob` will not magically start working with a remote server, just because you have instantiated `SSHClient` before.

You have to use the Paramiko API to list the files, like [`SFTPClient.listdir`](https://docs.paramiko.org/en/latest/api/sftp.html#paramiko.sftp_client.SFTPClient.listdir):

```
import fnmatch
```

```
sftp = client.open_sftp()

for filename in sftp.listdir('/home/test'):
    if fnmatch.fnmatch(filename, "*.txt"):
        print filename
```

You can also use a regular expression for the matching, if it suits your needs better. See [Using wildcard in remote path using Paramiko's SFTPClient](https://stackoverflow.com/q/51866185/850848).

---

*Side note: Do not use `AutoAddPolicy`. You lose security by doing so. See [Paramiko "Unknown Server"](https://stackoverflow.com/q/10670217/850848#43093883)*.

Upvotes: 5 [selected_answer]<issue_comment>username_2: Or use [pysftp](https://pysftp.readthedocs.io/en/release_0.2.9/), which is a *paramiko* wrapper, and write something like this:

```
import pysftp

def store_files_name(fname):
    pass

def store_dir_name(dir_name):
    pass

def store_other_file_type(other_file):
    pass

with pysftp.Connection('server', username='user', password='<PASSWORD>') as sftp:
    sftp.walktree('.', store_files_name, store_dir_name, store_other_file_type)
```

Upvotes: 2
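Extending the accepted answer to the question's list of wildcard paths, a sketch (Python 2, to match the question): list each remote directory once and match the basename pattern locally with `fnmatch`:

```python
import fnmatch
import posixpath

paths = ['/home/test/*.txt', '/home/test1/*.file', '/home/check/*.xml']

sftp = client.open_sftp()
for path in paths:
    # split '/home/test/*.txt' into directory and basename pattern
    directory, pattern = posixpath.split(path)
    for filename in sftp.listdir(directory):
        if fnmatch.fnmatch(filename, pattern):
            print posixpath.join(directory, filename)
```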
2018/03/20
739
2,267
<issue_start>username_0: I have the below mentioned data frame in R:

```
ID      T1  T2  T3    T4  T5  T6
Jul-17  8   2   1000  1   1   15000
Jun-17  3   2   2000  5   1   25000
May-17  9   2   5000  4   1   30000
```

I want to create the below mentioned bullet lines from the above dataframe (by adding `T1` and `T4` for line one and `T3` and `T6` for line 2) and then convert it into html format. The code should pick the `Jul-17` from the first row of the dataframe.

* some text here `Jul-17` are **9**.
* some text here `Jul-17` are **25,000**.

I already have two html tables as shown below, where I have some free space remaining at the top right hand side (mentioned as `Space` in the below example) where I want to paste these 2 bullet lines when using the `mailR` library to e-mail it. The body looks like below:

```
Table1   Space
Table2   Table2
```
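A sketch of how the two bullet lines could be built in R — the data frame name `df` and the bullet wording are assumptions; the resulting HTML string can then be placed into the free `Space` cell of the mail body:

```r
first <- df[1, ]                      # the Jul-17 row
line1 <- first$T1 + first$T4          # 8 + 1
line2 <- first$T3 + first$T6          # 1000 + 15000

bullets <- sprintf(
  "<ul><li>some text here %s are <b>%s</b></li><li>some text here %s are <b>%s</b></li></ul>",
  first$ID, format(line1, big.mark = ","),
  first$ID, format(line2, big.mark = ",")
)
```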
2018/03/20
472
1,490
<issue_start>username_0: I am using opentok for video feeding in my application. How can I integrate annotations on opentok using javascript?
2018/03/20
755
2,651
<issue_start>username_0: I have this code

```
textureAtlas = TextureAtlas("atlas.atlas")
val box = textureAtlas.findRegion("box")
```

I want to create a texture with "box". Is it possible? box.texture returns the original texture, not the region.

Oh, and I don't want to use Sprite and SpriteBatch. I need this in 3D, not 2D. Thanks<issue_comment>username_1: You can do this:

```
Texture boxTexture = new TextureRegion(textureAtlas.findRegion("box")).getTexture();
```

Upvotes: -1 <issue_comment>username_2: A TextureAtlas does not actually separate the pieces. When you get a region from the atlas, it just says "this is the area you are going to use" (u, v, u2, v2) plus the original reference to the whole texture. This is why batch.draw(Texture) and batch.draw(TextureRegion) are not the same in use. However, taking part of a picture as a texture is possible.

* You can use a pixmap to do it. First generate a pixmap from the atlas texture. Then create a new empty pixmap in the size of the "box" area you want. Then assign the pixel arrays and generate a texture from your new pixmap. It may be quite expensive due to the size of your TextureAtlas.
* You can use a framebuffer. Create a FrameBufferBuilder and build a new frame buffer. Draw the texture region to this buffer and get the texture from it. The problem here is that the texture size will be the same as the viewport/screen size. I guess you can create a new camera to change it to the size you want.

```
GLFrameBuffer.FrameBufferBuilder frameBufferBuilder = new GLFrameBuffer.FrameBufferBuilder(widthofBox, heightofBox);
frameBufferBuilder.addColorTextureAttachment(GL30.GL_RGBA8, GL30.GL_RGBA, GL30.GL_UNSIGNED_BYTE);
frameBuffer = frameBufferBuilder.build();

OrthographicCamera c = new OrthographicCamera(widthofBox, heightofBox);
c.up.set(0, 1, 0);
c.direction.set(0, 0, -1);
c.position.set(widthofBox / 2, heightofBox / 2, 0f);
c.update();
batch.setProjectionMatrix(c.combined);

frameBuffer.begin();
batch.begin();
batch.draw(boxregion...)
batch.end();
frameBuffer.end();

Texture texturefbo = frameBuffer.getColorBufferTexture();
```

Texturefbo will be y-flipped. You can fix this with the texture draw method by setting scaleY to -1, or you can scale scaleY to -1 while drawing on the framebuffer, or change the camera like this:

```
up.set(0, -1, 0);
direction.set(0, 0, 1);
```

to flip the camera on the y axis.

The last thing that comes to my mind is mipmapping this texture. It's also not so hard:

```
texturefbo.bind();
Gdx.gl.glGenerateMipmap(GL20.GL_TEXTURE_2D);
texturefbo.setFilter(Texture.TextureFilter.MipMapLinearLinear,
        Texture.TextureFilter.MipMapLinearLinear);
```

Upvotes: 2 [selected_answer]
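The pixmap route from the accepted answer, expanded into a sketch (variable names follow the question; note this copies pixels on the CPU, and both pixmaps must be disposed):

```java
TextureRegion region = textureAtlas.findRegion("box");

// pull the atlas page's pixels to the CPU
TextureData data = region.getTexture().getTextureData();
if (!data.isPrepared()) {
    data.prepare();
}
Pixmap atlasPixmap = data.consumePixmap();

// copy just the region's rectangle into its own pixmap
Pixmap boxPixmap = new Pixmap(region.getRegionWidth(), region.getRegionHeight(),
        atlasPixmap.getFormat());
boxPixmap.drawPixmap(atlasPixmap, 0, 0,
        region.getRegionX(), region.getRegionY(),
        region.getRegionWidth(), region.getRegionHeight());

// a standalone texture that can be used in a 3D material
Texture boxTexture = new Texture(boxPixmap);
boxPixmap.dispose();
atlasPixmap.dispose();
```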
2018/03/20
545
1,955
<issue_start>username_0: Hi, I'm still learning node and trying something cool with javascript nodejs. Meanwhile I got stuck when passing separate "where" sequelize statements into one. Okay, this is my current code:

```
var periodsParam = {};

periodsParam = {
    delete: 'F',
    tipe: 1,
    variantid: (!ctx.params.id ? ctx.params.id : variants.id)
};

if (ctx.query.country) {
    periodsParam = {
        country: ctx.query.country
    };
}

console.log(periodsParam);
```

From the code above, it always returns `{ country: 'SG' }`, but I want it to return `{ delete: 'F', tipe: 1, variantid: 1, country: 'SG' }`

How can I resolve that? Any help will be appreciated, thank you.

Elad<issue_comment>username_1: The problem was that you were always re-initializing it. You should set it as a property of the existing object. Update from

```
periodsParam = {
    country: ctx.query.country
};
```

to

```
periodsParam.country = ctx.query.country;
```

Upvotes: 2 <issue_comment>username_2: You can also just assign the Object like this:

`periodsParam = Object.assign({}, periodsParam, { country: ctx.query.country });`

Upvotes: 1 <issue_comment>username_3: The problem is, you're using the `=` sign with `periodsParam` 3 times and you end up with `periodsParam` returning only `country`, because of these lines:

```
if (ctx.query.country) {
    periodsParam = {
        country: ctx.query.country
    };
}
```

Instead of assigning a new object to `periodsParam`, use dot notation to add another key-value pair, like this:

```
if (ctx.query && ctx.query.country) { //before accessing .country check if ctx.query is truthy
    periodsParam.country = ctx.query.country;
}
```

As [@Paul](https://stackoverflow.com/users/3756843/paul-rey) suggested, the condition should be `ctx.query && ctx.query.country` - it will prevent a TypeError if `ctx.query` is `undefined`.

Upvotes: 3 [selected_answer]
2018/03/20
865
3,612
<issue_start>username_0: Yes, I read the other questions on the same topic, but they do not cover my issue.

We run two environments: DEV and PROD. The two were synched last week, meaning they ought to contain the same data, run the same SSIS packages, and source the same source data. However, today we had a package on PROD go through its usual steps (3 tables being truncated, and then loaded from OLEDB source to OLEDB destination, one after the other). The package finished without throwing an error, and the first 2 tables contain data, whereas the last one does not. On DEV, everything looks fine.

I went through the package history, and it actually shows it wrote 0 rows:

[![enter image description here](https://i.stack.imgur.com/dAced.png)](https://i.stack.imgur.com/dAced.png)

Yesterday, however, it worked as intended:

[![enter image description here](https://i.stack.imgur.com/XKLMS.png)](https://i.stack.imgur.com/XKLMS.png)

When I manually ran the package, it wrote data. When I click "Preview", it displays data. When I manually run the source query, it consistently returns data, the same number of rows, every time. The SSIS catalog has not been updated (no changes were deployed to PROD between yesterday and today). The source query does not use table variables, but it does use CTEs. I have seen suggestions to add `SET NOCOUNT ON`, and I am willing to accept this could be an explanation. However, those answers seem to indicate the package never writes any data, whereas this package has worked successfully before, and works successfully on DEV.

Does anyone have any explanation as to why 1 package suddenly chose not to write any data, and how I can ensure this won't happen again, to either this package or any of the other packages?<issue_comment>username_1: This can be tricky. Try the following:

1. Under `Integration Service Catalogs -> SSISDB -> project -> (right click)Reports -> Standard Reports -> All executions`, check whether at any point the ETL job lost contact with the warehouse.

2. If you have logging enabled, try to see at what task\_name your package started returning 0:

```
select data_stats_id,
       execution_id,
       package_name,
       task_name,
       source_component_name,
       destination_component_name,
       rows_sent
from ssisdb.catalog.execution_data_statistics
```

3. How are you handling transactions and checkpoints? This is important if you want to know the root cause of this issue. It may happen that a loss of connectivity forced a rollback of any write in the warehouse.

Upvotes: 1 <issue_comment>username_2: As it turns out, the issue was caused by an oversight. Because we run DEV and PROD on the same server (we know, and have recommended the customer to at the very least consider using different instances), we use variables in which we point at the proper environment (set in the environment variables). The query feeding this particular package was updated, and apparently rather than using the variable to switch databases, it was hard-coded (likely as a result of testing, and then forgetting to update the variable).

The loads for DEV and PROD run at the same time, and we suspect that while PROD was ready, DEV was still processing the source tables, and thus 0 rows were returned. We only found this out today because the load again ran fine right until this morning. I was too late to catch it using Profiler, but because it was only this package, I checked, and spotted the hardcoded reference to \_DEV.

Thanks everyone for chiming in.
Upvotes: 1 [selected_answer]
2018/03/20
500
1,916
<issue_start>username_0: I need to perform two operations on the result of JSON responses. So can we have those different operations inside a single JS file? Or do we need to have a mapping like one JS file for one operation? Please help on this<issue_comment>username_1: I don't recommend trying to create complicated JavaScript in Karate; it just leads to maintainability issues. If you really want an object with multiple utility methods on it, write a Java class with static methods, and it will be much easier to maintain / debug.

That said, if you really insist - look at this answer: <https://stackoverflow.com/a/47002604/143475>

But this is what I recommend for most projects. In one "common" feature file, define multiple methods like this:

```
Scenario:
* def now = function(){ return java.lang.System.currentTimeMillis() }
* def uuid = function(){ return java.util.UUID.randomUUID() + '' }
```

You can now call this feature like this:

```
* call read('common.feature')
```

And now all the functions in that feature are available for use:

```
* def time = now()
* def id = uuid()
```

Upvotes: 4 [selected_answer]<issue_comment>username_2: @kmancusi This is how I did a `common.feature` file with my common functions and then the following `my.test.feature` shows how I import that to use it in another feature.

**common.feature**

```
@ignore
Feature:

Scenario:
* def now =
"""
function() {
  return java.lang.System.currentTimeMillis()
}
"""
* def uuid =
"""
function() {
  return java.util.UUID.randomUUID() + ''
}
"""
```

**my.test.feature**

```
Feature: my tests

Background:
* configure logPrettyRequest = true
* configure logPrettyResponse = true
* configure ssl = true

Scenario: basic test
* def util = call read('common.feature')
* def sessionId = util.uuid()
* print sessionId
```

Upvotes: 3
2018/03/20
377
1,434
<issue_start>username_0: I've got a PowerShell script to create a VM from an image in Azure, and in this script I reference a .json file (parameters for the VM, etc.). But if I want to create more than one VM, the names of the VM, VNet, etc. cannot be the same for every execution (they have to be in the same resource group). So my question: how can I insert variables into the .json file to change the name of the VM, etc. for every execution? Or perhaps I have to rethink this?

[![PowerShell-Script](https://i.stack.imgur.com/G7aTp.png)](https://i.stack.imgur.com/G7aTp.png)<issue_comment>username_1: A very basic approach could be something like this:

```
# Grab the file contents
$contents = Get-Content -Path $templateFile

# Update some tokens in the file contents
$contents = $contents.replace("original value", "new value")

# Push the updated contents to a new file
Set-Content -Path $updatedFile -Value $contents
```

Upvotes: 2 <issue_comment>username_2: If you have a value that changes with every deployment, you could also consider using the **-TemplateParameterObject** parameter with the New-AzureRmResourceGroupDeployment cmdlet. That way, you can generate the values in your powershell script without having to output them to a json file first.

For more details, have a look at the [cmdlet specs](https://learn.microsoft.com/en-us/powershell/module/azurerm.resources/new-azurermresourcegroupdeployment?view=azurermps-5.5.0)

Upvotes: 0
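A sketch of the second answer's suggestion (the parameter names are hypothetical and must match the template's own parameters): generate the per-run names in PowerShell and pass them as a hashtable, so the .json file never has to change:

```powershell
$suffix = Get-Date -Format "yyyyMMddHHmmss"

$params = @{
    vmName   = "myvm-$suffix"     # hypothetical template parameter
    vnetName = "myvnet-$suffix"   # hypothetical template parameter
}

New-AzureRmResourceGroupDeployment -ResourceGroupName "MyResourceGroup" `
    -TemplateFile ".\template.json" `
    -TemplateParameterObject $params
```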
2018/03/20
395
1,555
<issue_start>username_0: I am having an issue while binding the image path to img src in Angular 2. I am saving the image on the local disk and saving the path in the database, then retrieving the image path and binding it to the img src tag, but it is not binding.

I tried to bind the local disk url directly to the image tag, but then I get the error "**Not Allowed to load local resources.**". If I bind the local path value to a **SafeUrl variable** then the error is **unsafe:E:\Main Projects\img\20182201521541566536.jpg:1**

Can anyone please give me a solution? Thanks<issue_comment>username_1: You cannot read files from your harddrive as an image source. You need to provide the images in your project, so they are available on localhost.

Upvotes: 0 <issue_comment>username_2: Below works for me:

Step 1) `import { DomSanitizer } from '@angular/platform-browser';`

Step 2) Then in the constructor add the dependency: `public _DomSanitizationService: DomSanitizer`

Step 3) Now bind the property as: `this.ImagePath = this._DomSanitizationService.bypassSecurityTrustUrl(this.FilePath);`

Upvotes: -1 <issue_comment>username_2: I have one more solution for it and it's working for me. We can host our images on IIS and then provide the HTTP path to the SRC of the IMG tag. When you deploy your code to any live server, you have to pass the HTTP path of the images as per the live server settings. Browsers consider URLs which don't have http attached to them as unsafe.

If anyone still finds any issue then please let me know, I'll share my code here.

Upvotes: 0
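Pulling the answers together, a sketch (component, route and URL are hypothetical): the `img` tag can only load files served over `http(s)`, so the image has to be exposed by a web server; `bypassSecurityTrustUrl` is only needed when Angular flags the URL as unsafe:

```typescript
import { Component } from '@angular/core';
import { DomSanitizer, SafeUrl } from '@angular/platform-browser';

@Component({
  selector: 'app-photo',
  template: `<img [src]="imagePath">`
})
export class PhotoComponent {
  imagePath: SafeUrl;

  constructor(private sanitizer: DomSanitizer) {
    // hypothetical URL: the backend serves the file, instead of the
    // browser trying to read E:\... straight from the local disk
    const url = 'http://localhost:8080/images/20182201521541566536.jpg';
    this.imagePath = this.sanitizer.bypassSecurityTrustUrl(url);
  }
}
```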
2018/03/20
706
2,526
<issue_start>username_0: I am trying to have expandable list items in a table with a constant header. For this I am using the `Table` and `ExpansionPanel` components from React material ui 1.0.0-beta.34. However, the table is not aligning well: all the body data comes under one header `TableCell`. Here is my code.

Table component:

```
import React, { Component } from 'react';
import PropTypes from 'prop-types';
import { withStyles } from 'material-ui/styles';
import Table, { TableBody, TableCell, TableHead, TableRow } from 'material-ui/Table';
import Paper from 'material-ui/Paper';
import { styles } from './styles';
import ExpandableTableRow from 'client/components/expandable-table-row';

class GenericTable extends Component {
  render() {
    const { classes, items, headItems } = this.props;
    return (
      {headItems.map((n, i) => { return {n} }) }
      { !items.length && There is no data to display }
      {items.map((item, i) => { return ( ); })}
    );
  }
}

GenericTable.propTypes = {
  items: PropTypes.array.isRequired,
  headItems: PropTypes.array.isRequired,
};

export default withStyles(styles)(GenericTable);
```

ExpandableTableRow component:

```
import React from 'react';
import PropTypes from 'prop-types';
import { withStyles } from 'material-ui/styles';
import ExpansionPanel, {
  ExpansionPanelSummary,
  ExpansionPanelDetails,
} from 'material-ui/ExpansionPanel';
import Typography from 'material-ui/Typography';
import { styles } from './styles';
import { TableRow, TableCell } from 'material-ui/Table';

class ExpandableTableRow extends React.Component {
  render() {
    const { classes, item } = this.props;
    return (
      {item.a} {item.b} {item.c} {item.d} {item.e} {item.f}
      Expansion panel expanded
    );
  }
}

ExpandableTableRow.propTypes = {
};

export default withStyles(styles)(ExpandableTableRow);
```

I am trying to make this work. Any help is appreciated.<issue_comment>username_1: I'm also looking into the same problem. Apparently, material UI has an issue thread regarding expandable table rows. This [link](https://github.com/mui-org/material-ui/issues/4476) might help you.

Edit: React now has docs for Collapsible Table here. <https://material-ui.com/components/tables/#collapsible-table>

Upvotes: 2 <issue_comment>username_2: This is how I solved this problem earlier. I wrapped a `Collapse` transition inside a `TableCell` and set its `hidden` and `in` props. Roughly like this:

```
<TableRow>
  <TableCell colSpan={6} hidden={!this.state.expanded}>
    <Collapse in={this.state.expanded}>
      {/* expanded row content */}
    </Collapse>
  </TableCell>
</TableRow>
```

Upvotes: 5 [selected_answer]
2018/03/20
760
2,614
<issue_start>username_0: I can print the date in this format: Mon, Mar 19, 2018, but I am not sure how to get the day of the week in this format. Please help<issue_comment>username_1: dateFormatter.dateFormat = "EEE, MMM dd, yyyy"

For the day of the week in letters, you use EEEE or EEE, similar to MMM & yyyy for month and year.

Upvotes: 1 <issue_comment>username_2: To get the day for a particular date:

```
let customDateFormatter = DateFormatter()
print(customDateFormatter.weekdaySymbols[Calendar.current.component(.weekday, from: Date())])
```

// "Wednesday"

[source](https://stackoverflow.com/questions/41068860/get-weekday-from-date-swift-3)

Upvotes: 4 <issue_comment>username_3: ```
let dateFormatter = DateFormatter()
// uncomment to enforce the US locale
// dateFormatter.locale = Locale(identifier: "en-US")
dateFormatter.setLocalizedDateFormatFromTemplate("EEE MMM d yyyy")
print(dateFormatter.string(from: Date())) // "Tue, Mar 20, 2018" for en-US locale
```

Note that I am using a template to provide the exact format, therefore the format will be properly localized in every language.

Upvotes: 5 [selected_answer]<issue_comment>username_4: With Swift 4:

```
func timeStamp() -> String {
    let dateFormatter = DateFormatter()
    dateFormatter.locale = Locale(identifier: "en-US")
    dateFormatter.setLocalizedDateFormatFromTemplate("EEE MMM d yyyy")
    return dateFormatter.string(from: Date())
}
```

Use it:

```
let getTimeStamp = timeStamp()
print(getTimeStamp)
```

Upvotes: 2 <issue_comment>username_5: The best way to change your date format is to follow this method:

```
func ChangeDateFormat(date: String, FromFormat: String, ToFormat: String) -> String {
    let dateFormatter1 = DateFormatter()
    dateFormatter1.dateFormat = FromFormat
    let myDate = dateFormatter1.date(from: date)
    dateFormatter1.dateFormat = ToFormat
    if myDate != nil {
        let Date = dateFormatter1.string(from: myDate!)
        return Date
    } else {
        return ""
    }
}
```

and then you can use this method like

```
String(ChangeDateFormat(date: StartDate, FromFormat: "yyyy-MM-dd hh:mm:ss a", ToFormat: "MMM d, yyyy"))
```

You can pass whatever date format you want; in your case it should be

```
String(ChangeDateFormat(date: StartDate, FromFormat: "Pass your date format", ToFormat: "EEE MMM d, yyyy"))
```

Upvotes: 0 <issue_comment>username_6: Rather than needing to spell out a date format, I would simplify it further to:

```
dateFormatter.dateStyle = .full
```

Or if you just want the day:

```
dateFormatter.dateFormat = "EEEE"
```

Upvotes: 2
2018/03/20
1,044
4,017
<issue_start>username_0: How can I return 1 or 0 using COUNT between two dates?

```
CREATE PROCEDURE [dbo].[CheckCharterDate]
    @DateCheck date,
    @DateCheck2 date,
    @charterID int
AS
    SELECT Count(StartDate)
    FROM Booking
    WHERE StartDate >= @DateCheck AND
          EndDate <= @DateCheck2 AND
          CharterID = @charterID
RETURN 0
```

This stored procedure only returns 1 or 0 whenever the user selects exactly the two dates that are in the database, but if they select a date in between, it returns a 0 and says that it's available.<issue_comment>username_1: I think you are looking for overlapping date ranges. The way to test if two ranges overlap is to make sure one starts before the second ends, while the second starts before the first ends. You can see a visualization of this in the [overlap](/questions/tagged/overlap "show questions tagged 'overlap'") tag [info](https://stackoverflow.com/tags/overlap/info).

```
CREATE PROCEDURE [dbo].[CheckCharterDate]
    @DateCheck date,
    @DateCheck2 date,
    @charterID int
AS
    SELECT Count(StartDate)
    FROM Booking
    WHERE StartDate <= @DateCheck2 AND
          EndDate >= @DateCheck AND
          CharterID = @charterID
```

Also, as username_3 noted in his answer, you should use `exists` instead of `count`.

Upvotes: 0 <issue_comment>username_2: Use `Case...when` when you want user defined output

```
CREATE PROCEDURE [dbo].[CheckCharterDate]
    @DateCheck date,
    @DateCheck2 date,
    @charterID int
AS
BEGIN
    SELECT CASE WHEN Count(StartDate)=0 THEN 0 ELSE 1 END AS Count_StartDate
    FROM Booking
    WHERE StartDate <= @DateCheck2 AND
          EndDate >= @DateCheck AND
          CharterID = @charterID
END;
```

**Why did I change the order of the inequalities?**

For overlapping date ranges the basic condition is: if you have to check whether `start_date` and `end_date` overlap the range between `@DateCheck` and `@DateCheck2` (`@DateCheck < @DateCheck2`), then check whether `start_date` is less than `@DateCheck2`, and also whether `end_date` is greater than `@DateCheck`. That's why there should be a change in the inequalities.

Upvotes: 3 [selected_answer]<issue_comment>username_3: Don't use `COUNT()` for this. From a performance perspective, it is suboptimal. Instead, use `EXISTS`:

```
CREATE PROCEDURE [dbo].[CheckCharterDate] (
    @DateCheck date,
    @DateCheck2 date,
    @charterID int
)
AS
BEGIN
    SELECT (CASE WHEN EXISTS (SELECT 1
                              FROM Booking b
                              WHERE b.StartDate <= @DateCheck2 AND
                                    b.EndDate >= @DateCheck AND
                                    b.CharterID = @charterID
                             )
                 THEN 1 ELSE 0
            END) as overlap_flag
END;
```

I don't recommend that you return a value from a stored procedure -- unless that value is a status. Instead, just write a user defined function:

```
CREATE FUNCTION dbo.udf_CheckCharterDate (
    @DateCheck date,
    @DateCheck2 date,
    @charterID int
)
RETURNS INT
BEGIN
    DECLARE @flag int;

    SELECT @flag = (CASE WHEN EXISTS (SELECT 1
                                      FROM Booking b
                                      WHERE b.StartDate <= @DateCheck2 AND
                                            b.EndDate >= @DateCheck AND
                                            b.CharterID = @charterID
                                     )
                         THEN 1 ELSE 0
                    END);

    RETURN @flag;
END;
```

The function can simply be used as any other function. For instance:

```
WHERE dbo.udf_CheckCharterDate( . . . ) = 1
```

Upvotes: 0 <issue_comment>username_4: ```
CREATE PROCEDURE [dbo].[CheckCharterDate]
    @DateCheck date,
    @DateCheck2 date,
    @charterID int
AS
    declare @count int

    select @count = Count(StartDate)
    FROM Booking
    WHERE StartDate <= @DateCheck2 AND
          EndDate >= @DateCheck AND
          CharterID = @charterID

    if (@count > 0)
    begin
        select 1
    end
    else
    begin
        select 0
    end
```

Upvotes: 0
2018/03/20
1,138
4,140
<issue_start>username_0: I am getting an error as

> Too few parameters expected 1

on the below line. Please assist

```
mrs.Open sSQLSting, Conn, 3, 1
```

Complete code:

```
Dim Conn As New adodb.Connection
Dim mrs As New adodb.Recordset
Dim DBPath As String, sconnect As String

DBPath = ThisWorkbook.FullName
sconnect = "Provider=MSDASQL.1;DSN=Excel Files;DBQ=" & DBPath & ";HDR=Yes';"
Conn.Open sconnect

sSQLSting = "SELECT * From [DataSheet$A1:D5325] where [Vertical Name]= '" & Sheets(1).ComboBox1.List(i) & "'"
mrs.Open sSQLSting, Conn, 3, 1

Sheets("Sheet4").Cells(lastrow, 4).Value = mrs.RecordCount

mrs.Close
Conn.Close
```
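A hedged note, since no applicable answer was recorded for this thread: with the Jet/ACE-based Excel drivers, "Too few parameters. Expected 1" usually means one identifier in the SQL (often a misspelled column such as `[Vertical Name]`) cannot be resolved against the sheet's header row, so the engine treats it as an unbound parameter. A minimal diagnostic sketch:

```vb
' Open the range without the WHERE clause and list the column names the
' driver actually sees, then compare them with [Vertical Name]:
mrs.Open "SELECT * FROM [DataSheet$A1:D5325]", Conn, 3, 1
Dim f As Object
For Each f In mrs.Fields
    Debug.Print f.Name
Next f
mrs.Close
```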
2018/03/20
1,183
4,414
<issue_start>username_0: I have 2 classes named Fruits and Balls. For each of them, one of the attributes is Price. I need to make a static method that returns from a mixed array only the element with the smallest Price. I have the objects f1, f2, f3, b1, b2, b3. I've made an ArrayList list like this:

```
{
    Fruit f1 = new Fruit("Apple", "Red", "Summer", 9.5);
    ...
    Ball b1 = new Ball("Red", "Stripes", 10.5);
    ...
    ArrayList list = new ArrayList();
    list.add(f1);
    list.add(b1);
    ...
}
```

Now regarding the method I have to make, I started like this:

```
static void method(ArrayList list) {
    for (int i = 0; i < lista.size(); i++) {
    }
```

I got stuck in the mix of getting elements out of the array and the getters used to find the price for each element. Can you help me? Even if you don't understand what I'm trying to do, could you help me with some tips on mixed arrays of different elements? Thank you!
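A sketch of one common approach (the interface and method names are hypothetical): give `Fruit` and `Ball` a shared interface so a single comparator can scan the mixed list:

```java
import java.util.ArrayList;
import java.util.Comparator;

interface Priced {
    double getPrice();   // Fruit and Ball would both declare "implements Priced"
}

class PriceUtils {
    // returns the element with the smallest price, or null for an empty list
    static Priced cheapest(ArrayList<Priced> list) {
        return list.stream()
                   .min(Comparator.comparingDouble(Priced::getPrice))
                   .orElse(null);
    }
}
```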
2018/03/20
1,719
6,606
<issue_start>username_0: I am making an app that lets the user upload PDF files, which I then save to Firebase. Now I'm trying to display them in my app. I don't want to let the user download the file; I want it displayed directly in the app.

This is how I save the files:

```
private void uploadFile() {
    progressBar.setVisibility(View.VISIBLE);
    StorageReference sRef = mStorageReference.child(Constants.STORAGE_PATH_COURSES + System.currentTimeMillis() + ".pdf");
    sRef.putFile(filepath)
            .addOnSuccessListener(new OnSuccessListener<UploadTask.TaskSnapshot>() {
                @SuppressWarnings("VisibleForTests")
                @Override
                public void onSuccess(UploadTask.TaskSnapshot taskSnapshot) {
                    progressBar.setVisibility(View.GONE);
                    textViewStatus.setText("File Uploaded Successfully");
                    Course upload = new Course(editTextFilename.getText().toString(), taskSnapshot.getDownloadUrl().toString());
                    mDatabaseReference.child(mDatabaseReference.push().getKey()).setValue(upload);
                }
            })
            .addOnFailureListener(new OnFailureListener() {
                @Override
                public void onFailure(@NonNull Exception exception) {
                    Toast.makeText(getApplicationContext(), exception.getMessage(), Toast.LENGTH_LONG).show();
                }
            })
            .addOnProgressListener(new OnProgressListener<UploadTask.TaskSnapshot>() {
                @SuppressWarnings("VisibleForTests")
                @Override
                public void onProgress(UploadTask.TaskSnapshot taskSnapshot) {
                    double progress = (100.0 * taskSnapshot.getBytesTransferred()) / taskSnapshot.getTotalByteCount();
                    textViewStatus.setText((int) progress + "% Uploading...");
                }
            });
}
```

I searched for the possibilities and tried a few, but I could not make them work. I tried to use this, but it displays nothing: <https://github.com/barteksc/AndroidPdfViewer>

```
pdfView = (PDFView) findViewById(R.id.pdfView);
Bundle bundle = getIntent().getExtras();
if (bundle != null) {
    url = getIntent().getStringExtra("url");
}
uri = Uri.parse(url);
pdfView.fromUri(uri).load();
```

If I load the pdf from the assets folder, it works well. I've also tried the webview:

```
WebView webview = (WebView) findViewById(R.id.webview);
webview.getSettings().setJavaScriptEnabled(true);
webview.loadUrl("http://drive.google.com/viewerng/viewer?embedded=true&url=" + url)
```

I'm really new to Android and I'm sure that I'm doing something wrong... I would be really grateful if someone could help me. Thanks!<issue_comment>username_1: The `AndroidPdfViewer` library supports loading a PDF file from local storage, not from a remote url. Since you are trying to load a remote link, it won't show anything. You need to download your pdf file to the app cache and load it from that path.

A `WebView` works fine in this situation because it is loading a remote url in a browser.

You can refer to the following issue [HERE](https://github.com/barteksc/AndroidPdfViewer/issues/423)

Upvotes: 3 [selected_answer]<issue_comment>username_2: I'm not using Firebase but my PDF is stored on a server. I call the server using Retrofit to download the PDF as a Stream. Then I give the Stream to the pdf Viewer.

**To display the PDF**

```
private void displayPDF(InputStream stream) {
    mPdfView.fromStream(stream)
            .defaultPage(0)
            .load();
}
```

**Retrofit Service**

```
@Streaming
@GET
Call<ResponseBody> downloadPDF(@Url String url);
```

**In Retrofit response**

```
displayPDF(responseBody.byteStream());
```

Upvotes: 1 <issue_comment>username_3: I finally succeeded using PDFViewer. Here is my code:

```
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_course);
    pdfView = (PDFView) findViewById(R.id.pdfView);
    Bundle bundle = getIntent().getExtras();
    if (bundle != null) {
        url = getIntent().getStringExtra("url");
    }
    new RetrievePDFStream().execute(url);
}

class RetrievePDFStream extends AsyncTask<String, Void, InputStream> {
    @Override
    protected InputStream doInBackground(String... strings) {
        InputStream inputStream = null;
        try {
            URL urlx = new URL(strings[0]);
            HttpURLConnection urlConnection = (HttpURLConnection) urlx.openConnection();
            if (urlConnection.getResponseCode() == 200) {
                inputStream = new BufferedInputStream(urlConnection.getInputStream());
            }
        } catch (IOException e) {
            return null;
        }
        return inputStream;
    }

    @Override
    protected void onPostExecute(InputStream inputStream) {
        pdfView.fromStream(inputStream).load();
    }
}
```

This really helped: <https://www.youtube.com/watch?v=-Ld1IoOF_uk>

Upvotes: 2 <issue_comment>username_4: This one works, too. This is from the Firebase doc website:

```
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);

    pdfView = (PDFView) findViewById(R.id.pdfView);
    mFirebaseStorage = FirebaseStorage.getInstance();
    mmFirebaseStorageRef = mFirebaseStorage.getReference().child("sources");

    final long ONE_MEGABYTE = 1024 * 1024;
    mmFirebaseStorageRef.child("smpl.pdf").getBytes(ONE_MEGABYTE).addOnSuccessListener(new OnSuccessListener<byte[]>() {
        @Override
        public void onSuccess(byte[] bytes) {
            pdfView.fromBytes(bytes).load();
        }
    }).addOnFailureListener(new OnFailureListener() {
        @Override
        public void onFailure(@NonNull Exception e) {
            Toast.makeText(MainActivity.this, "download unsuccessful", Toast.LENGTH_LONG).show();
        }
    });
}
```

Upvotes: 1 <issue_comment>username_5: After searching and retrying different code, I found the solution for viewing a pdf file without downloading it to the device.

The library for the pdf view is

```
implementation 'com.github.barteksc:android-pdf-viewer:2.8.2'
```

Add the view in XML:

```
<com.github.barteksc.pdfviewer.PDFView
    android:id="@+id/pdfview"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />
```

and this is the java code:

```
StorageReference storageRef;
FirebaseApp app;
FirebaseStorage storage;

storageRef = FirebaseStorage.getInstance().getReference();
app = FirebaseApp.getInstance();
storage = FirebaseStorage.getInstance(app);
storageRef = storage.getReference().child("books/booksPDF/documenttcv.pdf");

storageRef.getStream().addOnSuccessListener(new OnSuccessListener<StreamDownloadTask.TaskSnapshot>() {
    @Override
    public void onSuccess(StreamDownloadTask.TaskSnapshot taskSnapshot) {
        pdfview.fromStream(taskSnapshot.getStream()).load();
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(@NonNull Exception e) {
        Toast.makeText(ReadBookActivity.this, "Fail :" + e.getMessage(), Toast.LENGTH_SHORT).show();
    }
});
```

NOTE: When you save a pdf file programmatically in Firebase storage, you need to keep the name of the file instead of the url, because you retrieve files by using the name, not the url.

Upvotes: 1
2018/03/20
373
1,339
<issue_start>username_0: In my `swift 4` project I have `iOS 9.0` as the `deployment target` and I'm using the `safe area layout guide`. In the `storyboard`, I always give 0 as the value between my main `view` top space and the `safe area` top. When I run the application on `iOS 9` or `iOS 10` I'm getting a white space at the top. How can I remove this white space without disabling the `safe area layout guide`?<issue_comment>username_1: I guess you are testing on iPhone X? You can change the safe area insets this way:

```
var newSafeArea = UIEdgeInsets()
newSafeArea.top -= 20
self.additionalSafeAreaInsets = newSafeArea
```

Upvotes: 1 <issue_comment>username_2: Go to `Storyboard` > `Select the ViewController` > `Attribute Inspector` (3rd tab from the right in the right pane) > Uncheck `Adjust Scroll View insets`.

Upvotes: 2 <issue_comment>username_3: I can't explain why, but in the storyboard views don't account for the navigation bar height, and therefore you will have an (often 44px) white space. You can remove this either, as username_2 said, by turning off `Adjust Scroll View insets`, or programmatically by setting `edgesForExtendedLayout = []` in your `viewDidLoad()`.

<https://developer.apple.com/documentation/uikit/uiviewcontroller/1621515-edgesforextendedlayout> tells you some more about it.

Upvotes: 0
2018/03/20
602
2,121
<issue_start>username_0: I've just installed redis and sidekiq in my app. I have a job to update a field on the users table when called upon, and a job that sends an email once a week to all users.

Now if I boot up sidekiq with `bundle exec sidekiq`, the job to update the users field fires off and completes, but the email job stays in the enqueued section. But if I boot it up with `bundle exec sidekiq -q workers -q mailers`, which I got from the Sidekiq github page, only the mail jobs get completed and the others stay in the enqueued section.

Is there a command to be able to run both? I've only started to learn about sidekiq and redis yesterday, so sorry if this is a stupid question. I have `active_job.queue_adapter` set to `:sidekiq` in application.rb.

This is how I have my sidekiq worker for the User job set up:

```
class DeactivateUser
  include Sidekiq::Worker

  def perform
    User.active.update_all(active: false)
  end
end
```

Thanks.<issue_comment>username_1: Create `config/sidekiq.yml` with a list of your queues and run `sidekiq -C config/sidekiq.yml`.

```
#config/sidekiq.yml
---
:concurrency: 5
:queues:
  - default
  - mailers
  - orders
```

Upvotes: 2 <issue_comment>username_2: As username_1 configured, follow the same approach to have multiple queues processed by a single sidekiq process. In addition, you have to mention the queue name in the worker file, so that the sidekiq process will pick the jobs for that queue from redis. For ex: (`sidekiq_options queue: :mailers`)

```
class SomeWorker
  include Sidekiq::Worker
  sidekiq_options queue: :some_queue

  def perform
    ... # your work
  end
end
```

If you want to start a process with concurrency 10 listening to several queues, please start it like this:

`bundle exec sidekiq -c 10 -C config/myapp_sidekiq.yml -q mailers -q queue1 -q queue2`

This will listen to a total of 3 queues named `mailers`, `queue1`, `queue2`. This is how you will have to start sidekiq.

For more documentation, please [refer to this](https://github.com/mperham/sidekiq/wiki/Advanced-Options).

Upvotes: 2 [selected_answer]
2018/03/20
369
1,294
<issue_start>username_0: I came across a weird issue in Google Chrome which seems to be new, because I'm pretty sure it worked before (on an older version).

```css
.container {
  width: 400px;
  height: 266px;
  overflow: hidden;
  position: relative;
  border: 2px solid red;
  margin-bottom: 30px;
}

iframe {
  position: absolute;
  width: 480px;
  height: 350px;
  top: 50%;
  left: 50%;
  transform: translateX(-50%) translateY(-50%);
  z-index: -1;
  overflow: hidden;
}
```

```html
```

You see that the first container contains a youtube iframe that is bigger than the container, but with overflow:hidden it shouldn't be visible and should be cut off at the edges. But Google Chrome is not able to do this anymore (all other browsers work fine). The second container just contains a normal page and there the hidden overflow works well, so I guess it's somehow related to video embeds (vimeo embeds = the same issue). It seems like a bug on their side. Do you have any workaround for this?<issue_comment>username_1: The Chrome overflow problem is resolved. Google fixed it

Upvotes: 1 <issue_comment>username_2: I updated my Google Chrome today to version 66.0.3359.117 and the bug was resolved.

[jsfiddle with your example](https://i.stack.imgur.com/X3YfN.png)

Upvotes: 0
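One workaround that is often suggested for this family of clipping bugs — untested against this exact Chrome build, so treat it as a sketch: give the container its own stacking context so the `z-index: -1` iframe cannot escape its clip:

```css
.container {
  /* any non-auto z-index on this positioned element creates a stacking
     context, keeping the negatively z-indexed iframe inside the clip */
  z-index: 0;
}
```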
2018/03/20
631
2,161
<issue_start>username_0: **my array**

```
$data1 = array(
    array(
        'title' => 'My title',
        'name' => '<NAME>',
        'date' => 'My date'
    ),
    array(
        'title' => 'Another title',
        'name' => '<NAME>',
        'date' => 'Another date'
    )
);
```

I want to add one entry `'status' => 1` to every associative array:

```
$data = array(
    array(
        'title' => 'My title',
        'name' => '<NAME>',
        'date' => 'My date',
        'status' => 1
    ),
    array(
        'title' => 'Another title',
        'name' => '<NAME>',
        'date' => 'Another date',
        'status' => 1
    ),
    array(
        'title' => 'second title',
        'name' => '<NAME>',
        'date' => 'second date',
        'status' => 1
    )
);
```<issue_comment>username_1: A simple [foreach()](http://php.net/manual/en/control-structures.foreach.php) will do the job:

```
foreach($data1 as &$data){
    $data['status'] = 1;
}

print_r($data1);
```

Output:- <https://eval.in/975058>

***Reference:-*** [Passing by Reference](http://php.net/manual/en/language.references.pass.php)

Upvotes: 4 [selected_answer]<issue_comment>username_2: It's not exactly clear what you're trying to do, but if you want to add the `status` flag to each of your items just try

```
foreach($data1 as &$item) {
    $item['status'] = 1;
}
```

This adds to every `$item` in `$data1` the new associative key `status` with value `1`

Upvotes: 2 <issue_comment>username_3: The "Passing by reference" solution above is probably the most elegant one, but if you don't want to modify your existing array, you can do it like this:

```
foreach ( $data1 as $value ) {
    $value["status"] = 1;
    $data[] = $value;
}

var_dump ( $data );
```

Upvotes: 1 <issue_comment>username_4: ```
<?php
foreach($data as $key => $val){
    $data[$key]['status'] = 1;
}
echo "<pre>";
print_r($data);
?>
```

You can add the status key to the existing array.

Upvotes: 1
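For completeness, a reference-free alternative sketch using `array_map`, which leaves `$data1` untouched and returns a new array:

```php
<?php
$data = array_map(function ($row) {
    $row['status'] = 1;   // $row is a copy, so $data1 is not modified
    return $row;
}, $data1);
```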
2018/03/20
735
2,306
<issue_start>username_0: First I will explain my situation:

I added the react-native-multiple-select-list package to my react-native project by running 'yarn add react-native-multiple-select-list':

```
"dependencies": {
    "react-native-multiple-select-list": "^1.0.4"
}
```

The package react-native-multiple-select-list@1.0.4 has a dependency on the react-native-vector-icons package:

```
"dependencies": {
    "react-native-vector-icons": "^3.0.0"
},
```

The problem is that react-native-vector-icons@^3.0.0 has an unmet dependency with my react@16.

I found on github that in the new version (1.0.5) the issue was fixed and the version of react-native-vector-icons changed to 4.5, which is compatible with react@16. But this version (1.0.5) is not published to npm yet.

So my questions are:

* Is there a way to use the latest github version?
* I tried to change the version of "react-native-vector-icons" to 4.5 manually and then run `yarn install` from inside the package. It works, but each time I run `yarn add` (no matter which package) the version gets changed back to 3. Is there a way to keep it on 4.5?

Thanks

Elad
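One possible route (a sketch — the `OWNER` placeholder must be replaced with the package's actual GitHub repository, and selective dependency resolutions require Yarn 1.0+): install the unpublished fix straight from GitHub, and/or pin the nested react-native-vector-icons version in package.json:

```json
{
  "dependencies": {
    "react-native-multiple-select-list": "https://github.com/OWNER/react-native-multiple-select-list.git"
  },
  "resolutions": {
    "react-native-multiple-select-list/react-native-vector-icons": "^4.5.0"
  }
}
```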
2018/03/20
386
1,123
<issue_start>username_0: I am new to excel-python. I wanted to export values to excel from python. I have simple code as below.

```
import xlwt

book = xlwt.Workbook (encoding = "utf-8")
sheet1 = book.add_sheet ("sheet 1")
sheet1.write(0,0,"Display")

x = 1
m = 1

for x in range (1,9):
    sheet1.write (m,0,x)
    print (x)
    x = x+1
    for m in range (1,9):
        m = m +1

book.save("trial.xls")
```

after running this code I am getting errors like:

> Exception: attempt to overwrite cell: sheetname= u'sheet 1' rowx=9 colx = 0

and print (x) is printing the values of x till 2.

Can someone correct me? Thank you in advance.<issue_comment>username_1: You don't need the second for loop, because the first one will loop until the range of 9 ends. Am I right?

```
for x in range (1,9):
    sheet1.write (m,0,x)
    print (x)
    x = x+1
    m = m +1
```

Upvotes: 1 [selected_answer]<issue_comment>username_2: When you create a sheet, you need to explicitly allow overwriting (it's disabled by default) as below:

```
sheet1 = book.add_sheet ("sheet 1",cell_overwrite_ok=True)
```

Upvotes: 1
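A runnable version of the first answer's idea, sketched without the manual counters — `range()` already advances `x`, so each iteration writes to its own row and nothing is overwritten:

```python
import xlwt

book = xlwt.Workbook(encoding="utf-8")
sheet1 = book.add_sheet("sheet 1")
sheet1.write(0, 0, "Display")

for x in range(1, 9):
    sheet1.write(x, 0, x)   # row x, column 0
    print(x)

book.save("trial.xls")
```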
2018/03/20
695
2,359
<issue_start>username_0: I am using WP Job Manager with Woo Subscriptions. Now:

1. Initially, I selected a package (Woo Subscription)
2. Then I added all the details.
3. But did not submit it.
4. Came back to the site, so to buy again I need to select a package. So I selected the package, filled in the details and went to the payment page.
5. Now both packages are present in my cart (i.e. the one I selected without buying the first time, and the recent one)
6. How can this be fixed so that the latest selected package is in the cart and the earlier one is deleted as soon as the latest one is selected?

I tried this [Woocommerce Delete all products from cart and add current product to cart](https://stackoverflow.com/questions/21181911/woocommerce-delete-all-products-from-cart-and-add-current-product-to-cart) but it did not help.<issue_comment>username_1: Updated *(with 2 different alternatives)*:

You should try the following:

```
add_filter( 'woocommerce_add_to_cart_validation', 'remove_cart_item_before_add_to_cart', 20, 3 );
function remove_cart_item_before_add_to_cart( $passed, $product_id, $quantity ) {
    if( ! WC()->cart->is_empty() )
        WC()->cart->empty_cart();

    return $passed;
}
```

*Code goes in functions.php file of your active child theme (or theme)*. **Tested and works** with both ajax-add-to-cart and normal add-to-cart…

---

Or you can use this different one that will keep in cart the last added item:

```
// Keep only last cart item
add_action( 'woocommerce_before_calculate_totals', 'keep_only_last_cart_item', 30, 1 );
function keep_only_last_cart_item( $cart ) {
    if ( is_admin() && ! defined( 'DOING_AJAX' ) )
        return;

    if ( did_action( 'woocommerce_before_calculate_totals' ) >= 2 )
        return;

    $cart_items = $cart->get_cart();

    if( count( $cart_items ) > 1 ){
        $cart_item_keys = array_keys( $cart_items );
        $cart->remove_cart_item( reset($cart_item_keys) );
    }
}
```

*Code goes in functions.php file of your active child theme (or theme)*. **Tested and works** Upvotes: 6 [selected_answer]<issue_comment>username_2: Go to your site database and look for the table 'wp_woocommerce_sessions' and check whether its 'session_id' column contains 0 values. If so:

1. delete all these 0-entry records
2. make the 'session_id' column primary.
3. make the 'session_id' column auto increment.

Upvotes: -1
2018/03/20
572
2,215
<issue_start>username_0: I am trying to store the result of my read/write stored procedure in a local temporary table. After creating the temp table I am writing:

```
INSERT INTO #TMP call SPName;
```

to store the result in the temp table, but it says **feature not supported**. Is there any way to store the result of a stored procedure in a temp table? I don't want to alter the definition of the already built SP, but I do want to store the result in a temp table to join with another set of statements to get my final result.<issue_comment>username_1: Have you tried to use an output table type as one of your stored procedure parameters? And then use that out parameter in your stored procedure to return the result? This is the typical approach used in many examples in the documentation.

```
DROP TYPE TT_MYTYPE;
CREATE TYPE TT_MYTYPE AS TABLE(A VARCHAR (100), B INT);

DROP PROCEDURE MYPROC;
CREATE PROCEDURE MYPROC (OUT OUTPUT_TABLE TT_MYTYPE )
LANGUAGE SQLSCRIPT
SQL SECURITY INVOKER AS
BEGIN
  OUTPUT_TABLE = SELECT 'Test' as A, 3 as B FROM DUMMY;
END;

-- output table is created dynamically and in the overview you can get the table name
CALL MYPROC( ? ) with overview;

-- pass an output parameter table to receive the result
DROP TABLE MYOUTPUT;
CREATE TABLE MYOUTPUT LIKE TT_MYTYPE;
CALL MYPROC( MYOUTPUT ) with overview;

SELECT * FROM MYOUTPUT;
```

Hope this helps. Upvotes: 2 <issue_comment>username_2: You can use the **WITH RESULT VIEW** extension of the procedure definition. For example, if I use @Abdel's code

```
CREATE PROCEDURE MYPROC (OUT OUTPUT_TABLE TT_MYTYPE )
LANGUAGE SQLSCRIPT
READS SQL DATA
WITH RESULT VIEW myView AS
BEGIN
  OUTPUT_TABLE = SELECT 'Test' as A, 3 as B FROM DUMMY ;
END;
```

Then you can simply execute a SELECT statement on myView as follows

```
SELECT * FROM myView ;
```

For more details on how you can use [With Result View to select data from a HANA procedure](http://www.kodyaz.com/sap-abap/select-from-procedure-on-hana-database-using-sqlscript.aspx), you can refer to the given tutorial.

If your procedure accepts input parameters, the view created by "with result view" can be queried just like a parametric view Upvotes: 1
2018/03/20
1,521
4,544
<issue_start>username_0: I have a pandas dataframe:

[![SamplePandas](https://i.stack.imgur.com/HNXlY.png)](https://i.stack.imgur.com/HNXlY.png)

It has around 3m rows. There are 3 kinds of `age_units`: Y, D, W for Years, Days & Weeks. Any individual over 1 year old has an age unit of Y, and the first grouping I want is <2y old, so all I have to test for in Age_units is Y...

I want to create a new column `AgeRange` and populate it with the following ranges:

* <2
* 2 - 18
* 18 - 35
* 35 - 65
* 65+

so I wrote a function

```py
def agerange(values):
    for i in values:
        if complete.Age_units == 'Y':
            if complete.Age > 1 AND < 18 return '2-18'
            elif complete.Age > 17 AND < 35 return '18-35'
            elif complete.Age > 34 AND < 65 return '35-65'
            elif complete.Age > 64 return '65+'
        else return '< 2'
```

I thought that if I passed in the dataframe as a whole, I would get back what I needed, and then I could create the column I wanted, something like this:

```py
agedetails['age_range'] = ageRange(agedetails)
```

BUT when I try to run the first code to create the function I get:

```none
File "", line 4
if complete.Age > 1 AND complete.Age < 18 return '2-18'
                      ^
SyntaxError: invalid syntax
```

Clearly it is not accepting the AND, but I thought I heard in class that I could use AND like this? I must be mistaken, but then what would be the right way to do this?

After getting that error, I'm not even sure the method of passing in a dataframe will work without an error either. I am guessing it probably won't. In which case, how would I make that work as well?

I am looking to learn the best method, but part of the best method for me is keeping it simple, even if that means doing things in a couple of steps...<issue_comment>username_1: With Pandas, you should avoid row-wise operations, as these usually involve an inefficient Python-level loop. Here are a couple of alternatives.

### Pandas: [`pd.cut`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.cut.html)

As @JonClements suggests, you can use `pd.cut` for this, the benefit here being that your new column becomes a [Categorical](https://pandas.pydata.org/pandas-docs/stable/categorical.html).

You only need to define your boundaries (including `np.inf`) and category names, then apply `pd.cut` to the desired numeric column.

```
bins = [0, 2, 18, 35, 65, np.inf]
names = ['<2', '2-18', '18-35', '35-65', '65+']

df['AgeRange'] = pd.cut(df['Age'], bins, labels=names)

print(df.dtypes)

# Age           int64
# Age_units    object
# AgeRange   category
# dtype: object
```

### NumPy: [`np.digitize`](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.digitize.html)

`np.digitize` provides another clean solution. The idea is to define your boundaries and names, create a dictionary, then apply `np.digitize` to your Age column. Finally, use your dictionary to map your category names. Note that for boundary cases the lower bound is used for mapping to a bin.

```
import pandas as pd, numpy as np

df = pd.DataFrame({'Age': [99, 53, 71, 84, 84],
                   'Age_units': ['Y', 'Y', 'Y', 'Y', 'Y']})

bins = [0, 2, 18, 35, 65]
names = ['<2', '2-18', '18-35', '35-65', '65+']

d = dict(enumerate(names, 1))

df['AgeRange'] = np.vectorize(d.get)(np.digitize(df['Age'], bins))
```

### Result

```
   Age Age_units AgeRange
0   99         Y      65+
1   53         Y    35-65
2   71         Y      65+
3   84         Y      65+
4   84         Y      65+
```

Upvotes: 7 [selected_answer]<issue_comment>username_2: This task can also be done using numpy methods. In particular, `numpy.select` can be used here to convert the numeric data into categorical data.
It is very similar to the if-else ladder in the OP; only the conditions are in one list and the return values are in another list. ```py import numpy as np conds = [df['Age']<2, df['Age'].between(2, 18), df['Age'].between(19, 35), df['Age'].between(36, 65)] names = ['<2', '2-18', '18-35', '35-65', '65+'] df['AgeRange'] = np.select(conds, names[:-1], names[-1]) ``` Another method is `numpy.searchsorted`. In fact, under the hood, `pd.cut` implements this method as well. The basic idea is to find where each age would be inserted in `bins` to preserve order (which is essentially what binning is) and select the corresponding label from `names`. ```py bins = [0, 2, 18, 35, 65, np.inf] names = np.array(['<2', '2-18', '18-35', '35-65', '65+']) df['AgeRange'] = names[np.searchsorted(bins, df['Age'])-1] ``` Upvotes: 0
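One loose end from the question: rows whose `Age_units` is D or W (days or weeks) always describe individuals under one year old, so they belong in the '<2' bucket regardless of the `Age` number. A minimal sketch of how that could be combined with the accepted `pd.cut` approach, assuming `df` is the dataframe from the question with `Age` and `Age_units` columns:

```py
import numpy as np
import pandas as pd

bins = [0, 2, 18, 35, 65, np.inf]
names = ['<2', '2-18', '18-35', '35-65', '65+']

df['AgeRange'] = pd.cut(df['Age'], bins, labels=names)

# Ages measured in days or weeks are always under one year,
# so force those rows into the first bucket. '<2' is already one
# of the Categorical's labels, so plain .loc assignment works.
df.loc[df['Age_units'] != 'Y', 'AgeRange'] = '<2'
```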
2018/03/20
608
1,936
<issue_start>username_0: I want to make an online appointment feature in my Laravel project. For that I have written a mail function for the appointment. This mail function is working on my XAMPP server... but not on the live server.

I got this error while trying to send mail:

```
GET http://sencare.com.bd/sencare.com.bd/confirm_appointment?patient_name=test&patient_number=01654&patient_email=rahmanarafat13%40gmail.com&patient_age=8&patient_gender=1&service_id=6&schedule_time_id=1&date=+2018-03-27 500 (Internal Server Error)
send @ jquery-2.1.4.min.js:4
ajax @ jquery-2.1.4.min.js:4
n.(anonymous function) @ jquery-2.1.4.min.js:4
(anonymous) @ (index):2029
dispatch @ jquery-2.1.4.min.js:3
r.handle @ jquery-2.1.4.min.js:3
```

Mail function code inside the controller:

```
public function sendMail($emailDataArray)
{
    \Mail::send('frontend.mail.demo', $emailDataArray, function($message) use ($emailDataArray) {
        $message->to($emailDataArray['patient_email'], 'Test')->subject('APPOINTMENT');
    });
}
```

Here `$emailDataArray` is an array with the data of from, to and others. Is there any mistake?<issue_comment>username_1: You are getting the error `'Connection could not be established with host smtp.gmail.com'`, which can be seen if you open the Network tab and look at the response error when you try to book an appointment.

This is most likely due to not setting up your mail service properly. Make sure you have all the correct `.env` variables and details in `config/mail.php` set <https://laravel.com/docs/5.5/mail> Upvotes: 1 <issue_comment>username_2: Just put the SMTP details in the `.env` file, then try once again.

```
MAIL_DRIVER=smtp
MAIL_HOST=mail.mailhost.com
MAIL_PORT=25
MAIL_USERNAME=
MAIL_PASSWORD=
MAIL_ENCRYPTION=null
```

Upvotes: 0 <issue_comment>username_3: Solved this problem. Just change the mail driver from `smtp` to `sendmail` (in `config/mail.php`, or via `MAIL_DRIVER` in `.env`):

```
'driver' => 'sendmail',
```

Upvotes: 0
2018/03/20
898
3,095
<issue_start>username_0: I wonder if you can help me slim down my code. I've built a bespoke *share* include for a client site. It works fine, but I'm sure one of you clever lot can help me make it less bulky.

I have three share URLs: `$share_url_facebook`, `$share_url_twitter` and `$share_url_linkedin`. Each URL is different in its structure and is fairly complex. I then have the following code:

```
echo '<a href="' . $share_url_facebook . '" title="Share this on facebook">';
echo '<i class="icon-facebook"></i>';
echo '</a>';

echo '<a href="' . $share_url_twitter . '" title="Share this on twitter">';
echo '<i class="icon-twitter"></i>';
echo '</a>';

echo '<a href="' . $share_url_linkedin . '" title="Share this on LinkedIn">';
echo '<i class="icon-linkedin"></i>';
echo '</a>';
```

Is there a way for me to create some sort of loop to prevent the repetition?

---

Based on the accepted solution by @JasonK, here is my final (simplified) tested code:

```
$share = [
    'facebook' => [
        'title' => 'Facebook',
        'url'   => 'https://www.facebook.com/'
    ],
    'twitter' => [
        'title' => 'Twitter',
        'url'   => 'https://twitter.com'
    ],
    'linkedin' => [
        'title' => 'LinkedIn',
        'url'   => 'https://linkedin.com'
    ]
];

foreach ($share as $key => $details) {
    echo '<a href="' . $details['url'] . '" title="Share this on ' . $details['title'] . '">';
    echo '<i class="icon-' . $key . '"></i>';
    echo '</a>';
}
```<issue_comment>username_1: You can use an array to store the "config". The key is the name of the service, the value is the URL or a variable with the URL. After that, you can iterate over this array using foreach. In the foreach you can use echo or another function to output the string, or assign the string to a variable. I used sprintf to format the whole string without concatenation and to assign it to a variable <http://php.net/manual/en/function.sprintf.php>

```
$services = [
    'facebook' => $share_url_facebook,
    'twitter'  => $share_url_twitter
];

$output = '';
foreach ($services as $serviceName => $url) {
    $output .= sprintf(
        '<a href="%s" title="Share this on %s"><i class="icon-%s"></i></a>',
        $url,
        ucfirst($serviceName),
        $serviceName
    );
}
```

Upvotes: 1 <issue_comment>username_2: As you already suggested; create an array and iterate over it.

```
$social = [
    'facebook' => [
        'title' => 'Facebook',
        'url'   => 'https://facebook.com'
    ],
    'twitter' => [
        'title' => 'Twitter',
        'url'   => 'https://twitter.com'
    ],
    'linkedin' => [
        'title' => 'LinkedIn',
        'url'   => 'https://linkedin.com'
    ]
];

foreach ($social as $key => $details) {
    echo 'Key ' . $key;
    echo 'Title ' . $details['title'];
    echo 'URL ' . $details['url'];
}
```

Upvotes: 3 [selected_answer]<issue_comment>username_3: There is an even shorter solution to this, which uses PHP's dynamic variables and does not force you to create an array with all the details. This way you can keep your old variables. This is especially useful if they come from a file you didn't write yourself and therefore can't/don't want to change

```
$social = [
    'facebook',
    'twitter',
    'linkedin'
];

foreach ($social as $site) {
    print '<a href="' . ${'share_url_' . $site} . '" title="Share this on ' . $site . '">';
    print '<i class="icon-' . $site . '"></i>';
    print '</a>';
}
```

Upvotes: 1
2018/03/20
361
1,223
<issue_start>username_0: I ran into a few examples where people share code for calculating the difference between two days. E.g.

```
$now = new DateTime();
$itemDate->diff($now)->format("%r%a")
```

But almost always these kinds of posts don't really have an explanation of what the format parameters are about. I'm okay with the regular Date format parameters but (as in this case) not sure about e.g. %r.<issue_comment>username_1: r - Sign "-" when the difference is negative, empty when positive

a - Total number of days as a result of a DateTime::diff() or (unknown) otherwise.

As an example,

```
<?php
$now = new DateTime();
$d = new DateTime('2019-01-01T15:03:01.012345Z');
$x = $d->diff($now)->format("%r%a");
echo $x;
?>
```

Output: string(4) "-287" Upvotes: 2 <issue_comment>username_2: When doing a difference between `DateTimeInterface` objects, a `DateInterval` object will be returned. You don't have DateTimes anymore, you have an interval, and intervals are formatted differently than DateTime objects.

The format is explained here: <http://php.net/manual/en/dateinterval.format.php>

[![enter image description here](https://i.stack.imgur.com/AHmr8.png)](https://i.stack.imgur.com/AHmr8.png) Upvotes: 4 [selected_answer]
2018/03/20
912
3,095
<issue_start>username_0: I have a template txt file. This template needs to be written out as 10 new files, where I can then make amendments to each file based on certain conditions (not relevant to the question).

I read my template file as follows:

```
with open('template.txt', 'r') as template_file:
    file_lines = template_file.readlines()
    file_lines = [line.rstrip('\n') for line in file_lines]
    for i in range(10):
        new_file = open('output_%s' % i, 'w')
        new_file.write(file_lines)
        new_file.close()
```

It won't work because I cannot write a list to each file, it must be a string, but I don't know how to get every element from that list written to the same file 10 times... Each time I try it a different way, I end up getting each line in a different file rather than all lines in all files. Something is wrong in my logic that I cannot work out.

Another way I can do it is:

```
template_file = open('template.txt', 'r')
template_lines = template_file.read()

for i in range(10):
    new_files = open('output_%s' % i, 'w')
    new_files.write(template_lines)
```

But I want to be able to amend particular lines, which makes it more convenient to write into each new file line by line (via readlines())<issue_comment>username_1: Try indenting the `new_file.close()` so it's saved after every for loop Upvotes: 0 <issue_comment>username_2: You can use the Python method `writelines()` on the file object. Something like:

```
with open('template.txt', 'r') as template_file:
    file_lines = template_file.readlines()
    file_lines = [line.rstrip('\n') for line in file_lines]
    for i in range(10):
        with open('output_%s' % i, 'w') as new_file:
            new_file.writelines(file_lines)
```

Note that `readlines()` keeps the trailing newlines and `writelines()` does not add any, so if you `rstrip('\n')` the lines, you will need to add the newlines back before writing. Upvotes: 2 <issue_comment>username_3: I don't fully understand what you want to achieve. Anyway, here is a code snippet of how you can write the same line to every one of the 10 files:

```
with open('template.txt', 'r') as template_file:
    file_lines = template_file.readlines()
    file_lines = [line.rstrip('\n') for line in file_lines]
    for line in file_lines:
        for i in range(10):
            new_file = open('output_%s' % i, 'a')  # appending content
            new_file.write(line)
            new_file.close()
```

or even this, if you prefer:

```
with open('template.txt', 'r') as template_file:
    file_lines = template_file.readlines()
    file_lines = [line.rstrip('\n') for line in file_lines]
    for i in range(10):
        new_file = open('output_%s' % i, 'w')
        for line in file_lines:
            new_file.write(line)
        new_file.close()
```

Upvotes: 0 <issue_comment>username_4: The `writelines` method did the job.

```
with open(TRANSFER_SKED, 'r') as template_file:
    file_lines = template_file.readlines()
    # file_lines = [i.rstrip('\n') for i in file_lines]
    for i in range(10):
        with open('output_%s.txt' % i, 'w') as new_file:
            new_file.writelines(file_lines)
```

Upvotes: 0
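To tie this back to the original goal of per-file amendments before writing, here is a minimal sketch. It assumes template.txt is non-empty, and the "tag the first line" amendment is only a placeholder for whatever real condition applies:

```
with open('template.txt', 'r') as template_file:
    file_lines = template_file.readlines()  # lines keep their trailing '\n'

for i in range(10):
    amended = list(file_lines)  # copy, so the template lines stay untouched
    # Placeholder amendment: tag the first line of each output file.
    amended[0] = amended[0].rstrip('\n') + ' (copy %s)\n' % i
    with open('output_%s.txt' % i, 'w') as new_file:
        new_file.writelines(amended)
```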
2018/03/20
736
2,450
<issue_start>username_0: I want to create a nested JSON format which looks something like this for my `Event` model.

```
"event" {
    "quiz": {
        "name": "",
        "desc": "",
        "events"[
            {
                "name": "general quiz",
                "desc": ""
            }]
    "dance":{
        "name":"",
        "desc":"",
        "events"[{
            "name":"solo dance",
            "desc":""
        }]
}
```

essentially, i want my `events` to be categorized into actual events under the name `quiz`, `dance`, etc, and each of them will have separate events like solo,group dance etc. Any tips on how I can do this (with or without jbuilder)
2018/03/20
581
1,819
<issue_start>username_0: I want to redirect to `https://` on my website and remove any `www.` subdomain in the URL using htaccess on my WordPress site, because my security certificate doesn't cover the `www` subdomain.

This is almost working:

```
RewriteEngine On
RewriteBase /
RewriteCond %{HTTP_HOST} ^www\.
RewriteCond http%1://%{HTTP_HOST} ^(https?://)(www\.)?(.+)$
RewriteRule ^ https://%3%{REQUEST_URI} [R=301,L]

# BEGIN WordPress
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
# END WordPress
```

This redirects to `https://` and removes `www.` from `http://` URLs. However, it doesn't remove the subdomain if the `https://` is already there, so e.g. `http://www.example.com` is changed to `https://example.com`, but `https://www.example.com` remains unchanged. What am I doing wrong?<issue_comment>username_1: You could exclude the main domain to match all subdomains + remove www + force https:

```
RewriteCond %{HTTP_HOST} !^(www\.)?maindomain\.com$
RewriteCond %{HTTPS} off [OR]
RewriteCond %{HTTP_HOST} ^www\.
RewriteCond %{HTTP_HOST} ^(www\.)?(.+)$
RewriteRule ^ https://%2%{REQUEST_URI} [L,R=301]
```

Then, if you want to force the main domain only, use this:

```
RewriteCond %{HTTP_HOST} ^(www\.)?maindomain\.com$
RewriteCond %{HTTPS} off
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
```

But the main domain will be forced into https as it is; if you want to remove or add www, let me know

**NOTE:** Clear the browser cache, then test Upvotes: 0 <issue_comment>username_2: It should be as simple as this:

```
RewriteEngine On

RewriteCond %{HTTPS} !on [OR]
RewriteCond %{HTTP_HOST} !^example.com
RewriteRule ^ https://example.com%{REQUEST_URI} [R=301,L,QSA]
```

Upvotes: 3 [selected_answer]
2018/03/20
409
1,251
<issue_start>username_0: I have a library which has a structure similar to this

```
class Foo {
    bar(someArgument){
        const baz = {"foobarbaz" : someArgument}
        Object.assign(this, baz)
    }
}
```

intended usage is similar to this

```
var foo = new Foo
foo.bar(myArgument)
```

How can I write a facade around this?
2018/03/20
1,321
3,995
<issue_start>username_0: My docker file looks like this:

```
FROM openjdk:9
VOLUME /tmp
ADD target/myjar-1.0-SNAPSHOT.jar app.jar
ENTRYPOINT [“java”,”-jar”,”/app.jar”]
```

When I run `docker build -t myjar` it builds fine. When I run `docker run image` I get this error:

```
/bin/sh: 1: [“java”,”-jar”,”/app.jar”]: not found
```

I heard this could be a "relative path" issue? I'm not sure how to fix it or where the jar should even reside. I need help debugging this.<issue_comment>username_1: You can change the `ADD` instruction to an absolute path:

```
ADD target/myjar-1.0-SNAPSHOT.jar /app.jar
```

Upvotes: 0 <issue_comment>username_2: Ensure that you `ADD` your jar at the root using:

```
ADD target/myjar-1.0-SNAPSHOT.jar /app.jar
```

Additionally, if you don't override the command when you start your image, use `CMD` ([documentation](https://docs.docker.com/engine/reference/builder/#cmd)) instead of `ENTRYPOINT` (and also try to use more "normal" quotes instead of `”`):

```
CMD ["java", "-jar", "/app.jar"]
```

**EDIT:** Are you sure you're using double quotes?

**EDIT 2:** Try without brackets:

```
CMD java -jar /app.jar
```

Upvotes: 2 <issue_comment>username_3: I have solved this using the following command:

```
CMD exec java -jar "$FINALNAME"
```

More detail [here](https://github.com/docker-library/openjdk/issues/94) Upvotes: 0 <issue_comment>username_4: This error occurs only in Windows 10. Use the below command:

```
$> docker container commit --change='CMD java -jar /tmp/app-name.jar' /app-name:
```

Upvotes: 2 <issue_comment>username_5: Please do remember, a Docker container's internals are a Linux (or similar kind of) environment. While running the below command on the Windows command prompt (C:/>), it is missing a bash shell:

```
docker container commit --change='CMD ["java","-jar","/tmp/hello-world-rest-api.jar"]' boring_archimedes advanceinfo/hello-world-rest-api:manual2
```

So, we are getting the below error:

```
/bin/sh: [java,-jar,/tmp/hello-world-rest-api.jar]: not found
```

Please use a command prompt on Windows which supports a bash shell, for example the Git Bash prompt ($).

Note: don't change the above command, only change the command prompt, it will work 1000% Upvotes: 2 <issue_comment>username_6: I was getting the below error:

```
docker : /bin/sh: [java,-jar,/tmp/hello-world-rest-api.jar]: not found
```

The below commit resolved my issue on Windows 10:

```
docker container commit --change='CMD java -jar /tmp/hello-world-rest-api.jar' kind_hermann in28min/hello-world-rest-api:singraul-3
```

For a Linux machine:

```
docker container commit --change='CMD ["java","-jar","/tmp/hello-world-rest-api.jar"]' kind_hermann in28min/hello-world-rest-api:singraul-2
```

Upvotes: 0 <issue_comment>username_7: To add to [username_2's answer](https://stackoverflow.com/a/49383081/10907864). I experienced a similar issue when trying to deploy a Java application:

This was my `Dockerfile`:

```
FROM openjdk:8-jre
VOLUME /tmp
WORKDIR /app
COPY payment-collection.jks .
RUN cat /app/payment.jks
RUN keytool -list -v -keystore /app/payment.jks -storepass my-name.
RUN mkdir cert
COPY payment.jks /app/cert
COPY /target/payment-1.0.1-SNAPSHOT.jar /app
ENTRYPOINT [\"java\",\"-jar\",\"/app/${prod_jar_name}.jar\"]
EXPOSE 443
```

But when I deploy the docker app, I get the error:

> /bin/sh: 1: ["java","-jar","/app/payment-1.0.1-SNAPSHOT.jar"]: not found

**Here's how I fixed it**:

I simply changed from this:

```
ENTRYPOINT [\"java\",\"-jar\",\"/app/payment-1.0.1-SNAPSHOT.jar\"]
```

To this:

```
ENTRYPOINT ["java", "-jar", "/app/payment-1.0.1-SNAPSHOT.jar"]
```

So my new `Dockerfile` looked like this after that:

```
FROM openjdk:8-jre
VOLUME /tmp
WORKDIR /app
COPY payment-collection.jks .
RUN cat /app/payment.jks
RUN keytool -list -v -keystore /app/payment.jks -storepass my-name.
RUN mkdir cert
COPY payment.jks /app/cert
COPY /target/payment-1.0.1-SNAPSHOT.jar /app
ENTRYPOINT ["java", "-jar", "/app/payment-1.0.1-SNAPSHOT.jar"]
EXPOSE 443
```

Upvotes: 0
2018/03/20
2,392
9,560
<issue_start>username_0: Are there some special things that must be considered to avoid memory leaks in [Mule Applications](https://www.mulesoft.com/de/resources/esb/what-mule-esb)? How can we avoid memory leaks in Mule Applications? For example; do we actually have to remove flow variables? What must be done explicitly by the developers of the Mule Applications, and what is done (automatically) by the [Mule Runtime](https://www.mulesoft.com/de/platform/mule) and the [JVM GC](https://stackoverflow.com/questions/3798424/what-is-the-garbage-collector-in-java)?<issue_comment>username_1: **General Recommendations**

* Session Variables

For applications with many endpoints, prefer fewer and smaller session variables over many or large ones. The session scope is serialized and deserialized every time a message crosses an endpoint, even a VM endpoint. So if an application has a lot of endpoints, it will involve many serializations/deserializations. Using fewer and smaller session variables helps minimize this overhead.

* Payload format

When it comes to performance, not all formats are equal. Some payload formats allow faster access to data than others. Bean payloads tend to be the fastest for Mule applications. So if it is a viable option given other considerations, create payloads in Java objects.

* Data Extraction

Mule Expression Language (MEL) can be used to extract data from messages. In terms of performance, allowing MEL to extract data can be preferable to using a scripting language. Scripting languages are dynamically typed. Some are even interpreted at runtime. Those factors can generate overhead that may degrade performance.

* Flow References

Flow references are a pleasantly direct way to enable flow communication within an application. Flow references are preferred over VM endpoints for communication between flows. Flow references inject messages into the target flow without intermediate steps. Although the VM connector is an in-memory protocol, it emulates transport semantics that serialize and deserialize parts of messages. The phenomenon is especially notable in the Session scope. As such, flow references are superior to VM endpoints for the purpose of inter-flow communication because the former avoid unnecessary overhead generated by serialization and deserialization.

**JVM and GC flags can be set for Mule in wrapper.conf.**

It is easy to get passionate about a particular Java Virtual Machine (JVM) or garbage collection (GC) method. JRockit versus HotSpot, parallel mark-and-sweep (MS) versus G1.

* MuleSoft uses HotSpot, the standard Oracle JVM. HotSpot is well-supported and easy to customize for a variety of purposes. MuleSoft's performance testing emphasizes throughput, and thus parallel GC. HotSpot is also easy to optimize for response time. The tips in the following sections show how to calibrate HotSpot for throughput or response time.
* Designate the initial and maximum heap sizes to be the same value. This can be done by setting MaxMetaspaceSize=MetaspaceSize and MaxNewSize=NewSize. Doing so can avoid the need for the JVM to dynamically allocate additional memory during runtime. The flags are set in wrapper.conf, e.g.

```
wrapper.java.additional.16=-XX:NewSize=1365m
wrapper.java.additional.17=-XX:MaxNewSize=1365m
wrapper.java.additional.18=-XX:MetaspaceSize=256m
wrapper.java.additional.19=-XX:MaxMetaspaceSize=256m
wrapper.java.additional.20=-Xms2048m
wrapper.java.additional.21=-Xmx2048m
```

* There are at least two reasons why such dynamic reallocation can hinder performance. First, the JVM performs a major GC for each heap resize. A full GC stops all threads for a period of time. That holds even when using concurrent mark-and-sweep (CMS). World-stopping should always be minimized, other things being equal. This is especially crucial for applications prioritizing low response times. Dynamic heap resizing creates a second worry when memory is tight. Suppose the JVM increases its heap size during runtime and the system does not have enough free memory pages readily available. As a result, some pages for a kernel-chosen process might be swapped out to disk. The circumstance would incur slowdown due to increased disk IO.

**Garbage Collection**

HotSpot is equipped with three canonical garbage collection (GC) mechanisms. These are serial, parallel, and concurrent mark-and-sweep (CMS). Garbage First (G1) has recently been added to the list. The JVM uses parallel GC by default on machines with 2 or more physical processors and 2 or more GB of physical memory. Parallel GC is the default garbage collection algorithm in the HotSpot JVM. When triggered, it uses multiple threads to scan, move and collect the unreachable objects in the heap.

**CMS GC (Concurrent-Mark-Sweep)**

Concurrent mark-and-sweep (CMS) GC is designed to reduce application pauses by running most of the cleaning phases concurrently with the application threads, hence it offers more control over the stall time which affects the application response time.

Here is an example demonstrating how to set the JVM to use CMS, plus other options. Set the following in Mule's wrapper.conf file. Section 6, "Sample Configuration Files" gives additional context in which the flags are set.

```
wrapper.java.additional.22=-XX:+UseConcMarkSweepGC
wrapper.java.additional.23=-XX:CMSInitiatingOccupancyFraction=65
wrapper.java.additional.24=-XX:+UseCMSInitiatingOccupancyOnly
```

The flag -XX:CMSInitiatingOccupancyFraction designates a percentage of the total heap usage. When that percentage is reached, the JVM will trigger a CMS GC. A value of 40 to 70 typically suffices for applications running on Mule. If the value is too low, it could result in excessive, premature collections. It is usually recommended to start at a relatively higher value for -XX:CMSInitiatingOccupancyFraction and decrease it as needed to optimize for the fewest CMS events for the best performance.

Specify -XX:+UseCMSInitiatingOccupancyOnly when designating -XX:CMSInitiatingOccupancyFraction. Otherwise, the JVM attempts to dynamically adjust the value for -XX:CMSInitiatingOccupancyFraction. A changing value is undesirable in most production scenarios. That is because dynamic adjustment is based on statistical analysis that may not reliably account for load spikes.

GC logging is a good idea for performance tests. The GC log, once enabled, provides extremely valuable information about the activities in the heap and how they affect the runtime performance. GC logging tends to have little overhead for disk IO. Here is an example of how to enable various aspects of GC logging. Add these configurations to Mule's wrapper.conf file.

```
wrapper.java.additional.4=-XX:+PrintGCApplicationStoppedTime
wrapper.java.additional.5=-XX:+PrintGCDetails
wrapper.java.additional.6=-XX:+PrintGCDateStamps
wrapper.java.additional.7=-XX:+PrintTenuringDistribution
wrapper.java.additional.8=-XX:ErrorFile=%MULE_HOME%/logs/err.log
wrapper.java.additional.9=-Xloggc:%MULE_HOME%/logs/gc.log
wrapper.java.additional.10=-XX:+HeapDumpOnOutOfMemoryError
```

Upvotes: 2 <issue_comment>username_2: A good way to get to the memory leak suspects would be to take a heap dump (of all the nodes) right after you start seeing a decline in memory reclaimed post-major GC. There are multiple tools available that help analyze memory leaks. There is [a great blog post](https://dzone.com/articles/enduring-black-fridays-with-mulesoft-apis) on the topic. It summarizes some memory-leak related issues, with findings like the following:

**Finding**: *The pooled memory manager generally grabs 10% of the JVM heap and lives with it without releasing.*

**Fix**: Switch the Grizzly memory manager implementation to HeapMemoryManager. Note that HeapMemoryManager is the default implementation and is recommended by Grizzly for performance; albeit, Mule treats the PoolMemoryManager implementation as the default.

wrapper.conf changes:

```
wrapper.java.additional.<n>=-Dorg.glassfish.grizzly.DEFAULT_MEMORY_MANAGER=org.glassfish.grizzly.memory.HeapMemoryManager
```

**Finding**: *Async logging was being used widely, and the associated Log4J was observed to be holding a lot of JVM memory.* The default setting of 256*1024 slots was apparently too high. Since this RingBuffer does not grow or shrink, a high fixed size with each slot allocated as a separate object (RingBufferLogEvent), each holding a log event, could occupy a considerable amount of memory.

**Fix**: *Reduce the Log4J RingBuffer size to 128, in wrapper.conf or in log4j2.xml.*

```
wrapper.java.additional.<n>=-DAsyncLoggerConfig.RingBufferSize=128
```

**Finding**: Memory leak due to the default HazelCast implementation used for aggregator components (splitter-aggregator pattern). Heap analysis pointed to memory being held by the default HazelCast object store implementation used in splitter-aggregator components in specific flows. It appeared as if the store was not getting expired appropriately.

**Fix**: A custom object store implementation (subclass of PartitionedInMemoryObjectStore) was written and the TTL (TimeToLive) for entries explicitly defined.

```
@Override
public void expire(int entryTTL, int maxEntries, String partitionName) throws ObjectStoreException {
    super.expire(entryTTL, maxEntries, partitionName);
    if (getPrivatePartitionSize(partitionName) == 0) {
        disposePartition(partitionName);
    }
}
```

**Reference: <https://dzone.com/articles/enduring-black-fridays-with-mulesoft-apis>**

Upvotes: 4 [selected_answer]
2018/03/20
838
3,202
<issue_start>username_0: I use a header-only library in my C++ project. **When I make a change** in my code, Visual Studio starts parsing files in my solution, including the included library. Since the library is big, parsing it is slow and **I have to wait for a long time until IntelliSense becomes relevant** again.

Is there a way to speed up this parsing, perhaps by excluding the library from IntelliSense?

This question is NOT a duplicate of [Visual Studio 2017 is too slow during building and debugging](https://stackoverflow.com/questions/42877071/visual-studio-2017-is-too-slow-during-building-and-debugging) because it deals specifically with speeding up IntelliSense, which is not touched by the other question, nor its answers.<issue_comment>username_1: You can increase the Rescan Solution interval in the VS settings. Go to **Options** -> **Text Editor** -> **C/C++** -> **Advanced** and set **Rescan Solution interval** to the desired number of minutes. I prefer 5000. The value must be between 0 and 5000.

> Changes you make are parsed in real time (ie: as you make them), however every set amount of time a complete parse is triggered to ensure that the whole database is up to date. This extra parse won't actually clean up your database and recreate it from scratch but rather scan for changes made on files that are not active (take for example opening one of the header files your project is referencing on a different instance of Visual Studio). By default this is every 60 minutes, by changing this you can control that interval.

Upvotes: 1 <issue_comment>username_2: This seemingly unrelated issue fixed slow IntelliSense for me: <https://stackoverflow.com/a/55401328/6800366>

* Go to Options > Environment > General
* Uncheck "Automatically adjust visual experience based on client performance"
* Uncheck "Use hardware graphics acceleration if available"

Maybe unchecking "Enable rich client visual experience" also helps for you. It made no difference for me. Upvotes: 3 <issue_comment>username_3: 1) Visual settings, 2) the re-scan interval and 3) deletion of the project cache had limited (if any, for 1 and 2) effect in my case.

---

Then I started playing with the **Automatic Precompiled Header Cache Quota** (*Tools > Options > Text Editor > C/C++ > Advanced*). As a first step I increased it from the default 50GB to 75GB - guess what: it became slower and slower. Once the cache directory (*[solution directory].vs*) reached the limit, IntelliSense became completely useless.

If you think about it: really, how could 50GB-75GB of precompiled code lead to faster IntelliSense? Such an amount of data has to be indexed, queried... maintained.

**Setting the limit to 10GB made my day**. IntelliSense became as responsive as it should be.

---

I have to point out that the issue is not limited to VS2017, but also occurs in VS2019. This issue actually made me try VS2019. While VS2019 is really much better with C++ than VS2017, it has the same problem with the project cache.

Also, it is important to say that this is not only a VS C++ issue; I had the same/similar problem with the project cache [.vs] back in my C# years. It is just more drastic with C++ projects due to language specifics. Upvotes: 1
2018/03/20
627
2,332
<issue_start>username_0: ### Code used

```
var values = [
    [ 'value1','value2','value3' ]
];
var body = {
    values: values
};
gapi.client.sheets.spreadsheets.values.update({
    spreadsheetId: '1Lofhq9R7X5wzGvO7fMViN8D8q1W3fiNxO5jjP7XL_s0',
    range: 'Sheet1!A1:A4',
    valueInputOption:'RAW',
    resource: body
}).then((response) => {
    var result = response.result;
    console.log(`${result.updatedCells} cells updated.`);
});
```

Error shown

```
{
  "error": {
    "code": 400,
    "message": "Invalid JSON payload received. Unknown name \"valueInputOption\": Cannot bind query parameter. Field 'valueInputOption' could not be found in request message.",
    "status": "INVALID_ARGUMENT",
    "details": [
      {
        "@type": "type.googleapis.com/google.rpc.BadRequest",
        "fieldViolations": [
          {
            "description": "Invalid JSON payload received. Unknown name \"valueInputOption\": Cannot bind query parameter. Field 'valueInputOption' could not be found in request message."
          }
        ]
      }
    ]
  }
}
```

I've changed the access settings of the Google spreadsheet to allow anyone with its link to view and edit it. This error arises only when I'm trying to write into the spreadsheet; I'm able to read a spreadsheet using other functions.<issue_comment>username_1: No one knows what domains you white-listed, so no one can reproduce your code. And your spreadsheet doesn't have the `Class Data` sheet, and you wrongly tried to write at `A1:A4` instead of `A1:C1`. So I added a sheet, changed `values`, and tried with the [API explorer](https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets.values/update). The result was `200 OK`. Maybe you can try `'RAW'` instead of `"RAW"` and/or `values: values` instead of `resource: body`.

[![enter image description here](https://i.stack.imgur.com/6zGd5.png)](https://i.stack.imgur.com/6zGd5.png) Upvotes: 2 <issue_comment>username_2: I had specified the authorization scope required by the API incorrectly (before the update function was called). The error got fixed after using `https://www.googleapis.com/auth/spreadsheets` instead of `https://www.googleapis.com/auth/spreadsheets.readonly` Upvotes: 1 [selected_answer]
2018/03/20
426
1,499
<issue_start>username_0: I have a problem with CarouselPage. When initializing, I assign CarouselPage.ItemSource = .... At start I need to show the second page, not the first.

**My code:**

```
Weeks = new ObservableCollection<Week>
{
    new Week {Days = GetDays(currentMonday.AddDays(-7)), ItemTappedCommand = JobTappedCommand},
    new Week {Days = GetDays(currentMonday), ItemTappedCommand = JobTappedCommand},
    new Week {Days = GetDays(currentMonday.AddDays(7)), ItemTappedCommand = JobTappedCommand},
};
CurrentDate = Weeks[1].DateOfFirstDayOfWeek;
```

How can I do this?
2018/03/20
316
1,043
<issue_start>username_0: I am using material icons, which I love, BTW. So, I've looked at the ligatures here: <https://material.io/icons/> and have been using them. But I have a screen where two of the icons aren't showing up and I can't figure it out. Here is my HTML:

```
<i class="material-icons">arrow upward</i>
<i class="material-icons">arrow downward</i>
<i class="material-icons">clear</i>
<i class="material-icons">delete</i>
```

Clear and delete show up, but the upward and downward arrows don't show up. Here is the stackblitz: <https://stackblitz.com/edit/angular-hntkry?file=app%2Fapp.component.html>

I've googled, but can't find issues where some of the icons show up, but not others.<issue_comment>username_1: Have you linked the Material+Icons font family? If not, please do:

```
<link href="https://fonts.googleapis.com/icon?family=Material+Icons" rel="stylesheet">
```

As well, @Mateusz Juruś is right, there should be `_` instead of a `space`. Upvotes: 0 <issue_comment>username_2: `Arrow_upward` and `arrow_downward` are spelled with `_` between the words and not a blank space. Upvotes: 3 [selected_answer]<issue_comment>username_3: Shouldn't it be `keyboard_arrow_down` instead of `arrow downward`? Upvotes: 0
2018/03/20
872
2,889
<issue_start>username_0: I have two different contents. I want to show the contents as a marquee on a single line, and the contents should display one after another with some delay duration.

```
Label 1 content here
Label 2 content here
```<issue_comment>username_1: Instead of using `<marquee>` and similar tags (which you shouldn't use in this context), please try using JavaScript this way:

```js
$(function () {
    $(".slider .slide").hide();
    $(".slider .slide:first").fadeIn().delay(10000).fadeOut(function () {
        $(this).next().fadeIn();
    });
    setInterval(function () {
        $(".slider .slide:first").fadeIn().delay(10000).fadeOut(function () {
            $(this).next().fadeIn();
        });
    }, 20000);
});
```

```html
<div class="slider">
    <div class="slide">Label 1 content here</div>
    <div class="slide">Label 2 content here</div>
</div>
```

The above is a bare minimal demonstration of what you might like. Let me know if you need more improvement on this. ☺ Upvotes: 1 <issue_comment>username_2: You may use this way, without marquee, based on a jQuery queue and adding a CSS class to your elements:

```js
$(function () {
    var current_button = 0;
    $('div.slider .slide:first')
        .show(100)
        .addClass('active');
    setInterval(function() {
        if(current_button===$('.slide').length) {
            $('div.slider .slide:first')
                .show(100)
                .addClass('active');
            current_button=0;
        } else {
            $('div.slider .slide')
                .hide(100)
                .removeClass('active');
            $('div.slider .slide:eq(' + current_button + ')')
                .show(100)
                .addClass('active');
            current_button++;
        }
    },3500)
});
```

```css
.slide {
    float: left;
    transform: translateX(400%);
    transition: all 7s;
}

.active {
    transform: translateX(-350%);
}
```

```html
<div class="slider">
    <div class="slide">Label 1 content here</div>
    <div class="slide">Label 2 content here</div>
</div>
```

EDIT: I think there is a better way, but I tried a solution in the code above. Hope it helps Upvotes: 0 <issue_comment>username_3: Here is my solution, without the `<marquee>` tag, which is deprecated now:

```js
//list of slides to be shown
const content = [
    'first slide',
    'second slide',
    'third slide'
];
let key = 0;

const marquee = $('.marquee');

marquee.on('animationstart', () => {
    key = 0;
    marquee.text(content[key]);
});

marquee.on('animationiteration', () => {
    key++;
    if(typeof content[key] === 'undefined') key = 0;
    marquee.text(content[key]);
});

marquee.removeClass('paused');
```

```css
.marquee-container {
    width: 100vw;
    overflow: hidden;
    white-space: nowrap;
}

.marquee {
    padding-left: 100vw;
    display: inline-block;
    animation: marquee 5s linear infinite;
    animation-play-state: running;
}

.marquee.paused,
.marquee-container:hover .marquee {
    animation-play-state: paused;
}

@keyframes marquee {
    0% { transform: translateX(0); }
    100% { transform: translateX(-100%); }
}
```

```html
<div class="marquee-container">
    <div class="marquee paused"></div>
</div>
```

Upvotes: 2
2018/03/20
577
2,202
<issue_start>username_0: The query I'm trying to write takes, from OrderDetails, all the items that accounts bought, sums them into one row per item, and shows me how much was bought of each product. What I am trying to do is show the item name and the item image, not only the ID and the sum. This is the query that I wrote:

```
string strSql = "SELECT Sum(OrderDetails.ItemStock),OrderDetails.ItemID,Items.ItemName,Items.ItemImage FROM OrderDetails,Items WHERE OrderDetails.ItemID = Items.ItemID GROUP BY OrderDetails.ItemID ORDER BY SUM(OrderDetails.ItemStock) DESC";
```

The error that I got from this query: "The query does not include the specified phrase 'ItemName' as part of an aggregate function."

When I write the query like this:

```
string strSql = "SELECT Sum(OrderDetails.ItemStock),OrderDetails.ItemID FROM OrderDetails GROUP BY OrderDetails.ItemID ORDER BY SUM(OrderDetails.ItemStock) DESC";
```

it is OK.<issue_comment>username_1: The error message `"The query does not include the specified phrase 'ItemName' as part of an aggregate function."` clearly says that you need to include the column `ItemName`, which is missing in your `group by` clause.

Based on the columns in your `select` statement, your query should be as below: all columns on which no aggregate function is applied must be included in the `group by` clause. That is a requirement of the `group by` clause in SQL.

```
string strSql = "SELECT Sum(OrderDetails.ItemStock), OrderDetails.ItemID, Items.ItemName, Items.ItemImage FROM OrderDetails, Items WHERE OrderDetails.ItemID = Items.ItemID GROUP BY OrderDetails.ItemID, Items.ItemName, Items.ItemImage ORDER BY SUM(OrderDetails.ItemStock) DESC";
```

Upvotes: 2 <issue_comment>username_2: Learn to use explicit `JOIN` syntax! And, fix your aggregation as well:

```
SELECT SUM(od.ItemStock), od.ItemID, i.ItemName, i.ItemImage
FROM OrderDetails as od INNER JOIN
     Items as i
     ON od.ItemID = i.ItemID
GROUP BY od.ItemID, i.ItemName, i.ItemImage
ORDER BY SUM(od.ItemStock) DESC;
```

The fix to your problem is simply to include all the non-aggregated columns in the `GROUP BY`. But you should learn how to write clean SQL along the way. Upvotes: 1
2018/03/20
2,993
9,877
<issue_start>username_0: I'm completely new to Ruby on Rails, but I think I might be missing something obvious. I'm currently working on a webapp that scrapes auction websites. The bones of the app were created by someone else. I'm currently trying to add new website scrapes, but they don't seem to be working.

I have read through some of the Nokogiri documentation, checked that the scraped information is indeed not being written to the database (the seeded URLs that are being targeted have been, when I check via the rails console), and used the Chrome extension CSS Selector Tester to check that I am targeting the correct CSS selectors. The record ids are correct when I check via the rails console.

I have put what I think are the important sections of code below, but I might be missing something that I don't realise is important. The websites I'm having issues with are [Lot-art.com](https://www.lot-art.com/auction-search/?form_id=lot_search_form&page=1&mq=&q=Vase&ord=recent) & [Lot-Tissimo.com](https://lot-tissimo.com/en/cmd=s&lwr=&ww=Vase&xw=&srt=SN&wg=EUR&page=1)

Any help will be much appreciated.

Seeded URLs

```
Source.create(name: "Auction.fr", query_template: "https://www.auction.fr/_en/lot/search/?contexte=futures&tri=date_debut%20ASC&query={query}&page={page}")
Source.create(name: "Invaluable.co.uk", query_template: "https://www.invaluable.co.uk/search/api/search-results?keyword={query}&size=1000")
Source.create(name: "Interencheres.com", query_template: "http://www.interencheres.com/en/recherche/lot?search%5Bkeyword%5D={query}&page={page}")
Source.create(name: "Gazette-drouot.com", query_template: "http://catalogue.gazette-drouot.com/html/g/recherche.jsp?numPage={page}&filterDate=1&query={query}&npp=100")
Source.create(name: "Lot-art.com", query_template: "http://www.lot-art.com/auction-search/?form_id=lot_search_form&page=1&mq=&q={query}&ord=recent")
Source.create(name: "Lot-tissimo.com", query_template: "https://lot-tissimo.com/en/cmd=s&lwr=&ww={query}&xw=&srt=SN&wg=EUR&page={page}")
```

Scheduler code

```
require 'rufus-scheduler'
require 'nokogiri'
require 'mechanize'
require 'open-uri'
require "net/https"

s = Rufus::Scheduler.singleton

s.interval '1m' do
  setting = Setting.find(1)
  agent = Mechanize.new
  agent.user_agent_alias = 'Windows Chrome'
  agent.cookie_jar.load(File.join(Rails.root, 'tmp/cookies.yaml'))
  List.all.each do |list|
    number_of_new_items = 0
    list.actions.each do |action|
      url = action.source.query_template.gsub('{query}', action.list.query)
      case action.source.id
      when 1 # Auction.fr
        20.downto(1) do |page|
          doc = Nokogiri::HTML(open(url.gsub('{page}', page.to_s)))
          doc.css("div.list-products > ul > li").reverse.each do |item_data|
            price = 0
            if item_data.at_css("h3.h4.adjucation.ft-blue") && /Selling price : ([\d\s]+) €/.match(item_data.at_css("h3.h4.adjucation.ft-blue").text)
              price = /Selling price : ([\d\s]+) €/.match(item_data.at_css("h3.h4.adjucation.ft-blue").text)[1].gsub(" ", "")
            end
            item = action.items.new(
              title: item_data.at_css("h2").text.strip,
              url: item_data.at_css("h2 a")["href"],
              picture: item_data.at_css("div.image-wrap.lazy div.image img")["src"],
              price: price,
              currency: "€"
            )
            ActiveRecord::Base.logger.silence do # This disables writing logs
              if item.save
                number_of_new_items = number_of_new_items + 1
              end
            end
          end
        end
      when 97 # Lot-Tissimo.com
        5.downto(1) do |page|
          doc = Nokogiri::HTML(open(url.gsub('{page}', page.to_s)))
          doc.css("#inhalt > .objektliste").reverse.each do |item_data|
            # price = 0
            # if item_data.at_css("h3.h4.adjucation.ft-blue") && /Selling price : ([\d\s]+) €/.match(item_data.at_css("h3.h4.adjucation.ft-blue").text)
            #   price = /Selling price : ([\d\s]+) €/.match(item_data.at_css("h3.h4.adjucation.ft-blue").text)[1].gsub(" ", "")
            # end
            item = action.items.new(
              title: item_data.at_css("div.objli-desc").text.strip,
              url: item_data.at_css("td.objektliste-foto a")["href"],
              picture: item_data.at_css("td.objektliste-foto a#lot_link img")["src"],
              price: price,
              currency: "€"
            )
            ActiveRecord::Base.logger.silence do # This disables writing logs
              if item.save
                number_of_new_items = number_of_new_items + 1
              end
            end
          end
        end
      when 2 # Invaluable.co.uk
        doc = JSON.parse(open(url).read)
        doc["itemViewList"].reverse.each do |item_data|
          puts item_data["itemView"]["photos"]
          item = action.items.new(
            title: item_data["itemView"]["title"],
            url: "https://www.invaluable.co.uk/buy-now/" + item_data["itemView"]["title"].parameterize + "-" + item_data["itemView"]["ref"],
            picture: item_data["itemView"]["photos"] != nil ? item_data["itemView"]["photos"].first["_links"]["medium"]["href"] : nil,
            price: item_data["itemView"]["price"],
            currency: item_data["itemView"]["currencySymbol"]
          )
          ActiveRecord::Base.logger.silence do # This disables writing logs
            if item.save
              number_of_new_items = number_of_new_items + 1
            end
          end
        end
      when 3 # Interencheres.com
        # doc = Nokogiri::HTML(open(url))
        5.downto(1) do |page|
          doc = Nokogiri::HTML(open(url.gsub('{page}', page.to_s)))
          doc.css("div#lots_0 div.ligne_vente").reverse.each do |item_data|
            price = 0
            item = action.items.new(
              title: item_data.at_css("div.ph_vente div.des_vente p a").text.strip,
              url: "http://www.interencheres.com" + item_data.at_css("div.ph_vente div.des_vente p a")["href"],
              picture: item_data.at_css("div.ph_vente div.gd_ph_vente img")["src"],
              price: price,
              currency: "€"
            )
            ActiveRecord::Base.logger.silence do # This disables writing logs
              if item.save
                number_of_new_items = number_of_new_items + 1
              end
            end
          end
        end
      when 4 # Gazette-drouot.com
        5.downto(1) do |page|
          # doc = Nokogiri::HTML(open(url.gsub('{page}', page.to_s)))
          doc = agent.get(url.gsub('{page}', page.to_s))
          # doc = agent.get(url)
          doc.css("div#recherche_resultats div.lot_recherche").reverse.each do |item_data|
            price = 0
            picture = item_data.at_css("img.image_thumb_recherche") ? item_data.at_css("img.image_thumb_recherche")["src"] : nil
            item = action.items.new(
              title: item_data.at_css("#des_recherche").text.strip.truncate(140),
              url: "http://catalogue.gazette-drouot.com/html/g/" + item_data.at_css("a.lien_under")["href"],
              picture: picture,
              price: price,
              currency: "€"
            )
            ActiveRecord::Base.logger.silence do # This disables writing logs
              if item.save
                number_of_new_items = number_of_new_items + 1
              end
            end
          end
        end
      when 69 # Lot-art.com
        doc = agent.get(url)
        doc.css("div.lot_list_holder").reverse.each do |item_data|
          price = 0
          item = action.items.new(
            title: item_data.at_css("div.lot_list_body a")[0].text.strip.truncate(140),
            url: item_data.at_css("div.lot_list_body")["href"],
            picture: item_data.at_css("a.lot_list_thumb img")["src"],
            price: price,
            currency: "€"
          )
          ActiveRecord::Base.logger.silence do # This disables writing logs
            if item.save
              number_of_new_items = number_of_new_items + 1
            end
          end
        end
      end
      if number_of_new_items > 0 && setting.notifications_per_hour > setting.notifications_this_hour && setting.pushover_app_token.present? && setting.pushover_user_key.present?
        url = URI.parse("https://api.pushover.net/1/messages.json")
        req = Net::HTTP::Post.new(url.path)
        req.set_form_data({
          :token => setting.pushover_app_token,
          :user => setting.pushover_user_key,
          :message => "#{number_of_new_items} new items on #{list.name}!",
          :url_title => "Check the list",
          :url => "http://spottheauction.com/lists/#{list.id}"
        })
        res = Net::HTTP.new(url.host, url.port)
        res.use_ssl = true
        res.verify_mode = OpenSSL::SSL::VERIFY_PEER
        res.start {|http| http.request(req) }
      end
    end
    agent.cookie_jar.save(File.join(Rails.root, 'tmp/cookies.yaml'))
  end
end

s.cron '0 * * * *' do
  setting = Setting.find(1)
  setting.notifications_this_hour = 0
  setting.save
end
```<issue_comment>username_1: `new` just initializes an instance but doesn't save the instance. Do you actually call `save` somewhere?

You have two options:

Call `save` on the item:

```
item = action.items.new(
  # ...
)
item.save
```

Or use `create` instead of `new`:

```
item = action.items.create(
  # ...
)
```

Upvotes: 2 <issue_comment>username_2: In case someone else comes across this: I got the scraping of lot-art.com to work. It seemed that I was lacking specificity in the CSS selector for Nokogiri to pull the correct data.

I am still having continuing issues with lot-tissimo, although that appears to be from something else, as other scrapers, such as scraping-hub's Portia spiders, have issues with it too. Upvotes: 0
2018/03/20
447
1,544
<issue_start>username_0: How can I use Django 1.11 as default version in **pycharm** ? Letting you know that I'm using **macOS** and default Django version is 2.0.3 !<issue_comment>username_1: Pycharm has nothing to do with Django version. You can just install django version you need. Use: `pip3 install django==1.11` > > Note: > > > You can check your globally installed django version by typing: ``` python3 import django django.VERSION ``` You will probably get 2.0.3 as global django version. So now you have to type: ``` pip3 install django==1.11 ``` That will install django with version 1.11 as global. From now, whenever you will create new project via: `django-admin startproject your_project` Django version of that new project will be 1.11 by default > > And some advice: > > > Next time just use `virtualenv` or `virtualenvwrapper`, by using it you will set your libraries locally for each project Upvotes: 2 <issue_comment>username_2: You can use `python virtualenv` to create a custom environment for your project with django version 1.11. * <https://docs.python.org/3/library/venv.html> * <https://www.jetbrains.com/help/pycharm-edu/creating-virtual-environment.html> In your project folder : ``` python3 -m venv env source env/bin/activate pip install django=1.11 ``` This will create a folder named `env` (`source` command activates this environment). You may need to install all your other dependencies. In PyCharm, under project settings `Project Interpreter`, select this environment. Upvotes: 0
2018/03/20
836
3,271
<issue_start>username_0: I am working in Magento 2.2.1, I am trying to get product-collection of a category by its category id. Every time when i use to call [using this example](https://magento.stackexchange.com/questions/125086/get-product-collection-by-category-id-on-phtml-file-magento2), I always get an error.<issue_comment>username_1: Try Below Code: ``` php $objectManager = \Magento\Framework\App\ObjectManager::getInstance(); $categoryFactory = $objectManager-get('\Magento\Catalog\Model\CategoryFactory'); $categoryHelper = $objectManager->get('\Magento\Catalog\Helper\Category'); $categoryRepository = $objectManager->get('\Magento\Catalog\Model\CategoryRepository'); $store = $objectManager->get('Magento\Store\Model\StoreManagerInterface')->getStore(); $categoryId = 47; // YOUR CATEGORY ID $category = $categoryFactory->create()->load($categoryId); $categoryProducts = $category->getProductCollection() ->addAttributeToSelect('*'); foreach ($categoryProducts as $product) { $imageUrl = $store->getBaseUrl(\Magento\Framework\UrlInterface::URL_TYPE_MEDIA) . 'catalog/product' . $product->getImage(); ?> [![](<?= $imageUrl;?>) = $product-getName(); ?>](<?= $product->getProductUrl(); ?>) = $product-getPrice(); ?> php } ? ``` I hope it will help you Upvotes: 3 [selected_answer]<issue_comment>username_2: Better and more actual way to get products by category - via `ProductRepository` and built-in Filters (from Magento 2.2) ``` public function __construct( ProductRepositoryInterface $productRepository, SearchCriteriaBuilder $criteriaBuilder ) { $this->productRepository = $productRepository; $this->criteriaBuilder = $criteriaBuilder; } /** * @return ProductInterface[] */ public function getProducts(): array { $categoryIdsToExport = $this->config->getCategoriesToExport(); return $this->productRepository->getList( $this->criteriaBuilder //It's Custom Filter from di.xml ->addFilter('category_id', $categoryIdsToExport, 'in') //Here you cat filter products in standart Magento way ->addFilter('status', \Magento\Catalog\Model\Product\Attribute\Source\Status::STATUS_ENABLED) ->addFilter('visibility', \Magento\Catalog\Model\Product\Visibility::VISIBILITY_BOTH) ->create() )->getItems(); } ``` Unfortunately There are few info in stackexchange about "Search Criteria Unify Processing" - better and currently proper way to filter,sort models. [Here Magento doc about Search Criteria Unify Processing](https://devdocs.magento.com/guides/v2.4/extension-dev-guide/searching-with-repositories.html#m2devgde-searchcriteria-unify-processing) Also you can register your own CustomFilter to filter products. See example in `vendor/magento/module-catalog/etc/di.xml` : ``` Magento\Catalog\Model\Api\SearchCriteria\CollectionProcessor\FilterProcessor\ProductCategoryFilter Magento\Catalog\Model\Api\SearchCriteria\CollectionProcessor\FilterProcessor\ProductStoreFilter Magento\Catalog\Model\Api\SearchCriteria\CollectionProcessor\FilterProcessor\ProductStoreFilter Magento\Catalog\Model\Api\SearchCriteria\CollectionProcessor\FilterProcessor\ProductWebsiteFilter ``` Upvotes: 0
2018/03/20
425
1,537
<issue_start>username_0: I'm getting this error by executing `ng g c user`, but it also happens with `ng generate component` and I don't understand i need a help: > > The "c" alias is already in use by the "--collection" option and cannot be used by the "--change-detection" option. Please use a different alias. > > ><issue_comment>username_1: I do not why change but there is a new [commit](https://github.com/angular/devkit/commit/88a3e309e747835410ce7b909823fb263ca37d1c) to the package that rename the some aliases and produces this bug in your project look for the file ``` node_modules/@schematics/angular/component/schema.json ``` in this look the ChangeDetection Category and change the alias property from c to cd for example an try again. ``` "changeDetection": { "description": "Specifies the change detection strategy.", "enum": ["Default", "OnPush"], "type": "string", "default": "Default", "alias": "cd" }, ``` With this I'm capable to run **ng g c Home or ng generate component Home** again. Hope this helps. Upvotes: 2 [selected_answer]<issue_comment>username_2: Changing the schema.json as suggested by the answer, didn't work for me. I'm using a globally installed angular cli @1.7.4. Since this issue seems to be caused by @schematics/angular (v0.7.5 in my case), I uninstalled this package from my projects dependencies ``` npm uninstall @schematics/angular --save ``` Afterwards, `ng generate component` worked for me. Upvotes: 2
2018/03/20
2,399
8,439
<issue_start>username_0: **TL;DR: My question is how to bundle some of my sass files into single sass file?** I've been developing an Angular component library and I package it with ng-packagr. Let's call it `@my-lib/ngx-components`. Consumers of my lib will import my components like `@my-lib/ngx-components/navbar`. I decided to add theming support to components. For example, I have a navbar component with default colors (background, text, hover etc.) I want consumers of my library to be able to override these colors with their own theme. That's why I've written a `mixin` which takes a `$theme` input and override some css rules as follows (this is a basic version of what I have) `_navbar-theme.sass` ``` @mixin navbar-theme($theme) $primary-color: map-get($theme, primary-color) $secondary-color: map-get($theme, secondary-color) $color: map-get($theme, color) .navbar background-color: $primary-color color: $color &:hover background-color: $secondary-color ``` Each component has its own `*-theme.sass` file. I also have global `_theming.sass` file which imports all of these as follows `_theming.sass` ``` @import './components/navbar/navbar-theme' @import './components/button/button-theme' @import './components/dropdown/dropdown-theme' ``` I want to export this `_theming.sass` file from my lib, so people can import this file in their own `sass` file as `@import '~@my-lib/ngx-components/theming'` and start using all of the `mixin`s available. If they want to have custom `navbar`, `button` etc, they should be able to use those `mixin`s with single import. I tried to make it look like angular-material theming setup. At first, I have tried `node-sass` which is already in my dependencies. But, it tries to build sass into css so it omits mixins in the output file. Then, I looked at what angular-material has done. They use [scss-bundle](https://github.com/SimplrJS/scss-bundle) I thought "this is exactly what I want." However, it requires `scss` files, not `sass` files. It cannot read `sass` files. Then, I thought "Okay, I can give up on sass and start using scss. How do I convert all those files to scss without going through them by hand". Then, I found [sass-convert](https://github.com/SassDoc/sass-convert). [In this question](https://stackoverflow.com/questions/26493111/is-it-possible-to-convert-whole-folder-with-convert-sass-scss-to-sass) it was said that I can use it within command line. However, when I install `sass-convert` with npm globally, it didn't give me a command line executable. I think I need `Gulp` to use it. I've been avoding to use `Gulp` from the beginning, because it means another tool to learn and it adds complexity to codebase. At this point, I feel like ["Hal fixing light bulb"](https://www.youtube.com/watch?v=AbSehcT19u0) **TL;DR: My question is how to bundle some of my sass files into single sass file?** Also, If you can come up with a solution that requires webpack, that's fine too.<issue_comment>username_1: Let's through your opinion or questions: > > I want to export this \_theming.sass file from my lib, so people can > import this file in their own sass file as @import > '~@my-lib/ngx-components/theming' and start using all of the mixins > available. If they want to have custom navbar, button etc, they should > be able to use those mixins with single import. > > > You need to know, what is your target audience. Mostly people using angular cli for create their app like template scratch. 
So you need to provide a `css` bundle (for people who just want to import your css) and a `sass` bundle (for those who want to use your objects and mixins).

> I want to export this \_theming.sass file from my lib, so people can import this file in their own sass file as @import '~@my-lib/ngx-components/theming' and start using all of the mixins available. If they want to have custom navbar, button etc, they should be able to use those mixins with single import.
>
> I tried to make it look like angular-material theming setup.

Firstly, you need to know that `@angular/material` doesn't export `sass` (they use scss); they export css plus a theming file compiled by `scss-bundle` (as you mentioned). See their [code](https://github.com/angular/material2/blob/fcc09089c38fb5d2d78db45f473908a924e5c14e/tools/sass_bundle.ts) and documentation on [theming](https://material.angular.io/guide/theming#defining-a-custom-theme).

> I thought "this is exactly what I want." However, it requires scss files, not sass files. It cannot read sass files.

I would like to quote this [answer](https://stackoverflow.com/questions/5654447/whats-the-difference-between-scss-and-sass#answer-5654471):

> Sass is a CSS pre-processor with syntax advancements. Style sheets in the advanced syntax are processed by the program, and turned into regular CSS style sheets. However, they do not extend the CSS standard itself.

It is better to transfer your code from sass to scss yourself; it would not be much work, I think (I always write scss instead of sass files).

---

Solution:
=========

1. Provide css and sass (scss better)
-------------------------------------

When you deliver your component libs, you have to provide `css` and `scss`, because the Angular CLI doesn't provide an `scss` loader by default. Don't use `sass` files, use `scss` files; see the answer I referred to above.

**scss-bundle + webpack**

Since you have to provide css, you can use the [webpack shell plugin](https://www.npmjs.com/package/webpack-shell-plugin) to bundle scss. scss-bundle also provides a CLI, if you want to use one.

2. Structure your scss
----------------------

Okay, let's take the bootstrap@4 module as a sample for this case.
Upvotes: 3 [selected_answer]<issue_comment>username_2: **My solution for scss / sass files** I've used small module [bundle-scss](https://www.npmjs.com/package/bundle-scss?activeTab=readme) It bundles files by file name mask. So you need to pass correct mask like `./src/**/*.theme.scss` specify destination file and maybe your custom sort-order You don't have to create one entry point file with all imports. `bundle-scss` will get all files by mask analyze all imports and include this files as well Upvotes: 0 <issue_comment>username_3: There is also a widely used package, called [scss-bundle](https://github.com/reactway/scss-bundle). It is quite simple to use, you just create a config file with all relevant configuration and then run scss-bundle. This for example will use all scss files, imported in entry.scss and move it to out.scss. All imports will be resolved, except for angular themes in this example, like `@import '~@angular/material/theming';`. scss-bundle.config.json: ``` { "bundlerOptions": { "entryFile": "my-project/src/entry.scss", "outFile": "dist/out.scss", "rootDir": "my-project/src", "project": "../../", "ignoreImports": [ "~@angular/.*" ], "logLevel": "debug" } } ``` Upvotes: 2
2018/03/20
1,776
6,275
<issue_start>username_0: This is my first post, so hi everybody! :) I have a question regarding a schema of my database. I'm writing RESTful application using Spring. The idea is to allow user to create his own diet based on products stored in DB. So I came to creating entity Meal, which should consist of Products and amount of those products. It seems like natural way to have something like this is using Map. Problem is, that as I have read there is a problem with mapping such class to JSON Object, which I would like to send to clients browser. My other idea was to store List of objects like ProductWithQuantity instead of such map, but I'm a little worried that DB would be quickly flooded by entries like 1 glass of milk, 2 glasses of milk, 1.1243 glasses of milk and so on. So my question is - do you have any better idea for the schema for such purpose? ;)<issue_comment>username_1: Let's through your opinion or questions: > > I want to export this \_theming.sass file from my lib, so people can > import this file in their own sass file as @import > '~@my-lib/ngx-components/theming' and start using all of the mixins > available. If they want to have custom navbar, button etc, they should > be able to use those mixins with single import. > > > You need to know, what is your target audience. Mostly people using angular cli for create their app like template scratch. So you need provide `css` bundle (people just want import your css) and `sass` bundle (who want to use your object or your mixin). > > I want to export this \_theming.sass file from my lib, so people can > import this file in their own sass file as @import > '~@my-lib/ngx-components/theming' and start using all of the mixins > available. If they want to have custom navbar, button etc, they should > be able to use those mixins with single import. > > > I tried to make it look like angular-material theming setup. > > > Firstly, you need to know that `@angular/material` doesn't export `sass` (they use scss) but they export css thene compiled by `scss-bundle` (as you mention it) see their [code](https://github.com/angular/material2/blob/fcc09089c38fb5d2d78db45f473908a924e5c14e/tools/sass_bundle.ts) and documentation [theme](https://material.angular.io/guide/theming#defining-a-custom-theme). > > I thought "this is exactly what I want." However, it requires scss > files, not sass files. It cannot read sass files. > > > I would like quote this [answer](https://stackoverflow.com/questions/5654447/whats-the-difference-between-scss-and-sass#answer-5654471): > > Sass is a CSS pre-processor with syntax advancements. Style sheets in > the advanced syntax are processed by the program, and turned into > regular CSS style sheets. However, they do not extend the CSS standard > itself. > > > It is better you need transfer your code from sass to scss (by yourself), it would not much to do it (I think, I always write scss instead sass file). --- Solution: ========= 1. Provide css and sass (scss better) ------------------------------------- When you deliver your component libs, You have to provide `css` and `scss`. Beacuse angular cli doesn't provide `scss` loader by default. Don't use `sass` file, use `scss` file see my refer answer on top. **scss-bundle + webpack** Since you have to provide css, you can you [webpack shell plugin](https://www.npmjs.com/package/webpack-shell-plugin) to bundle scss. Scss have provide cli, if you want to use cli. 2. Structure your scss ---------------------- Okay, let's take sample from bootstrap@4 module for this case. 
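To make the ProductWithQuantity idea from the question concrete, here is a minimal, hedged JPA sketch (entity and field names are hypothetical, not from any existing codebase): the amount lives on a join entity, so Product rows are never duplicated per quantity.

```
import java.math.BigDecimal;
import javax.persistence.*;

// Join entity: one row per (meal, product) pair; the amount is a column here,
// so "1 glass of milk" and "1.1243 glasses of milk" reuse the same Product row.
@Entity
public class MealIngredient {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @ManyToOne(optional = false)
    private Meal meal;        // owning Meal (hypothetical entity)

    @ManyToOne(optional = false)
    private Product product;  // referenced catalog Product (hypothetical entity)

    // Amount of the product in this meal, e.g. 1.1243 (unit defined on Product)
    @Column(nullable = false)
    private BigDecimal quantity;

    // getters/setters omitted for brevity
}
```

A Meal then holds a plain `List<MealIngredient>`, which serializes to a JSON array without the Map-mapping problem mentioned above.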
2018/03/20
1,522
5,190
<issue_start>username_0: I have a NodeJS lambda function that I run on AWS. I want to write a simple test for the `.handler` function. **CODE** Here is the `index.js` code: ``` // importing dependencies var mySQLWriter = require('./mySQLWriterService'); exports.handler = function(event, context, callback) { console.log('Printing out JSON.stringify(event): '); console.log(JSON.stringify(event)); event.Records.forEach((record) => { if (record.eventName === 'INSERT') { console.log('We have an INSERT happening.'); mySQLWriter(record, callback); } }); }; ``` I want to write a simple test that would pass if `mySQLWriter` is called. Using Mocha and Chai and with help from dashmud below, I have attempted to do this, but it isn't working, here is my `indexTests.js` code: ``` const chai = require('chai'); const expect = chai.expect; const spies = require('chai-spies'); chai.use(spies); const appStart = require('../index'); const mySQLWriter = require('../mySQLWriterService'); describe('lambda function', () => { it('should call the mySQLWriter() function', () => { const spy = chai.spy.on(mySQLWriter, 'mySQLWriter'); let event = { Records: [ { eventName: 'INSERT', dynamodb: { NewImage: { DeviceId: { S: 'Device 000' }, TimeStamp: { S: '2018-03-20T11:15:31.668Z' }, Accuracy: { S: '5' }, Latitude: { S: '53.639645' }, Longitude: { S: '-1.782491' }, Speed: { S: '1' }, } } } ] }; const context = {}; appStart.handler(event, context, () => { expect(spy).to.have.been.called(); done(); }) }); }); ``` When I run the test, I get: [![enter image description here](https://i.stack.imgur.com/gbbfG.png)](https://i.stack.imgur.com/gbbfG.png)<issue_comment>username_1: Looks like you're not calling the handler function, so try changing ``` appStart() ``` to ``` appStart.handler(someEvent) ``` Upvotes: 0 <issue_comment>username_2: Your `appStart` is just a module. You need to call the `handler` inside your `appStart` (`index.js`) and do your assertions inside the callback. ``` const chai = require('chai'); const expect = chai.expect; const spies = require('chai-spies'); chai.use(spies); const appStart = require('../index'); describe('lambda function', () => { it('should call the mySQLWriter() function', done => { const spy = chai.spy.on(mySQLWriter, 'mySQLWriter'); const event = {}; const context = {}; appStart.handler(event, context, () => { expect(spy).to.have.been.called(); done(); }) }); }); ``` --- Update based on comment and updated question: Based on your screenshot, it seems like `mySQLWriterService.js` exports an object with a `mySQLWriter` function. This will not work. ``` var mySQLWriter = require('./mySQLWriterService'); ``` I think is should be like this: ``` const mySQLWriter = require('./mySQLWriterService').mysqlWriter; ``` (I'm not 100% sure as you did not include the code inside your `mySQLWriterService.js`.) P.S. Don't use `var`. Ever. 
Upvotes: 0 <issue_comment>username_3: [![enter image description here](https://i.stack.imgur.com/LR3nR.png)](https://i.stack.imgur.com/LR3nR.png) Removed **chai** and used **sinon** instead ``` //const chai = require('chai'); //const expect = chai.expect; const sinon = require('sinon'); //chai.use(sinon); const SQLWriter = require('./mysqlwriterservice.js'); const appStart = require('./sinonsqlwriter'); describe('lambda function', () => { it('should call the mySQLWriter() function', () => { const spy = sinon.spy(SQLWriter, 'mySQLWriter'); let event = { Records: [ { eventName: 'INSERT', dynamodb: { NewImage: { DeviceId: { S: 'Device 000' }, TimeStamp: { S: '2018-03-20T11:15:31.668Z' }, Accuracy: { S: '5' }, Latitude: { S: '53.639645' }, Longitude: { S: '-1.782491' }, Speed: { S: '1' }, } } } ] }; const context = {}; appStart.handler(event, context, () => { console.log("Call count"+spy.callCount) //expect(spy).to.have.been.called(); }) }); }); ``` // sinonsqlwriter.js // importing dependencies ``` const SQLWriter = require('./mysqlwriterservice.js'); exports.handler = function(event, context, callback) { console.log('Printing out JSON.stringify(event): '); console.log(JSON.stringify(event)); event.Records.forEach((record) => { if (record.eventName === 'INSERT') { console.log('We have an INSERT happening.'); SQLWriter.mySQLWriter(record, callback); SQLWriter.mySQLWriter(record, callback); } }); callback(); }; ``` // mysqlwriterservice.js I used the code from your shared link. Below is the **updated** answer: [![enter image description here](https://i.stack.imgur.com/hz1hk.png)](https://i.stack.imgur.com/hz1hk.png) Upvotes: 1
2018/03/20
453
1,610
<issue_start>username_0: I'm currently trying to make my script dynamic and looking for a way to grab my file data with wildcards. This is my old code: ``` #process .csv into the database if(($handle = fopen("9_filename.csv", "r"))) ``` Here I just grab a file through a string. Works fine. But it's static and I need to process 20 files per week at once, which change the calendar week in the file name. So after some research I found the "glob" function, which allows me to set wildcards. So I assign a variable with the "glob" function. A var\_dump returns the correct file name "9\_filename.csv". So the string exists. This is my new code: ``` #variables $file = glob("*_filename.csv"); #process .csv into the database if(($handle = fopen($file, "r"))) ``` Unfortunately as a result I get the following error: > > Warning: fopen() expects parameter 1 to be a valid path, array given > in > > ><issue_comment>username_1: Because `glob` finds *all files matching pattern*, it must return an array. And fopen can only handle one at a time, so you need to feed them one at a time: ``` #variables $files = glob("*_filename.csv"); foreach($files as $file){ #process .csv into the database if(($handle = fopen($file, "r"))) } ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: `glob()` functions return an array of filenames or directories matching a specified pattern. So once you get an array you need to loop through each array. In that case you can then use `if(($handle = fopen("9_filename.csv", "r")))` where `9_filename.csv` will be one the array value. Upvotes: 0
2018/03/20
555
2,108
<issue_start>username_0: I am having a problem with my C# code. The code is designed to read RFID tags and based on the UID, tell a database what kind of process to start. The problem I am having is when I read a tag and set it to the required sleep, it stil reads tags and basically waits with executing the next procedure with the UID it wasn't even supposed to read. **Code:** ``` string myConnectionString = "server=" + mysql_host + ";uid=" + mysql_user + ";" + "pwd=" + mysql_pass + ";database=" + mysql_daba; MySqlConnection connect; connect = new MySqlConnection(myConnectionString); string query = "SELECT * FROM Tags WHERE tagCode = @tagCode"; AutodetectArduinoPort(); try{ ArduPort.PortName = AutodetectArduinoPort(); ArduPort.Open(); ArduPort.Write("startmonitor"); } catch{ Console.WriteLine("comport did not connect."); } int delay; while (true){ string tagData = ArduPort.ReadLine(); Console.WriteLine(tagData); connect.Open(); MySqlCommand command = new MySqlCommand(query, connect); command.Parameters.AddWithValue("@tagCode", tagData); MySqlDataReader reader = command.ExecuteReader(); if (reader.Read()){ string url = reader.GetValue(3).ToString(); delay = Convert.ToInt32(reader.GetValue(4)); command.Dispose(); Process.Start(url); connect.Close(); Thread.Sleep(delay); } } ```<issue_comment>username_1: Because `glob` finds *all files matching pattern*, it must return an array. And fopen can only handle one at a time, so you need to feed them one at a time: ``` #variables $files = glob("*_filename.csv"); foreach($files as $file){ #process .csv into the database if(($handle = fopen($file, "r"))) } ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: `glob()` functions return an array of filenames or directories matching a specified pattern. So once you get an array you need to loop through each array. In that case you can then use `if(($handle = fopen("9_filename.csv", "r")))` where `9_filename.csv` will be one the array value. Upvotes: 0
2018/03/20
700
2,361
<issue_start>username_0: I have a index.js in a folder called "vuex" with the following ``` const module = { state, mutations, actions, getters } export default { module, plugin } ``` state, mutations, actions were imported from another file I'm trying to get the "state" property in another file so I ``` import module from './veux' ``` then ``` const { state } = module ``` however `state` is **undefined** which is weird because `console.log(module)` shows me that `module.state` is present I'm new to this ES6-7 flow so but what exactly am I doing wrong here?<issue_comment>username_1: Since you have exported the object `{module, plugin}` as `default export` after importing like ``` import module from './veux' ``` module will have structure like ``` module = { module: { state, mutations, actions, getters }, plugin } ``` so in order to access state, you will write `module.module.state` or ``` const {module: {state}} = module; // nested destructuring console.log(state) ``` an easier to understand and readable method would be to export your module with named export like ``` export const module = { state, mutations, actions, getters } export default plugin ``` and import it like ``` import plugin, { module } from './veux' ``` after which you can do ``` const { state } = module; ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: It'll work if you do this: ``` import { module } from './veux'; ``` Upvotes: -1 <issue_comment>username_3: > > I'm trying to get the "state" property in another file so I > > > > ``` > import module from './veux' > const { state } = module > > ``` > > however state is undefined which is weird because console.log(module) > shows me that module.state is present > > > No, you're importing the whole default-exported object (with its `module` and `plugin` properties) as `module`. The property would be `module.module.state`. > > I have a index.js in a folder called "vuex" with the following > > > > ``` > const module = { state, mutations, actions, getters } > export default { module, plugin } > > ``` > > Don't. Use named exports for exporting multiple things: ``` export const module = { state, mutations, actions, getters } export { plugin } ``` then you can do ``` import { module } from './veux' const { state } = module ``` Upvotes: 1
2018/03/20
720
2,305
<issue_start>username_0: ``` char td[] = {'1','0','0','1','1','1'}; char s_td[] = new char[td.length]; for(int i=0; i ``` For the given code snippet, I need `s_td[] = {0,0,0,1,0,1}`, i.e. simple XOR operation. Instead, I am getting `{ ,0, ,1, ,1}`. As you can see it is working correctly for even positions but giving blank space in odd positions. I tried casting it to **int** which gives `{0,48,0,49,0,49}` while using **Character.getNumericValue(s\_td[i])** gives `{-1,0,-1,1,-1,1}`. Thanks in advance!<issue_comment>username_1: Since you have exported the object `{module, plugin}` as `default export` after importing like ``` import module from './veux' ``` module will have structure like ``` module = { module: { state, mutations, actions, getters }, plugin } ``` so in order to access state, you will write `module.module.state` or ``` const {module: {state}} = module; // nested destructuring console.log(state) ``` an easier to understand and readable method would be to export your module with named export like ``` export const module = { state, mutations, actions, getters } export default plugin ``` and import it like ``` import plugin, { module } from './veux' ``` after which you can do ``` const { state } = module; ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: It'll work if you do this: ``` import { module } from './veux'; ``` Upvotes: -1 <issue_comment>username_3: > > I'm trying to get the "state" property in another file so I > > > > ``` > import module from './veux' > const { state } = module > > ``` > > however state is undefined which is weird because console.log(module) > shows me that module.state is present > > > No, you're importing the whole default-exported object (with its `module` and `plugin` properties) as `module`. The property would be `module.module.state`. > > I have a index.js in a folder called "vuex" with the following > > > > ``` > const module = { state, mutations, actions, getters } > export default { module, plugin } > > ``` > > Don't. Use named exports for exporting multiple things: ``` export const module = { state, mutations, actions, getters } export { plugin } ``` then you can do ``` import { module } from './veux' const { state } = module ``` Upvotes: 1
2018/03/20
233
827
<issue_start>username_0: How to get `scalesPageToFit` behaviour for HTML content displayed in a `WKWebView` *without using JavaScript*? (disabling JS is a requirement for in our case). Sorry for not posting any code, but I have no idea how I could achieve this. All solutions I know and all I found are based on JS. [Example](https://stackoverflow.com/a/26583062/1457385). I already add to the HTML if it does not exist yet, but at least images are often displayed way too big. All images are provided to `WKWebView` by a `WKURLSchemeHandler`.<issue_comment>username_1: Please put ``` ``` inside your tag. Upvotes: 3 <issue_comment>username_2: ``` extension String { mutating func fitContentToScreen() { self.add(prefix: "") } } ``` You add this header line to html content to fit screen. Upvotes: 0
2018/03/20
695
2,138
<issue_start>username_0: I'm trying to convert this query into code igniter query. But i don't know where to start. I'm new to code igniter.Please help. ``` public function get_loginsecuritydetails($security_date,$apt_id) { $sql = "SELECT s.* , IF( DATE( s.security_date ) = DATE( '$security_date' ) , 1, 0 ) AS loginstatus,a.agency_companyname FROM security s JOIN agency a ON(a.agency_id=s.security_cat) WHERE apartment_id='$apt_id'; "; $res=mysqli_query($sql) or die(mysqli_error()); return $res; } ```<issue_comment>username_1: You can convert it in codeigniter query builder as below: ``` $this->db->select("s.* , IF( DATE( s.security_date ) = DATE( '$security_date' ) , 1, 0 ) AS loginstatus,a.agency_companyname"); $this->db->join("agency as a","a.agency_id=s.security_cat"); $this->db->where('apartment_id',$apt_id); $query = $this->db->get('security as s'); $result = $query->result(); return $result; ``` Upvotes: 1 <issue_comment>username_2: ``` Try like this - $this->db->select("s.* , IF( DATE( s.security_date ) = DATE( '$security_date' ) , 1, 0 ) AS loginstatus,a.agency_companyname"); $this->db->join("agency as a","a.agency_id=s.security_cat"); $this->db->where('apartment_id',$apt_id); $querys = $this->db->get('security as s'); $result = $querys->result(); ``` Upvotes: 1 <issue_comment>username_3: You should start at reading the manual, <https://www.codeigniter.com/userguide3/database/index.html> Having said that the easiest solution is to simply run the query like: ``` $this->db->query('YOUR QUERY HERE'); ``` The part that says 'YOUR QUERY HERE' should be replaced with the actual query like: ``` $this->db->query("SELECT s.* , IF( DATE( s.security_date ) = DATE( '$security_date' ) , 1, 0 ) AS loginstatus,a.agency_companyname FROM security s JOIN agency a ON(a.agency_id=s.security_cat) WHERE apartment_id='$apt_id'"); ``` Now I'm pretty sure you'd want to clean this up and start using paramaters inside your query instead of PHP variables. So you should look into the query builder class, <https://www.codeigniter.com/userguide3/database/query_builder.html> Upvotes: 0
2018/03/20
303
1,098
<issue_start>username_0: This is my code: ``` interface a {} class b{} class c extends b implements a{} class d extends b{} class e{ public void makeItWork(){ b[] bees = new b[] {new c(), new d()}; for (b bee: bees){ if (bee instanceof a) { a beeA = (a) bee; //how to call the method test if object bee conforms the the interface? test(beeA.getClass(), beeA); //this goes wrong } } } public void test(Class classType, T concrete){ } } ``` Besides maybe the bad design, I would like to know if it is possible to call the method `test` on objects who implements the interface `a`.<issue_comment>username_1: your `test` method doesn't need a generic type parameter. You can define it as: ``` public void test(Class extends a classType, a concrete) { } ``` P.S. please use capitalized class names. Upvotes: 2 [selected_answer]<issue_comment>username_2: You can actually get away without using generics at all here: ``` public void test(a concrete) { } ``` Upvotes: 0
2018/03/20
295
982
<issue_start>username_0: Trying to load an .xlsx file from url but it gives error like, ``` Fatal error: Uncaught InvalidArgumentException: File "http://localhost/test/csvfile/samplesms.xlsx" does not exist. in D:\wamp\www\test\vendor\phpoffice\phpspreadsheet\src\PhpSpreadsheet\Shared\File.php on line 137 ``` Loading file with below, ``` $filename = "http://localhost/test/csvfile/samplesms.xlsx"; $reader = \PhpOffice\PhpSpreadsheet\IOFactory::createReader('Xlsx'); $reader->setReadDataOnly(TRUE); $spreadsheet = $reader->load($filename); ``` File is already at given location.<issue_comment>username_1: your `test` method doesn't need a generic type parameter. You can define it as: ``` public void test(Class extends a classType, a concrete) { } ``` P.S. please use capitalized class names. Upvotes: 2 [selected_answer]<issue_comment>username_2: You can actually get away without using generics at all here: ``` public void test(a concrete) { } ``` Upvotes: 0