d12501
train
You are looking for this: use the map operator, not filter:

    getStep$(step): Observable<number> {
      return of([1, 2, 3]).pipe(
        map((res: number[]) => res.filter((r) => step === r)[0])
      ) as Observable<number>;
    }

The error you get is because you forgot to add [] to the type in the filter operator, and still this is not what you tried to achieve anyway.

    getStep$(step): Observable<number> {
      return of([1, 2, 3]).pipe(
        // HERE I ADDED [] to number
        filter((res: number[]) => step === res),
        first()
      ) as Observable<number>;
    }
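Outside of RxJS, the same "emit the first value matching a predicate" behavior is just Array.prototype.find. This is a minimal plain-JS sketch of what the pipeline above computes; getStep is a hypothetical stand-in for the getStep$ method:

```javascript
// Plain-JS equivalent of filtering the emitted numbers and taking the first match.
function getStep(step) {
  const steps = [1, 2, 3]; // same source data as of([1, 2, 3])
  // find() returns the first element satisfying the predicate, or undefined.
  return steps.find((r) => r === step);
}
```

Unlike the filter + first combination, find never throws when nothing matches; it simply yields undefined.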
unknown
d12502
train
This line:

    const grade = [70, 90, 50] / 3;

returns NaN, so the conditions in your switch...case will not work as you expected. I think you want to calculate the average of the 3 subjects first. You can find the sum of the 3 subjects using Array.prototype.reduce() and then use the switch...case syntax properly. See more about switch-case.

So try it this way:

    const grade = [70, 90, 50].reduce((a, b) => a + b, 0) / 3;
    switch (true) { // see this line
      case grade < 60:
        console.log("F");
        break;
      case grade >= 60 && grade < 70:
        console.log("D");
        break;
      case grade >= 70 && grade < 80:
        console.log("C");
        break;
      case grade >= 80 && grade < 90:
        console.log("B");
        break;
      case grade >= 100:
        console.log("Congrats on your A!");
    }

OR, with less typing, an if...else if version without using break:

    const grade = [70, 90, 50].reduce((a, b) => a + b, 0) / 3;
    if (grade < 60) console.log("F");
    else if (grade >= 60 && grade < 70) console.log("D");
    else if (grade >= 70 && grade < 80) console.log("C");
    else if (grade >= 80 && grade < 90) console.log("B");
    else if (grade >= 100) console.log("Congrats on your A!");

A: To get the average, you need to first add all the values in the array together before dividing them by the number of grades there are:

    const average = grades.reduce((acc, val) => acc + val, 0) / grades.length;

After that, you should use if/else-if blocks instead of switch to compare the values:

    const grades = [70, 90, 50];
    const average = grades.reduce((acc, val) => acc + val, 0) / grades.length;
    console.log(average);

    if (average < 60) console.log("F");
    else if (average < 70) console.log("D");
    else if (average < 80) console.log("C");
    else if (average < 90) console.log("B");
    else console.log("Congrats on your A!");

You don't need to check that the prior condition doesn't apply (average >= 60 && and so on), since this uses else if; this also means that you don't need to specify a condition for the A grade, since it runs when all other conditions don't apply.
A: If you put expressions in the case labels, then the switch value needs to be true or false (usually true). That said, while you can put expressions in the case labels (in JavaScript, unlike many other languages), if...else if...else is more common.

Additionally, this line:

    const grade = [70, 90, 50] / 3;

...sets grade to NaN, because the / coerces the array to a number, which goes by way of converting it to a string first ("70,90,50"), and then "70,90,50" can't be implicitly converted to a number, so the result is NaN. NaN / 3 is NaN because any math operation with NaN results in NaN.

Finally, the last grade >= 100 condition looks dodgy. It means you aren't handling 90...99. You probably just want default (or else) there.

I assume you wanted to get the average grade instead, which you can do with reduce (or a simple loop):

    const grade = [70, 90, 50].reduce((a, b) => a + b) / 3;

So doing that and using switch (true):

    const grade = [70, 90, 50].reduce((a, b) => a + b) / 3;
    switch (true) {
      case grade < 60:
        console.log("F");
        break;
      case grade >= 60 && grade < 70:
        console.log("D");
        break;
      case grade >= 70 && grade < 80:
        console.log("C");
        break;
      case grade >= 80 && grade < 90:
        console.log("B");
        break;
      default:
        console.log("Congrats on your A!");
    }

You don't need those grade >= 60 && and such, since each case label is tested in source code order and the first one wins.
    const grade = [70, 90, 50].reduce((a, b) => a + b) / 3;
    switch (true) {
      case grade < 60:
        console.log("F");
        break;
      case grade < 70:
        console.log("D");
        break;
      case grade < 80:
        console.log("C");
        break;
      case grade < 90:
        console.log("B");
        break;
      default: // Made this default instead
        console.log("Congrats on your A!");
    }

Or the if...else if...else version:

    const grade = [70, 90, 50].reduce((a, b) => a + b) / 3;
    if (grade < 60) {
      console.log("F");
    } else if (grade < 70) {
      console.log("D");
    } else if (grade < 80) {
      console.log("C");
    } else if (grade < 90) {
      console.log("B");
    } else {
      console.log("Congrats on your A!");
    }
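The averaging-plus-branching logic from the answers above can be collected into one small standalone function, using the same grade boundaries:

```javascript
// Average an array of scores and map the result to a letter grade.
function gradeFor(scores) {
  const average = scores.reduce((acc, val) => acc + val, 0) / scores.length;
  if (average < 60) return "F";
  if (average < 70) return "D";
  if (average < 80) return "C";
  if (average < 90) return "B";
  return "A";
}
```

Because each early return already excludes the previous ranges, no `average >= 60 &&`-style guards are needed.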
unknown
d12503
train
I have taken the liberty to compute edge lengths in a simpler way:

    from scipy.spatial.distance import euclidean

    lengths = {}
    inv_lengths = {}
    for edge in G.edges():
        startnode = edge[0]
        endnode = edge[1]
        d = euclidean(pos[startnode], pos[endnode])
        lengths[edge] = d
        inv_lengths[edge] = 1/d

And this is how I implemented the matrix equation:

    E = np.array([[0], [60], [40]], dtype=np.float64)
    L1 = lengths[(1, 2)]
    L2 = lengths[(2, 3)]
    L3 = lengths[(1, 3)]
    L = np.array([[1/L1 + 1/L3, -1/L1,        -1/L3       ],
                  [-1/L1,        1/L1 + 1/L2, -1/L2       ],
                  [-1/L3,       -1/L2,         1/L2 + 1/L3]], dtype=np.float64)
    qLE = np.dot(L, E)

The code above yields (approximately) the same result as yours:

    In [55]: np.set_printoptions(precision=2)

    In [56]: qLE
    Out[56]:
    array([[-24.64],
           [ 19.97],
           [  4.67]])

In summary, I think this is not a programming issue. Perhaps you should revise the flow model...
unknown
d12504
train
One option is conditional aggregation with analytic functions:

    SELECT
        country,
        CASE WHEN SUM(CASE WHEN MAKE = 'PQR' THEN 1 ELSE 0 END)
                  OVER (PARTITION BY country) > 0
             THEN 'PQR'
             ELSE 'OTHERS'
        END AS MAKE
    FROM yourTable
    ORDER BY country;

Demo
unknown
d12505
train
You are making two mistakes. You are setting the BackColor to DarkGray before the if statement, so it will always give the same result, and you are comparing DarkGray to DarkGray instead of the form's BackColor to DarkGray.

So, get rid of this line:

    this.BackColor = System.Drawing.Color.DarkGray;

And change this:

    if (System.Drawing.Color.DarkGray == System.Drawing.Color.DarkGray)

to this:

    if (this.BackColor == System.Drawing.Color.DarkGray)

This is the whole click event:

    public void picDarkMode_MouseClick(object sender, MouseEventArgs e)
    {
        if (this.BackColor == System.Drawing.Color.DarkGray)
        {
            this.BackColor = System.Drawing.Color.LightYellow;
        }
        else
        {
            this.BackColor = System.Drawing.Color.DarkGray;
        }
    }
unknown
d12506
train
After doing some research I have come up with the following:

    var subjects = [];
    var subjectTemplate = { GUID: "" };
    for (var x = 0; x < 5; x++) {
        var subject = Object.create(subjectTemplate);
        subject.GUID = <generate GUID>;
        subjects[x] = subject;
    }

A: With JavaScript you do not need to declare or initialize the fields when you first initialize the object. Replace

    var clsSubject = subjectTemplate;

with

    var clsSubject = {};

With your current implementation you set every clsSubject to the same object instance, namely subjectTemplate.

A: You can just change the declaration of clsSubject to create a new object. You can actually just fill in the properties directly if you want. Also, you didn't push to the array correctly.

    var clsSubject = {
        GUID: id.generateRandomNumber(),
        Title: "Intorduction to js",
        Description: "Subject to learn js"
    };

    subject.push(clsSubject);
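The point both answers make, that each loop iteration must create a new object rather than reuse one shared instance, can be sketched like this. The generateGuid helper is a hypothetical placeholder built on Math.random, not a real GUID generator:

```javascript
// Hypothetical GUID-ish generator for illustration only; not a real GUID.
function generateGuid() {
  return Math.random().toString(36).slice(2);
}

const subjects = [];
for (let x = 0; x < 5; x++) {
  // A fresh object literal on every pass, so each element is a distinct instance.
  subjects.push({ GUID: generateGuid() });
}
```

Had the object literal been hoisted out of the loop, every array slot would point at the very same object, and writing to one GUID would overwrite them all.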
unknown
d12507
train
Use this plugin to highlight code snippets https://wordpress.org/plugins/crayon-syntax-highlighter/
unknown
d12508
train
Your if statement is missing its brackets. In JavaScript, all if conditions are expected to be wrapped in standard brackets. Without these you will get a syntax error. The standard is:

    if ( /* your comparison here, e.g. 1 == 1 */ ) {
    } else {
    }

Try changing:

    if isNaN(valid_amt) == false {

to:

    if (isNaN(valid_amt) == false) {

Same for:

    if(valid_amt % 1 != 0) && (valid_amt>0){

to:

    if((valid_amt % 1 != 0) && (valid_amt>0)){

A: Try to replace this line:

    if isNaN(valid_amt) == false {

with:

    if (!isNaN(valid_amt) || valid_amt != undefined) {

A: First, you have to send the value of valid_amt to the isNaN() function:

    if (isNaN(valid_amt.value) == false)

and also, in order to avoid calling the server function when the validation is invalid, add return false; to the else statement:

    <script language="javascript" type="text/javascript">
    function validatecontrol() {
        var valid_amt = document.getElementById("txtTotamt");
        if (isNaN(valid_amt.value) == false) {
            if ((valid_amt % 1 != 0) && (valid_amt > 0)) {
                return true;
            } else {
                document.getElementById("lblerror").innerHTML = "Error";
                return false;
            }
        } else {
            document.getElementById("lblerror").innerHTML = "Error";
            return false;
        }
    }
    </script>

A: Please modify your script as:

    <script type="text/javascript">
    function validatecontrol() {
        var valid_amt = document.getElementById("txtTotamt");
        if (isNaN(valid_amt) == false) {
            if ((valid_amt % 1 != 0) && (valid_amt > 0)) {
                return true;
            } else {
                document.getElementById("lblerror").innerHTML = "Error";
                return false;
            }
        } else {
            document.getElementById("lblerror").innerHTML = "Error";
            return false;
        }
    }
    </script>

A: Two corrections in the aspx code:

* OnClientClick="Javascript:return validatecontrol();"
* OnClick="Validate_Click"

    <asp:ImageButton ID="Validate" runat="server" OnClientClick="Javascript:return validatecontrol();" OnClick="Validate_Click" />

and one correction in the JavaScript function: use return false; in the else part. If the value is not appropriate, that will stop further processing.
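Extracted into a pure function, the properly parenthesized checks look like this. It mirrors the same conditions the answers above use (numeric, positive, and with a fractional part), without touching the DOM:

```javascript
// Returns true only for a positive, non-whole numeric amount,
// mirroring the (valid_amt % 1 != 0) && (valid_amt > 0) check above.
function isValidAmount(value) {
  if (isNaN(value)) {
    return false;
  }
  // The % and > operators coerce numeric strings to numbers here.
  return (value % 1 !== 0) && (value > 0);
}
```

Keeping the validation in a pure function like this makes it easy to unit-test separately from the page's error-label logic.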
unknown
d12509
train
You have a few options:

* Use eval as a quick & dirty solution:

    from sklearn.linear_model import LinearRegression

    model_str = 'LinearRegression()'
    model = eval(model_str)
    print(model)

Note: using eval is not safe, because it will execute any string, so if you have no control over the string variable being executed, malicious code may be executed. Also, you need to import exactly the same class before using eval. Without the first line the code will not work.

* Use pickle.dumps:

    import pickle
    from sklearn.linear_model import LinearRegression

    model_class_serialized = pickle.dumps(LinearRegression)
    print(model_class_serialized)

    model_class = pickle.loads(model_class_serialized)
    print(model_class)

In that case you do not need to explicitly import classes from sklearn. An example of saving models to YAML:

    import pickle
    import yaml
    from sklearn.linear_model import LinearRegression
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.svm import LinearSVC

    models = [LinearRegression, RandomForestClassifier, LinearSVC]
    with open('models.yml', 'w') as out_file:
        yaml.dump({m.__name__: pickle.dumps(m) for m in models}, out_file)

After that you can read the YAML file like this:

    import pickle
    import yaml

    with open('models.yml', 'r') as in_file:
        models = yaml.load(in_file, Loader=yaml.FullLoader)

    models = {name: pickle.loads(serialized_cls) for name, serialized_cls in models.items()}
    print(models)

Which produces:

    {'LinearRegression': <class 'sklearn.linear_model._base.LinearRegression'>,
     'LinearSVC': <class 'sklearn.svm._classes.LinearSVC'>,
     'RandomForestClassifier': <class 'sklearn.ensemble._forest.RandomForestClassifier'>}

so you can easily instantiate these models:

    model = models['RandomForestClassifier']()
    print(model)
    # model.fit(...)
unknown
d12510
train
Try this in your config file. You can see the query and other details at the bottom of the page.

    'db' => array(
        'enableProfiling' => true,
        'enableParamLogging' => true,
    ),
    'log' => array(
        'class' => 'CLogRouter',
        'routes' => array(
            …
            array(
                'class' => 'CProfileLogRoute',
                'levels' => 'profile',
                'enabled' => true,
            ),
        ),
    ),
unknown
d12511
train
It looks like your dates are your indices, in which case you would want to merge on the index, not a column. If you have two dataframes, df_1 and df_2:

    df_1.merge(df_2, left_index=True, right_index=True, how='inner')

A: You can add the parameters left_index=True and right_index=True if you need to merge by indexes in the merge function:

    merge = pd.merge(df, d, how='inner', left_index=True, right_index=True)

Sample (first value of index in d was changed for matching):

    print df
               catcode_amt type feccandid_amt  amount
    date
    1915-12-31       A5000  24K     H6TX08100    1000
    1916-12-31       T6100  24K     H8CA52052     500
    1954-12-31       H3100  24K     S8AK00090    1000
    1985-12-31       J7120  24E     H8OH18088      36
    1997-12-31       z9600  24K     S6ND00058    2000

    print d
               catcode_disp disposition             feccandid_disp bills
    date
    1997-12-31        A0000     support                  S4HI00011   1.0
    2007-12-31        A1000      oppose  S4IA00020', 'P20000741 1     NaN
    2007-12-31        A1000     support                  S8MT00010   1.0
    2007-12-31        A1500     support                  S6WI00061   2.0
    2007-12-31        A1600     support  S4IA00020', 'P20000741 3     NaN

    merge = pd.merge(df, d, how='inner', left_index=True, right_index=True)
    print merge
               catcode_amt type feccandid_amt  amount catcode_disp disposition  \
    date
    1997-12-31       z9600  24K     S6ND00058    2000        A0000     support

               feccandid_disp bills
    date
    1997-12-31      S4HI00011   1.0

Or you can use concat:

    print pd.concat([df, d], join='inner', axis=1)
               catcode_amt type feccandid_amt  amount catcode_disp disposition  \
    date
    1997-12-31       z9600  24K     S6ND00058    2000        A0000     support

               feccandid_disp bills
    date
    1997-12-31      S4HI00011   1.0

EDIT: EdChum is right. I added duplicates to DataFrame df (last 2 values in index):

    print df
               catcode_amt type feccandid_amt  amount
    date
    1915-12-31       A5000  24K     H6TX08100    1000
    1916-12-31       T6100  24K     H8CA52052     500
    1954-12-31       H3100  24K     S8AK00090    1000
    2007-12-31       J7120  24E     H8OH18088      36
    2007-12-31       z9600  24K     S6ND00058    2000

    print d
               catcode_disp disposition             feccandid_disp bills
    date
    1997-12-31        A0000     support                  S4HI00011   1.0
    2007-12-31        A1000      oppose  S4IA00020', 'P20000741 1     NaN
    2007-12-31        A1000     support                  S8MT00010   1.0
    2007-12-31        A1500     support                  S6WI00061   2.0
    2007-12-31        A1600     support  S4IA00020', 'P20000741 3     NaN
    merge = pd.merge(df, d, how='inner', left_index=True, right_index=True)
    print merge
               catcode_amt type feccandid_amt  amount catcode_disp disposition  \
    date
    2007-12-31       J7120  24E     H8OH18088      36        A1000      oppose
    2007-12-31       J7120  24E     H8OH18088      36        A1000     support
    2007-12-31       J7120  24E     H8OH18088      36        A1500     support
    2007-12-31       J7120  24E     H8OH18088      36        A1600     support
    2007-12-31       z9600  24K     S6ND00058    2000        A1000      oppose
    2007-12-31       z9600  24K     S6ND00058    2000        A1000     support
    2007-12-31       z9600  24K     S6ND00058    2000        A1500     support
    2007-12-31       z9600  24K     S6ND00058    2000        A1600     support

                          feccandid_disp bills
    date
    2007-12-31  S4IA00020', 'P20000741 1   NaN
    2007-12-31                 S8MT00010   1.0
    2007-12-31                 S6WI00061   2.0
    2007-12-31  S4IA00020', 'P20000741 3   NaN
    2007-12-31  S4IA00020', 'P20000741 1   NaN
    2007-12-31                 S8MT00010   1.0
    2007-12-31                 S6WI00061   2.0
    2007-12-31  S4IA00020', 'P20000741 3   NaN

A: I ran into similar problems. You most likely have a lot of NaTs. I removed all my NaTs, then performed the join, and was able to join it.

    df = df[df['date'].notnull() == True].set_index('date')
    d = d[d['date'].notnull() == True].set_index('date')
    df.join(d, how='right')
unknown
d12512
train
You should remove your view. Add this code; [self.view removeFromSuperview];
unknown
d12513
train
* Display your ad image on page load and ask the user to click to play the video.
* Load your video with a proper player plugin.
* Start playing the video.
* Continuously check the video duration using the player API.
* At a specific point (e.g. the 15th second), display an overlay div on top of your video.
* Done.

Also, if you're not that good with JavaScript, it's probably better to start with something less complicated.
unknown
d12514
train
This answer uses OCR to find the X,Y coordinates of the text 'Saimon'. Download Tessnet2, a .NET 2.0 open-source OCR assembly using the Tesseract engine. You can implement code similar to this:

    using System;

    namespace OCRTest
    {
        using System.Drawing;
        using tessnet2;

        class Program
        {
            static void Main(string[] args)
            {
                try
                {
                    var image = new Bitmap(@"C:\OCRTest\saimon.jpg");
                    var ocr = new Tesseract();
                    // If digits only
                    ocr.SetVariable("tessedit_char_whitelist",
                        "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz.,$-/#&=()\"':?");
                    // @"C:\OCRTest\tessdata" contains the language package;
                    // without this, the method crashes and the app breaks
                    ocr.Init(@"C:\OCRTest\tessdata", "eng", true);
                    var result = ocr.DoOCR(image, Rectangle.Empty);
                    foreach (Word word in result)
                    {
                        if (word.Text.Contains("aimon"))
                        {
                            Console.WriteLine("" + word.Confidence + " " + word.Text + " "
                                + word.Top + " " + word.Bottom + " " + word.Left + " " + word.Right);
                        }
                    }
                    Console.ReadLine();
                }
                catch (Exception exception)
                {
                }
            }
        }
    }

You should be able to use these coordinates to automate your mouse clicks. To test how another OCR engine handles your screenshot, try it with an online OCR service and check the results. OCR is so good these days!
unknown
d12515
train
You may try jquery-clean:

    $.htmlClean($myContent);

A: Is there a way to either validate the user's input, automatically close tags, or somehow wrap the user input in an element to stop it leaking over?

Yes: when the user is done editing the textarea, you can parse what they've written using the browser, then get an HTML version of the parsed result from the browser:

    var div = $("<div>");
    div.html($("#the-textarea").val());
    var html = div.html();

Live example (type an unclosed tag in and click the button):

    $("input[type=button]").on("click", function() {
        var div = $("<div>");
        div.html($("#the-textarea").val());
        var html = div.html();
        $(document.body).append("<p>You wrote:</p><hr>" + html + "<hr>End of what you wrote.");
    });

    <p>Type something unclosed here:</p>
    <textarea id="the-textarea" rows="5" cols="40"></textarea>
    <br><input type="button" value="Click when ready">
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>

Important Note: If you're going to store what they write and then display it to anyone else, there is no client-side solution, including the above, which is safe. Instead, you must use a server-side solution to "sanitize" the HTML you get from them, to remove (for instance) malicious content, etc. All the above does is help you get mostly well-formed markup, not safe markup. Even if you're just displaying it back to them, it would still be best to sanitize it, since they can work around any client-side pre-processing you do.

A: You could try to use http://ejohn.org/blog/pure-javascript-html-parser/ . But if the user is entering the HTML by hand, you could just check that all tags are closed properly. If not, just display an error message to the user.
A: You can create a jQuery element using the text and then get its HTML, like so.

Sample:

    <textarea>
    <div>
        <div>
            <span>some content</span>
            <span>some content
    </div>
    </textarea>

Script:

    alert($($('textarea').text()).html());

    <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
    <textarea>
    <div>
        <div>
            <span>some content</span>
            <span>some content
    </div>
    </textarea>

A: The simple way to check whether entered HTML is actually valid and parseable by the browser is to let the browser try it out itself using DOMParser. Then you can check whether the result is ok or not:

    function checkHTML(html) {
        var dom = new DOMParser().parseFromString(html, "text/xml");
        return dom.documentElement.childNodes[0].nodeName !== 'parsererror';
    }

    $('button').click(function() {
        var html = $('textarea').val();
        var isValid = checkHTML(html);
        console.log(isValid);
        $('div').html(isValid ? html : 'HTML is not valid!');
    });

    <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
    <textarea cols="80" rows="7">&lt;div&gt;Some HTML</textarea>
    <button style="vertical-align:top">Check</button>
    <div></div>
unknown
d12516
train
I have a workaround that you could use to solve the issue. Create two classes, TestClass and AfterClass. In AfterClass, you will have to keep one test method so that the @After hook gets triggered. Now, in the test execution step of the declarative pipeline, you can execute mvn clean test -PTest, and in the post-always step of the declarative pipeline script you can execute the Maven command mvn clean test -PCleanup.

TestClass:

    import org.junit.Test;

    public class TestClass {
        @Test
        public void testClass() {
            System.out.println("Test");
        }
    }

AfterClass:

    @Test
    public void beforeCleanUp() {
        System.out.println("Before Cleanup");
    }

    @After
    public void after() {
        System.out.println("After");
    }

POM.xml:

    <project xmlns="http://maven.apache.org/POM/4.0.0"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
        <modelVersion>4.0.0</modelVersion>
        <groupId>com.test</groupId>
        <artifactId>stackovrflw-62492859</artifactId>
        <version>0.0.1-SNAPSHOT</version>
        <dependencies>
            <dependency>
                <groupId>junit</groupId>
                <artifactId>junit</artifactId>
                <version>4.12</version>
                <scope>test</scope>
            </dependency>
        </dependencies>
        <profiles>
            <profile>
                <id>Test</id>
                <build>
                    <plugins>
                        <plugin>
                            <groupId>org.apache.maven.plugins</groupId>
                            <artifactId>maven-surefire-plugin</artifactId>
                            <version>3.0.0-M5</version>
                            <configuration>
                                <test>com.test.junit.classes.TestClass</test>
                            </configuration>
                        </plugin>
                    </plugins>
                </build>
            </profile>
            <profile>
                <id>Cleanup</id>
                <build>
                    <plugins>
                        <plugin>
                            <groupId>org.apache.maven.plugins</groupId>
                            <artifactId>maven-surefire-plugin</artifactId>
                            <version>3.0.0-M5</version>
                            <configuration>
                                <test>com.test.junit.classes.AfterClass</test>
                            </configuration>
                        </plugin>
                    </plugins>
                </build>
            </profile>
        </profiles>
    </project>

Note: This is not an actual solution, as I do not have much experience with JUnit, but you could use this workaround if you do not find a proper solution. I will also keep trying to find a proper solution for you.
unknown
d12517
train
Unfortunately, Roslyn does not expose a way to do that at the moment, but I agree that it is something we will probably need eventually.

A: The library Microsoft.Build.Evaluation does have this feature, but it is not easy to find. I use the code below to obtain the default namespace. My tests have shown that it matches the RootNamespace stored in the .csproj file.

    private string GetDefaultNamespace(Microsoft.Build.Evaluation.Project p)
    {
        string rtnVal = "UNKNOWN_NAMESPACE";
        foreach (ProjectItemDefinition def in p.ItemDefinitions.Values)
        {
            if (def.ItemType == "ProjectReference")
            {
                foreach (ProjectProperty prop in def.Project.AllEvaluatedProperties)
                {
                    if (prop.Name == "RootNamespace")
                    {
                        rtnVal = prop.EvaluatedValue;
                    }
                }
            }
        }
        return rtnVal;
    }
unknown
d12518
train
It looks to me like they forgot to actually attach that RetainFragment to the Activity, so the FragmentManager has a chance to find it. Fragments that are not attached don't survive configuration changes. Not sure, but it should work with the addition below:

    public static RetainFragment findOrCreateRetainFragment(FragmentManager fm) {
        RetainFragment fragment = (RetainFragment) fm.findFragmentByTag(TAG);
        if (fragment == null) {
            fragment = new RetainFragment();
            // add this line
            fm.beginTransaction().add(fragment, TAG).commit();
        }
        return fragment;
    }
unknown
d12519
train
First, let's take out all the Feb 29ths, because these days are in the middle of the data, do not appear in every year, and will disturb the averaging:

    Feb29 = 60 + 365*[1:4:32];
    mean_Feb29 = mean(GPH(:,:,Feb29), 3); % a 95x38 matrix with the mean of all February 29ths
    GPH(:,:,Feb29) = [];                  % omit Feb 29th from the data
    Last_Jan_1 = GPH(:,:,end);            % for Jan 1st you have an additional data set, of year 2011
    GPH(:,:,end) = [];                    % omit Jan 1st 2011
    re_GPH = reshape(GPH, 95, 38, 365, []);
    av_GPH = mean(re_GPH, 4);

Now av_GPH is a matrix of 95x38x365, where each slice in the 3rd dimension is an average of a day in the year, starting Jan 1st, etc.

If you want to include the last Jan 1st (Jan 1st 2011), run this line after the previous code:

    av_GPH(:,:,1) = mean(cat(3, av_GPH(:,:,1), Last_Jan_1), 3);

For the ease of knowing which slice number corresponds to each date, you can make an array of all the dates in the year:

    t1 = datetime(2011,1,1,'format','MMMMd');
    t2 = datetime(2011,12,31,'format','MMMMd');
    t3 = t1:t2;

Now, for example, t3(156) = June5, so av_GPH(:,:,156) is the average of June 5th.

For your comment, if you want to subtract each day from its average:

    sub_GPH = GPH - repmat(av_GPH, 1, 1, 32);

And for February 29th, you will need to do that BEFORE you erase those days from the data (line 3 up there):

    sub_GPH_Feb_29 = GPH(:,:,Feb29) - repmat(mean_Feb29, 1, 1, 8);
unknown
d12520
train
The background for a RoundedRect button is the rectangle in which it is located, not the button itself. Try changing the background color to red or something equally visible and you will see that the background is only visible between the rounded corners of the button and the rectangular frame in which the button is set. Sadly, you cannot change the color of the button this way, which is what most people think they are doing when they change the backgroundColor property. To change the button color, you will need to use UIButtonTypeCustom.

A: Try setting your UIColor on the layer:

    myButton.layer.backgroundColor = [UIColor x..]

You'll need a

    #import <QuartzCore/QuartzCore.h>

I'm pretty sure the colour in the rounded rect button is in the layer, not in the regular background; this is why people struggle to change its colour. Alternatively, put the alpha on the view's colour back to 1 and set the view's alpha property; that'll certainly do the lot in one move.

A: PengOne is right: setting the background color doesn't change the color of the rounded rect. It only affects the corners around the rect. But you can also just set the button's alpha property to something less than one. Try 0.7 to make the view semi-transparent.

A: Try using UIButtonTypeCustom instead of UIButtonTypeRoundedRect as the button type.

A: UIButtonTypeRoundedRect is weird. Setting background/alpha on it only changes the background of the rectangle into which the button is rendered. You could change the button type to UIButtonTypeCustom and then use QuartzCore to set cornerRadius to make it appear rounded.
unknown
d12521
train
It will have the same IP address as the computer you're running it on.

A: Yep, like Todd said, the same as your machine's IP. You can also simply visit http://www.whatismyip.com with mobile Safari or your Mac's web browser ;-)

A: I think visiting the website http://www.test-ipv6.com/ is also a good choice, as the site tells you both the IPv4 and IPv6 global-unicast addresses.
unknown
d12522
train
I think I understand your question. You could structure your ViewModels like this:

    interface ICommandViewModel : ICommand
    {
        string Name { get; }
    }

    interface INodeViewModel
    {
        IEnumerable<ICommandViewModel> CommandList { get; }
    }

    public class NodeViewModel : INodeViewModel
    {
        public NodeViewModel()
        {
            // Init commandList
            // Populate commandList here (you could also do lazy loading)
        }

        public NodeViewModel(IEnumerable<ICommandViewModel> commands)
        {
            CommandList = commands;
        }

        public IEnumerable<ICommandViewModel> CommandList { get; private set; }
    }

and then in XAML:

    <TreeViewItem>
        <TreeViewItem.ContextMenu>
            <ContextMenu ItemsSource="{Binding CommandList}">
                <ContextMenu.ItemTemplate>
                    <DataTemplate>
                        <MenuItem Header="{Binding Name}" Command="{Binding}"/>
                    </DataTemplate>
                </ContextMenu.ItemTemplate>
            </ContextMenu>
        </TreeViewItem.ContextMenu>
    </TreeViewItem>

I don't have much experience with hierarchical data templates, but you get the gist from the above. You could also do the above with a style, but I don't have a XAML editor in front of me to give you the right syntax. Hope that helps.
unknown
d12523
train
If DbUtils creates the connection in the same thread, like:

    public static Connection getConnection() throws SQLException {
        return DriverManager.getConnection(url, username, password);
    }

then it's threadsafe. But if the connection is a class variable, like:

    private static Connection connection = DriverManager.getConnection(url, username, password);

    public static Connection getConnection() throws SQLException {
        return connection;
    }

then it is definitely not threadsafe, because the same connection will be shared among all threads. Also, when it's closed in one thread, all subsequent threads won't be able to use the connection because it's not open anymore. And when it's never closed, the DB will time out the connection sooner or later, usually after a few hours, and your application won't work anymore because the connection is not open anymore.

As to the servlet,

    public abstract class DatabaseInvokerServlet extends HttpServlet {
        private AbstractUser currentUser;
        private HttpServletRequest request;
        private HttpServletResponse response;
        // ...
    }

it's definitely not threadsafe. You're assigning the current user, request and response as instance variables. Of each servlet class, there is only one instance during the application's lifetime. This instance is shared among all visitors/sessions throughout the entire application's lifetime. Each HTTP request operates in a separate thread and uses the same instance.

Imagine two simultaneous visitors: visitor A sets the current user, request and response, but the DB process takes a long time. Before the response for visitor A has returned, visitor B calls the same servlet, and the current user, request and response are overridden. Then the query of visitor A finishes and wants to write to the response, and it is instead writing to the response of visitor B! Visitor B sees the result of visitor A's query, and visitor A sees nothing on his screen!

You should never assign request/session-specific data as instance variables of the servlet. You should keep them method (thread) local:

    public abstract class DatabaseInvokerServlet extends HttpServlet {
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            AbstractUser currentUser = (AbstractUser) request.getSession().getAttribute("user");
            // Keep the variables in the method block!
            // Do not assign them as instance variables!
        }
    }

As to the complete picture, this approach is clumsy. The database access layer should have nothing to do with servlets. It should operate in its own standalone classes which you can just construct/invoke from every other Java class: any servlet class, a normal application with main(), or whatever. You should not have a single line of java.sql.* imports in your servlet classes (except maybe SQLException if it is not abstracted away). You should not have a single line of javax.servlet.* imports in your database classes.

See also:

* Servlet instantiation and (session) variables
* Basic DAO tutorial

A: If the utility class has state (example: class or instance variables), most probably yes.

A: If I guess right, DBUtils is returning a new instance for each call of getConnection(). And as the DBUtils class is a utility class, it shouldn't be maintaining any state. In this scenario, no, you don't need any additional effort for synchronization.

A: Servlets run in several threads. The J2EE spec says there is only one instance per servlet class running in one web container, for a non-single-thread servlet. From the Servlet 2.3 spec:

    A servlet container may send concurrent requests through the service method of
    the servlet. To handle the requests, the developer of the servlet must make
    adequate provisions for concurrent processing with multiple threads in the
    service method.

Synchronisation in servlets: never have a member variable in a servlet; it is not thread safe.
unknown
d12524
train
[A-z] is not the character class you want: in ASCII it also matches the characters that sit between Z and a ([, \, ], ^, _ and the backtick). You certainly do not need \\ when using @. Try this:

    @"^[a-zA-Z]{2}\d{6}$"

If you need the format to have 4 numerals followed by a . and then two more numerals, try this:

    @"^[a-zA-Z]{2}\d{4}\.\d{2}$"

(Note that for .NET, \d will match numerals in any script, so you may want to replace it with [0-9] if you want to only match those.)

A: You have way too many escape characters. Try:

    string sPattern = @"^[a-zA-Z]{2}\d{6}$";

A: A-z isn't what you want (mixed case), and you don't have 6 consecutive digits. You have 4, a decimal point, and then 2 more. Try:

    ^[a-zA-Z]{2}\d{4}.\d{2}$

A: It fails because the value you are testing has a decimal in it and your regex pattern does not. Plus, your regex pattern is going to look at the entire string: ^ says start at the beginning of the string and $ says match the end of the string. If you only want a "starts with", then drop the $ at the end of the pattern.
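The patterns above behave the same way in JavaScript regexes (minus the .NET-specific @"..." verbatim-string syntax), which makes for a quick sanity check of the fix; "AB1234.56" is a hypothetical value in the decimal format:

```javascript
// Two letters followed by six digits (the original pattern, without escapes).
const sixDigits = /^[a-zA-Z]{2}\d{6}$/;
// Two letters, four digits, a literal dot, then two digits (the fixed pattern).
const withDecimal = /^[a-zA-Z]{2}\d{4}\.\d{2}$/;

const sample = "AB1234.56"; // hypothetical test value with the decimal point
```

The six-digit pattern rejects the sample because the dot is not a digit, while the escaped-dot pattern accepts it.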
unknown
d12525
train
I posted this same question in the firebase quickstart iOS repository, and I got the following response: "DecodeWav op is never supported by TensorFlowLite." So at present TensorFlow Lite does not support audio processing, even though TensorFlow itself does.
unknown
d12526
train
You need to specify the mapping of your query variables. Also your syntax is assuming string substitution rather than query variables. Try: const LOAD_PRODUCTS = gql` query myQuery ( $category: String! ) { category(input: { title: $category }){ name products { … } } } `;
unknown
d12527
train
Set your combo box to have two columns, hide the second column but bind to it. To do this, set the following properties: * *Column Count = 2 *Column Widths = 2cm; 0cm *Bound Column = 2 *Row Source Type = Value List *Row Source = Home; Is Null; Away; Is Not Null Now your combo box shows Home / Away to the user, but returns Is Null / Is Not Null to the query.
unknown
d12528
train
After no answers here and several days of searching and trial and error, I have found the issue. In general, you can get this reshape error if you are feeding the model an image size other than the one it is expecting or set up to receive. The issue is that everything I have read says that typically you must feed the model a 227 x 227 x 3 image. Then I started noticing that the size varies in some posts. Some people say 225 x 225 x 3, others say 250 x 250 x 3 and so on. I had tried those sizes as well with no luck. As you can see in my edit in the question, I did have a clue. When using somebody else's pretrained model, my code works fine. However, when I use my custom model, which I created on the Microsoft Azure CustomVision.ai site, I was getting this error. So, I decided I would try to inspect the models to see what was different. I followed this post: Inspect a pre-trained model. When I inspected the model that works using TensorBoard, I saw that the input is 227 x 227 x 3, which is what I expected. However, when I viewed my model, I noticed that it was 224 x 224 x 3! I changed my code to resize the image to that size and it works! The problem went away. So, to summarize, for some reason the Microsoft Custom Vision service generated a model that expects an image size of 224 x 224 x 3. I didn't see any documentation or setting for this. I also don't know if that number will change with each model. If you get a similar shape error, the first place I would check is the size of the image you are feeding your model versus what it expects as an input. The good news is you can check your model, even if pre-trained, using TensorBoard and the post I linked above. Look at the input section; it should look something like this: Hope this helps!
unknown
d12529
train
Are you sure the Profile object exists for that user and you are not getting a RelatedObjectDoesNotExist error in the logs? If it does not exist yet, you have to create the Profile object in the above signup form: Profile.objects.update_or_create(user=user, defaults={"info": self.cleaned_data['info']})
unknown
d12530
train
Change } while ( $('td.active').length == 10 ); to } while ( $('td.active').length < 10 ); while != until But if you have fewer than 10 tds, you'll loop indefinitely. And it will very often break as i always grows bigger. I think you want this : $(function() { var max = 25; // don't forget the "var", if you don't want to declare global variables var min = 1; do { var match = Math.ceil(Math.random() * (max - min )- min); $('td').eq(match).addClass('active'); } while ($('td.active').length < 10); }) A: Try this : if you are sure that there are at least 10 tds $(function() { max = 25; min = 1; var y = 0; do { var i = 0; // reset the counter each pass, or nothing will match after the first loop var match = Math.ceil(Math.random() * (max - min )- min); $('td').each(function() { i++; console.log( match ); if (i == match) { $(this).addClass('active'); y++; } }) } while (y<10); }) A: You haven't said much about what the code is attempting to do, but in addition to the other suggestions, I think you also mean var match= Math.ceil(Math.random() * (max - min )+ min); Note the '+ min' instead of minus.
unknown
d12531
train
This is another approach, based on the correct answer by Sujan Sivagurunathan; I wrote it as an answer rather than a comment because I don't have 50 reputation. If you have AbstractFacade.java, write this in the create method (note that the entity should be persisted only when there are no violations): public void create(T entity) { ValidatorFactory factory = Validation.buildDefaultValidatorFactory(); javax.validation.Validator validator = factory.getValidator(); Set<ConstraintViolation<T>> constraintViolations = validator.validate(entity); if (constraintViolations.size() > 0 ) { System.out.println("Constraint Violations occurred.."); for (ConstraintViolation<T> constraint : constraintViolations) { System.out.println(constraint.getRootBeanClass().getSimpleName()+ "." + constraint.getPropertyPath() + " " + constraint.getMessage()); } } else { getEntityManager().persist(entity); } } A: To know what caused the constraint violation, you can use the following validator and logger. ValidatorFactory factory = Validation.buildDefaultValidatorFactory(); Validator validator = factory.getValidator(); Set<ConstraintViolation<Clients>> constraintViolations = validator.validate(clients); if (constraintViolations.size() > 0 ) { System.out.println("Constraint Violations occurred.."); for (ConstraintViolation<Clients> constraint : constraintViolations) { System.out.println(constraint.getRootBeanClass().getSimpleName()+ "." + constraint.getPropertyPath() + " " + constraint.getMessage()); } } Put the logger before persisting the entity. So between em.getTransaction().begin(); //here goes the validator em.persist(clients); Compile and run. The console will show you, just before the exception stack trace, which element(s) caused the violation(s). You should wrap any persistence call in a try block catching ConstraintViolationException (to avoid further problems and/or inform the user an error occurred and its reason). However, in a well-built system there shouldn't be any constraint violation exception during persistence. 
In JSF, and other MVC frameworks, the validation step must be totally or partially done at the client side before submit/persistence. That's a good practice, I would say.
unknown
d12532
train
Does this answer your question? You can get the search params of a URL with the URL constructor and accessing the searchParams property. const div = document.querySelector('div'); const appendParams = (url) => { const params = Object.fromEntries(new URL(url).searchParams.entries()); for (const key in params) { const p = document.createElement('p'); p.textContent = `${key}: ${params[key]}`; div.appendChild(p); } }; window.addEventListener('DOMContentLoaded', () => { appendParams('https://hello.world?foo=bar&fizz=buzz&abc=123&hello=world'); }); <div></div>
unknown
d12533
train
The two URLs that are printed show exactly what is happening. You are posting to a URL without a final slash, but you have the default APPEND_SLASH setting, so Django is redirecting to the URL with a final slash appended. Redirects are always GETs. Make sure you post to the URL with the slash.
unknown
d12534
train
You can take the following steps: Suppose you need to fill an N * N array. * *Create a List and add to it (N * N) / 10 1s and (N * N * 9) / 10 0s. list.addAll(Collections.nCopies(count,1 or 0)) can help you here. *Run Collections.shuffle on that List to obtain random order. *Iterate over the elements of the List. The first N elements will become the first row of the 2D array, the next N elements will become the second row of the array, and so on... A: An alternative to shuffling is to pick (N * N) / 10 random positions and put a 1 (if a 0 was in the position, otherwise pick another position). In pseudo code: int[][] array = new int[N][N] apply N * N / 10 times { index = random(0 .. N * N) if array(index) = 0 then array(index) = 1 else retry with another index } You will need to convert the index from 0 to N*N into a pair of i,j in the 2D array. A: I would use "double random = Math.random();" And then an if to check if the variable random is less or equal to 0.1 A: Make the array, take a random row and column, while the percentage is not exceeded, check if the position has 0, if yes fill it with 1. int[][] array = new int[N][N]; int percentage = N*N/10; int filled = 0; Random rand = new Random(); while(filled < percentage) { int i = rand.nextInt(N); int j = rand.nextInt(N); if(array[i][j] == 0) { filled++; array[i][j] = 1; } } for(int i = 0; i < N; i++) { for(int j = 0; j < N; j++) { System.out.print(array[i][j] + " "); } System.out.println(); }
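The shuffle approach from the first answer can be sketched as follows. This is only an illustration (the class and method names are mine, and N is arbitrary); the point is that shuffling a pre-counted list guarantees exactly (N*N)/10 ones.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class RandomGrid {
    // Builds an N x N grid in which exactly (N*N)/10 cells are 1 and the rest are 0.
    public static int[][] buildGrid(int n) {
        int total = n * n;
        int ones = total / 10;
        List<Integer> cells = new ArrayList<>();
        cells.addAll(Collections.nCopies(ones, 1));        // the 10% of ones
        cells.addAll(Collections.nCopies(total - ones, 0)); // the remaining zeros
        Collections.shuffle(cells); // random order, counts preserved
        int[][] grid = new int[n][n];
        for (int k = 0; k < total; k++) {
            grid[k / n][k % n] = cells.get(k); // row-major fill, as the answer suggests
        }
        return grid;
    }

    public static void main(String[] args) {
        int[][] grid = buildGrid(10);
        int count = 0;
        for (int[] row : grid) {
            for (int v : row) count += v;
        }
        System.out.println(count); // prints 10: exactly 10% of the 100 cells
    }
}
```

Unlike the retry-based approaches, this never loops waiting for a free cell, so its running time does not depend on luck.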
unknown
d12535
train
I have adapted your code as follows: //txt = loadXMLDoc("http://ip-api.com/xml/xx.xx.xx.x"); var txt = '<country>United Kingdom</country><countryCode>UK</countryCode><region>Bristol</region><ip>xx.xx.xx.x</ip><ISP>VodafoneM</ISP>'; txt = '<query>' + txt + '</query>'; // Info: http://www.w3schools.com/xml/xml_parser.asp if (window.DOMParser) { parser = new DOMParser(); xmlDoc = parser.parseFromString(txt, "text/xml"); } else // Internet Explorer { xmlDoc = new ActiveXObject("Microsoft.XMLDOM"); xmlDoc.async = false; xmlDoc.loadXML(txt); } var x = xmlDoc.getElementsByTagName("country")[0]; var y = x.childNodes[0]; //document.write(y.nodeValue); document.getElementById("content").innerHTML = y.nodeValue; Please check the result in jsfiddle.
unknown
d12536
train
"); scanf("%s",&A); printf("Valor C:"); scanf("%d",&C); long long num = sum_and_subtract(); printf("sum_and_subtract = %lld:0x%x\n", num, num); return 0; } Any suggestions would by highly appreciated!
unknown
d12537
train
They are making use of the HTML5 History API; this link will show you how you can accomplish the same effect.
unknown
d12538
train
Edit: July 16th, 2021 As a matter of fact, there is a simpler way of getting that data: * *How to map an array of objects from Cloud Firestore to a List of objects? Seeing that your code is in Java, please see the solution below: FirebaseFirestore.getInstance().collection("coll").document("9999").get().addOnCompleteListener(new OnCompleteListener<DocumentSnapshot>() { @Override public void onComplete(@NonNull Task<DocumentSnapshot> task) { if (task.isSuccessful()) { DocumentSnapshot document = task.getResult(); if (document.exists()) { Map<String, Object> friendsMap = document.getData(); for (Map.Entry<String, Object> entry : friendsMap.entrySet()) { if (entry.getKey().equals("Friends")) { Map<String, Object> newFriend0Map = (Map<String, Object>) entry.getValue(); for (Map.Entry<String, Object> e : newFriend0Map.entrySet()) { if (e.getKey().equals("newFriend0")) { Map<String, Object> fNameMap = (Map<String, Object>) e.getValue(); for (Map.Entry<String, Object> dataEntry : fNameMap.entrySet()) { if (dataEntry.getKey().equals("fName")) { Log.d("TAG", dataEntry.getValue().toString()); } } } } } } } else { Log.d("TAG", "No such document"); } } else { Log.d("TAG", "get failed with ", task.getException()); } } }); The result in your logcat will be: jem Didn't see the name of your collection in the screenshot, so I named it simply coll but you should definitely change it to the correct one. A: Basically you need to perform 2 steps, retrieve the document data with one of the provided libraries and then manipulate the resulting map object with the means of your programming language. 
Here is a simple Nodejs example getting 'fName': let docRef = db.collection('<collection_name>').doc('9999'); docRef.get() .then(doc => { if (!doc.exists) { console.log('No such document!'); } else { console.log('Document data:', doc.data().Friends.newFriend0.fName); } }) .catch(err => { console.log('Error getting document', err); }); A: A Firestore document is in JSON format, thus we can just convert documentSnapshot.getData() to a String first and then to JSON, so it will be easier to access the Map object's data without needing too many loops/iterations. Another way is to do it like this: documentSnapshot.getData().get("get_only_map_key_to_be_converted_as_JSON") and convert it to a string and then to JSON as well. The possible remaining problem is if you store URLs in your document's map object, as those links need to be unescaped to become valid JSON, though I haven't tried that scenario yet in Android Java.
unknown
d12539
train
Look into OpenCV; it has a lot of options for supervised/semi-supervised learning methods. As you have mentioned, there is a visible texture difference between the trees and the background vegetation, so a good place for you to start would be color-based segmentation, evolving it to use textures as well. The OpenCV ML tutorial is a good starting point. Moreover, you can also combine the NDVI data to create a stronger feature set.
unknown
d12540
train
#include <stdio.h> int reverse(char *str, int pos){ char ch = str[pos]; return (ch == '\0')? 0 : ((str[pos=reverse(str, ++pos)]=ch), ++pos); } int main(){ char buffer[100]; scanf("%99[^\n]", buffer); reverse(buffer, 0); fprintf(stdout, "%s\n", buffer); return 0; } A: Since this is obviously homework, no complete solution, only ideas. First, you don't need to limit yourself to one recursive function. You can have several, and a resursive implementation of strlen is trivial enough. my_strlen(char *p) evaluates to 0 if *p = 0, and to 1 + my_strlen(p+1) otherwise. You can probably do it in a single recursion loop, too, but you don't have to. One more thing: as you recurse, you can run your code both on the front end of recursion and on the back end. A recursion is like a loop; but after the nested call to self returns, you have a chance to perform another loop, this one backwards. See if you can leverage that. A: It is quite simple. When you enter the string, you save the character at the position you are are. Call the method recursively more character to the right. When you get to the end of the string, you will have N stack frames, each holding one character of an N character string. As you return, write the characters back out, but increment the pointer as you return, so that the characters are written in the opposite order. A: I'll try to be nice and give you a little sample code. With some explanation. The beauty of recursion is that you don't need to know the length of the string, however the string must be null terminated. Otherwise you can add a parameter for the string length. What you need to think is that the last input character is your first output character, therefore you can recurse all the way to the end until you hit the end of the string. 
As soon as you hit the end of the string you start appending the current character to the output string (initially set to nulls) and returning control to the caller, which will subsequently append the previous character to the output string. void reverse(char *input, char *output) { int i; if(*input) /* not yet at the end of the string */ reverse(input+1,output); /* keep recursing */ for(i=0;output[i];i++); /* set i to point to the location of the null terminator */ output[i]=*input; /* Append the current character */ output[i+1]=0; /* Append null terminator */ } /* end of reverse function and return control to caller */ Finally, let's test the function (remember to #include <stdio.h> for printf). int main(void) { char a[]="hello world"; char b[12]="\0"; reverse(a,b); printf("The string '%s' in reverse is '%s'\n", a, b); } Output The string 'hello world' in reverse is 'dlrow olleh'
unknown
d12541
train
Shouldn't that have been included in the "Release" directory after being published? No. When you publish a project, you specify a different directory for the publish output when creating the publish profile. The Release directory is only for assets that are used during debugging. During development, the NuGet dependencies are not in the Release folder, they are in the NuGet cache. So, you cannot always just copy the Release folder and expect it to run. After publishing, the entire application (including all dependencies) are output to the publish location. Do note that the publish output doesn't necessarily have to be a folder. For example, depending on the type of project it may be published to IIS with web deploy or to an FTP location.
unknown
d12542
train
By default Eclipse runs on Java in a JVM. But JVMs have more and more support for dynamic scripting languages. You can always use org.mozilla.javascript so your view can implement parts in JavaScript. The link I've included points to Eclipse Orbit builds, which are versions that have been OSGi-ified so they can be used easily in Eclipse.
unknown
d12543
train
John I did something very similar to what you tried and it worked for me without any issue. @Service public class CosmosService { public void connectCosmos() throws Exception { DocumentClient client = new DocumentClient("https://somename-cosmosdb.documents.azure.com:443/", "somepassword", new ConnectionPolicy(), ConsistencyLevel.Session); client.getDatabaseAccount(); FeedOptions options = new FeedOptions(); options.setEnableCrossPartitionQuery(true); List<Document> result = client .queryDocuments( "dbs/" + "samples" + "/colls/" + "orders", "SELECT * FROM c", options) .getQueryIterable() .toList(); result.stream().forEach(System.out::println); } } Application class @SpringBootApplication public class RatellaStackApplication implements CommandLineRunner { @Autowired CosmosService cosmosService; public static void main(String[] args) { SpringApplication.run(RatellaStackApplication.class, args); } @Override public void run(String... args) throws Exception { cosmosService.connectCosmos(); System.exit(0); } } Dependency <dependency> <groupId>com.microsoft.azure</groupId> <artifactId>azure-documentdb</artifactId> <version>2.4.7</version> </dependency> calling the service method from the controller works too. @RestController public class MyController { @Autowired CosmosService cosmosService; @GetMapping("/hello") public String hello() throws Exception { cosmosService.connectCosmos(); return "OK"; } } Link to the source code of the Rest API. For simplicity I put everything in the SpringBoot Application class. https://gist.github.com/RaviTella/e544ef7b266ba425abead7f05193f717 A: How about getting the documentClient from applicationContext instead of initializing a new one. Let spring initialize one for you. @Autowired private ApplicationContext applicationContext; DocumentClient documentClient = (DocumentClient) applicationContext.getBean(DocumentClient.class); What version of Spring Boot are you using ? Are you using spring-data-cosmosdb SDK ?
unknown
d12544
train
It's not entirely clear what resultset you want to return. This may be of some help to you: SELECT t.country , COUNT(DISTINCT t.id) AS count_table1_rows , COUNT(r.id) AS count_table2_rows , COUNT(*) AS count_total_rows FROM table1 t LEFT JOIN table2 r ON r.table1_id = t.id WHERE t.timestamp >= NOW() - INTERVAL 7 DAY AND t.timestamp < NOW() GROUP BY t.country ORDER BY COUNT(DISTINCT t.id) DESC LIMIT 5 That will return a maximum of 5 rows, one row per country, with counts of rows in table1, counts of rows found in table2, and a count of the total rows returned. The LEFT keyword specifies an "outer" join operation, such that rows from table1 are returned even if there are no matching rows found in table2. To get the count for each "reason", associated with each country, you could do something like this: SELECT t.country , COUNT(DISTINCT t.id) AS count_table1_rows , u.reason , u.cnt_r FROM table1 t LEFT JOIN ( SELECT s.country , r.reason , COUNT(*) AS cnt_r FROM table1 s JOIN table2 r ON r.table1_id = s.id WHERE s.timestamp >= NOW() - INTERVAL 7 DAY AND s.timestamp < NOW() GROUP BY s.country , r.reason ) u ON u.country = t.country WHERE t.timestamp >= NOW() - INTERVAL 7 DAY AND t.timestamp < NOW() GROUP BY t.country , u.reason ORDER BY COUNT(DISTINCT t.id) DESC , t.country DESC , u.cnt_r DESC , u.reason DESC This query doesn't "limit" the rows being returned. It would be possible to modify the query to have only a subset of the rows returned, but that can get complex. And before we get into the complexity of adding "top 5 within top 5" type limits, we want to ensure that the rows returned by a query are a superset of the rows we actually want. A: Is this what you want? 
select t2.reason, count(*) from (select t1.country, count(*) from table1 t1 where timestamp between @STARTTIME and @ENDTIME group by country order by count(*) desc limit 5 ) c5 join table1 t1 on c5.country = t1.country and t1.timestamp between @STARTTIME and @ENDTIME join table2 t2 on t2.table1_id = t1.id group by t2.reason; The c5 subquery gets the five countries. The other two bring back the data for the final aggregation.
unknown
d12545
train
Maybe you didn't create the AVD correctly. Try this: a Nexus 5X with a Marshmallow x86_64 OS image. I'm using that emulator and it works perfectly.
unknown
d12546
train
If your List is sorted and has good random access (as ArrayList does), you should look into Collections.binarySearch. Otherwise, you should use List.indexOf, as others have pointed out. But your algorithm is sound, fwiw (other than the == others have pointed out). A: Java API specifies two methods you could use: indexOf(Object obj) and lastIndexOf(Object obj). The first one returns the index of the element if found, -1 otherwise. The second one returns the last index, that would be like searching the list backwards. A: There is indeed a fancy shmancy native function in java you should leverage. ArrayList has an instance method called indexOf(Object o) (http://docs.oracle.com/javase/6/docs/api/java/util/ArrayList.html) You would be able to call it on _categories as follows: _categories.indexOf("camels") I have no experience with programming for Android - but this would work for a standard Java application. Good luck. A: ArrayList has a indexOf() method. Check the API for more, but here's how it works: private ArrayList<String> _categories; // Initialize all this stuff private int getCategoryPos(String category) { return _categories.indexOf(category); } indexOf() will return exactly what your method returns, fast. 
A: the best solution here: class Category(var Id: Int, var Name: String) where arrayList is a list of Category: val selectedPosition = arrayList.map { x -> x.Id }.indexOf(Category_Id) spinner_update_categories.setSelection(selectedPosition) A: ArrayList<String> alphabetList = new ArrayList<String>(); alphabetList.add("A"); // 0 index alphabetList.add("B"); // 1 index alphabetList.add("C"); // 2 index alphabetList.add("D"); // 3 index alphabetList.add("E"); // 4 index alphabetList.add("F"); // 5 index alphabetList.add("G"); // 6 index alphabetList.add("H"); // 7 index alphabetList.add("I"); // 8 index int position = -1; position = alphabetList.indexOf("H"); if (position == -1) { Log.e(TAG, "Object not found in List"); } else { Log.i(TAG, "" + position); } Output: List Index : 7 If you pass H it will return 7; if you pass J it will return -1 (indexOf returns -1 when the element is not found). Done A: Use the indexOf() method to find the first occurrence of the element in the collection. A: You can also use the Collections class, but note that binarySearch requires the list to be sorted in ascending order, e.g. List<Integer> sampleList = Arrays.asList(6, 7, 10, 35, 45, 56); Collections.binarySearch(sampleList, 56); Output : 5
unknown
d12547
train
What you're trying to do here is essentially templating: you specify the structure of the commands in one place, and the values that go in them in another. There are many templating solutions but an easy one for in-line templates like this is Text::Template, and just needs a minimal change to your input strings. use strict; use warnings; use Text::Template 'fill_in_string'; # presuming this text is from an external source that can't access these variables my $template = '/configure service sdp {$rsdp} mpls description "to-{$x2}"'; my $rsdp = 'foo'; my $x2 = 'bar'; # now these variables are defined my $command = fill_in_string $template, HASH => {rsdp => $rsdp, x2 => $x2}; A: So you're reading text strings from your input file. Those text strings contain mentions of Perl variables and you want those variables to be expanded in the string without you doing anything to achieve that. I'm sorry, but life isn't that easy :-) The text strings you read from your input file are just dumb text strings. They know nothing about Perl variables. It's up to you to put in the clever work to make this work how you want it to work. Some people will tell you to use eval to do this. Please ignore them. It's too dangerous. What we will do is look for specific strings in your text (strings that will look like the variable names you are interested in) and substitute them for the current values stored in those variables. For example: $dc =~ s/\$rsdp\b/$rsdp/g; $dc =~ s/\$x2\b/$x2/g; Notice that I've: * *Escaped the $ in the pattern on the left-hand side of the substitution, so it's just a $ and not a variable sigil. *Marked the end of the variable name with \b which marks a "word boundary". This is so we only match our specific variable name and not any other, longer variable name that happens to start with our variable name. *Using a global replacement (/g) in case the same variable name appears more than once in the input string. 
This needs to happen at the start of your while loop - after you have read the text from the file and before you send it as a command.
unknown
d12548
train
Yes it can, though you'll only be able to commit to projects from one repository at a time. One way of achieving this and making it reproducible by any developer who checks out your project is to use the svn:externals property on your solution's root folder to pull in projects from other repositories. To edit or add this property, you can either use the svn command line, or TortoiseSVN. You'll find more details on the svn:externals property itself in the Subversion red book. A: Like David says this is possible. I would like to add to this that having multiple repositories not only takes away the ability to atomically commit, but also you won't be able to branch/tag the project and its dependencies into a single tag or branch. I wouldn't recommend using multiple repositories for these reasons.
unknown
d12549
train
Import-Module ActiveDirectory $Group = Get-ADGroup -filter {Name -eq "GroupName"} Get-ADUser -filter {EmailAddress -like "*"} | % {Add-ADGroupMember $Group $_}
unknown
d12550
train
Something like this? let result initialCondition = let rec loop = function | 101 -> state { return () } | i -> state { do! strategy.Update i do! loop (i+1) } initialCondition |> runState (loop 10) Alternatively, define a For member on your builder and write it the more imperative way: let result initialCondition = let f = state { for i in 10 .. 100 do do! strategy.Update i } initialCondition |> runState f Also, note that there is likely a bug in your definition of Strategy.Update: processedx is bound but unused.
unknown
d12551
train
Here is a conceptual example for you. XML shredding is happening in T-SQL via two XQuery methods: * *.nodes() *.value() XQuery Language Reference (SQL Server) SQL DECLARE @xml XML = N'<root> <obj name="TableName0"> <int name="ANumberThatsAColumn1" val=""/> <int name="ANumberThatsAColumn2" val=""/> <int name="ANumberThatsAColumn3" val=""/> <str name="ADateThatsAColumn4" val="2022-12-16T08:43:07.9870485-06:00"/> </obj> <list name="TableName1"> <obj href="RowOfData1/"> <int name="ColName1" val="0"/> <str name="ColName2" val="1"/> <int name="ColName3" val="1"/> <int name="ColName4" val="0"/> <int name="ColName5" val="6"/> <int name="ColName6" val="0"/> <str name="ColName7" val="#3062"/> <ref name="ColName8" href="SomeValue1"/> </obj> <obj href="RowOfData2/"> <int name="ColName1" val="0"/> <str name="ColName2" val="1"/> <int name="ColName3" val="1"/> <int name="ColName4" val="0"/> <int name="ColName5" val="6"/> <int name="ColName6" val="0"/> <str name="ColName7" val="#2543"/> <ref name="ColName8" href="SomeValue2"/> <int name="ColName9" val="0"/> <int name="ColName10" val="0"/> <int name="ColName11" val="0"/> </obj> </list> <list name="TableName2"> <list name="row1"> <int name="R0" val="0" displayName="#7925"/> <int name="R1" val="1" displayName="#7926"/> </list> <list name="row2"> <int name="R0" val="0" displayName="#21641"/> <int name="R1" val="1" displayName="#21642"/> <int name="R2" val="2" displayName="#21643"/> </list> </list> </root>'; -- INSERT INTO <targetTable> SELECT c.value('@name', 'VARCHAR(20)') AS name , c.value('@val', 'INT') AS val , c.value('@displayName', 'VARCHAR(20)') AS displayName FROM @xml.nodes('/root/list[@name="TableName2"]/list/int') AS t(c); -- INSERT INTO <targetTable> SELECT c.value('@name', 'VARCHAR(20)') AS name , c.value('@val', 'VARCHAR(20)') AS val , c.value('@href', 'VARCHAR(20)') AS href FROM @xml.nodes('/root/list[@name="TableName1"]/obj/*') AS t(c);
unknown
d12552
train
There are many ways. Some of them are: Write in RouteConfig.cs routes.MapRoute( name: "Properties", url: "Order/EditOrder/SearchOrder/{action}/{id}", defaults: new { controller = "YourControllerName", action = "SearchOrder", // because you have given an action name attribute; otherwise use your method name id = UrlParameter.Optional // you can change Optional to a fixed value to make passing the id parameter compulsory } ); If using Mvc5, you can do Attribute Routing by following a simple syntax: [Route("Order/EditOrder/SearchOrder/{id?}")] // ? for optional parameter public ActionResult EditOrderSearchOrder(){} And to enable attribute routing, call MapMvcAttributeRoutes in RouteConfig.cs public class RouteConfig { public static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.MapMvcAttributeRoutes(); } }
unknown
d12553
train
Let's start from the beginning. When you pass items to query(), it will build a SELECT statement from these items. If they are models, then it will enumerate all the fields of such models. The query will automatically add the tables to select FROM. Unless you tell it to, the query will not perform joins automatically, so you must add assume what tables to look up your values from. Additional clauses like join() tell the query how the JOIN operation should be performed. Additional arguments to join() will be used as the ON clause, otherwise the query will infer the clause based on mapper-defined relationships. So to summarize, the ON clause is omitted whenever you do not specify additional arguments to join(). In the following expression, the ON clause is omitted: query(User, Address).join(Address) This does not mean that the SQL emitted will not have an ON clause; it will do the right thing by inferring the proper ON clause using the relationships defined on the model. When there are multiple possibilities, you need to specify the clause yourself: query(Person).join(Person.parent1).join(Person.parent2) should result in a query that returns a person and both of their parents. In this case, the ON clause was not omitted.
unknown
d12554
train
You have to prefix the wildcard with the table name (or alias, if you've used one): SELECT X.*, TO_CHAR(SYSDATE, 'DD-MM-YYYY') AS TODAYS_DATE FROM X Using the wildcard is generally not considered a good idea, as you have no control over the order the columns are listed (if the table was built differently in different environments) and anyone consuming this output may be thrown if the table definition changes in the future, e.g. by adding another column. It's better to list all the columns individually.
unknown
d12555
train
I might come too late to help the author of this post but I just faced the same problem, so here is the answer for other people who are wondering about it. ANSWER: You need to override the following method; it will provide you the down, move, and up events of the second tap: onDoubleTapEvent(MotionEvent e) If you want the UP motion of the second tap then: public boolean onDoubleTapEvent(MotionEvent e) { if(e.getAction() != MotionEvent.ACTION_UP) { return false; // Don't do anything for other actions } // YOUR CODE HERE return true; } Good Luck! /Carlos A: It's not well documented but in order to ignore double tap, and detect it as two separate taps, it's enough to set null as double tap listener. GestureDetector gd = new GestureDetector(context, gestureListener); gd.setOnDoubleTapListener(null); Works for both GestureDetector and GestureDetectorCompat. A: the GestureDetector.OnDoubleTapListener has the following methods: public class ViewGestureListener extends GestureDetector.SimpleOnGestureListener implements GestureDetector.OnDoubleTapListener { @Override public boolean onSingleTapConfirmed(MotionEvent e) { Log.d(TAG, "onSingleTapConfirmed"); return false; } @Override public boolean onDoubleTap(MotionEvent e) { Log.d(TAG, "onDoubleTap"); return false; } @Override public boolean onDoubleTapEvent(MotionEvent e) { Log.d(TAG,"onDoubleTapEvent"); return false; } } The onDoubleTapEvent is called after each tap of the double tap. You can return true here and get each tap separately.
unknown
d12556
train
Assuming I'm understanding the purpose of this complex COUNTIFS formula - It seems the problem is with the last range/criteria combo. Try changing Sheets("HR Data Detail").Range("Z2:Z" & finRow), "'HR Data Summary'!C2") to the following: .Range("Z2:Z" & finRow), "=" & Sheets("HR Data Summary").Range("$C$2").Value
unknown
d12557
train
If you know how long the serialized data will be you can use varchar; otherwise I'd use text. It might just be best to use text anyway. A: It depends on how big the data you want to serialize is. It could be text or longtext. Btw, very often (but not always) storing serialized data is a bad design, which should be reimplemented using N:M or 1:N (many-to-many or one-to-many) relations. A: If the data is just a generic pile of bytes, such as an image, then use a BLOB. If it's a pile of text, use one of the TEXT types. Otherwise, it depends upon what kind of data you're dealing with.
unknown
d12558
train
Got it solved. I was thinking about it incorrectly. When Node.js starts (or on the first subscriber) the system should connect to RabbitMQ and emit events on Socket.io regardless of the page the user is on. It's the page that determines what topic the user wants to listen to, via the connect code to Socket.io. This way there is only one instance of the RabbitMQ code ever running and the server can emit to anyone interested. And this way there is no conflict.
unknown
d12559
train
You can use TabRow like this: val items = (0..1) var activeTabIndex by remember { mutableStateOf(0) } TabRow( selectedTabIndex = activeTabIndex, backgroundColor = Color.Transparent, indicator = { Box( Modifier .tabIndicatorOffset(it[activeTabIndex]) .fillMaxSize() .background(color = Color.Cyan) .zIndex(-1F) ) }, ) { items.mapIndexed { i, item -> Tab(selected = activeTabIndex == i, onClick = { activeTabIndex = i }) { Icon( painter = painterResource(id = someIcon), contentDescription = null, tint = Color.Black, modifier = Modifier.padding(vertical = 20.dp) ) } } }
unknown
d12560
train
Because you are working on the same list. You are effectively assigning the same instance to _tempPointList in this line (and removing the reference to your original _tempPointList which you created in the line above): _tempPointList = _pointList; I'd suggest you instantiate your copy list by directly copying the list with this call: var _tempPointList = new List<Point>(_pointList); // creates a shallow copy I see yet another problem: you are removing elements from a list while you are iterating over it. Don't you get a System.InvalidOperationException when continuing to iterate? I'd solve this by iterating over the original list and removing from the copy list like this: foreach (var d in dots) { var _tempPointList = new List<Point>(_pointList); foreach (var point in _pointList) { if (d >= point.X && d <= point.Y) { _tempPointList.Remove(point); } } _pointList = _tempPointList; } As mentioned in the comments of your question, you could just use a predicate on List.RemoveAll(), which deletes an item if the predicate returns true. I didn't test the performance, but feel free to compare. foreach (var d in dots) { _pointList.RemoveAll(point => d >= point.X && d <= point.Y); } A: You will need to make a copy of the list for your logic to work. // instead of this var _tempPointList = new List<Point>(); // make a copy like this var _tempPointList = new List<Point>(_pointList); Otherwise, you have just copied a reference to the list, and both _tempPointList and _pointList point to the same memory. A: You're having this problem because both _tempPointList and _pointList have the same reference, so when you modify one list the other is modified automatically. Another problem you're having is with foreach: you can't modify a list while iterating over it using foreach.
unknown
d12561
train
If you are getting the notification when your app is in the foreground, then that part is OK; just use a data payload instead of a notification payload. You can use both together, but the data payload works even when the app is in the background or closed. I recommend using only the data payload because it works 100% of the time. One of my apps didn't get notifications when the app was closed; I had used both payloads, then I changed it to only a data payload and it worked fine.
unknown
d12562
train
Your function descends to the left in the first call of recurse() (line 9). If it is done with the left node, it'll go to the right in the second call to recurse() (line 16). The sequence of the nodes will be 1 -> 2 -> 4 -> (2) -> (1) -> 3 -> 5 -> 7 -> (5) -> (3) -> 6 -> (3) -> (1). The nodes in parentheses show where the call stack is reduced. A: If you imagine what is happening as a stack, it's much easier to understand. It starts on the node with value 1. When it checks the left node, it's left on hold as the first item of the stack (to be called again when it's the topmost item of the stack again). It would look something like this when it starts on the second iteration: 2 --- 1 Then, it would keep executing and adding to the stack until it checks that the node 4 has no left or right node. At this moment, it would look like this: 4 --- 2 --- 1 Then, since there's nothing else to do on the node with value 4, it gets removed from the stack, and then it executes the next topmost item of the stack from where it left off: checking the right node of it. I hope this helped you visualise and understand what really happens, because that really helped me and some college colleagues to understand binary trees.
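The visiting order described above can be reproduced with a small runnable sketch (the tree literal and names below are my own; the node values mirror the ones discussed):

```javascript
// Hypothetical tree with the node values from the discussion above.
const tree = { v: 1,
  left:  { v: 2, left: { v: 4 } },
  right: { v: 3,
    left:  { v: 5, left: { v: 7 } },
    right: { v: 6 } } };

const order = [];
function recurse(node) {
  if (!node) return;
  order.push(node.v);  // visit on the way down
  recurse(node.left);  // descend left first (the first recursive call)
  recurse(node.right); // then right (the second recursive call)
}
recurse(tree);
console.log(order.join(" -> ")); // 1 -> 2 -> 4 -> 3 -> 5 -> 7 -> 6
```

This is the descent order only; the parenthesised entries in the answer above are the points where the call stack unwinds.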
unknown
d12563
train
I think, no need to "hack". It can be achieved easier: ... String[] accountTypes = new String[]{GoogleAuthUtil.GOOGLE_ACCOUNT_TYPE}; Intent intent = AccountPicker.newChooseAccountIntent(null, null, accountTypes, false, description, null, null, null); // set the style if ( isItDarkTheme ) { intent.putExtra("overrideTheme", 0); } else { intent.putExtra("overrideTheme", 1); } intent.putExtra("overrideCustomTheme", 0); try { startActivityForResult(intent, YOUR_REQUEST_CODE_PICK_ACCOUNT); } catch (ActivityNotFoundException e) { ... } ... A: I had the same problem, but I finally found the solution. Take a look to AccountPicker.class, where are methods: newChooseAccountIntent() and zza(); You have to change AccountPicker.newChooseAccountIntent(null, null, accountTypes, false, null, null, null, null); to AccountPicker.zza(null, null, accountTypes, false, null, null, null, null, false, 1, 0); Last two arguments are for "overrideTheme" and "overrideCustomTheme". So set the first one to 1 and it will override the theme to light. :-) Hope it helps. A: My solution is Intent intent = AccountPicker.a(null, null,accountTypes, true, null, null, null, null, false, 1, 0);
unknown
d12564
train
runnable is immediately added to the message queue which can be seen in source code of the handler class postDelayed->sendMessageDelayed->sendMessageAtTime public boolean sendMessageAtTime(Message msg, long uptimeMillis) { boolean sent = false; MessageQueue queue = mQueue; if (queue != null) { msg.target = this; sent = queue.enqueueMessage(msg, uptimeMillis); } else { RuntimeException e = new RuntimeException( this + " sendMessageAtTime() called with no mQueue"); Log.w("Looper", e.getMessage(), e); } return sent; } queue.enqueueMessage runs a for (;;) to delay the callback. In this continuous loop message's time is checked for execution. you can see this in the source code A: The structure we call Handler runs with a Looper which contains a MessageQueue inside it. A call to Handler.post(runnable) creates a new message for MessageQueue and puts it at the last of the queue, where its time is set as SystemClock.uptimeMillis(). Looper always loops for checking messages and if there is a message and the target time has been passed, it executes the Runnable inside it. To clarify: Handler.post(runnable) sends a message with SystemClock.uptimeMillis() where on the next check, the new SystemClock.uptimeMillis() becomes bigger than the message's execution time, allowing the Looper to fetch the runnable and execute it. Now, if you use Handler.postDelayed(runnable, delay) it posts the message with SystemClock.uptimeMillis() + delay where the delay is in milliseconds, which we will call messageTime for now. What does the looper do? It checks the message, if SystemClock.uptimeMillis() is smaller than messageTime, it skips the execution. Looper always loops, so there becomes a time where SystemClock.uptimeMillis() actually becomes bigger than messageTime where the runnable becomes eligible for execution. Unless you call Handler.postAtFrontOfQueue(runnable), the calls are always put to the bottom of the queue, not the top. 
To summarize: Handler.post(runnable) --> messageTime = SystemClock.uptimeMillis(). Handler.postDelayed(runnable, delay) --> messageTime = SystemClock.uptimeMillis() + delay. The Looper executes the runnables where SystemClock.uptimeMillis() >= messageTime. That's the gist of it.
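As an illustration only (this is a toy model in JavaScript, not the real Handler/Looper code), the eligibility check the looper performs on each message can be sketched as:

```javascript
// Toy model: a message is eligible to run once the clock
// reaches the messageTime computed at post time.
function isEligible(messageTime, uptimeMillis) {
  return uptimeMillis >= messageTime;
}

const now = 1000;
const delayed = now + 500;               // postDelayed(runnable, 500)
console.log(isEligible(now, 1000));      // true  → post() runs on the next loop pass
console.log(isEligible(delayed, 1000));  // false → skipped until uptime reaches 1500
console.log(isEligible(delayed, 1500));  // true
```

The looper keeps looping, so the delayed message simply stays queued until the comparison flips.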
unknown
d12565
train
Here's a function that counts the two parts of a pair separately, counting elements that equal a given value. let count_val_in_pairs value pairs = List.fold_left (fun (cta, ctb) (a, b) -> ((if a = value then cta + 1 else cta), (if b = value then ctb + 1 else ctb))) (0, 0) pairs Your count_defections is this: let count_defections history = count_val_in_pairs false history Your count_cooperations is this: let count_cooperations history = count_val_in_pairs true history
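For comparison, the same fold can be sketched in JavaScript (purely illustrative; the function name mirrors the OCaml one):

```javascript
// Same fold as the OCaml fold_left above: count occurrences of `value`
// in the first and second components of each pair, as a [cta, ctb] pair.
function countValInPairs(value, pairs) {
  return pairs.reduce(
    ([cta, ctb], [a, b]) =>
      [cta + (a === value ? 1 : 0), ctb + (b === value ? 1 : 0)],
    [0, 0]
  );
}

console.log(countValInPairs(false, [[false, true], [true, false], [false, false]]));
// → [2, 2]
```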
unknown
d12566
train
I hope you are using Eclipse to develop your Android app. If not, then please do that. Here is some information on how to debug an Android app. A: Have you tried just playing around with the Eclipse debugger? Try: * *Click on the left-hand margin next to the code you want to investigate, setting a "breakpoint". *Click on the bug icon to run your application in Debug mode. *When your application arrives at that line of code, Eclipse will pop up a dialog confirming a change to the debug perspective. *Play with the step-next/step-out buttons in the debugger.
unknown
d12567
train
The issue stemmed from domain name resolution. The /etc/hosts file needed to be modified to map the Hadoop machine's IP address to both localhost and the fully qualified domain name. 192.168.0.201 hadoop.fully.qualified.domain.com localhost A: Safemode is an HDFS state in which the file system is mounted read-only; no replication is performed, nor can files be created or deleted. Filesystem operations that access the filesystem metadata, like 'ls' in your case, will work. The Namenode can be manually forced to leave safemode with this command ($ hadoop dfsadmin -safemode leave). Verify the status of safemode with ($ hadoop dfsadmin -safemode get) and then run a dfsadmin report to see if it shows data. If after getting out of safe mode the report still does not show any data, then I suspect communication between the namenode and datanode is not happening. Check the namenode and datanode logs after this step. The next steps could be to try restarting the datanode process, and the last resort would be to format the namenode, which will result in loss of data.
unknown
d12568
train
>>> Object.prototype.toString.call(arguments) <<< "[object Arguments]" >>> Array.isArray(arguments) //is not an array <<< false >>> arguments instanceof Array //does not inherit from the Array prototype either <<< false arguments is not an Array object, that is, it does not inherit from the Array prototype. However, it contains an array-like structure (numeric keys and a length property), thus Array.prototype.slice can be applied to it. This is called duck typing. Oh and of course, Array.prototype.slice always returns an array, hence it can be used to convert array-like objects / collections to a new Array. (ref: MDN Array slice method - Array-like objects) A: arguments is not a "real" array. The arguments object is a local variable available within all functions; arguments as a property of Function can no longer be used. The arguments object is not an Array. It is similar to an Array, but does not have any Array properties except length. For example, it does not have the pop method. However it can be converted to a real Array. You could do: var args = Array.prototype.slice.call(arguments); A: Arguments is not an Array. It's an Arguments object. Fortunately, slice only requires an Array-like object, and since Arguments has length and numerically-indexed properties, slice.call(arguments) still works. It is a hack, but it's safe everywhere. A: Referring to MDN: »The arguments object is not an Array. It is similar to an Array, but does not have any Array properties except length. For example, it does not have the pop method. However it can be converted to a real Array:« https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Functions_and_function_scope/arguments In order to call slice, you have to get the slicefunction, from the Array-prototype.
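A quick runnable sketch of the point being made: arguments is array-like (numeric keys plus length) but fails Array.isArray, while the slice copy is a real array (the function and property names below are my own):

```javascript
// arguments is array-like but not an Array; slice converts it.
function demo() {
  const args = Array.prototype.slice.call(arguments);
  return {
    argumentsIsArray: Array.isArray(arguments), // false: Arguments object
    copyIsArray: Array.isArray(args),           // true: real Array
    copy: args,
  };
}

const result = demo(1, 2, 3);
console.log(result.argumentsIsArray, result.copyIsArray, result.copy);
// false true [ 1, 2, 3 ]
```

Note this requires a regular function: arrow functions don't get their own arguments object at all.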
unknown
d12569
train
The version information at the bottom of http://connellchamberofcommerce.com/ indicates that the .NET version of the app pool is 2.0.50727.5491, whereas in your web.config file it's 4.5.2. So change the app pool to use 4.0. A: Your web.config looks fine to me. My guess would be that you have set the wrong .NET version in the AppPool under which your site is running. Go to your IIS -> Select your desired AppPool -> Click Basic settings and change the .NET version accordingly.
unknown
d12570
train
These steps fixed it: * *Go to the nuget website and download the package. *Extract the file as a .zip file. *Go inside the lib folder and copy the .dll file to [Unity Project]\Assets\Plugins (create the folder if it doesn't exist)
unknown
d12571
train
You can use this library to swipe between images with effects: https://github.com/daimajia/AndroidImageSlider dependencies { compile "com.android.support:support-v4:+" compile 'com.squareup.picasso:picasso:2.3.2' compile 'com.nineoldandroids:library:2.4.0' compile 'com.daimajia.slider:library:1.1.5@aar' } Add the Slider to your layout: <com.daimajia.slider.library.SliderLayout android:id="@+id/slider" android:layout_width="match_parent" android:layout_height="200dp" /> Wiki: https://github.com/daimajia/AndroidImageSlider/wiki/Slider-view
unknown
d12572
train
What version of Java are you running on? Java 11 provides TLSv1.3, which is the default offering if you have generic TLS selected, but NiFi 1.7.0 doesn't support TLSv1.3 (and doesn't run on Java 11). So assuming you are running on Java 8, recent updates have introduced TLSv1.3 but should still provide for TLSv1.2. This can also indicate that the certificate you have provided is invalid or incompatible with the cipher suite list provided by the client. You can use $ openssl s_client -connect <host:port> -debug -state -CAfile <path_to_your_CA_cert.pem> to try diagnosing the available cipher suites & protocol versions. Adding -tls1_2 or -tls1_3, etc. will restrict the connection attempt to the specified protocol version as well. You should definitely upgrade from NiFi 1.7.0 -- it was released over 2 years ago, has known issues, and there have been close to 2000 bug fixes and features added since, including numerous security issues. NiFi 1.12.1 is the latest released version.
unknown
d12573
train
Cygwin will actually do magic for you if you put your DOS paths in quotes, for example cd "C:\Program Files\" A: Cygwin does not recognize Windows drive letters such as s:, use /cygdrive/s instead. Your cygwin command should look like this: /cygdrive/s/programs/mongodb/mongodb/bin/mongod.exe --dbpath s:/programs/mongodb/data/ --repair Notice that the path like parameters you pass to the executable are in windows format as mongod.exe is not a Cygwin binary. To make it easier, you could add mongod.exe your path, then you do not need to specify the directory it is in.
unknown
d12574
train
In functional programming terms, the checksum algorithm is foldLeft with a carefully chosen binary operation. The requirements for this binary operation, in English: * *In every two-digit input, if we change one of the digits, then the checksum changes (Latin square…); *In every three-digit input, if the latter two digits are distinct and we swap them, then the checksum changes (…with weak total antisymmetry); *The Latin square has zeros on the diagonal. In Python 3: def validate(latinSquare): Q = range(len(latinSquare)) return all( x == y for c in Q for x in Q for y in Q if latinSquare[latinSquare[c][x]][y] == latinSquare[latinSquare[c][y]][x] ) and all(latinSquare[x][x] == 0 for x in Q) print( validate( [ [0, 3, 1, 7, 5, 9, 8, 6, 4, 2], [7, 0, 9, 2, 1, 5, 4, 8, 6, 3], [4, 2, 0, 6, 8, 7, 1, 3, 5, 9], [1, 7, 5, 0, 9, 8, 3, 4, 2, 6], [6, 1, 2, 3, 0, 4, 5, 9, 7, 8], [3, 6, 7, 4, 2, 0, 9, 5, 8, 1], [5, 8, 6, 9, 7, 2, 0, 1, 3, 4], [8, 9, 4, 5, 3, 6, 2, 0, 1, 7], [9, 4, 3, 8, 6, 1, 7, 2, 0, 5], [2, 5, 8, 1, 4, 3, 6, 7, 9, 0], ] ) ) A: This is a conversion into Scala of the Python 3 Answer that was provided by David Eisenstat's answer. 
View in Scastie: def isValidDammOperationTable(validLatinSquare: List[List[Int]]): Boolean = { val indices = validLatinSquare.indices.toList ( indices.forall(index => validLatinSquare(index)(index) == 0) && indices.forall( c => indices.forall( x => indices.forall( y => (validLatinSquare(validLatinSquare(c)(x))(y) != validLatinSquare(validLatinSquare(c)(y))(x)) || (x == y) ) ) ) ) } val exampleLatinSquareX10: List[List[Int]] = List( List(0, 3, 1, 7, 5, 9, 8, 6, 4, 2) , List(7, 0, 9, 2, 1, 5, 4, 8, 6, 3) , List(4, 2, 0, 6, 8, 7, 1, 3, 5, 9) , List(1, 7, 5, 0, 9, 8, 3, 4, 2, 6) , List(6, 1, 2, 3, 0, 4, 5, 9, 7, 8) , List(3, 6, 7, 4, 2, 0, 9, 5, 8, 1) , List(5, 8, 6, 9, 7, 2, 0, 1, 3, 4) , List(8, 9, 4, 5, 3, 6, 2, 0, 1, 7) , List(9, 4, 3, 8, 6, 1, 7, 2, 0, 5) , List(2, 5, 8, 1, 4, 3, 6, 7, 9, 0) ) println(isValidDammOperationTable(exampleLatinSquareX10)) //prints "true"
unknown
d12575
train
As far as I understand the code, the main function exits after starting the children. You need to add code that waits until all children have exited, meaning the SIGCHLD signal was caught for each exit of your children. A: You are printing parent: I'm the parent and parent: exiting within the loop. If you want them to be printed only once, move them out of the loop. You may also want to look into using the wait or waitpid functions for waiting for child process termination instead of using a signal handler and sleep.
unknown
d12576
train
Use one more div slant-middle in between slant-left and slant-right with same background color as slant-left and rotate it to 45deg. transform: rotateZ(45deg); -ms-transform: rotateZ(45deg); -moz-transform: rotateZ(45deg); -webkit-transform: rotateZ(45deg); -o-transform: rotateZ(45deg); A: You can use a pseudo-element to achieve this result. See the snippet below (see full page for a more accurate result): * { box-sizing: border-box; } .slantrow { background-color: bisque; } .slant-inner { display: flex; width: 1100px; margin: 0 auto; } .slant-inner p { position: relative; } /* Avoid paragraph being overlapped by the slanted box */ .slant-left { width: 60%; padding-right: 5%; } .slant-right { position: relative; width: 40%; padding-left: 5%; background-color: antiquewhite; } .slant-right::before { content: ''; position: absolute; left: 0; top: 0; width: 100%; height: 100%; background-color: inherit; transform-origin: bottom left; transform: skewX(10deg); } <div class="row slantrow"> <div class="row slant-inner"> <div class="col-md-6 slant-left"> <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean et augue at ipsum semper auctor a ut ante. Vivamus lacinia mollis semper. Aliquam fringilla eros at tortor semper, eget finibus urna tincidunt. Nulla mollis vestibulum elit vitae elementum. Nam lacinia elit id lacus bibendum, eget placerat augue auctor. Fusce viverra odio sapien, et auctor tellus accumsan eget. Duis ullamcorper eget elit nec varius. Nunc a nisl quis nunc bibendum lobortis vitae in risus. Ut eu pellentesque augue. Nulla ut nibh laoreet, egestas velit id, ultrices arcu. Curabitur eget iaculis orci. </p> </div> <div class="col-md-6 slant-right"> <p>Aenean varius sollicitudin nulla. Proin in nisi urna. Aliquam ullamcorper dui vitae augue fringilla cursus vel finibus justo. Nullam tortor urna, rutrum et vulputate congue, consequat vitae nunc. Integer sit amet nibh blandit, venenatis velit ut, scelerisque quam. 
Phasellus quis leo eu quam sagittis egestas vel at eros. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. </p> </div> </div> </div> A: This is how I would do it, by using css clip-path, in case you do not have to support IE. .slantrow { position: relative; background-color: bisque; } .slant-inner { } .slant-left { width: 60%; } .slant-right { width: 40%; /*background-color: antiquewhite;*/ } .slant { position: absolute; top: 0; right: 0; bottom: 0; /* Play with `left` here */ /* Used -15px, as the default gutter with in BS is 30px, you can adjust this based on your setup */ left: calc(50% - 15px); background-color: antiquewhite; /* Play with `calc()` here */ -webkit-clip-path: polygon(0% 0%, 100% 0%, 100% 100%, calc(0% + 15px) 100%); clip-path: polygon(0% 0%, 100% 0%, 100% 100%, calc(0% + 15px) 100%); } <div class="slantrow"> <div class="slant"></div> <div class="container slant-inner"> <div class="row"> <div class="col-6 slant-left"> <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Donec ipsum lectus, lacinia sed facilisis vel, euismod at quam. Pellentesque in urna dui. Duis ut elit id erat interdum tempor. Nam at lectus sit amet dolor interdum cursus a non enim.</p> </div> <div class="col-6 slant-right"> <p>Morbi et pretium ex. Ut eros sapien, tincidunt et tincidunt eu, semper in libero. Nunc luctus ornare massa ut porta.</p> </div> </div> </div> </div> <link href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/css/bootstrap.min.css" rel="stylesheet"/>
unknown
d12577
train
Try the T-pipe: %T>%. From documentation: Pipe a value forward into a function- or call expression and return the original value instead of the result. This is useful when an expression is used for its side-effect, say plotting or printing. There is a good example of it here along with the other two less common pipe commands. Inserting back into your original context: data(cars) cars %>% group_by(speed) %>% summarize(dist_mean = mean(dist)) %T>% print() %>% group_by(speed > 10) %>% summarize(dist_mean = mean(dist_mean)) %T>% print() Edit: As per @Ritchie Sacramento's comment the pipes are part of the magrittr package. They also appear to be re-exported by dplyr. If they don't work after calling library(dplyr) then you will need to call library(magrittr) to access them.
unknown
d12578
train
double d = QInputDialog::getDouble(this, tr("QInputDialog::getDouble()"), tr("Amount:"), 37.56, -10000, 10000, 2, &ok); * *A dialog will pop up with its parent being the widget in which you are using this function (this). *The dialog's title will be QInputDialog::getDouble() (tr is used in order to translate this string, if you want, using Qt Linguist) *Inside the dialog will be a double spinbox and a label *The label's string will be Amount: *The default value of the spinbox (what you see when the dialog pops up) will be 37.56 *The minimum value will be -10000 (you will not be able to set a value less than this) *The maximum value will be 10000 (you will not be able to set a value greater than this) *Two decimal points will be displayed, e.g. 3.478 will be displayed as 3.48. *If the user presses the Ok button then the ok argument will be set to true, otherwise it will be set to false Check the documentation, which includes an example, for more details.
unknown
d12579
train
Some of the error messages in Propel 1.2 (the bundled version) could do with being a bit more helpful, that's for sure. It also doesn't use PDO, and so is much slower than recent versions. I'd recommend that you bump up to at least Symfony 1.3, which has a much better version of Propel installed. @richsage recommends Symfony 1.4, but the forms system that it enforces - in my humble opinion - was vastly over-complicated. Symfony 1.3 has it also, but at least there you can switch to 1.0 compatibility mode (I forget exactly what it is called) - and this lets you return to the component helpers approach. Also, you can upgrade your version of Propel right up to 1.6 in Symfony 1.3 (or 1.4) by adding a plugin. A: Similar issue [wrapped: connect failed [Native Error: No such file or directory] [User Info: Array]] Having had exactly the same issue with an old Symfony 1 project at work, the fix that worked for me was to change the databases.yml from:- mysql://root:@localhost/database_name_here to mysql://root:@127.0.0.1/database_name_here The underlying issue for this was that the original dev environment was set up on Windows and my dev environment is on a Mac. This may not be the same situation for you but hopefully it will help someone out searching for this error
unknown
d12580
train
If your category name is like this: $str = "X | Y" and you want Y, you can try this: $exp = explode("|", $str); $y = trim($exp[1]); // trim() removes the space that the " | " delimiter leaves around the pieces
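For reference, the same split can be sketched in JavaScript (illustrative only; as with explode(), trimming removes the space left around the delimiter):

```javascript
// Split "X | Y" on the pipe and keep the second piece.
const str = "X | Y";
const parts = str.split("|"); // ["X ", " Y"]
const y = parts[1].trim();    // "Y"
console.log(y);
```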
unknown
d12581
train
Like this: elem.Descendants().Where(e => !e.Descendants("p").Any())
unknown
d12582
train
I would do this to move the changes from branch1 to branch2: git checkout branch2 git merge --squash branch1 No commit has been created or "copied" between the branches. The changes can be modified before committing if need be. A: If I understand what you want to do correctly, one way would be to start with your first branch: git checkout branch1 Create a new branch from there: git checkout -b branch2 Reset back to master, which will remove any commits that were made on branch1, but leave the changes as unstaged: git reset master You can then modify the files further and commit them as one commit.
unknown
d12583
train
Concerning your second question: 2) Is there a text which can guide me to the whole process of java thread creation in detail which should cover the respective OS calls to windows ? This guide, though slightly off topic is great: http://blog.jamesdbloom.com/JVMInternals.html And the following book The Java Virtual Machine Specification, Java SE 7 Edition (google book link) explains the JVM internals in depth and as such would be my best bet (disclaimer: I had only skimmed through the visible parts) If that is not good enough, you could always download the source code for the open jdk (7) and crawl through the code... An excerpt from the open jdk code is this (openjdk\hotspot\src\share\vm\runtime\thread.cpp - c'tor): // Base class for all threads: VMThread, WatcherThread, ConcurrentMarkSweepThread, // JavaThread Thread::Thread() { // stack and get_thread set_stack_base(NULL); set_stack_size(0); set_self_raw_id(0); set_lgrp_id(-1); // allocated data structures set_osthread(NULL); set_resource_area(new ResourceArea()); set_handle_area(new HandleArea(NULL)); set_active_handles(NULL); set_free_handle_block(NULL); set_last_handle_mark(NULL); // This initial value ==> never claimed. 
_oops_do_parity = 0; // the handle mark links itself to last_handle_mark new HandleMark(this); // plain initialization debug_only(_owned_locks = NULL;) debug_only(_allow_allocation_count = 0;) NOT_PRODUCT(_allow_safepoint_count = 0;) NOT_PRODUCT(_skip_gcalot = false;) CHECK_UNHANDLED_OOPS_ONLY(_gc_locked_out_count = 0;) _jvmti_env_iteration_count = 0; set_allocated_bytes(0); _vm_operation_started_count = 0; _vm_operation_completed_count = 0; _current_pending_monitor = NULL; _current_pending_monitor_is_from_java = true; _current_waiting_monitor = NULL; _num_nested_signal = 0; omFreeList = NULL ; omFreeCount = 0 ; omFreeProvision = 32 ; omInUseList = NULL ; omInUseCount = 0 ; _SR_lock = new Monitor(Mutex::suspend_resume, "SR_lock", true); _suspend_flags = 0; // thread-specific hashCode stream generator state - Marsaglia shift-xor form _hashStateX = os::random() ; _hashStateY = 842502087 ; _hashStateZ = 0x8767 ; // (int)(3579807591LL & 0xffff) ; _hashStateW = 273326509 ; _OnTrap = 0 ; _schedctl = NULL ; _Stalled = 0 ; _TypeTag = 0x2BAD ; // Many of the following fields are effectively final - immutable // Note that nascent threads can't use the Native Monitor-Mutex // construct until the _MutexEvent is initialized ... // CONSIDER: instead of using a fixed set of purpose-dedicated ParkEvents // we might instead use a stack of ParkEvents that we could provision on-demand. 
// The stack would act as a cache to avoid calls to ParkEvent::Allocate() // and ::Release() _ParkEvent = ParkEvent::Allocate (this) ; _SleepEvent = ParkEvent::Allocate (this) ; _MutexEvent = ParkEvent::Allocate (this) ; _MuxEvent = ParkEvent::Allocate (this) ; #ifdef CHECK_UNHANDLED_OOPS if (CheckUnhandledOops) { _unhandled_oops = new UnhandledOops(this); } #endif // CHECK_UNHANDLED_OOPS #ifdef ASSERT if (UseBiasedLocking) { assert((((uintptr_t) this) & (markOopDesc::biased_lock_alignment - 1)) == 0, "forced alignment of thread object failed"); assert(this == _real_malloc_address || this == (void*) align_size_up((intptr_t) _real_malloc_address, markOopDesc::biased_lock_alignment), "bug in forced alignment of thread objects"); } #endif /* ASSERT */ }
unknown
d12584
train
I have been pondering pretty much the same problem for a while and come to the following conclusion. The simplest way to integrate Angular created elements with d3 is to add the directives with .attr and then .call the compile service on the d3 generated elements. Like this: mySvg.selectAll("circle") .data(scope.nodes) .enter() .append("circle") .attr("tooltip-append-to-body", true) .attr("tooltip", function(d){ return d.name; }) .call(function(){ $compile(this[0].parentNode)(scope); }); Here is a Plunker. I think the idea of generating elements with Angular ngRepeat rather than d3 is working against the frameworks. D3 does not expect to be handed a bunch of elements. It wants to be handed data - almost always an array. It then has a stack of excellent functions to convert that data into various SVG or HTML elements. Let it do that. It seems from this quote... D3 makes it trivial to add elements and bind each to the data it represents, but the graph visualization is only part of a much larger application: I need to create different types of elements representing these same nodes and edges in different parts of the application (which D3 has nothing to do with), and I'd like to keep all of these elements bound to a single dataset. ... you are implying that generating elements with d3 somehow prevents you from binding the same data to different parts of the application. I can't see why. Just have d3 generate elements from a scope array (as is done in the linked Plunker). Then use the same dataset wherever you want in the usual Angular way. Other parts of the application can update the dataset and a $watch callback can re-render the d3 graphic.
unknown
d12585
train
Watch a computed value. computed:{ combined(){ return this.a && this.b } } watch:{ combined(value){ if (value) //do something } } There is a sort of short hand for the above using $watch. vm.$watch( function () { return this.a + this.b }, function (newVal, oldVal) { // do something } ) A: It's pretty much the same solution as @Bert suggested. But you can just do the following: data() { return { combined: { a: false, b: false, } } }, Then: watch: { combined:{ deep:true, handler } }
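A framework-free sketch of the same idea, i.e. watching a derived value and firing the handler only when it flips (all names here are my own; Vue's reactivity does this bookkeeping for you):

```javascript
// Recompute the derived `combined` flag on every state change and
// invoke the handler only when the derived value actually changes.
function makeCombinedWatcher(onChange) {
  const state = { a: false, b: false, combined: false };
  function set(key, value) {
    state[key] = value;
    const next = state.a && state.b; // the computed `combined`
    if (next !== state.combined) {
      state.combined = next;
      onChange(next); // the watch handler
    }
  }
  return { set, state };
}

const events = [];
const w = makeCombinedWatcher(v => events.push(v));
w.set("a", true); // combined still false → no event
w.set("b", true); // combined flips to true → handler fires once
console.log(events); // [ true ]
```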
unknown
d12586
train
I'm using Ctrl + O all the time to jump back (not only for Jedi, but also). Also with Ctrl + I you can do the opposite: Jump forward.
unknown
d12587
train
Figured it out... Don't use the syntax I used above in your callback. Put the px.line call inside a variable (fig, in this case), and then use fig.add_scatter to add data from a different dataframe to the graph. Both parts of the graph will update from the callback. Also, fig.add_scatter doesn't have a dataframe argument, so use df.column or df[column] (e.g. dfa.Date below).

# --- import libraries ---
import dash
import dash_core_components as dcc
import dash_html_components as html
import pandas as pd
import plotly.express as px
from dash.dependencies import Output, Input

# --- load data ---
df_h = pd.read_csv('df_h.csv')
df_h['Date'] = pd.to_datetime(df_h['Date'])
df_arima = pd.read_csv('df_arima.csv')
df_arima['Date'] = pd.to_datetime(df_arima['Date'])
df_arima['Date'] = df_arima['Date'].dt.strftime('%Y-%m')

external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css']

# --- initialize the app ---
app = dash.Dash(__name__, external_stylesheets=external_stylesheets)

app.layout = html.Div([
    dcc.Graph(id='forecast-container')
])

# --- dropdown callback ---
@app.callback(
    Output('forecast-container', 'figure'),
    Input('city-dropdown', 'value'))
def update_figure(selected_city):
    dff = df_h[['Date', selected_city]]
    # dff[selected_city] = dff[selected_city].round(0)
    dfa = df_arima[df_arima['City'] == selected_city]
    fig = px.line(dff, x='Date', y=selected_city,
                  hover_data={selected_city: ':$,f'})
    fig.add_scatter(x=dfa.Date, y=dfa.Mean, line_color='orange',
                    name='Forecast Mean')
    fig.add_scatter(x=dfa.Date, y=dfa.Lower_ci, fill='tonexty',
                    fillcolor='rgba(225,225,225, 0.3)',
                    marker={'color': 'rgba(225,225,225, 0.9)'},
                    name='Lower 95% Confidence Interval')
    fig.add_scatter(x=dfa.Date, y=dfa.Upper_ci, fill='tonexty',
                    fillcolor='rgba(225,225,225, 0.3)',
                    marker={'color': 'rgba(225,225,225, 0.9)'},
                    name='Upper 95% Confidence Interval')
    fig.update_layout(template='xgridoff',
                      yaxis={'title': 'Median Home Price ($USD)'},
                      xaxis={'title': 'Year'},
                      title={'text': 'Median Home Price vs. Year for {}'.format(selected_city),
                             'font': {'size': 24}, 'x': 0.5, 'xanchor': 'center'})
    return fig

if __name__ == '__main__':
    app.run_server(debug=True)
unknown
d12588
train
Well, in your first code block, you are checking the width of your DIV before you actually create the DIV in your markup, and since you aren't using a DOMReady event or window load event, it will come up 0 or null at best. So, if you just move your script block below the div in your first example, that will fix it. I imagine you will find a similar situation in your second example, but I can't be sure without seeing more code.
unknown
d12589
train
Solved by adding [FromBody] to the parameter in the API action:

public IActionResult AddUpdateGenre([FromBody] ManagementVM data)
unknown
d12590
train
I'm still not sure why the volatile() method is turning off updates to templates. This might be a real bug, but in terms of solving my problem the important thing to recognise was that volatile() was never the right approach. Instead, the important thing to ensure is that when mood comes in as a function, the function's dependencies are included in the CP's dependencies. So, for instance, if mood is passed the following function:

mood: function(context) {
  return context.title === 'Monkeys' ? 'happy' : 'sad';
}

For this function to evaluate effectively -- and more importantly to trigger a re-evaluation at the right time -- the title property must be part of the computed property. Hopefully that's straightforward as to why, but here's how I thought I might accommodate this:

_moodDependencies: ['title', 'subHeading', 'style'],
_mood: computed('mood', 'size', '_moodDependencies', function() {
  let mood = this.get('mood');
  console.log('mood is: %o', mood);
  if (typeOf(mood) === 'function') {
    run(() => {
      mood = mood(this);
    });
  }
  return !mood || mood === 'default' ? '' : `mood-${mood}`;
}),

That's better, as it allows a static set of properties to be defined per component at build time (the _mood CP for me is part of a mixin used by a few components). Unfortunately this doesn't yet work completely, as it apparently doesn't unpack/destructure the _moodDependencies. I'll get this sorted and update this unless someone else beats me to the punch.
unknown
d12591
train
In the real world we always try to think and code in simple, automated ways; only when we can't achieve our goal that way do we look at more complex solutions. Now let's look at your code. The first way is the easy, straightforward one. If the source of A, B, C, D, ... etc. is a table, we can write it directly as:

myLocalVar not in (select column_name from table_name)

This works fine, and any new value is handled automatically by this approach. However, the NOT IN clause has disadvantages:

* You have to be sure that the values returned by the subquery do not contain a single NULL; if they do, the whole predicate evaluates to NULL. You can handle this with the NVL function, or eliminate the NULL records with a WHERE clause.
* Performance: NOT IN does not perform well when we have a long list of values to compare myLocalVar against, so we use NOT EXISTS here instead.

The second approach you mentioned is not practical: it is laborious, error-prone, and hard to maintain as new values appear. You would have to add new values yourself and modify this code every time you need to compare against a new value, so we do not follow solutions like that one.
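The NULL pitfall with NOT IN is easy to demonstrate outside Oracle too. Here is a minimal sketch using Python's built-in sqlite3 module (the table and values are made up for illustration): once the subquery's result set contains a NULL, NOT IN silently matches nothing, while NOT EXISTS behaves as expected.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vals (v TEXT)")
# Note the NULL row in the subquery's result set
conn.executemany("INSERT INTO vals VALUES (?)", [("A",), ("B",), (None,)])

# 'X' NOT IN ('A', 'B', NULL) evaluates to NULL, so no row is returned
not_in = conn.execute(
    "SELECT 'X' WHERE 'X' NOT IN (SELECT v FROM vals)"
).fetchall()

# NOT EXISTS is a plain boolean test, unaffected by the NULL row
not_exists = conn.execute(
    "SELECT 'X' WHERE NOT EXISTS (SELECT 1 FROM vals WHERE v = 'X')"
).fetchall()

print(not_in)      # []
print(not_exists)  # [('X',)]
```

Deleting the NULL row (or filtering it out with WHERE v IS NOT NULL in the subquery) makes both queries agree.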
unknown
d12592
train
There doesn't look to be anything particularly wrong with your SPARQL query and you have made no obvious mistakes (other than some syntax validity issues, which I discuss later). The problem appears to be that the SPARQL service you are using uses a triple store that doesn't cope with queries with large numbers of joins very well. When experimenting with your query, moving the triple patterns around produced a Stack Overflow in the SPARQL service! I would suggest downloading the data yourself from http://linkedpolitics.ops.few.vu.nl/home - there are links under point 3 of the About the Data section from which you can download the data yourself. You can then load it into the triple store of your choice and run your query against that instead. For example, I downloaded the data and put it into Apache Jena Fuseki (disclaimer - I work on the Apache Jena project) and was able to run the query almost instantaneously after I fixed the query to be proper valid SPARQL.

Making the Query valid SPARQL

The query as given is not strictly valid SPARQL, so you'll need to correct it in order to run it elsewhere. Firstly, the various prefixes used are not defined by the query because the service you are using inserts them automatically; to run this query against another triple store you'll need to add the following to the start of the query:

PREFIX dcterms: <http://purl.org/dc/terms/>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX lpv: <http://purl.org/linkedpolitics/vocabulary/>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

It is also not legal to perform a variable assignment where the variable name given is already in scope, e.g.

(SAMPLE(?speaker) AS ?speaker)

so those need to change:

(SAMPLE(?speaker) AS ?speaker1)

Which results in the following valid and portable SPARQL query:

PREFIX dcterms: <http://purl.org/dc/terms/>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX lpv: <http://purl.org/linkedpolitics/vocabulary/>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?text (SAMPLE(?speaker) AS ?speaker1) (SAMPLE(?given) AS ?given1) (SAMPLE(?surname) AS ?surname1) (SAMPLE(?acronym) AS ?country1) (SAMPLE(?partyLabel) AS ?partyLabel1) (SAMPLE(?type) AS ?type1)
WHERE {
  <http://purl.org/linkedpolitics/eu/plenary/2010-12-16_AgendaItem_4> dcterms:hasPart ?speech.
  ?speech lpv:speaker ?speaker.
  ?speaker foaf:givenName ?given.
  ?speaker foaf:familyName ?surname.
  ?speaker lpv:countryOfRepresentation ?country.
  ?country lpv:acronym ?acronym.
  ?speech lpv:translatedText ?text.
  ?speaker lpv:politicalFunction ?func.
  ?func lpv:institution ?institution.
  ?institution rdfs:label ?partyLabel.
  ?institution rdf:type ?type.
  FILTER(langMatches(lang(?text), "en"))
}
GROUP BY ?text
unknown
d12593
train
What about presenting your date picker modally in a UIActionSheet? That way the black bar (which is probably a sizing issue) is replaced with a convenient toolbar in which you can place buttons like "save" and "cancel". I believe that if you use a UIActionSheet and tap outside of it, it's automatically dismissed (I'm not sure about this), but if it's not, you can use its dismiss method to hide it. As for the table view, you can use

[myTableView scrollToRowAtIndexPath:[NSIndexPath indexPathForRow:someRow inSection:someSection] atScrollPosition:UITableViewScrollPositionMiddle animated:YES];

to set the scroll offset of the table to a particular cell you want.
unknown
d12594
train
The obvious answer seems to be that they either forgot to apply it in each version, or they don't consider it important enough to make the default, because it sits on the border between a bug and a usability preference and it has an easy workaround (i.e. using the GUI instead of shortcuts). I wouldn't think applying the hotfix would hurt anything - they wouldn't make it available if that were the case. Changing the QFE_Richmond registry key to 1 is a way to enable the hotfix. http://support.microsoft.com/?kbid=291110 "Typically, hotfixes are made to address a specific customer situation and may not be distributed outside the customer organization." In addition, the RefEdit control seems to have alternatives: http://peltiertech.com/WordPress/refedit-control-alternative/ which have been recommended because RefEdit has compatibility issues: http://peltiertech.com/WordPress/unspecified-painfully-frustrating-error/ So you can probably presume that MS has some gaps in their quality control for the RefEdit feature. Good luck. EDIT/ADDITION: By the way, QFE stands for Quick Fix Engineering.
unknown
d12595
train
Use HashSet. https://learn.microsoft.com/en-us/dotnet/api/system.collections.generic.hashset-1?view=netframework-4.7.2

public class ParentClass
{
    public string Name { get; set; }
    public HashSet<ChildClass> ChildClassCollection { get; set; }
}

public class ChildClass
{
    public int Id { get; set; }
    public string Name { get; set; }
}

A: Here is what I ended up with. Pretty clean; plus, when it encounters an issue, it exits early.

this.RuleFor(or => or.ChildClassCollection)
    .Must(this.IsDistinct)
    .WithMessage("There are more than one entity with the same Id");

public bool IsDistinct(List<UpdateRoleDTO> elements)
{
    var encounteredIds = new HashSet<int>();
    foreach (var element in elements)
    {
        if (!encounteredIds.Contains(element.Id))
        {
            encounteredIds.Add(element.Id);
        }
        else
        {
            return false;
        }
    }
    return true;
}

A: Do you want to use only FluentValidation to achieve this, without any relationship? If so, you can create a custom validator, then use some logic to check whether there is a duplicate value in the database, like this:

public class ChildClassValidator : AbstractValidator<ChildClass>
{
    private readonly MyDbContext _context;

    public ChildClassValidator(MyDbContext context)
    {
        _context = context;
        RuleFor(x => x.Id).NotEmpty().WithMessage("ID is required.")
            .Must(IsUnique).WithMessage("parent have more than one ids");
    }

    private bool IsUnique(int id)
    {
        var model = _context.ParentClasses.GroupBy(x => x.Id)
            .Where(g => g.Count() > 1)
            .Select(y => y.Key)
            .ToList();
        // judge whether parentclass has duplicate id
        if (model == null)
            return true;
        else
            return false;
    }
}

A: I use my own custom helper, based on the answer Check a property is unique in list in FluentValidation, because the helper is much more convenient to use than describing the rules each time. Such a helper can be easily chained with other validators like IsEmpty.

Code:

public static class FluentValidationHelper
{
    public static IRuleBuilder<T, IEnumerable<TSource>> Unique<T, TSource, TResult>(
        this IRuleBuilder<T, IEnumerable<TSource>> ruleBuilder,
        Func<TSource, TResult> selector,
        string? message = null)
    {
        if (selector == null)
            throw new ArgumentNullException(nameof(selector), "Cannot pass a null selector.");

        ruleBuilder
            .Must(x =>
            {
                var array = x.Select(selector).ToArray();
                return array.Count() == array.Distinct().Count();
            })
            .WithMessage(message ?? "Elements are not unique.");
        return ruleBuilder;
    }
}

Usage: for example, we want to require a unique Id:

RuleFor(_ => _.ChildClassCollection!)
    .Unique(_ => _.Id);

Or we want a unique combination of Id and Name:

RuleFor(_ => _.ChildClassCollection!)
    .Unique(_ => new { _.Id, _.Name });

Update: I optimized performance using code from YoannaKostova's answer:

public static class FluentValidationHelper
{
    public static bool IsDistinct<TSource, TResult>(this IEnumerable<TSource> elements, Func<TSource, TResult> selector)
    {
        var hashSet = new HashSet<TResult>();
        foreach (var element in elements.Select(selector))
        {
            if (!hashSet.Contains(element))
                hashSet.Add(element);
            else
                return false;
        }
        return true;
    }

    public static IRuleBuilder<T, IEnumerable<TSource>> Unique<T, TSource, TResult>(
        this IRuleBuilder<T, IEnumerable<TSource>> ruleBuilder,
        Func<TSource, TResult> selector,
        string? message = null)
    {
        if (selector == null)
            throw new ArgumentNullException(nameof(selector), "Cannot pass a null selector.");

        ruleBuilder
            .Must(x => x.IsDistinct(selector))
            .WithMessage(message ?? "Elements are not unique.");
        return ruleBuilder;
    }
}
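The duplicate check at the heart of these validators is language-agnostic. As a rough sketch of the same two strategies in Python (the function names and sample data are mine, not from the answers above): comparing the distinct count to the total count, versus a short-circuiting set-based scan that stops at the first duplicate.

```python
def is_distinct_counting(elements, key):
    # Mirrors array.Count() == array.Distinct().Count(): scans everything
    keys = [key(e) for e in elements]
    return len(keys) == len(set(keys))

def is_distinct_shortcircuit(elements, key):
    # Mirrors the HashSet version: stops at the first duplicate
    seen = set()
    for k in (key(e) for e in elements):
        if k in seen:
            return False
        seen.add(k)
    return True

children = [
    {"id": 1, "name": "a"},
    {"id": 2, "name": "b"},
    {"id": 1, "name": "c"},  # duplicate id, unique name
]

print(is_distinct_counting(children, key=lambda c: c["id"]))        # False
print(is_distinct_shortcircuit(children, key=lambda c: c["name"]))  # True
```

The short-circuiting variant is the one worth preferring for large collections, which is presumably why the update above switches to it.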
unknown
d12596
train
The same answer as your previous question...

def prepare_data(self):
    a = np.random.uniform(0, 500, 500)
    b = np.random.normal(0, self.constant, len(a))
    c = a + b
    X = np.transpose(np.array([a, b]))

    # Converting numpy array to Tensor
    self.x_train_tensor = torch.from_numpy(X).float().to(device)
    self.y_train_tensor = torch.from_numpy(c).float().to(device)

    training_dataset = TensorDataset(self.x_train_tensor, self.y_train_tensor)
    self.training_dataset = training_dataset

def setup(self):
    data = self.training_dataset
    self.train_data, self.val_data = random_split(data, [400, 100])

def train_dataloader(self):
    return DataLoader(self.train_data)

def val_dataloader(self):
    return DataLoader(self.val_data)

A: Simply call setup() on your DataModule object after calling prepare_data() on that object. So:

dm = DataModuleClass()
dm.prepare_data()
dm.setup()
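The ordering requirement here (prepare_data() must run before setup()) can be sketched without any Lightning or PyTorch dependency. This toy stand-in class is hypothetical - it only mirrors the names and shapes from the answer above - but it shows why setup() has nothing to split until prepare_data() has populated the dataset attribute:

```python
class ToyDataModule:
    def __init__(self):
        self.training_dataset = None

    def prepare_data(self):
        # Stand-in for building/downloading the 500-sample dataset
        self.training_dataset = list(range(500))

    def setup(self):
        if self.training_dataset is None:
            raise RuntimeError("call prepare_data() first")
        # Stand-in for random_split(data, [400, 100])
        self.train_data = self.training_dataset[:400]
        self.val_data = self.training_dataset[400:]

dm = ToyDataModule()
dm.prepare_data()
dm.setup()
print(len(dm.train_data), len(dm.val_data))  # 400 100
```

Calling dm.setup() before dm.prepare_data() raises immediately, which is essentially the failure mode the question ran into, just made explicit.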
unknown
d12597
train
Anything you want to filter, facet, or order on, Sunspot needs to know about. So in your model:

searchable do
  text :first_name, :surname
  integer :skill_ids, :multiple => true, :references => Skill
end

Your #search call in your controller looks right. In your view, you'd do something along these lines:

- @search.facet(:skill_ids).rows.each do |row|
  = row.instance.name

row.instance will return the instance of Skill that the row's value refers to (that's what the :references option is doing in the searchable definition). I'm not sure what you mean by "select multiple facets to search by" -- one can generate multiple facets (which give users choices for further search refinement) by calling the facet method multiple times in a search; and you can then use their facet choices with scope restrictions using the with method, which you can also call as many times as you'd like. Speaking of wikis, most of this information is available (with more explanation) in the Sunspot wiki:

* http://wiki.github.com/outoftime/sunspot/
unknown
d12598
train
Okay, I eventually tracked down the fix. I am using the Shadowbox JS plugin. It turns out that this plugin has been removed from the repository at wordpress.org, and I had missed out on the latest update, which fixed the compatibility issues. Follow the link below to find out more and download the latest version if you need it. http://sivel.net/2011/12/shadowbox-js-plugin-pulled-from-the-wordpress-org-repository/
unknown
d12599
train
"My understanding is that React's Context API was essentially introduced for quick and dirty global state management"

That's a common misunderstanding. Context is not a state management system, any more than props is a state management system. Context (like props) is a way to get data from one component to another. The difference is that props always passes the data to direct children, while context makes the data available to whichever random components in a subtree are interested.

"My understanding is that one of the main downsides of Context API is that any update to any property on it will re-render all components which are bound to Context API (full re-render)."

This is true. Similarly, if you change props, the component that receives those props must rerender.

"1 - Can you confirm that Context API does a full component re-render when any state value is changed on it?"

Of the specific components that are listening to that context, yes.

"2 - Can you confirm that Redux uses Context API under the covers? And can you confirm if Redux Toolkit still uses Context API under the covers?"

React-redux does use context, yes. Pure redux and redux toolkit don't, since they're general purpose libraries not directly related to react, but I think you meant react-redux. That <Provider store={store}> that you must render at the top of a react-redux app is there to provide a context to the components underneath it. Components that call hooks like useSelector or useDispatch then use the context to find the store that they should interact with.

"3 - I was under the impression that Context API was introduced to be a simpler alternative to tools like Redux. But if Redux uses Context API then did Context API come first but maybe it wasn't exposed directly to the developer?"

Context has existed for a long time, but it used to be an unofficial feature. They've also made it easier to use over time.

"4 - If Redux does use Context API under the covers then can you provide any insight into how Redux avoids a full component re-render?"

The context is only providing a minimal amount of things to the child components, most importantly the store object. The reference to the store object does not typically change, so since the context value does not change, the child components do not need to render. To see exactly what it's providing, see this code: https://github.com/reduxjs/react-redux/blob/master/src/components/Provider.tsx#L33 The contents of the store does change, but that's not what the context is providing. To get the contents of the store, the individual components subscribe to the store, using redux's store.subscribe, plus a special hook called useSyncExternalStore. Basically, redux fires an event when the store's state is updated, and then the individual components set their own local state if it's a change they care about. This local state change is what causes the rerender. If you're writing code that uses context, you're rarely going to be doing things fancy enough to require useSyncExternalStore or a custom subscription system. So the main things you'll want to keep in mind are:

* Keep the context focused on a single task. For example, if you have a theme object to control your app's colors, and also a user object which describes who is currently logged in, put these in different contexts. That way a component that just cares about the theme doesn't need to rerender when the user changes, and vice versa.
* If your context value is an object, memoize it so it's not changing on every render (see this documentation)

A: I'm a Redux maintainer. @NicholasTower gave a great answer, but to give some more details: Context and Redux are very different tools that solve different problems, with some overlap. Context is not a "state management" tool.
It's a Dependency Injection mechanism, whose only purpose is to make a single value accessible to a nested tree of React components. It's up to you to decide what that value is, and how it's created. Typically, that's done using data from React component state, ie, useState and useReducer. So, you're actually doing all the "state management" yourself - Context just gives you a way to pass it down the tree. Redux is a library and a pattern for separating your state update logic from the rest of your app, and making it easy to trace when/where/why/how your state has changed. It also gives your whole app the ability to access any piece of state in any component. In addition, there are some distinct differences between how Context and (React-)Redux pass along updates. Context has some major perf limitations - in particular, any component that consumes a context will be forced to re-render, even if it only cares about part of the context value. Context is a great tool by itself, and I use it frequently in my own apps. But, Context doesn't "replace Redux". Sure, you can use both of them to pass data down, but they're not the same thing. It's like asking "Can I replace a hammer with a screwdriver?". No, they're different tools, and you use them to solve different problems. Because this is such a common question, I wrote an extensive post detailing the differences: Why React Context is Not a "State Management" Tool (and Why It Doesn't Replace Redux) To answer your questions specifically: * *Yes, updating a Context value forces all components consuming that context to re-render... but there's actually a good chance that they would be re-rendering anyway because React renders recursively by default, and setting state in a parent component causes all components inside of that parent to re-render unless you specifically try to avoid it. See my post A (Mostly) Complete Guide to React Rendering Behavior, which explains how all this works. 
*Yes, React-Redux does use Context internally... but only to pass down the Redux store instance, and not the current state value. This leads to very different update characteristics. Redux Toolkit, on the other hand, is just about the Redux logic and not related to any UI framework specifically. *Context was not introduced to be an alternative to Redux. There was a "legacy Context" API that existed in React well before Redux itself was created, and React-Redux used that up through v5. However, that legacy context API was broken in some key ways. The current React Context API was introduced in React 16.3 to fix the problems in legacy Context, not specifically to replace Redux. *React-Redux uses store subscriptions and selectors in each component instance, which is a completely different mechanism than how Context operates. I'd definitely suggest reading the posts I linked above, as well as these other related posts: * *Redux - Not Dead Yet! *When (and when not) to Reach for Redux *React, Redux, and Context Behavior. A: Original blog post by Mark Erikson: I'll just copy paste some info, but here's the original source and I recommend going directly here: https://blog.isquaredsoftware.com/2020/01/blogged-answers-react-redux-and-context-behavior/ More links here: * *https://github.com/markerikson/react-redux-links *https://blog.isquaredsoftware.com/2018/11/react-redux-history-implementation/ *https://medium.com/async/how-useselector-can-trigger-an-update-only-when-we-want-it-to-a8d92306f559 An explanation of how React Context behaves, and how React-Redux uses Context internally There's a couple assumptions that I've seen pop up repeatedly: * *React-Redux is "just a wrapper around React context" *You can avoid re-renders caused by React context if you destructure the context value Both of these assumptions are incorrect, and I want to clarify how they actually work so that you can avoid mis-using them in the future. 
For context behavior, say we have this initial setup:

function ProviderComponent() {
  const [contextValue, setContextValue] = useState({a: 1, b: 2});

  return (
    <MyContext.Provider value={contextValue}>
      <SomeLargeComponentTree />
    </MyContext.Provider>
  )
}

function ChildComponent() {
  const {a} = useContext(MyContext);
  return <div>{a}</div>
}

If the ProviderComponent were to then call setContextValue({a: 1, b: 3}), the ChildComponent would re-render, even though it only cares about the a field based on destructuring. It also doesn't matter how many levels of hooks are wrapping that useContext(MyContext) call. A new reference was passed into the provider, so all consumers will re-render. In fact, if I were to explicitly re-render with <MyContext.Provider value={{a: 1, b: 2}}>, ChildComponent would still re-render because a new object reference has been passed into the provider! (Note that this is why you should never pass object literals directly into context providers, but rather either keep the data in state or memoize the creation of the context value.) For React-Redux: yes, it uses context internally, but only to pass the Redux store instance down to child components - it doesn't pass the store state using context!
If you look at the actual implementation, it's roughly this but with more complexity:

function useSelector(selector) {
  const [, forceRender] = useReducer(counter => counter + 1, 0);
  const {store} = useContext(ReactReduxContext);

  const selectedValueRef = useRef(selector(store.getState()));

  useLayoutEffect(() => {
    const unsubscribe = store.subscribe(() => {
      const storeState = store.getState();
      const latestSelectedValue = selector(storeState);

      if (latestSelectedValue !== selectedValueRef.current) {
        selectedValueRef.current = latestSelectedValue;
        forceRender();
      }
    })
    return unsubscribe;
  }, [store])

  return selectedValueRef.current;
}

So, React-Redux only uses context to pass the store itself down, and then uses store.subscribe() to be notified when the store state has changed. This results in very different performance behavior than using context to pass data. There was an extensive discussion of context behavior in React issue #14110: Provide more ways to bail out of hooks. In that thread, Sebastian Markbage specifically said: My personal summary is that new context is ready to be used for low frequency unlikely updates (like locale/theme). It's also good to use it in the same way as old context was used. I.e. for static values and then propagate updates through subscriptions. It's not ready to be used as a replacement for all Flux-like state propagation. In fact, we did try to pass the store state in context in React-Redux v6, and it turned out to be insufficiently performant for our needs, which is why we had to rewrite the internal implementation to use direct subscriptions again in React-Redux v7. For complete detail on how React-Redux actually works, read my post The History and Implementation of React-Redux, which covers the changes to the internal implementation over time, and how we actually use context.
unknown
d12600
train
To answer your questions: "but it take more time..how to reduce this big time?" Before you can reduce that, you need to find out exactly where the extra time comes from. As you wrote in a comment, you are using a remote request to obtain the data. Sorting the array data you've provided works extremely fast, so I would assume you don't need to optimize the array sorting, but rather when and how you get the data from the remote source. One way to do that is caching; others are prefetching or parallel processing. But the details are not important at all until you've found out exactly where the extra time comes from, so that it is clear what is responsible for the "big time" - then reducing it can be looked into. Hope this is helpful so far, and feel free to add the missing information to your question. You can see the array-only code in action here, it's really fast:

* https://eval.in/51770

You can find the execution stats just below the output; an example from there: OK (0.008 sec real, 0.006 sec wall, 14 MB, 99 syscalls) A: You should really install a profiler (XHProf) and check what exactly takes so much time. I assume it is the sorting, because a foreach through the final array of 5 elements should be lightning fast. Why do you sort it then?
If the sole purpose of sorting is to find the 5 "lowest" items, then the fastest way would be to just find those 5 items directly:

$min5 = array();
foreach ($flights_result->json_data['response']['itineraries'] as $key => $value) {
    $amount = $value['price']['totalAmount'];

    // Just put first 5 elements in our result array
    if(count($min5) < 5) {
        $min5[$key] = $amount;
        continue;
    }

    // Find largest element of those 5 we check
    $maxMinK = null;
    foreach($min5 as $minK=>$minV) {
        if($maxMinK === null) {
            $maxMinK = $minK;
            continue;
        }
        if($minV > $min5[$maxMinK]) {
            $maxMinK = $minK;
        }
    }

    // If our current amount is less than largest one found so far,
    // we should remove the largest one and store the current amount instead
    if($amount < $min5[$maxMinK]) {
        unset($min5[$maxMinK]);
        $min5[$key] = $amount;
    }
}
asort($min5); // now we can happily sort just those 5 lowest elements

It will find the 5 items in about O(6n), which in your case should be better than a potential O(n²) with sorting. Then you may just use it like:

foreach($min5 as $key=>$minValue) {
    $itinerary = $flights_result->json_data['response']['itineraries'][$key]
    ...
}

This should be a lot faster, provided it was the sorting! So get that XHProf and check :) A: Here's what I would have done: <?php // Initialize html Array $html = array(); // Iterate Over Values, Using Key as Label. foreach( $value['inboundInfo'] as $inboundLabel => &$inboundValue ) { // Check for Special Cases While Adding to html Array if( $inboundLabel == 'flightNumbers' ) { $html[] = $inboundLabel . ': ' . implode( ', ', $inboundValue ); } elseif( $inboundLabel == 'flightClasses' ) { foreach( $inboundValue as $fcName => &$fcValue ) { $html[] = 'flightClasses ' . $fcName . ': ' . $fcValue; } } else { $html[] = $inboundLabel . ': ' . $inboundValue; } } // Don't Need Foreach to Complicate Things Here $html[] = 'carrier name: ' . $value[' carrier']['name']; $html[] = 'carrier code: ' .
$value[' carrier']['code']; // Add Price Info to Array foreach( $value['price'] as $priceLabel => &$price ) { $html[] = $priceLabel . ': ' . $price; } $html[] = ' -------- </br>'; // And Finally: echo implode( "<br/>\r\n", $html ); It's either that or write a recursive function to go through all the data. Also note, this only works if your data is in the order you want. A: I would do the following things: // This is optional, it just makes the example more readable (IMO) $itineraries = $flights_result->json_data['response']['itineraries']; // Then this should sort in the smallest way possible foreach ($flights_result->json_data['response']['itineraries'] as $key => $value) { $mid[$key] = $value['price']['totalAmount']; } // Sort only one array keeping the key relationships. asort($mid); // Using the keys in mid, since they are the same as the keys in the itineraries // we can get directly to the data we need. foreach($mid AS $key => $value) { $value = $itineraries[$key]; // Then continue as above, I think your performance issue is the sort ... } A: Buffer echo/print (ie, do not echo/print from inside loop): $buffer = ""; for ($i = 0; $i < 1000; $i++) { $buffer .= "hello world\n"; } print($buffer); This is probably negligable in your case, but worth doing for larger iteration counts. You don't need to sort the entire array if you're only interested in the 5 lowest prices. Loop through the array while maintaining a list of the keys with the 5 lowest prices. I'm too rusty in PHP to effectively provide a sample. A: I see two potential areas slowing down the code: * *Sorting the array *Echoing + string concatenating all those strings I would first find out which area is causing the lag. If profiling is difficult, you can insert debugging print statements into your code with each statement print the current time. This will give you a rough idea, which area of code is taking bulk of the time. Something like: echo "Time at this point is " . 
date(); Once we have those results we can optimize further. A: Try this... $arr_fli = array(0 => array('f_name'=> 'Flight1', 'price' => 2000), 1 => array('f_name'=> 'Flight1', 'price' => 5000), 3 => array('f_name'=> 'Flight1', 'price' => 7000), 4 => array('f_name'=> 'Flight1', 'price' => 4000), 5 => array('f_name'=> 'Flight1', 'price' => 6000), 6 => array('f_name'=> 'Flight1', 'price' => 800), 7 => array('f_name'=> 'Flight1', 'price' => 1000), 8 => array('f_name'=> 'Flight1', 'price' => 500) ); foreach($arr_fli as $key=>$flights) { $fl_price[$flights['price']] = $flights; } sort($fl_price); $i = 0; foreach($fl_price as $final) { $i++; print_r($final); echo '<br />'; if($i==5) { break; } } A: The first thing to understand is which part of you code takes long to execute. A quick and dirty but simple way is to use your logger. Which I assume you are using already to log all your other information you want to keep as your code runs, such as memory usage, disc usage, user requests, purchases made etc. Your logger can be a highly sofisticated tool (just google for "loggin framework" or the likes) or as simple as message writer to your file. To get a quick start, you can using something like that: class Logger { private $_fileName; private $_lastTime; public function __construct ($fileName) { $this->_fileName = $fileName; $this->_lastTime = microtime(true); } public function logToFile ($message) { // open file for writing by appending your message to the end of the file $file = fopen($this->_fileName, 'a'); fwrite($file, $message . "\n"); fclose($file); } // optional for testing - see your logs quickly without bothering with files public function logToConsole($message) { echo $message; } // optional $message to add, e.g. about what happened in your code public function logTimePassed ($message = '') { $timePassed = microtime(true) - $this->_lastTime; //$this->logToConsole("Milliseconds Passed since last log: " . $timePassed . ", " . $message . 
"\n"); $this->logToFile("Milliseconds Passed since last log: " . $timePassed . ", " . $message . "\n"); // reset time $this->_lastTime = microtime(true); } } // before your code starts $myLogFileName = 'my-logs'; // or whatever the name of the file to write in $myLogger = new Logger($myLogFileName); // ... your code goes ... // after something interesting $myLogger->logTimePassed(); // log time since last logging // ... your code continues ... // after something else, adding a message for your record $myLogger->logTimePassed("after sorting my flight array"); Now you can go over your code and place your loggers at all crucial points after anything that may potentially take too long. Add your messages to know what happened. There can potentially be many places for delays. Usually array manipulations done in-memory are blazing fast. But more attention needs to be paid for more time consuming operations such as: * *File reading/ writing, directory access *Database access *HTTP requests For instance, http requests - are your echos being sent immediately to browser over a network? Are they being sent every time in the loop? If yes, you probably want to avoid it. Either save them into array as indicated by other answers, or use Output Buffering. Further to minimize http requests to your server, you can try to put more functions on the client side and use ajax to only retrieve what is really needed from the server. Also don't forget about things that are hidden. For instance, I see object property access: $flights_result->json_data How is this object implemented? Is it in-memory only? Does it call to outside services? If yes, that may be your culprit. Then you have to work on optimizing that. Reducing the number of queries by caching them, optimizing your data so you only need to query the changes, etc. All this depends on the structure of you application and may have nothing to do with the rest of your code. 
If you want any help on that, you obviously need to put this information in your question. Basically, anything that is not done entirely in memory can cause delays. As for the in-memory operations, unless your data are huge or your operations are intense, their effect will likely be negligible. Still, if you have any doubt, simply place your loggers at all "suspicious" places.

Now to your specific code: it mixes all sorts of things, such as outside object access, array value access, array sorting, echo outputs, and maybe other things that are hidden. This makes it hard even to know where to place your time loggers. What would make it much easier is to use an object-oriented approach and follow, among others, the principle of Separation of Concerns (google that). That way your objects and their methods will have single responsibilities, and you will easily see who does what and where to place your loggers. I can't recommend highly enough the legendary book by "Uncle Bob" to learn more about it.

I presume your code comes from inside a function. According to Uncle Bob:

The first rule of functions is that they should be small. The second rule is that they should be smaller than that.

and

Functions should do one thing. They should do it well. They should do it only.

When splitting your code into more functions, you are forced to give those functions meaningful names, which can greatly improve the readability of your code. Needless to say, all functions not intended for use outside the class should be private, so you can easily re-use your classes via inheritance.

There is a lot that can be done, but just to get you started, here are some ideas:

* Encapsulate your data array into an object with methods that do only what you need.
Something like that:

class Itineraries
{
    private $_itineraryArray;

    public function __construct(array $itineraryArray)
    {
        $this->_itineraryArray = $itineraryArray;
    }

    public function getNCheapest($number)
    {
        $this->_sortByPrice();
        $result = array();
        for ($i = 0; $i < $number; $i++) {
            $result[] = $this->_itineraryArray[$i];
        }
        return $result;
    }

    private function _sortByPrice()
    {
        $prices = $this->_getPriceArray();
        array_multisort($prices, SORT_ASC, $this->_itineraryArray);
    }

    private function _getPriceArray()
    {
        // collect the price of every entry so array_multisort can use it as the sort key
        $prices = array();
        foreach ($this->_itineraryArray as $entry) {
            $prices[] = $this->_getPrice($entry);
        }
        return $prices;
    }

    private function _getPrice($entry)
    {
        return $entry['price']['totalAmount']; // or whatever it is
    }
}

// Then you instantiate your object:
$myItineraries = new Itineraries($flights_result->json_data['response']['itineraries']); // or whatever

Notice that should your array structure completely change, you will only need to adjust the method _getPrice and the single line that instantiates your object! The rest of your code will remain intact! This is part of your Model layer in the Model-View-Controller paradigm. You can google that to find lots of information. The model knows how to handle its data but knows nothing about the source of them: no browser output, no HTTP requests, etc.

* Then everything responsible for generating user output goes into your View layer, or into a so-called Presenter layer, where you have other objects dealing with it. All your 'echo's go here. Something like that:

class Presenter
{
    public function outputNCheapest($myItineraries, $number)
    {
        $outputData = $myItineraries->getNCheapest($number);
        echo $this->_generateOutput($outputData);
    }

    private function _generateOutput($outputData)
    {
        $html = ''; // your HTML string is generated here
        return $html;
    }
}

However, I am personally against generating HTML on the server. It is just a waste of bandwidth and of the time of your users waiting for a response.
Every time I am waiting for my browser to reload a whole page, I want the developer to read this. You can output JSON instead of HTML and request it with ajax or in other ways on the client side. Furthermore, your users' browsers can cache some of the data instead of requesting them again, and can decide what to ask for. That will further minimize the delay for your users, which at the end of the day is precisely what you are concerned with.
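As a sketch of the JSON variant, the Presenter above could emit the raw data instead of HTML. The method name outputNCheapestAsJson is illustrative, not part of the original example:

```php
<?php
// Send the data as JSON and let the client render it.
// $outputData is assumed to be the array returned by Itineraries::getNCheapest().
function outputNCheapestAsJson($outputData)
{
    header('Content-Type: application/json');
    echo json_encode($outputData);
}

// e.g. outputNCheapestAsJson(array(array('price' => 500), array('price' => 800)));
// would emit: [{"price":500},{"price":800}]
```

The client-side ajax handler then decides how (and whether) to redraw the page, which is what enables the caching described above.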