d15001
val
The deploy user needs to be a sudo user in order to restart the puma systemctl service. You can fix the issue by creating a new file inside the /etc/sudoers.d directory and adding this line: deploy ALL=(ALL) NOPASSWD:/bin/systemctl
unknown
d15002
val
This can be thought of as finding the longest path through a DAG. Each position in the string is a node and each substring match is an edge. You can trivially prove by induction that for any node on the optimal path, the concatenation of the optimal path from the beginning to that node and from that node to the end is the same as the optimal path. Thanks to that, you can just keep track of the optimal path for each node and make sure you have visited all edges that end in a node before you start to consider paths containing it. Then you just have the issue of finding all edges that start from a node, i.e. all substrings that match at a given position. If you already know where the substring matches are, then it's as trivial as building a hash table. If you don't, you can still build a hashtable if you use Rabin-Karp. Note that with this you'll still visit all the edges in the DAG, for O(E) complexity. In other words, you'll have to consider once each substring match that's possible in a sequence of connected substrings from start to end. You could do better than this by preprocessing the substrings to find ways to rule out some matches. I have my doubts whether any general-case complexity improvements can come from this, and any practical improvements depend heavily on your data distribution.

A: O(N+M) solution:

    Set f[1..N] = -1
    Set f[0] = 0
    for a = 0 to N-1
        if f[a] >= 0
            For each substring beginning at a
                Let b be the last index of the substring, and c its score
                If f[a]+c > f[b+1]
                    Set f[b+1] = f[a]+c
                    Set g[b+1] = [substring number]

Now f[N] contains the answer, or -1 if no set of substrings spans the string. To get the substrings:

    b = N
    while b > 0
        Get substring number from g[b]
        Output substring number
        b = b - (length of substring)

A: It is not clear whether the M substrings are given as sequences of characters or as indices into the input string, but the problem doesn't change much because of that. Let us have an input string S of length N, and M input strings Tj.
Let Lj be the length of Tj, and Pj the score given for string Tj.

This is called Dynamic Programming, or DP. You keep an array res of ints of length N, where the i-th element represents the score one can get if he has only the substring starting from the i-th element (for example, if the input is "abcd", then res[2] will represent the best score you can get out of "cd"). Then, you iterate through this array from the end to the beginning, and check whether you can start string Tj at the i-th character. If you can, then a result of (res[i + Lj] + Pj) is clearly achievable. Iterating over all Tj, res[i] = max(res[i + Lj] + Pj) for all Tj which can be applied at the i-th character. res[0] will be your final answer.

A: Inputs:

    N, the number of chars in a string
    e[0..N-1]: (b,c) an element of set e[a] means [a,b) is a substring with score c.
        (If all substrings are possible, then you could just have c(a,b).)

By e.g. [1,2) we mean the substring covering the 2nd letter of the string (half-open interval). (Empty substrings are not allowed; if they were, then you could handle them properly only if you allowed them to be "taken" at most k times.)

Outputs:

    s[i]: the score of the best substring covering of [0,i)
    a[i]: [a[i],i) is the last substring used to cover [0,i); else NULL

Algorithm - O(N^2) if the intervals e are not sparse; O(N+E) where E is the total number of allowed intervals. This is effectively finding the best path through an acyclic graph:

    for i = 0 to N:
        a[i] <- NULL
        s[i] <- 0
    a[0] <- 0
    for i = 0 to N-1
        if a[i] != NULL
            for (b,c) in e[i]:
                sib <- s[i]+c
                if sib > s[b]:
                    a[b] <- i
                    s[b] <- sib

To yield the best covering triples (a,b,c) where the cost of [a,b) is c:

    i <- N
    if (a[i] == NULL): error "no covering"
    while (a[i] != 0):
        from <- a[i]
        yield (from, i, s[i]-s[from])
        i <- from

Of course, you could store the pair (sib,c) in s[b] and save the subtraction.
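The forward DP the answers describe can be sketched in runnable Python (names here are illustrative, not taken from either answer):

```python
def best_covering(n, edges):
    """edges[a] is a list of (b, c): substring [a, b) with score c.
    Returns the best total score covering [0, n), or None if no
    set of substrings spans the string."""
    NEG = float("-inf")
    f = [NEG] * (n + 1)   # f[i]: best score covering [0, i)
    f[0] = 0
    g = [None] * (n + 1)  # g[b]: start of the last substring ending at b
    for a in range(n):
        if f[a] == NEG:   # position a is unreachable
            continue
        for b, c in edges.get(a, []):
            if f[a] + c > f[b]:
                f[b] = f[a] + c
                g[b] = a
    return f[n] if f[n] != NEG else None

# "ab" (score 2) + "cd" (score 3) covers a 4-char string for score 5;
# the substring [1, 3) with score 10 cannot be part of any full covering.
edges = {0: [(2, 2)], 2: [(4, 3)], 1: [(3, 10)]}
print(best_covering(4, edges))  # 5
```

The backtracking step from the answers would follow g from n back to 0 to recover the substrings themselves.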
unknown
d15003
val
On class Category add this code:

    public override string ToString(){
        return this.Name;
    }

When the control needs a string for the object, the Category object will supply the Name property.

A: The column binding for the data grid view might not be right. Please try the following in the design view:

1. Right click on your data grid view and select "Edit Columns".
2. Select the column for which the data is not coming through correctly.
3. In "DataPropertyName" under "Data", enter "Name", i.e. the property name of your Item class.
unknown
d15004
val
Nothing magic ModelA.objects.filter(att1=queryset of modelB) A: say you have object B with fields att2 and att3 class modelA(models.Model): att1 = models.ForeignKey(modelB) class modelB(models.Model): att2 = models.CharField(max_length=255) att3 = models.CharField(max_length=255) then you filter by doing: results = modelA.objects.filter(att1__att2='foo') hope this helps
unknown
d15005
val
IIUC, count the values in the Algorithm columns and keep those with a count greater than 1:

    df = pd.DataFrame({'T':['AAABBX','AAABBX','AAABBX'],
                       'Algorithm1': ['AX','AB','AAB'],
                       'Algorithm2': ['BX','AAX','AB'],
                       'Algorithm3': ['AX','AB','AAX']})

    s = df.filter(like='Algorithm').stack().value_counts()
    m = s.gt(1)
    print (s)
    AB     3
    AX     2
    AAX    2
    AAB    1
    BX     1
    dtype: int64

    df['new'] = pd.Series(s.index[m])[:m.sum()]
    print (df)
            T Algorithm1 Algorithm2 Algorithm3  new
    0  AAABBX         AX         BX         AX   AB
    1  AAABBX         AB        AAX         AB   AX
    2  AAABBX        AAB         AB        AAX  AAX

If the number of such values is greater than the length of the DataFrame:

    df = pd.DataFrame({'T':['AAABBX','AAABBX','AAABBX'],
                       'Algorithm1': ['AX','AB','AAB'],
                       'Algorithm2': ['AAX','AAX','AB'],
                       'Algorithm3': ['AX','AB','AAB']})

    s = df.filter(like='Algorithm').stack().value_counts()
    m = s.gt(1)
    print (s)
    AB     3
    AAB    2
    AX     2
    AAX    2
    dtype: int64

    df['new'] = pd.Series(s.index[m])[:m.sum()]
    print (df)
            T Algorithm1 Algorithm2 Algorithm3  new
    0  AAABBX         AX        AAX         AX   AB
    1  AAABBX         AB        AAX         AB  AAB
    2  AAABBX        AAB         AB        AAB   AX

If the number of such values is less than the length of the DataFrame:

    df = pd.DataFrame({'T':['AAABBX','AAABBX','AAABBX'],
                       'Algorithm1': ['AX','AB','AX'],
                       'Algorithm2': ['AA','AAX','AB'],
                       'Algorithm3': ['AX','AB','AX']})

    s = df.filter(like='Algorithm').stack().value_counts()
    m = s.gt(1)
    print (s)
    AX     4
    AB     3
    AAX    1
    AA     1
    dtype: int64

    df['new'] = pd.Series(s.index[m])[:m.sum()]
    print (df)
            T Algorithm1 Algorithm2 Algorithm3  new
    0  AAABBX         AX         AA         AX   AX
    1  AAABBX         AB        AAX         AB   AB
    2  AAABBX         AX         AB         AX  NaN
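The core of this answer, counting values across the Algorithm columns and keeping those seen more than once, can be reproduced without pandas using collections.Counter (a plain-Python sketch of the same idea, not the answer's code):

```python
from collections import Counter

# stand-in for df.filter(like='Algorithm') on the first example frame
rows = [
    {"Algorithm1": "AX", "Algorithm2": "BX", "Algorithm3": "AX"},
    {"Algorithm1": "AB", "Algorithm2": "AAX", "Algorithm3": "AB"},
    {"Algorithm1": "AAB", "Algorithm2": "AB", "Algorithm3": "AAX"},
]
# like .stack().value_counts(): count every cell value
counts = Counter(v for row in rows for v in row.values())
# like s[s.gt(1)]: keep values that appear more than once, most frequent first
repeated = [val for val, n in counts.most_common() if n > 1]
print(repeated)  # ['AB', 'AX', 'AAX']
```

This matches the first s output above (AB 3, AX 2, AAX 2); ties keep first-encountered order, mirroring value_counts.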
unknown
d15006
val
The return value of a stored procedure is used to return the exit code, which is an int. Your computation results in a value between 0 and 1, so you need to define an output variable of type float or numeric(10,4) and return that value:

    Alter Proc [dbo].[FindAnnualLeave1]
        @Empid varchar(20),
        @AnnualPending numeric(10,4) OUTPUT
unknown
d15007
val
Although this has already been answered, since you are new to all this stuff, here is how to debug it. First get the pid of the current shell (using ps):

      PID TTY          TIME CMD
     1611 pts/0    00:00:00 su
     1619 pts/0    00:00:00 bash
     1763 pts/0    00:00:00 ps

Then, from some other shell, attach strace (the system call tracer) to the required pid (here 1619):

    strace -f -o <output_file> -p 1619

Run both the commands that you tried, then open the output file and look for exec-family calls for the required process, here grep. The output on my machine is something like:

    1723 execve("/bin/grep", ["grep", "--color=auto", "p{2}", "foo"], [/* 19 vars */]) = 0
    1725 execve("/bin/grep", ["grep", "--color=auto", "p\\{2\\}", "foo"], [/* 19 vars */]) = 0

Now you can see the difference in how grep was executed in both cases and can figure out the problem yourself. :) Still, the -e flag mystery is yet to be solved....

A: Without the quotes, the shell will try to expand the pattern. In your case the curly brackets '{}' have a special meaning in the shell, much like the asterisk '*', which expands as a wildcard.

A: With quotes, your complete regex gets passed directly to grep. Without the quotes, grep sees your regex as p{2}. Edit: To clarify, without the quotes your backslashes are being removed by the shell before your regex is passed to grep. Try:

    echo grep p\{2\} test.txt

And you'll see your output as...

    grep p{2} test.txt

The quotes prevent the shell from consuming the escape characters before they get to grep. You could also double your backslashes and it will work without quotes:

    grep p\\{2\\} test.txt

A: From the grep man page: In basic regular expressions the meta-characters ?, +, {, |, (, and ) lose their special meaning; instead use the backslashed versions \?, \+, \{, \|, \(, and \).
So these two become functionally equivalent: egrep p{2} and grep "p\{2\}". The first uses EREs (Extended Regular Expressions); the second uses BREs (Basic Regular Expressions). In your example, because you're using grep (which uses BREs when you don't pass the -E switch) and the pattern is enclosed in quotes, "\{" is interpreted as a special BRE character. Your second instance doesn't work because you're just looking for the literal string p{2}, which doesn't exist in your file. You can demonstrate that grep is interpreting your string as a BRE by trying:

    grep "p\{2"

grep will complain:

    grep: Unmatched \{

A: The first one greps the pattern using regex, here pp:

    echo "apple" | grep 'p\{2\}'

The second one greps the pattern literally, here p{2}:

    echo "ap{2}le" | grep p\{2\}
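Python's re module uses extended-style repetition syntax, so it mirrors the ERE column of the comparison above; a quick way to check what p{2} matches versus the literal text (illustrative, not from the answers):

```python
import re

# ERE-style: {2} is a repetition operator, so p{2} matches the "pp" in "apple"
print(bool(re.search(r"p{2}", "apple")))              # True
# Escaped, the pattern matches the literal characters p{2}, as in "ap{2}le"
print(bool(re.search(re.escape("p{2}"), "ap{2}le")))  # True
print(bool(re.search(re.escape("p{2}"), "apple")))    # False
```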
unknown
d15008
val
You can create a file with the different sets of parameters, like:

    $ cat input_param.txt
    'param_1' 'param_2' 'param_3'
    'value_1' 'value_2' 'value_3'
    'value_1' 'value_3' 'value_4'

and call the script in a loop:

    while read param; do echo ./shell_script_program.sh ${param} | sh ; done < input_param.txt

A: You can create a wrapper script for this. Create an environment file with the parameter values, e.g. a .env file. You can modify the parameter values by making changes to the .env file:

    $ cat .env
    param_1=value1
    param_2=value2

In the wrapper, load the environment file. Wrapper script code:

    $ cat wrapper.sh
    . ./.env
    sh shell_script_program.sh 'value_1' 'value_2' 'value_3'
    # Check the exit status
    echo $?
    sh shell_script_program.sh 'param_1' 'param_2' 'param_3'
    echo $?

You can execute the wrapper script with sh wrapper.sh. Based on the exit status you can decide whether to proceed or not.
unknown
d15009
val
In most cases, while uploading images we rename them with a timestamp because of its uniqueness. But for searching, dealing with a timestamp is much harder, so also add a class name to the div using the file name:

    ...
    var div = document.createElement("div");
    div.id = Date.now();
    div.className += ' ' + file.name;
    ...

I am sure this will be helpful. Thanks

A: Not sure I get the question :) Try instead: $(div).attr('id', 'yourIDHere') Or, since you never use your defined var file, could it not be? div.id = file.name; You can generate UUIDs via JS, see here
unknown
d15010
val
Finally figured this out. read_only_fields on my SubThingSerializer prevented the data from getting through validation, resulting in empty dicts being created. I used read_only_fields to prevent non-existent sub_thing data from being passed to my code, but I guess I'll need to find another way.
unknown
d15011
val
Change the line where you are using the multidimensional array. If you use $amenitytype['Property_name'], there is no property named Property_name in the $amenitytype array. Change

    if(strpos(trim($amenitytype['Property_name']), trim($list['Name'])) == TRUE):?>

to

    if(strpos(trim($amenitytype[0]['Property_name']), trim($list['Name'])) !== false):?>

A: You should use !== FALSE instead of == TRUE, as strpos may return 0, which will be considered not true, while in fact it points to the first character in the haystack. Also, your code isn't clear about what $amenitytype contains, so you may have to replace this line

    <?php if(strpos(trim($amenitytype['Property_name']), trim($list['Name'])) == TRUE):?>

with this

    <?php if(strpos(trim($amenitytypes[0]['Property_name']), trim($list['Name'])) !== FALSE):?>
unknown
d15012
val
I may be wrong here, but saving that as a .csv file would not be easy. I don't know Python very much but I'll leave my two cents here:

    import urllib.request  # lib that handles URLs
    target_url = "https://www.census.gov/construction/bps/txt/tb2u2010.txt"
    data = urllib.request.urlopen(target_url)
    with open('output.txt', 'w') as f:  # change this to .csv
        f.write(data.read().decode('utf-8'))

This will create a .txt with everything from the website, still in its original plain-text layout.
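If the goal really is a .csv, here is a sketch of the missing conversion step using the csv module. The whitespace-split rule below is a guess; the real census file has its own fixed-width layout that would need proper parsing:

```python
import csv
import io

# stands in for data.read() from the snippet above
raw = b"Alabama 1234 567\nAlaska 89 10\n"
text = raw.decode("utf-8")

buf = io.StringIO()  # in place of open('output.csv', 'w', newline='')
writer = csv.writer(buf)
for line in text.splitlines():
    writer.writerow(line.split())  # naive split; real file needs real parsing
print(buf.getvalue())
```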
unknown
d15013
val
When using the bash command line in the Azure portal, it means you use the Azure Cloud Shell, not your local machine. So the SSH key you created is stored in the cloud shell with the path /home/username/.ssh/, not the local machine. Here is a screenshot that creates the SSH key in the Azure Cloud Shell:
unknown
d15014
val
Check your web.config. Likely the "AutomaticDataBind" property is set to "false" in one environment, and "true" on your dev box. I cannot be sure, obviously, but I've been hammered by a similar issue in the past and the symptoms were exactly like you describe here :-) P.S. Sitecore defaults this value to false. A: Having poked around the Sitecore forums some more, I came across this blog post explaining one potential solution. I added <type>System.Web.UI.WebControls.GridView</type> to the <typesThatShouldNotBeExpanded> section of Web.config and it seems to work for us. It seems to be to do with Sitecore's page layout rendering pipeline, where it expands sub-layouts and internal placeholders to generate the full page rendering. It accesses the .Net controls on the page and pokes them around a bit, which can cause some controls to not work correctly. There is an article on the SDN about this, although you can only read it if you have an account with sufficient privileges. Hope this might help any other Sitecore users out there in future.
unknown
d15015
val
Check out the signature of your forEach consumer function. The second argument is the index.

    poster.forEach((p, idx) => {
        p.addEventListener('click', function(){
            videoTag.src = xResult[movieLanguageUrl][idx];
        });
    });

Check out this link. A: Get the index value in the second parameter of the forEach function and it should work:

    poster.forEach((p, index) => {
        p.addEventListener('click', () => {
            videoTag.src = xResult[movieLanguageUrl][index + 1];
        });
    });
unknown
d15016
val
First, you need to call QFile::open() before calling readAll(). Second point: you cannot write to a file in Qt Resources. If you want a cross-platform way to save settings and such for your software, take a look at QStandardPaths::writableLocation() and QSettings. Note that QSettings won't handle JSON out of the box, but it will handle all the reads and writes to file for you (and the file format and location too, if you take care of setting QCoreApplication::applicationName and QCoreApplication::organizationName).
unknown
d15017
val
According to RFC 1738, While the syntax for the rest of the URL may vary depending on the particular scheme selected, URL schemes that involve the direct use of an IP-based protocol to a specified host on the Internet use a common syntax for the scheme-specific data: //user:password@host:port/url-path Some or all of the parts "user:password@", ":password", ":port", and "/url-path" may be excluded. The scheme specific data start with a double slash "//" to indicate that it complies with the common Internet scheme syntax. A: // Indicates that a contact to a server is to be achieved. (For example, when sending email the notation 'mailto:<email address>...', without slashes, could be used). Note that this doesn't mean a connection between a browser and server. When a browser has sent a request, there is no connection between the browser and the server.
unknown
d15018
val
Your code would work if you changed text to textContent (or innerText on old IE, but it's not quite the same thing) and 4 to 3. But, it's fragile. You can be more precise with querySelector: var div = document.querySelector(".Wrapper div"); console.log(div.textContent); Live Example: var div = document.querySelector(".Wrapper div"); console.log(div.textContent); <div class="Wrapper"> <h3 class="date" id="date">{{date}}</h3> <div class="descriptionWrapper"> <p class="jobDescription">{{job}}</p> <p class="jobAreaDescription">{{jobArea}}</p> <p class="placeDescription">{{ort}}</p> <p class="kindDescription">{{anstellung}}</p> </div> <div class="jobLink"> {{#custom_link jobLink}} {{linkText}} {{/custom_link}} </div> </div> Live your version, that only looks at the first such match. If you want all of them, you'll need a loop: var wrappers = document.querySelectorAll(".Wrapper"); // Or .getElementsByClassName("Wrapper"); for (var i = 0; i < wrappers.length; ++i) { var div = wrappers[i].querySelector("div"); console.log(div.textContent); } Live Example: var wrappers = document.querySelectorAll(".Wrapper"); // Or .getElementsByClassName("Wrapper"); for (var i = 0; i < wrappers.length; ++i) { var div = wrappers[i].querySelector("div"); console.log(div.textContent); } <div class="Wrapper"> <h3 class="date" id="date">{{date}}</h3> <div class="descriptionWrapper"> <p class="jobDescription">{{job}} first</p> <p class="jobAreaDescription">{{jobArea}}</p> <p class="placeDescription">{{ort}}</p> <p class="kindDescription">{{anstellung}}</p> </div> <div class="jobLink"> {{#custom_link jobLink}} {{linkText}} {{/custom_link}} </div> </div> <div class="Wrapper"> <h3 class="date" id="date">{{date}}</h3> <div class="descriptionWrapper"> <p class="jobDescription">{{job}} second</p> <p class="jobAreaDescription">{{jobArea}}</p> <p class="placeDescription">{{ort}}</p> <p class="kindDescription">{{anstellung}}</p> </div> <div class="jobLink"> {{#custom_link jobLink}} {{linkText}} 
{{/custom_link}} </div> </div> <div class="Wrapper"> <h3 class="date" id="date">{{date}}</h3> <div class="descriptionWrapper"> <p class="jobDescription">{{job}} third</p> <p class="jobAreaDescription">{{jobArea}}</p> <p class="placeDescription">{{ort}}</p> <p class="kindDescription">{{anstellung}}</p> </div> <div class="jobLink"> {{#custom_link jobLink}} {{linkText}} {{/custom_link}} </div> </div> In a modern browser (or with the polyfill approach I describe in this other answer), you can use forEach instead of the for loop (or even for-of in ES2015+), which makes it a bit more concise: document.querySelectorAll(".Wrapper").forEach(function(wrapper) { var div = wrapper.querySelector("div"); console.log(div.textContent); }); Live Example: document.querySelectorAll(".Wrapper").forEach(function(wrapper) { var div = wrapper.querySelector("div"); console.log(div.textContent); }); <div class="Wrapper"> <h3 class="date" id="date">{{date}}</h3> <div class="descriptionWrapper"> <p class="jobDescription">{{job}} first</p> <p class="jobAreaDescription">{{jobArea}}</p> <p class="placeDescription">{{ort}}</p> <p class="kindDescription">{{anstellung}}</p> </div> <div class="jobLink"> {{#custom_link jobLink}} {{linkText}} {{/custom_link}} </div> </div> <div class="Wrapper"> <h3 class="date" id="date">{{date}}</h3> <div class="descriptionWrapper"> <p class="jobDescription">{{job}} second</p> <p class="jobAreaDescription">{{jobArea}}</p> <p class="placeDescription">{{ort}}</p> <p class="kindDescription">{{anstellung}}</p> </div> <div class="jobLink"> {{#custom_link jobLink}} {{linkText}} {{/custom_link}} </div> </div> <div class="Wrapper"> <h3 class="date" id="date">{{date}}</h3> <div class="descriptionWrapper"> <p class="jobDescription">{{job}} third</p> <p class="jobAreaDescription">{{jobArea}}</p> <p class="placeDescription">{{ort}}</p> <p class="kindDescription">{{anstellung}}</p> </div> <div class="jobLink"> {{#custom_link jobLink}} {{linkText}} {{/custom_link}} </div> </div> 
Or with ES2015+ (again, you may need to do some polyfilling per above): for (const wrapper of document.querySelectorAll(".Wrapper")) { const div = wrapper.querySelector("div"); console.log(div.textContent); } Live Example: for (const wrapper of document.querySelectorAll(".Wrapper")) { const div = wrapper.querySelector("div"); console.log(div.textContent); } <div class="Wrapper"> <h3 class="date" id="date">{{date}}</h3> <div class="descriptionWrapper"> <p class="jobDescription">{{job}} first</p> <p class="jobAreaDescription">{{jobArea}}</p> <p class="placeDescription">{{ort}}</p> <p class="kindDescription">{{anstellung}}</p> </div> <div class="jobLink"> {{#custom_link jobLink}} {{linkText}} {{/custom_link}} </div> </div> <div class="Wrapper"> <h3 class="date" id="date">{{date}}</h3> <div class="descriptionWrapper"> <p class="jobDescription">{{job}} second</p> <p class="jobAreaDescription">{{jobArea}}</p> <p class="placeDescription">{{ort}}</p> <p class="kindDescription">{{anstellung}}</p> </div> <div class="jobLink"> {{#custom_link jobLink}} {{linkText}} {{/custom_link}} </div> </div> <div class="Wrapper"> <h3 class="date" id="date">{{date}}</h3> <div class="descriptionWrapper"> <p class="jobDescription">{{job}} third</p> <p class="jobAreaDescription">{{jobArea}}</p> <p class="placeDescription">{{ort}}</p> <p class="kindDescription">{{anstellung}}</p> </div> <div class="jobLink"> {{#custom_link jobLink}} {{linkText}} {{/custom_link}} </div> </div>
unknown
d15019
val
Google Chrome has a built-in feature that lets you translate an entire page into your language. What you could do is set the language in the application to, say, English, go to the site in Chrome and auto-translate back into your language. However, this could be tricky and may require something like the https://chrome.google.com/webstore/detail/google-translate/aapbdbdomjkkjkaonfhkkikfgjllcleb?hl=en Google Chrome extension to fulfill your requirement. You can automate this utility using AutoIt. To add an extension for chromedriver, please refer to these links: Create .CRX file and Add extension to Chrome WebDriver. A possible workaround could be to use custom configs like these. These Python examples should help. Firefox:

    from selenium import webdriver
    profile = webdriver.FirefoxProfile()
    profile.set_preference('intl.accept_languages', 'es')
    driver = webdriver.Firefox(profile)

Chrome:

    from selenium.webdriver.chrome.options import Options
    chrome_options = Options()
    chrome_options.add_argument('--lang=es')
    driver = webdriver.Chrome(chrome_options=chrome_options)
unknown
d15020
val
Credits to martin clayton for his comment, moving fail in the sequential fixes the issue. <macrodef name="searchfile"> <attribute name="file" /> <attribute name="path" default="${custom.buildconfig},${wst.basedir}" /> <attribute name="name" /> <attribute name="verbose" default="false" /> <sequential> <first id="@{name}"> <multirootfileset basedirs="@{path}" includes="@{file}" erroronmissingdir="false" /> </first> <property name="@{name}" value="${toString:@{name}}" /> <echo>property @{name}=${toString:@{name}}</echo> <fail message="@{file} was not found in ${custom.buildconfig},${wst.basedir}, customdir=${customdir}"> <condition> <equals arg1="${toString:@{name}}" arg2=""/> </condition> </fail> </sequential> </macrodef>
unknown
d15021
val
Which version of Total.js are you using? Try to install latest beta version of Total.js framework as a global module: $ npm install -g total.js@beta and try again --translate. If your problem still persists, write me an email at [email protected]
unknown
d15022
val
It's solved by adding .htaccess
unknown
d15023
val
Your heart function assumes the turtle's heading is 0, but in the course of drawing a heart, the heading changes due to the left/right calls. One solution is to reset the heading with t.setheading(0) at the start of the function. Also, your loop/if combo is overcomplicated. I suggest either removing the ifs, or removing both the loop and the ifs and using 3 separate heart calls. Here's a simplified version:

    import turtle

    def heart(x):
        t.penup()
        t.goto(x, -100)
        t.pendown()
        t.color("black", "red")
        t.setheading(0)
        t.begin_fill()
        t.left(45)
        t.forward(100)
        t.circle(50, 180)
        t.right(90)
        t.circle(50, 180)
        t.forward(100)
        t.end_fill()

    t = turtle.Turtle()
    t.hideturtle()

    for x in range(-250, 251, 250):
        heart(x)

    turtle.exitonclick()

Consider making the size and y values parameters for heart, and optionally t as well. As is, it's a bit on the hardcoded side.
unknown
d15024
val
You stated your options correctly, either low interval/waiting or hooking your own custom OnValidateIdentity. Here's a similar question: Propagate role changes immediately
unknown
d15025
val
Personally, I don't see any reason to over-hide this data, as it supplies no clue to people who view it on how to utilize these symbols to do something "bad". However, if that's really a huge problem for you, i.e. you are afraid of being reverse-engineered somehow, then you may opt for code obfuscation. For example, Semantic Designs offers a product for these purposes and claims that it's of high quality. I've never had a chance to try that stuff myself. Keep in mind that it's commercial.
unknown
d15026
val
The documentation states that undefined will be omitted from the results. https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/stringify#Description If undefined, a Function, or a Symbol is encountered during conversion it is either omitted (when it is found in an object) or censored to null (when it is found in an array). JSON.stringify() can also just return undefined when passing in "pure" values like JSON.stringify(function(){}) or JSON.stringify(undefined). I was running the following example in the JavaScript console, and was expecting to see the word "undefined" in the template. console.log(JSON.stringify(undefined)); // prints "undefined" console.log(typeof JSON.stringify(undefined)); // does not print "string" it prints "undefined" I was mistakenly thinking the console message "undefined" was a string value. A: Maybe in the controller set the value for that particular variable and if it's not undefined pipe json else just print undefined as value. {{ variable !== undefined ? (variable | json) : variable }}
unknown
d15027
val
If you can't do it with Styled Maps (and I don't see how you can right now), you could use Styled Maps to hide the state boundaries:

    {
      featureType: 'administrative.province',
      elementType: 'geometry.stroke',
      stylers: [{visibility: 'off'}]
    },

Then add your own (note that you need a source of boundaries reasonably consistent with the map tiles). Example of using borders from a FusionTable: proof of concept fiddle, proof of concept fiddle with styled borders. Code snippet:

    function initMap() {
      var map = new google.maps.Map(document.getElementById('map'), {
        center: {lat: 40.674, lng: -73.945},
        zoom: 7,
        styles: [{
          featureType: 'administrative.province',
          elementType: 'geometry.stroke',
          stylers: [{visibility: 'off'}]
        }]
      });
      var layer = new google.maps.FusionTablesLayer({
        query: {
          select: 'kml_4326',
          from: '19lLpgsKdJRHL2O4fNmJ406ri9JtpIIk8a-AchA'
        },
        map: map
      });
    }

    html, body, #map {
      height: 100%;
      width: 100%;
      margin: 0;
      padding: 0;
    }

    <div id="map"></div>
    <!-- Replace the value of the key parameter with your own API key. -->
    <script async defer src="https://maps.googleapis.com/maps/api/js?callback=initMap"></script>
unknown
d15028
val
You can write your own implementation of the ResponseCreator that sets up a number of URIs and responses (for asynchronous requests) and returns the matching response based on the input URI. I've done something similar and it works.
unknown
d15029
val
You can encapsulate the query results in an array and then print it:

    $sql = "SELECT item, cost, veg, spicy_level FROM food1";
    $result = $conn->query($sql);
    $a = array();
    while($row = $result->fetch_assoc()) {
        if(!isset($a['food1'])) $a['food1'] = array();
        array_push($a['food1'], $row);
    }
    echo json_encode($a);
    ?>

A: Your code should be:

    $sql = "SELECT item, cost, veg, spicy_level FROM food1";
    $result = $conn->query($sql);
    $food['food1'] = array();
    while($row = $result->fetch_assoc()) {
        $food['food1'][] = $row;
    }
    echo json_encode($food);

A: Don't call json_encode each time through the loop. Put all the rows into an array, and then encode that.

    $food = array();
    while ($row = $result->fetch_assoc()) {
        $food[] = $row;
    }
    echo json_encode(array('food1' => $food));
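The accumulate-then-encode pattern all three answers converge on, sketched language-agnostically in Python (the rows are made-up stand-ins for fetch_assoc() results):

```python
import json

# made-up rows standing in for the food1 result set
rows = [
    {"item": "soup", "cost": 10, "veg": 1, "spicy_level": 0},
    {"item": "curry", "cost": 20, "veg": 1, "spicy_level": 2},
]

payload = {"food1": []}
for row in rows:               # append every row first...
    payload["food1"].append(row)
encoded = json.dumps(payload)  # ...then encode exactly once, outside the loop
print(encoded)
```

Encoding once at the end is what yields a single valid JSON document instead of several concatenated fragments.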
unknown
d15030
val
As stated in the answer that PeterM linked to, the default allocation size for sequence generators is 50, which is the root cause of the problem, since you defined the sequence with an increment of 1. I'll just comment on the negative values issue. An allocation size of 50 (set in SequenceGenerator.allocationSize) means that Hibernate will: * *create the sequence with INCREMENT BY 50 (if you let it) *grab the next value n from the sequence *start allocating ids from n-50 till n *repeat the two steps above once it runs out of numbers Since you've made the sequence increment by 1, it's easy to see where the negative values come from (and why constraint violations will follow). If you tried inserting more than 50 rows, you'd run into constraint violations without having to restart the server.
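The arithmetic behind those negative values can be simulated directly; this is a simplification of the pooled allocation described above, assuming each id block ends at the value fetched from the sequence:

```python
def allocated_ids(fetched_values, allocation_size=50):
    """For each value n fetched from the sequence, hand out
    allocation_size ids ending at n (a simplified pooled optimizer)."""
    ids = []
    for n in fetched_values:
        ids.extend(range(n - allocation_size + 1, n + 1))
    return ids

# Sequence created with INCREMENT BY 1 starting at 1:
# the first fetch returns 1, so ids start deep in the negatives.
print(allocated_ids([1])[:3])   # [-48, -47, -46]
# With INCREMENT BY 50 starting at 50, ids start at 1 as intended.
print(allocated_ids([50])[:3])  # [1, 2, 3]
```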
unknown
d15031
val
Creating a connection object for each query and closing it is expensive. That's exactly the reason to use connection pools, like c3p0, dbcp, or the more modern Hikari connection pool, with which the Spring Boot 2 ecosystem integrates perfectly. To access the data you probably want a thin wrapper over the raw JDBC API, which is fairly cumbersome on its own. Spring provides a wrapper like this, called JdbcTemplate. It is much less "advanced" than Hibernate: it doesn't do any automatic mapping and doesn't generate crazy (or not-so-crazy) queries. It acts pretty much like JDBC but saves you from creating Prepared Statements, iterating over result sets, etc. I won't really explain here how to use JdbcTemplate; I'll just note that you can get a connection from it, though I don't think you'll ever need to. Of course, under the hood it will be integrated with the Hikari connection pool, so that the actual connection is taken from the pool. Here you can find an example of such a facility with a full DAO implementation (it's called a repository) and JdbcTemplate object configuration.
unknown
d15032
val
You can use PHP's native functions to convert a string to a date format:

    $startDate = date('Y-m-d', strtotime('22 Mar, 2021')); // 2021-03-22
    $endDate = date('Y-m-d', strtotime('22 Apr, 2021'));   // 2021-04-22

Now you can perform the Laravel search query. For example:

    DB::table('yourTable')
        ->whereBetween('created_at', [$startDate, $endDate])
        ->get();

See Document: strtotime
unknown
d15033
val
The right way is to follow the HTML standard. You can validate your HTML page here. Your mail client should follow it and should throw away what's not supported or what's insecure, like JavaScript. I'll list some reasons why following standards could be beneficial here:

* A webmail willing to show your mail as a full page could keep your format.
* A webmail will simply strip the tags and attributes it doesn't want. But you can never know which ones.
* It's easier to find (server-side) components that follow format standards, and which are thus less error-prone. Parsers not following standards could possibly break, making your email not get shown.

A: I don't think there is a right way, beyond trying to make the email viewable in as many email readers as possible. I usually check the emails in Thunderbird, because Outlook forgives more. In Thunderbird this is the HTML code for an email (I have an extension that shows the HTML):

    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
    <html>
    <head>
    <meta http-equiv="content-type" content="text/html; charset=ISO-8859-1">
    </head>
    <body bgcolor="#ffffff" text="#000000">
    This is the body text<br>
    <div class="moz-signature"><i><br>
    <br>
    Regards<br>
    Alex<br>
    </i></div>
    </body>
    </html>

BTW, I use plain-text email for all my web forms every time I can. I had many issues with BlackBerry email using html + plain-text emails.

A: Whether or not you include the html/head/body tags is entirely irrelevant: they are always optional and will not affect the rendering of the document in any way. What matters most is whether quirks mode is on or not. Unfortunately, you can't control that in a webmail setting. Tables and inline styles are your friends. Your best bet is to test in as many webmail and desktop clients as you can.

A: Many of the posts on this thread are rather old, and as a result they are no longer accurate. These days HTML emails should include a doctype, html and body declaration if you intend to do anything fancy at all.
There are a multitude of guides on this subject which can help you learn how to properly code HTML email, but most of them disregard the specifics of a doctype, which is how I stumbled on your question. I suggest you read the following two posts, which are from reputable teams familiar with the various problems: Campaign Monitor's take and Email on Acid's take. A: Depends entirely on the email client that receives it. In my experience, most email clients that will interpret HTML don't care if you have full body/head/html tags, etc. In fact you don't even need those tags for most browsers. You need the head tags to include style/title, etc.; otherwise they are not really necessary, per se. I've never seen them to be necessary. A: There's one thing I know to be true: using HTML opening and closing tags will help with spam scoring, because many appliance-based filters and software firewalls will add a point or so to an email that uses HTML but does not use the opening and closing tags.
unknown
d15034
val
Camera preview size and picture size are two different parameters, which you can update through the setPictureSize() and setPreviewSize() APIs. If the user doesn't change these sizes, default values are used. In this case you are updating the preview size by calculating the optimum preview size based on some aspect ratio, but you are not updating the picture size, so the default picture size is used. If the picture size and preview size aspect ratios differ, the captured image will look different from the preview. So update the picture size as well, by calculating the optimum picture size with the same aspect ratio as the preview size.
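To illustrate the selection logic in a language-neutral way, here is a hedged Python sketch (not Android API code; `best_matching_size` and the candidate list are made-up stand-ins for what `getSupportedPictureSizes()` would return):

```python
def best_matching_size(candidates, preview_w, preview_h):
    """Pick the (w, h) pair whose aspect ratio is closest to the preview's.

    `candidates` stands in for the supported picture sizes; the names
    here are illustrative, not part of the Android API.
    """
    target = preview_w / preview_h
    # Smallest deviation from the target aspect ratio wins.
    return min(candidates, key=lambda wh: abs(wh[0] / wh[1] - target))

sizes = [(640, 480), (1280, 720), (1920, 1080), (2048, 1536)]
print(best_matching_size(sizes, 1280, 720))  # (1280, 720), a 16:9 match
```

The same closest-aspect-ratio comparison is what the optimum-preview-size calculation typically does; applying it to the picture-size candidates keeps the two aspect ratios in sync.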
unknown
d15035
val
Instead of apply, we can use lapply, as apply converts to a matrix and wouldn't hold the attributes created by hms: library(lubridate) times[] <- lapply(times, hms) str(times) #'data.frame': 3 obs. of 2 variables: # $ exp1:Formal class 'Period' [package "lubridate"] with 6 slots # .. ..@ .Data : num 4 53 44 # .. ..@ year : num 0 0 0 # .. ..@ month : num 0 0 0 # .. ..@ day : num 0 0 0 # .. ..@ hour : num 17 17 17 # .. ..@ minute: num 19 28 38 # $ exp2:Formal class 'Period' [package "lubridate"] with 6 slots # .. ..@ .Data : num 4 53 45 # .. ..@ year : num 0 0 0 # .. ..@ month : num 0 0 0 # .. ..@ day : num 0 0 0 # .. ..@ hour : num 17 17 17 # .. ..@ minute: num 22 31 41 With the devel version of dplyr, we can use mutate with across: library(dplyr) times %>% mutate(across(everything(), hms)) A: library(dplyr) library(lubridate) times %>% mutate_all(hms) #OR mutate_all(times, hms) # exp1 exp2 #1 17H 19M 4S 17H 22M 4S #2 17H 28M 53S 17H 31M 53S #3 17H 38M 44S 17H 41M 45S
unknown
d15036
val
I recommend installing the PHP module: source: http://technet.microsoft.com/en-us/library/cc793139(v=sql.90).aspx source: http://www.php.net/manual/en/book.mssql.php If there is no way, move your database to your website's hosting (converting the db to MySQL), and make your C# program use MySQL and point it to the website database. Good luck.
unknown
d15037
val
Check if the modules are visible to Odoo. Module versions should stay the same when you are migrating. Try with an empty database and see if there are any errors after initializing it. Then try to install your modules into that empty database.
unknown
d15038
val
I had to read the file content first, and then write to the file. It seems withWriter erases the file contents: def f = new File('/tmp/file.txt') text = f.text f.withWriter { w -> w << text.replaceAll("(?s)%%#%", /\$/) } You might want to do a per-line read if the file is too large. Otherwise, you can use that multiline (?s) regex. Note I escaped $, because replace and replaceAll behave differently, in the sense that replace accepts a char and, thus, is unaffected by regex strings, whereas replaceAll needs escaping. Here is my test: $ echo "%%#% aaaa bbbb cccc%%#%dddd" > file.txt && groovy Subst.groovy && cat file.txt $ aaaa bbbb cccc$dddd
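The same read-first, then-overwrite pattern can be sketched in Python (illustrative only; the file name and contents mirror the test above):

```python
import tempfile
from pathlib import Path

# A throwaway file standing in for /tmp/file.txt from the Groovy example.
path = Path(tempfile.mkdtemp()) / "file.txt"
path.write_text("%%#% aaaa bbbb cccc%%#%dddd")

# Read the whole file first, then overwrite it: opening for writing up
# front would truncate the file before its contents could be read.
text = path.read_text()
path.write_text(text.replace("%%#%", "$"))

print(path.read_text())  # $ aaaa bbbb cccc$dddd
```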
unknown
d15039
val
If you want to exclude the text matched by (?:^|\\W)#, enclose it in a look-behind: (?<=(?:^|\\W)#) Then you can drop the capturing group, and the main match will contain only the content after the #. At first I'd have suggested this shorter form: (?<=\B#) However, after looking at this bug report and this question about the inconsistency between \w and \b in Java, I'd say you need to be careful when using the shorter one, since \b and \B's definition in default mode is not synced with \w.
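For illustration, here is how the shorter look-behind behaves in Python's re module (where \B is consistent with \w); the sample text is made up:

```python
import re

text = "release #v2 shipped; see #notes and email bob#home"

# (?<=...) is zero-width: the '#' must precede the match but is not
# part of it, so the matches contain only the tag bodies. \B rules out
# a word character right before the '#', excluding "bob#home".
tags = re.findall(r"(?<=\B#)\w+", text)
print(tags)  # ['v2', 'notes']
```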
unknown
d15040
val
I've solved my issue in case anyone else comes across this. It may not be the optimal solution but I haven't found an alternative. I've added a custom validator to the Media class that calls validate() on the embedded Film class and adds any errors that arise to the Media object's errors: class Media{ ObjectId id; String name; Film film; static mapWith = "mongo" static embedded = ["film"] static constraints = { film(validator : {Film film, def obj, def errors -> boolean valid = film.validate() if(!valid){ film.errors.allErrors.each {FieldError error -> final String field = "film" final String code = "$error.code" errors.rejectValue(field,code,error.arguments,error.defaultMessage ) } } return valid } ) } }
unknown
d15041
val
So just add them as strings. var out = a + "/" + b; Or use toString(): var out = a.toString() + "/" + b.toString();
unknown
d15042
val
Use style binding - https://coryrylan.com/blog/angular-progress-component-with-svg style.stroke-dasharray="{{variable}}, 100" A: Try attribute binding: attr.stroke-dasharray="{{this.master.locationVsStatusMap.size}}, 100" Forked example: https://stackblitz.com/edit/angular-wqjlc5 Ref this: https://teropa.info/blog/2016/12/12/graphics-in-angular-2.html
unknown
d15043
val
Your DbContext is fine, but you have to register it with dependency injection and inject it into your classes instead of using new. Your startup.cs ConfigureServices method should have your database with the connection string. services.AddDbContext<ApplicationDbContext>(options => options.UseSqlServer( Configuration.GetConnectionString("DefaultConnection"))); Then, in the classes where you want to use it (like a Controller), you inject it into the constructor. public class HomeController : Controller { private readonly ApplicationDbContext _db; public HomeController(ApplicationDbContext db) { _db = db; } } A: Your ApplicationDbContext should be: public class ApplicationDbContext : IdentityDbContext<ApplicationUser> { public virtual DbSet<AspNetUsersExtendedDetails> AspNetUsersExtendedDetails { get; set; } public virtual DbSet<AspNetApplications> AspNetApplications { get; set; } public virtual DbSet<AspNetEventLogs> AspNetEventLogs { get; set; } public virtual DbSet<AspNetRolesExtendedDetails> AspNetRolesExtendedDetails { get; set; } public virtual DbSet<AspNetUserRolesExtendedDetails> AspNetUserRolesExtendedDetails { get; set; } public virtual DbSet<AspNetUserAccessTokens> AspNetUserAccessTokens { get; set; } public ApplicationDbContext(DbContextOptions options) : base(options) { } } And you should register it like that, depending on the DB server you use; for SqlServer: services.AddDbContext<ApplicationDbContext>(options => options.UseSqlServer(Configuration.GetConnectionString("DbContext")));
unknown
d15044
val
You can try to parse it as String first and then as Boolean: val strO = (json \ "attributeValue").asOpt[String] val value: Option[String] = strO match { case str@Some(_) => str case None => (json \ "attributeValue").asOpt[Boolean].map(_.toString) } A: You can use the .orElse function when you are trying to read an attribute in different ways: import play.api.libs.json.{JsPath, Json, Reads} import play.api.libs.functional.syntax._ val json1 = """ |{ | "attributeName": "some String", | "attributeValue": false |} """.stripMargin val json2 = """ |{ | "attributeName": "some String", | "attributeValue": "daily" |} """.stripMargin /* I modified your case class to make the example short */ case class Data(attributeName: String, attributeValue: String) object Data { /* No need to define a reads function, just assign the value */ implicit val readsData: Reads[Data] = ( (JsPath \ "attributeName").read[String] and /* Try to read String, then fall back to Boolean (which maps into String) */ (JsPath \ "attributeValue").read[String].orElse((JsPath \ "attributeValue").read[Boolean].map(_.toString)) )(Data.apply _) } println(Json.parse(json1).as[Data]) println(Json.parse(json2).as[Data]) Output: Data(some String,false) Data(some String,daily)
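The fallback idea (accept one type, fall back to the other, and normalize to a single type) is language-agnostic. A hedged Python sketch with the json module, using the same sample documents:

```python
import json

def read_attribute_value(doc):
    # Accept either a string or a boolean for "attributeValue",
    # normalizing booleans to their JSON-style string form.
    value = doc["attributeValue"]
    if isinstance(value, str):
        return value
    if isinstance(value, bool):
        return str(value).lower()  # True -> "true", False -> "false"
    raise TypeError("attributeValue must be a string or boolean")

doc1 = json.loads('{"attributeName": "n", "attributeValue": false}')
doc2 = json.loads('{"attributeName": "n", "attributeValue": "daily"}')
print(read_attribute_value(doc1))  # false
print(read_attribute_value(doc2))  # daily
```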
unknown
d15045
val
Just append two lines at the end (note that string.Replace returns a new string, so assign the result back): string Query="SELECT * FROM Table1 WHERE 1=1 "; if (condition1) Query+="AND Col1=0 "; if (condition2) Query+="AND Col2=1 "; if (condition3) Query+="AND Col3=2 "; Query = Query.Replace("1=1 AND ", ""); Query = Query.Replace(" WHERE 1=1 ", ""); E.g. SELECT * FROM Table1 WHERE 1=1 AND Col1=0 AND Col2=1 AND Col3=2 becomes SELECT * FROM Table1 WHERE Col1=0 AND Col2=1 AND Col3=2 while SELECT * FROM Table1 WHERE 1=1 becomes SELECT * FROM Table1 ===================================== Thanks for pointing out a flaw of this solution: "This could break the query if, for any reason, one of the conditions contains the text "1=1 AND " or " WHERE 1=1 ". This could be the case if the condition contains a subquery or tries to check if some column contains this text, for example. Maybe this isn't a problem in your case but you should keep it in mind… " In order to get rid of this issue, we need to distinguish the "main" WHERE 1=1 from those in subqueries, which is easy: simply make the "main" WHERE special. I would append a "$" sign: string Query="SELECT * FROM Table1 WHERE$ 1=1 "; if (condition1) Query+="AND Col1=0 "; if (condition2) Query+="AND Col2=1 "; if (condition3) Query+="AND Col3=2 "; Then still append two lines: Query = Query.Replace("WHERE$ 1=1 AND ", "WHERE "); Query = Query.Replace(" WHERE$ 1=1 ", ""); A: One solution is to simply not write queries manually by appending strings. 
You could use an ORM, like Entity Framework, and with LINQ to Entities use the features the language and framework offer you: using (var dbContext = new MyDbContext()) { IQueryable<Table1Item> query = dbContext.Table1; if (condition1) { query = query.Where(c => c.Col1 == 0); } if (condition2) { query = query.Where(c => c.Col2 == 1); } if (condition3) { query = query.Where(c => c.Col3 == 2); } PrintResults(query); } A: Use this: string Query="SELECT * FROM Table1 WHERE "; string QuerySub = ""; if (condition1) QuerySub+="AND Col1=0 "; if (condition2) QuerySub+="AND Col2=1 "; if (condition3) QuerySub+="AND Col3=2 "; if (QuerySub.StartsWith("AND")) QuerySub = QuerySub.TrimStart("AND".ToCharArray()); Query = Query + QuerySub; if (Query.EndsWith("WHERE ")) Query = Query.TrimEnd("WHERE ".ToCharArray()); A: Why not use an existing query builder? Something like Sql Kata. It supports complex where conditions, joins and subqueries. var query = new Query("Users").Where("Score", ">", 100).OrderByDesc("Score").Limit(100); if(onlyActive) { query.Where("Status", "active") } /* or you can use the when statement */ query.When(onlyActive, q => q.Where("Status", "active")) It works with Sql Server, MySql and PostgreSql. A: If this is SQL Server, you can make this code much cleaner. This also assumes a known number of parameters, which may be a poor assumption when I think about the possibilities. 
In C#, you would use: using (SqlConnection conn = new SqlConnection("connection string")) { conn.Open(); SqlCommand command = new SqlCommand() { CommandText = "dbo.sample_proc", Connection = conn, CommandType = CommandType.StoredProcedure }; if (condition1) command.Parameters.Add(new SqlParameter("Condition1", condition1Value)); if (condition2) command.Parameters.Add(new SqlParameter("Condition2", condition2Value)); if (condition3) command.Parameters.Add(new SqlParameter("Condition3", condition3Value)); IDataReader reader = command.ExecuteReader(); while(reader.Read()) { } conn.Close(); } And then on the SQL side: CREATE PROCEDURE dbo.sample_proc ( /* using varchar(50) generically; "= NULL" makes them all optional parameters */ @Condition1 varchar(50) = NULL, @Condition2 varchar(50) = NULL, @Condition3 varchar(50) = NULL ) AS BEGIN /* check that the value of the parameter matches the related column or that the parameter value was not specified. This works as long as you are not querying for a specific column to be null.*/ SELECT * FROM SampleTable WHERE (Col1 = @Condition1 OR @Condition1 IS NULL) AND (Col2 = @Condition2 OR @Condition2 IS NULL) AND (Col3 = @Condition3 OR @Condition3 IS NULL) OPTION (RECOMPILE) /* OPTION(RECOMPILE) forces the query plan to remain effectively uncached */ END A: The quickest literal solution to what you're asking that I can think of is this: string Query="SELECT * FROM Table1"; string Conditions = ""; if (condition1) Conditions+="AND Col1=0 "; if (condition2) Conditions+="AND Col2=1 "; if (condition3) Conditions+="AND Col3=2 "; if (Conditions.Length > 0) Query+=" WHERE " + Conditions.Substring(3); It doesn't seem elegant, sure, so I would refer you to CodeCaster's recommendation of using an ORM. But if you think about what this is doing here, you're really not worried about 'wasting' 4 characters of memory, and it's really quick for a computer to move a pointer 4 places. 
If you have the time to learn how to use an ORM, it could really pay off for you. But in regards to this, if you're trying to keep that additional condition from hitting the SQL db, this will do it for you. A: Depending on the condition, it might be possible to use boolean logic in the query. Something like this: string Query="SELECT * FROM Table1 " + "WHERE (condition1 = @test1 AND Col1=0) "+ "AND (condition2 = @test2 AND Col2=1) "+ "AND (condition3 = @test3 AND Col3=2) "; A: I like the fluent interface of StringBuilder, so I made some extension methods. var query = new StringBuilder() .AppendLine("SELECT * FROM products") .AppendWhereIf(!String.IsNullOrEmpty(name), "name LIKE @name") .AppendWhereIf(category.HasValue, "category = @category") .AppendWhere("Deleted = @deleted") .ToString(); var p_name = GetParameter("@name", name); var p_category = GetParameter("@category", category); var p_deleted = GetParameter("@deleted", false); var result = ExecuteDataTable(query, p_name, p_category, p_deleted); /* in a separate static class for extension methods */ public static StringBuilder AppendLineIf(this StringBuilder sb, bool condition, string value) { if(condition) sb.AppendLine(value); return sb; } public static StringBuilder AppendWhereIf(this StringBuilder sb, bool condition, string value) { if (condition) sb.AppendLineIf(condition, sb.HasWhere() ? 
" AND " + value : " WHERE " + value); return sb; } public static StringBuilder AppendWhere(this StringBuilder sb, string value) { sb.AppendWhereIf(true, value); return sb; } public static bool HasWhere(this StringBuilder sb) { var separator = new string [] { Environment.NewLine }; var lines = sb.ToString().Split(separator, StringSplitOptions.None); return lines.Length > 0 && lines[lines.Length - 1].Contains("where", StringComparison.InvariantCultureIgnoreCase); } /* http://stackoverflow.com/a/4217362/98491 */ public static bool Contains(this string source, string toCheck, StringComparison comp) { return source.IndexOf(toCheck, comp) >= 0; } A: IMHO, I think that your approach is wrong: querying the database by concatenating strings is NEVER a good idea (risk of SQL injection, and the code can easily be broken if you make some changes elsewhere). You can use an ORM (I use NHibernate) or at least use SqlCommand.Parameters If you absolutely want to use string concatenation, I would use a StringBuilder (it is the right object for string concatenation): var query = new StringBuilder("SELECT * FROM Table1 WHERE"); int qLength = query.Length; /*if you don't want to count :D*/ if (Condition1) query.Append(" Col1=0 AND"); if (Condition2) query.Append(" Col2=0 AND"); .... /*if no condition, remove WHERE or AND from the query*/ query.Length -= query.Length == qLength ? 6 : 4; As a last thought, WHERE 1=1 is really ugly, but SQL Server will optimize it away anyway. A: The Dapper SqlBuilder is a pretty good option. It's even used in production on StackOverflow. Read Sam's blog entry about it. As far as I know, it's not part of any Nuget package, so you'll need to copy-paste its code into your project or download the Dapper source and build the SqlBuilder project. Either way, you'll also need to reference Dapper for the DynamicParameters class. A: A slight bit of overkill in this simple case but I've used code similar to this in the past. 
Create a function string AddCondition(string clause, string appender, string condition) { if (clause.Length <= 0) { return String.Format("WHERE {0}",condition); } return string.Format("{0} {1} {2}", clause, appender, condition); } Use it like this: string query = "SELECT * FROM Table1 {0}"; string whereClause = string.Empty; if (condition1) whereClause = AddCondition(whereClause, "AND", "Col=1"); if (condition2) whereClause = AddCondition(whereClause, "AND", "Col2=2"); string finalQuery = String.Format(query, whereClause); This way, if no conditions are found, you don't even bother loading a where statement in the query, and you save the SQL server a microsecond of processing the junk where clause when it parses the SQL statement. A: Save the conditions in a list: List<string> conditions = new List<string>(); if (condition1) conditions.Add("Col1=0"); /*...*/ if (conditions.Any()) Query += " WHERE " + string.Join(" AND ", conditions.ToArray()); A: There is another solution, which may also not be elegant, but works and solves the problem: String query = "SELECT * FROM Table1"; List<string> conditions = new List<string>(); // ... 
/* fill the conditions */ string joiner = " WHERE "; foreach (string condition in conditions) { query += joiner + condition; joiner = " AND "; } For: * *an empty conditions list, the result will be simply SELECT * FROM Table1, *a single condition it will be SELECT * FROM Table1 WHERE cond1 *each following condition will generate an additional AND condN A: Just do something like this: using (var command = connection.CreateCommand()) { command.CommandText = "SELECT * FROM Table1"; var conditions = ""; if (condition1) { conditions += "Col1=@val1 AND "; command.AddParameter("val1", 1); } if (condition2) { conditions += "Col2=@val2 AND "; command.AddParameter("val2", 1); } if (condition3) { conditions += "Col3=@val3 AND "; command.AddParameter("val3", 1); } if (conditions != "") command.CommandText += " WHERE " + conditions.Remove(conditions.Length - 5); } It's SQL injection safe and IMHO, it's pretty clean. The Remove() simply removes the last AND; it works whether no conditions have been set, one has been set, or multiple have been set. A: I see this used all the time in Oracle while building dynamic SQL within stored procedures. I use it in queries while exploring data issues as well, just to make switching between different filters of data faster... Just comment out a condition or add it back in easily. I find it's pretty common and easy enough to understand for someone reviewing your code. A: Using string functions you can also do it this way: string Query = "select * from Table1"; string WhereClause = ""; if (condition1) WhereClause += " Col1 = @param1 AND "; /* <---- put the conditional operator at the end */ if (condition2) WhereClause += " Col1 = @param2 OR "; WhereClause = WhereClause.Trim(); if (!string.IsNullOrEmpty(WhereClause)) Query = Query + " WHERE " + WhereClause.Remove(WhereClause.LastIndexOf(" ")); /* else: no condition meets the criteria, leave the QUERY without a WHERE clause */ I personally find it easy to remove the conditional element(s) at the end, since its position is easy to predict. 
A: I thought of a solution that, well, perhaps is somewhat more readable: string query = String.Format("SELECT * FROM Table1 WHERE " + "Col1 = {0} AND " + "Col2 = {1} AND " + "Col3 = {2}", (!condition1 ? "Col1" : "0"), (!condition2 ? "Col2" : "1"), (!condition3 ? "Col3" : "2")); I'm just not sure whether the SQL interpreter will also optimize away the Col1 = Col1 condition (printed when condition1 is false). A: public static class Ext { public static string addCondition(this string str, bool condition, string statement) { if (!condition) return str; return str + (!str.Contains(" WHERE ") ? " WHERE " : " ") + statement; } public static string cleanCondition(this string str) { if (!str.Contains(" WHERE ")) return str; return str.Replace(" WHERE AND ", " WHERE ").Replace(" WHERE OR ", " WHERE "); } } Realisation with extension methods. static void Main(string[] args) { string Query = "SELECT * FROM Table1"; Query = Query.addCondition(true == false, "AND Column1 = 5") .addCondition(18 > 17, "AND Column2 = 7") .addCondition(42 == 1, "OR Column3 IN (5, 7, 9)") .addCondition(5 % 1 > 1 - 4, "AND Column4 = 67") .addCondition(Object.Equals(5, 5), "OR Column5 >= 0") .cleanCondition(); Console.WriteLine(Query); } A: Here is a more elegant way: private string BuildQuery() { string MethodResult = ""; try { StringBuilder sb = new StringBuilder(); sb.Append("SELECT * FROM Table1"); List<string> Clauses = new List<string>(); Clauses.Add("Col1 = 0"); Clauses.Add("Col2 = 1"); Clauses.Add("Col3 = 2"); bool FirstPass = true; if(Clauses != null && Clauses.Count > 0) { foreach(string Clause in Clauses) { if (FirstPass) { sb.Append(" WHERE "); FirstPass = false; } else { sb.Append(" AND "); } sb.Append(Clause); } } MethodResult = sb.ToString(); } catch /*(Exception ex)*/ { /*ex.HandleException()*/ } return MethodResult; } A: As has been stated, creating SQL by concatenation is never a good idea. Not just because of SQL injection. 
Mostly because it's just ugly, difficult to maintain and totally unnecessary. You have to run your program with trace or debug to see what SQL it generates. If you use QueryFirst (disclaimer: which I wrote) the unhappy temptation is removed, and you can get straight to doing it in SQL. This page has comprehensive coverage of TSQL options for dynamically adding search predicates. The following option is handy for situations where you want to leave the choice of combinations of search predicates to your user. select * from table1 where (col1 = @param1 or @param1 is null) and (col2 = @param2 or @param2 is null) and (col3 = @param3 or @param3 is null) OPTION (RECOMPILE) QueryFirst converts C# null to db NULL, so you just call the Execute() method with nulls when appropriate, and it all just works. <opinion>Why are C# devs so reluctant to do stuff in SQL, even when it's simpler. Mind boggles.</opinion> A: For longer filtering steps StringBuilder is the better approach, as many say. In your case I would go with: StringBuilder sql = new StringBuilder(); if (condition1) sql.Append("AND Col1=0 "); if (condition2) sql.Append("AND Col2=1 "); if (condition3) sql.Append("AND Col3=2 "); string Query = "SELECT * FROM Table1 "; if(sql.Length > 0) Query += string.Concat("WHERE ", sql.ToString().Substring(4)); //avoid the first 4 chars, which are the 1st "AND "
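Stripped of C# specifics, the approach most of these answers converge on (collect predicates in a list, join them once at the end) can be sketched in Python; the table and column names are placeholders:

```python
def build_query(conditions):
    # Join the accumulated predicates; emit WHERE only if any exist,
    # so no "1=1" placeholder or trailing-AND cleanup is needed.
    query = "SELECT * FROM Table1"
    if conditions:
        query += " WHERE " + " AND ".join(conditions)
    return query

print(build_query([]))                    # SELECT * FROM Table1
print(build_query(["Col1=0", "Col2=1"]))  # SELECT * FROM Table1 WHERE Col1=0 AND Col2=1
```

In real code the predicate values should still go through bound parameters rather than string interpolation, as several of the answers above stress.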
unknown
d15046
val
Add more resources to Spark. For example, if you're working in local mode, a configuration like the following should be sufficient: spark = SparkSession.builder \ .appName('app_name') \ .master('local[*]') \ .config('spark.sql.execution.arrow.pyspark.enabled', True) \ .config('spark.sql.session.timeZone', 'UTC') \ .config('spark.driver.memory','32G') \ .config('spark.ui.showConsoleProgress', True) \ .config('spark.sql.repl.eagerEval.enabled', True) \ .getOrCreate() A: I encountered this error while trying to use PySpark within a Docker container. In my case, the error was originating from me assigning more resources to Spark than Docker had access to. A: Just restart your notebook if you are using a Jupyter notebook. If not, then just restart PySpark; that should solve the problem. It happens because you are using too many collects or have some other memory-related issue. A: I encountered the same problem while working on Colab. I terminated the current session and reconnected. It worked for me! A: Maybe the Spark UI port is already occupied; maybe there are other errors before this error. Maybe this can help you: https://stackoverflow.com/questions/32820087/spark-multiple-spark-submit-in-parallel spark-submit --conf spark.ui.port=5051
unknown
d15047
val
In C#, all arrays are 0-based. So the problem must be that you are trying to access selectedRow.Cells[1] to selectedRow.Cells[11], when you should use selectedRow.Cells[0] to selectedRow.Cells[10].
unknown
d15048
val
Also asked here: https://groups.google.com/forum/#!topic/mongodb-user/iOeEXbUYbo4 I think your best bet in this situation is to use a custom discriminator convention. You can see an example of this here: https://github.com/mongodb/mongo-csharp-driver/blob/v1.x/MongoDB.DriverUnitTests/Samples/MagicDiscriminatorTests.cs. While this example is based on whether a field exists in the document, you could easily base it on what type the field is (BsonType.Int32, BsonType.Date, etc...). A: Building on @Craig Wilson's answer, to get rid of all discriminators, you can: public class NoDiscriminatorConvention : IDiscriminatorConvention { public string ElementName => null; public Type GetActualType(IBsonReader bsonReader, Type nominalType) => nominalType; public BsonValue GetDiscriminator(Type nominalType, Type actualType) => null; } and register it: BsonSerializer.RegisterDiscriminatorConvention(typeof(BaseEntity), new NoDiscriminatorConvention()); A: This problem occurred in my case when I was adding a Dictionary<string, object> and List entities to the database. The following link helped me in this regard: C# MongoDB complex class serialization. For your case, I would suggest, following the link given above, the following: using System; using Newtonsoft.Json; using Newtonsoft.Json.Linq; [JsonConverter(typeof(DataRecordConverter))] public abstract class DataRecord { public string Key { get; set; } public abstract string DataRecordType { get; } } public class DataRecordInt : DataRecord { public int Value { get; set; } public override string DataRecordType => "Int"; } public class DataRecordDateTime : DataRecord { public DateTime? 
Value { get; set; } public override string DataRecordType => "DateTime"; } public class DataRecordConverter: JsonConverter { public override bool CanWrite => false; public override bool CanRead => true; public override bool CanConvert(Type objectType) { return objectType == typeof(DataRecord); } public override void WriteJson( JsonWriter writer, object value, JsonSerializer serializer) {} public override object ReadJson( JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer) { var jsonObject = JObject.Load(reader); var dataRecord = default(DataRecord); switch (jsonObject["DataRecordType"].Value<string>()) { case "Int": dataRecord = new DataRecordInt(); break; case "DateTime": dataRecord = new DataRecordDateTime(); break; } serializer.Populate(jsonObject.CreateReader(), dataRecord); return dataRecord; } } [BsonCollectionName("calls")] [BsonIgnoreExtraElements] public class Call { [BsonId] public CallId _id { get; set; } [JsonIgnore] [BsonElement("responses")] [BsonIgnoreIfNull] public BsonArray Responses { get; set; } } You can populate the BsonArray using: var jsonDoc = JsonConvert.SerializeObject(source); var bsonArray = BsonSerializer.Deserialize<BsonArray>(jsonDoc); Now, you can get the deserialized List from Mongo using: var bsonDoc = BsonExtensionMethods.ToJson(source); var dataRecordList = JsonConvert.DeserializeObject<List<DataRecord>>(bsonDoc, new DataRecordConverter()); Hope this helps, again, thanks to C# MongoDB complex class serialization for this.
unknown
d15049
val
Disclaimer: my English may be bad, sorry. That is because in your @RouteConfig you have declared that you need an item parameter, like this: {path: '/BiddingPage/:item', name: 'BiddingPage', component: BiddingPageComponent}, But your template doesn't have that parameter: <a [routerLink]="['BiddingPage']">Click here to bid on this item.</a> <!-- the "item" is missing --> Your template should be like this: <a [routerLink]="['BiddingPage',{item:item}]">Click here to bid on this item.</a> Enjoy.
unknown
d15050
val
You can use the UNION operator to get the distinct list of subjects. select TutorSubject1 FROM tblTutor where TutorSubject1 is not null union select TutorSubject2 FROM tblTutor where TutorSubject2 is not null union select TutorSubject3 FROM tblTutor where TutorSubject3 is not null The important point here is that the UNION operator removes duplicates.
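To see the duplicate removal in action, here is a small demonstration using Python's built-in sqlite3 (the table shape mirrors the question; the sample rows are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblTutor (TutorSubject1 TEXT, TutorSubject2 TEXT)")
conn.executemany(
    "INSERT INTO tblTutor VALUES (?, ?)",
    [("Math", "Physics"), ("Math", "Chemistry"), (None, "Physics")],
)

# UNION (unlike UNION ALL) removes duplicates across both SELECTs,
# so "Math" and "Physics" each appear only once.
rows = conn.execute(
    "SELECT TutorSubject1 FROM tblTutor WHERE TutorSubject1 IS NOT NULL "
    "UNION "
    "SELECT TutorSubject2 FROM tblTutor WHERE TutorSubject2 IS NOT NULL"
).fetchall()
print(sorted(r[0] for r in rows))  # ['Chemistry', 'Math', 'Physics']
```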
unknown
d15051
val
if you don't HAVE to use an iterator, re.split would be a bit simpler for your use case (custom definition of a sentence): re.split(r'\.\s', text) Note the last sentence will include . or will be empty (if text ends with whitespace after last period), to fix that: re.split(r'\.\s', re.sub(r'\.\s*$', '', text)) also have a look at a bit more general case in the answer for Python - RegEx for splitting text into sentences (sentence-tokenizing) and for a completely general solution you would need a proper sentence tokenizer, such as nltk.tokenize nltk.tokenize.sent_tokenize(text) A: Here you get it as an iterator. Works with my testcases. It considers a sentence to be anything (non-greedy) until a period, which is followed by either a space or the end of the line. import re sentence = re.compile("\w.*?\.(?= |$)", re.MULTILINE) def iterphrases(text): return (match.group(0) for match in sentence.finditer(text)) A: If you are sure that . is used for nothing besides sentences delimiters and that every relevant sentence ends with a period, then the following may be useful: matches = re.finditer('([^.]*?(powder|keyword2|keyword3).*?)\.', text) result = [m.group() for m in matches]
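Putting the re.split suggestion together with the keyword filtering from the last answer, a rough sketch (the sample text is made up):

```python
import re

text = "Add the powder slowly. Stir well. Let it rest."

# Split on a period followed by whitespace; strip a trailing period
# first so the last sentence doesn't keep it.
sentences = re.split(r"\.\s+", re.sub(r"\.\s*$", "", text))
print(sentences)  # ['Add the powder slowly', 'Stir well', 'Let it rest']

# Keep only the sentences containing a keyword of interest.
hits = [s for s in sentences if "powder" in s]
print(hits)  # ['Add the powder slowly']
```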
unknown
d15052
val
Your JavaScript is missing the piece that actually writes to the readme_copy file you created by calling createWriteStream. Here's an example of what you need to do: http://www.html5rocks.com/en/tutorials/file/filesystem/
unknown
d15053
val
When adding a document, make sure you use Dim d As Word.Document = w.Documents.Add() and not d = New Word.Document 'this syntax can cause a memory leak!'
unknown
d15054
val
I'm not really sure about all this stuff you asked, so I'll try two different scenarios. * *If the data you need to display the HTML and the data needed for the JavaScript are the same - you're doing it very wrong. All you need to do is query the db only for the HTML, then structure the data needed for JavaScript in hidden fields, then, on the client side, query the HTML's hidden fields with JavaScript and use the data onward. *If the data sets are different, then you'll probably need an AJAX request from the client side after the HTML is displayed. That is, an additional HTTP request. I suggest you close the db connection after the first call, and then reopen it with the next HTTP request. Every HTTP request is on its own; you'll need to take care with opening/closing connections inside a single request. [That is, if you don't use pooling, but that is another issue]
unknown
d15055
val
Wrap the button text in a span, and rotate that on :focus of the button by another 180 degree: .btn:focus { transform: rotate(180deg); } .btn:focus span { display: inline-block; transform: rotate(180deg); } <button class="btn" type="button"><span>Go</span></button> A: Wrap the text with a span and rotate it to the other direction: .btn:focus { transform: rotate(180deg); } .btn:focus > span { display: block; transform: rotate(-180deg); } <button class="btn" type="button"> <span>Go</span> </button>
unknown
d15056
val
Generator expressions are not evaluated at definition time; they create an iterable generator object. Use a list comprehension instead: [df.to_excel('df_name.xlsx') for df in list_of_dfs] Though, as Yatu pointed out, a for loop would be the appropriate way of executing this.
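The laziness itself is easy to demonstrate without any DataFrames involved:

```python
calls = []

# A generator expression: nothing runs at definition time.
gen = (calls.append(n) for n in range(3))
print(calls)   # [] -- no side effects yet

# A list comprehension evaluates immediately.
done = [calls.append(n) for n in range(3)]
print(calls)   # [0, 1, 2]

# Consuming the generator finally executes its body too.
list(gen)
print(calls)   # [0, 1, 2, 0, 1, 2]
```

This is why the generator expression over `df.to_excel(...)` silently wrote nothing: it was never consumed.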
unknown
d15057
val
Try this solution: splitted<-trimws(unlist(strsplit(aDataFrame,","))) t(bind_rows(sapply(splitted[grep(":",splitted)],strsplit,split=":"))) [,1] [,2] Dollar:40 "Dollar" "40" Euro:80 "Euro" "80" Yen:400 "Yen" "400" Pound:50 "Pound" "50" Update df<-data.frame(t(bind_rows(sapply(splitted[grep(":",splitted)],strsplit,split=":")))) > library(reshape2) > acast(df, X2 ~ X1) Using X2 as value column: use value.var to override. Dollar Euro Pound Yen 40 40 <NA> <NA> <NA> 400 <NA> <NA> <NA> 400 50 <NA> <NA> 50 <NA> 80 <NA> 80 <NA> <NA> Levels: 40 400 50 80
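For comparison only (the question is about R, and the sample string here is made up), the same split-on-comma-then-colon parse looks like this in Python:

```python
raw = "Dollar:40, Euro:80, Yen:400, Pound:50"

# Split on commas, trim whitespace, then split each "name:value"
# pair on the colon, mirroring strsplit(..., ",") and strsplit(..., ":").
pairs = dict(part.strip().split(":") for part in raw.split(","))
rates = {name: int(value) for name, value in pairs.items()}
print(rates)  # {'Dollar': 40, 'Euro': 80, 'Yen': 400, 'Pound': 50}
```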
unknown
d15058
val
This may not be directly related to the original question's problem, but could come in handy. If you are using MagicalRecord 2.3 beta 6 or later, it seems from the issues that there was a problem with manual importing that never got sorted out. See https://github.com/magicalpanda/MagicalRecord/issues/1019 (though the referenced issue claims to have fixed it, I beg to differ). I was able to get my manual version to build and run by converting my imports from the format #import <MagicalRecord/filename.h> to #import "filename.h". A: I had the same problem before. This sounds ridiculous, but move #import <CoreData+MagicalRecord.h> above #import <UIKit/UIKit.h>. Note the use of <> instead of "" if you are using pods. Edit Please take note that this is an old answer. It worked for me when I posted it. It may not work anymore... I have not used MagicalRecord for quite some time now.
unknown
d15059
val
You need to write a loop because, at some point, every item in the list needs to be evaluated, so there's no getting around that (assuming an iterative method; you could, of course, write a recursive algorithm that doesn't contain an explicit loop—I'll illustrate both below). 1. Iteration The iterative method keeps track of the lowest, non-zero number encountered as we work our way, one-by-one, through each number in the list. When we reach the end of the list, the tracked value will be the result we're after: on minimumPositiveNumber from L local L if L = {} then return null set |ξ| to 0 repeat with x in L set x to x's contents if (x < |ξ| and x ≠ 0) ¬ or |ξ| = 0 then ¬ set |ξ| to x end repeat |ξ| end minimumPositiveNumber get the minimumPositiveNumber from {10, 2, 0, 2, 4} --> 2 2. Recursion The recursive method compares the first item in the list with the lowest, non-zero value in the rest of the list, keeping the lowest, non-zero value: on minimumPositiveNumber from L local L if L = {} then return 0 set x to the first item of L set y to minimumPositiveNumber from the rest of L if (y < x and y ≠ 0) or x = 0 then return y x end minimumPositiveNumber get the minimumPositiveNumber from {10, 2, 0, 2, 4} --> 2
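For comparison (an illustrative sketch, not part of the original AppleScript question), the same lowest-non-zero tracking translates directly to Python, with 0 again doubling as the "nothing tracked yet" marker:

```python
def minimum_positive_number(nums):
    # Track the lowest non-zero value seen so far; 0 means "nothing yet",
    # mirroring the AppleScript iterative version above.
    best = 0
    for x in nums:
        if (x < best and x != 0) or best == 0:
            best = x
    return best

print(minimum_positive_number([10, 2, 0, 2, 4]))  # → 2
```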
unknown
d15060
val
You can try this code: import pandas as pd import glob path = 'Enter your path here' tsvfiles = glob.glob(path + "/*.tsv") for t in tsvfiles: tsv = pd.read_table(t, sep='\t') tsv.to_csv(t[:-4] + '.csv', index=False)
unknown
d15061
val
You'll need to get your data into the VM and configure Apache to serve that data. For starters, add this to your Vagrantfile (after the config.vm.network line): config.vm.synced_folder ".", "/var/www/html" It will make your app folder available under /var/www/html on the VM. Apache on Ubuntu serves from that folder by default, so you should be able to see something after doing vagrant reload. A: When you edit the configuration file using vim or any other editor, you have to reload Vagrant afterwards and then try to access localhost:8080. Use the command vagrant reload
unknown
d15062
val
Fixed.. :) Here's the code: "paypalhere://takePayment/?returnUrl={{returnUrl}}&invoice=%7B%22merchantEmail%22%3A%22{{merchantEmails}}%22,%22payerEmail%22%3A%22{{payerEmails}}%22,%22itemList%22%3A%7B%22item%22%3A%5B%7B%22name%22%3A%22{{name}}%22,%22description%22%3A%22{{description}}%22,%22quantity%22%3A%221.0%22,%22unitPrice%22%3A%22{{price}}%22,%22taxName%22%3A%22Tax%22,%22taxRate%22%3A%220.0%22%7D%5D%7D,%22currencyCode%22%3A%22{{currency}}%22,%22paymentTerms%22%3A%22DueOnReceipt%22,%22discountPercent%22%3A%220.0%22%7D" url = url.replace("{{merchantEmails}}", "[email protected]"); url = url.replace("{{payerEmails}}", "[email protected]"); url = url.replace("{{price}}", "1"); url = url.replace("{{name}}", Title); url = url.replace("{{description}}", "Title"); url = url.replace("{{currency}}", "currencyType");
unknown
d15063
val
Hope this helps. You can count the number of characters present in a foreign language by filtering on character codes; the range 2309-2361 below covers Devanagari letters (adapt the range for other languages): function countChars(text){ return text.split("").filter( function(char){ let charCode = char.charCodeAt(); return charCode >= 2309 && charCode <=2361 || charCode == 32; }).length; } let chars = "हा य"; console.log(countChars(chars));
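For comparison, the same range check can be sketched in Python (assuming, as above, the Devanagari letter range 2309-2361 plus the space character):

```python
def count_chars(text):
    # Mirror the JavaScript filter: Devanagari letters U+0905..U+0939
    # (code points 2309..2361), plus spaces.
    return sum(1 for ch in text if 2309 <= ord(ch) <= 2361 or ch == ' ')

print(count_chars("हा य"))  # → 3 (the dependent vowel sign ा falls outside the range)
```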
unknown
d15064
val
According to the GCC ARM Options documentation, you need to pass the -mapcs-frame option to GCC to generate stack frames on the ARM platform. -mapcs-frame Generate a stack frame that is compliant with the ARM Procedure Call Standard for all functions, even if this is not strictly necessary for correct execution of the code. Specifying -fomit-frame-pointer with this option causes the stack frames not to be generated for leaf functions. The default is -mno-apcs-frame. This was pointed out to me in a comment to my Linux specific answer to How to generate a stacktrace when my gcc C++ app crashes, which you may also find useful.
unknown
d15065
val
The fileInput contains the base64-encoded file data. It's a string, so HttpPostedFileBase would not work. Change the form HTML: <input class="form-control" type="file" data-bind="file: {data: fileInput}" /> Change the viewmodel code as follows: // store file data here self.fileInput = ko.observable(); // is this present already? var feedBackData = { versionId: self.versionId(), text: self.feedbackText(), screenShot: self.fileInput() }; $.ajax({ type: 'POST', url: apiUrl + '/Feedback/Add', contentType: 'application/json; charset=utf-8', data: ko.toJSON(feedBackData) }).done(function (data) { self.result("Done!"); }).fail(showError); If the controller method is in an API controller it should accept JSON and the model binder will extract the values: public void Add(String screenShot, String versionId, String text) { String imgId = null; if(!String.IsNullOrEmpty(screenShot)) { Byte[] data = Convert.FromBase64String(screenShot); // rest of code omitted Sorry have not been able to test this for syntax etc. but should put you on the right track. A good tip for debugging Knockout pages is to use this line while developing so you can see what is happening in your viewModel: <pre data-bind="text: ko.toJSON($data, null, 2)"></pre> See http://www.knockmeout.net/2013/06/knockout-debugging-strategies-plugin.html for more help.
unknown
d15066
val
It should not pose any problem to connect to a database on AWS. But be sure that the database on AWS is configured to accept external access, so that Heroku can connect. And I would suggest that you take the credentials out of the source code and put them in the Config Vars that Heroku provides (environment variables). A: Will it work? I think yes, provided you configure your project and database for external access. Should you want it? How many queries does an average page execute? Some applications may make tens of queries for every endpoint, and the added latency can add up to seconds of waiting for every request.
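For example, reading credentials from Config Vars instead of hardcoding them might look like this in Python (the variable names DB_HOST etc. are hypothetical; on Heroku you would set them with `heroku config:set`):

```python
import os

# Hypothetical variable names; set them with
#   heroku config:set DB_HOST=... DB_USER=... DB_PASSWORD=...
db_host = os.environ.get("DB_HOST", "localhost")
db_user = os.environ.get("DB_USER", "")
db_password = os.environ.get("DB_PASSWORD", "")

print(db_host)
```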
unknown
d15067
val
The Excel-DNA packing does not currently support native or mixed assemblies. So you won't be able to use the mechanism to pack the mkl library. You might be able to store it in your C# assembly as a resource, and extract it yourself at runtime (in an AutoOpen or something, before any of the functions using it are run). If you extract it to a temp file, then call LoadLibrary yourself to load it into the process, it should work when ILNumerics needs it.
unknown
d15068
val
Of course it is possible; this simple configuration should do the job: resource "aws_ami" "aws_ami_name" { name = "aws_ami_name" virtualization_type = "hvm" root_device_name = "/dev/sda1" ebs_block_device { snapshot_id = "snapshot_ID" device_name = "/dev/sda1" volume_type = "gp2" } } resource "aws_instance" "ec2_name" { ami = "${aws_ami.aws_ami_name.id}" instance_type = "t3.large" } A: It's not really a Terraform-type task, since you're not deploying new infrastructure. Instead, do it manually: * *Create a new EBS Volume from the Snapshot *Stop the instance *Detach the existing root volume (make a note of the device identifier such as /dev/sda1) *Attach the new Volume with the same identifier *Start the instance
unknown
d15069
val
Capistrano does not touch database migrations unless it is specified by the deploy:migrate task within your Capfile or by calling bundle exec cap deploy:migrate. Your database 'disappears' because SQLite is simply a file in your db directory. Since you do not specify that it should be shared among releases (i.e. placed within the shared directory), it simply stays in the previous release. Add db/production.sqlite3 to your linked_files declaration.
unknown
d15070
val
To clarify on the answer provided by Kevin Patel: you can get the link directly from your browser; however, you must have the reading pane set to "No split", otherwise you get a generic URL. A: I don't know about a specific "email", but you can view a specific thread (which is usually one email) by clicking in the URL bar and copying that. Then, change the label to "all". So if the url is "https://mail.google.com/mail/u/0/#inbox/abc123def456", you would change "#inbox" to say "#all" like this: "https://mail.google.com/mail/u/0/#all/abc123def456" Now the link will work, even after you archive it and it's not in your inbox. A: Since Google introduced the integration of Google Tasks and Gmail a few weeks ago, one way you can get the URL of a specific Gmail email is: * *Turn a given Gmail email into a Google Task; *Open the task editing window (you can't perform the next step from the task display; it only seems to work when the task editing function is invoked); *Hover over the link to the originating Gmail email in the task you just created (the link is at the bottom of the task display); *Use that URL to access the specific Gmail email by clicking on the link in the corresponding Google task, or just pop the URL into the URL bar of any browser or other workflow (of course, the session must be logged in to the relevant Google account). Enjoy!
A: If you are happy with a bookmarklet that copies a link to the current email to your clipboard, you can try adding this to your bookmarks: javascript:(function()%7Basync%20function%20copyPermalink()%20%7Btry%20%7BsearchURL%20%3D%20'https%3A%2F%2Fmail.google.com%2Fmail%2Fu%2F0%2F%23search%2Fmsgid%253A'%3BmessageId%20%3D%20document.querySelector('div%5Bdata-message-id%5D').getAttribute('data-message-id').substring(7)%3Bawait%20navigator.clipboard.writeText(searchURL%20%2B%20messageId)%3Bconsole.log('Mail%20message%20permalink%20copied%20to%20clipboard')%3B%7D%20catch%20(err)%20%7Bconsole.error('Failed%20to%20copy%3A%20'%2C%20err)%3B%7D%7DcopyPermalink()%7D)() It essentially searches the currently focussed email for its data-message-id attribute, and then converts that into a search url using the msgid predicate (this doesn't seem to be documented, but was quite guessable.). The full link is then copied to your clipboard. Caveat: Seems to work with or without a preview pane, but this hasn't been extensively tested. A: I don't think Gmail can show one email, it always shows a thread. You can look for one message, but still will see the whole thread. If this is OK for you, the URL is https://mail.google.com/mail/u/0/#search/rfc822msgid: followed by the message ID (can be found by looking at "show original"). See this question for some more details. A: Gmail Sharable Link Creator Site Link: GmailLink.GitHub.io Steps to Follow to generate the Link * *Get the Message ID Of Mail Thread (Mail > 3 dot menu in Right side (⋮) > Click on Show Original > See the MessageID). *Copy the Message-Id *Use the MessageId & click On Submit to generate the Mail Sharable Link. https://stackoverflow.com/a/61849710/7950511 A: You can specify the inbox email address in the link to open the email in the correct inbox. 
If [email protected] is your inbox email, create the link as follows: https://mail.google.com/mail/u/[email protected]/#all/YOUR_EMAIL_ID A: You can open up an email that you want in Gmail and after that, you can simply copy the link location from the search bar of your browser. It will create a unique weblink for every single email. It is as simple as that. A: You just have to click on "show original" and copy the URL. A: Actually, getting a single email link, without the full thread, seems not to be directly supported; apart from manual url rewriting or coding tricks, which require time or specific skills, the faster workarounds I know are: Copy the URL from the tab that opens when clicking on one of these 2 entries in the email menu: * *print - CONS: It also opens the print popup, which you have to dismiss *show original - CONS: The email is not formatted, you see the original source, so images and formatting are missing, and messy code is added A: How about this Chrome Extension https://chrome.google.com/webstore/detail/gmail-message-url/bkdmmijcdflpcjchjglegcipjiabagei/related
unknown
d15071
val
Post your code, but sounds to me like you have to set the selected value in your dropdown in the grids onrowdatabound event - but I'm not entirely sure based on the vagueness of your question.
unknown
d15072
val
How about new XElement("To", new XAttribute("Type", "C"), "John Smith") A: I'd use: new XElement("To", new XAttribute("Type", "C"), "John Smith"); Any plain text content you provide within the XElement constructor ends up as a text node. You can call SetValue separately of course, but as it doesn't return anything, you'll need to store a reference to the element in a variable first.
unknown
d15073
val
Try not to create a connection instance (like new SqlConnection) for each insertion; better to loop the 3 insertions over the same connection variable you already created.
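The advice is language-agnostic; here is the same pattern sketched with Python's sqlite3 (an illustration, not the asker's actual code): one connection created once, reused for every insertion:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # create the connection once
conn.execute("CREATE TABLE items (value INTEGER)")

for value in (1, 2, 3):  # reuse the same connection for each insertion
    conn.execute("INSERT INTO items (value) VALUES (?)", (value,))
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM items").fetchone()[0])  # → 3
conn.close()
```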
unknown
d15074
val
Fakeroot uses the dynamic linker in order to do its magic (specifically, LD_PRELOAD). Unfortunately, the dynamic linker is not involved in loading statically linked binaries (which is how the dynamic linker itself is invoked: /lib/ld-linux.so.2 is statically compiled). As answered above, your only option, as far as I'm aware, is to use fakeroot-ng, which uses a completely different mechanism to inject into the process, and is, thus, able to work on statically linked binaries without a problem. In fact, statically linked binaries were part of the reason I set out to write fakeroot-ng in the first place. At the time, there was no way to tell ldconfig to run on a subtree, and ldconfig is statically linked. Shachar A: Seems the solution is to use fakeroot-ng, which works for statically linked binaries.
unknown
d15075
val
The link that you yourself quoted in a comment provides the answer to the question you meant to ask: instead of giving clients a pointer to the underlying object that you'd like to fuss with, give them a pointer to a wrapper object: type internal struct { // all the internal stuff goes here } // change the name Wrapper below to something more suitable type Wrapper struct { *internal // or p *internal if you want to be overly verbose } func NewWhatever(/*args*/) *Wrapper { p := &Wrapper{...} // fill this part in runtime.SetFinalizer(p, wrapperGotCollected) return p } func wrapperGotCollected(p *Wrapper) { // since p itself is about to be collected, // **p (or *((*p).p)) is no longer accessible by // the user who called NewWhatever(). Do // something appropriate here. } Note that this does not use a finalizer on the *internal, but rather a finalizer on the *Wrapper: at the time that wrapperGotCollected is called, the *internal object itself is guaranteed still to be live because p itself has not yet been GC-ed (it's sort of halfway there, and will be the rest of the way there as soon as, or shortly after, wrapperGotCollected returns). A: The unsafe documentation allows for GC that moves values in memory. A uintptr is an integer, not a reference. Converting a Pointer to a uintptr creates an integer value with no pointer semantics. Even if a uintptr holds the address of some object, the garbage collector will not update that uintptr's value if the object moves, nor will that uintptr keep the object from being reclaimed. A: For completeness this is what I wound up doing something like: type actualThing struct { Data int } type internalHandle struct { *actualThing } type ExternalHandle struct { *internalHandle } So a user of ExternalHandle can still write code like ExternalHandle.Data but the internal maintenance code can still atomically update *actualThing. 
A finalizer on a pointer to an ExternalHandle will signal the rest of the stack to delete references to internalHandle and stop propagating updates to it. Basically the super convenient thing is that by nesting the structs as such, users of ExternalHandle use it without realizing or dealing with the fact that it's a double pointer dereference.
unknown
d15076
val
Yes, a switch statement can take an expression for the value on which you switch. Your code should work fine, except that getchar() would read the leftover '\n' character from the scanf of the operands. Add another call to getchar() before the switch to read and discard that extra character. A: While the code is valid and correct, I'd do the following to make it more readable: switch(getchar()) { case '1': // ... case '2': // ... case '3': // ... case '4': // ... } Or switch(getchar() - '0') { case 1: // ... case 2: // ... } This is to avoid using the magic number 48, which may not be understood easily by readers. Furthermore, you can discard input until the next \n using a simple while loop: while(getchar() != '\n') ; In addition to the \n, this will also read and discard anything before the newline. A: switch ( expression ) So you have a valid expression, and you can have an expression like yours if you really have a need for something like this. Else: char ch = getchar(); /* or scanf(" %c",&ch); (Better)*/ int a = ch - '0'; switch(a) { } A: For your answer: yes, the switch can accept an expression in its argument, but it should return only one of these types: char, int, short, long int, long long int; it can also be signed or unsigned! There is no need to make a cast for the expression getchar() - 48, because getchar() returns int and 48 is an int, so the result would be an int. Now, after compiling, you have to enter 3 numbers: one for the variable a, the second for the variable b, and the third for the switch statement... for instance $./executable_file Enter two numbers: 1 2 3 A: Switch formatting This is a suggested formatting of your switch statement. I disagree with the idea of using getchar() in the switch statement (though it is technically legal, it is simply a bad idea in practice), so I've replaced that with c: int c; /* c is set by some input operation.
** It might be char c; if you read its value with scanf(), but it must be int if you use getchar() */ switch (c) { case 1: printf("The Sum of %.2f and %.2f is : %.2f", a, b, (a+b)); break; case 2: printf("The Difference of %.2f and %.2f is : %.2f", a, b, (a-b)); break; case 3: printf("The Product of %.2f and %.2f is : %.2f", a, b, (a*b)); break; case 4: if (b != 0) printf("The Quotient of %.2f and %.2f is : %.2f",a, b, (a/b)); else fputs("Error, divide by zero not possible.\n", stderr); break; default: fprintf(stderr, "Error, Invalid choice %c\n", c); break; } Note the use of break; after the default: label too; it protects you against future additions to the code and is completely uniform. (The default: label does not have to be last, though that is the conventional place for it to go.) Commas get a space after them; so do if, for, while, switch, but function calls do not get a space between the name and the open parenthesis. You don't normally need a space after an open parenthesis or before a close parenthesis. Errors are reported to standard error. Personally, I like the actions of the switch to be indented just one level, not two levels, so I'd use: switch (c) { case 1: printf("The Sum of %.2f and %.2f is : %.2f", a, b, (a+b)); break; case 2: printf("The Difference of %.2f and %.2f is : %.2f", a, b, (a-b)); break; case 3: printf("The Product of %.2f and %.2f is : %.2f", a, b, (a*b)); break; case 4: if (b != 0) printf("The Quotient of %.2f and %.2f is : %.2f",a, b, (a/b)); else fputs("Error, divide by zero not possible.\n", stderr); break; default: fprintf(stderr, "Error, Invalid choice %c\n", c); break; } Many people disagree, so you're certainly not under an obligation to do that.
unknown
d15077
val
EDIT: You can restrict the type of affiliation with intersects_with(&block) : has_permission_on [:organizations], :to => :edit do if_attribute :affiliations => intersects_with { user.affiliations.with_type_3 } end Why not create a named_scope to find affiliations whose affiliationtype_id = 3? From declarative_authorization documentation: To reduce redundancy in has_permission_on blocks, a rule may depend on permissions on associated objects: authorization do role :branch_admin do has_permission_on :branches, :to => :manage do if_attribute :managers => contains {user} end has_permission_on :employees, :to => :manage do if_permitted_to :manage, :branch # instead of #if_attribute :branch => {:managers => contains {user}} end end end
unknown
d15078
val
You need to write an aggregate pipeline: * *$match - filter the documents by criteria *$group - group the documents by key field *$addToSet - aggregate the unique elements *$project - project in the required format *$reduce - reduce the array of arrays to a flat array via $concatArrays aggregate query db.tt.aggregate([ {$match : {"time_broadcast" : "09:13"}}, {$group : {"_id" : "$page_id", "category_list" : {$addToSet : "$category_list"}}}, {$project : {"_id" : 0, "page_id" : "$_id", "category_list" : {$reduce : {input : "$category_list", initialValue : [], in: { $concatArrays : ["$$value", "$$this"] }}}}} ]).pretty() result { "page_id" : "123456", "category_list" : [ "news", "updates" ] } { "page_id" : "1234", "category_list" : [ "sport", "handball", "football", "sport" ] } You can add a $sort stage on page_id if required
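As a rough plain-Python analogue of the pipeline above (the sample documents are made up to mirror the shape implied by the question), the $match / $group+$addToSet / $reduce-flatten steps look like:

```python
from collections import defaultdict

# Made-up sample documents, illustrative only.
docs = [
    {"page_id": "1234", "time_broadcast": "09:13", "category_list": ["sport", "handball"]},
    {"page_id": "1234", "time_broadcast": "09:13", "category_list": ["football", "sport"]},
    {"page_id": "123456", "time_broadcast": "09:13", "category_list": ["news", "updates"]},
    {"page_id": "999", "time_broadcast": "10:00", "category_list": ["ignored"]},
]

grouped = defaultdict(list)
for d in docs:
    if d["time_broadcast"] == "09:13":                      # $match
        if d["category_list"] not in grouped[d["page_id"]]:  # $addToSet (unique arrays)
            grouped[d["page_id"]].append(d["category_list"])

# $project + $reduce with $concatArrays: flatten the array of arrays.
result = [
    {"page_id": page_id, "category_list": [c for sub in lists for c in sub]}
    for page_id, lists in grouped.items()
]
print(result)
```

Note that, as in the aggregation, $addToSet deduplicates whole arrays, not individual elements, which is why "sport" can appear twice in the flattened output.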
unknown
d15079
val
You can export this function as the top level/default through module.exports = listEvents if this is all you're interested in exporting, and use it as const listEvents = require('./googlecal') (the path would mean that the importer is at the same level). Another popular convention is to export an object so you have space to expand your module's exported functionalities: module.exports = { listEvents, // future stuff } And use it by leveraging destructuring: const { listEvents } = require('./googlecal') A: If your environment supports ES6 Module syntax, you can use import and export; if not, then you should use require and module.exports. If you are going to use module.exports then just add module.exports = { listEvents } and import it with const { listEvents } = require('./path/to/your/file') You can read this for using export: How to use import/export
unknown
d15080
val
You can use: (df.where(df.eq('Not Reported')).stack(dropna=False) .groupby(level=1).agg(Value='first', Count='count') .reset_index() ) Output: index Value Count 0 Location Not Reported 1 1 Outcome Not Reported 1 2 Severity Not Reported 1 3 Substance Used Not Reported 1 4 Time Not Reported 2 5 Traffic Signal None 0 A: You can count Not Reported by comparing all values to Not Reported and summing, no groupby necessary: s = df.eq('Not Reported').sum() print (s) Location 1 Severity 1 Time 2 Outcome 1 Substance Used 1 Traffic Signal 0 dtype: int64 Your expected output can be obtained with the DataFrame constructor: df1 = pd.DataFrame({'Column': s.index, 'Value':'Not Reported', 'Count': s.to_numpy()}) print (df1) Column Value Count 0 Location Not Reported 1 1 Severity Not Reported 1 2 Time Not Reported 2 3 Outcome Not Reported 1 4 Substance Used Not Reported 1 5 Traffic Signal Not Reported 0
unknown
d15081
val
The issue is due to your loss function; mean squared error (MSE) is meaningful for regression problems, while here you face a classification one (3-class), hence your loss function should be cross entropy (also called log-loss). For multi-class classification, sigmoid is also not advisable; so, at a high level, here are some other code modifications advisable for your problem: * *One-hot encode your 3 classes *Use softmax activation for your last layer, which should have 3 units (i.e. as many as the number of your classes)
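A minimal NumPy sketch of those two modifications (illustrative only, not the asker's Keras model; the labels are made-up sample data):

```python
import numpy as np

# 1. One-hot encode three classes.
labels = np.array([0, 2, 1, 0])
one_hot = np.eye(3)[labels]          # shape (4, 3), exactly one 1 per row

# 2. Softmax over the last layer's 3 outputs, instead of sigmoid.
def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # subtract max for numerical stability
    return e / e.sum(axis=-1, keepdims=True)

logits = np.array([[2.0, 1.0, 0.1]])
probs = softmax(logits)
print(probs.sum())  # probabilities over the 3 classes sum to 1
```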
unknown
d15082
val
Here's the code snippet on Swift which resizes CMSampleBuffer: private func scale(_ sampleBuffer: CMSampleBuffer) -> CVImageBuffer? { guard let imgBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil } CVPixelBufferLockBaseAddress(imgBuffer, CVPixelBufferLockFlags(rawValue: 0)) // create vImage_Buffer out of CVImageBuffer var inBuff: vImage_Buffer = vImage_Buffer() inBuff.width = UInt(CVPixelBufferGetWidth(imgBuffer)) inBuff.height = UInt(CVPixelBufferGetHeight(imgBuffer)) inBuff.rowBytes = CVPixelBufferGetBytesPerRow(imgBuffer) inBuff.data = CVPixelBufferGetBaseAddress(imgBuffer) // perform scale var err = vImageScale_ARGB8888(&inBuff, &scaleBuffer, nil, 0) if err != kvImageNoError { print("Can't scale a buffer") return nil } CVPixelBufferUnlockBaseAddress(imgBuffer, CVPixelBufferLockFlags(rawValue: 0)) var newBuffer: CVPixelBuffer? let attributes : [NSObject:AnyObject] = [ kCVPixelBufferCGImageCompatibilityKey : true as AnyObject, kCVPixelBufferCGBitmapContextCompatibilityKey : true as AnyObject ] let status = CVPixelBufferCreateWithBytes(kCFAllocatorDefault, Int(scaleBuffer.width), Int(scaleBuffer.height), kCVPixelFormatType_32BGRA, scaleBuffer.data, Int(scaleBuffer.width) * 4, nil, nil, attributes as CFDictionary?, &newBuffer) guard status == kCVReturnSuccess, let b = newBuffer else { print("Can't create new CVPixelBuffer") return nil } return b } And here's a definition of scaleBuffer which acts as a destination in scale operation. I do not need to create it each scale so I do it only once: scaleBuffer.data = UnsafeMutableRawPointer.allocate(byteCount: Int(new_width * new_height * 4), alignment: MemoryLayout<UInt>.size) scaleBuffer.width = vImagePixelCount(new_width) scaleBuffer.height = vImagePixelCount(new_height) scaleBuffer.rowBytes = Int(new_width * 4)
unknown
d15083
val
Posted community wiki for better visibility. Feel free to expand it. Currently the only way to get the client source IP address in GKE Ingress is to use X-Forwarded-For header. It's known limitation for all GCP HTTP(s) Load Balancers (GKE Ingress is using External HTTP(s) LB). If it does not suit your needs, consider migrating to a third-party Ingress Controller which is using an external TCP/UDP network LoadBalancer, like NGINX Ingress Controller.
unknown
d15084
val
A normal Hash coerces all its keys to strings: my %a = '1' => 'foo', 2 => 'bar'; say %a.pairs.perl; # ("1" => "foo", "2" => "bar").Seq Note how the second key became the string "2", even though it was originally passed to the Hash as an integer. When you do hash look-ups, the subscript is also automatically converted to a string before it is used: say %a{"2"}.perl; # "bar" say %a{2}.perl; # "bar" Note how the subscript 2 correctly found the element with key "2". The conversion from integers to strings is well-defined in Perl 6, yielding a unique string for each unique integer, so the example you gave is fine. If you don't want your Hash keys to be converted to strings, you can override the key handling using the {} notation in the declaration: my %b{Any} = '1' => 'foo', 2 => 'bar'; say %b.pairs.perl; # ("1" => "foo", 2 => "bar").Seq say %b{"1"}.perl; # "foo" say %b{1}.perl; # Any say %b{"2"}.perl; # Any say %b{2}.perl; # "bar" Note how in this case the second key 2 stays an integer, and doing a look-up with the string subscript "2" does not find it, nor does the subscript 1 find the entry with key "1". %b{Any} means "accept keys of any type, and don't coerce them". This is sometimes called an 'object Hash' because it can map from any object to a value. %b{Int} would mean "accept only Int keys, and don't coerce them". In this case you'll get an error if you even try to use anything that isn't already an Int.
unknown
d15085
val
It doesn't work. That way you simply suppress the warning by making the situation harder to analyze. The behavior is still undefined. A: Your "fix" doesn't. To preserve the signature, you must make some tradeoffs. At the minimum, getString is not reentrant, and that can't be fixed other than returning a copy of the string. It is also not thread-safe, although that's fixable without changing the signature. At a minimum, to preserve the signature you must retain the string yourself. A simple solution might look as follows: const QString & getString() { static QString string = createString(); return string; } Another approach would be to make the string a class member, if your function is really a method: class Foo { QString m_getString_result; public: const QString & getString() { m_getString_result = createString(); return m_getString_result; } }; For thread safety, you'd need to keep the result in thread local storage. That still wouldn't fix the reentrancy issue - as such, it's not fixable given the signature that you have. A: This behavior is undefined. const QString& getString() { const QString& binder = createString(); return binder; } Once binder goes out of scope, it is not defined any more. You can make it defined by keeping the binder alive.
unknown
d15086
val
To set a border, you must specify the thickness and style. Also, don't use inline styles or inline event handlers. Just set a pre-existing class upon the event. And instead of checking to see if the radio button is selected, just use the click event of the button. If you've clicked it, it's selected. Lastly, if you are using a form field, but only as a UI element and not to actually submit data anywhere, you don't need the name attribute. document.getElementById("theme1").addEventListener("click", function() { document.body.classList.add("lightTheme"); }); .lightTheme { border:1px solid red; background-color:lightgrey; } <div class="dropdown"> <input type="radio" id="theme1" value="1">Light<br><br> </div> A: As @Tyblitz remarked: your "border" needs to be set properly: function changeTheme() { if (document.getElementById("theme1").checked == true) { document.container.style.backgroundColor = "lightgray"; document.container.style.borderColor = "red"; } } function bgcolor(color, border) { document.body.style.background = color; document.body.style.border = border; } <div class="dropdown"> <input type="radio" id="theme1" name="theme" value="1" onclick="bgcolor('lightgray', '6px solid red');" onchange="console.log('You changed the theme!')">Light<br><br> </div> A: 2nd Edit: (Thanks to Scott Marcus' answer and Mike 'Pomax' Kamermans' comment) Here is a pen for it. I create classes for each theme with the naming pattern: prefix "colorTheme" followed by the input value attribute's value.
function themeChange() { /* theme class naming pattern: prefix "colorTheme" followed by the input "value attribute" 's value */ var themeCheckedClassValue = "colorTheme" + this.value; /* don't do anything if we click on the same input/label */ if (themeCheckedClassValue != document.body.dataset.currentThemeClass) { document.body.classList.remove(document.body.dataset.currentThemeClass); document.body.classList.add(themeCheckedClassValue); /* new current theme value stored in data-current-theme-class custom attribute of body */ document.body.dataset.currentThemeClass = themeCheckedClassValue; } } document.addEventListener('DOMContentLoaded', function () { /* querySelector and if statement only needed at DOMContentLoaded So I did a separate function for this event */ var themeChecked = document.querySelector('input[id^="theme"]:checked'); if (themeChecked) { var themeCheckedClassValue = "colorTheme" + themeChecked.value; document.body.classList.add(themeCheckedClassValue); document.body.dataset.currentThemeClass = themeCheckedClassValue; } else { /* if there is no input with the attribute "checked" the custom attribute data-current-theme-class takes the value "null" */ document.body.dataset.currentThemeClass = null; } }); /* input[id^="theme"] means every input with an id starting with "theme" */ var themeInputs = document.querySelectorAll('input[id^="theme"]'); for (let i = 0; i < themeInputs.length; i++) { themeInputs[i].addEventListener("click", themeChange); } This way you can simply add new themes by following the same naming pattern.
function themeChange() { /* theme class naming pattern: prefix "colorTheme" followed by the value of the input's value attribute */ var themeCheckedClassValue = "colorTheme" + this.value; /* don't do anything if we click on the same input/label */ if (themeCheckedClassValue != document.body.dataset.currentThemeClass) { document.body.classList.remove(document.body.dataset.currentThemeClass); document.body.classList.add(themeCheckedClassValue); /* new current theme value stored in the data-current-theme-class custom attribute of body */ document.body.dataset.currentThemeClass = themeCheckedClassValue; } } document.addEventListener('DOMContentLoaded', function() { /* querySelector and if statement only needed at DOMContentLoaded, so I did a separate function for this event */ var themeChecked = document.querySelector('input[id^="theme"]:checked'); if (themeChecked) { var themeCheckedClassValue = "colorTheme" + themeChecked.value; document.body.classList.add(themeCheckedClassValue); document.body.dataset.currentThemeClass = themeCheckedClassValue; } else { /* if there is no input with the checked attribute, the custom attribute data-current-theme-class takes the value "null" */ document.body.dataset.currentThemeClass = null; } }); /* input[id^="theme"] means every input with an id starting with "theme" */ var themeInputs = document.querySelectorAll('input[id^="theme"]'); for (let i = 0; i < themeInputs.length; i++) { themeInputs[i].addEventListener("click", themeChange); } html { -webkit-box-sizing: border-box; box-sizing: border-box; font-size: 16px; } *, *::after, *::before { -webkit-box-sizing: inherit; box-sizing: inherit; } body { margin: 0; padding: 0; min-height: 100vh; width: 100vw; border: 10px solid; border-color: transparent; -webkit-transition: all .4s; transition: all .4s; } ul { list-style: none; margin: 0; padding: 10px; font-size: 20px; } li { padding: 2px; -webkit-user-select: none; -moz-user-select: none; -ms-user-select: none; user-select: none; } input, label { 
cursor: pointer; -webkit-user-select: none; -moz-user-select: none; -ms-user-select: none; user-select: none; outline: none; } .colorTheme1 { color: rgb(36, 36, 33); font-weight: 400; background-color: rgb(248, 236, 220); border-color: rgb(60, 75, 124); } .colorTheme2 { background-color: rgb(39, 34, 28); border-color: rgb(102, 102, 153); color: rgb(248, 236, 220); font-weight: 700; } .colorTheme3 { background-color: rgb(255, 0, 0); border-color: rgb(0, 255, 255); color: rgb(255, 255, 0); font-weight: 700; } <body> <ul> <li><input type="radio" id="theme1" name="theme" value="1" checked><label for="theme1">Light</label></li> <li> <input type="radio" id="theme2" name="theme" value="2"><label for="theme2">Dark</label></li> <li> <input type="radio" id="theme3" name="theme" value="3"><label for="theme3">Flashy</label></li> </ul> </body>
unknown
d15087
val
To generate all possible bit patterns inside an int and thus all possible subsets defined by that bit map would simply require you to start your int at 1 and keep incrementing it to the highest possible value an unsigned short int can hold (all 1s). At the end of each inner loop, compare the sum to the target. If it matches, you got a solution subset - print it out. If not, try the next subset. Can someone help to explain how to go about doing this? I understand the concept but lack the knowledge of how to implement it. A: OK, so you are allowed one array. Presumably, that array holds the first set of data. So your approach needs to not have any additional arrays. The bit-vector is simply a mental model construct in this case. The idea is this: if you try every possible combination (note, NOT permutation), then you are going to find the closest sum to your target. So let's say you have N numbers. That means you have 2^N possible combinations. The bit-vector approach is to number each combination with 0 to 2^N - 1, and try each one. Assuming you have fewer than 32 numbers in the array, you essentially have an outer loop like this: int numberOfCombinations = (1 << numbers.length) - 1; for (int i = 1; i <= numberOfCombinations; ++i) { ... } For each value of i, you need to go over each number in numbers, deciding to add or skip based on shifts and bitmasks of i. A: So the task is to write an algorithm that, given a set A of non-negative numbers and a goal value k, determines whether there is a subset of A such that the sum of its elements is k. I'd approach this using induction over A, keeping track of which numbers <= k are sums of a subset of the set of elements processed so far. That is: boolean[] reachable = new boolean[k+1]; reachable[0] = true; for (int a : A) { // compute the new reachable // hint: what's the relationship between subsets of S and S ∪ {a} ? 
} return reachable[k]; A bitmap is, mathematically speaking, a function mapping a range of numbers onto {0, 1}. A boolean[] maps array indices to booleans. So one could call a boolean[] a bitmap. One disadvantage of using a boolean[] is that you must process each array element individually. Instead, one could use the fact that a long holds 64 bits, and use bitshifting and masking operations to process 64 "array" elements at a time. But that sort of microoptimization is error-prone and rather involved, so it is not commonly done in code that should be reliable and maintainable. A: I think you need something like this: public boolean equalsTarget( int bitmap, int [] numbers, int target ) { int sum = 0; // this is the variable we're storing the running sum of our numbers in int mask = 1; // this is the bitmask that we're using to query the bitmap for( int i = 0; i < numbers.length; i++ ) { // for each number in our array if( ( bitmap & mask ) > 0 ) { // test if the ith bit is 1 sum += numbers[ i ]; // and add the ith number to the sum if it is } mask <<= 1; // shift the mask bit left by 1 } return sum == target; // if the sum equals the target, this bitmap is a match } The rest of your code is fairly simple: you just feed every possible value of your bitmap (1..65535) into this method and act on the result. P.s.: Please make sure that you fully understand the solution and not just copy it, otherwise you're just cheating yourself. :) P.p.s.: Using int works in this case, as int is 32 bits wide and we only need 16. Be careful with bitwise operations though if you need all the bits, as all primitive integer types (byte, short, int, long) are signed in Java. A: There are a couple of steps in solving this. First you need to enumerate all the possible bit maps. As others have pointed out, you can do this easily by incrementing an integer from 0 to 2^n - 1. 
Once you have that and can iterate over all the possible bit maps, you just need a way to take each bit map and "apply" it to an array to generate the sum of the elements at all indexes represented by the map. The following method is an example of how to do that: private static int bitmapSum(int[] input, int bitmap) { // a variable for holding the running total int sum = 0; // iterate over each element in our array // adding only the values specified by the bitmap for (int i = 0; i < input.length; i++) { int mask = 1 << i; if ((bitmap & mask) != 0) { // If the index is part of the bitmap, add it to the total; sum += input[i]; } } return sum; } This function will take an integer array and a bit map (represented as an integer) and return the sum of all the elements in the array whose indexes are present in the mask. The key to this function is the ability to determine if a given index is in fact in the bit map. That is accomplished by first creating a bit mask for the desired index and then applying that mask to the bit map to test if that value is set. Basically we want to build an integer where only one bit is set and all the others are zero. We can then bitwise AND that mask with the bit map and test if a particular position is set by comparing the result to 0. Let's say we have an 8-bit map like the following: map: 1 0 0 1 1 1 0 1 --------------- indexes: 7 6 5 4 3 2 1 0 To test the value for index 4 we would need a bit mask that looks like the following: mask: 0 0 0 1 0 0 0 0 --------------- indexes: 7 6 5 4 3 2 1 0 To build the mask we simply start with 1 and shift it by N: 1: 0 0 0 0 0 0 0 1 shift by 1: 0 0 0 0 0 0 1 0 shift by 2: 0 0 0 0 0 1 0 0 shift by 3: 0 0 0 0 1 0 0 0 shift by 4: 0 0 0 1 0 0 0 0 Once we have this we can apply the mask to the map and see if the value is set: map: 1 0 0 1 1 1 0 1 mask: 0 0 0 1 0 0 0 0 --------------- result of AND: 0 0 0 1 0 0 0 0 Since the result is != 0 we can tell that index 4 is included in the map.
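Putting the pieces above together end to end, the enumeration loop plus the per-bitmap sum look like the following sketch. It is written in JavaScript rather than Java purely so it is trivially runnable anywhere; the structure mirrors the Java snippets above, and the function name is my own invention:

```javascript
// Enumerate every non-empty subset of `numbers` via bitmaps 1 .. 2^n - 1
// and return the indexes of the first subset whose sum equals `target`,
// or null if no such subset exists.
function findSubsetByBitmap(numbers, target) {
  var combos = (1 << numbers.length) - 1; // the all-ones bitmap
  for (var bitmap = 1; bitmap <= combos; bitmap++) {
    var sum = 0;
    var picked = [];
    for (var i = 0; i < numbers.length; i++) {
      if ((bitmap & (1 << i)) !== 0) { // is index i in this subset?
        sum += numbers[i];
        picked.push(i);
      }
    }
    if (sum === target) return picked;
  }
  return null;
}
```

For example, `findSubsetByBitmap([3, 9, 8, 4], 12)` returns `[0, 1]`, since 3 + 9 = 12 is the first matching subset in bitmap order.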
unknown
d15088
val
No, this "feature" isn't available without explicit permission from Microsoft. And no, there are no real alternative solutions.
unknown
d15089
val
$conn = new \mysqli('127.0.0.1', 'dev', 'SoMuchDev', 'test', 3306); $result = $conn->query('select * from test'); $res = []; while ($row = $result->fetch_assoc()) { $res[] = $row; } $res['total'] = $result->num_rows; echo "<pre>"; var_export($res); die(); http://php.net/manual/en/class.mysqli-result.php As for editors, PhpStorm is widely used but AFAIK there's only a 30-day trial. You could also give Notepad++ a try; I think it has some PHP plugins for autocomplete and whatnot.
unknown
d15090
val
You need to have the content of the page fit within your viewport (or screen) dimensions for the given media type. You can have different stylesheets based on the media type via the media attribute of a link/stylesheet tag. Say, for the iPhone, use media="handheld" and try to keep the content within around 600px, or whatever looks good on your mobile device. Here is another resource from the W3C.
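For example, the media attribute mentioned above could be used like this (the filenames are placeholders; note that modern phones generally ignore "handheld" and are better targeted with a width-based media query):

```html
<!-- placeholder filenames for illustration -->
<link rel="stylesheet" media="screen" href="desktop.css">
<link rel="stylesheet" media="handheld" href="mobile.css">
<!-- the modern equivalent: a width-based media query -->
<link rel="stylesheet" media="screen and (max-width: 600px)" href="narrow.css">
```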
unknown
d15091
val
For some reason it does not appear in the Spark configuration docs, but you can find it in SQLConf.scala: When set to LEGACY, java.text.SimpleDateFormat is used for formatting and parsing dates/timestamps in a locale-sensitive manner, which is the approach before Spark 3.0. When set to CORRECTED, classes from the java.time.* packages are used for the same purpose. The default value is EXCEPTION: a RuntimeException is thrown when the two approaches would produce different results.
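Assuming an existing SparkSession named spark, the policy can be changed at runtime; this is a config fragment shown in PySpark, and the Scala call is the same:

```python
# config fragment -- assumes a live SparkSession named `spark`
spark.conf.set("spark.sql.legacy.timeParserPolicy", "LEGACY")     # pre-3.0 SimpleDateFormat behaviour
# or:
spark.conf.set("spark.sql.legacy.timeParserPolicy", "CORRECTED")  # java.time.* behaviour
```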
unknown
d15092
val
The draw method is not called as a method of the object; it's called as a function in the global scope, so this will be a reference to window, not to the Game object. Copy this to a variable, and use it to call the method from a function: var t = this; window.setInterval(function() { t.draw(); }, 1000 / 30);
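The same loss of this can be reproduced without timers; detaching a method from its object is enough. A minimal runnable sketch (this Game is a stand-in for the asker's actual class):

```javascript
// `Game` here is a stand-in object, not the asker's actual class.
function Game() {
  this.frames = 0;
}
Game.prototype.draw = function () {
  this.frames += 1;
  return this.frames;
};

var game = new Game();

// Passing the bare method loses the receiver: inside it, `this` is no
// longer `game` (it is `window`, or `undefined` in strict mode).
var detached = game.draw;

// Fix 1: close over a copy of the object, as in the answer above.
var t = game;
var viaClosure = function () { return t.draw(); };

// Fix 2 (also ES5): bind the receiver explicitly.
var viaBind = game.draw.bind(game);

viaClosure(); // game.frames is now 1
viaBind();    // game.frames is now 2
```

Either fix works with setInterval as well; the closure version is what the answer above uses.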
unknown
d15093
val
Think about what the INNER JOIN is doing. For every row in CaseLot, it's finding any row in Budget that has a matching date. If you were to remove your aggregation statements in SQL, and just show the inner join, you would see the following result set: DateProduced kgProduced OperatingDate BudgetHours October 1, 2013 10000 October 1, 2013 24 October 1, 2013 10000 October 1, 2013 24 October 2, 2013 10000 October 2, 2013 24 (dammit StackOverflow, why don't you have Markdown for tables :( ) Running your aggregation on top of that, it is easy to see how you get the 72 hours in your result. The correct query needs to aggregate the CaseLots table first, then join onto the Budget table. SELECT DateProduced, TotalKgProduced, SUM(BudgetHours) AS TotalBudgetHours FROM ( SELECT DateProduced, SUM(kgProduced) AS TotalKgProduced FROM CaseLots GROUP BY DateProduced ) AS TotalKgProducedByDay INNER JOIN Budget ON TotalKgProducedByDay.DateProduced = Budget.OperatingDate WHERE DateProduced BETWEEN '1 Oct 2013' AND '2 Oct 2013' GROUP BY DateProduced, TotalKgProduced A: The problem is that the INNER JOIN produces a 3-row table, since the keys match on all rows. So there are three '24's, with a sum of 72. To fix this, it would probably be easier to aggregate in two separate derived tables and combine them: SELECT p.TotalProduction, b.TotalBudgetHours FROM ( SELECT SUM(kgProduced) AS TotalProduction FROM dbo.CaseLots WHERE DateProduced BETWEEN '2013-10-01' AND '2013-10-02' ) AS p CROSS JOIN ( SELECT SUM(BudgetHours) AS TotalBudgetHours FROM dbo.Budget WHERE OperatingDate BETWEEN '2013-10-01' AND '2013-10-02' ) AS b A: This could be easily achieved by this: SELECT (SELECT SUM(kgProduced) FROM dbo.CaseLots WHERE DateProduced BETWEEN '2013-10-01' AND '2013-10-02') AS TotalProduction, (SELECT SUM(BudgetHours) FROM dbo.Budget WHERE OperatingDate BETWEEN '2013-10-01' AND '2013-10-02') AS TotalBudgetHours There's no need to join the two tables. 
A: Try this: select DateProduced,TotalProduction,TotalBudgetHours from (select DateProduced,sum(kgProduced) as TotalProduction from CaseLots group by DateProduced) p join (select OperatingDate,sum(BudgetHours) as TotalBudgetHours from Budget group by OperatingDate) b on (p.DateProduced=b.OperatingDate) where p.DateProduced between '2013-10-01' AND '2013-10-02' A: The other answers are simpler for this particular case. However, if you needed to SUM 10 different values on the CaseLots table, you'd need 10 different subqueries. The following is a general, more scalable solution: SELECT SUM(DayKgProduced) AS TotalProduction, SUM(BudgetHours) AS TotalBudgetHours FROM ( SELECT DateProduced, SUM(kgProduced) AS DayKgProduced FROM dbo.CaseLots WHERE DateProduced BETWEEN '2013-10-01' AND '2013-10-02' GROUP BY DateProduced ) DailyTotals INNER JOIN dbo.Budget b ON DailyTotals.DateProduced = b.OperatingDate First you SUM the production of each CaseLot without having to SUM the BudgetHours. If you used a SELECT * FROM in the query above you'd see: Date DayKgProduced BudgetHours 2013-10-01 20000 24 2013-10-02 10000 24 But you want the overall total, so we SUM those daily values, correctly producing: TotalProduction TotalBudgetHours 30000 48
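The double counting is easy to reproduce outside SQL. A small sketch (in JavaScript, using the same hypothetical numbers as the tables above) simulating "join then aggregate" versus "aggregate then join":

```javascript
// Hypothetical rows mirroring the tables in the question.
var caseLots = [
  { date: "2013-10-01", kg: 10000 },
  { date: "2013-10-01", kg: 10000 },
  { date: "2013-10-02", kg: 10000 }
];
var budgetHoursByDate = { "2013-10-01": 24, "2013-10-02": 24 };

// Join first (one joined row per caselot row), then sum:
// each day's budget hours are counted once per caselot row.
var naiveHours = caseLots.reduce(function (sum, lot) {
  return sum + budgetHoursByDate[lot.date];
}, 0); // 72, the wrong total from the question

// Aggregate to one row per day first, then join a single budget row per day.
var seenDates = {};
caseLots.forEach(function (lot) { seenDates[lot.date] = true; });
var correctHours = Object.keys(seenDates).reduce(function (sum, d) {
  return sum + budgetHoursByDate[d];
}, 0); // 48, the expected total
```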
unknown
d15094
val
If you want to use the 'karma' command, you will need to install the following: npm install -g karma-cli As specified by the official documentation, you need a separate package to use Karma from the command line interface. Otherwise you would need to call karma from within the node_modules folder every time.
unknown
d15095
val
In SQL Server there are two special tables available inside a trigger, called inserted and deleted. They have the same structure as the table on which the trigger is defined: inserted holds the new row versions, deleted the old ones.
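As a sketch of how the two pseudo-tables are typically used (the table and column names here are invented for illustration), an AFTER UPDATE trigger can join inserted to deleted on the key to pair each new row version with its old one:

```sql
CREATE TRIGGER trg_Price_Audit ON dbo.Products
AFTER UPDATE
AS
BEGIN
    -- pair old and new versions of each updated row
    INSERT INTO dbo.PriceAudit (ProductId, OldPrice, NewPrice, ChangedAt)
    SELECT d.ProductId, d.Price, i.Price, SYSDATETIME()
    FROM deleted AS d
    JOIN inserted AS i ON i.ProductId = d.ProductId
    WHERE i.Price <> d.Price;
END
```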
unknown
d15096
val
If you want to update a nested array, you need to use the $push operator. Reference: https://www.mongodb.com/community/forums/t/pushing-array-of-elements-to-nested-array-in-mongo-db-schema/112494
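Since $push appends a value to the array at a given path, its effect is easy to model on a plain object. The toy helper below only illustrates the semantics and is not the MongoDB API; the collection and field names are invented:

```javascript
// Minimal stand-in for what { $push: { <path>: value } } does to one document.
function applyPush(doc, path, value) {
  var keys = path.split(".");
  var node = doc;
  for (var i = 0; i < keys.length - 1; i++) {
    node = node[keys[i]]; // walk down the dotted path
  }
  node[keys[keys.length - 1]].push(value); // append to the target array
  return doc;
}

var doc = { _id: 1, orders: [{ items: ["a"] }] };
// In the mongo shell this would be roughly:
// db.coll.updateOne({ _id: 1 }, { $push: { "orders.0.items": "b" } })
applyPush(doc, "orders.0.items", "b"); // doc.orders[0].items is now ["a", "b"]
```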
unknown
d15097
val
I have tried using the lead window function. I arranged the dataframe by partitioning on station and dateS and ordering by hour, and calculated the difference with the previous hour. If we are looking for 4 consecutive hours, there should be three 1's in the difference column, one after another. To detect that, I have collected all the diffs based on station and dateS and checked whether the resulting string contains " 1 1 1 ". The code for the same is shown below. I hope it is helpful. import org.apache.spark.sql.expressions.Window import org.apache.spark.sql.functions._ //Creating Test Data val df = Seq(("Roma",2.2,"2018-10-02",1 ) , ("Roma",1.5,"2018-10-02",2 ) , ("Roma",1.4,"2018-10-02",3 ) , ("Roma",1.4,"2018-10-02",4 ) , ("Milano",0.6,"2018-11-02",12 ) , ("Milano",1.0,"2018-11-02",13 ) , ("Napoli",0.3,"2018-12-02",20 ) , ("Napoli",0.0,"2018-12-02",21 ) , ("Napoli",1.8,"2018-12-02",4 ) , ("Napoli",2.0,"2018-12-03",5 ) , ("Napoli",1.8,"2018-12-03",6)) .toDF("station", "temp", "dateS", "hour") val filterDF = df.withColumn("hour_lead", lead($"hour", 1) .over(Window.partitionBy("station","dateS") .orderBy(col("hour")))) .filter($"hour_lead".isNotNull) .withColumn("hour_diff", $"hour_lead" - $"hour") .groupBy("station","dateS") .agg(collect_list($"hour_diff".cast("string")).as("hour_diff_list")) .withColumn("hour_diff_list_str", concat(lit(" "), concat_ws(" ", $"hour_diff_list"), lit(" "))) .filter($"hour_diff_list_str".contains(" 1 1 1 ")) filterDF.show(false) +-------+----------+--------------+------------------+ |station|dateS |hour_diff_list|hour_diff_list_str| +-------+----------+--------------+------------------+ |Roma |2018-10-02|[1, 1, 1] | 1 1 1 | +-------+----------+--------------+------------------+
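Stripped of the Spark machinery, the core of the check is: take the hours of one (station, date) group in order, diff neighbours, and look for three consecutive 1s. A runnable sketch of just that logic, outside Spark:

```javascript
// true if `hours` (the hours of one station+date group, in any order)
// contains a run of 4 consecutive hours
function hasFourConsecutive(hours) {
  var sorted = hours.slice().sort(function (a, b) { return a - b; });
  var run = 1; // length of the current consecutive streak
  for (var i = 1; i < sorted.length; i++) {
    run = (sorted[i] - sorted[i - 1] === 1) ? run + 1 : 1;
    if (run >= 4) return true;
  }
  return false;
}

hasFourConsecutive([1, 2, 3, 4]);   // true  (Roma 2018-10-02)
hasFourConsecutive([12, 13]);       // false (Milano)
hasFourConsecutive([20, 21, 4, 5]); // false (two separate short runs)
```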
unknown
d15098
val
Assuming you mean micro-controllers: A watchdog timer (sometimes called a computer operating properly or COP timer, or simply a watchdog) is an electronic timer that is used to detect and recover from computer malfunctions. Source
unknown
d15099
val
Even when netstat showed that the port was open, it was closed on the server. I managed to open it with the following commands: sudo firewall-cmd --zone=public --add-port=3001/tcp --permanent sudo firewall-cmd --reload I hope this helps anyone else having similar issues. A: The default IP address, 127.0.0.1, is not accessible from other machines on your network. To access your machine from other machines on the network, use its own IP address 192.168.1.4 or 0.0.0.0. python manage.py runserver 0.0.0.0:8000
unknown
d15100
val
You can't access MainBackground.infoButton from LoginUI because infoButton is not static. To solve this you could inject MainBackground through a property, like the example below: public partial class LoginUI : UserControl { public MainBackground MainBackground { get; set; } ... } In MainBackground you should initialize your LoginUI.MainBackground property: loginUI1.MainBackground = this; Make sure to make infoButton public by setting its Modifiers property to Public. Now you can access MainBackground (and its button) from within LoginUI: private void login_Click(object sender, EventArgs e) { MainBackground.InfoButton.Enabled = true; } A: The method described in your question, enabling the MainBackground form's InfoButton when the Login button is pressed, is a common action. However, instead of directly binding the two items, where the LoginUI Control is forever bound to the MainBackground Form, you should de-couple the two by using Events. The LoginUI Control should publish an event, perhaps called LoginClicked. The MainBackground form can then subscribe to this event and execute whatever actions are required when the Login button is clicked. In the LoginUI Control, declare an event: public event EventHandler LoginClicked; And, raise it whenever the Login button is pressed: private void login_Click(object sender, EventArgs e) { OnLoginClicked(EventArgs.Empty); } protected virtual void OnLoginClicked(EventArgs e) { EventHandler handler = LoginClicked; if (handler != null) { handler(this, e); } } Finally, in the MainBackground form class, subscribe to the LoginClicked event: loginUI.LoginClicked += this.loginUI_LoginClicked; Handle the LoginClicked event like this: private void loginUI_LoginClicked(object sender, EventArgs e) { InfoButton.Enabled = true; }
unknown