source: stringclasses (1 value)
task_type: stringclasses (1 value)
in_source_id: stringlengths (1 to 8)
prompt: stringlengths (209 to 40.4k)
gold_standard_solution: stringlengths (0 to 56.7k)
verification_info: stringclasses (1 value)
metadata: stringlengths (138 to 225)
problem_id: stringlengths (9 to 10)
stackexchange
llm_judgeable_groundtruth_similarity
20996
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I was just having a look at Gödel's incompleteness theorem as found in: https://www.ii.uni.wroc.pl/~kosciels/pi/canon00-goedel.pdf I noticed that Gödel used a higher-order logic. At least such a logic was introduced via Axioms IV and V in the above paper: Axiom IV (looks to me like comprehension, but is called the reducibility axiom):$$\exists u \forall v(u(v) \leftrightarrow a)$$ Axiom V (looks to me like extensionality, but is called the type-lift/set axiom):$$\forall x_1(x_2(x_1) \leftrightarrow y_2(x_1)) \rightarrow x_2 = y_2$$ It looks higher order to me, since it uses the formulation $p(x)$ for a kind of membership. What alternative formulations of Gödel's incompleteness theorem are around that don't make use of higher-order logic? Will they be free of variables ranging over sets? Best Regards P.S.: I am looking for proofs that preserve the original content of Gödel's proof. So I am not looking for later-discovered proofs possibly not based on the same paradox. Edit 12.05.2018: There is one more higher-order axiom in Gödel's formalization of his subject logic, buried in footnote 21 in the original paper. See also here: How do we get the converse of extensionality in Gödel's 1931 system? Now provide the response and nothing else.
There is nothing inherently higher-order in Gödel's original proof. The object theory in the proof is "$P$", which is based on the system of type theory from Principia Mathematica, but in principle there is no difficulty applying exactly the same techniques to first-order systems such as Peano arithmetic. The metatheory in Gödel's paper is entirely finitistic. There are two reasons that the paper was written this way. First, there is an obvious motivation, when presenting a new technique, to pick some "well known" system to demonstrate the applicability of the results. Because the system of Principia was well known at the time, using that system made it clear that the results could be applied to systems of real interest. Second, and more important, there was no general definition of a "formal system" at the time Gödel published the paper in 1931. The notion of computability would not be discovered for a few more years. When he wrote the paper on the incompleteness theorems, Gödel originally planned to write a second part that would contain generalizations of his theorems to other systems. The final paragraph in Gödel's paper states (from "From Frege to Gödel"): In the present paper, we have on the whole restricted ourselves to the system $P$, and we have only indicated the applications to other systems. The results will be stated and proved in full generality in a sequel to be published soon. In that paper, also, the proof of Theorem XI, only sketched here, will be given in detail. After the paper was published, the results were accepted more readily than Gödel had expected. Once the notion of computability was developed a few years later by Church and Turing, everyone immediately recognized that this was the key property needed to give a general definition of "formal system", and that the same techniques Gödel used for $P$ would apply to any effective formal system. The planned second part was therefore never written: it was already clear how to apply the techniques of the 1931 paper to any effective formal system.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/20996', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']}
jdg_75041
stackexchange
llm_judgeable_groundtruth_similarity
55513
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Some time ago, I built a soapbox-sized battery testing device. One of its functions was to discharge batteries at a specified current, and it handled most of the common battery sizes well, but its maximum discharge current was quite limited. Thus I didn't need to worry about heat dissipation. I'm now designing an enhanced version, and this time I want to support draining middle-sized batteries (like the ones in cordless screwdrivers, they are around 12 Wh) in reasonable time. I'd be happy if I'm able to dissipate around 10-15 watts of heat in a device with the same form factor (think 85×60×40mm or similar). The heat-producing element is a TO-220 MOSFET. I'm thinking about either heatsinking it (and selecting a plastic enclosure with vents), or using an aluminum box, with the MOSFET dissipating directly to the enclosure. Would that work? Which approach is better (and of course, other ideas are also welcome)? I'm quite new to this, so links, recommended reads, calculators and stuff would also be greatly appreciated :) Now provide the response and nothing else.
A soapbox is hardly a standard unit of measurement, but 12 W doesn't require anything too huge, even without forced air, provided you can at least let natural convection be, well, natural. Here's how you calculate the heatsink you need. I've picked the datasheet for the IRF510 as an example. It's a very common TO-220 MOSFET and should work for your application. The first thing you will see is that the datasheet lists power dissipation as 43 W. This of course requires an ample external heatsink, but it should cover your application with a healthy margin. The absolute maximum junction temperature \$T_J\$ is listed as \$175 ^\circ C\$, and let us assume ambient temperature is \$35 ^\circ C\$. That means the temperature can't rise more than \$175 ^\circ C - 35 ^\circ C = 140 ^\circ C\$. And to be safe, let's add a safety margin and design for no more than a \$ 100 ^\circ C \$ rise. The datasheet lists the maximum junction-to-case thermal resistance as \$ R_{\theta JC} = 3.5 ^\circ C/W \$. That is, for every watt, the junction temperature will rise \$ 3.5 ^\circ C\$, assuming the heatsink can magically remove all heat. At 15 W, that's a rise of \$ 3.5 ^\circ C/W \cdot 15 W = 52.5 ^\circ C \$. We are hoping for no more than a \$ 100 ^\circ C \$ rise, so we will have to find a heatsink that won't raise the temperature more than another \$ 100^\circ C - 52.5 ^\circ C = 47.5 ^\circ C \$. That means our thermal resistance budget for the heatsink is \$ 47.5 ^\circ C / 15 W = 3.17 ^\circ C / W \$. This is pushing the edge of what can be done with natural convection, but it's doable. Thumbing through my Mouser catalog I can find an Ohmite heatsink, the FA-T220-64E, with a thermal resistance of \$ 3 ^\circ C / W \$ with natural convection. It's the biggest one they sell for TO-220. It's about 1 x 1.6 x 2.5 inches, and Mouser will sell you just one for $2.17 plus shipping. Strictly speaking, I haven't taken into account the thermal resistance from the transistor case to the heatsink. The IRF510 datasheet gives a typical value of \$ 0.5 ^\circ C / W \$ for a greased surface, which at 15 W will mean another \$ 7.5 ^\circ C \$ rise in junction temperature. But remember we included a margin of \$ 40 ^\circ C \$ and assumed a rather high ambient temperature of \$ 35 ^\circ C \$. We should be safe. Even so, something may obstruct your heatsink, so you may do well to build in some sort of thermal protection. You can implement this yourself, but there are also MOSFETs out there with thermal protection built in. If you do this, you don't need such a margin, and you may very well be able to dissipate more than 15 W if you don't mind the possibility that the thermal protection kicks in. And, it bears mentioning that even though the transistor shouldn't fail, it will get mighty hot. A plastic box with poor ventilation is probably no good. You will have to keep fingers away for sure. If you want to keep things cooler, I'm afraid you have no choice but forced air or spreading the heat over more area: big power resistors, multiple transistors, etc.
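The arithmetic above is easy to script so it can be re-run for other transistors or power levels. Here is a minimal sketch in plain Python, with the numbers taken from the reasoning above; note it folds the case-to-sink grease resistance in up front, so its budget comes out a bit tighter than the two-step estimate in the text:

import sys

T_J_MAX = 175.0      # absolute maximum junction temperature, deg C (IRF510 datasheet)
T_AMBIENT = 35.0     # assumed worst-case ambient, deg C
DESIGN_RISE = 100.0  # self-imposed limit on junction temperature rise, deg C
P = 15.0             # power to dissipate, W
R_JC = 3.5           # junction-to-case resistance, deg C/W (datasheet maximum)
R_CS = 0.5           # case-to-sink resistance with grease, deg C/W (datasheet typical)

rise_inside = P * (R_JC + R_CS)           # rise before heat even reaches the sink
budget = (DESIGN_RISE - rise_inside) / P  # max allowed sink-to-ambient resistance
print(f"rise inside package + interface: {rise_inside:.1f} deg C")
print(f"heatsink budget: {budget:.2f} deg C/W")  # ~2.67 deg C/W with grease included
if budget < 3.0:
    sys.exit("marginal for natural convection; consider forced air")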
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/55513', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/18035/']}
jdg_75042
stackexchange
llm_judgeable_groundtruth_similarity
45501568
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an Azure Web Application with Azure Active Directory authentication (made with adal-angular). In the application manifest I have set "groupMembershipClaims": "SecurityGroup". The strange thing is that for some days, for just one user, the AAD token does not have the groups claim with the list of group-membership objectIds; instead there's a property named hasgroups with the value true. Can I do something about it? For now I'm going to check if there's one property or the other, and then call the Graph API for direct group membership. Now provide the response and nothing else.
hasGroups=true is returned in the case where the user belongs to "too many groups". I don't know what the exact threshold is (20? 200?), but effectively what you need to do in your code is something along these lines (pseudocode):

if (hasGroups)
    call the Graph to inquire:
        either about the full group membership
        or about membership in a particular group
else
    access groups directly from the token

Get all the groups a user belongs to: https://graph.windows.net/myorganization/users/{user_id}/$links/memberOf?api-version

Inquire whether the user belongs to a specific group: https://graph.windows.net/myorganization/users/{user_id}/isMemberOf?api-version
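To make the branch concrete, here is a rough Python sketch of the fallback call. This is hypothetical glue code: the token acquisition, the api-version value, and the shape of the claims dictionary are placeholders you would adapt to your own ADAL setup.

import requests

GRAPH = "https://graph.windows.net/myorganization"
API_VERSION = "1.6"        # placeholder; use the version your tenant targets
token = "<access token>"   # placeholder; acquired via ADAL/MSAL elsewhere

def get_groups(user_id, claims):
    # Fast path: small memberships are embedded in the token itself.
    if "groups" in claims:
        return claims["groups"]
    # Overage case: the token only carries hasgroups=true, so ask the Graph.
    resp = requests.get(
        f"{GRAPH}/users/{user_id}/$links/memberOf",
        params={"api-version": API_VERSION},
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
    return resp.json()["value"]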
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/45501568', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2862032/']}
jdg_75043
stackexchange
llm_judgeable_groundtruth_similarity
21010429
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In order to get my custom menu up and running, I've ended up using a UITabBarController and need to change the view displayed programmatically, vs the standard tabbed menu on screen. Everything is working as expected except one thing. I am attempting to use: [self setSelectedIndex:index]; This code is inside my UITabBarController subclass in a custom delegate method. (This is so I can programmatically adjust the view when interacting with my menu.) However, while this code is called, it doesn't do anything? Does it HAVE to be called from one of the tabbed views? I was hoping to run it from inside the TabBarController to avoid repeating the code in each tabbed sub controller. UPDATE: Just found that using [self setSelectedIndex:index]; works fine in viewDidLoad. But when it is called inside the delegate method, it doesn't change the view. It is using the right index number and getting called, but not doing anything from that method. Also, it seems the tab controller is a different object when I log self in viewDidLoad vs my delegate method. So why would I be losing the reference to the original controller? It's just a UITabBarController in a container in another view controller. Delegate Code:

@Interface

@protocol SLMenuDelegate <NSObject>
@required
-(void)menuDidChangeViewToIndex:(NSInteger)index;
@end

@property (nonatomic, assign) id<SLMenuDelegate>menuDelegate;

@Implementation

@synthesize menuDelegate;

self.menuDelegate = [self.storyboard instantiateViewControllerWithIdentifier:@"TabBarViewController"];
[menuDelegate menuDidChangeViewToIndex:[self.menuItemButtons indexOfObject:sender]];

UITabBarController

-(void)menuDidChangeViewToIndex:(NSInteger)index
{
    [self setSelectedIndex:index];
}

Setting breakpoints and running NSLogs, there is no question that the method gets called and all code runs. Now provide the response and nothing else.
Each IGrouping<string, DocumentData> is an IEnumerable<DocumentData>, so you could simply call SelectMany to flatten the sequences:

var list = documents.SelectMany(d => d).ToList();

Edit: Per the updated question, it seems like the OP wants to select just the first document for any given filename. This can be achieved by calling First() on each IGrouping<string, DocumentData> instance:

IEnumerable<DocumentData> documents = documentCollection
    .GroupBy(g => g.FileName, StringComparer.OrdinalIgnoreCase)
    .Select(g => g.First())
    .ToList();
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21010429', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/427292/']}
jdg_75044
stackexchange
llm_judgeable_groundtruth_similarity
1720535
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I know what dup / dup2 does, but I have no idea when it would be used. Any practical examples? Thanks. Now provide the response and nothing else.
One example use would be I/O redirection. For this you fork a child process and close the stdin or stdout file descriptor (0 or 1), and then you call dup() on another file descriptor of your choice, which will now be mapped to the lowest available file descriptor: in this case 0 or 1. Using this you can now exec any child process, one which is possibly unaware of your application, and whenever the child writes to stdout (or reads from stdin, whatever you configured), the data gets written to the provided file descriptor instead. Shells use this to implement commands with pipes, e.g. /bin/ls | more, by connecting the stdout of one process to the stdin of the other.
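Here is a minimal sketch of that pattern using Python's os module, which exposes the same POSIX calls (Unix only; error handling omitted). The child's stdout is rerouted into a pipe that the parent reads:

import os

read_fd, write_fd = os.pipe()
pid = os.fork()

if pid == 0:                 # child
    os.close(read_fd)
    os.close(1)              # free up stdout (fd 1)
    os.dup(write_fd)         # dup() returns the lowest free fd, which is now 1
    os.close(write_fd)
    os.execvp("ls", ["ls"])  # ls is unaware its stdout is really the pipe
else:                        # parent
    os.close(write_fd)
    with os.fdopen(read_fd) as child_out:
        print(child_out.read())  # everything the child wrote to "stdout"
    os.waitpid(pid, 0)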
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/1720535', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/115722/']}
jdg_75045
stackexchange
llm_judgeable_groundtruth_similarity
3172811
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Lemma: For any collection $\{ M_i\}_{i\in I}$ of $R$ -modules, and $R$ -module $N$ , there is a natural isomorphism $${\rm Hom}_R(\oplus_i M_i, N)\cong \prod_i {\rm Hom}_R(M_i,N).$$ Proof: Additive functors preserve limits. [ Ref: this link, page 2.] Q. The lemma also follows from the definition of the direct sum of modules. However, the purely categorical justification given in the above proof is not clear to me. Can one explain the above proof in detail? I just started studying homological algebra, so my vocabulary in this subject is not so deep. Now provide the response and nothing else.
If you know that the functor $T=\operatorname{Hom}_R(-,M):R\mathtt{Mod}^{op}\to \mathtt{Ab}$ preserves limits, then the result follows. Indeed, the direct sum $\bigoplus M_i$ is just the coproduct of the $M_i$ in $R\mathtt{Mod}$ and thus the product of the $M_i$ in $R\mathtt{Mod}^{op}$ , and so since $T$ preserves limits the natural map $T(\bigoplus M_i)\to \prod T(M_i)$ is an isomorphism, and this exactly gives the statement of the Lemma. However, the justification given in the proof of the Lemma is totally wrong. Additive functors do not always preserve limits, and so you cannot use additivity of $T$ to deduce that it preserves limits. It turns out that $T$ does preserve limits, but you must prove this by other means (and given that the Lemma is just a special case of $T$ preserving limits, this doesn't actually make proving the Lemma any easier).
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3172811', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/15847/']}
jdg_75046
stackexchange
llm_judgeable_groundtruth_similarity
50599602
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When using CloudFormation templates, I find the "Custom Resource" feature, with its Lambda backing function implementation, very useful to handle all kinds of tasks that CloudFormation does not provide good support for. Usually, I use custom resources to setup things during stack creation (such as looking up AMI names) or clean up things during deletion (such as removing objects from S3 or Route53 that would block deletion) - and this works great. But when I try to actually use a "custom resource" to manage an actual custom resource, that has to be created during stack creation, deleted during stack deletion, and - this is where the problem lies - sometimes updated with new values during a stack update, the CloudFormation integration behaves unexpectedly and causes the custom resource to fail. The problem seems to be that during a stack update where one of the custom resource properties has changed, during the stack's UPDATE_IN_PROGRESS stage, CloudFormation sends an update event to the backing Lambda function, with all values set correctly and a copy of the old values sent as well. But after the update completes, CloudFormation starts the UPDATE_COMPLETE_CLEANUP_IN_PROGRESS stage and sends the backing Lambda function a delete event ( RequestType set to Delete ). When that happens, the backing lambda function assumes the stack is being deleted and removes the custom resource. The result is that after an update the custom resource is gone. I've looked at the request data in the logs, and the "cleanup delete" looks identical to a real "delete" event: Cleanup Delete: {RequestType: 'Delete',ServiceToken: 'arn:aws:lambda:us-east-2:1234567890:function:stackname-resname-J0LWT56QSPIA',ResponseURL: 'https://cloudformation-custom-resource-response-useast2.s3.us-east-2.amazonaws.com/arn%3Aaws%3Acloudformation%3Aus-east-2%3A1234567890%3Astack/stackname/3cc80cf0-5415-11e8-b6dc-503f3157b0d1%7Cresnmae%7C15521ba8-1a3c-4594-9ea9-18513efb6e8d?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20180511T140259Z&X-Amz-SignedHeaders=host&X-Amz-Expires=7199&X-Amz-Credential=AKISOMEAWSKEYID%2Fus-east-2%2Fs3%2Faws4_request&X-Amz-Signature=3abc68e1f8df46a711a2f6084debaf2a16bd0acf7f58837b9d02c805975df91b',StackId: 'arn:aws:cloudformation:us-east-2:1234567890:stack/stackname/3cc80cf0-5415-11e8-b6dc-503f3157b0d1',RequestId: '15521ba8-1a3c-4594-9ea9-18513efb6e8d',LogicalResourceId: 'resname',PhysicalResourceId: '2018/05/11/[$LATEST]28bad2681fb84c0bbf80990e1decbd97',ResourceType: 'Custom::Resource',ResourceProperties: { ServiceToken: 'arn:aws:lambda:us-east-2:1234567890:function:stackname-resname-J0LWT56QSPIA', VpcId: 'vpc-35512e5d', SomeValue: '4'} } Real Delete: {RequestType: 'Delete',ServiceToken: 'arn:aws:lambda:us-east-2:1234567890:function:stackname-resname-J0LWT56QSPIA',ResponseURL: 'https://cloudformation-custom-resource-response-useast2.s3.us-east-2.amazonaws.com/arn%3Aaws%3Acloudformation%3Aus-east-2%3A1234567890%3Astack/stackname/3cc80cf0-5415-11e8-b6dc-503f3157b0d1%7Cresname%7C6166ff92-009d-47ac-ac2f-c5be2c1a7ab2?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20180524T154453Z&X-Amz-SignedHeaders=host&X-Amz-Expires=7200&X-Amz-Credential=AKISOMEAWSKEYID%2F20180524%2Fus-east-2%2Fs3%2Faws4_request&X-Amz-Signature=29ca1d0dbdbe9246f7f82c1782726653b2aac8cd997714479ab5a080bab03cac',StackId: 
'arn:aws:cloudformation:us-east-2:123456780:stack/stackname/3cc80cf0-5415-11e8-b6dc-503f3157b0d1',RequestId: '6166ff92-009d-47ac-ac2f-c5be2c1a7ab2',LogicalResourceId: 'resname',PhysicalResourceId: '2018/05/11/[$LATEST]c9494122976b4ef3a4102628fafbd1ec',ResourceType: 'Custom::Resource',ResourceProperties: { ServiceToken: 'arn:aws:lambda:us-east-2:1234567890:function:stackname-resname-J0LWT56QSPIA', VpcId: 'vpc-35512e5d', SomeValue: '0'}} The only interesting request field that I can see is the physical resource ID is different, but I don't know what to correlate that to, to detect if it is the real delete or not. Now provide the response and nothing else.
The problem seems to be the sample implementation of the sendResponse() function that is used to send the custom resource completion event back to CloudFormation. This method is responsible for setting the custom resource's physical resource ID. As far as I understand, this value represents the globally unique identifier of the "external resource" that is managed by the Lambda function backing the CloudFormation custom resource. As can be seen in CloudFormation's "Lambda-backed Custom Resource" sample code, as well as in the cfn-response NPM module's send() and CloudFormation's built-in cfn-response module, this method has a default behavior for calculating the physical resource ID, if it is not provided as a 5th parameter, and it uses the CloudWatch Logs log stream that is handling logging for the request being processed:

var responseBody = JSON.stringify({
    ...
    PhysicalResourceId: context.logStreamName,
    ...
})

Because CloudFormation (or the AWS Lambda runtime?) occasionally changes the log stream to a new one, the physical resource ID generated by sendResponse() changes unexpectedly from time to time, and confuses CloudFormation. As I understand it, CloudFormation-managed entities sometimes need to be replaced during an update (a good example is RDS::DBInstance, which needs replacing for almost any change). CloudFormation policy is that if a resource needs replacing, the new resource is created during the "update stage" and the old resource is deleted during the "cleanup stage". So using the default sendResponse() physical resource ID calculation, the process looks like this:

1. A stack is created.
2. A new log stream is created to handle the custom resource logging.
3. The backing Lambda function is called to create the resource, and the default behavior sets its resource ID to the log stream ID.
4. Some time passes.
5. The stack gets updated with new parameters for the custom resource.
6. A new log stream is created to handle the custom resource logging, with a new ID.
7. The backing Lambda function is called to update the resource, and the default behavior sets a new resource ID from the new log stream ID.
8. CloudFormation concludes that a new resource was created to replace the old resource, and according to the policy it should delete the old resource during the "cleanup stage".
9. CloudFormation reaches the "cleanup stage" and sends a delete request with the old physical resource ID.

The solution, at least in my case where I never "replace the external resource", is to fabricate a unique identifier for the managed resource, provide it as the 5th parameter to the send-response routine, and then stick to it: keep sending the same physical resource ID received in the update request, in the update response. CloudFormation will then never send a delete request during the "cleanup stage". My implementation (in JavaScript) looks something like this:

var resID = event.PhysicalResourceId || uuid();
...
sendResponse(event, context, status, resData, resID);

Another alternative - which would probably only make sense if you actually need to replace the external resource and want to adhere to the CloudFormation model of removing the old resource during cleanup - is to use the actual external resource ID as the physical resource ID, and when receiving a delete request, to use the provided physical resource ID to delete the old external resource. That is probably what the CloudFormation designers had in mind in the first place, but their default sample implementation causes a lot of confusion - probably because the sample implementation doesn't manage a real resource and has no update functionality. There is also zero documentation in CloudFormation explaining the design and reasoning.
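As a postscript, here is what that stable-ID rule can look like as a Python handler. This is a hypothetical sketch, not the answerer's actual code; it uses AWS's cfnresponse helper (available to Lambda functions defined inline in a template), and the resource logic itself is stubbed out:

import uuid
import cfnresponse  # helper module AWS provides for inline CloudFormation Lambdas

def handler(event, context):
    # Reuse the ID CloudFormation already knows; mint a new one only on Create.
    phys_id = event.get("PhysicalResourceId") or str(uuid.uuid4())
    try:
        if event["RequestType"] == "Create":
            pass  # create the external resource here
        elif event["RequestType"] == "Update":
            pass  # update it in place; keep phys_id unchanged
        elif event["RequestType"] == "Delete":
            pass  # delete the external resource
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {}, phys_id)
    except Exception:
        # Echo the same ID on failure too, so CloudFormation's bookkeeping stays sane.
        cfnresponse.send(event, context, cfnresponse.FAILED, {}, phys_id)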
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/50599602', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/53538/']}
jdg_75047
stackexchange
llm_judgeable_groundtruth_similarity
8690245
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Here is some code I made :)

@echo off
set source="R:\Contracts\"
set destination="R:\Contracts\Sites\"
ROBOCOPY %source% %destination% *.srt *.pdf *.mp4 *.jpg /COPYALL /R:0 /S
for /r %source in (*) do @copy "%destination" .

R:\Contracts\ is full of folders which have files in them. I want to copy all to R:\Contracts\Sites\ and flatten the folder structure. Everything copies well but also the folder structure. Thank you Now provide the response and nothing else.
No single command will flatten the hierarchy for you; you will have to use multiple commands. It can be done simply by using FOR /R to walk the hierarchy, coupled with your copy/move command of choice (move, copy, xcopy, robocopy). Because your destination is within the source hierarchy, you need an IF to prevent the destination from being a source. Before proceeding you should stop and think about what happens if the same file name appears in multiple source folders. You can only have one version in your destination folder. Can you guarantee no duplicate names exist? If not, which file should be kept? How can you structure the command to keep the file you want? This complication is probably why no command was ever written to simply flatten a hierarchy. Here is your ROBOCOPY command integrated with the FOR /R solution.

@echo off
set source="R:\Contracts"
set destination="R:\Contracts\Sites"

::Not sure if this is needed
::It guarantees you have a canonical path (standard form)
for %%F in (%destination%) do set destination="%%~fF"

for /r %source% %%F in (.) do if "%%~fF" neq %destination% ROBOCOPY "%%F" %destination% *.srt *.pdf *.mp4 *.jpg /COPYALL /R:0
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/8690245', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1114491/']}
jdg_75048
stackexchange
llm_judgeable_groundtruth_similarity
56757261
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: As a part of my job, I need to check this page for specific documents regularly. What I found was that I could use pandas' read_html method to successfully read the table into a dataframe (which is handy, as I can easily query specific documents by keywords). The problem I have now is that this method cannot parse the links that I need, and saves plain text instead (specifically I'm referring to the second column, which has numbers like '1682/0/15-19'). The code I came up with was very simple:

import pandas as pd
df = pd.read_html('http://www.vru.gov.ua/act_list')[0]

Which gives me a dataframe with all the info I need except for the links. Is it possible to somehow get links instead of plain text, and if so, how could I do it? I know that had I used the Requests and BeautifulSoup libraries, it would have been possible to get href links, but I don't know the BeautifulSoup library well enough to do it. Any tips, or should I just learn BeautifulSoup? Now provide the response and nothing else.
You can find tutorials with a quick google search. You'll essentially be iterating through the tags to compile a list, then turning the list of data into a dataframe. You could also just pull the table as you did with read_html(), but you'll still need to go back and get the html links (see option 2 below):

import pandas as pd
import requests
from bs4 import BeautifulSoup

url = 'http://www.vru.gov.ua/act_list'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')

table = soup.find('table')

records = []
columns = []
for tr in table.findAll("tr"):
    ths = tr.findAll("th")
    if ths != []:
        for each in ths:
            columns.append(each.text)
    else:
        trs = tr.findAll("td")
        record = []
        for each in trs:
            try:
                link = each.find('a')['href']
                text = each.text
                record.append(link)
                record.append(text)
            except:
                text = each.text
                record.append(text)
        records.append(record)

columns.insert(1, 'Link')
df = pd.DataFrame(data=records, columns=columns)

Option 2:

import pandas as pd
import requests
from bs4 import BeautifulSoup

url = 'http://www.vru.gov.ua/act_list'
df = pd.read_html(url)[0]

response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')

table = soup.find('table')

links = []
for tr in table.findAll("tr"):
    trs = tr.findAll("td")
    for each in trs:
        try:
            link = each.find('a')['href']
            links.append(link)
        except:
            pass

df['Link'] = links

Output:

print (df.to_string())

№  Link  Номер  Вид документу  Дата прийняття  Назва документу  Примiтки
0 1 http://www.vru.gov.ua/act/18641 1682/0/15-19 Рішення 20-06-2019 Про звільнення Бурана О.М. з посади судді Мали...
1 2 http://www.vru.gov.ua/act/18643 1684/0/15-19 Ухвала 20-06-2019 Про задоволення заяви члена Вищої ради правосу...
2 3 http://www.vru.gov.ua/act/18644 1685/0/15-19 Ухвала 20-06-2019 Про відмову у задоволенні заяви адвоката Рохма...
3 4 http://www.vru.gov.ua/act/18649 1690/0/15-19 Ухвала 20-06-2019 Про продовження строку розгляду скарги судді Х...
4 5 http://www.vru.gov.ua/act/18650 1691/0/15-19 Рішення 20-06-2019 Про нагородження заохочувальною відзнакою Вищо...
5 6 http://www.vru.gov.ua/act/18651 1692/0/15-19 Рішення 20-06-2019 Про інформацію робочої групи Вищої ради правос...
6 7 http://www.vru.gov.ua/act/18619 1660/3дп/15-19 Ухвала 19-06-2019 Про відкриття дисциплінарної справи стосовно с...
7 8 http://www.vru.gov.ua/act/18620 1661/3дп/15-19 Ухвала 19-06-2019 Про відмову у відкритті дисциплінарних справ з...
8 9 http://www.vru.gov.ua/act/18624 1665/3дп/15-19 Ухвала 19-06-2019 Прo задоволення заяви члена Третьої Дисципліна...
9 10 http://www.vru.gov.ua/act/18626 1667/3дп/15-19 Ухвала 19-06-2019 Прo задоволення заяви члена Третьої Дисципліна...
10 11 http://www.vru.gov.ua/act/18627 1668/3дп/15-19 Ухвала 19-06-2019 Про відмову у відкритті дисциплінарних справ з...
11 12 http://www.vru.gov.ua/act/18628 1669/3дп/15-19 Ухвала 19-06-2019 Про відмову у відкритті дисциплінарних справ з...
12 13 http://www.vru.gov.ua/act/18635 1676/2дп/15-19 Ухвала 19-06-2019 Про відкриття дисциплінарної справи стосовно с...
13 14 http://www.vru.gov.ua/act/18638 1679/2дп/15-19 Ухвала 19-06-2019 Про відмову у відкритті дисциплінарної справи ...
14 15 http://www.vru.gov.ua/act/18639 1680/2дп/15-19 Ухвала 19-06-2019 Про відмову у відкритті дисциплінарних справ з...
15 16 http://www.vru.gov.ua/act/18640 1681/2дп/15-19 Ухвала 19-06-2019 Про відмову у відкритті дисциплінарних справ з...
16 17 http://www.vru.gov.ua/act/18607 1648/0/15-19 Рішення 18-06-2019 Про звільнення Лучко О.О. з посади судді Івано...
17 18 http://www.vru.gov.ua/act/18608 1649/0/15-19 Ухвала 18-06-2019 Про залишення без розгляду заяви Лазаренко В.В...
18 19 http://www.vru.gov.ua/act/18609 1650/0/15-19 Ухвала 18-06-2019 Про залишення без розгляду подання Третьої Дис...
19 20 http://www.vru.gov.ua/act/18610 1651/0/15-19 Ухвала 18-06-2019 Про залишення без розгляду подання Другої Дисц...
20 21 http://www.vru.gov.ua/act/18615 1656/0/15-19 Рішення 18-06-2019 Про затвердження висновків членів Вищої ради п...
21 22 http://www.vru.gov.ua/act/18586 1627/0/15-19 Рішення 13-06-2019 Про звільнення Римлянської Г.О. ...
22 23 http://www.vru.gov.ua/act/18589 1630/0/15-19 Рішення 13-06-2019 Про затвердження висновку члена Вищої ради пра...
23 24 http://www.vru.gov.ua/act/18590 1631/0/15-19 Рішення 13-06-2019 Про призначення Максимішина С.Т.
24 25 http://www.vru.gov.ua/act/18591 1632/0/15-19 Рішення 13-06-2019 Про призначення Гавришука О.М.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/56757261', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10140310/']}
jdg_75049
stackexchange
llm_judgeable_groundtruth_similarity
32756757
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I was really surprised that this question doesn't really have an answer. For example, I have a foreach loop iterating through a queue (I am using the .NET version). What exactly happens to the queue in the foreach? Does it simply iterate like a standard enumerator? Or does it use the Dequeue() functionality? For clarity:

Queue<int> q = new Queue<int>();
foreach(int i in q)
{
    // what happens in this loop?
}

Now provide the response and nothing else.
Does it simply iterate like a standard enumerator? Or does it use the Dequeue()? Instead of dequeuing, it looks up elements by index and bucket in the queue's internal array, using the Queue<T>.GetElement method. You can see this in the implementation of the Enumerator class returned by Queue<T>.GetEnumerator():

public bool MoveNext() {
    if (_version != _q._version)
        ThrowHelper.ThrowInvalidOperationException(ExceptionResource.InvalidOperation_EnumFailedVersion);
    if (_index == -2)
        return false;
    _index++;
    if (_index == _q._size) {
        _index = -2;
        _currentElement = default(T);
        return false;
    }
    _currentElement = _q.GetElement(_index);
    return true;
}

Where GetElement is simply:

internal T GetElement(int i)
{
    return _array[(_head + i) % _array.Length];
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/32756757', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3091461/']}
jdg_75050
stackexchange
llm_judgeable_groundtruth_similarity
29877692
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I installed Android Studio freshly and I began coding an activity that extends ActionBarActivity, and it showed that it was deprecated. So how else do I set up an action bar for my activity? Also, the Getting Started training uses ActionBarActivity without making any reference to it having been deprecated. Now provide the response and nothing else.
ActionBar is deprecated ever since Toolbar was introduced. Toolbar can be seen as a 'superset' of any action bar, so the 'old' ActionBar is now just one example of a Toolbar. If you want similar functionality, but without deprecation warnings, do the following:

Toolbar toolbar = (Toolbar) findViewById(R.id.toolbar);
if (toolbar != null) {
    toolbar.setTitle(R.string.app_name);
    setSupportActionBar(toolbar);
}

You need to define the Toolbar in your layout xml:

<android.support.v7.widget.Toolbar
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/toolbar"
    android:minHeight="?attr/actionBarSize"
    android:background="?attr/colorPrimary"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:theme="@style/ThemeOverlay.AppCompat.Dark.ActionBar"
    android:popupTheme="@style/ThemeOverlay.AppCompat.Light">
</android.support.v7.widget.Toolbar>

With this new functionality you can create your own custom ActionBar and let Android do the heavy lifting: just create your own custom view that extends from Toolbar. Also, you should use AppCompatActivity instead of ActionBarActivity; it was introduced in the latest version of the appcompat library. So don't forget to update gradle:

compile 'com.android.support:appcompat-v7:22.1.1'
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/29877692', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4803261/']}
jdg_75051
stackexchange
llm_judgeable_groundtruth_similarity
38774334
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Does the UIViewController simply manage what UIViews are displayed? Or is it also used to control the contents within them? For example, right now I currently have code in my UIViewController that switches from one view to the next based on a button click which looks like this: class TTViewController: UIViewController { override func viewDidLoad() { super.viewDidLoad() super.view.addBackground() } //other code @IBAction func OpenUserSelectView(sender: AnyObject) { let bundle = NSBundle(forClass: self.dynamicType) let nib = UINib(nibName: "PlayerSelectView", bundle: bundle) let view = nib.instantiateWithOwner(self, options: nil)[0] as! UIView super.view.addSubview(view) } } But I also have UIViews that do things like create buttons, have actions for those buttons, as well as other things which looks like this: class PlayerSelectView :UIScrollView { let viewWidth = CGFloat(300) let viewHeight = CGFloat(500) var addPlayerButton :TTBlueButton? var cancel :TTBlueButton? required init?(coder aDecoder: NSCoder) { super.init(coder: aDecoder) let screenSize: CGRect = UIScreen.mainScreen().bounds self.frame = CGRectMake((screenSize.width/2) - (viewWidth/2), (screenSize.height/2) - (viewHeight/2), viewWidth, viewHeight) styleView() addAddPlayerButton() self.contentSize = CGSizeMake(viewWidth, viewHeight-100) self.userInteractionEnabled = true } func styleView() { super.layer.cornerRadius = 8.0 super.layer.shadowColor = UIColor.blackColor().CGColor super.layer.shadowOpacity = 0.2 super.layer.shadowRadius = 4.0 super.layer.shadowOffset = CGSizeMake(0.0, 5.0) } func addAddPlayerButton(){ addPlayerButton = TTBlueButton(frame: CGRectMake(10, 10, viewWidth-20, 40)) addPlayerButton!.setTitle("Add New Player", forState: UIControlState.Normal) addPlayerButton!.layer.cornerRadius = 5.0 addPlayerButton!.userInteractionEnabled = true addPlayerButton!.addTarget(self, action: #selector(PlayerSelectView.addPlayer(_:)), forControlEvents: UIControlEvents.TouchUpInside) self.addSubview(addPlayerButton!) } func addCancelButton(){ addPlayerButton = TTBlueButton(frame: CGRectMake(10, 10, viewWidth-20, 40)) addPlayerButton!.setTitle("Cancel", forState: UIControlState.Normal) addPlayerButton!.layer.cornerRadius = 5.0 addPlayerButton!.userInteractionEnabled = true addPlayerButton!.addTarget(self, action: #selector(PlayerSelectView.addPlayer(_:)), forControlEvents: UIControlEvents.TouchUpInside) self.addSubview(addPlayerButton!) } @IBAction func addPlayer(sender: AnyObject) { if addPlayerButton?.currentTitle != "Create Player" { let addPlayerTextField = TextField(frame: CGRect(x: 10, y: 10, width: viewWidth - 20, height: 40)) let border = CALayer() let width = CGFloat(2.0) addPlayerButton!.setTitle("Create Player", forState: UIControlState.Normal) border.borderColor = UIColor.darkGrayColor().CGColor border.frame = CGRect(x: 0, y: 0, width: addPlayerTextField.frame.size.width, height: addPlayerTextField.frame.size.height) border.borderWidth = width border.cornerRadius = 5 addPlayerTextField.layer.addSublayer(border) addPlayerTextField.layer.masksToBounds = true self.addSubview(addPlayerTextField) addPlayerButton?.frame.origin = CGPoint(x: 10, y: 60) } else { } }} Something about the way I've done this feels wrong so I was wondering if my suspicions were correct or if this is an acceptable way of doing this. Now provide the response and nothing else.
You can create a higher-order component like this, and pass your child components through it. HOC:

// code
export default function(ComposedClass) {
  class AutoLogout extends React.Component {
    constructor(props) {
      super(props);
      this.state = {
        warningTime: 1000 * 60 * 10,
        signoutTime: 1000 * 60 * 15,
      };
    }

    componentDidMount() {
      this.events = [
        'load',
        'mousemove',
        'mousedown',
        'click',
        'scroll',
        'keypress'
      ];
      for (var i in this.events) {
        window.addEventListener(this.events[i], this.resetTimeout);
      }
      this.setTimeout();
    }

    clearTimeoutFunc = () => {
      if (this.warnTimeout) clearTimeout(this.warnTimeout);
      if (this.logoutTimeout) clearTimeout(this.logoutTimeout);
    };

    setTimeout = () => {
      this.warnTimeout = setTimeout(this.warn, this.state.warningTime);
      this.logoutTimeout = setTimeout(this.logout, this.state.signoutTime);
    };

    resetTimeout = () => {
      this.clearTimeoutFunc();
      this.setTimeout();
    };

    warn = () => {
      window.alert("You will be logged out automatically in 1 minute");
      console.log('You will be logged out automatically in 1 minute.');
    };

    logout = () => {
      // Send a logout request to the API
      console.log('Sending a logout request to the API...');
      this.destroy();
    };

    destroy = () => {
      // clear the session (browserHistory comes from react-router)
      browserHistory.push('/');
      window.location.assign('/');
    };

    render() {
      return (
        <div>
          <ComposedClass {...this.props} />
        </div>
      );
    }
  }
  // The original snippet stopped here; the HOC must return the wrapped class.
  return AutoLogout;
}

You can wrap this HOC around every component in which you want to warn and log out the user after inactivity. In the routing file:

<Route path="/test" component={HOC(Component)} />

In the above code, Component is the page where you want to add this functionality.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/38774334', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1017320/']}
jdg_75052
stackexchange
llm_judgeable_groundtruth_similarity
330505
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: If I have a Hamiltonian given by $$ H = a^\dagger a^\dagger + a a$$ where $[a,a^\dagger] = 1$, can I make sense of it by generalizing the notion of vacuum? If not, what sort of troubles would I run into? Has there been any instance where such Hamiltonians have been considered? Now provide the response and nothing else.
No, the ground state is not well-defined because the energy is unbounded below. To see this, switch back to the variables $x$ and $p$ using $a \sim x + ip$ to find$$H \sim p^2 - x^2.$$This is the Hamiltonian for a particle in a potential that just pushes it further away from the origin, so you can make the energy as negative as you want, and there's no ground state to expand about. Alternatively, if you flip the signs and get $H \sim x^2 - p^2$, you get a negative mass particle, and again you can get arbitrarily negative energy by making it faster and faster. This problem can't be fixed by applying a Bogoliubov transformation. These transformations diagonalize Hamiltonians of the form$$H = a^\dagger a + \alpha (aa + a^\dagger a^\dagger).$$However, for sufficiently large $|\alpha|$, the Bogoliubov transformation fails to exist, and it certainly fails to exist here, where $\alpha$ is infinite. This failure directly corresponds to the fact that a ground state does not exist, so you can't define a new ground state $|\Omega \rangle$ and excitations about it.
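A quick way to see where the obstruction kicks in (a sketch, using the convention $a = (x + ip)/\sqrt{2}$, so that $a^\dagger a = (x^2 + p^2 - 1)/2$ and $aa + a^\dagger a^\dagger = x^2 - p^2$):$$H = a^\dagger a + \alpha\,(aa + a^\dagger a^\dagger) = \left(\tfrac{1}{2}+\alpha\right)x^2 + \left(\tfrac{1}{2}-\alpha\right)p^2 - \tfrac{1}{2}.$$This is a genuine harmonic oscillator only when both coefficients are positive, i.e. $|\alpha| < 1/2$, in which case the Bogoliubov transformation yields the frequency $\omega = \sqrt{1 - 4\alpha^2}$. The Hamiltonian in the question keeps only the $aa + a^\dagger a^\dagger$ piece, so it sits far outside this window.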
{}
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/330505', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/8449/']}
jdg_75053
stackexchange
llm_judgeable_groundtruth_similarity
336836
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a circuit that has an audio connector with the following diagram. The audio connector is one that I am not familiar with. What I gather is that the rectangle on the left is the sleeve (entry point), the tip is pin 2, the ring is pin 3, and the sleeve is pin 1. Is this correct, or did I misunderstand how to read this symbol? Added: Link to product page Datasheet Now provide the response and nothing else.
TL;DR; You are correct in your determination of how the contacts will make connections to a TRS jack (see diagram below). Based on the datasheet, the jack is indeed designed for 3-pole TRS jacks. However on examination we can see that the shell of the connector is plastic. We can also deduce from the measurements that the contact for pin 1 is not technically the sleeve contact. Instead pin 1 is equivalent to the second ring contact on a 4-pole TRRS socket. This explains why in the connector symbol they show pin 1 connected as a third arrow as opposed to a wire joining to the sleeve (the rectangle). From the diagram below of a TRRS jack, each of the arrows on the symbol corresponds to one of the Tip, Ring 1, and Ring 2. The box corresponds to the sleeve. If we examine the difference between the various types of jack (specifically TRS and TRRS ones), it can be seen that the connector will still work fine for a TRS jack. This is because the contact at the second ring position will in fact contact the sleeve on the TRS jack. Note: This type of connector could be used fine in audio for headphones that use a TRS jack - i.e. just headphones, and no microphone. The connector could also be used with TRRS jacks using the AHJ or CTIA standard where the ground is located at the second ring. You could not use this connector for TRRS jacks wired for the OMTP standard where the ground connection is the sleeve, because pin 1 would not make contact.
{}
{'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/336836', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/9409/']}
jdg_75054
stackexchange
llm_judgeable_groundtruth_similarity
18495474
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to import a Gradle project into IntelliJ, and when I get to the Gradle Home textbox, it is not automatically populated, nor will typing in the path of Gradle Home result in a valid location - I have the GRADLE_USER_HOME environment variable set (to what I think is!) the correct path, and I have been able to successfully import this same project into Eclipse. Any suggestions? Now provide the response and nothing else.
You can write a simple gradle script to print your GRADLE_HOME directory:

task getHomeDir {
    doLast {
        println gradle.gradleHomeDir
    }
}

and name it build.gradle. Then run it with:

gradle getHomeDir

If you installed with homebrew, use brew info gradle to find the base path (i.e. /usr/local/Cellar/gradle/1.10/), and just append libexec. The same task in Kotlin, in case you use build.gradle.kts:

tasks.register("getHomeDir") {
    println("Gradle home dir: ${gradle.gradleHomeDir}")
}
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/18495474', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2699970/']}
jdg_75055
stackexchange
llm_judgeable_groundtruth_similarity
2855741
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to serialize using boost property tree write_json, and it saves everything as strings. It's not that the data are wrong, but I need to cast them explicitly every time, and I want to use them somewhere else (like in Python or another C++ JSON (non-boost) library). Here is some sample code and what I get depending on locale:

boost::property_tree::ptree root, arr, elem1, elem2;
elem1.put<int>("key0", 0);
elem1.put<bool>("key1", true);
elem2.put<float>("key2", 2.2f);
elem2.put<double>("key3", 3.3);
arr.push_back( std::make_pair("", elem1) );
arr.push_back( std::make_pair("", elem2) );
root.put_child("path1.path2", arr);
std::stringstream ss;
write_json(ss, root);
std::string my_string_to_send_somewhere_else = ss.str();

and my_string_to_send_somewhere_else is something like this:

{
    "path1" : {
        "path2" : [
            { "key0" : "0", "key1" : "true" },
            { "key2" : "2.2", "key3" : "3.3" }
        ]
    }
}

Is there any way to save them as the values, like "key1" : true or "key2" : 2.2 ? Now provide the response and nothing else.
Ok, I've solved it like this (of course it won't suit everybody, as it is a bit of a hack that needs further work). I wrote my own write_json function (simply copied the files json_parser.hpp and json_parser_write.hpp to my project) and modified the following lines in json_parser_write.hpp:

commented out line 37 - escaping the quote '"'

changed line 76 - so that it doesn't add quotes anymore:

stream << Ch('"') << data << Ch('"'); ==> stream << data;

Then values will be saved properly, except for strings, so I wrote a custom translator for them:

template <typename T>
struct my_id_translator
{
    typedef T internal_type;
    typedef T external_type;

    boost::optional<T> get_value(const T &v) { return v.substr(1, v.size() - 2); }
    boost::optional<T> put_value(const T &v) { return '"' + v + '"'; }
};

and simply saved strings using:

elem2.put<std::string>("key2", "asdf", my_id_translator<std::string>());

Complete program:

#include <iostream>
#include <string>
#include <sstream>

#include <boost/property_tree/ptree.hpp>
#include "property_tree/json_parser.hpp" // copied the headers

template <typename T>
struct my_id_translator
{
    typedef T internal_type;
    typedef T external_type;

    boost::optional<T> get_value(const T &v) { return v.substr(1, v.size() - 2); }
    boost::optional<T> put_value(const T &v) { return '"' + v + '"'; }
};

int main(int, char *[])
{
    using namespace std;
    using boost::property_tree::ptree;
    using boost::property_tree::basic_ptree;
    try
    {
        ptree root, arr, elem2;
        basic_ptree<std::string, std::string> elem1;
        elem1.put<int>("int", 10);
        elem1.put<bool>("bool", true);
        elem2.put<double>("double", 2.2);
        elem2.put<std::string>("string", "some string", my_id_translator<std::string>());

        arr.push_back( std::make_pair("", elem1) );
        arr.push_back( std::make_pair("", elem2) );
        root.put_child("path1.path2", arr);

        std::stringstream ss;
        write_json(ss, root);
        std::string my_string_to_send_somewhere_else = ss.str();
        cout << my_string_to_send_somewhere_else << endl;
    }
    catch (std::exception & e)
    {
        cout << e.what();
    }
    return 0;
}

result :)

{
    "path1": {
        "path2": [
            {
                "int": 10,
                "bool": true
            },
            {
                "double": 2.2,
                "string": "some string"
            }
        ]
    }
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2855741', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/212149/']}
jdg_75056
stackexchange
llm_judgeable_groundtruth_similarity
18144820
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have this code below where I am inserting a new integer into a sorted LinkedList of ints, but I do not think it is the "correct" way of doing things, as I know there are singly linked lists with a pointer to the next value and doubly linked lists with pointers to the next and previous values. I tried to use Nodes to implement the below case, but Java is importing org.w3c.dom.Node (document object model), so I got stuck. Insertion cases:

1. Insert into an empty list.
2. If the value to be inserted is less than everything, insert at the beginning.
3. If the value to be inserted is greater than everything, insert at the end.
4. The value could be in between, if it is less than/greater than certain values in the list.

import java.util.*;

public class MainLinkedList {
    public static void main(String[] args) {
        LinkedList<Integer> llist = new LinkedList<Integer>();
        llist.add(10);
        llist.add(30);
        llist.add(50);
        llist.add(60);
        llist.add(90);
        llist.add(1000);
        System.out.println("Old LinkedList " + llist);

        //What if you want to insert 70 in a sorted LinkedList
        LinkedList<Integer> newllist = insertSortedLL(llist, 70);
        System.out.println("New LinkedList " + newllist);
    }

    public static LinkedList<Integer> insertSortedLL(LinkedList<Integer> llist, int value) {
        llist.add(value);
        Collections.sort(llist);
        return llist;
    }
}

Now provide the response and nothing else.
This might serve your purpose perfectly. Use this code:

import java.util.*;

public class MainLinkedList {

    private static LinkedList<Integer> llist;

    public static void main(String[] args) {
        llist = new LinkedList<Integer>();
        addValue(60);
        addValue(30);
        addValue(10);
        addValue(-5);
        addValue(1000);
        addValue(50);
        addValue(60);
        addValue(90);
        addValue(1000);
        addValue(0);
        addValue(100);
        addValue(-1000);
        System.out.println("Linked List is: " + llist);
    }

    private static void addValue(int val) {
        if (llist.size() == 0) {
            llist.add(val);
        } else if (llist.get(0) > val) {
            llist.add(0, val);
        } else if (llist.get(llist.size() - 1) < val) {
            llist.add(llist.size(), val);
        } else {
            int i = 0;
            while (llist.get(i) < val) {
                i++;
            }
            llist.add(i, val);
        }
    }
}

This one method will manage insertion into the list in sorted order, without using Collections.sort(list).
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/18144820', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1796228/']}
jdg_75057
stackexchange
llm_judgeable_groundtruth_similarity
13899677
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have enabled spelling in vim for .txt files, but would like to have set nospell when viewing help files, because my standard language setting is German, so most help text is marked as misspelled. I have to issue :set nospell whenever I consult help. Any ideas? Now provide the response and nothing else.
You could create vim autocmd's using the documentation directory as part of the file pattern. For example:

autocmd BufEnter /usr/share/vim/vim73/doc/*.txt set nospell
autocmd BufLeave /usr/share/vim/vim73/doc/*.txt set spell

The first entry should turn off spell check when entering the buffer of a help file. The second should turn it back on when you leave the buffer. For more help with auto-commands check out :h autocmd .
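If you'd rather not hard-code the documentation path (a variant not in the original answer): help buffers are given the help filetype, so a single FileType autocommand achieves the same thing, with setlocal keeping the change scoped to that buffer:

autocmd FileType help setlocal nospell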
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/13899677', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1516636/']}
jdg_75058
stackexchange
llm_judgeable_groundtruth_similarity
39985048
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have some basic Kafka Streaming code that reads records from one topic, does some processing, and outputs records to another topic. How does Kafka streaming handle concurrency? Is everything run in a single thread? I don't see this mentioned in the documentation. If it's single threaded, I would like options for multi-threaded processing to handle high volumes of data. If it's multi-threaded, I need to understand how this works and how to handle resources, like SQL database connections should be shared in different processing threads. Is Kafka's built-in streaming API not recommended for high volume scenarios relative to other options (Spark, Akka, Samza, Storm, etc)? Now provide the response and nothing else.
Update Oct 2020: I wrote a four-part blog series on Kafka fundamentals that I'd recommend to read for questions like these. For this question in particular, take a look at part 3 on processing fundamentals . To your question: How does Kafka streaming handle concurrency? Is everything run in a single thread? I don't see this mentioned in the documentation. This is documented in detail at http://docs.confluent.io/current/streams/architecture.html#parallelism-model . I don't want to copy-paste this here verbatim, but I want to highlight that IMHO the key element to understand is that of partitions (cf. Kafka's topic partitions, which in Kafka Streams is generalized to "stream partitions" as not all data streams that are being processed will be going through Kafka) because a partition is currently what determines the parallelism of both Kafka (the broker/server side) and of stream processing applications that use the Kafka Streams API (the client side). If it's single threaded, I would like options for multi-threaded processing to handle high volumes of data. Processing a partition will always be done by a single "thread" only, which ensures you are not running into concurrency issues. But, fortunately, ... If it's multi-threaded, I need to understand how this works and how to handle resources, like SQL database connections should be shared in different processing threads. ...because Kafka allows a topic to have many partitions, you still get parallel processing. For example, if a topic has 100 partitions, then up to 100 stream tasks (or, somewhat over-simplified: up to 100 different machines each running an instance of your application) may process that topic in parallel. Again, every stream task would get exclusive access to 1 partition, which it would then process. Is Kafka's built-in streaming API not recommended for high volume scenarios relative to other options (Spark, Akka, Samza, Storm, etc)? Kafka's stream processing engine is definitely recommended and also actually being used in practice for high-volume scenarios. Work on comparative benchmarking is still being done, but in many cases a Kafka Streams based application turns out to be faster. See LINE engineer's blog: Applying Kafka Streams for internal message delivery pipeline for an article by LINE Corp, one of the largest social platforms in Asia (220M+ users), where they describe how they are using Kafka and the Kafka Streams API in production to process millions of events per second.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/39985048', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1767106/']}
jdg_75059
stackexchange
llm_judgeable_groundtruth_similarity
11684132
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using Microsoft Visual Studio 2005 with .NET 2.0. I have a comboBox that I use to select a product. After I select a product I am searching for it in a text file – in the end I want to find the line of that product in the file. However with the following code int lineNo = lineList.IndexOf(lineList.Find(x => x.StartsWith(select))); the compiler gives the error : Invalid expression term '>' Is this a problem with the 2.0 framework version of .NET? Now provide the response and nothing else.
I am using Microsoft Visual Studio 2005 with .NET 2.0. That means you're using C# 2. But you're trying to use a lambda expression ( => ) - a feature introduced in C# 3. It's not the version of the framework that's the problem - you could write the same code in VS2008 or later, still targeting .NET 2.0. It's the version of the language you're using. You can do something similar in C# 2 with an anonymous method though: int lineNo = lineList.IndexOf(lineList.Find(delegate(string x) { return x.StartsWith(select); }));
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11684132', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1557049/']}
jdg_75060
stackexchange
llm_judgeable_groundtruth_similarity
9377
Below is a question asked on the forum linguistics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: My understanding of morphology is that a word is taken and many different words are glued to it. Isn't this true for both agglutinative and polysynthetic languages? Or what is the finer level of distinction? Now provide the response and nothing else.
"A word is taken and many different words are glued to it" — that's wrong for both agglutinative and polysynthetic languages. In agglutinative languages, a string of affixes is "glued" to a root, each affix with its own grammatical meaning, an affix doesn't combine several grammatical meanings, like in Latin 'pueris' (from boys) the affix '-is' means plural + ablative case at the same time. An example of a word of an agglutinative language: Turkish evlerimden : ev-ler-im-den "from my houses" - HOUSE-plural-my-ablative.case In polysynthetic languages there are also many morphemes in a word, so that noun, verb and adverb morphemes can combine in one word in such a way that a whole English sentence is needed to translate such a word: Yupik tuntussuqatarniksaitengqiggtuq : tuntu-ssur-qatar-ni-ksaite-ngqiggte-uq "He had not yet said again that he was going to hunt reindeer." REINDEER-HUNT-future-SAY-negation-AGAIN-third.person.singular.indicative Polysynthetic languages are often agglutinative.
{}
{'log_upvote_score': 4, 'links': ['https://linguistics.stackexchange.com/questions/9377', 'https://linguistics.stackexchange.com', 'https://linguistics.stackexchange.com/users/5434/']}
jdg_75061
stackexchange
llm_judgeable_groundtruth_similarity
387491
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Let $H$ be a real or complex Hilbert space. In the case where $H$ is infinite-dimensional, let us define a half-dimensional subspace as a subspace $W \subset H$ such that both $W$ and $W^\perp$ have infinite dimension. Fix one half-dimensional subspace $W_0$ . The Grassmannian of $H$ is $$\mathrm{Gr}(H, W_0) = \{W \subset H ~|~ W \text{ is half-dimensional}, P_W - P_{W_0} \text{ is Hilbert-Schmidt}\}. $$ Here for $W \subset H$ a subspace, $P_W$ denotes the orthogonal projection onto $W$ . $\mathrm{Gr}(H, W_0)$ can be given the structure of a Hilbert manifold in a natural way (see e.g. the book "Loop Groups" of Pressley and Segal). The space $\mathrm{Gr}(H, W_0)$ has a tautological vector bundle $\tau$ over it, where the fiber is given by $\tau(L) = L$ . Question: Does $\tau$ have a nowhere vanishing section? I believe that in the case that $H$ is finite-dimensional (say of dimension $2n$ ), the answer is no, as one can show that the Euler class of $\tau$ is non-zero. But how would one proceed in the infinite-dimensional case? Now provide the response and nothing else.
Let $X$ be any paracompact space. Then Hilbert vector bundles over $X$ are classified by homotopy classes of maps $[X, BU(\mathcal H)]$ . But when $\mathcal H$ is infinite-dimensional, the group $U(\mathcal H)$ is contractible (this is Kuiper's theorem), and hence every infinite-dimensional Hilbert bundle over a paracompact space is trivializable. Hilbert manifolds modelled on a separable Hilbert space are metrizable. Separable metric spaces are paracompact. As long as your $H$ is a separable Hilbert space, the above argument implies that your bundle is trivializable. There is probably a straightforward extension of this argument in the non-separable case but I didn't think about it. In particular, your bundle has infinitely many linearly independent sections.
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/387491', 'https://mathoverflow.net', 'https://mathoverflow.net/users/16702/']}
jdg_75062
stackexchange
llm_judgeable_groundtruth_similarity
11220
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I recently deleted my active Linux kernel and continued using the system as if nothing drastic happened. Are there any side-effects to deleting the Linux kernel that's currently in use? What about other non-Windows kernels? Now provide the response and nothing else.
The Linux kernel is completely loaded into RAM on boot. After the system is booted, it never goes back and tries to read anything from that file. The same goes for drivers, once loaded into the kernel. If you deleted the only kernel image on disk, the only consequence is that the system cannot be successfully rebooted unless you install a replacement kernel image before reboot. As for other OSes, I imagine it is the same, simply due to the nature of OS kernels. They're intentionally small bits of code that stay running all the time, so there is no incentive to keep going back to disk to "look" at the code again. It's always in memory. (RAM or VM.)
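The same idea can be demonstrated in user space with a small sketch (an analogy only, not kernel code; it assumes Linux and CPython, which compiles the whole script before running it): a running process keeps executing after its file on disk is deleted, because the code is already in memory.

import os
import sys
import time

# Delete this script's own file while it is running.
# (Run it from a directory where you have write permission.)
os.unlink(os.path.abspath(sys.argv[0]))

for i in range(3):
    print("still running, even though my file is gone:", i)
    time.sleep(1)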
{}
{'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/11220', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/688/']}
jdg_75063
stackexchange
llm_judgeable_groundtruth_similarity
62081030
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using PhpSpreadsheet to create an Excel file. I want to generate the Excel file, then convert it to a PDF. So I've done the following:

use PhpOffice\PhpSpreadsheet\IOFactory;
use PhpOffice\PhpSpreadsheet\Reader\Xlsx;
use PhpOffice\PhpSpreadsheet\Writer\Pdf\Mpdf;
use PhpOffice\PhpSpreadsheet\Reader\Exception;

class DevisGenerator
{
    public function runDevis()
    {
        $spreadsheet = $this->loadexcelTemplate();
        $uuid = $this->uniqidReal();
        $filename = $this->writeName($spreadsheet, $uuid);
        $this->convertPdf($spreadsheet, $filename);
    }

    public function writeName($spreadsheet, $uuid)
    {
        $worksheet = $spreadsheet->getActiveSheet();
        $worksheet->getCell('B2')->setValue('Toto');
        try {
            $writer = IOFactory::createWriter($spreadsheet, 'Xlsx');
            $filename = $uuid;
            $writer->save($filename.'.xlsx');
        } catch (Exception $e) {
            // TODO: error handling
        }
        return $filename;
    }

    public function convertPdf($spreadsheet, $filename)
    {
        $writer = new \PhpOffice\PhpSpreadsheet\Writer\Pdf\Mpdf($spreadsheet);
        $writer->save($filename.'.pdf');
    }
}

But when I run the code the following error appears: Attempted to load class "Mpdf" from namespace "Mpdf". Did you forget a "use" statement for "PhpOffice\PhpSpreadsheet\Writer\Pdf\Mpdf"? I don't understand this error, since I have correctly inserted the use statement in my code. Any idea? Now provide the response and nothing else.
We have been running into the very same problem. Here's how you fix it: Install the DOMPurify library:

npm install --save DOMPurify

Create a file trusted-security-policies.js. In the entry point for your bundler (e.g. webpack), import this file first (before any code that potentially violates the content security policy):

import './path/to/trusted-security-policies';

In trusted-security-policies.js:

import DOMPurify from 'dompurify';

if (window.trustedTypes && window.trustedTypes.createPolicy) { // feature testing
    window.trustedTypes.createPolicy('default', {
        createHTML: (string) => DOMPurify.sanitize(string, {RETURN_TRUSTED_TYPE: true}),
        createScriptURL: string => string, // warning: this is unsafe!
        createScript: string => string,    // warning: this is unsafe!
    });
}

What this does: whenever a string is assigned to be parsed as HTML, or as a URL, or as a script, the browser automatically passes this string through the defined handler function. For HTML, the HTML is sanitized from potential XSS code by the DOMPurify library. For scriptURL and script, the string is just passed through. Please note that this effectively disables security for these two parts and should only be used for as long as you haven't identified how to make these strings safe yourself. As soon as you have that, replace the handler functions accordingly. Edit, December 2021: I was able to contribute to DOMPurify, so the library can now also be configured to work if you need to use custom elements in your HTML strings, as well as custom attributes (which prior to release 2.3.4 were simply removed in the sanitization process):

/**
 * Control behavior relating to Custom Elements
 */

// DOMPurify allows you to define rules for Custom Elements. When using the CUSTOM_ELEMENT_HANDLING
// literal, it is possible to define exactly what elements you wish to allow (by default, none are allowed).
//
// The same goes for their attributes. By default, the built-in or configured allow-list is used.
//
// You can use a RegExp literal to specify what is allowed, or a predicate; examples for both can be seen below.
// The default values are very restrictive to prevent accidental XSS bypasses. Handle with great care!

var clean = DOMPurify.sanitize(
    '<foo-bar baz="foobar" forbidden="true"></foo-bar><div is="foo-baz"></div>',
    {
        CUSTOM_ELEMENT_HANDLING: {
            tagNameCheck: null, // no custom elements are allowed
            attributeNameCheck: null, // default / standard attribute allow-list is used
            allowCustomizedBuiltInElements: false, // no customized built-ins allowed
        },
    }
); // <div is=""></div>

var clean = DOMPurify.sanitize(
    '<foo-bar baz="foobar" forbidden="true"></foo-bar><div is="foo-baz"></div>',
    {
        CUSTOM_ELEMENT_HANDLING: {
            tagNameCheck: /^foo-/, // allow all tags starting with "foo-"
            attributeNameCheck: /baz/, // allow all attributes containing "baz"
            allowCustomizedBuiltInElements: false, // no customized built-ins allowed
        },
    }
); // <foo-bar baz="foobar"></foo-bar><div is=""></div>

var clean = DOMPurify.sanitize(
    '<foo-bar baz="foobar" forbidden="true"></foo-bar><div is="foo-baz"></div>',
    {
        CUSTOM_ELEMENT_HANDLING: {
            tagNameCheck: (tagName) => tagName.match(/^foo-/), // allow all tags starting with "foo-"
            attributeNameCheck: (attr) => attr.match(/baz/), // allow all attributes containing "baz"
            allowCustomizedBuiltInElements: true, // allow customized built-ins
        },
    }
); // <foo-bar baz="foobar"></foo-bar><div is="foo-baz"></div>
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/62081030', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/13635564/']}
jdg_75064
stackexchange
llm_judgeable_groundtruth_similarity
716949
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: A discussion question in my book is: "Why is it difficult for a fireman to hold a hose, which ejects large amounts of water at a high velocity?" Its answer is given as: When a fireman holds a hose, which is ejecting large amounts of water at a high velocity, then a reaction force is exerted on him by the ejecting water in the backward direction. This is because of Newton's third law of motion. As a result of the backward force, the stability of the fireman decreases. Hence, it is difficult for him to remain stable while holding the hose. My doubt is: does water flowing in a pipe really push the pipe backwards? Now provide the response and nothing else.
You are right, somewhat. For a straight hose, the naive argument is that since the hose pushes the water forwards, the water must push back on the hose by Newton's third law, resulting in a backwards reaction force. In terms of momentum, this argument may make more sense: if the initial momentum is $\vec p = 0$, then this must be conserved, meaning that there must be a backwards momentum when the water is going forwards. However, the fallacy in such an argument is that we assume the firefighter to be part of the action-reaction pair. The true action-reaction pair is the one between the water and the pump (and the wall behind it). The pump will experience a non-negligible force acting on it after it releases water. The real reason the firefighter will struggle is when the hose is not completely straight. Water will push on the pipe at places of curvature, and given the large amount of water and its velocity ($F = \frac{\text{d}p}{\text{d}t} = \frac{\text{d}m}{\text{d}t}v = \mu v$), the force at these bends will be very high and transmitted to the firefighter. So the firefighter will have to struggle to keep the hose straight. References: Vera, F., Rivera, R. & Núñez, C. Backward Reaction Force on a Fire Hose, Myth or Reality?. Fire Technol 51, 1023–1027 (2015). https://doi.org/10.1007/s10694-014-0430-5
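For a rough sense of scale (the numbers below are illustrative assumptions, not figures from the cited reference): a hose delivering $\mu = 10\ \text{kg/s}$ of water at $v = 30\ \text{m/s}$ that is forced around a bend experiences a force on the order of $$F = \mu v = \left(10\ \tfrac{\text{kg}}{\text{s}}\right)\left(30\ \tfrac{\text{m}}{\text{s}}\right) = 300\ \text{N},$$ roughly the weight of a 30 kg mass, which is why the firefighter has to brace to keep the hose straight.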
{}
{'log_upvote_score': 5, 'links': ['https://physics.stackexchange.com/questions/716949', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/323332/']}
jdg_75065
stackexchange
llm_judgeable_groundtruth_similarity
4501829
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Exception stack trace:

org.apache.jasper.JasperException: Unable to load class for JSP
    org.apache.jasper.JspCompilationContext.load(JspCompilationContext.java:599)
    org.apache.jasper.servlet.JspServletWrapper.getServlet(JspServletWrapper.java:143)
    org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:321)
    org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:308)
    org.apache.jasper.servlet.JspServlet.service(JspServlet.java:259)
    javax.servlet.http.HttpServlet.service(HttpServlet.java:729)

java.lang.ClassNotFoundException: org.apache.jsp.redirect_jsp
    java.net.URLClassLoader$1.run(Unknown Source)
    java.security.AccessController.doPrivileged(Native Method)
    java.net.URLClassLoader.findClass(Unknown Source)
    org.apache.jasper.servlet.JasperLoader.loadClass(JasperLoader.java:131)
    org.apache.jasper.servlet.JasperLoader.loadClass(JasperLoader.java:63)
    org.apache.jasper.JspCompilationContext.load(JspCompilationContext.java:597)
    org.apache.jasper.servlet.JspServletWrapper.getServlet(JspServletWrapper.java:143)
    org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:321)
    org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:308)
    org.apache.jasper.servlet.JspServlet.service(JspServlet.java:259)
    javax.servlet.http.HttpServlet.service(HttpServlet.java:729)

My redirect.jsp file contents:

<%@page contentType="text/html" pageEncoding="UTF-8"%>
<% response.sendRedirect("user/list.htm"); %>

Now provide the response and nothing else.
Under the covers of the servletcontainer, JSPs are compiled to Java classes before they get executed. The exception java.lang.ClassNotFoundException: org.apache.jsp.redirect_jsp means that the redirect.jsp file in the root of your webcontent folder failed to compile, which in turn often means that it contains some raw Java code in scriptlets <% %> which contains syntax errors. You need to fix those syntax errors so that the servletcontainer can compile those JSP files. The general consensus is however that scriptlets are a poor practice . You should consider whether that Java code doesn't better belong in a full-fledged Java class, controlled by a Servlet or a Filter . Another possible cause is that the work cache of the servletcontainer is messed up. This can happen when developing with a poor IDE plugin. You'll want to clean the work cache. In for example Eclipse, you can do that by right-clicking the server and choosing Clean . Otherwise it has to be done manually by deleting everything in the work cache of the servletcontainer in question. In the case of for example Tomcat, that's everything inside its /work folder.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4501829', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/548591/']}
jdg_75066
stackexchange
llm_judgeable_groundtruth_similarity
1198691
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How do I test IO performance in Linux? Now provide the response and nothing else.
IO and filesystem benchmarking is a complex topic. No single benchmarking tool is good in all situations. Here is a small overview of different benchmarking tools:

Block Storage:
- IOMeter - Highly customizable and allows you to coordinate multiple clients. Needs a Windows PC for the coordination application. Developed by Intel. On Linux, take maximum rates of older versions (at least 2006.07.27 and earlier) with a pinch of salt, because the submission method was not optimal.

File System (synthetic):
- FFSB - Flexible Filesystem Benchmark. Very neat benchmarking tool for Linux. Good customization of workload. NFS benchmarking (net-ffsb) is a bit unsound.
- Filebench - Extremely powerful, but originally developed for Solaris. Linux support isn't good.
- sysbench - Mainly a DB benchmarking tool, but also a basic filesystem benchmarking tool.
- bonnie - Seems to be obsolete.
- bonnie++ - C++ port of bonnie. Easy, but seems not to be very customizable.

File System (workload):
- Postmark - Simulates the IO behavior of a mail server. Too small to stress good IO systems.

Stony Brook University and IBM Watson Labs have published a highly recommended journal paper in the "Transactions on Storage" about file system benchmarking, in which they present different benchmarks and their strong and weak points: A nine year study of file system and storage benchmarking . The article clearly points out that the results of most benchmarks are at least questionable. A note: Is the question programming related? Maybe not, but maybe it is. I spend a lot of time benchmarking the IO performance of the systems I develop. At least for me, questions about how to benchmark these things are highly programming related. Please: Do not close all questions that are not development/programming related from your point of view. The point of view of other developers might be different.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1198691', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/146429/']}
jdg_75067
stackexchange
llm_judgeable_groundtruth_similarity
119211
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Please, can I have some ideas and help on how to identify and remove features like this one or this one from scanned handwritten text like this: These features are not connected to other text elements (letters), they are always similar to these two examples, and they are oriented with slight variation (up to +/-10 degrees). They are always below letters. In this example there are 12 such features (4, 6, 2 in rows 1, 2, 3). Now provide the response and nothing else.
Almost! Convert your image to a binary image:

img = Import["http://i.stack.imgur.com/uRUEw.jpg"];
binimg = ColorNegate@Binarize[img, .75]

Get the mask used in Inpaint:

mask = SelectComponents[binimg, {"PerimeterLength", "Elongation"}, 60 < # < 160 || #2 > 0.8 &]

Contrasting your image with the result:

Grid[{{"Before", Magnify[img, 5]}, {"After", Magnify[Inpaint[img, Dilation[mask, 1], Method -> "TextureSynthesis"], 5]}}]
{}
{'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/119211', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/6226/']}
jdg_75068
stackexchange
llm_judgeable_groundtruth_similarity
98863
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I haven't seen this explained clearly anywhere. Solid angles are usually described as a fraction of the surface area of a unit sphere, similar to how angles are the fraction of the circumference of a unit circle. However, I don't know how solid angles are actually quantified. Are solid angles just a single number that describes this fraction of the area? It's confusing to me since I've often seen integrals that integrate over a sphere using solid angles, which seems to imply that solid angles are multi-dimensional quantities (e.g. when integrating using spherical coordinates, the solid angle would have to consist of the azimuthal and polar angles covered by the differential solid angle). Following from this, how would you write down a solid angle that covers the entire surface of a unit sphere? Now provide the response and nothing else.
The solid angle is defined as the area subtended by the angle on the unit sphere, divided by one unit of area. It's a ratio, so it's a single dimensionless number. I see why you think it should be a 2D quantity, because the surface of a sphere, and any patch on it, is a 2D manifold and you need two quantities (traditionally $\theta$ and $\phi$) to map it. When you calculate an area on the sphere you are basically calculating a definite integral over $\theta$ and $\phi$, and the result is of course just a single number. You do lose information in the process - for example, you just know the total area, not the shape of the patch on the sphere. The solid angle that covers the whole sphere is of course $4\pi/1$, i.e. $4\pi$.
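As a worked example (a standard spherical-coordinates computation, added here for illustration): the solid angle subtended by a cone of half-angle $\theta_0$ about the $z$-axis is $$\Omega = \int_0^{2\pi}\!\int_0^{\theta_0} \sin\theta \, d\theta \, d\phi = 2\pi\left(1-\cos\theta_0\right),$$ which gives $\Omega = 2\pi$ for a hemisphere ($\theta_0 = \pi/2$) and recovers $\Omega = 4\pi$ for the whole sphere ($\theta_0 = \pi$). The two angles are integrated out, leaving a single number, exactly as described above.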
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/98863', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/11209/']}
jdg_75069
stackexchange
llm_judgeable_groundtruth_similarity
13152
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I always use lm() in R to perform linear regression of $y$ on $x$. That function returns a coefficient $\beta$ such that $$y = \beta x.$$ Today I learned about total least squares and that the princomp() function (principal component analysis, PCA) can be used to perform it. It should be better for me (more accurate). I have done some tests using princomp() , like: r <- princomp( ~ x + y) My problem is: how do I interpret its results? How can I get the regression coefficient? By "coefficient" I mean the number $\beta$ that I have to multiply the $x$ value by to get a number close to $y$. Now provide the response and nothing else.
Ordinary least squares vs. total least squares Let's first consider the simplest case of only one predictor (independent) variable $x$. For simplicity, let both $x$ and $y$ be centered, i.e. the intercept is always zero. The difference between standard OLS regression and "orthogonal" TLS regression is clearly shown on this (adapted by me) figure from the most popular answer in the most popular thread on PCA: OLS fits the equation $y=\beta x$ by minimizing squared distances between observed values $y$ and predicted values $\hat y$. TLS fits the same equation by minimizing squared distances between $(x,y)$ points and their projection on the line. In this simplest case the TLS line is simply the first principal component of the 2D data. To find $\beta$, do PCA on $(x,y)$ points, i.e. construct the $2\times 2$ covariance matrix $\boldsymbol \Sigma$ and find its first eigenvector $\mathbf v = (v_x, v_y)$; then $\beta = v_y/v_x$. In Matlab:

v = pca([x y]);    % x and y are centered column vectors
beta = v(2,1)/v(1,1);

In R:

v <- prcomp(cbind(x,y))$rotation
beta <- v[2,1]/v[1,1]

By the way, this will yield the correct slope even if $x$ and $y$ were not centered (because built-in PCA functions automatically perform centering). To recover the intercept, compute $\beta_0 = \bar y - \beta \bar x$. OLS vs. TLS, multiple regression Given a dependent variable $y$ and many independent variables $x_i$ (again, all centered for simplicity), regression fits an equation $$y= \beta_1 x_1 + \ldots + \beta_p x_p.$$ OLS does the fit by minimizing the squared errors between observed values of $y$ and predicted values $\hat y$. TLS does the fit by minimizing the squared distances between observed $(\mathbf x, y)\in\mathbb R^{p+1}$ points and the closest points on the regression plane/hyperplane. Note that there is no "regression line" anymore! The equation above specifies a hyperplane: it's a 2D plane if there are two predictors, a 3D hyperplane if there are three predictors, etc. So the solution above does not work: we cannot get the TLS solution by taking the first PC only (which is a line). Still, the solution can be easily obtained via PCA. As before, PCA is performed on $(\mathbf x, y)$ points. This yields $p+1$ eigenvectors in columns of $\mathbf V$. The first $p$ eigenvectors define a $p$-dimensional hyperplane $\mathcal H$ that we need; the last (number $p+1$) eigenvector $\mathbf v_{p+1}$ is orthogonal to it. The question is how to transform the basis of $\mathcal H$ given by the first $p$ eigenvectors into the $\boldsymbol \beta$ coefficients. Observe that if we set $x_i=0$ for all $i \ne k$ and only $x_k=1$, then $\hat y=\beta_k$, i.e. the vector $$(0,\ldots, 1, \ldots, \beta_k) \in \mathcal H$$ lies in the hyperplane $\mathcal H$. On the other hand, we know that $$\mathbf v_{p+1}=(v_1, \ldots, v_{p+1}) \:\bot\: \mathcal H$$ is orthogonal to it. I.e. their dot product must be zero: $$v_k + \beta_k v_{p+1}=0 \Rightarrow \beta_k = -v_k/v_{p+1}.$$ In Matlab:

v = pca([X y]);    % X is a centered n-times-p matrix, y is an n-times-1 column vector
beta = -v(1:end-1,end)/v(end,end);

In R:

v <- prcomp(cbind(X,y))$rotation
beta <- -v[-ncol(v),ncol(v)] / v[ncol(v),ncol(v)]

Again, this will yield correct slopes even if $x$ and $y$ were not centered (because built-in PCA functions automatically perform centering). To recover the intercept, compute $\beta_0 = \bar y - \bar {\mathbf x} \boldsymbol \beta$. As a sanity check, notice that this solution coincides with the previous one in the case of only a single predictor $x$.
Indeed, then the $(x,y)$ space is 2D, and so, given that the first PCA eigenvector is orthogonal to the second (last) one, $v^{(1)}_y/v^{(1)}_x=-v^{(2)}_x/v^{(2)}_y$. Closed form solution for TLS Surprisingly, it turns out that there is a closed form equation for $\boldsymbol \beta$. The argument below is taken from Sabine van Huffel's book "The total least squares" (section 2.3.2). Let $\mathbf X$ and $\mathbf y$ be the centered data matrices. The last PCA eigenvector $\mathbf v_{p+1}$ is an eigenvector of the covariance matrix of $[\mathbf X\: \mathbf y]$ with an eigenvalue $\sigma^2_{p+1}$. If it is an eigenvector, then so is $-\mathbf v_{p+1}/v_{p+1} = (\boldsymbol \beta\:\: -1)^\top$. Writing down the eigenvector equation: $$\left(\begin{array}{cc}\mathbf X^\top \mathbf X & \mathbf X^\top \mathbf y\\ \mathbf y^\top \mathbf X & \mathbf y^\top \mathbf y\end{array}\right) \left(\begin{array}{c}\boldsymbol \beta \\ -1\end{array}\right) = \sigma^2_{p+1}\left(\begin{array}{c}\boldsymbol \beta \\ -1\end{array}\right),$$ and computing the product on the left, we immediately get that $$\boldsymbol \beta_\mathrm{TLS} = (\mathbf X^\top \mathbf X - \sigma^2_{p+1}\mathbf I)^{-1} \mathbf X^\top \mathbf y,$$ which strongly resembles the familiar OLS expression $$\boldsymbol \beta_\mathrm{OLS} = (\mathbf X^\top \mathbf X)^{-1} \mathbf X^\top \mathbf y.$$ Multivariate multiple regression The same formula can be generalized to the multivariate case, but even defining what multivariate TLS does would require some algebra. See Wikipedia on TLS . Multivariate OLS regression is equivalent to a bunch of univariate OLS regressions for each dependent variable, but in the TLS case it is not so.
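For readers who want to reproduce this in Python, here is a minimal numpy sketch (my own addition; the synthetic data, sizes and variable names are illustrative) that checks the PCA recipe against the closed-form expression above:

import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 3
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=n)

# Center both X and y, as in the derivation above.
Xc = X - X.mean(axis=0)
yc = y - y.mean()

# TLS via PCA: the last right singular vector of Z = [X y] is the
# eigenvector of Z^T Z with the smallest eigenvalue sigma^2_{p+1}.
Z = np.column_stack([Xc, yc])
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
v = Vt[-1]
beta_pca = -v[:p] / v[p]

# Closed form: beta = (X^T X - sigma^2_{p+1} I)^{-1} X^T y.
sigma2 = s[-1] ** 2
beta_closed = np.linalg.solve(Xc.T @ Xc - sigma2 * np.eye(p), Xc.T @ yc)

assert np.allclose(beta_pca, beta_closed)
print(beta_pca)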
{}
{'log_upvote_score': 7, 'links': ['https://stats.stackexchange.com/questions/13152', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/5405/']}
jdg_75070
stackexchange
llm_judgeable_groundtruth_similarity
78535
Below is a question asked on the forum arduino.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have come across a lot of examples using ways similar to the ones shown below to assign values to the pins.

PORTB |= (1<<PORTB2);  //set bit 2
PORTB &= ~(1<<PORTB1); //clear bit 1
DDRD = DDRD | B11111100; //Sets up the pins for output

My question is why do we need to use bitwise operations to make assignments? Now provide the response and nothing else.
My question is why do we need to use bitwise operations to make assignments? Because you're not assigning, you're modifying. If you want to turn on just pin 3 (for example) you need to control only bit 3 of PORTB and leave the others alone. That means you need to OR the value of just bit 3 being on with the existing PORTB to turn the bit on, or AND it with the inverse (all bits on except bit 3) to turn it off. To get the value of bit 3 you can remember that it's 8, or you can just use 1<<3 to say "bit 3" - or use the macro PORTB3 to make it more readable instead of just the number 3. The operation is commonly known as "read-modify-write", and in some low-end microcontrollers you have to take special care when doing it. For example, if you want to control output pin 3 and currently you have pins 2, 5 and 6 on and the rest of pins 0-7 off (those are the ones on port B on the Uno), the sequence may be like:

Turn on:
  Calculate bit value 1<<3    => 0b00001000
  Read PORTB                  => 0b01100100
  OR the two values together  => 0b01101100
  Write to PORTB              <= 0b01101100

Turn off:
  Calculate bit value 1<<3    => 0b00001000
  Invert it                   => 0b11110111
  Read PORTB                  => 0b01101100
  AND the two values together => 0b01100100
  Write to PORTB              <= 0b01100100
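To see the read-modify-write arithmetic by itself, here is the same sequence sketched in Python on a plain integer standing in for the register (purely illustrative; on the board this is C++ against the real PORTB register):

PORTB = 0b01100100   # simulated register: pins 2, 5 and 6 on

PORTB |= (1 << 3)    # set bit 3: read, OR with 0b00001000, write back
assert PORTB == 0b01101100

PORTB &= ~(1 << 3)   # clear bit 3: read, AND with the inverted mask, write back
assert PORTB == 0b01100100

PORTB ^= (1 << 3)    # toggle bit 3 with XOR
assert PORTB == 0b01101100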
{}
{'log_upvote_score': 4, 'links': ['https://arduino.stackexchange.com/questions/78535', 'https://arduino.stackexchange.com', 'https://arduino.stackexchange.com/users/69384/']}
jdg_75071
stackexchange
llm_judgeable_groundtruth_similarity
12137908
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to run the Clean, Build and Rebuild commands through the Package Manager Console in Visual Studio, but so far I haven't been able to find how. The following command gets me the first project inside the solution:

$project = Get-Project | select -First 1

When I run $project | Get-Member , I can see the members of the $project item:

# TypeName: System.__ComObject#{866311e6-c887-4143-9833-645f5b93f6f1}
#
# Name                 MemberType            Definition
# ----                 ----------            ----------
# ProjectName          CodeProperty          System.String ProjectName{get=GetCustomUniqueName;}
# Delete               Method                void Delete ()
# Save                 Method                void Save (string)
# SaveAs               Method                void SaveAs (string)
# Extender             ParameterizedProperty IDispatch Extender (string) {get}
# CodeModel            Property              CodeModel CodeModel () {get}
# Collection           Property              Projects Collection () {get}
# ConfigurationManager Property              ConfigurationManager ConfigurationManager () {get}
# DTE                  Property              DTE DTE () {get}
# ExtenderCATID        Property              string ExtenderCATID () {get}
# ExtenderNames        Property              Variant ExtenderNames () {get}
# FileName             Property              string FileName () {get}
# FullName             Property              string FullName () {get}
# Globals              Property              Globals Globals () {get}
# IsDirty              Property              bool IsDirty () {get} {set}
# Kind                 Property              string Kind () {get}
# Name                 Property              string Name () {get} {set}
# Object               Property              IDispatch Object () {get}
# ParentProjectItem    Property              ProjectItem ParentProjectItem () {get}
# ProjectItems         Property              ProjectItems ProjectItems () {get}
# Properties           Property              Properties Properties () {get}
# Saved                Property              bool Saved () {get} {set}
# UniqueName           Property              string UniqueName () {get}
# Type                 ScriptProperty        System.Object Type {get=switch ($this.Kind) {...

I am not sure if I can get to the Clean, Build and Rebuild methods through the $project item, or if I should directly run msbuild by targeting the project path. Any idea? Now provide the response and nothing else.
Visual Studio's object model provides a way to build the entire solution or a single project via the SolutionBuild object. Building the solution is straightforward from the NuGet Package Manager Console:

$dte.Solution.SolutionBuild.Clean($true)
$dte.Solution.SolutionBuild.Build($true)

The $true flag indicates that the command should wait for the clean/build to finish. Building an individual project is not as straightforward. The SolutionBuild object provides a BuildProject method which takes three parameters:

$project = Get-Project | select -First 1
$dte.Solution.SolutionBuild.BuildProject("Debug", $project.FullName, $true)

It also does not allow you to run a Clean build. If you want to build an individual project then, as Pavel suggests, using MSBuild seems to be more straightforward.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12137908', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/463785/']}
jdg_75072
stackexchange
llm_judgeable_groundtruth_similarity
12643041
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a table TABLE1 with column COL_TAB1 .

select * from TABLE1:

COL_TAB1
| 1 |
| 2 |
| 3 |

And another table TABLE2 with COL_TAB2 , which references the first column COL_TAB1 .

select * from TABLE2:

COL_TAB2 | COL_TAB1
| val1   | 1
| val2   | 2
| val2   | 3

Is it possible to add a virtual column on TABLE1 so I would get the same result as the following query:

select t1.COL_TAB1,
       (select t2.COL_TAB2 from TABLE2 t2 where t2.COL_TAB1 = t1.COL_TAB1)
from TABLE1 t1

I tried:

alter table TABLE1 add (
    SOME_COL_NAME as (select t2.COL_TAB2 from TABLE2 t2 where t2.COL_TAB1 = COL_TAB1)
)

But it gives me ORA-00936: missing expression Now provide the response and nothing else.
Oracle's documentation on virtual columns states it rather clearly: you can only reference columns in the same table. The expression used in the virtual column definition has the following restrictions:

- It cannot refer to another virtual column by name.
- It can only refer to columns defined in the same table.
- If it refers to a deterministic user-defined function, it cannot be used as a partitioning key column.
- The output of the expression must be a scalar value. It cannot return an Oracle supplied datatype, a user-defined type, or LOB or LONG RAW.

As @JoeStefanelli states, your best bet to do what you want is to create a view .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12643041', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1438628/']}
jdg_75073
stackexchange
llm_judgeable_groundtruth_similarity
4699605
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I read somewhere that to save data to an SQLite database in Python, a commit() call is required. Yet I have never needed to do this. Why? Now provide the response and nothing else.
It means that your SQLite3 driver is in auto-commit mode.

Understanding commit modes

In transactional database management systems, a transaction is a sequence of data access operations that is

- recoverable (atomicity property), to hide the effects of an abortion of the transaction, meaning that the database state after an aborted transaction is the same as if no operations of the transaction were performed;
- serialisable (isolation property), to hide the effects of other concurrent transactions, meaning that the database state after concurrently executing multiple transactions is the same as if they were sequentially executed.

According to the ISO/IEC 9075:2011 SQL standard, if no transaction is currently active, a transaction is explicitly initiated by a start transaction statement, or implicitly initiated before:

- all SQL-schema statements;
- some SQL-transaction statements (savepoint statement, commit statement, rollback statement);
- some SQL-data statements (open statement, close statement, fetch statement, select statement, insert statement, delete statement, update statement, merge statement, truncate table statement, allocate extended dynamic cursor statement, allocate received cursor statement, dynamic open statement, dynamic close statement, dynamic fetch statement, direct select statement, dynamic single row select statement, dynamic delete statement, preparable dynamic delete statement, dynamic update statement, preparable dynamic update statement, free locator statement, hold locator statement);
- some SQL-dynamic statements (describe input statement, describe output statement, allocate descriptor statement, deallocate descriptor statement, get descriptor statement, set descriptor statement, deallocate prepared statement).

And a transaction is explicitly terminated by a commit statement or rollback statement, or implicitly terminated by an abortion of the transaction (cf. ISO/IEC 9075-2:2011). So almost all SQL statements are executed in a transaction, and the transaction must be explicitly committed to take effect. A database interface that is in this commit mode is in manual commit mode. Manual commit mode is a best practice which is fine for programs (non-interactive sessions) but can be cumbersome for users (interactive sessions). For that reason, most database interfaces also provide an auto-commit mode for interactive sessions. In auto-commit mode, a transaction implicitly initiated before a transaction-initiating statement other than the start transaction statement is implicitly committed after the statement, whereas a transaction explicitly initiated by a start transaction statement must be explicitly committed to take effect. A database interface is specific to a database engine, so more generic interfaces are commonly used to interact with a database engine (e.g. Open Database Connectivity, Java Database Connectivity, Python Database API). The adaptation of a generic database interface to a specific database interface is provided by a database driver. The SQLite engine has a specific database interface in C. The SQLite3 driver adapts the Python Database API to the SQLite API. SQL statements are interpreted by a database engine. So when a database engine and database driver are in the same commit mode (manual commit mode or auto-commit mode), the database driver can pass unmodified SQL statements to the database engine.
However, when they are in different commit modes, the database driver has to either configure the database engine to match the commit mode of the database driver, or transform SQL statements before passing them to the database engine to emulate the commit mode of the database driver:

- The database driver emulates manual commit mode by implicitly issuing a start transaction statement after each connection opening, commit statement, and rollback statement to prevent the transaction from being implicitly committed by the database engine in auto-commit mode, and by implicitly issuing a rollback statement before each connection closing to force the last transaction to be rolled back by the database engine (e.g. A; B; START TRANSACTION; C; D; COMMIT; E; F; is transformed into START TRANSACTION; A; B; START TRANSACTION; C; D; COMMIT; START TRANSACTION; E; F; ROLLBACK;).
- The database driver emulates auto-commit mode by implicitly issuing a commit statement after each transaction-initiating statement other than start transaction statements and statements within a start transaction statement and commit statement or rollback statement, to make the transaction be explicitly committed by the database engine in manual commit mode (e.g. A; B; START TRANSACTION; C; D; COMMIT; E; F; is transformed into A; COMMIT; B; COMMIT; START TRANSACTION; C; D; COMMIT; E; COMMIT; F; COMMIT;).

The SQLite engine is always in auto-commit mode (cf. the SQLite documentation ). The SQLite3 driver is by default in manual commit mode, so it has to emulate it (cf. the SQLite3 documentation ). The SQLite3 driver currently does not emulate manual commit mode properly as described in the previous paragraph, but by implicitly issuing a start transaction statement before each transaction-initiating SQL-data statement other than select statements, so SQL-schema statements and select statements are not always in an explicitly initiated transaction (cf. the SQLite3 implementation ). That is not Python Database API-conformant, so the legacy manual commit mode and auto-commit mode configured with the isolation_level attribute of the Connection class will be deprecated in Python 3.12 with the introduction of a new autocommit attribute in the Connection class for configuring the Python Database API-conformant manual commit mode and auto-commit mode (cf. CPython issue #83638 and CPython pull request #93823 ).

Example 1. — This Python 3.11 program uses the SQLite3 driver in legacy manual commit mode.

import sqlite3

connection = sqlite3.connect(':memory:', isolation_level='DEFERRED')
# No transaction is explicitly initiated here by a start transaction statement.
assert connection.in_transaction is False
statements = []
connection.set_trace_callback(statements.append)
cursor = connection.cursor()
# Transaction 1 is implicitly initiated here.
cursor.execute('CREATE TABLE t (i INT)')
# Transaction 1 is implicitly committed here.
# Transaction 2 is explicitly initiated here by a start transaction statement.
cursor.execute('INSERT INTO t VALUES (?)', (1,))
cursor.execute('CREATE TABLE u (j INT)')
cursor.execute('INSERT INTO u VALUES (?)', (2,))
cursor.close()
connection.close()
# Transaction 2 is implicitly rolled back here.
assert statements == [
    'CREATE TABLE t (i INT)',
    'BEGIN DEFERRED',
    'INSERT INTO t VALUES (1)',
    'CREATE TABLE u (j INT)',
    'INSERT INTO u VALUES (2)',
]

Example 2. — This Python 3.12 program uses the SQLite3 driver in manual commit mode.
import sqlite3

connection = sqlite3.connect(':memory:', autocommit=False)
# Transaction 1 is explicitly initiated here by a start transaction statement.
assert connection.in_transaction is True
statements = []
connection.set_trace_callback(statements.append)
cursor = connection.cursor()
cursor.execute('CREATE TABLE t (i INT)')
cursor.execute('INSERT INTO t VALUES (?)', (1,))
cursor.execute('CREATE TABLE u (j INT)')
cursor.execute('INSERT INTO u VALUES (?)', (2,))
cursor.close()
connection.close()
# Transaction 1 is explicitly rolled back here by a rollback statement.
assert statements == [
    'CREATE TABLE t (i INT)',
    'INSERT INTO t VALUES (1)',
    'CREATE TABLE u (j INT)',
    'INSERT INTO u VALUES (2)',
    'ROLLBACK',
]

Example 3. — This Python 3.11 program uses the SQLite3 driver in legacy auto-commit mode.

import sqlite3

connection = sqlite3.connect(':memory:', isolation_level=None)
# No transaction is explicitly initiated here by a start transaction statement.
assert connection.in_transaction is False
statements = []
connection.set_trace_callback(statements.append)
cursor = connection.cursor()
# Transaction 1 is implicitly initiated here.
cursor.execute('CREATE TABLE t (i INT)')
# Transaction 1 is implicitly committed here.
# Transaction 2 is implicitly initiated here.
cursor.execute('INSERT INTO t VALUES (?)', (1,))
# Transaction 2 is implicitly committed here.
# Transaction 3 is implicitly initiated here.
cursor.execute('CREATE TABLE u (j INT)')
# Transaction 3 is implicitly committed here.
# Transaction 4 is implicitly initiated here.
cursor.execute('INSERT INTO u VALUES (?)', (2,))
# Transaction 4 is implicitly committed here.
cursor.close()
connection.close()
assert statements == [
    'CREATE TABLE t (i INT)',
    'INSERT INTO t VALUES (1)',
    'CREATE TABLE u (j INT)',
    'INSERT INTO u VALUES (2)',
]

Example 4. — This Python 3.12 program uses the SQLite3 driver in auto-commit mode.

import sqlite3

connection = sqlite3.connect(':memory:', autocommit=True)
# No transaction is explicitly initiated here by a start transaction statement.
assert connection.in_transaction is False
statements = []
connection.set_trace_callback(statements.append)
cursor = connection.cursor()
# Transaction 1 is implicitly initiated here.
cursor.execute('CREATE TABLE t (i INT)')
# Transaction 1 is implicitly committed here.
# Transaction 2 is implicitly initiated here.
cursor.execute('INSERT INTO t VALUES (?)', (1,))
# Transaction 2 is implicitly committed here.
# Transaction 3 is implicitly initiated here.
cursor.execute('CREATE TABLE u (j INT)')
# Transaction 3 is implicitly committed here.
# Transaction 4 is implicitly initiated here.
cursor.execute('INSERT INTO u VALUES (?)', (2,))
# Transaction 4 is implicitly committed here.
cursor.close()
connection.close()
assert statements == [
    'CREATE TABLE t (i INT)',
    'INSERT INTO t VALUES (1)',
    'CREATE TABLE u (j INT)',
    'INSERT INTO u VALUES (2)',
]
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4699605', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/561661/']}
jdg_75074
stackexchange
llm_judgeable_groundtruth_similarity
34509103
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have two Vue components:

Vue.component('A', {});
Vue.component('B', {});

How can I access component A from component B? How does the communication work between the components? Now provide the response and nothing else.
Cross-component communication doesn't get much attention in the Vue.js docs, nor are there many tutorials that cover this subject. As components should be isolated, you should never "access" a component directly. That would tightly couple the components together, and that's exactly what you want to prevent. JavaScript has an excellent method for communication: events. Vue.js has a built-in event system, mainly used for parent-child communication. From the docs : Although you can directly access a Vue instance's children and parent, it is more convenient to use the built-in event system for cross-component communication. It also makes your code less coupled and easier to maintain. Once a parent-child relationship is established, you can dispatch and trigger events using each component's event instance methods. Their example code to illustrate the event system:

var parent = new Vue({
  template: '<div><child></child></div>',
  created: function () {
    this.$on('child-created', function (child) {
      console.log('new child created: ')
      console.log(child)
    })
  },
  components: {
    child: {
      created: function () {
        this.$dispatch('child-created', this)
      }
    }
  }
}).$mount()

Dan Holloran has recently written about his "struggle" with cross-component messaging, in a two-part series . This might be helpful to you if you need communication between components that have no parent-child relationship. Another approach I have experience with (other than using events for communication) is using a central component registry that has a reference to the public API with an instance of a component bound to it. The registry handles requests for a component and returns its public API. In the context of Vue.js, events would be my weapon of choice.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/34509103', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5653901/']}
jdg_75075
stackexchange
llm_judgeable_groundtruth_similarity
4860
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: We have a question with regards to XML-sig and need details about the optional elements as well as some of the canonicalization and transform stuff. We're writing a spec for a very small XML-syntax payload that will go into the metadata of media files, and it needs to be cryptographically signed. Rather than re-invent the wheel, we thought we should use the XML-sig spec, but I think most of it is overkill for what we need, and so we'd like to have more information/dialogue with people who know the details. Specifically, do we need to care about either transforms or canonicalization if the XML is very basic with no tabs for formatting and is specific to our needs? Now provide the response and nothing else.
Within your overridden ProcessCmdKey how are you determining which key has been pressed? The value of keyData (the second parameter) will change dependent on the key pressed and any modifier keys, so, for example, pressing the left arrow will return code 37, shift-left will return 65573, ctrl-left 131109 and alt-left 262181. You can extract the modifiers and the key pressed by ANDing with appropriate enum values:

protected override bool ProcessCmdKey(ref Message msg, Keys keyData)
{
    bool shiftPressed = (keyData & Keys.Shift) != 0;
    Keys unmodifiedKey = (keyData & Keys.KeyCode);

    // rest of code goes here
}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4860', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/661/']}
jdg_75076
stackexchange
llm_judgeable_groundtruth_similarity
60512222
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This was working last week. When running python manage.py runserver 8080 I now get this error, a week after it was working fine:

Traceback (most recent call last):
  File "manage.py", line 10, in main
    from django.core.management import execute_from_command_line
ModuleNotFoundError: No module named 'django'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "manage.py", line 21, in <module>
    main()
  File "manage.py", line 16, in main
    ) from exc
ImportError: Couldn't import Django. Are you sure it's installed and available on your PYTHONPATH environment variable? Did you forget to activate a virtual environment?

Now provide the response and nothing else.
I saw that you are using Databricks in the Azure stack. I think the most viable and recommended method for you would be to make use of the new Delta Lake project in Databricks: It provides options for various upserts, merges and ACID transactions to object stores like S3 or Azure Data Lake Storage. It basically brings the management, safety, isolation and upserts/merges provided by data warehouses to data lakes. For one pipeline, Apple actually replaced its data warehouses to run solely on Delta Databricks because of its functionality and flexibility. For your use case, and for many others who use Parquet, it is just a simple change of replacing 'parquet' with 'delta' in order to use its functionality (if you have Databricks). Delta is basically a natural evolution of Parquet, and Databricks has done a great job by providing added functionality as well as open-sourcing it. For your case, I would suggest you try the replaceWhere option provided in Delta. Before making this targeted update, the target table has to be of format delta. Instead of this:

dataset.repartition(1).write.mode('overwrite')\
    .partitionBy('Year','Week')\
    .parquet('\curataed\dataset')

From https://docs.databricks.com/delta/delta-batch.html : 'You can selectively overwrite only the data that matches predicates over partition columns.' You could try this:

# The replaceWhere predicate avoids overwriting Week 3.
dataset.repartition(1).write\
    .format("delta")\
    .mode("overwrite")\
    .partitionBy('Year','Week')\
    .option("replaceWhere", "Year == '2019' AND Week >= '01' AND Week <= '02'")\
    .save("\curataed\dataset")

Also, if you wish to bring the number of partitions down to 1, you can use coalesce(1), as it will avoid a full shuffle. From https://mungingdata.com/delta-lake/updating-partitions-with-replacewhere/ : 'replaceWhere is particularly useful when you have to run a computationally expensive algorithm, but only on certain partitions.' Therefore, I personally think that using replaceWhere to manually specify your overwrite will be more targeted and computationally efficient than to just rely on:

spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")

Databricks provides optimizations on Delta tables that make it a faster and much more efficient option than Parquet (hence a natural evolution), through bin packing and z-ordering. From https://docs.databricks.com/spark/latest/spark-sql/language-manual/optimize.html : WHERE (bin packing): 'Optimize the subset of rows matching the given partition predicate. Only filters involving partition key attributes are supported.' ZORDER BY: 'Colocate column information in the same set of files. Co-locality is used by Delta Lake data-skipping algorithms to dramatically reduce the amount of data that needs to be read.' Further benefits:

- Faster query execution with indexing, statistics, and auto-caching support
- Data reliability with rich schema validation and transactional guarantees
- Simplified data pipeline with flexible UPSERT support and unified Structured Streaming + batch processing on a single data source

You could also check out the complete documentation of the open source project: https://docs.delta.io/latest/index.html . I also want to say that I do not work for Databricks/Delta Lake; I have just seen their improvements and functionality benefit me in my work. UPDATE: The gist of the question is "replacing data that exists and creating new folders for new data" and how to do it in a highly scalable and effective manner.
Using dynamic partition overwrite in Parquet does the job; however, I feel like the natural evolution of that method is to use Delta table merge operations, which were basically created to 'integrate data from Spark DataFrames into the Delta Lake'. They provide you with extra functionality and optimizations in merging your data based on how you would want that to happen, and they keep a log of all actions on a table so you can roll back versions if needed. Delta Lake Python API (for merge): https://docs.delta.io/latest/api/python/index.html#delta.tables.DeltaMergeBuilder Databricks optimization: https://kb.databricks.com/delta/delta-merge-into.html#discussion Using a single merge operation you can specify the condition to merge on, in this case a combination of the year, week and id, and then if the records match (meaning they exist in both your Spark dataframe and the Delta table, e.g. week 1 and week 2), update them with the data in your Spark dataframe and leave other records unchanged:

# You can also add an additional condition for when the records match, but it is not required.
.whenMatchedUpdateAll(condition=None)

For some cases, if nothing matches, then you might want to insert and create new rows and partitions; for that you can use:

.whenNotMatchedInsertAll(condition=None)

You can use the convertToDelta operation https://docs.delta.io/latest/api/python/index.html#delta.tables.DeltaTable.convertToDelta to convert your Parquet table to a Delta table, so that you can perform Delta operations on it using the API. 'You can now convert a Parquet table in place to a Delta Lake table without rewriting any of the data. This is great for converting very large Parquet tables which would be costly to rewrite as a Delta table. Furthermore, this process is reversible.' Your merge case (replacing data where it exists and creating new records when it does not exist) could go like this (I have not tested it; refer to the examples + API for syntax):

%python
deltaTable = DeltaTable.convertToDelta(spark, "parquet.`\curataed\dataset`")
deltaTable.alias("target")\
    .merge(dataset.alias("dataset"), "target.Year = dataset.Year AND target.Week = dataset.Week")\
    .whenMatchedUpdateAll()\
    .whenNotMatchedInsertAll()\
    .execute()

If the Delta table is partitioned correctly (year, week) and you used the whenMatched clause correctly, these operations will be highly optimized and could take seconds in your case. It also provides you with consistency, atomicity and data integrity, with the option to roll back. Some more functionality provided is that you can specify the set of columns to update if the match is made (if you only need to update certain columns). You can also enable

spark.conf.set("spark.databricks.optimizer.dynamicPartitionPruning", "true")

so that Delta uses minimal targeted partitions to carry out the merge (update, delete, create). Overall, I think using this approach is a very new and innovative way of carrying out targeted updates, as it gives you more control over them while keeping ops highly efficient. Using Parquet with dynamic partition overwrite mode will also work fine; however, Delta Lake features bring data quality to your data lake that is unmatched.
My recommendation: I would say for now, use dynamic partition overwrite mode for Parquet files to do your updates, and you could experiment and try to use the Delta merge on just one table with the Databricks optimization of spark.conf.set("spark.databricks.optimizer.dynamicPartitionPruning","true") and .whenMatchedUpdateAll() , and compare the performance of both (your files are small, so I do not think it will be a big difference). The Databricks partition pruning optimization for merges article came out in February, so it is really new and could possibly be a game changer for the overhead that Delta merge operations incur (as under the hood they just create new files, but partition pruning could speed it up). Merge examples in Python, Scala and SQL: https://docs.databricks.com/delta/delta-update.html#merge-examples https://databricks.com/blog/2019/10/03/simple-reliable-upserts-and-deletes-on-delta-lake-tables-using-python-apis.html
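For reference, a minimal sketch of the recommended dynamic partition overwrite (my own addition, untested against the asker's data; the spark / dataset names and the path follow the snippets above):

# Only the Year/Week partitions present in `dataset` (e.g. weeks 1 and 2
# of 2019) are replaced; all other partition folders are left untouched.
spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")

dataset.coalesce(1)\
    .write\
    .mode("overwrite")\
    .partitionBy("Year", "Week")\
    .parquet("\curataed\dataset")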
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/60512222', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/12846532/']}
jdg_75077
stackexchange
llm_judgeable_groundtruth_similarity
149723
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I wish to use the Computational formula of the variance to calculate the variance of a normal-distributed function. For this, I need the expected value of $X$ as well as the one of $X^2$. Intuitively, I would have assumed that $E(X^2)$ is always equal to $E(X)^2$. In fact, I cannot imagine how they could be different. Could you explain how this is possible, e.g. with an example? Now provide the response and nothing else.
Assume $X$ is a random variable that is $0$ half the time and $1$ half the time. Then
$$EX = 0.5 \times 0 + 0.5 \times 1 = 0.5,$$
so that
$$(EX)^2 = 0.25,$$
whereas on the other hand
$$E(X^2) = 0.5 \times 0^2 + 0.5 \times 1^2 = 0.5.$$
By the way, since $Var(X) = E[(X - \mu)^2] = \sum_x (x - \mu)^2 P(x)$, the only way the variance could ever be $0$ in the discrete case is when $X$ is constant.
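As a quick numerical sanity check (a minimal sketch, not part of the original answer), the same gap shows up in a short simulation of this fair 0/1 variable:

    import random

    # simulate the fair 0/1 random variable from the example above
    samples = [random.randint(0, 1) for _ in range(100_000)]

    mean = sum(samples) / len(samples)
    mean_of_squares = sum(x * x for x in samples) / len(samples)

    print(mean ** 2)        # close to 0.25, i.e. (EX)^2
    print(mean_of_squares)  # close to 0.5,  i.e. E(X^2)

The difference between the two printed values approximates the variance, $E(X^2) - (EX)^2 = 0.25$.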
{}
{'log_upvote_score': 6, 'links': ['https://math.stackexchange.com/questions/149723', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/29904/']}
jdg_75078
stackexchange
llm_judgeable_groundtruth_similarity
38242368
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I pick out the difference between to columns of the same name in two dataframes?I mean I have dataframe A with a column named X and dataframe B with column named X, if i do pd.merge(A, B, on=['X']) , i'll get the common X values of A and B, but how can i get the "non-common" ones? Now provide the response and nothing else.
If you change the merge type to how='outer' and indicator=True this will add a column to tell you whether the values are left/both/right only (the examples assume import pandas as pd and import numpy as np):

In [2]:
A = pd.DataFrame({'x': np.arange(5)})
B = pd.DataFrame({'x': np.arange(3, 8)})
print(A)
print(B)

   x
0  0
1  1
2  2
3  3
4  4
   x
0  3
1  4
2  5
3  6
4  7

In [3]:
pd.merge(A, B, how='outer', indicator=True)

Out[3]:
     x      _merge
0  0.0   left_only
1  1.0   left_only
2  2.0   left_only
3  3.0        both
4  4.0        both
5  5.0  right_only
6  6.0  right_only
7  7.0  right_only

You can then filter the resultant merged df on the _merge col:

In [4]:
merged = pd.merge(A, B, how='outer', indicator=True)
merged[merged['_merge'] == 'left_only']

Out[4]:
     x     _merge
0  0.0  left_only
1  1.0  left_only
2  2.0  left_only

You can also use isin and negate the mask to find values not in B:

In [5]:
A[~A['x'].isin(B['x'])]

Out[5]:
   x
0  0
1  1
2  2
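If you want the "non-common" values from both sides at once (a small extension of the code above, not from the original answer), the same indicator column works:

    # rows that appear in exactly one of the two frames
    merged = pd.merge(A, B, how='outer', indicator=True)
    merged[merged['_merge'] != 'both']

This keeps the _merge column, so you can still see which side each leftover value came from.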
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/38242368', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4916076/']}
jdg_75079
stackexchange
llm_judgeable_groundtruth_similarity
24245173
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am creating an array on stack as static const int size = 10;void foo() {..int array[size];..} However, I get the compile error: "expression must have a constant value", even though size is a constant. I can use the macro #define SIZE (10) But I am wondering why size marked const causes compilation error. Now provide the response and nothing else.
In the C language, the keyword const has nothing to do with constants. In C, by definition the term "constant" refers to literal values and enum constants. This is what you have to use if you really need a constant: either use a literal value (define a macro to give your constant a name), or use an enum constant. (Read here for more details: Shall I prefer constants over defines? ) Also, in C99 and later versions of the language it is possible to use non-constant values as array sizes for local arrays (variable-length arrays). That means that your code should compile in modern C even though your size is not a constant. But you are apparently using an older compiler, so in your case #define SIZE 10 is the right way to go.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/24245173', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1279782/']}
jdg_75080
stackexchange
llm_judgeable_groundtruth_similarity
68137
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: So, I have this circuit which needs <50mA and can be supplied with a voltage between 2V and 3.3V. I want to supply it with a Li-Ion cell, which has 2.7-4.2V. I don't really want/need the complexity of a buck/boost converter. I know I could just regulate it down to anything below 2.5V, but it still would be great if my circuit would get 3.3V for Input-Voltages above 3.3V and the input voltage for anything <=3.3V. So basically I want a voltage regulator that only regulates for voltages >3.3V and which doesn't (and ideally doesn't use any power) for voltages below. What is this thing/circuit called? Now provide the response and nothing else.
You need a voltage regulator that regulates to 3.3V and, as the input voltage drops to 3.3V or below, the output remains close to the input voltage despite it not being able to regulate any more - in other words it acts like a <0.25ohm resistor when unable to regulate. The LP3964 has a drop-out voltage of 24mV at 80mA and its output will follow the input voltage if the input voltage is too low for regulation at 3.3V. Here is the pdf file for it. Figure 11 speaks volumes - this is for the 2.5V fixed version but the adjustable version (set for 3.3V operation) will work just as well (the relevant dropout plot is in the datasheet). I think this nails it really, with one exception - it still draws about 3 mA when the voltage is not regulating. Can you live with this? If you can live with the losses of a linear regulator when the battery is at 4.2V and the regulator is producing 3.3V at 50mA (0.9V x 0.05A = 45mW) then 10mW (3.3V x 0.003A) doesn't seem a problem really.
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/68137', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/23500/']}
jdg_75081
stackexchange
llm_judgeable_groundtruth_similarity
2066579
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Does knowing the generators and the orders of those generators of a finite group $G$ completely determine $G$? For instance, is there only one group $G$ generated by the elements $\{g,h\}$ where $|g|,|h| = 3$? Now provide the response and nothing else.
There can be more than one group with the same number of generators and with the generators being of the same order. For example $Z_4\times Z_2$ and $D_8$ both have two generators, one of order $2$ and one of order $4$. Occasionally this information will point to a single group, but not always.
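To make the example concrete (this presentation comparison is my addition, not part of the original answer): both groups admit generators $a$ of order $4$ and $b$ of order $2$, but the generators satisfy different relations,

$$\mathbb{Z}_4 \times \mathbb{Z}_2 \cong \langle a, b \mid a^4 = b^2 = 1,\; b a b^{-1} = a \rangle, \qquad D_8 \cong \langle a, b \mid a^4 = b^2 = 1,\; b a b^{-1} = a^{-1} \rangle.$$

The first group is abelian and the second is not, so the orders of the generators alone cannot determine the group.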
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2066579', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/182500/']}
jdg_75082
stackexchange
llm_judgeable_groundtruth_similarity
7278094
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a CLLocation defined, and I'd like to move that point x meters to the east and y meters to the south. How may I achieve that? Now provide the response and nothing else.
A conversion to Swift, taken from this answer :

func locationWithBearing(bearingRadians: Double, distanceMeters: Double, origin: CLLocationCoordinate2D) -> CLLocationCoordinate2D {
    let distRadians = distanceMeters / (6372797.6) // earth radius in meters

    let lat1 = origin.latitude * M_PI / 180
    let lon1 = origin.longitude * M_PI / 180

    let lat2 = asin(sin(lat1) * cos(distRadians) + cos(lat1) * sin(distRadians) * cos(bearingRadians))
    let lon2 = lon1 + atan2(sin(bearingRadians) * sin(distRadians) * cos(lat1), cos(distRadians) - sin(lat1) * sin(lat2))

    return CLLocationCoordinate2D(latitude: lat2 * 180 / M_PI, longitude: lon2 * 180 / M_PI)
}

Morgan Chen wrote this: All of the math in this method is done in radians. At the start of the method, lon1 and lat1 are converted to radians for this purpose as well. Bearing is in radians too. Keep in mind this method takes into account the curvature of the Earth, which you don't really need to do for small distances.

My comments (Mar. 25, 2021): The calculation used in this method is called solving the "direct geodesic problem", and this is discussed in C.F.F. Karney's article " Algorithms for geodesics ", 2012. The code given above uses a technique that is less accurate than the algorithms presented in Karney's article.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7278094', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/499417/']}
jdg_75083
stackexchange
llm_judgeable_groundtruth_similarity
16121
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Can someone give a really concrete example of such a sequence? I am looking at several notes related with such things, but haven't seen any well-calculated example. And I'm really confused at this point. Besides asking for a good example, I am also wondering about the following two things: There is an exact sequence for elliptic curves defined over a local field $K$, $0 \rightarrow \hat E(m) \rightarrow E(K) \rightarrow \tilde E(k) \rightarrow 0$, where $\hat E(m)$ is the formal group associated to $E$ and $\tilde E(k)$ is the reduction. (See Silverman AEC I, page 118), is this sequence related with connected-etale sequence? 2.Take the p-torsion kernel $E[p]$ of $[p]: E \rightarrow E$ for $E$ defined over $K$ a local field.Is $E[p]$ a finite flat group scheme over $R$ the valuation ring? And if so, what is its connected-etale sequence? (maybe I should change $p$ to an $n$, but I'm also curious what will happen if $p$ is the characteristic of $k$?) Thank you. Now provide the response and nothing else.
For concepts related to algebraic geometry when the base is not a field, it can be difficult for a beginner to reconcile the approach in Silverman with the approach via schemes. I wasted a lot of time as a student trying to relate the "3 points through a line" definition of the group law over fields with the concept of "reduction mod $p$" on points. Likewise, the approach with formal groups tends to make things confusing, despite their apparent "concreteness". This sort of stuff drove me crazy when I was a student, until I realized that the best way to understand such topics is to give up working over fields and with equations, and to work over the valuation ring and with functorial viewpoints (only translating into field language at the very end). The relevant schemes in the question are really torsion schemes over the valuation ring, not torsion in the separate fibers (so you mean to assume $E$ has good reduction in both questions). I address this below. The following answer is way too long, since I do not know of a suitable reference not involving EGA/SGA. Tate's article on finite flat group schemes probably explains some aspects, but I doubt it addresses the link with the concrete stuff for elliptic curves. If $R$ is a local ring, then an "elliptic curve over $R$" can be defined in two ways: the concrete way is as a Weierstrass plane cubic with unit discriminant, and the right way is as a smooth proper $R$-scheme with geometrically connected fibers of dimension 1 and genus 1 and a distinguished section. As usual, the concrete way is hard to use to actually prove anything interesting (and it is the "wrong" notion when the base is non-local and especially has non-trivial line bundles; e.g., over suitable number fields with class number $> 1$ one can make CM elliptic curves with "everywhere good reduction" which do not admit a global planar model with unit discriminant). How to prove there is a unique $R$-group structure with the indicated section as the identity? A real nightmare with the concrete definition, and elegantly explained in Chapter 2 of Katz-Mazur with the right definition. Likewise, that $E$ is functorial in its generic fiber when $R$ is a discrete valuation ring is a mess to prove by hand (which affine opens to use?), but has an elegant proof when approached through the "smooth and proper" viewpoint. Of course it is important and interesting that these concrete and abstract notions agree, and that is explained in Katz-Mazur Chapter 2. That being said, if $E$ is an elliptic curve over any noetherian (say) scheme $S$ and $[n]_E:E \rightarrow E$ is multiplication by a positive integer $n$, then on geometric fibers this is a finite flat map, so $[n]$ is quasi-finite. Now proper and quasi-finite maps are finite (by Zariski's Main Theorem), so $[n]_E$ is a finite map, and the fibral flatness criterion implies that it is also flat. Being a finite flat map between noetherian schemes, it has a "degree" which is locally constant on the target and yet is $n^2$ on fibers over $S$. Hence, we conclude that $E[n] := {\rm{ker}}([n]_E)$ is a finite flat commutative $S$-group with constant fiber rank $n^2$. Honestly, I do not know any way to prove this which avoids the serious results that I just cited. But that's why the theorems are useful: because we can use them to make our intuition over fields carry over to cases when the base is not a field. (The noetherian condition can be dropped if we are more careful with the phrase "finite flat". I won't dwell on it here.) 
This answers the first part of the 2nd question (taking the base to be the spectrum of the valuation ring there). In the notation there, the $p$-torsion of the elliptic curve over $K$ is not a finite $R$-scheme, and in general it may extend to a finite flat $R$-group in many ways. But the elliptic curve over $K$ uniquely extends to one over $R$ by the theory of Neron models, and its torsion levels provide the "right" finite flat groups you want to use over the valuation ring.

OK, now assume $R$ is a complete local noetherian ring (e.g., a complete discrete valuation ring). Could even assume it is a henselian local ring, but the complete case is easier to deal with and covers the case in the question. Let $G$ be a finite flat $R$-group, a case of interest being $E[n]$ for an elliptic curve $E$ over $R$. Let $k$ be the residue field, and consider $G_k$. Being a finite $k$-scheme, it has an open and closed identity component $G_k^0$ which is cut out by an idempotent. By 8.15 (or thereabouts) in Matsumura's Commutative Ring Theory, every idempotent in the special fiber of a finite $R$-algebra uniquely lifts. In particular, if $X$ is a finite $R$-scheme then its connected component decomposition uniquely lifts that of $X_k$. If $X$ is $R$-flat then so is each of its connected components. This is all compatible with products, so if $X$ has a structure of $R$-group then the open and closed connected component $X^0$ containing the identity section is an $R$-subgroup. Returning to our friend $G$, we get the so-called "relative identity component" $G^0$, an open and closed (hence finite flat) $R$-subgroup.

Remark: The formation of $G^0$ commutes with any flat local extension on $R$, as follows from the uniqueness! It doesn't usually commute with non-local extension, such as the inclusion of a complete dvr into its fraction field.

Example: $G = E[n]$. Suppose $R$ is a complete discrete valuation ring with fraction field $K$, and $n \in K^{\times}$. What is $(G^0)_K$? Well, each "point" occurs over a finite extension $K'/K$, say with valuation ring $R'$, and $G(K') = G(R')$ by elementary integrality considerations (or in fancy terms, the valuative criterion, which is killing a fly with a sledgehammer). Since the spectrum of $R'$ is connected, a point in $G(R')$ lies in $G^0(R')$ if and only if its specialization into $G_k(k')$ vanishes ($k'$ the residue field of $R'$). In other words, $(G^0)(\overline{K})$ consists of the $n$-torsion geometric points of $E_K$ whose specialization into geometric points of $E_k$ by the valuative criterion for the $R$-proper $E$ ($E_K(K') = E(R') \rightarrow E(k') = E_k(k')$!) is 0.

Now we need to explain the "étale quotient" in concrete terms. This is best understood as a generalization of the following procedure over a field.

Example: Let $k$ be a field and $A_0$ a finite $k$-algebra. There is a unique maximal étale $k$-subalgebra $A_0'$ in $A_0$: concretely, in each local factor ring of $A_0$ uniquely lift the separable closure of $k$ in the residue field up into the local factor ring via Hensel's Lemma and the primitive element theorem. Since it is uniquely characterized by lifting separable closures of $k$ in the residue fields of the factor rings, it is a good exercise to check the following crucial thing: if $B_0$ is another finite $k$-algebra then $(A_0 \otimes_k B_0)' = A_0' \otimes_k B_0'$, and $A_0'$ is functorial in $A_0$. Observe that $A_0' \rightarrow A_0$ is faithfully flat since at the level of factor rings of $A_0$ it is an inclusion of a field into a nonzero ring.
Also observe that any étale $k$-algebra equipped with a map to $A_0$ uniquely factors through $A_0'$.

Exercise: The formation of $A_0'$ commutes with any field extension on $k$. (Hint: use Galois descent to reduce to the separate cases of separable algebraic extensions and the easy case $k = k_s$.)

In geometric terms, for a finite $k$-scheme $X_0$, the preceding Example constructs a finite étale $k$-scheme $X_0'$ and a faithfully flat $k$-map $f_0:X_0 \rightarrow X_0'$ which is initial among all $k$-maps from $X_0$ to finite étale $k$-schemes, and its formation is functorial in $X_0$ and commutes with products in $X_0$ and with any extension on $k$. In particular, if $X_0$ is a $k$-group then $X_0'$ has a unique $k$-group structure making $f_0:X_0 \rightarrow X_0'$ a $k$-homomorphism.

Example: Now let $R$ be a complete discrete valuation ring with residue field $k$, and let $X$ be a finite flat $R$-scheme. (Can relax the hypothesis on $R$ if familiar with finite étale maps in general.) In this setting, "finite étale" over $R$ just means "product of finitely many unramified finite extensions". By using Hensel's Lemma in finite local $R$-algebras, to give a map from a finite étale $R$-algebra $A$ to a finite $R$-algebra $B$ is the same as to give a map $A_0 \rightarrow B_0$ between their special fibers. In particular, finite étale $k$-algebras uniquely and functorially lift to finite étale $R$-algebras, and so $X_0'$ uniquely lifts to a finite étale $R$-scheme $X'$ and there is a unique $R$-map $f:X \rightarrow X'$ lifting $f_0:X_0 \rightarrow X_0'$. By fibral flatness (using $X$ is $R$-flat!), $f$ is faithfully flat since $f_0$ is. By uniqueness of everything in sight, the formation of $f$ commutes with products and local extension on $R$ and is also functorial in $X$. In particular, if $G$ is a finite flat $R$-group then $G'$ admits a unique $R$-group structure making $f$ an $R$-homomorphism. We call $G'$ the maximal étale quotient of $G$.

Now we can put it all together and obtain the connected-étale sequence:

Proposition: Let $G$ be a finite flat group scheme over a complete discrete valuation ring $R$. (Even ok for complete local noetherian $R$, or even henselian local $R$.) The faithfully flat $R$-homomorphism $f:G \rightarrow G'$ to the maximal étale quotient has scheme-theoretic kernel $G^0$.

Proof: The kernel $H = \ker f$ is a finite flat $R$-group. To show it contains $G^0$ we have to check that the composite map $G^0 \rightarrow G \rightarrow G'$ vanishes. Being a map from a finite $R$-scheme to a finite étale $R$-scheme, the map is determined by what it does on the special fiber, so it suffices to show that $G_k^0 \rightarrow G_0'$ vanishes. This is a map from a finite infinitesimal $k$-scheme to a finite étale $k$-scheme which carries the unique $k$-point to the identity point. Thus, it factors through the identity section of $G_0'$, which is open and closed since $G_0'$ is finite étale over $k$. Now that $H$ contains $G^0$, to prove the resulting closed immersion $G^0 \hookrightarrow H$ between finite flat $R$-schemes is an isomorphism it suffices to do so on special fibers. But that reduces us to the variant of our problem over the residue field. We can enlarge it to be algebraically closed, and so the problem is to show that if $G$ is a finite flat group scheme over an algebraically closed field $k$ then $G \rightarrow G'$ has kernel exactly $G^0$.
But $G'$ is a constant $k$-scheme since it is étale and $k$ is algebraically closed, so by construction $G'$ is just the disjoint union of the $k$-points of the connected components of $G$. It is then physically obvious that the kernel is $G^0$. QED

Remark: If $X$ is any finite flat $R$-scheme, with $X \rightarrow X'$ the initial map to a finite étale $R$-scheme, then the induced map on $\overline{k}$-points is bijective. Indeed, we can pass to geometric special fibers and connected components to reduce to the case when $X$ is local and finite over an algebraically closed field (in place of $R$), in which case the assertion is clear.

By this Remark, the geometric points of the $n$-torsion in $E_k$ are identified with the geometric points of the special fiber of the maximal étale quotient $E[n]'$. In particular, if $n$ is not divisible by the characteristic of $K$ and if $K'/K$ is a sufficiently big finite separable extension which splits $E_K[n]$ then the finite étale $R'$-scheme $E[n]'_{R'}$ is constant (as it may be checked on the $K'$-fiber), so the map
$$E_K[n](\overline{K}) = E_K[n](K') = E[n](R') \rightarrow E[n]'(k') \hookrightarrow E_k[n]'(\overline{k}) = E[n]'(R') = E[n]'(\overline{K})$$
is identified with the naive map in question 1. In other words, that step computes the "quotient" part of the connected-étale sequence of $E[n]$ after passing to $\overline{K}$-points!

Example: If $E$ has supersingular reduction then $E[p] = E[p]^0$ and the étale part of the sequence for $E[p]$ vanishes.

Example: If $E$ has ordinary reduction then working over an algebraic closure of the residue field shows that $E[p]^0$ and $E[p]'$ each have rank $p$ as finite flat $R$-groups.

Finally, it remains to relate $E[n]^0$ to the $n$-torsion in the so-called "formal group" of $E$ (not the formal group of $E_K$, which loses contact with the integral structure and for ${\rm{char}}(K) = 0$ is actually the formal additive group, which has no nontrivial $n$-torsion!). A moment's reflection on the definition of the formal group in Silverman shows that its $R'$-points for any finite local valuation ring extension of $R$ are precisely the local $R'$-points of the complete local ring $\widehat{\mathcal{O}}_{E,0_k}$ at the origin of the special fiber (or the completion along the identity section, which comes to the same since $R$ is complete). By the universal properties of local rings on schemes and completions of local noetherian rings, such $R'$-points of the latter type are simply points in $E(R')$ specializing to $0_k$ in $E_k(k')$. But we saw earlier that $E[n]^0(R')$ is exactly the set of points in $E[n](R')$ specializing to $0_k$ on $E_k$. So indeed $E[n]^0(R')$ inside of $E[n](R') = E_K[n](K')$ is exactly the $n$-torsion in the $K'$-points of the "formal group" in the sense of Silverman's book.

Voilà, so that answers the questions. The arguments used are designed to apply equally well to abelian varieties.
{}
{'log_upvote_score': 6, 'links': ['https://mathoverflow.net/questions/16121', 'https://mathoverflow.net', 'https://mathoverflow.net/users/1238/']}
jdg_75084
stackexchange
llm_judgeable_groundtruth_similarity
37412712
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following code: QStringListModel* m=new QStringListModel(gc.get_lista_file());ui->lista_immagini_listView->setModel(m); where gc.get_lista_file() returns a QStringList object and lista_immagini_listView is a QListView . I need to update my lista_immagini_listView adding a string when I press a button, but if I add my new string to my QStringList object it doesn't update my model (I read the QStringList is passed by copy, so it isn't connected to the model). So, I have to update my QStringListModel but in this way I have to update 2 object ( QStringList and QStringListModel ) and doesn't seem a good practice. What is the best way (if exists) to resolve it? Now provide the response and nothing else.
QStringListModel does not allow you to simply add a string (sadly). Simply updating the QStringList does not work, because the model stores a copy of the list. There are basically two ways to get the desired behavior:

1. Reset: This is the simple way. You just take the list from the model, add the string and reassign it:

QStringList list = m->stringList();
list.append("someString");
m->setStringList(list);

This method does work, but has one big disadvantage: the view will be reset. Any selections the user may have, sorting or the scroll position will be lost, because the model gets reset.

2. Using the Model: The second approach is the proper way of doing it, but requires some more work. In this one you use the functions of QAbstractItemModel to first add a row, and then change its data:

if (m->insertRow(m->rowCount())) {
    QModelIndex index = m->index(m->rowCount() - 1, 0);
    m->setData(index, "someString");
}

This one does properly update the view and keeps its state. However, this one gets more complicated if you want to insert multiple rows, or remove/move them.

My recommendation: Use the 2. method, because the user experience is much better. Even if you use the list in multiple places, you can get the list after inserting the row using m->stringList().
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/37412712', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1973451/']}
jdg_75085
stackexchange
llm_judgeable_groundtruth_similarity
288376
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm designing a REST API for a project where users are always on one of several "plans" - each plan defines some resource limits, such as the max number of users an account may have or the max number of data they may upload. Once one of these limits is reached, users can upgrade their plans (essentially pay up) to get more resources. I want to return a special status code indicating a situation where the action cannot be performed due to account resource limits, and upgrading the plan will resolve this - for example if a user uses 100% of their storage capacity and try to upload an additional file, they will get this response. The candidates are, IMHO: 403 Forbidden - however, I would like to distinguish between this case and other cases where the user simply lacks the permission to perform this action. 401 Unauthorized - not a good idea, we're using this for authentication related problems. 402 Payment Required - makes kind of sense but I'm worried about using a non-standard yet reserved status code Something even less standard like 423 Locked as its unlikely we'll use it for anything else in the future Another option is to go with something very standard such as 403 but indicate the specifics of the error in the response body. I'm wondering which approach you believe would (a) work best in the long run and (b) would stick more nicely to RESTful principles. Now provide the response and nothing else.
I think 403 is the only reasonable response; 405 Method Not Allowed or 409 Conflict might be acceptable, but I don't think either is as good as 403, which states:

The server understood the request, but is refusing to fulfill it. Authorization will not help and the request SHOULD NOT be repeated. If the request method was not HEAD and the server wishes to make public why the request has not been fulfilled, it SHOULD describe the reason for the refusal in the entity

If you return a 403 error, include some information on why the request was refused - insufficient permission is only the most common case, and exceeded limits aren't much different: you don't have permission because your limit was exceeded.
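As a minimal sketch of what that could look like in practice (my illustration, not from the original answer; the field names and the plan-limit helper are hypothetical), the 403 body can carry a machine-readable reason:

    from flask import Flask, jsonify

    app = Flask(__name__)

    def storage_limit_reached():
        # hypothetical stand-in for the real plan-limit check
        return True

    @app.route("/files", methods=["POST"])
    def upload_file():
        if storage_limit_reached():
            # the status code says "forbidden"; the body says *why*
            return jsonify({
                "error": "plan_limit_exceeded",  # distinguishes limits from permission errors
                "detail": "Storage quota is full; upgrade your plan to continue.",
                "upgrade_url": "/plans",  # hypothetical
            }), 403
        return jsonify({"status": "created"}), 201  # placeholder for the real upload handling

Clients can then branch on the error field rather than on the status code alone.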
{}
{'log_upvote_score': 6, 'links': ['https://softwareengineering.stackexchange.com/questions/288376', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/185564/']}
jdg_75086
stackexchange
llm_judgeable_groundtruth_similarity
3228543
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Someone recently stumped me here. Consider the series $\sum_3^{\infty} \dfrac{1}{x^{1.0001}}$ and the series $\sum_3^{\infty} \dfrac{1}{x\ln{x}}$ . The first series in convergent, and the second series is divergent. Yet, for all $x \geq 3$ , $\dfrac{1}{x^{1.0001}} \geq \dfrac{1}{x\ln{x}}$ . How can this be? My conclusion is that the proposed inequality is actually false and so the direct comparison test cannot be applied, but the $x$ must be very large, I haven't been able to convince myself of this. Now provide the response and nothing else.
Note that $$\frac1{x^{1.0001}}\geqslant\frac1{x\ln x}\iff x^{1.0001}\leqslant x\ln x\iff x^{0.0001}\leqslant\ln x,$$ which is false if $x$ is large enough: $$\alpha>0\implies\lim_{x\to\infty}\frac{x^\alpha}{\ln x}=\infty.$$
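For a sense of scale (my addition, not part of the original answer): the crossover point where $x^{0.0001}$ overtakes $\ln x$ can be estimated by setting $t = \ln x$ and solving $0.0001\,t = \ln t$, which gives $t \approx 1.17 \times 10^{5}$, i.e.

$$x \approx e^{117{,}000} \approx 10^{50{,}000},$$

so the inequality genuinely fails only at values of $x$ far beyond anything one could check numerically - which is exactly why it looks plausible at first.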
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3228543', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/120932/']}
jdg_75087
stackexchange
llm_judgeable_groundtruth_similarity
10668292
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: According to EU Article 5(3) of the E-Privacy Directive (a.k.a 'The Cookie Laws'), web sites that target EU users have to gain opt-in consent from users before they set a cookie. See ICO Guidance I am trying to square this with Google Analytics on my web site. I would imagine that Google Analytics (GA) can do a certain level of analytic data gathering without requiring the use of cookies. However, I cannot find any info on this (on the Google sites/settings panels) about how to relay information about the 'state of consent' back to Google during a page request. So, my only option seems to be that I should not embed Google tag code at all if the user has not explicitly given consent. Which seems a bit drastic. Letting my serverside script set a hasConsentedToCookies=FALSE flag in the JavaScript tags would allow me to instruct Google's services to run in a gracefully degraded fashion. Is there a setting on Google Analytics to suppress use of cookies for users that have not yet given consent? If so, where can I find info on this? Now provide the response and nothing else.
EDIT (2019): The below answer predates GDPR and likely requires revision. Google Analytics has a new set of APIs to assist with compliance with a cookie opt-out. Here's the documentation , and here's their help docs .

There has been some ambiguity as to whether the EU Cookie Regulations (as implemented in member countries) require opt-in mechanisms for passive web analytics tracking. If you're concerned one way or another, consult an attorney. Google is empowering you to make the decision as to how you want to proceed. They'll leave implementation details to you, but the idea is: once you've determined whether or not to track the user in Google Analytics, if the answer is to not track, you'd set the following property to true before Google Analytics runs:

window['ga-disable-UA-XXXXXX-Y'] = true;

Where UA-XXXXXX-Y is your account ID in Google Analytics.

As the other posters have noted, Google Analytics relies on cookies. So, you're not able to do any kind of tracking without cookies. If you've determined that someone is not to be cookied for tracking, you'll need to implement something like this:

if (doNotCookie()) {
  window['ga-disable-UA-XXXXXX-Y'] = true;
}

Opt In

This does require a little bit of jujitsu for when you first load Google Analytics, since this property will need to be set before Google Analytics runs to prevent tracking from ever happening, which means, for an "opt in to tracking" approach, you'd probably need to implement a mechanism where, on first visit, Google Analytics is automatically disabled in the absence of an opt-in cookie (cookies that determine cookie preferences are explicitly allowed), and then, if an opt-in happens, re-runs Google Analytics. On subsequent pageviews, all would run smoothly. Could look something like (pseudo-code):

if (hasOptedOut() || hasNotExpressedCookiePreferenceYet()) { // functions you've defined elsewhere
  window['ga-disable-UA-XXXXXX-Y'] = true;
}

var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-XXXXXX-Y']);
_gaq.push(['_trackPageview']);

function onOptIn() { // have this run when/if they opt-in
  window['ga-disable-UA-XXXXXX-Y'] = false;
  // ...snip...
  // set a cookie to express that the user has opted-in to tracking, for future pageviews
  _gaq.push(['_trackPageview']); // now run the pageview that you 'missed'
}

Opt Out

With this approach, you'd allow the user to opt-out of tracking, which would mean you'd use a cookie to set the ga-disable-UA-XXXXXX-Y property and a cookie to manage it in the future:

if (hasOptedOut()) { // function you've defined elsewhere
  window['ga-disable-UA-XXXXXX-Y'] = true;
}

var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-XXXXXX-Y']);
_gaq.push(['_trackPageview']);
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/10668292', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/205814/']}
jdg_75088
stackexchange
llm_judgeable_groundtruth_similarity
522838
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a section: \section{Introduction} \label{sec:introduction} I'd like a link to the section where the link text is the name of the section. I can use hyperref : The \hyperrf[sec:introduction]{Introduction} introduces the paper. But that requires repeating the section title ("Introduction"). Is there a way to grab that? ref yields the section number, which isn't right. autoref yields "section " and then the section number, which isn't right, either. Now provide the response and nothing else.
There are a couple of packages that provide this for you. nameref is distributed as part of hyperref to do this: http://tug.ctan.org/cgi-bin/ctanPackageInformation.py?id=nameref There is a more general package for cross-referencing basically anything, called zref: http://tug.ctan.org/cgi-bin/ctanPackageInformation.py?id=zref It's by the same author as hyperref, Heiko Oberdiek; it's the one that I would choose. Here's an example:

\documentclass[oneside,12pt]{article}
\usepackage[user,titleref]{zref}

\begin{document}

\section{Introduction of sorts.}
\zlabel{sec:intro}
Hello

\subsection{Structure}
We begin in `\ztitleref{sec:intro}'.

\end{document}

Note that it even removes the trailing period in the section title.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/522838', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1190/']}
jdg_75089
stackexchange
llm_judgeable_groundtruth_similarity
40735
Below is a question asked on the forum scicomp.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm a PhD working in a computational mechanics lab. I come from a Math department, and I have a good background for what concerns the basics of finite elements, like inf-sup conditions, DG, non-conforming FEs. I also did some basic codes, like Poisson on a square/L-shape/etc. (or other elliptic problems) using the classical building blocks of a code, like local to global mapping, quadrature formulas on the reference, etc. However, there's something that I really don't know how to do, and this is how to solve vector valued problems. I know how to solve them in deal.II, for instance, and I watched the related video-lecture about it. But I do really need to re-invent the wheel (in 2D) this time. Let me try to explain: when we're using our software(s) to solve scalar problems, I really know what is going on inside the assembly routine and I know I would be able to replicate it (of course not in terms of efficiency), but that's not the case for the linear elasticity equation , for instance. I know that the test functions are vector valued, but I'm lacking the ability to put this into code. In particular, I'm interested in the approach where each vector basis function has only one non-zero component. So I am looking for some reference (lecture notes, books, whatever) where I can find a simple and didactic explanation about the way the code has to be organised for this equation and the relative code. If there's something with MatLab that would be perfect, as I only need to understand the basic building blocks. However, I can read with no problems also Python and C++ codes. Now provide the response and nothing else.
Short answer: Just replicate the vector of interpolation functions into a block-diagonal matrix, as shown e.g. on page 5 in this lecture note .

Detailed answer: Mathematically oriented texts typically do not bother detailing the implementation; that is why you haven't met it. On the other hand, engineering FEM texts do detail the implementation (but lack the theory). The good news is that understanding the implementation after the theory is significantly easier than the other way around. Linear elasticity is a good example because (in the compressible case) it is typically implemented with Lagrange elements. You can find hundreds of lecture notes and engineering-oriented finite element books (e.g. K. J. Bathe: Finite Element Procedures , Zienkiewicz: The Finite Element Method: Its Basis and Fundamentals ), which go into details about forming the element matrices, performing the assembly, doing the numerical integration, etc.

To react to your specific questions: What changes in vector-valued problems (e.g. elasticity equation) compared to scalar-valued problems (e.g. heat equation) is that you need to construct the approximation for each unknown function. If you haven't done so yet, check the vector-valued problems in deal.ii for a thorough discussion. When you have independent fields to approximate (e.g. in mixed formulations), you often want to apply different interpolation for the different fields. On the other hand, when you have a single vector-valued field (e.g. the displacement vector), you most frequently use the same interpolation for all the components of the vector unknown. Hence, the latter problem is a special case of the former. What you do in both cases is to write the approximation for each field/component. In mathematically oriented texts, this is done with indices, see e.g. the elasticity tutorial of deal.ii , while engineers form the matrices too (cf. second slide on page 5 of the lecture note referred to at the beginning).

The other difference in vectorial problems is the inner product. For instance, in elasticity, you need to evaluate $\int_{\Omega}\sigma:\varepsilon \,\mathrm{d}\Omega$ , where $\sigma$ and $\varepsilon$ are second order tensors. What you do in this case is called vectorization . For tensors, the term you should look for is the Voigt notation . This is done on the third slide on page 4 of the lecture note. Once you have rewritten all the fields in this matrix notation (in engineering texts, it is referred to as the matrix formulation ), you can factor out the element matrix from the inner product (first slide on page 8 of the lecture note). The assembly then goes on as for the scalar-valued case.

Footnote: These were just the core ideas, which help you understand the method. However, I must tell you that the devil is in the details. There are so many other things to take care of when implementing the FEM that your implementation will almost surely be suboptimal in terms of maintainability, performance and scaling. Unless you want a throw-away code for a quick demonstration purpose only, I recommend that you not implement it by yourself (or at least not in 3D). I made this mistake in my PhD, wasting precious time. When I wrote my own FE code in MATLAB, I was partly inspired by mFEM . However, I suggest that you rely on robust, high-quality and actively maintained libraries and frameworks, such as deal.ii or FEniCS (there are dozens of others; the choice should depend on your use case).
Nevertheless, deriving the discrete formulation on paper is a good idea to completely understand your problem at hand.
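To make the block-diagonal idea concrete, here is a minimal NumPy sketch (my illustration, with made-up shape-function values; real code would evaluate the element's actual shape functions at quadrature points):

    import numpy as np

    # scalar shape functions of a 3-node triangle evaluated at one point
    # (values are hypothetical; they just need to sum to 1)
    N = np.array([0.5, 0.3, 0.2])

    # vector-valued interpolation matrix for a 2D displacement field:
    # u(x) = N_mat @ d, where d = [u1x, u1y, u2x, u2y, u3x, u3y]
    N_mat = np.zeros((2, 6))
    N_mat[0, 0::2] = N  # x-components use the scalar shape functions
    N_mat[1, 1::2] = N  # y-components reuse the same scalar shape functions

    d = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 1.0])  # example nodal displacements
    print(N_mat @ d)  # interpolated displacement at the evaluation point

Each vector basis function is nonzero in only one displacement component, which is exactly the construction asked about in the question.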
{}
{'log_upvote_score': 4, 'links': ['https://scicomp.stackexchange.com/questions/40735', 'https://scicomp.stackexchange.com', 'https://scicomp.stackexchange.com/users/42123/']}
jdg_75090
stackexchange
llm_judgeable_groundtruth_similarity
7084
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Background: In connection with the question here I came upon a more interesting question. I believe the question is large and distinct enough to have it's own thread. Of course I might be mistaken, in that case I apologize for the clutter... I have a dataset of experimental values, coming from an unknown distribution. The data is then used in an analysis, and as a measure of significance of the results of the analysis, I would like to sample random data from the same distribution and run my analysis on these datasets (say 1000 times the size of the original dataset) to see if the results from experimental values show significant deviation from randomized data. I was thinking about drawing samples from a normal distribution, as it feels most natural that the samples come from some normal distribution. However I need to back up my assumption of normality somehow. I was originally thinking of using some sort of a normality test, but after some reading on the matter such as What is 'normality?' and Normality testing: 'Essentially useless?' threads, and of course the Wikipedia article on normality tests ; I feel like these tests are not an accepted way of validating normality assumptions. Question: How can normality be validated without using visual cues such as QQ plots? (the validation will be a part of larger software) Can a "goodness of fit" score be calculated? EDIT: As requested, I'll try and add some details about the data. The data at hand are from a biological experiment, however the instrumentation has high variation between the runs. The analysis I've mentioned takes the measured values and using a mathematical model evaluates functional meaning of the measured data. To do so, I need to see how unrelated/uncorrelated, made-up data rates in the same analysis, hence the intent to model by randomized values. I hope this clarifies my point of view. EDIT2: There has been a series of questions, asking for clarification on the question. Both here and in the comments below I tried to explain my situation to my best ability. It seems like we are suffering from a communication mismatch... I dunno how I can give an example without writing up a long table and complicating things further OR brutally simplifying the bigger picture. I have no doubt that everyone who took their time and supply a reply has the best intentions, but I really appreciate if you could focus on the question at hand instead of inquiring further and further into the motivations behind why I need to do things this way and not another way. Now provide the response and nothing else.
The question: How can normality be validated without using visual cues such as QQ plots? (the validation will be a part of larger software) Can a "goodness of fit" score be calculated?

Although enumerated separately, these parts are (appropriately) one question: you compute an appropriate goodness of fit and use that as a test statistic in a hypothesis test.

Some answers: There are plenty of such tests; the best among them are the Kolmogorov-Smirnov, Shapiro-Wilk, and Anderson-Darling tests. Their properties have been extensively studied. An excellent resource is the work of M. A. Stephens, especially the 1974 article, EDF Statistics for Goodness of Fit and Some Comparisons . Rather than supply a long list of references, I will leave it to you to Google this title: the trail quickly leads to useful information.

One thing I like about Stephens' work, in addition to the comparisons of the properties of various GoF tests, is that it provides clear descriptions of how to compute the statistics and how to compute, or at least approximate, their null distributions. This gives you the option to implement your favorite test yourself. The EDF statistics (empirical distribution function) are easy to compute: they tend to be linear combinations of the order statistics, so all you have to do is sort the data and go. The complications concern (a) computing the coefficients--this used to be a barrier in applying the S-W test, but good approximations now exist--and (b) computing the null distributions. Most of those can be computed or have been adequately tabulated.

What is characteristic about any GoF tests for distributions is that (a) they need a certain amount of data to become powerful (for detecting true deviations) and (b) very quickly thereafter, as you acquire more data, they become so powerful that deviations that are practically inconsequential become statistically significant. (This is very well known and is easily confirmed with simulation or mathematical analysis.) This is the origin of the reluctance to answer the original question without obtaining substantial clarification. If you have a few hundred values or more, you will find that any of these tests demonstrate your data are not "normal." But does this matter for your intended analysis? We simply cannot say.
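If you want a concrete starting point in code (a minimal sketch using SciPy's stats module, which implements all three tests; interpreting the resulting p-values is still subject to the caveats above):

    import numpy as np
    from scipy import stats

    data = np.random.normal(loc=0.0, scale=1.0, size=200)  # stand-in for your experimental values

    # Shapiro-Wilk: small p-values are evidence against normality
    w_stat, p_value = stats.shapiro(data)
    print("Shapiro-Wilk:", w_stat, p_value)

    # Anderson-Darling: compare the statistic against the tabulated critical values
    result = stats.anderson(data, dist='norm')
    print("Anderson-Darling:", result.statistic, result.critical_values)

    # Kolmogorov-Smirnov against a normal fitted to the sample
    # (note: estimating the parameters from the data makes the standard KS p-value
    # conservative; the Lilliefors correction addresses this)
    ks_stat, ks_p = stats.kstest(data, 'norm', args=(data.mean(), data.std(ddof=1)))
    print("Kolmogorov-Smirnov:", ks_stat, ks_p)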
{}
{'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/7084', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/3014/']}
jdg_75091
stackexchange
llm_judgeable_groundtruth_similarity
381464
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am busy developing code that processes a bunch of objects that represent monthly reports for regions (e.g. one instance per region per month). Let's call them RegionMonthlyReport objects. At the end of the month, a new set of RegionMonthlyReport s need to be created for the next month that are populated with values from certain fields from the original RegionMonthlyReport s. In part of my code, these copies will be saved to the database, such that we have a time-phased full set of regions per month: say we had 10 regions, we'd have 10 records for a single month; after a full year of this, we'll have 10 regions x 12 months = 120 records. My question is, where should I implement a method to make the 'copy' of the RegionMonthlyReport object, and what are the pros/cons of a preferred approach? Should it be A method on the RegionMonthlyReport itself, like RegionMonthlyReport.CreateCopy(newMonth) A static method in a utility class, like StaticUtilityClass.CreateRegionMonthlyReportCopy(sourceRegionMonthlyReport, newMonth) Something else Note that I'm not talking about any Copy method in a framework. This is a solution design question. FWIW I'm developing in .NET/C# This question seems to be somewhat relevant - and indicates that the second option might be better - but I'm too much of a beginner to confidently generalise to my case. Now provide the response and nothing else.
The intuitive way to do it is to create the copy in the object itself. Here are some arguments:

- who knows better than the object what is to be copied?
- who has better (controlled) access to the object's members than the object itself?
- mainstream OO languages allow you to define copy constructors or assignment operators in the object itself.
- the prototype pattern, which is based on object cloning, also relies on an operation of the object.

This approach is robust and proven, and implements separation of concerns. If you have several independent objects that you need to copy (e.g. regional reports?), you can nevertheless have a copy service that ensures that all relevant objects are copied at the same time. This service should nevertheless rely on the object's copy method for its implementation.
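A language-agnostic sketch of that shape (written in Python for brevity, though the question is about .NET/C#; the class and method names mirror the question, but the fields are hypothetical):

    from dataclasses import dataclass

    @dataclass
    class RegionMonthlyReport:
        region: str
        month: str
        carried_over_total: float  # hypothetical field that carries into next month

        def create_copy(self, new_month: str) -> "RegionMonthlyReport":
            # the object itself decides which of its own fields carry forward
            return RegionMonthlyReport(
                region=self.region,
                month=new_month,
                carried_over_total=self.carried_over_total,
            )

    # a thin service can then coordinate the month-end roll-over for all regions
    def roll_over(reports, new_month):
        return [r.create_copy(new_month) for r in reports]

The service owns the "copy everything at month end" workflow, while each report stays the single authority on what a copy of itself contains.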
{}
{'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/381464', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/27310/']}
jdg_75092
stackexchange
llm_judgeable_groundtruth_similarity
1374126
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: There doesn't seem to be a way to extend an existing JavaScript array with another array, i.e. to emulate Python's extend method. I want to achieve the following: >>> a = [1, 2][1, 2]>>> b = [3, 4, 5][3, 4, 5]>>> SOMETHING HERE>>> a[1, 2, 3, 4, 5] I know there's a a.concat(b) method, but it creates a new array instead of simply extending the first one. I'd like an algorithm that works efficiently when a is significantly larger than b (i.e. one that does not copy a ). Note: This is not a duplicate of How to append something to an array? -- the goal here is to add the whole contents of one array to the other, and to do it "in place", i.e. without copying all elements of the extended array. Now provide the response and nothing else.
The .push method can take multiple arguments. You can use the spread operator to pass all the elements of the second array as arguments to .push : >>> a.push(...b) If your browser does not support ECMAScript 6, you can use .apply instead: >>> a.push.apply(a, b) Or perhaps, if you think it's clearer: >>> Array.prototype.push.apply(a,b) Please note that all these solutions will fail with a stack overflow error if array b is too long (trouble starts at about 100,000 elements, depending on the browser). If you cannot guarantee that b is short enough, you should use a standard loop-based technique described in the other answer.
{}
{'log_upvote_score': 12, 'links': ['https://Stackoverflow.com/questions/1374126', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/18745/']}
jdg_75093
stackexchange
llm_judgeable_groundtruth_similarity
85154
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am using SmoothDensityHistogram on a data set of the form {{x1, y1}, {x2, y2}, …, {x_n, y_n}} , and I would like to also show the contour lines that enclose 68%, 95% and 99% of the points. With the option MeshFunctions -> {#3 &}, Mesh -> 3 I can have 3 contour lines, but how can I set the probability at which the contours lines are? As this image show, the distribution of points does not necessarily follow a binormal distribution, so I need something more general than confidence ellipses calculated with Mean and Covariance. It seems like a common enough plot that an easy solution should exist but I can't figure out how. Now provide the response and nothing else.
Here is a brute-force method (and I'm sure there are many more efficient approaches):

data = RandomVariate[BinormalDistribution[.75], 100];

(* Calculate a nonparametric density estimate *)
d = SmoothKernelDistribution[data];

(* Evaluate the estimated density function over a grid of points and sort by the density values from high to low *)
pdf = Reverse[Sort[Flatten[Table[PDF[d, {x, y}], {x, -3, 3, 0.05}, {y, -3, 3, 0.05}]]]];

(* Create a table of cumulative pdf values that correspond to the volumes of interest *)
cdf = Accumulate[pdf]/Total[pdf];
contours = pdf[[Flatten[Table[FirstPosition[cdf, p_ /; p >= alpha], {alpha, {0.68, 0.95, 0.99}}]]]];

(* Create a series of figures and then overlay them all *)
sdh = SmoothDensityHistogram[data];
cp = ContourPlot[PDF[d, {x, y}], {x, -3, 3}, {y, -3, 3}, Contours -> contours, ContourShading -> None];
lp = ListPlot[data];
Show[{sdh, cp, lp}]
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/85154', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/15841/']}
jdg_75094
stackexchange
llm_judgeable_groundtruth_similarity
2797779
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have right now this code: <ul><li class="listitem">text<li></ul> jQuery: $('.listitem').click(function() { $("#elname").text($(this).text()); $('#slists').css('visibility','hidden') $('#elname').css('visibility','visible') $('#elname').css('display','inline-block') }); This is supposed to hide a div and it does, but when I append items to the ul (with the class listitem) nothing happens with the appended item, the class it gets is correct, the title, and the value too. Can this have something to do with the code above being in the document ready function to do? Now provide the response and nothing else.
Use .live() instead, like this:

$('.listitem').live('click', function() {
  $("#elname").text($(this).text())
              .css({ visibility: 'visible', display: 'inline-block' });
  $('#slists').css('visibility', 'hidden');
});

.live() listens at the document level for your click to bubble up... and new and old elements bubble this event the same way, so it doesn't care what was added later, whereas your .click() handler binds a click only to elements that existed at the time the selector was run.

Alternatively, you can give your <ul> an ID or class and use .delegate() like this:

$('#myUL').delegate('.listitem', 'click', function() {
  $("#elname").text($(this).text())
              .css({ visibility: 'visible', display: 'inline-block' });
  $('#slists').css('visibility', 'hidden');
});

This results in less bubbling, so it's just a bit neater on the event side: it captures the event at the <ul> instead of all the way up on document.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2797779', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/316943/']}
jdg_75095
stackexchange
llm_judgeable_groundtruth_similarity
34427279
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: So I'm getting responses like the following which I have no control over: { "message": "someName someLastName has sent you a question", "parameters": "{\"firstName\":\"someName\",\"lastName\":\"someLastName\"}", "id": 141} At a glance it seems simple, but the parameters element needs to be read as a json object and I cannot for the life of me work out how to do it. This is what I am trying at the moment: JsonObject parameters = data.getAsJsonObject().get("parameters").getAsJsonObject();/throws java.lang.IllegalStateException: Not a JSON Object: "{\"firstName\":\"someName\",\"lastName\":\"someLastName\"}" So I tried: String elementToString = data.getAsJsonObject().get("parameters").toString().replace("\\\"", "\"");JsonObject parameters = new Gson().fromJson(elementToString, JsonElement.class).getAsJsonObject();//throws com.google.gson.JsonSyntaxException: com.google.gson.stream.MalformedJsonException: Use JsonReader.setLenient(true) to accept malformed JSON at line 1 column 5 path $ Where data is (typically this is pulled from a server): JsonElement data = new Gson().fromJson(" {\n" + " \"message\": \"someName someLastName has sent you a question\",\n" + " \"parameters\": \"{\\\"firstName\\\":\\\"someName\\\",\\\"lastName\\\":\\\"someLastName\\\"}\",\n" + " \"id\": 141\n" + " }", JsonElement.class); Surely this is not a difficult problem? Now provide the response and nothing else.
What you have here

"parameters": "{\"firstName\":\"someName\",\"lastName\":\"someLastName\"}",

is a JSON pair where both the name (which is always a JSON string) and the value are JSON strings. The value is a String that can be interpreted as a JSON object. So do just that:

String jsonString = data.getAsJsonObject().get("parameters").getAsJsonPrimitive().getAsString();
JsonObject parameters = gson.fromJson(jsonString, JsonObject.class);

The following

Gson gson = new Gson();
JsonElement data = gson.fromJson(" {\n"
        + "    \"message\": \"someName someLastName has sent you a question\",\n"
        + "    \"parameters\": \"{\\\"firstName\\\":\\\"someName\\\",\\\"lastName\\\":\\\"someLastName\\\"}\",\n"
        + "    \"id\": 141\n"
        + "  }", JsonElement.class);
String jsonString = data.getAsJsonObject().get("parameters").getAsJsonPrimitive().getAsString();
JsonObject parameters = gson.fromJson(jsonString, JsonObject.class);
System.out.println(parameters);

prints the JSON text representation of that JsonObject:

{"firstName":"someName","lastName":"someLastName"}
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34427279', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3992148/']}
jdg_75096
stackexchange
llm_judgeable_groundtruth_similarity
6927632
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What is the simplest way to write an if statement in Erlang, where a part of the guard is member(E, L) , i.e., testing if E is a member of the list L ? The naive approach is: if ... andalso member(E,L) -> ...end But is does not work becuase, if I understand correctly, member is not a guard expression. Which way will work? Now provide the response and nothing else.
Member functionality is, as you say, not a valid guard. Instead you might consider using a case pattern? It's possible to include your other if-clauses in the case expression:

    case {member(E, L), Expr} of
        {true, true}  -> do(), is_member;
        {true, false} -> is_member;
        {false, _}    -> no_member
    end
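If no other conditions are involved, another common idiom is to dispatch on the boolean result with a small helper function; a minimal sketch (the names check/member_result are made up for illustration):

    check(E, L) -> member_result(lists:member(E, L)).

    member_result(true)  -> is_member;
    member_result(false) -> no_member.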
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6927632', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/308193/']}
jdg_75097
stackexchange
llm_judgeable_groundtruth_similarity
474473
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Suppose a charge is at rest in the laboratory frame. Also, say two observers in two vehicles A and B are approaching towards it. Let A be accelerating towards it and B be moving uniformly towards it. Will A observe the charge radiate and B observe it not radiate? Now provide the response and nothing else.
According to relativity, "B" will observe the charge in uniform motion, not radiating. "A" will see the charge accelerating, but not radiating. Indeed, the laws are the same for "inertial" reference systems, but are different when the phenomena are seen from non-inertial reference systems. "A" does not have an inertial reference system, thus "A" will not see the particle radiating, as would happen for a particle accelerating in an inertial reference system.
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/474473', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/164488/']}
jdg_75098
stackexchange
llm_judgeable_groundtruth_similarity
93814
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a problem with colors in my terminal emulator. I am using LXTerminal as my terminal emulator and LXDE as my desktop environment. The following command is supposed to print red text on grey background: printf "\n\033[1;31;40m"hello"\033[00m\n\n" but that does not work. The grey background is missing (red text is OK). On my older machine ( Gnome/Gnome Terminal ) everything works fine I have tried installing several other terminal emulators, but the problem persists. I am not even sure whether this issue is related to terminal emulator, or something else. In Console (CTRL+ALT+F1) the background grey color does not work on either machine) This problem is limited to grey background only. All other colors work OK, for instance the following prints red text on green background. printf "\n\033[1;31;42m"hello"\033[00m\n\n" I would appreciate any help. I don't even know where to start troubleshooting. Now provide the response and nothing else.
There are eight standard ANSI colors, supported by every terminal emulator. Most terminal emulators also have eight bright variants of the standard ANSI colors. However, the actual color values that the escape codes map to aren't standardized, and in fact they often vary slightly among terminal emulators. So if you do

    printf "\e[31;47mTest\n"

to print red text on a white background, the actual hues of red and white you get may be different depending on the terminal emulator you use. So that partly explains the problem: color values aren't standard, and LXTerminal may have different defaults for its color palette that you're not used to. If you look around in the settings, usually you can configure the color scheme to be whatever you like.

The other problem you face is that what the bold attribute actually does isn't standardized either. There are three possibilities: it can make the font bold, it can make the foreground color brighter, or it can both make the foreground color brighter and make the font bold. Again, the default behavior here varies among terminal emulators, and you can usually change it if you can find the right setting. Grep for something mentioning 'bold' or 'bright'.

If you want to use a bright color, then you can use the so-called aixterm color escape codes instead of bold. These aren't standard, but they're supported in every modern terminal emulator I know of. Unlike bold, they always use bright colors, plus they can be used to display bright background colors. So for example, if you wanted to print bright red text on a bright white background, you would do this:

    printf "\e[91;107mTest\n"

For reference, here's a table of all the color escape codes:

    | Color   | ANSI FG | ANSI BG | Bright Color   | Aixterm FG | Aixterm BG
    +---------+---------+---------+----------------+------------+-----------
    | Black   | 30      | 40      | Bright Black   | 90         | 100
    | Red     | 31      | 41      | Bright Red     | 91         | 101
    | Green   | 32      | 42      | Bright Green   | 92         | 102
    | Yellow  | 33      | 43      | Bright Yellow  | 93         | 103
    | Blue    | 34      | 44      | Bright Blue    | 94         | 104
    | Magenta | 35      | 45      | Bright Magenta | 95         | 105
    | Cyan    | 36      | 46      | Bright Cyan    | 96         | 106
    | White   | 37      | 47      | Bright White   | 97         | 107
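To see what your own terminal actually maps each code to, a quick sketch that prints every standard and bright foreground color (the \e escape requires a printf that understands it, e.g. bash's builtin):

    for code in 30 31 32 33 34 35 36 37 90 91 92 93 94 95 96 97; do
        printf "\e[%sm color %s \e[0m\n" "$code" "$code"
    done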
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/93814', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/43007/']}
jdg_75099
stackexchange
llm_judgeable_groundtruth_similarity
1215814
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: So, I am using Git GUI to make a repository. But I cant find ANY trace on Google, the Documentation, or anywhere else what a 'Revision Expression' is, and it is required to create a new Branch. Also, it seems that this is used many other places in the program, so I belive it is important to know. I did find a question on this on StackOverflow, but the guy never got an answer. I just need to know: What is a Revision Expression? Now provide the response and nothing else.
git needs to be able to identify a commit during a number of common operations: https://git-scm.com/docs/git-rev-parse

There are a number of ways to identify a commit. You could use a branch, tag, commit sha1, or expressions. For example:

    git log HEAD

HEAD eventually resolves to a specific commit, and you will be given the log for that. You could also say:

    git log master

master is a branch, and that will also resolve to a specific commit.

    git log fd72e9c99312

Now that IS the actual commit.

The below documentation is what you are looking for. Taken from the git-rev-parse command documentation at http://www.kernel.org/pub/software/scm/git/docs/git-rev-parse.html.

SPECIFYING REVISIONS

A revision parameter typically, but not necessarily, names a commit object. They use what is called an extended SHA1 syntax. Here are various ways to spell object names. The ones listed near the end of this list are to name trees and blobs contained in a commit.

- The full SHA1 object name (40-byte hexadecimal string), or a substring of such that is unique within the repository. E.g. dae86e1950b1277e545cee180551750029cfe735 and dae86e both name the same commit object if there are no other objects in your repository whose object name starts with dae86e.
- An output from git-describe; i.e. a closest tag, optionally followed by a dash and a number of commits, followed by a dash, a g, and an abbreviated object name.
- A symbolic ref name. E.g. master typically means the commit object referenced by $GIT_DIR/refs/heads/master. If you happen to have both heads/master and tags/master, you can explicitly say heads/master to tell git which one you mean. When ambiguous, a name is disambiguated by taking the first match in the following rules:
  1. if $GIT_DIR/name exists, that is what you mean (this is usually useful only for HEAD, FETCH_HEAD, ORIG_HEAD and MERGE_HEAD);
  2. otherwise, $GIT_DIR/refs/name if it exists;
  3. otherwise, $GIT_DIR/refs/tags/name if it exists;
  4. otherwise, $GIT_DIR/refs/heads/name if it exists;
  5. otherwise, $GIT_DIR/refs/remotes/name if it exists;
  6. otherwise, $GIT_DIR/refs/remotes/name/HEAD if it exists.

  HEAD names the commit your changes in the working tree are based on. FETCH_HEAD records the branch you fetched from a remote repository with your last git-fetch invocation. ORIG_HEAD is created by commands that move your HEAD in a drastic way, to record the position of the HEAD before their operation, so that you can easily change the tip of the branch back to the state before you ran them. MERGE_HEAD records the commit(s) you are merging into your branch when you run git-merge.
- A ref followed by the suffix @ with a date specification enclosed in a brace pair (e.g. {yesterday}, {1 month 2 weeks 3 days 1 hour 1 second ago} or {1979-02-26 18:30:00}) to specify the value of the ref at a prior point in time. This suffix may only be used immediately following a ref name, and the ref must have an existing log ($GIT_DIR/logs/ref). Note that this looks up the state of your local ref at a given time; e.g., what was in your local master branch last week. If you want to look at commits made during certain times, see --since and --until.
- A ref followed by the suffix @ with an ordinal specification enclosed in a brace pair (e.g. {1}, {15}) to specify the n-th prior value of that ref. For example master@{1} is the immediate prior value of master while master@{5} is the 5th prior value of master. This suffix may only be used immediately following a ref name, and the ref must have an existing log ($GIT_DIR/logs/ref).
- You can use the @ construct with an empty ref part to get at a reflog of the current branch. For example, if you are on the branch blabla, then @{1} means the same as blabla@{1}.
- The special construct @{-n} means the n-th branch checked out before the current one.
- A suffix ^ to a revision parameter means the first parent of that commit object. ^n means the n-th parent (i.e. rev^ is equivalent to rev^1). As a special rule, rev^0 means the commit itself and is used when rev is the object name of a tag object that refers to a commit object.
- A suffix ~n to a revision parameter means the commit object that is the n-th generation grand-parent of the named commit object, following only the first parents. I.e. rev~3 is equivalent to rev^^^ which is equivalent to rev^1^1^1. See below for an illustration of the usage of this form.
- A suffix ^ followed by an object type name enclosed in a brace pair (e.g. v0.99.8^{commit}) means the object could be a tag, and dereference the tag recursively until an object of that type is found or the object cannot be dereferenced anymore (in which case, barf). rev^0 introduced earlier is a short-hand for rev^{commit}.
- A suffix ^ followed by an empty brace pair (e.g. v0.99.8^{}) means the object could be a tag, and dereference the tag recursively until a non-tag object is found.
- A colon, followed by a slash, followed by a text: this names a commit whose commit message starts with the specified text. This name returns the youngest matching commit which is reachable from any ref. If the commit message starts with a !, you have to repeat that; the special sequence :/!, followed by something other than !, is reserved for now.
- A suffix : followed by a path; this names the blob or tree at the given path in the tree-ish object named by the part before the colon.
- A colon, optionally followed by a stage number (0 to 3) and a colon, followed by a path; this names a blob object in the index at the given path. A missing stage number (and the colon that follows it) names a stage 0 entry. During a merge, stage 1 is the common ancestor, stage 2 is the target branch's version (typically the current branch), and stage 3 is the version from the branch being merged.

Here is an illustration, by Jon Loeliger. Both commit nodes B and C are parents of commit node A. Parent commits are ordered left-to-right.

    G   H   I   J
     \ /     \ /
      D   E   F
       \  |  / \
        \ | /   |
         \|/    |
          B     C
           \   /
            \ /
             A

    A =      = A^0
    B = A^   = A^1     = A~1
    C = A^2  = A^2
    D = A^^  = A^1^1   = A~2
    E = B^2  = A^^2
    F = B^3  = A^^3
    G = A^^^ = A^1^1^1 = A~3
    H = D^2  = B^^2    = A^^^2  = A~2^2
    I = F^   = B^3^    = A^^3^
    J = F^2  = B^3^2   = A^^3^2
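A few concrete invocations tying the syntax above together (run them in any repository with a bit of history):

    git rev-parse HEAD                # full SHA1 of the current commit
    git rev-parse --short HEAD~2      # abbreviated SHA1 of the grandparent of HEAD
    git rev-parse master@{yesterday}  # where master pointed a day ago (requires a reflog)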
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1215814', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/58088/']}
jdg_75100
stackexchange
llm_judgeable_groundtruth_similarity
246951
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have this list {{{0.05792, 0.31744}, 0., 0., 0., 0., 0.}, {0., {0.28832, 0.49024}, {0.17173, 0.386393}, 0., 0., 0.}, {0., {0.17173, 0.386393}, {0.104, 0.352}, 0., 0., 0.}, {0., 0., 0., {0.30752, 0.38464}, {0.322232, 0.260264}, {0.214663, 0.107331}}, {0., 0., 0., {0.322232, 0.260264}, {0.392, 0.496}, {0.277128, 0.415692}}, {0., 0., 0., {0.214663, 0.107331}, {0.277128, 0.415692}, {0.2, 0.4}}} or in the matrix form: I would like to convert this list to a list of two matrices, each of the same size as the input matrix, i.e. 6 by 6, where the non-zero entries for the first matrix are given by the first element of the lists in the input matrix and the same for the second matrix. If it helps I can also create a matrix where the zeros are replaced by a list of two zeros to match the dimension. Now provide the response and nothing else.
    lst1 = #1 & @@@ # & /@ lst

    {{0.05792, 0., 0., 0., 0., 0.},
     {0., 0.28832, 0.17173, 0., 0., 0.},
     {0., 0.17173, 0.104, 0., 0., 0.},
     {0., 0., 0., 0.30752, 0.322232, 0.214663},
     {0., 0., 0., 0.322232, 0.392, 0.277128},
     {0., 0., 0., 0.214663, 0.277128, 0.2}}

    lst2 = #2 & @@@ # & /@ lst

    {{0.31744, 0., 0., 0., 0., 0.},
     {0., 0.49024, 0.386393, 0., 0., 0.},
     {0., 0.386393, 0.352, 0., 0., 0.},
     {0., 0., 0., 0.38464, 0.260264, 0.107331},
     {0., 0., 0., 0.260264, 0.496, 0.415692},
     {0., 0., 0., 0.107331, 0.415692, 0.4}}
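In case the @@@ operator form reads as line noise, here is an equivalent spelled out with Map, as a sketch: it extracts component k from every pair at level 2 while leaving the scalar zeros untouched (the helper name takePart is made up for illustration):

    takePart[k_][m_] := Map[If[ListQ[#], #[[k]], #] &, m, {2}]
    lst1 = takePart[1][lst];
    lst2 = takePart[2][lst];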
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/246951', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/79600/']}
jdg_75101
stackexchange
llm_judgeable_groundtruth_similarity
16886220
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to implement my own deep-copy routine in elisp (since something like (setq newlist oldlist) seems to give just a shallow copy, and (copy-sequence newlist oldlist) still leaves newlist exposed to any changes of the elements of oldlist ) Moreover, if there is a function that does what I want I am having no luck finding it. The definition of my function is: (defun deep-copy (iList oList) (setq oList (car iList)) (setq counter (- (length iList) 1)) (setq iList (cdr iList)) (while (> counter 0) (setq oList (cons oList (car iList))) (setq iList (cdr iList)) (setq counter (- counter 1) ))) and afterwards, with an iList of (1 2 3 4 5 6) what oList happens to have is: (((((1 . 2) . 3) . 4) . 5) . 6) i.e. nested lists. I have tried quoting, back quoting, using append, switching the order of oList and (car iList) in (cons # #) , googling for a solution, but I am having no luck (either errors or garbage). In addition to any welcome comments on what functions already exist that will do what I want, where there are weaknesses in the code (I am an elisp newbie), could someone tell me how to cons elements to an existing list properly? the examples tend be variants of the form: (cons 'pine '(fir oak maple)) , where '(fir oak maple) is some hard coded list edit: For the last two hours I have been battling against myself (since I commented out oList in the calling function, and I kept referring to an old version of it). At any rate, swapping oList and (car iList) and then reversing at the end seems to do the trick (but surely there is a better way!?) i.e. (defun deep-copy (iList) (setq oList nil ) (setq counter (- (length iList) 1)) (while (>= counter 0) (setq oList (cons (car iList) oList) ) (setq iList (cdr iList) ) (setq counter (- counter 1) )) (reverse oList) ) Now provide the response and nothing else.
Use copy-tree (example assumes you've require'd cl, for my convenience, but copy-tree itself doesn't require it):

    elisp> (setq list1 '(((1 2) (3 4)) 5 (6)))
    (((1 2) (3 4)) 5 (6))
    elisp> (setq list2 (copy-sequence list1))
    (((1 2) (3 4)) 5 (6))
    elisp> (setf (caar list2) 1)
    1
    elisp> list2
    ((1 (3 4)) 5 (6))
    elisp> list1
    ((1 (3 4)) 5 (6))
    elisp> (setq list1 '(((1 2) (3 4)) 5 (6)))
    (((1 2) (3 4)) 5 (6))
    elisp> (setq list2 (copy-tree list1))
    (((1 2) (3 4)) 5 (6))
    elisp> (setf (caar list2) 1)
    1
    elisp> list1
    (((1 2) (3 4)) 5 (6))
    elisp> list2
    ((1 (3 4)) 5 (6))

Instead of giving tips about your code, I suggest that you read through the Elisp introduction that comes with Emacs (C-h i g (eintr) RET) or other introductory Lisp books, for example Touretzky (the latter is for Common Lisp, but a great introduction). It will teach you the basics; for example, not to just setq in function definitions, and so on.

But to give you an example, here's the definition of copy-tree (alternatively, just view it in your Emacs: M-x find-function RET copy-tree RET):

    (defun copy-tree (tree &optional vecp)
      "Make a copy of TREE.
    If TREE is a cons cell, this recursively copies both its car and its cdr.
    Contrast to `copy-sequence', which copies only along the cdrs.  With second
    argument VECP, this copies vectors as well as conses."
      (if (consp tree)
          (let (result)
            (while (consp tree)
              (let ((newcar (car tree)))
                (if (or (consp (car tree)) (and vecp (vectorp (car tree))))
                    (setq newcar (copy-tree (car tree) vecp)))
                (push newcar result))
              (setq tree (cdr tree)))
            (nconc (nreverse result) tree))
        (if (and vecp (vectorp tree))
            (let ((i (length (setq tree (copy-sequence tree)))))
              (while (>= (setq i (1- i)) 0)
                (aset tree i (copy-tree (aref tree i) vecp)))
              tree)
          tree)))
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16886220', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1261555/']}
jdg_75102
stackexchange
llm_judgeable_groundtruth_similarity
41057
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Recently, I was reminded in Melvyn Nathason's first year graduate algebra course of a debate I've been having both within myself and externally for some time. For better or worse, the course most students first use and learn extensive category theory and arrow chasing is in an advanced algebra course, either an honors undergraduate abstract algebra course or a first-year graduate algebra course. (Ok, that's not entirely true, you can first learn about it also in topology. But it's really in algebra where it has the biggest impact. Topology can be done entirely without it wherareas algebra without it beyond the basics becomes rather cumbersome. Also, homological methods become pretty much impossible.) I've never really been comfortable with category theory. It's always seemed to me that giving up elements and dealing with objects that are knowable only up to isomorphism was a huge leap of faith that modern mathematics should be beyond. But I've tried to be a good mathematican and learn it for my own good. The fact I'm deeply interested in algebra makes this more of a priority. My question is whether or not category theory really should be introduced from jump in a serious algebra course. Professor Nathanson remarked in lecture that he recently saw his old friend Hyman Bass, and they discussed the teaching of algebra with and without category theory. Both had learned algebra in thier student days from van der Waerden (which incidently, is the main reference for the course and still his favorite algebra book despite being hopelessly outdated). Melvyn gave a categorical construction of the Fundamental Isomorphism Theorum of Abelian Groups after Bass gave a classical statement of the result. Bass said, "It's the same result expressed in 2 different languages. It really doesn't matter if we use the high-tech approach or not." Would algebracists of later generations agree with Professor Bass? A number of my fellow graduate students think set theory should be abandoned altogether and thrown in the same bin with Newtonian infinitesimals (nonstandard constructions not withstanding) and think all students should learn category theory before learning anything else. Personally, I think category theory would be utterly mysterious to students without a considerable stock of examples to draw from. Categories and universal properties are vast generalizations of huge numbers of not only concrete examples,but certain theorums as well. As such, I believe it's much better learned after gaining a considerable fascility with mathematics-after at the very least, undergraduate courses in topology and algebra. Paolo Aluffi's wonderful book Algebra:Chapter 0 , is usually used by the opposition as a counterexample, as it uses category theory heavily from the beginning. However, I point out that Aluffi himself clearly states this is intended as a course for advanced students and he strongly advises some background in algebra first. I like the book immensely, but I agree. What does the board think of this question? Categories early or categories late in student training? Now provide the response and nothing else.
There's a big difference between teaching category theory and merely paying attention to the things that category theory clarifies (like the difference between direct products and direct sums). In my opinion, the latter should be done early (and late, and at all other times); there's no reason for intentional sloppiness. On the other hand, teaching category theory is better done after the students have been exposed to some of the relevant examples. Many years ago, I taught a course on category theory, and in my opinion it was a failure. Many of the students had not previously seen the examples I wanted to use. One of the beauties of category theory is that it unifies many different-looking concepts; for example, left adjoints of forgetful functors include free groups, universal enveloping algebras, Stone-Cech compactifications, abelianizations of groups, and many more. But the beauty is hard to convey when, in addition to explaining the notion of adjoint, one must also explain each (or at least several) of these special cases. So I think category theory should be taught at the stage where students have already seen enough special cases of its concepts to appreciate their unification. Without the examples, category theory can look terribly unmotivated and unintuitive.
{}
{'log_upvote_score': 6, 'links': ['https://mathoverflow.net/questions/41057', 'https://mathoverflow.net', 'https://mathoverflow.net/users/3546/']}
jdg_75103
stackexchange
llm_judgeable_groundtruth_similarity
1683394
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Here is what I understand so far: PASS BY VALUE Passing by value means a copy of an argument is passed.Changes to that copy do not change the original. PASS BY REFERENCE Passing by reference means a reference to the original is passed.changes to the reference affect the original. REF Keyword REF tells the compiler that the object is initialized before entering the function.REF means the value is already set, the method can therefore read it and modify it.REF is two ways, both in and out. OUT Keyword OUT tells the compiler that the object will be intialized inside the function.OUT means the value is not already set, and therefore must be set before calling return.OUT is only one way, which is out. Question So in what scenarios would you combine the use of the ref and out keywords, with passing by reference or passing by value?Examples would help tremendously. Help greatly appreciated. Now provide the response and nothing else.
You would never combine ref and out on one parameter. They both mean 'pass by reference'. You can of course combine ref parameters and out parameters in one method.

The difference between ref and out lies mainly in intent. ref signals 2-way data transport, out means 1-way. But besides intent, the C# compiler tracks definite assignment, and that makes the most noticeable difference. It also prevents the misuse (reading) of an out parameter.

    void SetOne(out int x)
    {
        int y = x + 1; // error, 'x' not definitely assigned
        x = 1;         // mandatory to assign something
    }

    void AddTwo(ref int x)
    {
        x = x + 2;     // OK, x is known to be assigned
    }

    void Main()
    {
        int foo, bar;
        SetOne(out foo); // OK, foo does not have to be assigned
        AddTwo(ref foo); // OK, foo assigned by SetOne
        AddTwo(ref bar); // error, bar is unassigned
    }
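A common real-world use of out is the TryParse pattern in the base class library, where the return value reports success and the parsed result comes back through the out parameter:

    string input = "42";
    if (int.TryParse(input, out int number)) // out variable declarations need C# 7+
    {
        Console.WriteLine(number + 1); // prints 43
    }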
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1683394', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/99900/']}
jdg_75104
stackexchange
llm_judgeable_groundtruth_similarity
25176502
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: i have a gridview, textboxes, drop downs inside update panel, it works good but problem is that when i click any drop down than it changes scrolls back the whole page to top and refreshes it, why ? Code: <asp:UpdatePanel ID="updGridViewSMS" runat="server" UpdateMode="Conditional"> <ContentTemplate> <label><b>Search By Date Range</b></label> <asp:Label ID="lblDateFrom" runat="server" Text="From"></asp:Label> <asp:TextBox ID="txtFromDate" runat="server" ></asp:TextBox> <asp:CalendarExtender ID="CalendarExtenderFromDate" Format="dd/MMM/yyyy" TargetControlID="txtFromDate" runat="server"> </asp:CalendarExtender> <asp:RequiredFieldValidator ID="RequiredFieldValidator9" runat="server" ControlToValidate="txtFromDate" Display="None" ErrorMessage="" ForeColor="Red" > </asp:RequiredFieldValidator> <asp:Label ID="lblDateTo" runat="server" Text="To"></asp:Label> <asp:TextBox ID="txtToDate" runat="server"></asp:TextBox> <asp:CalendarExtender ID="CalendarExtenderToDate" Format="dd/MMM/yyyy" TargetControlID="txtToDate" runat="server"> </asp:CalendarExtender> <asp:RequiredFieldValidator ID="RequiredFieldValidator10" runat="server" ControlToValidate="txtToDate" Display="None" ErrorMessage="" ForeColor="Red" > </asp:RequiredFieldValidator> <asp:Button ID="btnSearchByDate" CssClass="btn btn-success" runat="server" Text="Search" ClientIDMode="Static" OnClick="btnSearchByDate_Click" /> <asp:ValidationSummary HeaderText="Requires Date Range:" DisplayMode="SingleParagraph" EnableClientScript="true" ForeColor="Red" runat="server" ID="ValidationSummary1"/> <label runat="server" id="lblSelectionMessage" style="color:navy; font-size:12px; font-style:italic" visible="false"> Please select one validity dropdown at a time </label> <asp:GridView ID="GridViewSmsComplaints" AllowPaging="True" PageSize="5" runat="server" AutoGenerateColumns="False" CssClass="mGrid" BorderColor="#333333" Width="650px" OnRowDataBound="GridViewSmsComplaints_RowDataBound" OnPageIndexChanging="GridViewSmsComplaints_PageIndexChanging" > <Columns> <asp:BoundField HeaderText="Sms ID" DataField="ID" /> <asp:BoundField HeaderText="Recieving Date" DataField="RecievingDate" /> <%--<asp:BoundField HeaderText="ToMobileNo" DataField="ToMobileNo" /> --%> <asp:BoundField HeaderText="Incoming No" DataField="FromMobileNo" /> <asp:BoundField HeaderText="Complaint" DataField="Message" > <ItemStyle Wrap="True" /> </asp:BoundField> <asp:TemplateField HeaderText="Is Valid?"> <ItemTemplate> <%--<asp:CheckBox ID="ckboxIsComplaint" runat="server" />--%> <asp:DropDownList ID="ddlIsComplaint" runat="server" AutoPostBack="true" OnSelectedIndexChanged="ddlIsComplaint_SelectedIndexChanged"> <asp:ListItem Text="-Select-" Value="-1"></asp:ListItem> <asp:ListItem Text="Invalid" Value="0"></asp:ListItem> <asp:ListItem Text="Valid" Value="1"></asp:ListItem> </asp:DropDownList> </ItemTemplate> </asp:TemplateField> </Columns> </asp:GridView> <br /> <asp:Panel ID="pnlBoxesDropDowns" runat="server"> <label>Complainant</label> <asp:TextBox ID="txtComplainant" runat="server" CssClass="textField_width"></asp:TextBox> <asp:RequiredFieldValidator ID="ReqFieldValdiatorComplainant" runat="server" ControlToValidate="txtComplainant" ErrorMessage="Complainant is Required" ForeColor="Red" SetFocusOnError="True" ValidationGroup="Complaints">Complainant is Required </asp:RequiredFieldValidator> <label style="width:400px">Region</label> 
<asp:DropDownList ID="ddlRegions" runat="server" CssClass="DropDown_Width" Width="147px" OnSelectedIndexChanged="ddlRegions_SelectedIndexChanged" AppendDataBoundItems="True" AutoPostBack="true" > <asp:ListItem Value="-1" Selected="True">-Select-</asp:ListItem> </asp:DropDownList> <asp:RequiredFieldValidator ID="ReqFieldValidatorRegions" runat="server" ControlToValidate="ddlRegions" ErrorMessage="Region is Required" InitialValue="-1" ForeColor="Red" ValidationGroup="Complaints">Region is Required</asp:RequiredFieldValidator> <label style="width:400px">District</label> <asp:DropDownList ID="ddlDistricts" runat="server" CssClass="DropDown_Width" Width="147px" OnSelectedIndexChanged="ddlDistricts_SelectedIndexChanged" AutoPostBack="true"> </asp:DropDownList> <asp:RequiredFieldValidator ID="ReqFieldValidatorDistricts" runat="server" ControlToValidate="ddlDistricts" ErrorMessage="Region is Required" InitialValue="-1" ForeColor="Red" ValidationGroup="Complaints">District is Required</asp:RequiredFieldValidator> <label>PoliceStations</label> <asp:DropDownList ID="ddlPoliceStations" runat="server" Width="147px" CssClass="DropDown_Width"> </asp:DropDownList> <asp:RequiredFieldValidator ID="ReqFieldValidatorPoliceStations" runat="server" ControlToValidate="ddlPoliceStations" ErrorMessage="Police Station is Required" InitialValue="-1" ForeColor="Red" ValidationGroup="Complaints">Police Station is Required</asp:RequiredFieldValidator> <label>Priority</label> <asp:DropDownList ID="ddlPriority" runat="server"> <asp:ListItem Text="Top" Value="1"></asp:ListItem> <asp:ListItem Text="Normal" Value="2"></asp:ListItem> </asp:DropDownList> </asp:Panel> <br /> <br /> <asp:Timer runat="server" Interval="60000" ID="RefreshSmsComplaints" OnTick="RefreshSmsComplaints_Tick" /> </ContentTemplate> <Triggers> <asp:AsyncPostBackTrigger ControlID="RefreshSmsComplaints" EventName="Tick" /> </Triggers> </asp:UpdatePanel> <asp:Button ID="btnSendSms" ValidationGroup="Complaints" runat="server" CssClass="btn btn-success" OnClick="btnSendSms_Click" Text="Send Sms" /> <asp:Button ID="btnGenerateReport" CssClass="btn btn-success" runat="server" Text="Generate Report" ClientIDMode="Static" OnClick="btnGenerateReport_Click" /> Now provide the response and nothing else.
isNaN() and Number.isNaN() both test if a value is (or, in the case of isNaN(), can be converted to a number-type value that represents) the NaN value. In other words, "NaN" does not simply mean "this value is not a number", it specifically means "this value is a numeric Not-a-Number value according to IEEE-754".

The reason all your tests above return false is because all of the given values can be converted to a numeric value that is not NaN:

    Number('')      // 0
    Number(' ')     // 0
    Number(true)    // 1
    Number(false)   // 0
    Number([0])     // 0

The reason isNaN() is "broken" is because, ostensibly, type conversions aren't supposed to happen when testing values. That is the issue Number.isNaN() is designed to address. In particular, Number.isNaN() will only attempt to compare a value to NaN if the value is a number-type value. Any other type will return false, even if they are literally "not a number", because the type of the value NaN is number. See the respective MDN docs for isNaN() and Number.isNaN().

If you simply want to determine whether or not a value is of the number type, even if that value is NaN, use typeof instead:

    typeof 'RAWRRR' === 'number' // false
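A side-by-side sketch of the coercion difference:

    isNaN('NaN')        // true  ('NaN' coerces to the numeric value NaN)
    Number.isNaN('NaN') // false (a string, not a number-type value)
    isNaN(NaN)          // true
    Number.isNaN(NaN)   // true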
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/25176502', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3518032/']}
jdg_75105
stackexchange
llm_judgeable_groundtruth_similarity
5123
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Usually, I unarchive things by $ mkdir newFolder; $ mv *.zip newFolder; $ cd newFolder; $unzip *.zip but sometimes I get lazy and just do in an arbitrary folder $ unzip *.zip so time-to-time messing up with other content. I will list here some methods -- some archive version surely have crappy-flags while others more spartan, I am more interested about the latter. Some ways to de-unarchive, are there others? $ find . -anewer fileThatExistedBeforeUnarchieving -ok rm '{}' \; Weaknesses are that it lists the *.zip dirs, so you need to use slow -ok , slow with many *.zip matches and, for some reason, it does not seem to match everything extracted. If small amount of extracted files, one-by-one, slow, cumbersome and error-prone. When I want to make sure whether the content of the archieve is actually a folder, I sometimes check it with $ unzip -l *.bsd , works at least in obsd`s unzip-version. If you are referring to certain archiving tools, please, state them when appropriate. Keep it simple though -- I am more interested about the WAYS how you do it, rather than a single tool. Now provide the response and nothing else.
By name

You can generate the list of files in the archive and delete them, though this is annoyingly fiddly with archivers such as unzip or 7z that don't have an option to generate a plain list of file names. Even with tar, this assumes there are no newlines in file names.

    tar tf foo.tar | while read -r file; do
      rm -- "$file"
    done
    unzip -l foo.zip | awk '
        p && /^ --/ {p=2}
        p==1 {print substr($0, 29)}
        /^ --/ {++p}
    ' | while …
    unzip -l foo.zip | tail -n +4 | head -n -2 | while …  # GNU coreutils only
    7z l -slt foo.zip | sed -n 's/^Path = //p' | while …  # works on tar.*, zip, 7z and more

Instead of removing the files, you could move them to their intended destination.

    tar tf foo.tar | while read -r file; do
      if [ -d "$file" ]; then continue; fi
      mkdir -p "/intended/destination/${file%/*}"
      mv -- "$file" "/intended/destination/$file"
    done

Using FUSE

Instead of depending on external tools, you can (on most unices) use FUSE to manipulate archives using ordinary filesystem commands.

You can use Fuse-zip to peek into a zip, extract it with cp, list its contents with find, etc.

    mkdir /tmp/foo.d
    fuse-zip foo.zip /tmp/foo.d
    ## Remove the files that were extracted mistakenly (GNU/BSD find)
    (cd /tmp/foo.d && find . \! -type d -print0) | xargs -0 rm
    ## Remove the files that were extracted mistakenly (zsh)
    rm /tmp/foo.d/**(:"s~/tmp/foo.d/~~"^/)
    ## Extract the contents where you really want them
    cp -Rp /tmp/foo.d /intended/destination
    fusermount -u foo.d
    rmdir foo.d

AVFS creates a view of your entire directory hierarchy where all archives have an associated directory (same name with # tacked on at the end) that appears to hold the archive content.

    mountavfs
    ## Remove the files that were extracted mistakenly (GNU/BSD find)
    (cd ~/.avfs/"$PWD/foo.zip#" && find . \! -type d -print0) | xargs -0 rm
    ## Remove the files that were extracted mistakenly (zsh)
    rm ~/.avfs/$PWD/foo.zip\#/**/*(:"s~$HOME/.avfs/$PWD/foo.zip#~~"^/)
    ## Extract the contents where you really want them
    cp -Rp ~/.avfs/"$PWD/foo.zip#" /intended/destination
    umountavfs

By date

Assuming there hasn't been any other activity in the same hierarchy than your extraction, you can tell the extracted files by their recent ctime. If you just created or moved the zip file, you can use it as a cutoff; otherwise use ls -lctr to determine a suitable cutoff time. If you want to make sure not to remove the zips, there's no reason to do any manual approval: find is perfectly capable of excluding them. Here are example commands using zsh or find; note that the -cmin and -cnewer primaries are not in POSIX but exist on Linux (and other systems with GNU find), *BSD and OSX.

    find . \! -name '*.zip' -type f -cmin -5 -exec rm {} +  # extracted <5 min ago
    rm **/*~*.zip(.cm-6)                                    # zsh, extracted ≤5 min ago
    find . -type f -cnewer foo.zip -exec rm {} +            # created or moved after foo.zip

With GNU find, FreeBSD and OSX, another way to specify the cutoff time is to create a file and use touch to set its mtime to the cutoff time.

    touch -d … cutoff
    find . -type f -newercm cutoff -delete

Instead of removing the files, you could move them to their intended destination. Here's a way with GNU/*BSD/OSX find, creating directories in the destination as needed.

    find . \! -name . -cmin -5 -type f -exec sh -c '
      for x; do
        mkdir -p "$0/${x%/*}"
        mv "$x" "$0/$x"
      done
    ' /intended/destination {} +

Zsh equivalent (almost: this one reproduces the entire directory hierarchy, not just the directories that will contain files):

    autoload zmv
    mkdir -p ./**/*(/cm-3:s"|.|/intended/destination|")
    zmv -Q '(**/)(*)(.cm-3)' /intended/destination/'$1$2'

Warning, I haven't tested most of the commands in this answer. Always review the list of files before removing (run echo first, then rm if it's ok).
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/5123', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/-1/']}
jdg_75106
stackexchange
llm_judgeable_groundtruth_similarity
1724789
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have a question that might be easy for the experts. Let $E$ be a Banach space (non-separable) and let $E'$ be its dual space. Suppose that $X\subset E'$ and assume that $X$ is separable with respect to the weak* topology. My question is the following: Are the sequential weak* closure and the weak* closure of $X$ equal? Google was not able to help me on this. Now provide the response and nothing else.
The answer is no. For a specific, non-separable counter-example take $E=\ell_\infty$. Then the set $$X=\{e_n^*\colon n\in \mathbb{N}\}\subseteq \ell_\infty^*$$ consisting of evaluation functionals is bounded but not weak* compact. (Here $\langle e_n^*, f\rangle = f(n)$ for $f\in \ell_\infty$.) However, by boundedness, the weak* closure of $X$ is compact in the weak* topology, hence different from $X$. There are no non-eventually-constant convergent sequences in $X$, so the cluster points of $X$ cannot be realised as limits of sequences. To be more precise, the weak*-sequential closure of $X$ is $X$ itself, but $X$ is not weak*-closed (because it is not weak*-compact), so the weak* closure of $X$ is strictly bigger than $X$.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1724789', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/328024/']}
jdg_75107
stackexchange
llm_judgeable_groundtruth_similarity
190138
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: If $f(x)=g(h(x))$, why is $f^{-1}(x)=h^{-1}(g^{-1}(x))$ ? Now provide the response and nothing else.
Think about dressing your feet. Here are the instructions:

1. Put on socks
2. Put on shoes

What is the reverse of this operation?

1. Remove shoes
2. Remove socks

You must undo the operations in the reverse order in which you did them. Now think about f(g(x)): first apply g to x, then f.
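For the record, the one-line algebraic check behind the socks-and-shoes picture (assuming $f$ and $g$ are invertible): if $h(x)=f(g(x))$, then $$h\bigl(g^{-1}(f^{-1}(x))\bigr)=f\Bigl(g\bigl(g^{-1}(f^{-1}(x))\bigr)\Bigr)=f\bigl(f^{-1}(x)\bigr)=x,$$ so $g^{-1}\circ f^{-1}$ undoes $h=f\circ g$, i.e. $h^{-1}=g^{-1}\circ f^{-1}$. (Checking the composition in the other order, $g^{-1}(f^{-1}(h(x)))=x$, works the same way.)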
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/190138', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/34289/']}
jdg_75108
stackexchange
llm_judgeable_groundtruth_similarity
11535754
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have created a http handler fro my Jquery ajax call. which is working fine the jquery call is mentioned below $.ajax({ url: "Services/name.ashx", contentType: "text/plain", data: { CustMobile: a, CustName: b, CustEmail: c }, contentType: "application/json; charset=utf-8", success: function (data) { $("#loading").hide(); }, error: function () { $("#loading").hide(); }}); Now my problem is this can I call the name.ashx handler through my code behind. I am using asp.net with C#. I know that it can be called by Response.write("") writing the whole script part. Ans also Register Script of page method. But Is there any way by which I can send request to handler and get back response from the handler using c#. Thanks. Now provide the response and nothing else.
You can call any http resource from code behind using HttpWebRequest (System.Net namespace).

Sample:

    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(
        "<YourDomain>/Services/name.ashx?CustMobile=ValueOfA&CustName=ValueOfB&CustEmail=ValueOfC");
    HttpWebResponse response = (HttpWebResponse)request.GetResponse();

You need an absolute path, but you can get your domain or server URL from the HttpContext if you don't want to hardcode the domain.

More information: MSDN - HttpWebRequest Class
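To actually consume what the handler returns, you would typically read the response stream; a minimal sketch (StreamReader is in System.IO):

    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
    using (StreamReader reader = new StreamReader(response.GetResponseStream()))
    {
        string body = reader.ReadToEnd(); // the handler's output, e.g. a JSON string
    }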
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11535754', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/853609/']}
jdg_75109
stackexchange
llm_judgeable_groundtruth_similarity
27844553
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to simulate the response to this API URL http://api.myapihost.com/images?foo=bar&spam=egg The URL parameter combinations can vary. I am trying to intercept this request and respond with an empty object. nock('http://api.myapihost.com') .persist() .get('/images', '*') .reply(200, {}); I get this error message when my test case runs: Uncaught Error: Nock: No match for HTTP request GET /images?height=2500 How can I configure nock to match any combination of URL parameters? Now provide the response and nothing else.
With nock you can specify regular expressions. Here is an example (tested with v9.2.3):

    nock('http://api.myapihost.com')
      .get(/images.*$/)
      .reply(200, {});

There is also a simpler syntax using .query(true), if you want to mock the entire url regardless of the passed query string:

    nock('http://api.myapihost.com')
      .get('/images')
      .query(true)
      .reply(200, {});
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/27844553', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/94250/']}
jdg_75110
stackexchange
llm_judgeable_groundtruth_similarity
151107
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Suppose I have a list {a[1], a[2], a[3], ...} and a function y = f[x1, x2] Is it possible to return a list like {a[1], f[a[2],a[1]], f[a[3], f[a[2],a[1]]], ... ] This may look like a little bit confusing. Actually, it just applies the function across consecutive elements in this list in order. 1st time, take a[1] and a[2] and act on a[2]: {a[1], f[a[2], a[1]], a[3], ...} 2nd time, take a[3] and the new a[2] and act on a[3]: {a[1], f[a[2], a[1]], f[a[3], f[a[2], a[1]]], ...} In other words, although the problem is very similar to Span a function across several consecutive elements in a list , the order / history of the application matters. Is there a way that it can be realized elegantly without using For loop? Now provide the response and nothing else.
Use FoldList:

    FoldList[f[#2, #1] &, Array[a, 5]]

    {a[1], f[a[2], a[1]], f[a[3], f[a[2], a[1]]],
     f[a[4], f[a[3], f[a[2], a[1]]]],
     f[a[5], f[a[4], f[a[3], f[a[2], a[1]]]]]}
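If only the final accumulated value is needed rather than the whole history, Fold has the same one-list form (this single-list form needs Mathematica 10 or later):

    Fold[f[#2, #1] &, Array[a, 5]]

    (* f[a[5], f[a[4], f[a[3], f[a[2], a[1]]]]] *)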
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/151107', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/11739/']}
jdg_75111
stackexchange
llm_judgeable_groundtruth_similarity
51350151
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have two flat rate shipping methods set up within Shipping Zones. At the checkout both of these shipping methods are available. I'd like to display a text description under each flat rate shipping option. There doesn't seem to be any options to do this in WooCommerce. I have tried the following code but needless to say it doesn't work: add_filter( 'woocommerce_page_title', 'woo_shop_page_title');function wc_get_shipping_zone( $package ) { if( $package == 'flat_rate:1') { return "<p>Arriving on your chosen date between 9am - 1pm Perfect for business addresses & special occasions</p>"; } if( $package == 'flat_rate:2') { return "<p>Arriving on your chosen date between 9am - 7pm Perfect for residential addresses</p>"; }} Could anyone help to make this work? This is what I would like the description to look like in the checkout: Now provide the response and nothing else.
The correct hooked function to add additional information to your shipping "flat rate" methods is:

    add_action( 'woocommerce_after_shipping_rate', 'action_after_shipping_rate', 20, 2 );
    function action_after_shipping_rate( $method, $index ) {
        // Targeting checkout page only: exit on cart page
        if ( is_cart() ) return;

        if ( 'flat_rate:1' === $method->id ) {
            echo __("<p>Arriving on your chosen date between 9am - 1pm Perfect for business addresses & special occasions</p>");
        }
        if ( 'flat_rate:2' === $method->id ) {
            echo __("<p>Arriving on your chosen date between 9am - 7pm Perfect for residential addresses</p>");
        }
    }

This code goes in the functions.php file of your active child theme (or active theme). Tested and works.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/51350151', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3928691/']}
jdg_75112
stackexchange
llm_judgeable_groundtruth_similarity
114943
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Is there an issue with sed and new line character? I have a file test.txt with the following contents aaaaa bbbbb ccccc ddddd The following does not work: sed -r -i 's/\n/,/g' test.txt I know that I can use tr for this but my question is why it seems not possible with sed. If this is a side effect of processing the file line by line I would be interested in why this happens. I think grep removes new lines. Does sed do the same? Now provide the response and nothing else.
With GNU sed and provided POSIXLY_CORRECT is not in the environment (for single-line input):

    sed -i ':a;N;$!ba;s/\n/,/g' test.txt

From https://stackoverflow.com/questions/1251999/sed-how-can-i-replace-a-newline-n:

- create a label via :a
- append the current and next line to the pattern space via N
- if we are before the last line, branch to the created label: $!ba ($! means not to do it on the last line, as there should be one final newline)
- finally, the substitution replaces every newline with a comma on the pattern space (which is the whole file)
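For comparison, since the question mentions it: the tr equivalent is a one-liner, at the cost of writing to stdout instead of editing in place:

    tr '\n' ',' < test.txt

Note that tr also converts the file's final newline into a comma, which the sed version above does not.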
{}
{'log_upvote_score': 7, 'links': ['https://unix.stackexchange.com/questions/114943', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/42132/']}
jdg_75113
stackexchange
llm_judgeable_groundtruth_similarity
1403074
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have some code that prints the amount of memory used by the program. The line is similar to this: printf("The about of RAM used is %u", anIntVariable*sizeof(double) ); where anIntVariable is an int variable for the number of elements of the double array. Anyhow, on 32-bit systems I never had any problems but on 64-bit systems, I get a compiler warning about using "%u" for a unsigned long integer. Using "%lu" as the format code fixes the problem on 64-bit but causes the compiler to complain on 32-bit because the type is back to unsigned int. I've found that, indeed, sizeof(double) returns a different value on 32 vs 64 bit systems. I've found some webpage guides to convert code from 32 bit to 64 bit But I'd rather have code that works on both instead of just converting back and forth. How do I write this line in a platform independent way? I know many ways I could do it using preprocessor directives but that seems like a hack. Surely there's an elegant way that I'm not realizing. Now provide the response and nothing else.
Portable printf identifiers are provided in the include file inttypes.h or here.

This include file has many portable identifiers for your specific runtime. For your example, you want PRIuPTR, which means "PRintf Identifier unsigned with size of up to a pointer's size".

Your example will then be:

    printf("The amount of RAM used is %" PRIuPTR, anIntVariable * sizeof(double));

Results on 64bit Linux with GCC 4.3 (int anIntVariable = 1):

    $ gcc test.c -m32 -o test && ./test
    The amount of RAM used is 8
    $ gcc test.c -o test && ./test
    The amount of RAM used is 8

For completeness' sake, there are identifiers for scanf too, whose prefixes are SCN.
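Since the value being printed here is a size_t (the type of a sizeof expression), C99's z length modifier is an even simpler portable option, provided the compiler and C library support C99:

    printf("The amount of RAM used is %zu\n", anIntVariable * sizeof(double));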
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1403074', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2577374/']}
jdg_75114
stackexchange
llm_judgeable_groundtruth_similarity
58683906
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I generated SSL certificate with Let's Encrypt and they generated the files: cert.pem | chain.pem | fullchain.pem | privkey.key | privkey.pem With the following command I can successfully open all the files except privkey.key and privkey.pem : openssl x509 -in [filename] This is the error message: unable to load certificate 140505945014720:error:0909006C:PEM routines:get_name:no start line:../crypto/pem/pem_lib.c:745:Expecting: TRUSTED CERTIFICATE Some articles suggest that we need to convert to .der extension using this command bellow but the same error remains ('unable to load'): openssl x509 -in privkey.pem -outform der -out privkey.der Now provide the response and nothing else.
Private keys aren't certificates, and x509 is for certificates. See https://www.openssl.org/docs/man1.1.1/man1/x509.html for information about what x509 can do, if you like. To look at the private keys, you want pkey: https://www.openssl.org/docs/man1.1.1/man1/openssl-pkey.html

"Some articles suggest that we need to convert to .der extension" — probably to solve a different problem with the same error message. It will be shown any time openssl x509 tries to read invalid input:

    $ openssl x509 <<<"not a cert"
    unable to load certificate
    140736021758920:error:0906D06C:PEM routines:PEM_read_bio:no start line:pem_lib.c:697:Expecting: TRUSTED CERTIFICATE

Future readers, take care so you don't chase wild geese. That particular message basically means "invalid cert provided", but if you search for it you'll see a lot of information online about how to fix your cert, which won't be the same in every case. So you have to really slow down and take a look at what's going on if you see this error. It's absolutely true that some certificate formats have to be changed to a different format for x509. But all the errors will be the same.

For the record, a filename "extension" is basically meaningless at the unix command line. openssl x509 doesn't base anything on the part of the filename after the last ., nor do most other programs (one notable exception is vim, which uses it to guess syntax highlighting if enabled). But if you want to convert from PEM to DER or any other file format, that's not about renaming the file, it's about changing its content, most likely with another openssl invocation. It's not about what you name them, it's what's inside.
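Concretely, to inspect the private key files from the question, the standard subcommands are pkey (any key type) and rsa (RSA keys specifically):

    openssl pkey -in privkey.pem -text -noout
    openssl rsa  -in privkey.pem -check -noout   # RSA keys only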
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/58683906', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6576496/']}
jdg_75115
stackexchange
llm_judgeable_groundtruth_similarity
2002842
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a web application in which the user can configure reports (ASP.NET MVC, no Reporting Services or anything). The configuration is then represented as a JavaScript object which is sent to the server as JSON to retrieve data and actually generate the report. The submission HTML look similar to this: <form method="post" action="/RenderReport" target="_blank"> <input type="hidden" name="reportJson"/></form> This works very well for opening the report in a new browser window. However, in this report, I want to include images that are genetated from the data. How can this be done in a good way? The obvious ways that come to mind are: Embed the metadata necessary to generate the images in the URL, like <img src="/GenerateImage/?metadata1=2&metadata2=4"/> . This won't work, however, since the metadata is very likely to make the URL exceed the 2083 characters max in IE. Use an ajax POST request, and then when the response comes back, create an image element like <img src="data:image/png;base64,{data_in_json_response}"/> . This is not possible, though, since my application has to work in IE6, which doesn't support data URIs. Generate the images while generating the report, creating a unique key for each image, and then use URLs of the form <img src="/GetCachedImage?id=23497abc289"/> . This is my current best idea, but it does raise the issue of where to cache the images. The places I can think of are: In the session. Advantage: The cached item is automatically deleted when when the session is abandoned. Disadvantage: accessing the session will serialize accesses to the page within a session. This is bad in my case. In the database: Advantage: Works well. Disadvantage: Unnecessary overhead, the cached items must be deleted some time. In the Application / Cache object. I haven't really thought through all advantages and disadvantages of this one. It also raises the question of when to delete the cached items. If I delete them right after the image is shown, it seems that the page can't be refreshed or printed without the images becoming red xes. Every other option means extra complexity. How can this problem be solved in a good way, or at least one that isn't bad? Now provide the response and nothing else.
Named pipes would be the fastest method, but they only work for communication between processes on the same computer. Named pipe communication doesn't go all the way down the network stack (because it only works for communication on the same computer), so it will always be faster.

Anonymous pipes may only be used on the local machine. However, named pipes may traverse the network.

I left out shared memory since you specifically mentioned that you don't want to go that route. Shared memory would be even faster than named pipes, though.

So it depends on whether you only need to communicate between processes on the same computer or on different computers. Any XML-based communication protocol (e.g. Web Services) will usually be slower due to the massive overhead of XML.
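A minimal named-pipe sketch in C# using System.IO.Pipes (available since .NET 3.5); the pipe name "demo-pipe" is made up for illustration:

    using System.IO;
    using System.IO.Pipes;

    // Server side: create the pipe and block until a client connects
    using (var server = new NamedPipeServerStream("demo-pipe"))
    {
        server.WaitForConnection();
        using (var writer = new StreamWriter(server))
        {
            writer.WriteLine("hello over the pipe");
        }
    }

    // Client side, in another process on the same machine:
    // var client = new NamedPipeClientStream(".", "demo-pipe", PipeDirection.In);
    // client.Connect();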
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2002842', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/47161/']}
jdg_75116
stackexchange
llm_judgeable_groundtruth_similarity
5679992
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Here's my problem: I have a few display objects that are modified by a loop, and I would like flash to render exactly one frame at the end of each loop. Duration of a loop may vary unpredictably, thus a constant frame-rate won't do it. I found a hack-ish way to render one frame at the end of each loop, using updateAfterEvent with a 0ms timer. Now I want to prevent Flash from rendering frames in the middle of the loop: this is a waste of time and ressources, and produces strange blinking effects. Setting the frame rate to 0 would be an easy solution, but stage.frameRate has a minimum value of 0.01. Question 1: Is there a way to properly stop the standard rendering loop?A workaround will not be regarded as a valid answer, because what I'm currently doing is a pretty good workaround (1 frame every 100 seconds is bearable). Question 2: Is there a better way than updateAfterEvent to force rendering? Oops, I forgot civilities. Good morning/afternoon, please, thanks in advance. Now provide the response and nothing else.
Debug info isn't present in the file. Make sure that you're using the Debug configuration. ( Project Manager tree, expand Build Configurations , make sure Debug is bold. If it's not, right click Debug and choose Activate from the context menu.) Make sure you then do a Build of your project, not just a Compile . If that still doesn't work, go to Project->Options from the IDE's main menu, click on Compiling under Delphi Compiler , and check the Debugging section on the right half of the window. Make sure that Debug Information and Local Symbols are both checked. If you're trying to trace into the VCL's own source, also check Use debug .dcus (you'll want to turn this off and do a full build of your project as soon as you're done, as it gets annoying when you're debugging normally). Again, you'll want to build and not compile. If all of the above fails, another possibility is that the code unit you have open in the Code Editor isn't the same one being seen by the compiler. Make sure you don't have multiple copies of the file on your computer in a location that the compiler might find first. If you're not sure, delete the .dcu files with that unit name and then do a build of your project, and see if the newly created .dcu is in the location you'd expect.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5679992', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/710239/']}
jdg_75117
stackexchange
llm_judgeable_groundtruth_similarity
26322867
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The password should contain:

- at least one uppercase letter
- at least one lowercase letter
- at least one digit
- at least one special symbol
- and should be more than 4 characters

I have tried: ^(?=.*\d)(?=.*[a-zA-Z])[a-zA-Z0-9!@#$%&*]{6,20}$ Now provide the response and nothing else.
You don't need some big monster of a single regular expression, which would probably involve arcana such as lookaheads and be hard to maintain as new conditions were added. All you need is:

    function validatePassword(pw) {
        return /[A-Z]/.test(pw) &&
               /[a-z]/.test(pw) &&
               /[0-9]/.test(pw) &&
               /[^A-Za-z0-9]/.test(pw) &&
               pw.length > 4;
    }
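A quick sanity check of the function above (the sample passwords are my own, not from the original answer):

    validatePassword("Ab1!x"); // true  - upper, lower, digit, symbol, length 5
    validatePassword("abc1!"); // false - no uppercase letter
    validatePassword("Ab1!");  // false - only 4 characters, needs more than 4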
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/26322867', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3164482/']}
jdg_75118
stackexchange
llm_judgeable_groundtruth_similarity
97299
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Suppose that I have a set of points in a plane, which range over $x\in [0,20]$ and $y\in[0,10]$, and a curve given by a list of points in the same region:

    data = Table[{RandomReal[20], RandomReal[10]}, {200}];
    list = Table[{i, RandomReal[{5, 6}]}, {i, 0, 20}];

Now I want to select those points in data which are under the curve defined by list (where, for instance, such a curve is given by "step functions"). What is the fastest way of doing it? Now provide the response and nothing else.
Using the data and list as defined:

    if = Interpolation[list, InterpolationOrder -> 1]
    regm[{x_, y_}] := 0 <= x <= 20 && 0 <= y <= if[x]
    Show[
     ListPlot[list, Joined -> True, PlotRange -> {0, 10}],
     ListPlot[GatherBy[data, regm],
      PlotStyle -> {Black, {Red, PointSize[0.02]}},
      PlotLegends -> {"Above", "Below"}],
     Frame -> True]

Here are 4 ways to get the points:

    a1 = Cases[data, x_?(regm@# == True &)];
    a2 = Select[data, regm@# == True &];
    a3 = Pick[data, regm /@ data];
    a4 = True /. GroupBy[data, regm];
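Since the question asks for the fastest way, you can benchmark the four variants directly. This is my own suggestion, not part of the original answer; RepeatedTiming needs Mathematica 10.2 or later (use AbsoluteTiming on older versions):

    (* compare the four approaches on the same data *)
    RepeatedTiming[Cases[data, x_?(regm@# == True &)];]
    RepeatedTiming[Select[data, regm@# == True &];]
    RepeatedTiming[Pick[data, regm /@ data];]
    RepeatedTiming[True /. GroupBy[data, regm];]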
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/97299', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/22999/']}
jdg_75119
stackexchange
llm_judgeable_groundtruth_similarity
19608845
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm taking this course, and I'm really struggling to understand the directive .align concept. Here's an example, which I couldn't understand: I know that inside the data segment there are addresses, starting with 0x10010000, 0x10010020, etc. And I know that at each address there are 8 memory fields, each 32 bits wide. Now, what I don't understand is how and why var2 is at the address 0x10010010. str1 is at the address 0x10010003 because we reserved 3 bytes for var1. The last thing is, what exactly is the directive .align doing? When I tested it in Mars4, it only shifted the data into the next memory field when I used .align 3 and up, but I don't really get it. Now provide the response and nothing else.
Alignment is important for a MIPS processor: it only likes to read multi-byte values from memory at an address that's a multiple of the data size. The .ASCIIZ field can be placed anywhere since a string is read one byte at a time, so putting it at 0x10010003 is fine. The .WORD field must be aligned to a multiple of 4, so it can't be put at 0x1001000E, the next available location after the string. The assembler intentionally shifts the value, leaving two bytes unused, to the next address that's a multiple of 4: 0x10010010. The .ALIGN directive is a way to override the default alignment rules. The next field after the directive will be aligned to a multiple of 2 to the power of n, where n is the .ALIGN value. In your case that's pow(2, 3) = 8 bytes, which is what you see happening: without the .ALIGN directive, the .HALF field would be stored at 0x10010014. That is not a multiple of 8, so it is moved to 0x10010018. The example is otherwise artificial; there is no obvious reason to use the .ALIGN directive here, since .HALF only requires alignment to a multiple of 2, so storing it at 0x10010014 would have been fine.
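To make the layout concrete, here is a small data segment of the kind the question describes. This is my own reconstruction, not the exact code from the course, but the resulting addresses match the ones discussed above:

            .data
    var1:   .byte   1, 2, 3          # 3 bytes at 0x10010000-0x10010002
    str1:   .asciiz "Hello, MIPS"    # 12 bytes starting at 0x10010003, byte-aligned
    var2:   .word   42               # padded up to 0x10010010, a multiple of 4
            .align  3                # next item aligned to 2^3 = 8 bytes
    var3:   .half   7                # lands at 0x10010018 instead of 0x10010014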
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19608845', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/652021/']}
jdg_75120
stackexchange
llm_judgeable_groundtruth_similarity
14402745
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: There are four duplicate classes in two Maven artifacts from Apache: commons-beanutils:commons-beanutils:1.8.3 and commons-collections:commons-collections:3.2.1:

    org.apache.commons.collections.ArrayStack
    org.apache.commons.collections.Buffer
    org.apache.commons.collections.BufferUnderflowException
    org.apache.commons.collections.FastHashMap

Is it possible to replace one of them with some other artifact to avoid this duplication? I tried to google but didn't find any solution. Rather annoying problem. Now provide the response and nothing else.
In this case, the problem isn't Maven or exclusions (which usually is the issue); most likely you are using the wrong version of beanutils. There is a version of the beanutils jar that has Bean Collections included and one that does not. The Maven dependencies for the beanutils artifact with Bean Collections include commons-collections. If you are using commons-collections yourself, use the core version of beanutils and include commons-collections in the Maven dependencies. This is explained a bit here: http://commons.apache.org/beanutils/ That page says this:

    commons-beanutils.jar - contains everything
    commons-beanutils-core.jar - excludes Bean Collections classes
    commons-beanutils-bean-collections.jar - only Bean Collections classes

The main commons-beanutils.jar has an optional dependency on Commons Collections.
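In POM terms that comes down to something like the following sketch; the artifact coordinates reflect my understanding of the beanutils packaging, and the version numbers are simply the ones from the question:

    <!-- core beanutils, without the duplicated Bean Collections classes -->
    <dependency>
        <groupId>commons-beanutils</groupId>
        <artifactId>commons-beanutils-core</artifactId>
        <version>1.8.3</version>
    </dependency>
    <!-- plus your own explicit commons-collections dependency -->
    <dependency>
        <groupId>commons-collections</groupId>
        <artifactId>commons-collections</artifactId>
        <version>3.2.1</version>
    </dependency>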
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14402745', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/187141/']}
jdg_75121
stackexchange
llm_judgeable_groundtruth_similarity
15072578
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Let's say I have the following:

    public class Foo
    {
        private String bar;

        public String getBar() {
            return bar;
        }

        public void setBar(String bar) {
            this.bar = bar;
        }
    }

Are these methods automatically threadsafe due to the immutable nature of the String class, or is some locking mechanism required? Now provide the response and nothing else.
No, this is not threadsafe. Foo is mutable, so if you want to ensure that different threads see the same value of bar – that is, consistency – do one of the following:

- make bar volatile, or
- make the methods synchronized, or
- use an AtomicReference<String>.

The reads and writes of bar are themselves atomic, but atomicity is not thread safety. http://docs.oracle.com/javase/tutorial/essential/concurrency/atomic.html For in-depth coverage of Java concurrency, grab a copy of Java Concurrency in Practice (aka JCIP).
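The simplest of the three fixes looks like this (my own illustration of the volatile option, applied to the question's class):

    public class Foo {
        private volatile String bar; // writes become visible to all threads

        public String getBar() {
            return bar;
        }

        public void setBar(String bar) {
            this.bar = bar;
        }
    }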
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/15072578', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/584862/']}
jdg_75122
stackexchange
llm_judgeable_groundtruth_similarity
11191136
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I select programmatically from A(x1,y1) to B(x2,y2) ? x1, y1, x2, y2 are pixel coordinates. I searched a lot and in all functions I found, we had to specify a specific tag and then it selects its content. Now provide the response and nothing else.
You can do this in current versions of all major browsers, each of which has at least one of the following:

1. The standards-based approach from the CSSOM View spec, implemented by Firefox >= 20: document.caretPositionFromPoint()
2. WebKit's proprietary version of the same, also supported by Opera >= 15: document.caretRangeFromPoint()
3. IE's proprietary TextRange object, which has a moveToPoint() method that takes pixel coordinates. It seems that moveToPoint(), which is used in all versions of IE, can be buggy (see here and here, for example); I've simply been lucky that it has worked in all the documents I've used it in.

Older versions of Firefox (before 20) and Opera (before 15) implement none of these, so this can't be done in those browsers.

Here's some example code. It works in IE 5+, WebKit from around 2010 onwards, Firefox >= 20 and Opera >= 15. Live demo: http://jsfiddle.net/timdown/ABjQP/ Code:

    function createSelectionFromPoint(startX, startY, endX, endY) {
        var doc = document;
        var start, end, range = null;
        if (typeof doc.caretPositionFromPoint != "undefined") {
            start = doc.caretPositionFromPoint(startX, startY);
            end = doc.caretPositionFromPoint(endX, endY);
            range = doc.createRange();
            range.setStart(start.offsetNode, start.offset);
            range.setEnd(end.offsetNode, end.offset);
        } else if (typeof doc.caretRangeFromPoint != "undefined") {
            start = doc.caretRangeFromPoint(startX, startY);
            end = doc.caretRangeFromPoint(endX, endY);
            range = doc.createRange();
            range.setStart(start.startContainer, start.startOffset);
            range.setEnd(end.startContainer, end.startOffset);
        }
        if (range !== null && typeof window.getSelection != "undefined") {
            var sel = window.getSelection();
            sel.removeAllRanges();
            sel.addRange(range);
        } else if (typeof doc.body.createTextRange != "undefined") {
            range = doc.body.createTextRange();
            range.moveToPoint(startX, startY);
            var endRange = range.duplicate();
            endRange.moveToPoint(endX, endY);
            range.setEndPoint("EndToEnd", endRange);
            range.select();
        }
    }
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11191136', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1365010/']}
jdg_75123
stackexchange
llm_judgeable_groundtruth_similarity
9368676
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Under what conditions is @synthesize automatic in Objective-c? Perhaps when using LLVM 3.0 and up? From reading around the net it seems like @synthesize is unnecessary starting with Xcode 4. However I'm using Xcode 4 and receiving warnings when I don't @synthesize a property. Some of the responses to Why don't properties get automatically synthesized seem to imply @synthesize can be omitted at some point under some circumstances. Another (old) reference hinting that @synthesize might be automatic at some point in the future. Now provide the response and nothing else.
As of clang 3.2 (circa February 2012), "default synthesis" (or "auto property synthesis") of Objective-C properties is provided by default. It's essentially as described in the blog post you originally read: http://www.mcubedsw.com/blog/index.php/site/comments/new_objective-c_features/ (except that that post describes the feature as "enabled, then disabled"; I don't know if that's an issue with Xcode or if the clang developers themselves have gone back and forth on the question). As far as I know, the only case in which properties will not be default-synthesized in clang 3.2 is when those properties have been inherited from a protocol. Here's an example:

    #import <Foundation/Foundation.h>

    @protocol P
    @property int finicky;
    @end

    @interface A : NSObject <P>
    @property int easygoing;
    @end

    @implementation A
    @end

    int main() {
        A *a = [A new];
        a.easygoing = 0;
        a.finicky = 1;
    }

If you compile this example, you'll get a warning:

    test.m:11:17: warning: auto property synthesis will not synthesize property declared in a protocol [-Wobjc-protocol-property-synthesis]
    @implementation A
                    ^
    test.m:4:15: note: property declared here
    @property int finicky;
                  ^
    1 warning generated.

and if you run it, you'll get an error from the runtime:

    objc[45820]: A: Does not recognize selector forward:: (while forwarding setFinicky:)
    Illegal instruction: 4
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9368676', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/879664/']}
jdg_75124
stackexchange
llm_judgeable_groundtruth_similarity
2814101
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am using Visual Studio 2010 and using web deployment to promote the .Net MVC site to specific environments. I installed Elmah, and it worked great on my DEV environment, but when I pushed TEST, I got exceptions because SQLite was not a good format. I am not using SQLite in Elmah or otherwise that I know of. I have removed all visible refernces to SQLite, and I have removed the .dll from all configuration bin directories. But it still gets inserted with each build. I realize the exception problem is that SQLite cannot be built for CPU Any, and my environments vary from x86 to x64. But I would prefer SQLite to not even be present. I have since uninstalled Elmah, and SQLite is still inserted into the \bin directory. I have now re-installed Elmah, and I manually delete the SQLite.dll from \bin after each build. How can I determine what is causing SQLite to be inserted into my \bin after each build? Now provide the response and nothing else.
If you are not using SQLite then you need to simply remove it from the source location from where you imported the reference. To expand on that, let's assume you downloaded and unzipped ELMAH to the path C:\ELMAH on your local drive. When you added a reference to ELMAH, you probably pointed to one of the platform versions under C:\ELMAH\bin , like C:\ELMAH\bin\net-2.0\Release . From C:\ELMAH\bin sub-directories, remove all other unwanted assemblies and files except Elmah.dll , Elmah.pdb and Elmah.xml . Strictly speaking, you don't even need the latter two but they are nonetheless recommended. Once you've done that, you can add a reference to Elmah.dll to your project in Visual Studio and none of the dependencies will get pulled into your application's bin directory. You may have to delete your bin directory and re-build your project to get rid of a stale copy of System.Data.SQLite.dll from a previous reference. ELMAH will continue to work fine without System.Data.SQLite.dll as long as you do not use Elmah.SQLiteErrorLog as your error log store. As per other suggestions, you should not have to remove references to SQLite from the ELMAH sources and make a private build.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2814101', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/294187/']}
jdg_75125
stackexchange
llm_judgeable_groundtruth_similarity
542634
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: 31, 331, 3331, 33331, 333331, 3333331, 33333331 are prime. Can this pattern continue, or will a composite number eventually emerge? And how can one judge this without using a computer? Now provide the response and nothing else.
333333331 is not prime; it is divisible by 17. This does not require a computer: Euler did calculations like this all the time. What's more, in your sequence 31, 331, 3331, 33331, …, every 15th number is divisible by 31. Proof: As noted in lab bhattacharjee's answer, the sequence has the form $$a_n = \frac{10^{n+1}-7}{3}$$ Now, 15 is the multiplicative order of $10 \pmod{31}$, so $$a_{15k+1} = \frac{10^{15k+2}-7}{3} \equiv \frac{10^2-7}{3} \equiv 0 \pmod{31}.$$ It has been proven that for all sequences of the form $ab$, $abb$, $abbb$, $abbbb$, … or $ab$, $aab$, $aaab$, $aaaab$, … (where $a$ and $b$ are digits), the numbers in the sequence are periodically divisible by the first number $ab$. As an easy exercise, show that in the sequence 11, 111, 1111, 11111, …, every second term is divisible by 11.
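For a concrete check (my own arithmetic, not part of the original answer): $333333331 = 17 \times 19607843$, since $17 \times 19607843 = 196078430 + 137254901 = 333333331$. And by the divisibility argument above, after $a_1 = 31$ itself, the next term of the sequence divisible by $31$ is $a_{16}$ (sixteen 3s followed by a 1).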
{}
{'log_upvote_score': 7, 'links': ['https://math.stackexchange.com/questions/542634', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/19134/']}
jdg_75126
stackexchange
llm_judgeable_groundtruth_similarity
47409086
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Looking at the documentation about violin plots with seaborn, I would like to know how to plot two series of violin plots on the same axis (point 1) such that they are comparable (point 2). About point 1, I would like to reproduce this plot for each sex:

    fig, ax = plt.subplots()
    sns.violinplot(x="day", y="total_bill", hue="smoker",
                   data=tips, split=True, ax=ax)

I could do it on two subplots:

    fig = plt.figure(figsize=(10, 8))
    ax = fig.add_subplot(211)
    sns.violinplot(x="day", y="total_bill", hue="smoker",
                   data=tips[tips.sex == "Female"], split=True, ax=ax)
    ax = fig.add_subplot(212)
    sns.violinplot(x="day", y="total_bill", hue="smoker",
                   data=tips[tips.sex == "Male"], split=True, ax=ax)

I would like to plot the two violin plot series on the same matplotlib axes. Another point is about the width of the violin plots. I do not clearly understand whether the violins are normalized, and how. I assume that the width is computed for each plot: in the example above the width is computed for Female in the first subplot and for Male in the second subplot. Thus, can I compare the densities directly? I suppose that I can compare the shapes, but, for example, I cannot compare the number of male smokers on Monday with the number of female ones? Is there a way to manage the normalization of the violins? Now provide the response and nothing else.
For your first point, there is no way of doing this in Seaborn. Check out my comments for possible workarounds, but in brief, I don't think the time spent would be worthwhile. For your second question, the scale and scale_hue parameters to violinplot control how the violin patch is normalized/scaled:

    scale : {"area", "count", "width"}, optional
        The method used to scale the width of each violin. If area, each violin will have the same area. If count, the width of the violins will be scaled by the number of observations in that bin. If width, each violin will have the same width.

    scale_hue : bool, optional
        When nesting violins using a hue variable, this parameter determines whether the scaling is computed within each level of the major grouping variable (scale_hue=True) or across all the violins on the plot (scale_hue=False).

The defaults are 'area' and False. You can see below how varying these parameters affects the violins. For example, if you want to compare between plots and represent the absolute counts truthfully, you could set scale='count' and scale_hue=False. Note that the violins would still be scaled to the maximum count within the plot (and not in the dataset), so in our case the biggest violin for females would represent ~40 observations while the biggest violin for males would represent ~25 observations.

    fig, axes = plt.subplots(4, 2, figsize=(10, 16), sharey='row')
    axes_cols = (axes.flatten()[::2], axes.flatten()[1::2])
    for (sex_name, sex), axes_col in zip(tips.groupby('sex'), axes_cols):
        sns.countplot(x="day", hue="smoker", data=sex, ax=axes_col[0])
        for scale, ax in zip(['area', 'count', 'width'], axes_col[1:]):
            sns.violinplot(x="day", y="total_bill", hue="smoker", data=sex,
                           split=True, ax=ax, scale=scale)
            ax.set_title('scale = {}'.format(scale), y=0.95)
    sns.despine()
    fig.tight_layout()

Adding scale_hue=False:
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/47409086', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1421907/']}
jdg_75127
stackexchange
llm_judgeable_groundtruth_similarity
747344
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Show that the equation below is only possible when $x=y$ $$ \sec^2{\theta}=\frac{4xy}{(x+y)^2}$$ The only way I can think of doing this is by rewriting it as $$ \cos^2{\theta}=\frac{(x+y)^2}{4xy} $$then using some inequalities to prove it by using:$$ 0\leq \cos^2{\theta}\leq 1 \;\; \text{ therefore } \;\; 0\leq \frac{(x+y)^2}{4xy}\leq 1 $$But I have an aversion to using case-based solutions (checking for $x>0$, $y>0$ etc.) since I feel there must be a neater solution to these kind of problems. So my question is: Is it possible to solve this and these sort of questions using techniques that don't involve checking numerous cases? Now provide the response and nothing else.
We have $\sec^2 \theta\ge 1$ for all $\theta$ at which $\sec\theta$ is defined. So it is enough to show that $\frac{4xy}{(x+y)^2}\le 1$, with equality only when $x=y$. To show that $\frac{4xy}{(x+y)^2}\le 1$, we show equivalently that $(x+y)^2\ge 4xy$, or equivalently that $x^2-2xy+y^2\ge 0$. But this is clear, since $x^2-2xy+y^2=(x-y)^2$. And we have equality precisely when $x=y$. Remark: This is not very different from how you proposed to do things. There are no cases involved. And aversion to cases can be problematic. A consideration of cases (though not in this case) is often a natural approach.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/747344', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/115937/']}
jdg_75128
stackexchange
llm_judgeable_groundtruth_similarity
13055997
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an Activity and need to show a Dialog in it. Everything works fine here. I've overridden the onCreateDialog method in the Activity; here is the code:

    @Override
    protected Dialog onCreateDialog(int dialog)
    {
        if(dialog == 10)
        {
            if(waitDialog != null)
                waitDialog.dismiss();
            dialogCreated = true;
            waitDialog = CreateWaitDialog(this);
            return waitDialog;
        }
        else
            return new Dialog(this);
    }

Where CreateWaitDialog is a custom method for creating a dialog and waitDialog is a static variable. I'm showing the dialog by calling showDialog(10). All code is executing fine. After the dialog has been shown, I'm closing it by calling:

    if(waitDialog != null)
        waitDialog.hide();

And I'm dismissing it when the Activity is destroyed:

    if(dialogCreated)
        dismissDialog(10);
    super.onDestroy();

It's closing and everything is great. BUT, when I change the orientation of my device and the Activity is recreated, it pops up again by itself! I'm not calling showDialog or anything like that; it is just popping up! Now provide the response and nothing else.
I think this is the defined behavior of Activity and onCreateDialog:

"Callback for creating dialogs that are managed (saved and restored) for you by the activity. The default implementation calls through to onCreateDialog(int) for compatibility. If you are targeting HONEYCOMB or later, consider instead using a DialogFragment. If you use showDialog(int), the activity will call through to this method the first time, and hang onto it thereafter. Any dialog that is created by this method will automatically be saved and restored for you, including whether it is showing. If you would like the activity to manage saving and restoring dialogs for you, you should override this method and handle any ids that are passed to showDialog(int)."

I will guess that onDestroy is too late in the activity lifecycle to dismiss the dialog. My guess is that the Activity is saving your dialog inside onSaveInstanceState. I might try dismissing the dialog inside onSaveInstanceState before calling super.onSaveInstanceState; then the dialog will be dismissed before Android tries to save and restore it.

    @Override
    protected void onSaveInstanceState(Bundle outstate) {
        dismissDialog(10);
        super.onSaveInstanceState(outstate);
    }
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/13055997', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/721844/']}
jdg_75129
stackexchange
llm_judgeable_groundtruth_similarity
43834315
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to print a unicode star character (0x2605) in a linux terminal using C. I've followed the syntax suggested by other answers on the site, but I'm not getting any output:

    #include <stdio.h>
    #include <wchar.h>

    int main()
    {
        wchar_t star = 0x2605;
        wprintf(L"%c\n", star);
        return 0;
    }

I'd appreciate any suggestions, especially how I can make this work with the ncurses library. Now provide the response and nothing else.
Two problems: first of all, a wchar_t must be printed with the %lc format, not %c. The second one is that unless you call setlocale, the character set is not set properly, and you probably get ? instead of your star. The following code seems to work, though:

    #include <stdio.h>
    #include <wchar.h>
    #include <locale.h>

    int main() {
        setlocale(LC_CTYPE, "");
        wchar_t star = 0x2605;
        wprintf(L"%lc\n", star);
    }

And for ncurses, just initialize the locale before the call to initscr.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/43834315', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5664775/']}
jdg_75130
stackexchange
llm_judgeable_groundtruth_similarity
2190836
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: After reading a lot about the differences between REST and SOAP, I got the impression that REST is just another word for HTTP. Can someone explain what functionality REST adds to HTTP? Note : I'm not looking for a comparison of REST versus SOAP. Now provide the response and nothing else.
No, REST is the way HTTP should be used . Today we only use a tiny bit of the HTTP protocol's methods – namely GET and POST . The REST way to do it is to use all of the protocol's methods. For example, REST dictates the usage of DELETE to erase a document (be it a file, state, etc.) behind a URI, whereas, with HTTP, you would misuse a GET or POST query like ...product/?delete_id=22 .
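To make the contrast concrete (my own illustration; the resource paths are made up):

    DELETE /products/22 HTTP/1.1          # RESTful: the method itself carries the intent
    Host: example.com

    GET /product/?delete_id=22 HTTP/1.1   # misuse: the action is tunneled through a query string
    Host: example.com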
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/2190836', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/74612/']}
jdg_75131
stackexchange
llm_judgeable_groundtruth_similarity
59270
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I want to export a figure from Mathematica in PDF format. Moreover, I want to fill the Author and Title fields of the PDF. For example, to fill the Author field, I use the following code:

    pp = Plot[{x^2}, {x, -1, 1}]
    Export["plot-example.pdf", pp, "Author" -> "Dmitry"]

However, when I open the generated PDF file in a PDF viewer, the "Author" field is not filled. How should I specify the Export call to include Author and Title fields in the generated PDF file? Now provide the response and nothing else.
The package and all code of this answer can be found on my GitHub account. A solution that takes only a small amount of time is to follow this route:

1. Take the first usable Java library for accessing and changing PDF files you find.
2. Do one of the following:
   - write a small amount of Java code to create a simple interface to the functionality (if you are interested, this design pattern is called Facade), or
   - use the Mathematica functions of JLink` directly to access the PDF library. This is often harder to read IMO and I would recommend it only in rare cases.
3. Write a small Mathematica package that can be loaded and provides you with a high-level function to call the Java library.

The structure of the finished package layout can be as easy as this:

    PDFMetaData
    ├── Java
    │   └── PDFMetaData.jar
    └── PDFMetaData.m

In this simple case PDFMetaData.m contains a function like this:

    SetPDFMetaData[file_String][meta : {(_String -> _String)..}] := JavaBlock[
      InstallJava[];
      LoadJavaClass["de.halirutan.pdfmetadata.PDFMetaDataSetter", StaticsVisible -> True];
      PDFMetaDataSetter`setMetaData[file, Flatten[List @@@ meta]]
    ];

The static Java function that is called at the end is rather simple too. I should note that I have compiled my source and packed the created class file together with the PDF library itself into PDFMetaData.jar. This jar is now self-contained. The package needs to be put into a place where Mathematica can find it, like

    FileNameJoin[{$UserBaseDirectory, "Applications"}]

This is especially important, because then the Java folder is automatically added to the Java CLASSPATH. Well, that's basically it. Go and use it:

    << PDFMetaData`
    SetPDFMetaData[
      Export["tmp/meta.pdf", "Hello"]][{"Author" -> "Xavier Breath",
      "Title" -> "Stop Arguing!", "Creator" -> "Funny Inc.",
      "Producer" -> "Nick R. Elastic", "Keywords" -> "Mathematica PDF"}]

Final note: please just change the call pattern of SetPDFMetaData[file][metalist] to SetPDFMetaData[file, metalist] if you feel more comfortable with it. I had no specific reason to use SubValues.
{}
{'log_upvote_score': 5, 'links': ['https://mathematica.stackexchange.com/questions/59270', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/13113/']}
jdg_75132
stackexchange
llm_judgeable_groundtruth_similarity
19605812
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a list and a list of lists:

    ArrayList<String> singleList = new ArrayList<String>();
    ArrayList<ArrayList<String>> listOfLists = new ArrayList<ArrayList<String>>();

I do not understand the behavior of these lists. I decided to show you a simple example:

    listOfLists.clear();
    singleList.clear();
    singleList.add("A");
    singleList.add("B");
    singleList.add("C");
    listOfLists.add(singleList);
    singleList.clear();
    singleList.add("D");
    singleList.add("E");
    singleList.add("F");
    listOfLists.add(singleList);
    singleList.clear();
    singleList.add("G");
    singleList.add("H");
    singleList.add("I");
    listOfLists.add(singleList);
    for(int x = 0; x < listOfLists.size(); x++)
    {
        for(int z = 0; z < singleList.size(); z++)
        {
            System.out.print(listOfLists.get(x).get(z));
            System.out.print(" ");
        }
        System.out.println("");
    }

And the result I got was:

    G H I
    G H I
    G H I

Instead of:

    A B C
    D E F
    G H I

Where is the problem in my thinking? What should I do to get the result as above? Now provide the response and nothing else.
In Java, variables and collection slots hold references to objects, and it is those references that get copied around. When you add singleList to listOfLists, you are in fact adding a reference to singleList itself, not a snapshot of its contents. Since you've added it 3 times, you got the current value of singleList, repeated 3 times. The "previous values" of singleList are stored nowhere, so A B C and D E F are lost. You need to make a copy of your list, by using new ArrayList<String>(singleList). Then, add this copy to listOfLists.
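Concretely, each add in the question's code would become something like this (my sketch of the fix, using the question's own variable names):

    listOfLists.add(new ArrayList<String>(singleList)); // store an independent copy
    singleList.clear(); // clearing no longer affects what was stored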
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19605812', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2384869/']}
jdg_75133
stackexchange
llm_judgeable_groundtruth_similarity
13787746
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When using Tempfile, Ruby creates a file with a thread-safe and inter-process-safe name. I only need a file name in that way. I was wondering if there is a more straightforward approach than:

    t = Tempfile.new(['fleischwurst', '.png'])
    temp_path = t.path
    t.close
    t.unlink

Now provide the response and nothing else.
Dir::Tmpname.create

You could use Dir::Tmpname.create. It figures out what temporary directory to use (unless you pass it a directory). It's a little ugly to use given that it expects a block:

    require 'tmpdir'
    # => true
    Dir::Tmpname.create(['prefix-', '.ext']) {}
    # => "/tmp/prefix-20190827-1-87n9iu.ext"
    Dir::Tmpname.create(['prefix-', '.ext'], '/my/custom/directory') {}
    # => "/my/custom/directory/prefix-20190827-1-11x2u0h.ext"

The block is there for code to test whether the file exists and raise an Errno::EEXIST, so that a new name can be generated with an incrementing value appended on the end.

The Rails Solution

The solution implemented by Ruby on Rails is short and similar to the solution originally implemented in Ruby:

    require 'tmpdir'
    # => true
    File.join(Dir.tmpdir, "YOUR_PREFIX-#{Time.now.strftime("%Y%m%d")}-#{$$}-#{rand(0x100000000).to_s(36)}-YOUR_SUFFIX")
    # => "/tmp/YOUR_PREFIX-20190827-1-wyouwg-YOUR_SUFFIX"
    File.join(Dir.tmpdir, "YOUR_PREFIX-#{Time.now.strftime("%Y%m%d")}-#{$$}-#{rand(0x100000000).to_s(36)}-YOUR_SUFFIX")
    # => "/tmp/YOUR_PREFIX-20190827-1-140far-YOUR_SUFFIX"

Dir::Tmpname.make_tmpname (Ruby 2.5.0 and earlier)

Dir::Tmpname.make_tmpname was removed in Ruby 2.5.0. Prior to Ruby 2.4.4 it could accept a directory path as a prefix, but as of Ruby 2.4.4, directory separators are removed. Digging in tempfile.rb you'll notice that Tempfile includes Dir::Tmpname. Inside you'll find make_tmpname, which does what you ask for:

    require 'tmpdir'
    # => true
    File.join(Dir.tmpdir, Dir::Tmpname.make_tmpname("prefix-", nil))
    # => "/tmp/prefix-20190827-1-dfhvld"
    File.join(Dir.tmpdir, Dir::Tmpname.make_tmpname(["prefix-", ".ext"], nil))
    # => "/tmp/prefix-20190827-1-19zjck1.ext"
    File.join(Dir.tmpdir, Dir::Tmpname.make_tmpname(["prefix-", ".ext"], "suffix"))
    # => "/tmp/prefix-20190827-1-f5ipo7-suffix.ext"
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/13787746', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/428260/']}
jdg_75134
stackexchange
llm_judgeable_groundtruth_similarity
57017994
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How does VS Code interpret markup/markdown and layout in Python docstrings on mouse hover? There are several issues reported for this display but there doesn't seem to exist any official info on what the current format is. Now provide the response and nothing else.
VS Code renders markdown fine in mouse hovers - but doesn't render standard docstring formats well

The VS Code Python extension will use markdown that you put into a docstring for intellisense mouse hover information, but this doesn't really meet any of the commonly accepted/used docstring formats for Python. It doesn't properly lay out any of those common formats (as of May 2020). So, your options are:

1. Stick with one of the major formats that will work with existing Python documentation tools and utilities like Sphinx.
2. Use markdown in your docstrings and look good in VS Code, but be incompatible with most other documentation tools.

More Details / Example

The top 3 Python docstring formats are: Google, Sphinx, NumPy/ReST. VS Code will take the ReST format (NumPy style) and properly lay out the headers from each section (each item with the line of dashes under it), but in all the formats, the section content is unformatted and munged together with all the line breaks dropped. If you use markdown directly in the docstrings, it is supported, but then you aren't meeting the formatting requirements of docstrings for auto-documentation frameworks like Sphinx. For example, I started with the Sphinx format here and modified it to look better with VS Code's markdown tooltips:

    def autodoc_test_numpy(self, a: str, b: int = 5, c: Tuple[int, int] = (1, 2)) -> Any:
        """[summary]

        ### Parameters
        1. a : str
            - [description]
        2. *b : int, (default 5)
            - [description]
        3. *c : Tuple[int, int], (default (1, 2))
            - [description]

        ### Returns
        - Any
            - [description]

        Raises
        ------
        - ValueError
            - [description]
        """

It will render as shown in the screenshot from the original post. Notice that the final "Raises" section has the underlining with dashes that makes it a level 1 header (which is the ReST style). Look how big it is! I bumped the others down to h3 by using ### in front of the text instead of underlining with hyphens on the next line. Also, note that the type hints in the main function definition (like str in a: str) render well (even colored) for args and the return type hint, but are not shown for kwargs (e.g. b=5 without the type hint).
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/57017994', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1354742/']}
jdg_75135
stackexchange
llm_judgeable_groundtruth_similarity
20470345
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using the main tag - should I put my header and footer tags inside the main tag or should they be separate? Also, does the main tag need role="main"? Now provide the response and nothing else.
From the HTML5doctor article "The main element" (via @Gaby aka G. Petrioli).

tl;dr rule of thumb: only one <main> per page; it is a top-level element (in most cases); don't put your site's header/menu/footer (repeated content) in your <main>; it can hold an article's header/footer if they are specific to that article.

Purpose: The primary purpose of <main> is to map ARIA's landmark role main to an element in HTML. This will help screen readers and other assistive technologies understand where the main content begins.

Specs: Here is what the draft spec says: "The main content area consists of content that is directly related to or expands upon the central topic of a document or central functionality of an application."

Header/footer: Exclude the site's header/footer/menu. The spec "excludes content that is repeated across a set of documents such as site navigation links, copyright information, site logos and banners and search forms (unless the document or application's main function is that of a search form)."

Continue to use role="main": As a side note to the article, Steve Faulkner writes: "You should still use the ARIA role until all browsers map the role to the <main> element."
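A minimal skeleton of the layout described above (my own sketch; the placeholder text is illustrative):

    <body>
      <header>site-wide banner and logo (outside main)</header>
      <nav>site menu (outside main)</nav>
      <main role="main"> <!-- keep role="main" until all browsers map the element to the role -->
        <article>
          <header>an article-specific header is fine here</header>
          <p>the central content of the page</p>
          <footer>an article-specific footer is fine here</footer>
        </article>
      </main>
      <footer>site-wide footer (outside main)</footer>
    </body>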
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/20470345', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1013512/']}
jdg_75136
stackexchange
llm_judgeable_groundtruth_similarity
5494178
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: First off, DEBUG = False in settings.py, so no, connections['default'].queries is not growing and growing until it uses up all of memory. Let's start off with the fact that I've loaded the User table from django.contrib.auth.models.User with 10000 users (each named 'test#' where # is a number between 1 and 10000). Here is the view:

    from django.contrib.auth.models import User
    from django.http import HttpResponse
    import time

    def leak(request):
        print "loading users"
        users = []
        users += list(User.objects.all())
        users += list(User.objects.all())
        users += list(User.objects.all())
        users += list(User.objects.all())
        users += list(User.objects.all())
        users += list(User.objects.all())
        users += list(User.objects.all())
        users += list(User.objects.all())
        users += list(User.objects.all())
        users += list(User.objects.all())
        users += list(User.objects.all())
        users += list(User.objects.all())
        users += list(User.objects.all())
        users += list(User.objects.all())
        users += list(User.objects.all())
        users += list(User.objects.all())
        users += list(User.objects.all())
        print "sleeping"
        time.sleep(10)
        return HttpResponse('')

I've attached the view above to the /leak/ url and started the development server (with DEBUG=False, and I've tested and it has nothing to do with running a development server vs other instances). After running:

    % curl http://localhost:8000/leak/

the runserver process' memory grows to around the size seen in the ps aux output below and then stays at that level:

    USER     PID   %CPU %MEM VSZ    RSS    TTY   STAT START TIME COMMAND
    dlamotte 25694 11.5 34.8 861384 705668 pts/3 Sl+  19:11 2:52 /home/dlamotte/tmp/django-mem-leak/env/bin/python ./manage.py runserver

Then running the above curl command again does not seem to grow the instance's memory usage (which I expected from a true memory leak?), so it must be re-using the memory? However, I feel that there is something wrong here in that the memory does not get released to the system (although I understand that it may be better for performance that python does NOT release the memory). Following this, I naively attempted to see if python would release large chunks of memory that it allocated. So I attempted the following from a python session:

    >>> a = ''
    >>> a += 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' * 10000000
    >>> del a

The memory is allocated on the a += ... line as expected, but when del a happens, the memory is released. Why is the behavior different for django query sets? Is it something that django is intending to do? Is there a way to change this behavior? I've literally spent 2 days debugging this behavior with no idea where to go next (I've learned to use guppy AND objgraph, which seem not to point to anything interesting that I can figure out). UPDATE: This could be simply python memory management at work and have nothing to do with Django (suggested on the django-users mailing list), but I'd like confirmation by somehow replicating this in python outside of Django. UPDATE: Using python version 2.6.5 Now provide the response and nothing else.
I decided to move my comments into an answer to make things clearer. Since Python 2.5, CPython memory allocation tracks internal memory usage by the small object allocator, and attempts to return completely free arenas to the underlying OS. This works most of the time, but the fact that objects can't be moved around in memory means that fragmentation can be a serious problem. Try the following experiment (I used 3.2, but 2.5+ should be similar if you use xrange):

    # Create the big lists in advance to avoid skewing the memory counts
    seq1 = [None] * 10**6  # Big list of references to None
    seq2 = seq1[::10]

    # Create and reference a lot of smaller lists
    seq1[:] = [[] for x in range(10**6)]  # References all the new lists
    seq2[:] = seq1[::10]  # Grab a second reference to 10% of the new lists

    # Memory fragmentation in action
    seq1[:] = [None] * 10**6  # 90% of the lists are no longer referenced here
    seq2[:] = seq1[::10]  # But memory freed only after the last 10% are dropped

Note, even if you drop the references to seq1 and seq2, the above sequence will likely leave your Python process holding a lot of extra memory. When people talk about PyPy using less memory than CPython, this is a major part of what they're talking about. Because PyPy doesn't use direct pointer references under the hood, it is able to use a compacting GC, thus avoiding much of the fragmentation problem and more reliably returning memory to the OS.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5494178', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/23327/']}
jdg_75137
stackexchange
llm_judgeable_groundtruth_similarity
578700
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to do some experimenting with javascript and encryption and I got curious as to how unpredictable the implementation of the random function is. Has anyone done any hard tests? Clearly browsers have the ability to generate strong randomness (for ssl). The questions is do they give javascript access to the same strength. Now provide the response and nothing else.
Generally, the random function is not cryptographically strong; for that you need to make sure you are using a cryptographic pseudo-random-number generator. Generic random functions generally don't use cryptographically strong generation methods because they take longer than simple ones (e.g. Yarrow is more complicated than the Mersenne Twister) and require careful management of the entropy pool, which is not a guarantee that Mozilla, the C standard library, etc. want to make to you. If you need access to cryptographically strong random number generators, I'd look into getting access to the underlying SSL implementation (which a given browser may or may not allow access to).
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/578700', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/10393/']}
jdg_75138
stackexchange
llm_judgeable_groundtruth_similarity
46556422
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm learning about UIView animations using keyframes and spring animations, and I'm trying to make a button shake after tapping it. The issue is that I dragged and dropped the button from the library and pinned the trailing edge to a UILabel above it and nothing else. In the various examples I see a header constraint, but my button has no header. This is the code I have so far:

    @IBAction func noButtonPressed(_ sender: UIButton) {
        UIView.animate(withDuration: 1, delay: 1, usingSpringWithDamping: 0.5, initialSpringVelocity: 15, options: [], animations: {
            self.noButtonTrailing.constant = 16
            self.view.layoutIfNeeded()
        })
    }

Am I supposed to make a header constraint somewhere? Thanks. Now provide the response and nothing else.
Here are a simple media-timing animation for linear movement and a UIView damping animation (plus a CABasicAnimation variant). Note: Swift 4.

    extension UIView {

        // Using CAMediaTimingFunction
        func shake(duration: TimeInterval = 0.5, values: [CGFloat]) {
            let animation = CAKeyframeAnimation(keyPath: "transform.translation.x")
            // Swift 4.2 and above
            animation.timingFunction = CAMediaTimingFunction(name: CAMediaTimingFunctionName.linear)
            // Swift 4.1 and below:
            // animation.timingFunction = CAMediaTimingFunction(name: kCAMediaTimingFunctionLinear)
            animation.duration = duration // You can set a fixed duration
            animation.values = values     // You can set fixed values here also
            self.layer.add(animation, forKey: "shake")
        }

        // Using SpringWithDamping
        func shake(duration: TimeInterval = 0.5, xValue: CGFloat = 12, yValue: CGFloat = 0) {
            self.transform = CGAffineTransform(translationX: xValue, y: yValue)
            UIView.animate(withDuration: duration, delay: 0, usingSpringWithDamping: 0.4, initialSpringVelocity: 1.0, options: .curveEaseInOut, animations: {
                self.transform = CGAffineTransform.identity
            }, completion: nil)
        }

        // Using CABasicAnimation
        func shake(duration: TimeInterval = 0.05, shakeCount: Float = 6, xValue: CGFloat = 12, yValue: CGFloat = 0) {
            let animation = CABasicAnimation(keyPath: "position")
            animation.duration = duration
            animation.repeatCount = shakeCount
            animation.autoreverses = true
            animation.fromValue = NSValue(cgPoint: CGPoint(x: self.center.x - xValue, y: self.center.y - yValue))
            animation.toValue = NSValue(cgPoint: CGPoint(x: self.center.x + xValue, y: self.center.y - yValue))
            self.layer.add(animation, forKey: "shake")
        }
    }

Button action:

    @IBAction func noButtonPressed(button: UIButton) {
        // for the spring damping animation
        // button.shake()

        // for CAMediaTimingFunction
        button.shake(duration: 0.5, values: [-12.0, 12.0, -12.0, 12.0, -6.0, 6.0, -3.0, 3.0, 0.0])

        // for CABasicAnimation
        // button.shake(shakeCount: 10)
    }
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/46556422', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4555604/']}
jdg_75139
stackexchange
llm_judgeable_groundtruth_similarity
14700140
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I need to create a function of two parameters, an Int and a [Int], that returns a new [Int] with all occurrences of the first parameter removed. I can create the function easily enough, both with list comprehension and list recursion. However, I do it with these parameters:

    deleteAll_list_comp :: Integer -> [Integer] -> [Integer]
    deleteAll_list_rec :: (Integer -> Bool) -> [Integer] -> [Integer]

For my assignment, however, my required parameters are:

    deleteAll_list_comp :: (Eq a) => a -> [a] -> [a]
    deleteAll_list_rec :: (Eq a) => a -> [a] -> [a]

I don't know how to read this syntax. As Google has told me, (Eq a) merely explains to Haskell that a is a type that is comparable. However, I don't understand the point of this, as all Ints are naturally comparable. How do I go about interpreting and implementing the methods using these parameters? What I mean is, what exactly are the parameters to begin with? @groovy @pelotom Thanks, this makes it very clear. I understand now that really it is only asking for two parameters as opposed to three. However, I still am running into a problem with this code:

    deleteAll_list_rec :: (Eq a) => a -> [a] -> [a]
    delete_list_rec toDelete [] = []
    delete_list_rec toDelete (a:as) =
      if(toDelete == a) then delete_list_rec toDelete as
      else a:(delete_list_rec toDelete as)

This gives me a "The type signature for deleteAll_list_rec lacks an accompanying binding", which makes no sense to me seeing as how I did bind the requirements properly, didn't I? From my small experience, (a:as) counts as a list while extracting the first element from it. Why does this generate an error but

    deleteAll_list_comp :: (Eq a) => a -> [a] -> [a]
    deleteAll_list_comp toDelete ls = [x | x <- ls, toDelete==x]

does not? 2/7/13 Update: For all those who might stumble upon this post in the future with the same question, I've found some good information about Haskell in general, and my question specifically, at this link: http://learnyouahaskell.com/types-and-typeclasses

"Interesting. We see a new thing here, the => symbol. Everything before the => symbol is called a class constraint. We can read the previous type declaration like this: the equality function takes any two values that are of the same type and returns a Bool. The type of those two values must be a member of the Eq class (this was the class constraint). The Eq typeclass provides an interface for testing for equality. Any type where it makes sense to test for equality between two values of that type should be a member of the Eq class. All standard Haskell types except for IO (the type for dealing with input and output) and functions are a part of the Eq typeclass."

Now provide the response and nothing else.
One way to think of the parameters could be:

    (Eq a) => a -> [a] -> [a]

(Eq a) => means any a's in the function parameters must be members of the class Eq, whose values can be compared as equal or unequal.*

a -> [a] means the function takes two parameters: (1) an element of type a, and (2) a list of elements of the same type a (we know that a in this case must be a member of class Eq, such as Int or String).

-> [a] means the function returns a list of elements of the same type a; and the assignment states that this returned list should exclude any elements that equal the first function parameter, toDelete.

(* edited based on pelotom's comment)
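As a quick illustration (my own, not from the original answer) of what the Eq constraint buys you, one function now works at many element types:

    deleteAll_list_comp 3 [1, 3, 2, 3]   -- [1, 2] with a correct implementation
    deleteAll_list_comp 'a' "banana"     -- "bnn"

(Two asides about the code in the question: the recursive version's signature names deleteAll_list_rec while its equations define delete_list_rec, and that name mismatch is exactly what GHC's "lacks an accompanying binding" error means; also, the comprehension's filter should be toDelete /= x to drop matches rather than keep them.)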
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14700140', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1572713/']}
jdg_75140